The European Union (EU) is reportedly moving to flesh out the rules of its newly enacted AI Act by engaging with leading artificial intelligence model providers.

According to TechCrunch, the EU has opened a consultation on rules for general-purpose AI models (GPAIs), such as those built by Anthropic, Google, Microsoft, and OpenAI, as part of the bloc's AI Act, which regulates AI applications according to risk. Legislators intend the Code of Practice to help ensure GPAIs are trustworthy by giving developers guidance on meeting their legal obligations.

European Commission President Holds Press Conference During China Visit
(Photo : Kevin Frayer/Getty Images)
A member of the People's Armed Police stands guard in front of the flag of the European Union at the European Delegation before a press conference by European Commission President Ursula von der Leyen on April 6, 2023 in Beijing, China.

The EU AI Act, approved earlier this year, enters into force on August 1. However, it will be implemented in phases, with staggered compliance deadlines; the Codes of Practice are set to take effect nine months later, in April 2025. That lead time gives the Commission and stakeholders room to draft the guidance.

The Commission is seeking feedback on the consultation from GPAI providers operating in the EU, as well as from businesses, civil society representatives, rights holders, and academic experts.


AI Act Consultation

The consultation survey is divided into three sections. One focuses on transparency and copyright issues for GPAIs; another addresses rules for classifying, assessing, and mitigating systemic risk in GPAIs; and a third covers the review and monitoring of the Codes of Practice for GPAIs.

The Commission announced that a preliminary version of the Code will be created using the feedback and responses to specific inquiries.

Respondents to the consultation can influence the design of the template that the AI Office will provide to GPAI providers for meeting their legal obligation to publish a summary of model training data. It will be interesting to see how detailed that template turns out to be.

US AI Safety Updates

Meanwhile, in the US, Apple has reportedly become one of the tech companies signing on to the voluntary AI safety commitments introduced by the Biden administration, which focus on ensuring AI benefits the public. The move comes ahead of the launch of Apple Intelligence, expected to arrive with iOS 18 later this year.

Under these commitments, tech companies must ensure that their AI systems do not promote discrimination or pose security threats.

Apple is the newest signatory, joining Google, Microsoft, OpenAI, Meta, Amazon, and other tech companies that committed earlier.

Apple's pledge is voluntary and not legally binding. As per the report, Apple must test its AI systems to ensure they do not exhibit bias against particular communities. In addition, Cupertino must ensure that Apple Intelligence and its underlying language model meet the commitments' national security safeguards.

The public has shown strong interest in Apple Intelligence, set to debut with iOS 18, thanks to its promised benefits for users. Primarily, it is an AI-driven system offering features such as text generation and writing tools.


Written by Aldohn Domingo
