The European Union will introduce crash test systems designed specifically for artificial intelligence. Through this process, EU regulators aim to ensure that AI-powered innovations and technologies are safe to use before they hit the market.
Launching Crash Test Systems
The European Union launched four permanent testing and experimental facilities across the continent in an effort to ensure that innovations powered by artificial intelligence are safe before hitting the market. Bloomberg reported that the trade bloc injected around $240 million (€220 million) into the project.
The crash test systems and facilities, set to launch next year, will give technology providers a space to test AI and robotics in different fields, including manufacturing, health care, agriculture and food, and cities. Testing the technology is a sensible step for the trade bloc, given how rapidly it is evolving.
EU Director for Artificial Intelligence and Digital Industry Lucilla Sioli stated during a launch event in Copenhagen that innovators are expected to bring new AI-powered tools to market labeled as "trustworthy" products. Sioli also highlighted disinformation as one of the risks AI poses to the public.
Last week, consumer groups across European countries urged regulators to launch investigations into the potential risks of generative artificial intelligence, including ChatGPT. The groups believe this effort will help enforce existing legislation and keep consumers safe.
BEUC Deputy Director General Ursula Pachl voiced her concerns about the technology, noting its potential to deceive, manipulate, and harm individuals. According to the released statement, the systems could also contribute to disinformation, bias amplification, and fraud.
Pachl stated, "We call on safety, data, and consumer protection authorities to start investigations now and not wait idly for all kinds of consumer harm to have happened before they take action. These laws apply to all products and services, be they AI-powered or not and authorities must enforce them."
EU's AI Act
This effort follows the European Union's push for the AI Act. According to a report from The Guardian, the act, already two years in the making, is the bloc's first serious attempt to regulate the technology. It classifies AI systems according to the risk they pose to users: unacceptable risk, high risk, limited risk, and minimal or no risk.
Once the trade bloc classifies a technology as an unacceptable risk, it will be banned. This category covers systems that manipulate people, encourage dangerous behavior in children, enable social scoring, support policing based on profiling, location, or past criminal behavior, or perform biometric identification.
The EU aims to agree on the final draft by the end of this year, after MEPs voted in mid-June to advance an amended version of the draft tabled by the Commission. Trilateral talks among the European Commission, the EU Parliament's AI Committee chairs, and the Council of the European Union are now expected to push the legislation through.