Two top Japanese companies, Nippon Telegraph and Telephone (NTT) and Yomiuri Shimbun Group Holdings, are reportedly calling for the swift adoption of artificial intelligence (AI) regulation, warning that unregulated AI could collapse social order and trigger wars.
First reported by the Wall Street Journal, the Japanese firms' manifesto acknowledges the potential of generative AI to boost productivity but remains largely skeptical of the technology.
Without citing examples, the manifesto stated that AI technologies have already begun to harm human dignity because they are sometimes designed to capture users' attention with no regard for morality or veracity.
"In the worst-case scenario, democracy and social order could collapse, resulting in wars," the manifesto claimed unless AI is restricted.
It urged Japan to respond promptly, including by passing rules to safeguard elections and national security against the misuse of generative AI.
This development follows recent research by the Voting Rights Lab, a nonpartisan voting rights monitor, indicating that AI's rapid growth has prompted multiple US state governments to implement safeguards in preparation for elections shaped by AI.
US States' AI Safeguards
The Voting Rights Lab reported that it was tracking over 100 pieces of legislation across 39 state legislatures, including provisions aimed at limiting AI's ability to produce election disinformation.
The legislation follows several high-profile cases involving "deepfake" video technology, computer-generated avatars, and synthetic voices in political campaigns and ads.
Joint US-UK AI Safety Testing
The United States and the United Kingdom have also announced plans to collaborate on AI safety testing. According to a press release, the two countries' AI safety institutes will develop a common approach built on shared procedures and supporting infrastructure.
The institutes plan to perform a joint testing exercise on a publicly available AI model. They also intend to exchange personnel and share information in accordance with national laws, regulations, and contracts.
The UK and US AI Safety Institutes were established on the first day of the UK-hosted AI Safety Summit at Bletchley Park in November 2023.
Legislators and industry executives are likely to rely heavily on the AI Safety Institutes to mitigate the hazards of rapidly evolving AI systems.
OpenAI and Anthropic, the companies behind ChatGPT and Claude, have both provided detailed plans explaining how safety testing will inform future product development.
The European Union's recently finalized AI Act and US President Joe Biden's executive order both require firms that create powerful AI models to disclose the results of their safety testing.
A global effort to govern AI is underway, with the European Union at the forefront. The EU's new law requires creators of the most powerful AI models to conduct safety assessments and notify authorities of serious incidents. It also intends to ban the use of emotion-recognition AI in schools and workplaces.