The Group of Seven (G7) industrial nations is poised to endorse a code of conduct for firms developing advanced artificial intelligence (AI) systems, according to a G7 document first reported by Reuters.
This voluntary code is a significant step in establishing regulatory frameworks for AI across major countries, addressing concerns over privacy and security risks.
11-Point AI Code
Leaders of the G7 economies (Canada, France, Germany, Italy, Japan, the UK, and the US), together with the European Union, initiated this process in May at a ministerial forum known as the "Hiroshima AI process."
The 11-point code aims to promote the global adoption of safe, secure, and trustworthy AI. It offers voluntary guidance for organizations developing cutting-edge AI systems, including foundation models and generative AI systems.
The objective is to harness the benefits of AI while effectively managing associated risks and challenges.
The code calls on companies to implement measures to identify, assess, and mitigate risks throughout the AI lifecycle. It also emphasizes the importance of addressing incidents and patterns of misuse after AI products have been deployed.
Furthermore, companies are encouraged to publish public reports detailing the capabilities, limitations, and potential misuse of their AI systems, alongside investments in robust security controls.
Vera Jourova, the European Commission's digital chief, noted at an internet governance forum in Kyoto that the code of conduct provides a solid foundation for ensuring safety and will serve as an interim measure until formal regulations are in place.
UN Establishes High-Level AI Advisory Board
In a related development, United Nations Secretary-General António Guterres has taken a step toward addressing global AI governance by assembling a panel of 39 experts to deliberate on these critical matters.
This diverse advisory group comprises prominent academics from the US, Russia, and Japan, technology industry leaders, and government representatives. Notable figures from Microsoft, OpenAI, and Sony, among others, are also part of the panel, bringing deep industry expertise and influence to its work.
With members spanning six continents, the advisory panel aims to strengthen global AI governance. It includes AI specialist Vilas Dhar from the US, Chinese professor Yi Zeng, and Egyptian lawyer Mohamed Farahat, reflecting a wide range of perspectives.
Secretary-General Guterres underscored the transformative potential of AI for societal progress, while also acknowledging the possible risks associated with its malicious use.
He emphasized that responsible AI governance is crucial in maintaining trust in institutions, preserving social cohesion, and safeguarding democracy. This initiative aligns with the growing global interest in AI and efforts to address its potential risks through collaboration between policymakers and tech experts.