Australia Confronts Trust Issues on AI, Aims to Create Own AI Advisory Body

The latest country to increase AI regulation.

Australia has reportedly announced plans to create its own artificial intelligence (AI) advisory body, along with guidelines to mitigate AI risks that will be developed in consultation with industry bodies and experts.

According to the official news release, the government is currently considering whether to enact new legislation focused specifically on AI or to amend existing laws to impose mandatory safeguards on the development and use of AI in high-risk settings.

Beyond forming the advisory body, the government is also reportedly taking immediate steps to work with industry on a voluntary AI safety standard and to develop options for the voluntary watermarking and labelling of AI-generated content.

This illustration photograph taken in Helsinki on June 12, 2023, shows an AI (Artificial Intelligence) logo blended with four fake Twitter accounts bearing profile pictures apparently generated by Artificial Intelligence software. OLIVIER MORIN/AFP via Getty Images

Reuters reports that although AI is expected to boost the economy, Science and Industry Minister Ed Husic said its uptake in business remains patchy. According to the minister, there is a problem with trust in the technology itself, and that lack of trust is holding back the adoption of new technologies. He added that these problems need to be confronted.

Husic also reportedly said that new, tougher regulations are needed for some applications of AI, such as self-driving vehicles or systems that assess job applications, even as the government wants to see "low risk" uses of AI continue to develop.

As for what counts as "high risk" and "low risk," Reuters reports that the government aims to distinguish between "high risk" uses of AI, such as the creation of manipulated content, or "deepfakes," and "low risk" ones, such as filtering spam emails.

A separate report said Husic indicated that anything jeopardizing people's safety, their chances of being hired, or their standing with the law would be the starting point for determining what constitutes high risk. Future regulatory changes are then most likely to affect technologies in those areas.

Australia's Proposed AI Legislation

Mandatory safeguards to ensure the safe design, development, and deployment of AI systems will reportedly be considered. These may include testing requirements to confirm products are safe both before and after release, as well as transparency about the model design and the data underpinning AI applications.

Safeguards may also include training programs for AI system developers and deployers, potential certification schemes, and clearer accountability expectations for organizations creating, deploying, and relying on AI systems.

Global Influences on Australia's AI Regulation

Looking ahead, the government is reportedly keeping a close eye on how other jurisdictions, including the US, Canada, and the EU, are addressing the issues raised by artificial intelligence. Building on its participation in the UK AI Safety Summit in November, Australia's government will continue to collaborate with other nations to shape global efforts in this area.

The Guardian reports that these newly announced efforts would add to the federal government's ongoing AI risk mitigation measures, including Communications Minister Michelle Rowland's pledge to amend online safety regulations and require tech firms to remove harmful AI-generated content, such as deepfake intimate images and hate speech.

Reviews of the application of generative AI, including a government task force and its use in schools, are also underway.
