Australia Struggles to Combat Political AI Deepfakes with Limited Legal Authority

Political AI deepfakes remain a legal gray area in Australia.

The Australian Electoral Commission reportedly cannot fully prevent artificial intelligence deepfakes because the commission has limited legal powers over the booming technology.

According to Australian Electoral Commissioner Tom Rogers, deepfakes with a political bent are not currently illegal in Australia. He argues that if such messages were properly authorized, they would not breach the Electoral Act.

The AEC has little authority to interfere with political material, but it can act when false information about the electoral process is disseminated.

(Photo by ALEXANDRA ROBINSON/AFP via Getty Images) An AFP journalist views a "deepfake" video, manipulated with artificial intelligence to potentially deceive viewers, at his newsdesk in Washington, DC, on January 25, 2019. Deepfake videos that manipulate reality are becoming more sophisticated and realistic due to advances in artificial intelligence, creating the potential for new kinds of misinformation with devastating consequences.

According to Rogers, generative AI, the technology behind most deepfakes, is becoming a bigger problem for elections both domestically and globally.

Senator David Shoebridge of the Greens stated that regulators needed more authority to remove deepfake content. He was worried voters could be duped by generative AI content that grows more lifelike by the month.

He went on to say that it is one thing for an unscrupulous player to fabricate disparaging claims about a political rival; it is quite another to put those lies in the rival's own mouth, making it appear that the rival is the one spreading them.

Global AI Deepfake Concerns

Australia's deepfake concerns reflect the growing threat AI poses to elections. A recent US Department of Homeland Security bulletin, shared with law enforcement partners across the country, warns that foreign and domestic actors may use AI to cause major disruption in the run-up to the 2024 election cycle.

Federal bulletins are occasional alerts sent to law enforcement partners about specific threats and issues. This one warns that AI capabilities may facilitate attempts to influence the 2024 US election cycle.

During this election cycle, various threat actors are anticipated to attempt to exert influence and cause disruption.

Because of generative AI techniques, the 2024 election cycle is likely to be more vulnerable to manipulation by foreign and domestic threat actors.

These techniques could be used to exacerbate emerging events, interfere with election processes, or target election infrastructure.

Continuous Warnings on AI

Director of National Intelligence Avril Haines also warned Congress about the risks of generative AI during a Senate Intelligence Committee hearing last week, pointing out that the technology can create realistic "deepfakes" whose source can be concealed.

According to the bulletin, the timeliness of AI-generated media relevant to an election can be just as critical as the content itself because it may take some time to refute or contradict the erroneous material spreading online.

In November 2023, an AI-generated video persuaded voters in a southern Indian state to favor a certain candidate on election day, giving officials little time to dispute the video.

According to the bulletin, this is just one example of a threat that persists abroad.
