India is taking a proactive stance against the rising tide of deepfake content on social media platforms by initiating the drafting of regulations aimed at detecting and curbing the spread of harmful AI-generated media.
The announcement came from Ashwini Vaishnaw, India's IT Minister, following extensive meetings with major social media companies, industry body Nasscom, and academic experts. The consensus reached is that regulations are imperative to effectively combat the proliferation of deepfake videos and apps that facilitate their creation.
Deepfakes: A Threat to Democracy
Deepfakes, a form of synthetic media generated using artificial intelligence to convincingly replace a person's likeness or voice, have raised ethical concerns regarding consent and misinformation.
Vaishnaw underscored the shared concerns of social media companies, acknowledging the harmful impact of fake news on society. As reported by TechCrunch, the Indian official emphasized, "They understood that it's [deepfakes], not free speech. They understood that it's something that's very harmful to society. They understood the need for much heavier regulation on this, so we agree that we will start drafting the regulation today itself."
The impending regulations will focus on strengthening mechanisms for individuals to report such videos and on ensuring proactive, timely action by social media companies. Vaishnaw stressed the necessity of more proactive measures, citing the immediate and potentially irreversible damage caused by deepfakes. "The actions need to be more proactive because the damage can be very immediate," he said, emphasizing that even action "hours" after reporting might not be sufficient.
In addition to potential fines for non-compliance, the government is contemplating holding individuals accountable for creating such misleading content.
The Ministry of Electronics and Information Technology is poised to commence assessments and draft regulations on deepfakes. A follow-up meeting with stakeholders in the first week of December will finalize the four-pillared structure of the regulations.
India Joins Efforts to Combat Misuse of AI
India's move to draft regulations follows concerns expressed by Prime Minister Narendra Modi about the rapid spread of fake videos.
Speaking at the G20 Virtual Summit, Modi described deepfakes as a grave threat to society in this age of modern technology.
As quoted by La Prensa Latina, the Indian leader emphasized the "need to use technology in a responsible manner. There is growing concern about the negative use of AI all over the world. We have to move forward, understanding the seriousness of how dangerous deepfake is for society and for the individual."
India's efforts align with global initiatives addressing the risks of AI technology. The country, along with others such as the United Kingdom, the United States, Australia, and China, recently signed the Bletchley Declaration during the AI Safety Summit 2023 in the UK, according to WION.
The nations committed to collaborative research on AI safety, emphasizing the importance of responsible technology use in the face of growing concerns worldwide.