Protecting Our Digital Future — The Impact of Deepfakes

LinkedIn – TEDAI

Last week, the inaugural TEDAI conference in Vienna brought together the industry's leading minds and innovators to discuss the rapidly changing artificial intelligence landscape.

The issue of deepfakes has been at the forefront of the AI discourse in recent months, not least because of the number of elections in 2024. This was no different at TEDAI, where, in a session exploring the question of how to protect our digital future, Jonas Andrulis, CEO of Aleph Alpha, stressed that the public is not yet able to recognise whether online content is a deepfake.

During the session, Andrulis commented on how "there is a toxic influence that a certain group of people using AI have." This is certainly true. It would be fair to label 2024 the 'year of the deepfake,' as consumers, voters, and users are, for the first time, being confronted with deepfake technology as a means of exploiting them for money or votes, amongst other things.

The rise of deepfakes presents both technological challenges and opportunities in protecting our digital future. We must balance innovation with the need for trust and security.

The proliferation of deepfake technology has had serious real-world consequences. During the same session at TEDAI, security expert and former TED speaker Mikko Hypponen pointed to the 2023 Slovak elections, where, on the eve of election day, an alleged audio recording of the liberal Progressive Slovakia party's leader, Michal Šimečka, and reporter Monika Tódová was posted on Facebook. The recording appeared to show the two discussing buying votes. Both immediately condemned the audio as fake.

As AI develops and we begin to understand its capabilities better, there are ways to combat this threat using both technological and ethical solutions.

Also speaking at TEDAI, Mary Shelton, EMEA Leader of Tech, Media, and Telecoms at PwC, argued that ethical frameworks and responsibilities can and should be enhanced to ensure that the content we consume and share is "credible, explainable, [and] reliable."

One route towards this is to develop corporate responsibility. We have already seen progress here, following California Governor Gavin Newsom's legislative moves that require robocalls to disclose when deepfake-created voices are used, and which ensure what Forbes described as a "range of control measures from platform labelling to removal and disclosure by creators."

This is one step towards answering Shelton's concerns that "we [need to] put the right governance in place" to make sure that this technology is built with "trust by design."

AI entrepreneur and cybersecurity expert Rotem Farkash, who was also in attendance in Vienna, echoed Shelton's view, arguing that "robust corporate governance frameworks are one of the key first steps to protect consumers and protect developers from their technology being used for purposes outside of their initial mission."

"As a developer," Farkash continued, "it is a big concern that our technology can be used by malicious actors for purposes that the tech was never designed for. This really goes against our principles as innovators and creators, and so as leaders in this space, we have to make sure that we have the right systems in place to mitigate these problems."

AI deepfakes were front of mind at TEDAI, and it is a testament to the event, its experts, and the industry as a whole that these vital issues are getting the coverage they need. It is critical that developers and leaders, both inside and outside the industry, ensure that the momentum generated by this event continues to drive vigilance in protecting our digital future from the manipulation of reality.

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.