Tech Giants Unite at AI Summit 2.0, Pledge to Implement Safety Measures

Safety first for AI.

The safety of artificial intelligence development remains at the forefront of global attention, with 16 of the biggest AI companies reportedly making safety pledges during the second global AI summit, held in Seoul.

Google, Microsoft, Meta, and OpenAI were among the companies that made voluntary safety commitments at the AI Seoul Summit, including pledges to shut down their state-of-the-art systems if they cannot contain the most severe risks.

(Photo: JOSEP LAGO/AFP via Getty Images) An AI (artificial intelligence) logo is pictured at the Mobile World Congress (MWC), the telecom industry's biggest annual gathering, in Barcelona on February 27, 2024.

xAI, France's Mistral AI, China's Zhipu.ai, G42 in the United Arab Emirates, Amazon, Samsung, IBM, and other AI companies also signed on to the safety pledges, promising responsible governance and transparency with the public to ensure the safety of their most advanced AI models.

The two-day conference follows the AI Safety Summit held at Bletchley Park, United Kingdom, in November. Concerns about the potential harm the technology poses to humanity and daily life have prompted governments and international organizations to move quickly to put safeguards in place.

Countries on AI Safety

The pledges received support from the Group of Seven major economies, the EU, Singapore, Australia, and South Korea at a virtual summit led by South Korean President Yoon Suk Yeol and British Prime Minister Rishi Sunak.

According to the South Korean presidential office, governments have also decided to prioritize AI safety, innovation, and inclusivity.

The British government, which co-hosted the event, said in a statement that leaders from ten nations and the European Union would develop a shared understanding of AI safety and coordinate their efforts on AI research.

It added that the network of safety institutes would include those established by the United States, the United Kingdom, Japan, and Singapore since the Bletchley meeting.

Insufficient AI Regulations

Computer scientist Yoshua Bengio, often called a "godfather of AI," praised the agreements but pointed out that the voluntary pledges would need to be backed by regulation.

This comes after Bengio and other AI experts released a paper warning that current AI regulations are insufficient in the face of rapid breakthroughs.

The warning comes from 25 scientists, including Bengio and Geoffrey Hinton, both recipients of the ACM Turing Award, often described as the Nobel Prize of computer science.

The paper states that the world is not prepared to handle these risks effectively, noting that while substantial money is being invested in making AI systems more capable, far less is being spent on ensuring those systems are safe and on mitigating potential harm.

Other co-authors of the paper include Yuval Noah Harari, the best-selling author of Sapiens; the late Nobel laureate in economics Daniel Kahneman; Sheila McIlraith, an AI professor at the University of Toronto; and Dawn Song, an academic at the University of California, Berkeley.

