Garry Tan, CEO and president of Y Combinator, one of the top firms providing seed funding for startups, recently stated that artificial intelligence will likely need regulation but that AI bills in San Francisco and California are "concerning."

Tan stated that he generally supported the National Institute of Standards and Technology's (NIST) effort to create a framework for mitigating GenAI risk and believed that a good deal of the Biden Administration's executive order was probably headed in the right direction.

An AI (artificial intelligence) logo is pictured at the Mobile World Congress (MWC), the telecom industry's biggest annual gathering, in Barcelona on February 27, 2024.
(Photo: JOSEP LAGO/AFP via Getty Images)

NIST's framework includes suggestions such as requiring GenAI companies to abide by existing laws covering data privacy and copyright, to disclose to end users how GenAI is used, and to follow rules prohibiting GenAI from producing material depicting child sexual abuse.

Biden's executive order includes a long list of directives, such as requiring AI businesses to provide the government with safety data and guaranteeing equitable access for small developers.

Like many other Silicon Valley VCs, Tan was cautious about other regulatory initiatives. He described the AI-related measures making their way through San Francisco's and California's legislative bodies as extremely alarming.

According to Tan, the main policy question at the moment is what a sensible version of AI regulation looks like, and he said "we" can look to smart individuals like Ian Hogarth in the UK for guidance. Tan added that they are also conscious of the risk of concentrating power, while simultaneously trying to find ways to both mitigate the worst-case harms and foster innovation.

Read Also: "Godfathers of AI" Warn Regulations on Booming Technology are Insufficient 

AI Safety Pledges 

Tan's sentiments on AI regulation come as 16 of the biggest AI companies made safety pledges concerning AI development during the recently concluded second global AI summit.

Google, Microsoft, Meta, and OpenAI were among the companies that made voluntary safety commitments at the AI Seoul Summit, including a promise to switch off their cutting-edge technology if they could not control the most dangerous situations.

Amazon, Samsung, IBM, xAI, France's Mistral AI, China's Zhipu.ai, the United Arab Emirates' G42, and other AI companies also signed the safety assurances, pledging public transparency and responsible governance to ensure the security of their most advanced AI models.

AI Regulation at the Second AI Summit

At the virtual summit, the leaders of the Group of Seven major economies, South Korea, the EU, Singapore, and Australia, as well as British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol, offered their support.

The South Korean presidential office reports that governments have likewise decided to prioritize AI safety, innovation, and inclusivity.

The British government, which co-hosted the event, said in a statement that leaders from ten countries will collaborate on AI research and develop a common understanding of AI safety.

The two-day meeting follows the November AI Safety Summit held at Bletchley Park in the United Kingdom. Governments and international organizations are racing to build safeguards around the technology amid worries about the potential harm it poses to people and daily life.

Related Article: Leading Scientist Warns: Big Tech Downplays Existential Threat of AI 

Written by Aldohn Domingo

