Meta's President Downplays AI Risks, Calls Current Models 'Quite Stupid'

Clegg argues that concerns about AI dangers are mostly hype.

Nick Clegg, Meta's president of global affairs, has dismissed concerns about the current state of artificial intelligence (AI) models, calling them "quite stupid."

Speaking on BBC Radio 4's Today programme, Clegg downplayed the dangers posed by AI, saying that the "hype has somewhat run ahead of the technology." He asserted that today's AI models are incapable of genuine autonomy or independent decision-making.

Clegg's comments came after Meta announced that it will make its large language model, Llama 2, freely available to anyone.

Large language models are trained on vast collections of text to predict the next word in a sequence, the capability that powers chatbots like ChatGPT. Some AI researchers worry about potential existential risks from more advanced future AI systems, and the decision by Meta, Facebook's parent company, to make Llama 2 open source has stirred controversy in the tech world.
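To make the "next word" idea concrete, here is a minimal, hypothetical sketch using the open-source Hugging Face transformers library. The small "gpt2" model stands in purely for illustration, since Llama 2's weights require accepting Meta's license before download, and the prompt is invented.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load a small open model; a Llama 2 checkpoint could be swapped in once access is granted.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Artificial intelligence will"
inputs = tokenizer(prompt, return_tensors="pt")

# The model assigns a score (logit) to every token in its vocabulary;
# the highest-scoring token is its guess for the next word.
with torch.no_grad():
    logits = model(**inputs).logits
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode(next_token_id))

Chatbots simply repeat this step, appending each predicted token to the prompt and asking for the next one.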

Artificial Intelligence Can Be Abused

Llama 2's availability to researchers and commercial companies has sparked worries about the misuse of such a powerful tool. Previous generations of chatbots have been manipulated into promoting hate speech, spreading false information, and issuing dangerous instructions. It remains to be seen whether Llama 2's safeguards are adequate to prevent such abuse and how Meta will address any problems that arise.

Notably, Llama 2's accessibility is shaped by Meta's partnership with Microsoft. Despite Microsoft's significant investment in OpenAI, the creator of ChatGPT, Llama 2 will be usable on Microsoft platforms such as Azure.

The University of Southampton's Dame Wendy Hall, a Regius Professor of Computer Science, questioned whether the tech industry can be relied upon to self-regulate large language models, particularly open-source ones. She stressed the need for careful stewardship of such powerful AI technology, likening the open-sourcing of these models to giving people a template to build a nuclear bomb.

Addressing Safety Issues

In response to these worries, Clegg reaffirmed that Meta had taken protective measures to ensure Llama 2's safety. For months, the company had 350 people rigorously stress-test the model to uncover potential problems. Llama 2 is safer than previous open-source large language models, according to the Meta executive.

Clegg also remarked that not all AI models should be open-sourced, and he maintained that AI should be regulated. Meta's voice-generating model, Voicebox, will not be publicly released, he added, according to The Guardian.

Meta's decision to open-source Llama 2 comes amid its expanding collaboration with Microsoft. Observers see the move as a deliberate attempt to counter OpenAI, a company that has benefited from substantial Microsoft investment.

Meta has been highlighting its dedication to AI since releasing the original Llama in February, per Business Insider. Although the company has been developing other powerful AI technologies, its lack of consumer-facing AI products has diminished its prominence in the AI market.

As Meta pushes deeper into AI, the open-sourcing of Llama 2 offers a glimmer of optimism for future advances in the technology and its potential to improve lives.
