Is AI turning into a Skynet-level threat? Google and Microsoft's AI race heated up even more after the search engine giant confirmed the arrival of Bard.
Other tech giants are joining the race by advancing their own AI technology. China's Baidu, for example, has already introduced its Ernie Bot, and Meta is working on its own version of the technology.
AI experts, on the other hand, are concerned about the unintended consequences of these innovations.
Elon Musk, Tech Execs Call to Pause AI Development
Elon Musk and several tech leaders and AI experts have signed an open letter calling for a temporary halt in the development of advanced AI systems, CNET reports.
The letter, which currently has more than 1,000 signatories, urges AI labs to pause the training of systems more powerful than GPT-4 for at least six months. The call follows the recent public debut of OpenAI's GPT-4, the most advanced AI system to date.
Is AI a threat to society?
AI systems with human-competitive intelligence, according to the letter, pose significant risks to society and humanity. While AI labs race to develop more powerful AI systems, the letter claims that there is a lack of planning and management in place to ensure their safe and responsible deployment.
The signatories express concern about the potential consequences of artificial intelligence systems flooding information channels with propaganda, automating jobs, and eventually outsmarting and replacing humans.
The letter also states that AI labs should use the proposed pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development, audited and overseen by independent outside experts. These protocols should ensure that AI systems are safe beyond reasonable doubt.
In addition, the letter urges policymakers to expedite the creation of robust AI governance systems, including new regulatory authorities dedicated to AI, oversight and tracking of highly capable AI systems, and liability for AI-caused harm.
Stepping Back from the Dangerous AI Race
The signatories stress that the call for a pause does not imply a halt to AI development in general but rather a step back from the risky race to develop unpredictable black-box models with emergent capabilities.
Instead, they argue that AI research and development should be refocused on improving the accuracy, safety, interpretability, transparency, robustness, alignment, trustworthiness, and loyalty of today's powerful, cutting-edge systems.
Several notable figures have signed the open letter, including Apple co-founder Steve Wozniak, Stability AI CEO Emad Mostaque, and Sapiens author Yuval Noah Harari. OpenAI, which recently expanded its system with plugin support, has yet to respond to the call for a pause.
Too late?
While the letter is backed by research and legitimate concerns, slowing the advancement of AI technology may not be the best solution. The technology has already reached millions of users, and industries are beginning to integrate it into critical services. Stopping or slowing its development at this stage would likely disrupt the lives of many people.
Stay posted here at Tech Times.