The chair of the UK's Science, Innovation and Technology Committee warned that artificial intelligence (AI) regulators lack the financial support needed to keep pace with the technology's growth.
The committee's report on the governance of AI said the £10 million the government announced in February to support the Office of Communications (Ofcom) and other regulators in responding to the technology's rise was "clearly insufficient."
The report said the next government should announce further funding commensurate with the scale of the task. The committee also expressed concern over reports that some developers had not made their models available to the recently established AI Safety Institute for pre-deployment safety testing.
It added that the incoming government should name any developers who refuse access, in breach of the agreement reached at the Bletchley Park summit in November 2023, and explain the reasons for their refusal.
According to the Department for Science, Innovation and Technology, the UK is taking steps to upskill regulators and oversee AI as part of a wider £100 million funding package.
Highlighted AI Dangers
The report also highlighted AI's deceptive capabilities. It said that deepfake content designed to damage the democratic process threatens the integrity of the general election campaign, and that the government and regulators should therefore take strong enforcement action against online platforms that host it.
The report also cautioned that AI can operate as a "black box," meaning the logic and basis of its outputs may be unknown, which it described as perhaps the most significant challenge.
Warning on Insufficient AI Safeguards
The report comes just a week after a warning, published on May 20, that the safeguards currently in place are insufficient should a significant AI breakthrough occur.
The warning was issued by twenty-five experts, including Geoffrey Hinton and Yoshua Bengio, two of the three "godfathers of AI," whose work earned them the ACM Turing Award, often described as the "Nobel Prize of Computing."
To address extreme AI threats, the paper proposes government safety frameworks that impose stricter requirements if the technology advances rapidly.
It also calls for stricter risk-assessment requirements for tech companies, more funding for newly established bodies such as the AI safety institutes in the US and the UK, and restrictions on the use of autonomous AI systems in critical societal roles.
Other co-authors include Yuval Noah Harari, the best-selling author of "Sapiens"; the late Nobel economics laureate Daniel Kahneman; Sheila McIlraith, an AI professor at the University of Toronto; and Dawn Song, an academic at the University of California, Berkeley.
The study warned that humanity is not ready to handle these dangers appropriately: substantial resources are being invested in making AI systems more capable, but considerably less in ensuring they are safe and in mitigating the harms they may cause.