AI Researchers Pen Letter to OpenAI, Meta for More Accessible AI Systems

More than 200 top AI researchers join the plea.

More than 200 artificial intelligence (AI) researchers are reportedly calling on OpenAI, Meta, and other AI companies to change their policies and open their AI systems to independent evaluation.

The open letter argues that strict restrictions meant to prevent malicious actors from misusing AI systems are instead chilling independent research: researchers who attempt to safety-test AI models without a company's approval risk having their accounts blocked or facing legal action.

Experts in AI research, policy, and law signed the letter, including Percy Liang of Stanford University; Pulitzer Prize-winning journalist Julia Angwin; Renée DiResta of the Stanford Internet Observatory; Deb Raji, a Mozilla fellow who has pioneered research into auditing AI models; and Marietje Schaake, a former member of the European Parliament, along with other prominent AI researchers.

(Photo: KIRILL KUDRYAVTSEV/AFP via Getty Images)
A photo taken on February 26, 2024 shows the logo of an artificial intelligence chat application on a smartphone screen (L) and the letters AI on a laptop screen in Frankfurt am Main, western Germany.

The letter makes the case that, since hundreds of millions of people have used generative AI, particularly in the last two years, independent assessment of the hazards associated with AI is a crucial form of accountability.

The letter adds that AI holds great promise but also poses significant risks, including non-consensual intimate imagery, copyright infringement, and bias.

The researchers also cautioned generative AI companies not to repeat the mistakes of social media platforms, many of which have effectively banned the kinds of research meant to hold them accountable, whether by threatening legal action, sending cease-and-desist letters, or using other tactics to deter researchers.

In many cases, generative AI companies have already disabled researcher accounts and changed their terms of service to discourage specific types of evaluation.

AI Research Indemnification

As a call to action, the letter first asks AI companies to establish a legal safe harbor: research would be protected so long as it is conducted independently and in good faith and complies with established norms for vulnerability disclosure in AI safety, security, and reliability.

The researchers further suggested that companies commit to more equitable access by using independent reviewers to vet researchers' applications for evaluation. This would protect rule-abiding safety research from counterproductive account bans and ease concerns that firms would handpick their own evaluators.

The researchers acknowledge that these commitments will not solve every issue with responsible AI as it is practiced today, but they describe these fundamental commitments as an important first step in the lengthy process of building and evaluating AI in the public interest.

OpenAI and Meta's Research Accessibility

The effort comes at a time when AI firms are becoming more aggressive in barring outside auditors from their systems.

In recent court filings, OpenAI asserted that The New York Times's search for possible copyright breaches amounted to "hacking" its ChatGPT chatbot. And under Meta's updated terms, if users claim that its latest large language model, LLaMA 2, violates their intellectual property rights, Meta will revoke their license to use it.

Another signatory, movie studio artist Reid Southen, had several accounts banned while investigating whether the Midjourney image generator could be used to produce copyrighted images of movie characters. After he shared his findings, the company added more ominous language to its terms of service.
