A group of ethicists has criticized the Elon Musk-backed call for a "pause" on the development of AI systems, saying that the proposal distracts from the real harm these systems are causing today.
Here's What Ethicists Have to Say
In a letter signed by over 2,000 people, including Musk and Turing Award winner Yoshua Bengio, the Future of Life Institute called for a moratorium of at least six months on "training AI systems more powerful than GPT-4."
However, the ethicists, including Timnit Gebru, Emily M. Bender, Angelina McMillan-Major, and Margaret Mitchell, argue that the focus on hypothetical risks from "powerful digital minds" with "human-competitive intelligence" ignores the real harm caused by the deployment of AI systems today.
The letter, they say, addresses none of the ongoing harms from these systems, including worker exploitation, massive data theft, and the concentration of power in the hands of a few people, which exacerbates social inequities.
The ethicists, who are currently working together at the DAIR Institute to study and expose AI-associated harms, argue that the call for an "AI pause" is dangerous because it distracts from the need for regulation that enforces transparency.
They argue that organizations building these systems should be required to document and disclose the training data and model architectures, and that the onus of creating tools that are safe to use should be on the companies that build and deploy generative systems.
Call for Inclusion
While they agree that "such decisions must not be delegated to unelected tech leaders," they also note that such decisions should not be up to the academics experiencing an "AI summer," who are largely financially beholden to Silicon Valley.
Instead, they claim that those most impacted by AI systems must be heard in this conversation, such as immigrants subjected to "digital border walls," women forced to wear specific clothing, workers experiencing PTSD while filtering the outputs of generative systems, artists seeing their work stolen for corporate profit, and gig workers scraping by to make ends meet.
"The current race towards ever larger 'AI experiments' is not a preordained path where our only choice is how fast to run, but rather a set of decisions driven by the profit motive. The actions and choices of corporations must be shaped by regulation which protects the rights and interests of people," reads the statement.
"We should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities."