AI experts disown Musk-backed campaign citing their research

Four AI experts have voiced concern after their work was cited in an open letter, co-signed by Elon Musk, calling for an urgent pause in AI research.

The letter, dated March 22 and bearing more than 1,800 signatures by Friday, called for a six-month pause on developing systems “more powerful” than OpenAI’s new GPT-4, which is backed by Microsoft (MSFT.O) and can hold human-like conversations, compose songs, and summarize lengthy documents.

Since the release of ChatGPT, GPT-4’s predecessor, last year, rival companies have rushed to launch similar products.

The open letter cited 12 pieces of research from experts including university academics as well as current and former employees of OpenAI, Google (GOOGL.O), and Google subsidiary DeepMind, and claimed that AI systems with “human-competitive intelligence” pose profound risks to humanity.

Civil society groups in the EU and the US have since urged lawmakers to rein in OpenAI’s research. OpenAI did not immediately respond to requests for comment.

The Future of Life Institute (FLI), the organization behind the letter, which is primarily funded by the Musk Foundation, has come under fire for allegedly prioritizing imagined apocalyptic scenarios over more immediate AI concerns, such as racist or sexist biases being programmed into the machines.

The well-known article “On the Dangers of Stochastic Parrots,” co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google, was one of the studies cited.

Mitchell, now chief ethical scientist at the AI firm Hugging Face, criticized the letter, telling Reuters it was unclear what counted as “more powerful than GPT-4.”

The letter “asserts a set of priorities and a narrative on AI that benefits the supporters of FLI by treating a lot of questionable ideas as a given,” she said. “Some of us don’t have the luxury of ignoring current harms.”

Mitchell’s co-authors Timnit Gebru and Emily M. Bender also criticized the letter on Twitter, with Bender calling some of its claims “unhinged.”

FLI president Max Tegmark said the campaign was not an attempt to undermine OpenAI’s competitive advantage.

“It’s quite hilarious. I’ve heard people say, ‘Elon Musk is attempting to slow down the competition,’” he said, adding that Musk had no role in drafting the letter. “This is not about one company.”

Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, also took issue with the letter’s mention of her work. Last year she co-authored a research paper arguing that the widespread use of AI already posed serious risks.

Her research argued that present-day AI systems could influence how people make decisions about existential threats such as nuclear war and climate change.

“AI does not need to reach human-level intelligence to exacerbate those risks,” she told Reuters.

“There are very significant non-existential risks that aren’t given the same degree of Hollywood attention,” she added.

Asked about the criticism, FLI’s Tegmark said both the immediate and the long-term risks of AI should be taken seriously.

“If we cite someone, it just means we claim they are endorsing that sentence. It doesn’t mean they are endorsing the letter, or that we endorse everything they think,” he told Reuters.

Dan Hendrycks, director of the California-based Center for AI Safety, who was also cited in the letter, stood by its points, telling Reuters it was sensible to consider black swan events: those that seem improbable but would have severe consequences.

The open letter also warned that generative AI tools could be used to flood the internet with “propaganda and untruth.”

Dori-Hacohen said it was “pretty rich” for Musk to have signed it, citing a reported rise in misinformation on Twitter following his acquisition of the platform, documented by the civil society group Common Cause and others.

Twitter will soon launch a new fee structure for access to its research data, which could hinder research on the subject.

“That has directly impacted my lab’s work, and that done by others who study misinformation and disinformation,” Dori-Hacohen said. “We’re operating with one hand tied behind our backs.”

Musk and Twitter did not immediately respond to requests for comment.