Artificial intelligence luminaries issued a warning on Tuesday about the technology they are developing: it could one day pose an existential threat to humanity, a risk as serious as pandemics and nuclear war.

That’s according to a one-sentence statement released by the nonprofit Center for AI Safety and signed by 350 executives, researchers and engineers working in artificial intelligence (AI), who say that “mitigating the risk of extinction linked to AI should be a global priority”.

Among the signatories are the CEOs of three leading AI companies: Sam Altman of OpenAI, Demis Hassabis of Google DeepMind and Dario Amodei of Anthropic.

Montrealer Yoshua Bengio and Torontonian Geoffrey Hinton, who are often described as the “godfathers” of modern AI, also signed the declaration. Messrs. Bengio and Hinton won the Turing Award for their work on neural networks (along with Frenchman Yann LeCun, head of AI research at Meta, who had yet to sign as of Tuesday).

The statement reflects growing concern about AI’s potential dangers. The recent prowess of “large language models” – the type of AI used by ChatGPT and other chatbots – has raised fears that AI could soon be used to spread fake news and propaganda, or that it could eliminate millions of white-collar jobs.

Some researchers think AI could cause society-wide upheaval within a few years if left unchecked (though they don’t always explain how that would happen).

This month, Messrs. Altman, Hassabis and Amodei met with President Joe Biden and Vice President Kamala Harris to urge them to regulate AI. In Senate hearings, Altman has warned that serious risks from advanced AI systems warrant state intervention and called for regulation.

Dan Hendrycks, director of the Center for AI Safety, believes signing the open letter is like “coming out of the closet” for some industry executives who have expressed concerns – privately – about the risks of the technology they are developing.

Others, more skeptical, say AI is still too immature to pose an existential threat. They are more worried about short-term problems with current AI systems, such as biased and incorrect answers.

Those who are worried counter that AI is progressing quickly: it has already surpassed human performance in some areas and will soon surpass it in others. In their view, AI shows signs of advanced understanding and is approaching artificial general intelligence (AGI), a type of AI capable of matching or exceeding human performance in a wide range of tasks.

Last week, Altman and two other OpenAI executives offered ways to responsibly manage AI systems. They call for cooperation between major AI designers, more research on large language models, and the formation of an international organization for AI similar to the International Atomic Energy Agency, which seeks to control the use of nuclear weapons.

Altman also supports requiring developers of large, state-of-the-art AI models to be licensed by the government.

In March, a thousand researchers signed another open letter calling for a six-month pause in AI development, warning of an “uncontrolled race to develop and deploy increasingly powerful digital brains.”

This letter, sponsored by another AI-focused nonprofit, the Future of Life Institute, was signed by Elon Musk and other big names in tech, but had few signatories working in major AI labs.

The brevity of the new statement from the Center for AI Safety – just 22 words – was intended to unite AI experts who might disagree about the nature of the specific risks or the measures to be taken, but who share general concerns about AI systems, says Hendrycks: “We didn’t want to come up with a detailed program with 30 potential interventions. When we do that, the message is diluted.”

The statement was first shared with a few top AI experts, including Mr. Hinton, who had just resigned from Google to be able to speak more freely about the potential dangers of AI. The text was then forwarded to several major AI labs, where some employees signed it.

The urgent warning from AI leaders comes as millions turn to chatbots for entertainment and increased productivity. And technology is improving at a rapid pace.

“I think if this technology goes bad, it’s going to be very bad,” Mr. Altman said during the Senate hearings. “We want to work with the government to prevent that from happening.”