(San Francisco) Despite press speculation ahead of Wednesday’s presidential summit in California, China and the United States have not committed to banning lethal autonomous weapons, as some experts had hoped.

Presidents Joe Biden and Xi Jinping nevertheless agreed to have their respective experts discuss the risks linked to the rapid progress of artificial intelligence (AI), which is disrupting many sectors.

In the field of weaponry, this technology could constitute the third major revolution, after the invention of gunpowder and the atomic bomb.

Here is a non-exhaustive overview of AI applications in weaponry.

Robots, drones, torpedoes: thanks to technologies ranging from computer vision to sophisticated sensors, all kinds of weapons can be transformed into autonomous systems governed by AI algorithms.

Autonomy does not mean that a weapon “wakes up in the morning and decides to go to war,” says Stuart Russell, professor of computer science at the University of California at Berkeley.

“This means they have the ability to locate, select and attack human targets without human intervention.”

These lethal autonomous weapons systems are also nicknamed “killer robots,” a phrase that evokes androids straight out of science fiction.

“This is one of the options being explored, but in my opinion it is the least useful of all,” Russell remarks of the humanoid form.

Most of these weapons are still ideas or prototypes, but Russia’s war in Ukraine offers a glimpse of their potential.

Telecommunications problems have pushed the armies involved to make their drones more autonomous.

As a result, “people are taking refuge underground,” notes Stuart Russell, and this foreshadows a major change in the nature of war, “where being visible anywhere on the battlefield will be a death sentence.”

Autonomous weapons have several potential advantages: efficiency, mass production at low cost, freedom from human emotions such as fear or anger, and no radioactive craters left in their wake.

But they raise major ethical questions about how targets are evaluated and engaged.

Above all, “since they do not require human supervision, you can launch as many as you want,” Russell stresses, “and therefore potentially destroy an entire city or an entire ethnic group at once.”

Autonomous submarines, boats and planes are meant to provide reconnaissance, surveillance or logistical support in dangerous or remote areas.

These vehicles, like drones, are at the heart of the “Replicator” program launched by the Pentagon to counter China’s numerical advantage in personnel and military equipment, particularly in the Asia-Pacific region, where the United States is trying to regain ground.

The goal is to deploy several thousand “inexpensive and easy-to-replace autonomous systems in many areas over the next 18 to 24 months,” US Deputy Secretary of Defense Kathleen Hicks said at the end of August.

She cited the example of space, where such devices “will be deployed by the dozens, to the point where it will be impossible to eliminate them all.”

Many companies are developing and testing autonomous vehicles, like California-based Anduril, which touts crewless submarines “optimized for a variety of defense and commercial missions such as long-range oceanographic detection, submarine battlespace, mine countermeasures, anti-submarine warfare,” etc.

Powered by AI and capable of synthesizing the mountains of data collected by satellites, radars, sensors and intelligence services, tactical software serves as a powerful assistant for military staffs.

“The Pentagon must understand that in an AI war, data is the ammunition,” argued Alexandr Wang, head of Scale AI, during a Congressional hearing in July.

“We have the largest fleet of military equipment in the world. It generates 22 terabytes of data per day. If we manage to properly organize this data to analyze it with AI, we will have a pretty insurmountable advantage in terms of using this technology for military purposes.”

Scale AI has been awarded a contract to deploy a language model on a classified network of a major US Army unit. Its “Donovan” chatbot is expected to let commanders “plan and act in minutes instead of weeks.”

Washington, however, has set limits.

“AI should not make decisions about how and when to use a nuclear weapon, or even be in the loop,” Secretary of State Antony Blinken said in Tokyo on November 8.