(San Francisco) American companies at the forefront of artificial intelligence (AI) are committed to making this technology safer and more transparent, in particular by working on systems to mark content created with AI to reduce the risk of fraud and misinformation.

The White House announced on Friday that seven companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI – have agreed to abide by several principles in the development of AI.

In particular, they promised to test their computer programs internally and externally before their launch, to invest in cybersecurity and to share relevant information about their tools, including possible flaws, with authorities and researchers.

They must also “develop robust techniques to ensure that users know when content has been generated by AI, such as a watermark tagging system,” said a statement from the US administration.

“This will allow AI creativity to thrive while reducing the dangers of fraud and trickery,” the statement said.

Fake photographs and sophisticated manipulated videos (deepfakes) have been around for years, but generative AI, capable of producing text and images from a simple request in everyday language, raises fears of a flood of fake content online.

Such content can be used to build highly credible scams or to manipulate public opinion, a particularly worrying prospect as the 2024 US election approaches.

“It’s a complex subject,” a senior White House official acknowledged at a press conference.

The watermark “should work for visual as well as audio content,” he explained. “It has to be technically robust, but also easy for users to see.”
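The companies have not published how such a watermark would work. For text, one family of approaches discussed in research is statistical watermarking: the generator is nudged to prefer tokens from a pseudo-random "green" subset of the vocabulary, and a detector later checks whether a text uses green tokens far more often than chance. The sketch below is a toy illustration of that idea only, not any company's actual scheme; the function names, the toy vocabulary, and the 50/50 split are all assumptions for demonstration (real systems operate on a language model's logits).

```python
import hashlib
import random


def green_set(prev_token: str, vocab: list[str]) -> set[str]:
    """Deterministically mark half the vocabulary as 'green',
    seeded on the previous token (toy illustration only)."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    shuffled = sorted(vocab)
    random.Random(seed).shuffle(shuffled)
    return set(shuffled[: len(shuffled) // 2])


def watermark_score(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens drawn from the green set of their predecessor.
    Watermarked text scores near 1.0; ordinary text near 0.5."""
    hits = sum(
        1
        for prev, tok in zip(tokens, tokens[1:])
        if tok in green_set(prev, vocab)
    )
    return hits / max(1, len(tokens) - 1)
```

A "watermarked" generator would always pick its next token from the green set, so a detector with the same seed can flag the text without needing any cooperation from the reader, which is one reason this class of technique is attractive for the kind of tagging the statement describes.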

“This is a good first step in helping the public identify content created with AI,” commented James Steyer, founder of the NGO Common Sense Media.

“But this type of tagging alone will not be enough to prevent malicious actors from using such content for harmful or illegal purposes,” he said, citing the existence of hacked generative AI programs available online.

In May, the White House stressed the “moral duty” of AI companies to ensure the safety and security of their products.

Political tensions in Congress make new AI laws unlikely anytime soon, but the government has said it is working on an executive order.