
The White House wants content created with AI to be identifiable

(San Francisco) Leading American artificial intelligence (AI) companies have committed to making the technology safer and more transparent, in particular by working on systems to mark AI-generated content and reduce the risk of fraud and misinformation.

The White House announced on Friday that seven companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI – have agreed to abide by several principles in the development of AI.

In particular, they promised to test their computer programs internally and externally before their launch, to invest in cybersecurity and to share relevant information about their tools, including possible flaws, with authorities and researchers.

They must also “develop robust techniques to ensure that users know when content has been generated by AI, such as a watermark tagging system,” said a statement from the US administration.

“This will allow AI creativity to thrive while reducing the dangers of fraud and trickery,” the statement said.

Faked photographs and sophisticated manipulated videos (deepfakes) have existed for years, but generative AI, capable of producing text and images from a simple prompt in everyday language, raises fears of a flood of fake content online.

Such content can be used to craft highly convincing scams or to manipulate public opinion, a particularly worrying prospect as the 2024 US election approaches.

“It’s a complex subject,” a senior White House official acknowledged at a press conference.

The watermark “should work for visual as well as audio content,” he explained. “It has to be technically robust, but also easy for users to see.”
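None of the companies have published the technical details of their tagging schemes, but the general idea of machine-readable provenance labeling can be illustrated with a minimal sketch. The example below is purely hypothetical: it binds a "generated by AI" tag to a piece of content with an HMAC, so that anyone holding the verification key can detect whether the content was altered after tagging. Real watermarking systems embed the signal in the media itself and are far more robust; the key name and tag format here are invented for illustration.

```python
import hmac
import hashlib

# Hypothetical provider-held signing key (an assumption for this sketch).
SECRET_KEY = b"provider-signing-key"

def tag_content(content: bytes) -> dict:
    """Return a provenance tag binding this exact content to an AI-generated label."""
    digest = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return {"generator": "ai", "sha256_hmac": digest}

def verify_tag(content: bytes, tag: dict) -> bool:
    """Check that the tag was produced for this content and not copied elsewhere."""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag["sha256_hmac"])

image_bytes = b"...generated image data..."
tag = tag_content(image_bytes)
print(verify_tag(image_bytes, tag))    # True: content matches its tag
print(verify_tag(b"edited data", tag)) # False: content was modified
```

A scheme like this only detects tampering after the fact; as the article notes, it does nothing against actors who simply strip the tag, which is why robustness of the embedded watermark matters.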

“This is a good first step in helping the public identify content created with AI,” commented James Steyer, founder of the NGO Common Sense Media.

“But this type of tagging alone will not be enough to prevent malicious actors from using such content for harmful or illegal purposes,” he said, citing the existence of hacked generative AI programs available online.

In May, the White House stressed the “moral duty” of AI companies to ensure the safety and security of their products.

Political tensions in Congress make new AI laws unlikely anytime soon, but the government has said it is working on an executive order.
