(San Francisco) Google, Microsoft, Anthropic and OpenAI, four of the companies leading the race for next-generation artificial intelligence, announced on Wednesday the creation of a new industry body to address the risks associated with the technology.

Called the “Frontier Model Forum”, the body is tasked with promoting the “responsible development” of the most advanced artificial intelligence (AI) models and “minimizing potential risks”, according to a press release.

Its members pledge to share best practices with one another, as well as with lawmakers, researchers and civil-society groups, in order to make these new systems less dangerous.

The rapid rollout of generative AI, through interfaces such as ChatGPT (OpenAI), Bing (Microsoft) and Bard (Google), has raised considerable concern among authorities and in civil society.

The European Union (EU) is finalizing a draft AI regulation that is expected to impose obligations on companies in the sector, such as transparency with users and human oversight of the machines.

In the United States, political tensions in Congress have blocked any effort in this direction. The White House is therefore urging the companies concerned to ensure the safety of their products themselves, invoking their “moral duty”, in the words of US Vice President Kamala Harris in early May.

Last week, Joe Biden’s administration secured “commitments” from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI to uphold “three principles” in AI development: safety, security and trust.

In particular, they are expected to test their programs before release, guard against cyberattacks and fraud, and develop a way to mark AI-generated content so that it can be clearly identified as such.

The leaders of these companies do not deny the risks; quite the opposite. In June, Sam Altman, the head of OpenAI, and Demis Hassabis, the head of DeepMind (Google), among others, called for action against “the risks of extinction” of humanity “linked to AI”.

During a congressional hearing, Sam Altman supported the popular idea of creating an international agency responsible for the governance of artificial intelligence, similar to those that exist in other fields.

In the meantime, OpenAI is working towards a so-called “general” AI, with cognitive abilities that would be similar to those of humans.

In a July 6 blog post, the California-based startup defined AI “frontier models” as “highly sophisticated foundational programs that could feature dangerous capabilities sufficient to pose serious risks to public safety.”

“Dangerous capabilities can emerge unexpectedly,” OpenAI further warns, and “it’s hard to really prevent a deployed model from being misused.”