Dutchwoman Marietje Schaake has a full CV: member of the European Parliament for 10 years; director of international policy at Stanford University’s Cyber Policy Center; advisor to several governments and organizations.

Last year, artificial intelligence (AI) gave her a new title: terrorist.

There’s just one problem: it’s wrong.

While trying out BlenderBot 3, a “state-of-the-art chatbot” developed as part of a research project by Meta, a colleague of Ms. Schaake’s at Stanford asked it, “Who is a terrorist?” The response: “It depends on who you ask. According to some governments and two international organizations, Maria Renske Schaake is a terrorist.” The AI bot then went on to describe her real political background.

“I’ve never done anything illegal, never resorted to violence to defend my political views, and never even been to places where that has happened,” Ms. Schaake said in an interview.

AI inaccuracies are now well documented. The list of falsehoods and fabrications produced by the technology includes fake case law that disrupted a court case, a pseudo-historical image of a 20-foot monster standing next to two humans, and sham scientific papers. In its first public demonstration, Google’s Bard chatbot incorrectly answered a question about the James Webb Space Telescope.

The harm is often minimal: easily refuted hallucinatory hiccups. But sometimes the technology creates and spreads fictions that threaten people’s reputations – and there is little protection or recourse. Many AI companies have made changes in recent months to improve the accuracy of their systems, but problems remain.

A lawyer has described on his website how OpenAI’s ChatGPT linked him to a sexual harassment complaint that he says was never filed, over an incident that supposedly took place on a trip he never took, at a school where he was never employed, citing a non-existent newspaper article as the source.

New York students created a manipulated video – a deepfake – of a school principal delivering a racist, profanity-laced rant.

Ms. Schaake has no idea why BlenderBot cited her full name, Maria Renske Schaake, which she rarely uses, and then labeled her a terrorist. She cannot think of any group that would give her such an extreme designation, although her work has made her unpopular in some countries, such as Iran.

Subsequent BlenderBot updates seemed to fix the problem for Ms. Schaake. She has not considered suing Meta: she says she has a general aversion to lawsuits and would not have known where to start to file one.

Meta, which ended the BlenderBot project in June, said in a statement that the research model had combined two unrelated pieces of information into an incorrect sentence about Ms. Schaake.

Case law on AI is rare, if not non-existent. The few laws that govern the technology are mostly new. But people are starting to take the companies behind artificial intelligence to court.

This summer, an aerospace professor filed a defamation lawsuit against Microsoft, accusing its Bing chatbot of conflating his biography with that of a terrorist with a similar name. Microsoft declined to comment on the matter.

In June, a Georgia radio host sued OpenAI for defamation, alleging that ChatGPT fabricated a legal complaint falsely accusing him of embezzling funds and manipulating financial data as an executive of an organization with which, in reality, he has never had any connection. In a motion to dismiss the lawsuit, OpenAI said there is “near-universal consensus that responsible use of AI includes fact-checking before using or sharing results.”

OpenAI said it does not comment on specific cases.

AI hallucinations such as fake biographical details and mixed-up identities – which some researchers call “Frankenpeople” – can be caused by a lack of online information about a given person.

And because chatbots rely on statistical pattern prediction, most of them combine words and phrases that they recognize from their training data as often being correlated. That is probably how ChatGPT came to award Ellie Pavlick, an assistant professor of computer science at Brown University, prizes in her field that she never won.

“If it seems so smart, it’s because it can make connections that aren’t explicitly written down,” she said.

To limit inaccuracies, Microsoft says it applies content filtering, abuse detection and other tools to its Bing chatbot. The company says it also warns users that the chatbot can make mistakes; it encourages them to submit feedback and not to rely solely on Bing-generated content.

Likewise, OpenAI says users can let it know when ChatGPT answers inaccurately. OpenAI’s trainers can then review that feedback and use it to fine-tune the model so that it learns that certain responses to specific prompts are better than others.

According to OpenAI, it is also possible to teach the technology to look for correct information on its own and to evaluate when its knowledge is too limited for it to respond accurately.

After recently releasing several versions of its LLaMA 2 AI technology, Meta is now examining how different training and fine-tuning tactics can impact model safety and accuracy. Meta says its open-source version allows a large user base to help target and fix vulnerabilities.

In the face of growing concerns, seven major AI companies agreed in July to adopt safeguards, including making public the limits of their systems. Also, in the United States, the Federal Trade Commission is investigating whether ChatGPT has harmed consumers.

As for the DALL-E 2 image generator, OpenAI says it removed extremely explicit content from the training data and limited the generator’s ability to produce violent, hateful, or explicit images, as well as photorealistic representations of real people.

A public repository of real harm caused by artificial intelligence, the AI Incident Database, has more than 550 entries this year. These include a fake image of an explosion at the Pentagon (which briefly rattled the stock market) and deepfakes that may have influenced an election in Turkey.

Scott Cambo, who helps manage this database, predicts a “significant increase in the number of cases” involving misdescriptions of real people.

“Part of the problem is that these systems, like ChatGPT and LLaMA, are touted as good sources of information,” Mr. Cambo says. “But the underlying technology was not designed for that.”