When the Google Photos app launched in May 2015, it caused a stir: it could analyze images and automatically tag the people, places, and things in them, a striking feat at the time. Two months later, Jacky Alciné, a software developer who is Black, discovered that Google had labeled photos of him and a friend as “gorillas.”
The label was especially offensive because of its racist connotations. Mr. Alciné posted a screenshot on Twitter and criticized Google. Amid the ensuing controversy, Google blocked the Photos app’s “gorilla” category and promised to fix the problem. Eight years later, with artificial intelligence having advanced considerably, we checked whether it had, and looked at how competitors Apple, Amazon, and Microsoft compare.
There’s one member of the primate family that Google and Apple can recognize: the lemur, that long-tailed, perpetually amazed-looking animal that, like humans, has opposable thumbs, but is a more distant relative of humans than the great apes are.
The tools from Google and Apple were clearly the most advanced in terms of image analysis.
Yet Google, whose Android software underpins most of the world’s smartphones, has disabled visual searches for primates for fear of mistakenly labeling a person as an animal. Apple, whose technology performed comparably to Google’s in our test, also appears to have blocked the ability to search for monkeys and apes.
Consumers may not need to identify a primate all that often, though in 2019 an iPhone user complained on Apple’s support forum that the software “can’t identify monkeys in [its] pictures.” The problem, however, raises larger questions about other unpatched, or unpatchable, flaws lurking in services that rely on computer vision, and in other AI-powered products.
Mr. Alciné said he was dismayed to learn that Google still hasn’t fixed the problem, and that society places too much trust in technology. “I will never trust AI,” he said.
Computer vision is now used for tasks as mundane as sending an alert when a package has been delivered to the door, and as consequential as navigating cars and helping the police find suspects.
In the gorilla case, two former Google employees who worked on the technology said the problem was that the company had not included enough photos of Black people in the huge image collection used to train its artificial intelligence system. As a result, the technology was not exposed to enough dark-skinned people and mistook them for gorillas.
Google did not catch the “gorilla” problem at the time because it had not had enough employees test the feature before its public launch, the two former employees said.
AI is making its way into everyday life, raising fears of unintended consequences. Computer-vision products and chatbots like ChatGPT are different technologies, but both depend on the data that underlies the software, and both can malfunction because of flaws in that data or biases embedded in their code.
Microsoft recently imposed limits on the chatbot built into its Bing search engine after it engaged in inappropriate conversations with users.
This decision by Microsoft, like Google’s on gorilla identification, illustrates a common industry approach: blocking faulty technology features instead of fixing them.
Google disabled monkey and gorilla identification in its Photos app because the benefit did not outweigh the risk of harm, said Michael Marconi, a Google spokesman.
Apple declined to comment on this issue.
Amazon and Microsoft said only that they were always seeking to improve their products.
Years after the Google Photos error, the company ran into a similar problem while testing its Nest home security camera, a former employee said. The Nest camera, which uses AI to determine whether a person on camera is familiar, mistook Black people for animals. Google quickly corrected the problem before the product launched, the employee said.
But on Google’s forums, customers have complained about other flaws. In 2021, a customer received alerts that his mother was ringing the doorbell; when he opened the door, it was his mother-in-law. When users complained that the system was confusing faces they had tagged as “familiar,” customer service told them to delete all of their tags and start over.
Google has been working behind the scenes to improve this technology, but it hasn’t allowed users to judge these efforts.
Margaret Mitchell, a researcher who co-founded Google’s “Ethical AI” group, joined the company after the gorilla incident and worked with the Photos team. She said the decision to drop the gorilla label, “at least for a while,” was the right one.
“You have to weigh the usefulness of being able to label a gorilla against the harm of perpetuating harmful stereotypes,” Ms. Mitchell explained. “The harm a malfunction can cause outweighs the benefits.”
These systems are never foolproof, said Ms. Mitchell, who has since left Google. And because billions of people use Google’s products, even glitches that affect only one user in a billion will surface.
“A single mistake can have massive social ramifications,” she said. “It’s the poison needle in a haystack.”



