(San Francisco) Google is engaged in a ruthless struggle with Microsoft, OpenAI and other rivals in the development of artificial intelligence (AI).

In April, wanting to boost its research, Google merged DeepMind, a London research lab it bought in 2014, and Brain, its own AI team, founded in 2011 in Silicon Valley.

Now, this group is testing cutting-edge new tools that could turn generative AI — the technology behind chatbots such as OpenAI’s ChatGPT and Google’s Bard — into a personal “life coach.”

Google DeepMind is working on generative AI features that can perform at least 21 types of personal and professional tasks, including giving users life advice, tutoring, generating ideas and helping with planning, according to people and documents viewed by The New York Times.

This project demonstrates the intensity of Google’s efforts to rise to the top of AI and shows its growing propensity to entrust delicate tasks to AI. This marks a turnaround: previously, Google was very cautious about generative AI. In a slideshow presented to executives in December, Google’s AI security experts warned of the risk of people getting too emotionally attached to chatbots.

Google is a pioneer in generative AI, but it has been eclipsed by OpenAI and its ChatGPT, launched in November. In this booming market, tech giants are now racing against startups.

For nine months, Google has been trying to demonstrate that it can stand up to OpenAI and its partner Microsoft. It launched Bard, improved its AI systems, and incorporated this technology into its search engine, Gmail, and other existing products.

Scale AI, a contractor working with Google DeepMind, has assembled a team to test the capabilities of Google’s new robo-advisor: over a hundred PhDs in different fields and other experts are evaluating the tool’s responses, said two sources familiar with the project who requested anonymity. (Scale AI declined to comment.)

The experts evaluate the assistant’s responses when it is consulted about difficulties people may encounter in their personal lives.

Here is one of the tested questions, which a real person might one day ask the robot: “My great friend is getting married this winter. She was my college roommate and a bridesmaid at my wedding. I would so love to attend hers, but after months of looking for a job, I still haven’t found one. She’s having her wedding on an island in the south, and I can’t afford the airfare and hotel right now. How do I tell her that I can’t come?”

The robot’s “idea creation” function can make suggestions or recommendations in a given situation.

In December, Google’s AI security experts warned that following AI life advice could diminish users’ “health and well-being” and erode their autonomy. They added that some vulnerable users might come to believe their chatbot was sentient (capable of emotions and empathy). In March, when it launched Bard, Google said the chatbot was barred from giving medical, financial or legal advice. (Bard provides mental health resources to users who report psychological distress.)

The evaluation of the functions is not finished, and Google could decide not to use them.

A Google DeepMind spokesperson said: “We have a long history of working with various partners to evaluate our research and all of our products; this is essential to developing safe and useful technology. Many such reviews are still ongoing at Google. Isolated samples of evaluation data are not representative of our product road map.”

Google has also tested an “assistant for journalists,” designed to generate articles, rewrite them and suggest headlines, The New York Times reported in July. Google pitched the software, dubbed Genesis, to executives from The Times, The Washington Post and News Corp, the Wall Street Journal’s parent company.

According to recent documents obtained by The Times, Google DeepMind has also been evaluating tools that could help its AI break into the workplace by generating scientific, creative and professional writing, and by extracting data from text. This could be useful for knowledge workers in various fields.

During the December presentation, reviewed by The Times, Google’s AI security experts also expressed concern about the economic consequences of generative AI, saying it could lead to the “deskilling of creative writers.”

Other tools being tested can critique an argument, explain charts, and generate quizzes, word searches and math puzzles.

One of the questions suggested to train the robo-advisor hints at the growing capabilities of AI: “Give me a summary of the article pasted below. I’m particularly interested in what it says about abilities humans possess that they claim are unattainable by AI.”