Until now, most North American workers replaced by machines have been poorly educated men working in manufacturing.

But new automation driven by artificial intelligence (AI) systems like ChatGPT and Bard is changing everything. This type of AI – called a “large language model” – can quickly process and synthesize information and generate content. Now automation threatens office jobs – those that require more cognitive skill, creativity and education. According to various studies, these jobs are most often well paid and somewhat more likely to be held by women.

“It surprised a lot of researchers, including me,” says Erik Brynjolfsson, a professor at the Stanford Institute for Human-Centered AI, who had predicted that creativity and technical skills would protect humans from automation.

Recent research analyzed the tasks of American workers – using the Department of Labor’s O*Net database – to determine which ones could be handled by large language models. In 20% to 25% of occupations, this type of AI could be very useful; in the majority of jobs, it could do at least some tasks, according to analyses by the Pew Research Center and Goldman Sachs.

For now, these tools still sometimes produce incorrect information and are more likely to assist workers than replace them, say Tyna Eloundou, Sam Manning, Pamela Mishkin and Daniel Rock, researchers at OpenAI, the company behind ChatGPT. According to their analysis of 19,265 tasks performed in 923 occupations, large language models could perform some of the tasks done by 80% of American workers.

Still, some workers are right to fear being replaced by large language models, they argue. As Sam Altman, CEO of OpenAI, told The Atlantic magazine last month: “Jobs are going to disappear, period.”

The four OpenAI researchers asked an advanced version of ChatGPT to analyze O*Net data and determine which tasks large language models can do. It found that 86 jobs were 100% exposed (all of their tasks could be assisted by AI). The researchers themselves estimate that only 15 jobs are. Researchers and ChatGPT agree on the most exposed job: mathematician.

But even tradespeople could use AI for planning, customer service and route optimization, points out Mike Bidwell, CEO of Neighborly, a home services company.

Researchers unrelated to OpenAI believe there are still uniquely human capabilities that cannot (yet) be automated: social skills, teamwork, caring for others, and skilled manual trades. “We won’t be able to do without what humans do tomorrow,” Brynjolfsson said. “But those skills will be different: learning to ask the right questions, really interacting with people, doing manual work that requires dexterity.”

For now, large language models are likely to help many workers be more productive in their current jobs, much like giving office workers – even beginners – an assistant or a researcher (which, however, does not bode well for human assistants).

At work, the consumer version of ChatGPT is risky: it often gets things wrong, can reflect biases, and is not secure enough for companies to entrust with confidential information. Companies that use it get around these risks by exploiting its “closed domain” capabilities: they train the model on a restricted body of content purged of any confidential data.

Aquent Talent, a recruitment agency, uses a commercial version of Bard. Typically, humans sift through applicants’ resumes to match them to a vacancy; Bard can do this much more efficiently. But human oversight is still needed, especially in hiring, because human biases are baked into the model, notes Rohshann Pilla, CEO of Aquent Talent.

Harvey, a startup funded by OpenAI, offers such a tool for law firms. Partners use it for strategic purposes, such as finding questions to ask during a deposition or summarizing how the firm has negotiated similar cases.

“The tool does not produce advice to give to the client,” explains Winston Weinberg, co-founder of Harvey. “It quickly sifts through the information that leads up to the advice. The lawyer still has to decide.”

According to him, the system is particularly useful for paralegals and junior lawyers. They learn by asking questions like: “What is this type of contract for, and why was it written this way?” The system is also useful for producing first drafts, such as summaries of financial statements. “And then, suddenly, they have an assistant. They can take on higher-level tasks earlier in their careers.”

Other studies of large language models in business reach the same conclusion: they especially help entry-level employees. Brynjolfsson’s study of customer-support agents shows that AI increased their productivity by 14% on average – and by 35% for the lowest-skilled employees, who move up the learning curve faster with AI.

The previous wave of automation, in the manufacturing sector, increased income inequality by depriving workers without a university education of well-paying jobs.

Some researchers believe that large language models could have the opposite effect, reducing inequalities between the highest paid workers and the rest.

“I hope they will enable less-educated people to do more things by lowering the barriers to entry for high-paying elite jobs,” says David Autor, a labor economist at MIT.