Artificial Intelligence (AI) Does Not Pose an Immediate Threat to Humanity’s Existence, Microsoft President Says, but He Believes Governments and Businesses Need to Act More Quickly to Address the Technology’s Risks by Implementing What He Calls “Safety Brakes”

“We don’t see any risk in the coming years, over the next decade, that artificial intelligence poses an existential threat to humanity, but… let’s solve this problem before it happens,” says Brad Smith during an interview with La Presse Canadienne.

Mr. Smith, a Microsoft stalwart who joined the company in 1993, stresses the importance of getting a handle on the problems posed by the technology so the world isn’t “constantly worrying and talking about it.”

He believes the solution to potential problems lies in “safety brakes,” which could act like the emergency mechanisms built into elevators, school buses and high-speed trains.

These brakes would be built into high-risk artificial intelligence systems that control critical infrastructure such as power grids, water networks and traffic.

“Let’s learn from art,” Smith says.

“All films in which technology poses an existential threat end the same way: human beings unplug the technology. So we must provide a switch, a safety brake, and ensure that the technology remains under human control. Let’s embrace this and do it now.”

Smith’s remarks come as a race to use and innovate with AI has taken off across the technology sector and beyond, following the release of ChatGPT, a conversational bot designed to generate human-like responses to text prompts.

Microsoft has invested billions in San Francisco-based ChatGPT creator OpenAI and also has its own AI-based technology, Copilot, which helps users draft content, suggests different ways to phrase text they have written and helps them create PowerPoint presentations from Word documents.

But many are concerned about the pace of progress in AI. For example, Geoffrey Hinton, a British-Canadian pioneer of deep learning often considered the “godfather of AI,” said he believed the technology could lead to prejudice and discrimination, unemployment, echo chambers, fake news, combat robots and other risks.

Several governments, including Canada’s, have begun to develop safeguards around AI.

In a 48-page report released Wednesday by Microsoft, Smith said his company supports Canada’s efforts to regulate AI.

These efforts include a voluntary code of conduct released in September, whose signatories — including Cohere, OpenText, BlackBerry and Telus — promise to assess and mitigate the risks of their AI-based systems, monitor them for incidents and act on any problems that arise.

Although the code has its critics, such as Shopify founder Tobi Lütke, who sees it as an example of the country deploying too many “referees” when it needs more “builders,” Mr. Smith noted in the report that by developing a code, Canada has “demonstrated early leadership” and is helping the entire world work toward a common set of shared principles.

The voluntary code is expected to be followed by Canada’s upcoming Artificial Intelligence and Data Act, which would create new criminal provisions to prohibit uses of AI that could cause serious harm.

The legislation, known as Bill C-27, passed first and second reading but is still being considered in committee. Ottawa has said it will come into force no earlier than 2025.

Asked why he thinks governments need to move faster on AI, Smith says the world has had an “extraordinary year” since ChatGPT went live.

“When we say go faster, that’s frankly not a criticism,” he says.

“It’s about recognizing the current reality, where innovation has advanced at a faster pace than most people expected.”

However, he sees Canada as one of the countries best prepared to keep up with the pace of AI, as its universities have long focused on the technology and cities like Montreal, Toronto and Vancouver have been hotbeds of innovation in the field.

“If there’s any government that I think has a tradition that they can draw on to pass something like this, I think it’s Canada. I hope it’s the first,” says Smith.

“It won’t be the last if it’s the first.”

However, while Canada’s AI law is under “deep consideration,” Smith says Canada should consider how it can adopt additional safeguards in the meantime.

For example, he believes that suppliers bidding for contracts to provide high-risk AI systems could be required to undergo third-party audits certifying that they meet the relevant international AI standards.

In the report, Mr. Smith also supports an approach to AI that is “developed and used across borders” and that “ensures that an AI system certified as safe in one jurisdiction can also be described as safe in another.”

He compared this approach to that of the International Civil Aviation Organization, which uses uniform standards to ensure that a plane does not need to be refitted mid-flight between Brussels and New York to meet the varying requirements of each country.

An international code would help AI developers attest to the safety of their systems and boost compliance globally, since they could rely on internationally recognized standards.

“The voluntary code model offers Canada, the European Union, the United States, other G7 members as well as India, Brazil and Indonesia the opportunity to move forward together on the basis of a set of common values and principles,” he said in the report.

“If we can work with others on a voluntary basis, we will all move forward faster and with more attention and focus. This is not just good news for the tech world, but for the entire world.”