AI not an immediate existential threat but ‘safety brakes’ needed: Microsoft

Microsoft’s president says he doesn’t think artificial intelligence poses an immediate threat to humanity’s existence, but governments and businesses still need to move faster to address the technology’s risks by implementing what he calls “safety brakes.”

“We don’t see any risk in the coming years, over the next decade, that somehow AI is going to pose some kind of existential threat to humanity, but … let’s solve this problem before the problem arrives,” Brad Smith said in an interview with The Canadian Press.

Smith – a stalwart of Microsoft who first joined the company in 1993 and now doubles as its vice-chair – said it’s important to get the problems posed by the technology under control so the globe doesn’t have to be “constantly worried and talking about it.”

He feels the way to address potential problems is through safety brakes, which could act like the emergency mechanisms built into elevators, school buses and high-speed trains.

They should be built into high-risk AI systems that control critical infrastructure such as electrical grids, water systems and traffic.

“Let’s learn from art,” Smith said.

“Every movie in which technology imposes an existential threat ends the same way – human beings turn the technology off. (So) have an on-off switch, have a safety brake, ensure that it remains under human control. Let’s embrace that and do it now.”

The remarks from Smith come as a race to use and innovate with AI has broken out in the tech sector and beyond following the release of ChatGPT, an AI chatbot designed to generate humanlike responses to text prompts.

Microsoft has invested billions into ChatGPT’s creator, San Francisco-based OpenAI, and also has its own AI-based technology, Copilot, which helps users draft content, suggests different ways to word text they’ve written and creates PowerPoint presentations from Word documents.

But many have deep concerns about the pace of AI advancement. For example, Geoffrey Hinton, a British-Canadian deep learning pioneer often referred to as the “godfather of AI,” has said he feels the technology could lead to bias and discrimination, joblessness, echo chambers, fake news, battle robots and other risks.

Several governments, including Canada’s, have begun devising guardrails around AI.

In a 48-page report Microsoft released Wednesday, Smith said his company is supportive of Canada’s push toward regulating AI.

Those efforts include a voluntary code of conduct released in September whose signatories – including Cohere, OpenText Corp., BlackBerry Ltd. and Telus Corp. – promise to assess and mitigate the risks of their AI-based systems, monitor them for incidents and act on any issues that arise.

Though the code has detractors such as Shopify Inc. founder Tobi Lutke, who sees it as an example of the country using too many “referees” when it needs more “builders,” Smith said in the report that by shaping a code Canada has “showed early leadership” and is helping the globe work toward a common set of shared principles.

The voluntary code is expected to be followed by Canada’s forthcoming Artificial Intelligence and Data Act, which would create new criminal law provisions to prohibit “reckless and malicious” uses of AI that cause serious harm to Canadians.

The act, part of Bill C-27, has passed first and second reading but is still being considered at committee. Ottawa has said it will come into force no sooner than 2025.

Asked why he thinks governments need to move faster on AI, Smith said the globe has faced an “extraordinary year” since ChatGPT’s release.

“When we say move faster, it’s frankly not meant as a criticism,” he said.

“It’s meant as a recognition of the current reality where innovation has taken off at a faster rate than most people expected.”

But he sees Canada as one of the countries most prepared to handle the pace of AI because universities have long emphasized the technology and cities such as Montreal, Toronto and Vancouver have been hotbeds for AI innovation.

“If there is a government that I think has a tradition on which it can build to adopt something like this, I think it’s Canada. I hope it’ll be the first,” Smith said.

“It won’t be the last if it’s the first.”

But as Canada’s AI act faces “thoughtful deliberation,” Smith thinks Canada should consider how it can adopt additional safeguards in the meantime.

For example, during the procurement process for high-risk AI systems, he thinks partners seeking contracts could be compelled to use third-party audits to certify that they comply with relevant international AI standards.

In the report, Smith also threw his support behind an approach to AI that will be “developed and used across borders” and “ensures that an AI system certified as safe in one jurisdiction can also qualify as safe in another.”

He compared this approach to the International Civil Aviation Organization, which uses uniform standards to ensure an airplane does not need to be refitted midflight from Brussels to New York to meet varying requirements each country may have.

An international code would help AI developers attest to the safety of their systems and boost compliance globally because they would be able to use standards that are internationally agreed upon.

“The model of a voluntary code provides an opportunity for Canada, the European Union, the United States, the other members of the G7 as well as India, Brazil, and Indonesia, to move forward together on a set of shared values and principles,” he said in the report.

“If we can work with others on a voluntary basis, then we will all move faster and with greater care and focus. That’s not just good news for the technology world, but for the whole world.”
