
From deepfakes to ChatGPT, misinformation thrives with AI advancements: report


Rapid-fire advancements in artificial intelligence could help misinformation thrive in the year ahead, a new report is warning.


That’s according to the Top Risk Report for 2023, an annual document from the U.S.-based geopolitical risk analysts at the Eurasia Group.

The “weapons of mass disruption” that are emerging from speedy technological innovations “will erode social trust, empower demagogues and authoritarians, and disrupt businesses and markets,” the report said.

That’s why this threat ranked third on its list, bested only by risks posed by an increasingly aggressive China and a rogue Russia.

“This year will be a tipping point for disruptive technology’s role in society. A new form of AI, known as generative AI, will allow users to create realistic images, videos, and text with just a few sentences of guidance,” the report said.

“Large language models like GPT-3 and the soon-to-be-released GPT-4 will be able to reliably pass the Turing test—a Rubicon for machines’ ability to imitate human intelligence.”


These models, coupled with advances in deepfakes — digitally altered videos that can simulate everyone from your favourite singer to the prime minister — facial recognition technology and voice synthesis software “will render control over one’s likeness a relic of the past,” the report warned.

“User-friendly applications such as ChatGPT and Stable Diffusion will allow anyone minimally tech-savvy to harness the power of AI,” it said.


While revolutionary technologies have the power to “drive human progress,” that’s usually matched by their “ability to amplify humanity’s most destructive tendencies,” the report adds — and that’s exactly the risk the Eurasia Group is warning the world about.


Disinformation and misinformation have already made a splash on the geopolitical stage, even without a leg-up from artificial intelligence.

An academic analysis of at least six million tweets and retweets — and their origins — found that Russia is targeting Canada in an effort to influence public opinion here.

The study by the University of Calgary’s School of Public Policy in June found that huge numbers of tweets and retweets about the war in Ukraine can be traced back to Russia and China, with even more tweets expressing pro-Russian sentiment traced to the United States.

Misinformation about the safety and efficacy of COVID-19 vaccines was rife at the protests that clogged downtown Ottawa streets for three weeks in February 2022 — and multiple studies have found that bots had a heavy hand in helping spread false narratives about the virus.


As AI technology advances, so do the possibilities for those using it — including those using it to spread misinformation.

“These advances represent a step-change in AI’s potential to manipulate people and sow political chaos,” the report found.

“When barriers to entry for creating content no longer exist, the volume of content rises exponentially, making it impossible for most citizens to reliably distinguish fact from fiction. Disinformation will flourish, and trust — the already-tenuous basis of social cohesion, commerce, and democracy — will erode further.”

People with political goals might find themselves especially empowered by these rapid advancements.

“Political actors will use AI breakthroughs to create low-cost armies of human-like bots tasked with elevating fringe candidates, peddling conspiracy theories and ‘fake news,’ stoking polarization, and exacerbating extremism and even violence — all of it amplified by social media’s echo chambers,” the report warned.

“We will no doubt see this trend play out this year in the early stages of the U.S. primary season … as well as in general elections in Spain and Pakistan.”


The Canadian government has pledged to take steps to tackle disinformation online.

In November, the government tabled a bill to enact its promised digital charter. The legislation is aimed at modernizing protections for personal information online as artificial intelligence spreads — and it also promises to “protect against online threats and disinformation designed to undermine the integrity of elections and democratic institutions.”

Meanwhile, the government has faced calls to go even further.


Heritage Minister Pablo Rodriguez appointed a panel of experts to help him shape online harms legislation. Over the summer, they implored him to include disinformation — including deepfake videos and bots spreading deception — under the scope of the proposed bill.

Chief Electoral Officer Stephane Perrault also urged the government to act in a report sent to the House of Commons in June. He suggested Canada make it illegal to knowingly spread disinformation about the voting process and to try to undermine a legitimate election result.

As these calls ring out, the Eurasia Group is warning that U.S.-style division is spreading to Canada.

Thanks to a combination of “declining trust in traditional media outlets” and “Canada’s deep and unique exposure to the U.S. political and media ecosystem,” the Top Risk Report warned that Canada’s “combative partisan and regional politics are poised to take a turn for the worse.”


“As the political temperature rises, we will see closer coordination between American and Canadian far-right and far-left fringe groups—with an increasing risk of disruptions, protests, civil disobedience, and even violence,” it warned.

“When the U.S. sneezes, Canada catches a cold. Watch out for sniffles north of the border in 2023.”


— with files from The Canadian Press