A wave of social media users is coming to the sad realization that the Pope isn’t as stylish as recent photos seem to suggest. Were you duped too? (Most of us were.)
Images of Pope Francis wearing an oversized white puffer jacket took the internet by storm over the weekend, with many online admitting they thought the photos were genuine.
No, the supreme pontiff is not dabbling in high-fashion streetwear. The images, though photo-realistic, were generated by artificial intelligence (AI).
The fake images, which originated from a Friday Reddit post captioned “The Pope Drip,” were created with Midjourney, a program that generates images from users’ prompts. The tool is similar to OpenAI’s DALL-E. These AI models use deep learning principles to take in requests in plain language and generate original images, after they’ve been trained and fine-tuned with vast datasets.
The fake images were soon cross-posted to Twitter, where posts from influencers and celebrities exposed the papal puffer to the masses. The images had originally been shared in the r/midjourney subreddit, but devoid of that context on Twitter, many were duped into believing they were real.
Model Chrissy Teigen admitted she had been taken in by the fake Francis.
“I thought the pope’s puffer jacket was real and didn’t give it a second thought. no way am I surviving the future of technology,” she tweeted.
A worrying number of people in her replies showed that she was far from alone in having the wool pulled over her eyes.
“Not only did I not realize it was fake, I also saw a tweet from someone else saying it was AI and thought HE was joking,” one person replied.
Images generated by Midjourney earlier went viral on Twitter when Bellingcat founder and journalist Eliot Higgins posted a thread of fake images of former U.S. president Donald Trump getting arrested. Higgins was later banned from Midjourney, and the word “arrested” is now blocked as a Midjourney prompt.
While these were mostly innocuous cases of people being fooled by AI-generated images, it’s clear that advancements in AI technology are making it harder for the everyday person to parse fact from fiction.
The ease of using text and image generation tools means the barrier to entry has never been lower for bad actors to sow disinformation.
Risk analysts have identified AI as one of the largest threats facing humans today. Eurasia Group’s Top Risks report for 2023 called these technologies “weapons of mass disruption,” and warned they will “erode social trust, empower demagogues and authoritarians, and disrupt businesses and markets.”
Montreal-based computer scientist Yoshua Bengio, known as one of the godfathers of AI, told Global News that we need to consider how AI can be abused. He suggested that governments and other groups could use these powerful tools to control people as “weapons of persuasion.”
“What about the abuse of these powerful technologies? Can they be used, for example, by governments with ill intentions to control their people, to make sure they get re-elected? Can they be used as weapons, weapons of persuasion, or even weapons, period, on the battlefield?” he asked.
“What’s inevitable is that the scientific progress will get there. What is not is what we decide to do with it.”
One way that Canada is looking to address the potential harms caused by AI is by bolstering our legal framework.
If passed, proposed privacy legislation Bill C-27 would establish the Artificial Intelligence and Data Act (AIDA), aimed at ensuring the ethical development and use of AI in the country. The framework still needs to be fleshed out with tangible guidelines, but AIDA would create nationwide regulations for companies developing AI systems with an eye towards protecting Canadians from the harm posed by biased or discriminatory models.
While the act shows a willingness from politicians to ensure that “high impact” AI companies don’t negatively affect the lives of everyday people, the regulations are focused mostly on monitoring corporate practices. There is no mention of educating Canadians on how to navigate disruptive AI technologies in daily life.
Considering the confusion caused by the puffer coat Pope, where is the next House Hippo-style public service announcement now that we need it?
In a recent op-ed, Canadian political scientists Wendy H. Wong and Valérie Kindarji call on Canadian governments to prioritize digital literacy in the age of AI.
They argue that access to high-quality information is necessary for the smooth functioning of democracy. This access to information can be threatened by AI tools that have the power to easily distort reality.
“One way to incorporate disruptive technologies is to provide citizens with the knowledge and tools they need to cope with these innovations in their daily lives. That’s why we should be advocating for widespread investment in digital literacy programs,” the authors wrote.
“The importance of digital literacy moves beyond the scope of our day-to-day interactions with the online information environment. (AI models) pose a serious risk to democracy because they disrupt our ability to access high-quality information, a critical pillar of democratic participation. Basic rights such as the freedom of expression and assembly are hampered when our information is distorted. We need to be discerning consumers of information in order to make decisions to the best of our abilities and participate politically,” they added.
AI technology is getting better every day, but for now, one strategy for discerning whether an image of a human was AI-generated is to look at the hands and teeth. Models like Midjourney and DALL-E still have a hard time generating realistic-looking hands and can often get the number of teeth in a person’s mouth wrong.
The federal government’s Digital Citizen Initiative is already helping groups fight disinformation on a variety of topics, including the Ukraine war and COVID-19. But with the proliferation of AI tools, the Canadian public should be prepared to see even more misinformation campaigns crop up in the future.