
Anthropic says new Claude Mythos AI is too risky for public use


AI developer Anthropic says its latest Claude AI model is so powerful — and potentially dangerous — that it will not be available to the general public to use.

Dubbed Claude Mythos, the software is part of Anthropic's Claude family of artificial intelligence models, which can act as chatbots and AI assistants, similar to ChatGPT and Google's Gemini.

“It is a frontier AI model, and has capabilities in many areas—including software engineering, reasoning, computer use, knowledge work, and assistance with research—that are substantially beyond those of any model we have previously trained,” Anthropic wrote in the preview’s system card.

The system card also states that Claude Mythos “has demonstrated powerful cybersecurity skills, which can be used for both defensive purposes (finding and fixing vulnerabilities in software code) and offensive purposes (designing sophisticated ways to exploit those vulnerabilities).”


It is those capabilities that led Anthropic to decide not to release the software to the general public.


“Claude Mythos’s large increase in capabilities has led us to decide not to make it generally available. Instead, we are using it as part of a defensive cybersecurity program with a limited set of partners.”

Anthropic cites these partners as “organizations that maintain important software infrastructure, under terms that restrict its uses to cybersecurity.”

It is these kinds of technologies that Branka Marijan, a senior researcher at Project Ploughshares, says should be monitored with caution.

“The implications for cybersecurity and broader national security that they are flagging, I don’t think that they’re hypotheticals,” she said. “I do think there are actual concerns that we should be paying more attention to now.”


Why is Claude Mythos stirring concern?

Daniel Escott, the CEO of Formic AI, said that Anthropic is “choosing consciously” to not release Claude Mythos.

“Their argument against releasing it from the general public is that the same systems and functionality and capability to protect infrastructure using this AI system could equally be used to attack the same infrastructure,” he said.

However, he also said that he would make “no mistake” that “someone will have access to [Claude] Mythos.”

“Anthropic is making their own choices on who they’re willing to give access to this system for. But at the same time, I would imagine those partners are probably saying ‘you’re only allowed to sell to us,’ perhaps a limited set of other entities, but they don’t want everyone to have access to the same kinds of technology,” he said.


“And if Anthropic isn’t going to sell it to them, someone else will develop it and sell it.”


Escott also warned that Anthropic’s system card on Claude Mythos should be taken “with a grain of salt.”

“Based on the documentation, it seems that they’ve been training this on a combination of the open-source data sets that they’d been using for all of Anthropic’s other models,” he said.


“This is no different than what ChatGPT or Microsoft Copilot is doing, where they’re just scraping, some would argue stealing, information from all over the internet and putting it all into one big data set that they can train on.”

Marijan said she would like to see “more clarity from Anthropic and these other companies about actually how concerning is this from what they’re telling us.”

“It is absolutely concerning,” she said. “It’s undermining all of these safeguards that companies might have in place.”

Would Anthropic ever release Claude Mythos?

Moshe Lander, an economics professor at Concordia University, said that not releasing Claude Mythos to the public just yet allows for potential flaws to be fixed without impacting users.

“If some pharmaceutical company is developing a drug, and they say, for the time being, ‘we’re not releasing it for public use,’ is there something wrong with that? I would say, actually, I think that’s probably being responsible,” he said.


“If the company is saying, ‘look, we’re not putting it into public use ever,’ that’s something different. What they’re saying is ‘we’re not putting it in public use now.’ I think that’s being extremely responsible, in let’s see how this thing is going to be used. Let’s see where its defects are,” he said.

“If they do find that there’s weaknesses, it has that ability to correct itself or fix any flaws, that might not be a bad thing.”


There remain significant questions around the world, including in Canada, about what it will take for governments to regulate AI and provide legal frameworks for its use.

Lander also said that the decision not to release an AI system right away is bound to raise questions for many, with no easy answers.


“I think that because people are generally worried about AI in general, that when we hear there’s an AI product that’s coming along that’s not available for public use, we hit the panic button and say, ‘wait a second, something doesn’t sound right here,'” he said.

“Before they [Anthropic] put it into public use, they want to make sure that it’s not going to go into the wrong hands, where people have maybe dishonourable intentions and that it can be used to harm society once they’ve established the protocols or safeguards that we need to put in place.”

Ransomware attacks increasing for Canadian businesses

In January, the Canadian Centre for Cyber Security (Cyber Centre) released its ransomware threat outlook for 2025-27, stating that with the growth of AI, “these threats have become cheaper and faster to conduct and harder to detect.”


As a result, numerous Canadian organizations, businesses “regardless of size or sector,” and individuals are susceptible to ransomware attacks. However, “critical infrastructure and large corporations” were found to be the top targets for ransomware activities.


The report found that the reported number of ransomware incidents increased by an average of 26 per cent year over year from 2021 to 2024.
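To put that figure in perspective, a short arithmetic sketch (illustrative only, not taken from the Cyber Centre report) shows what a 26 per cent average year-over-year increase compounds to over the three annual steps from 2021 to 2024:

```python
# Illustrative arithmetic: compounding a 26% average annual increase
# over the three year-over-year steps between 2021 and 2024.
annual_growth = 1.26   # a 26 per cent increase per year
steps = 3              # 2021->2022, 2022->2023, 2023->2024
overall_factor = annual_growth ** steps
print(round(overall_factor, 2))  # -> 2.0
```

In other words, at that average rate, reported incidents roughly doubled over the period.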

The report also found that total recovery costs associated with cybersecurity incidents reached $1.2 billion in 2023, up from $200 million over 2019 to 2021.

However, Marijan believes there should be more protocols in place governing how businesses use these tools.

“I think what it points to really is this clear gap in governance where we have companies that are deciding what they think is concerning. We should really have processes,” she said. 

“What we’ve seen over the last decade is an increase in ransomware attacks […] and that impacts all of us. So, when you’re thinking about ‘what are the implications of these,’ they’re very significant for ordinary people as well.

“So, we absolutely are in the space where these companies are deciding essentially what they think are concerns or flagging them. And there’s no process in place for this, for any guardrails really to appear.”
