Anthropic investigates report of rogue access to hack-enabling Mythos AI
‘Handful’ of people allegedly gain unauthorised access to model adept at detecting cybersecurity vulnerabilities

The AI developer Anthropic has confirmed it is investigating a report that unauthorised users have gained access to its Mythos model, which it has warned poses risks to cybersecurity. The US startup made the statement after Bloomberg reported on Wednesday that a small group of people had accessed the model, which has not been released to the public because of its ability to enable cyber-attacks. “We’re investigating a report claiming unauthorised access to Claude Mythos Preview through one of our third-party vendor environments,” said Anthropic.

Bloomberg said a “handful” of users in a private online forum gained access to Mythos on the same day Anthropic said it was being released to a small number of companies, including Apple and Goldman Sachs, for testing purposes. It reported that the unnamed users reached Mythos via access one of them held as a worker at a third-party contractor for Anthropic, combined with methods used by cybersecurity researchers.

The group has not run cybersecurity prompts on the model and is more interested in “playing around” with the technology than causing trouble, according to Bloomberg, which corroborated the claims via screenshots and a live demonstration of the model.

Nonetheless, news of the potential breach will alarm authorities who have raised concerns about Mythos’s potential to wreak havoc, and will raise questions about how potentially damaging technology can be kept out of the wrong hands.

Kanishka Narayan, the UK’s AI minister, has said UK businesses “should be worried” about the model’s ability to spot flaws in IT systems – which hackers could then act upon. The model has been vetted by the world’s leading safety authority for the technology, the UK’s AI Security Institute (AISI), which warned last week that Mythos was a “step up” from previous models in terms of the cyber-threat it posed.
AISI said Mythos could carry out attacks that required multiple actions and discover weaknesses in IT systems without human intervention. It said these tasks would normally take human professionals days to carry out. Mythos was the first AI model to successfully complete a 32-step simulation of a cyber-attack created by AISI, solving the challenge in three out of its 10 attempts.