What boards and C-suites need to understand about Claude Mythos
At the end of March, word leaked of a new Anthropic model called Claude Mythos. Claude Mythos is a large language model (LLM) with particularly strong capabilities in software programming languages. Those same capabilities can be used to exploit vulnerabilities in computer code. Anthropic has not yet released Claude Mythos, on the grounds that its capabilities are too dangerous to be made generally available.
It is easy to dismiss this as just another LLM with the usual marketing framing. The underlying reality, however, is that generative AI has become good enough to write code, identify weaknesses in code, and quickly orchestrate multi-step attacks. An attacker does not need Claude Mythos to do this; available models and technologies are more than sufficient, and attacks are already ongoing. What makes Claude Mythos important is that the scale of its capabilities appears to go beyond anything we have seen so far.
Dual-Use Expertise at Scale
Let's use a real-world example: a legal expert on contract law is trained to identify loopholes in contract terms to protect your company from unfavourable positions. That same skillset can be used to draft contracts with loopholes that one of the parties can later exploit.
Good old human hackers have always been skilled at finding and exploiting vulnerabilities in computer code. The difference with Mythos is the massive scale at which exploitation may happen. That one legal expert from my example above now has endless clones with effectively unlimited capacity both to hunt for loopholes and to plant them. Same expertise, opposite intent. The model doesn't care which one you are.
The Economics of Cybersecurity Are Breaking Down
Cybersecurity has always been a question of cost versus benefit, just like any other business case. There is no such thing as 100% information security; there is always a gap somewhere. The key question is whether the benefit outweighs the cost, and so cybersecurity mitigation has always been about making an attack so costly and difficult that it is no longer worth the attacker's effort.
With Mythos, the speed and scale surpass traditional cybersecurity approaches. The computing cost is probably significant - for now. But the race is on, and we can safely assume that other players are already catching up. In the long run, computing costs are likely to go down whilst availability goes up. Fundamentally, because transformer technology is open and well-known and the internet is full of training data, there is no real moat other than compute costs.
What this means for boards and C-suites
So what does this mean for your board and C-suite? Here are three things you cannot afford to ignore:
1. The massive volume of LLM-generated code means that the attack surface is now several orders of magnitude larger than it was only a few months ago. On the attack side, the expert sniper with a precision rifle has been replaced by an army of AI clones, each carrying a carpet bomb.
2. AI can identify vulnerabilities, but fixing them is still at least partly a human job, involving governance, compliance, and quality control. Closing a gap badly can cause more damage than the original vulnerability.
3. The industry-wide safety infrastructure is not keeping pace with capability gains. ISO/IEC 42001, published in 2023, covers AI management systems at a governance level but says little about the implications of LLM-driven risks. ISO/IEC 27001 predates agentic AI entirely. The EU AI Act is live, but it was not written with Mythos-class capabilities in mind. Regulators are already behind. This means there are no mature, tried, and tested governance models or practices to lean on.
Operating in an AI-driven threat landscape
Thanks to LLM technology, every company is now navigating uncharted terrain when it comes to cyber threats. Even if your organisation chooses not to use AI, your adversaries are using it. Where sophisticated attacks previously required an experienced hacker, just about anyone with intermediate computer skills and a bit of creativity can now achieve the same, at scale.
Your choice is not whether to take part in this race, but whether you compete blindfolded or clear-sighted. Strategy and risk are two sides of the same coin, and AI has just upped the game again. If you are unsure whether your organisation is competing blindfolded, that is the right place to start. Feel free to reach out.
Further Reading:
Monica Verma, an experienced hacker and CISO, has written this sharp analysis of exactly what Mythos means in practice, well worth your time: https://monicatalkscyber.com/p/claude-mythos-cybersecurity
If your board or leadership team would benefit from more constructive and informed discussions about AI, I work with organisations through keynotes and advisory engagements to create clarity and shared understanding.
Feel free to contact me to continue the conversation.
About the Author
Elin Hauge is a keynote speaker, AI strategist, and trusted advisor to business leaders and boards. She specialises in helping organisations make sense of artificial intelligence beyond the hype, connecting technology to strategy, governance, and real-world value. With a multidisciplinary background in physics, mathematics, business, and law, Elin brings both analytical rigour and practical perspective. Her talks and advisory work empower leaders to ask better questions, make wiser decisions, and navigate AI with confidence.
FAQs
What is Claude Mythos?
Claude Mythos is a large language model reportedly developed by Anthropic with advanced capabilities in software programming, including identifying and exploiting code vulnerabilities.
Why does Claude Mythos matter for cybersecurity?
It highlights how AI can scale cyber attacks dramatically by automating vulnerability detection and exploitation, increasing both speed and attack surface.
What should boards and C-suites do?
Boards should reassess risk models, strengthen governance, and recognise that AI changes the economics of cybersecurity by lowering attack costs and increasing scale.