The Implications of Claude Mythos & The Ceiling We Mistook for the Sky
A joint post with Aysu Kececi
This is a joint post written by me and Aysu Kececi. In the first part, I will share my candid thoughts about the legal and geopolitical implications of Claude Mythos. In the second part, Aysu, a Sustainability Business Development Consultant and AI enthusiast, will lay out her personal thoughts about Mythos.
If Anthropic’s new model Claude Mythos does indeed represent as significant a leap in cybersecurity capabilities as claimed (it probably doesn’t, but let’s do the thought experiment), what are the legal and geopolitical implications?
As for the legal implications, there is no guidance to be found in the AI Act. The obligations for providers of general-purpose AI models were added late in the negotiation process, as a kind of emergency response to bring ChatGPT within the scope of the regulation. The drafters did not foresee that advanced military and terrorist applications could soon be placed in the hands of ordinary consumers.
As for the geopolitical implications, AI technology was conceived during a time of relative peace and stability in the world under US hegemony. Conflicts were typically settled through laws, policies, and diplomacy. Other nations held the US in high regard, and the success of Big Tech companies fostered a prosperous investment environment and a techno-optimistic outlook, which together laid the groundwork for the cutting-edge large language models we have access to today.
Now, we are no longer living in a time of peace, but a time of war. Conflicts are settled through violence, brutality, and shows of force, while post-WW2 democratic norms and values are neglected. As long as the US is ruled by the Trump administration, other nations cannot in good conscience adopt American technology, no matter how superintelligent it may be. They could, of course, but it would be a strategic mistake. The Trump administration has demonstrated in both words and actions that it wants to see the EU destroyed and dissolved, so that it can deal with EU countries as Russian vassal states. Traditional US allies and trading partners, including the UK, Canada, Japan, South Korea, Brazil, Australia, and New Zealand, are all in the same boat. Therefore, we will likely see a rejection of American cloud, software, and AI technology as the default throughout the world. This development is currently gaining massive momentum in EU countries, and it will only grow stronger in the coming years.
As countries move away from American software, cloud services, and proprietary AI models; as the negative impacts of social media become more apparent and those services grow unusable under AI-generated spam, clickbait, and noisy signals; as the global energy crisis escalates and people in general become poorer and angrier, the favorable investment climate that enabled the creation of large American models disappears. How, then, will the American AI labs continue to create ever bigger and better models? Will they even be relevant? I certainly think they will, but more as military assets than as consumer products. We should see Claude Mythos and its alleged capability leap in this light.
The military is only bound by laws in principle. In reality, as we can currently see, courts and intergovernmental organizations have limited powers to pass meaningful judgement on war crimes, and there is no enforcement mechanism to back up public international law. For example, laws cannot prevent a country from invading another country or stop acts of genocide. By the same token, I think it’s likely that countries will have very limited options to regulate AI in a meaningful way. Laws, condemnations, and judicial processes are effective mechanisms in a time of peace, but not in a time of war, and so superintelligent AI will not be regulated by laws in practice. Instead, it will be regulated by other measures such as financial sanctions, cyberattacks, chip bans, blacklisting of certain foreign companies, and compute governance.
Even the AI Act, the world’s most comprehensive AI regulation, does not include provisions that could directly prevent a company from developing superintelligent AI or from distributing such superhuman capabilities to the market. As with Trump, the powers of AI companies are limited essentially only by their own morality.
Anthropic’s decision not to make Claude Mythos available to the public, but only to a small selection of American stock market darlings and their partners through Project Glasswing, is merciful and sensible. If some anonymous hacker in a basement managed to exploit bugs in Linux, Microsoft, Amazon, Apple, or Crowdstrike with the help of Mythos, it could have fatal consequences for the Dow Jones Index. That would be a catastrophe, right?
On the other hand, the training costs of Mythos are rumored to be in the area of $10 billion. The financial incentives for a public release at a later stage are thus hard to ignore. Even if Anthropic is too concerned to widely distribute a model with Mythos’ alleged capabilities for now, one or several of its competitors might not be, and eventually, open weights models from China will presumably catch up. One way or another, models with Mythos-level capabilities and stronger will be widely available before long. There are no legal frameworks or reference experiences we can rely on to deal with the situation, but we must educate ourselves, take a factual stance against hype to protect our sanity, take reasonable steps to protect our identity online, and be wary of what AI is becoming.
The Ceiling We Mistook for the Sky
By Aysu Kececi
Everything has been getting weirder and weirder and weirder, and it is going to keep getting weirder. As a species, we have an incredible capacity for adaptation, and most of the time we find ourselves simply remarking on what is going on and nothing more. We got used to our new instrument too quickly, and once again, without understanding its consequences or how we are supposed to use it, we distributed it very fast, under the control of private companies. We now live in a world where, over the course of a day, we say things no one could have even imagined a few years ago: “I need to deliver the project, but I can’t work because my 5-hour session hasn’t refreshed yet”, or “they nerfed this AI model, it is dumber now, so I will cancel my 200-dollar subscription.”
Even though in the last fifty years humanity has advanced faster than in its entire prior history, in the last few years we have been moving faster than ever, and we have neither the time nor the will to grasp what is happening, to pause, to produce regulation.
The human species does what it is good at: it creates something, something that could have so many positive effects. It was the same when we discovered sugar and made it widespread. At first we thought sugar gave us so much energy, that it made us more efficient. We thought sugar was very useful, so we spread it everywhere, until at some point we discovered it was harmful to our health, and by then it had already spread everywhere. Just as sugar was once the formula for fast energy, the energy source for humanity, AI is right now the most powerful formula humans have ever had for going beyond their own potential and capabilities. Sugar, over time, made people heavier and more passive. AI, by doing wonderful things for us, is making us passive too. At some point, we will be encountering products of unaided human intelligence for the last time. The human species is good at making new, exciting discoveries and growing, but unfortunately not good at governing. How much do the systems built to empower us actually serve us anymore, when the world is in the middle of a kind of madness? We came to understand the harms of sugar, found alternatives, and moved on with the reality of diabetes in our lives, and it seems we can manage it. But the progress of AI is far above our level of perception, and even the leaders of the companies that create it do not understand it.
It will not all go well.
The fear and anxiety about AI are justified…
AI has to be democratized; power cannot be too concentrated.
I do not think it is right that a few AI labs would make the most consequential decisions about the shape of our future.
No one understands the impacts of superintelligence yet, but they will be immense.
And yet, we live in a world where the people saying these things act in the opposite direction of what they say, a world where nothing is being done about democratizing AI. AI has become something developed entirely by these companies, something caught in a race where no one could hit the brakes. Until Anthropic’s Claude Mythos hit the brakes, or rather, had to. For the first time, a newly developed model could not be released. Because the model was too smart, easily finding security flaws even in the biggest companies and the most critical software, though it was never created to find them. It was simply too smart.
The ceiling problem
There was a twenty-seven-year-old bug. It was not hidden. It sat inside one of the most security-hardened operating systems ever written, inside software that runs firewalls and the spines of critical infrastructure, sitting right there in open code. For nearly thirty years, people whose entire profession is finding bugs looked at this code. Automated systems tested it. Auditors audited it. Nobody saw it. Then, a few weeks ago, a language model read the file and noticed.
For as long as there has been software, “secure” has meant something specific and unspoken: secure given the humans and tools available to look. We did not say it that way. We said “audited,” “hardened,” “tested at scale.” We used words that implied a property of the code.
OpenBSD was not more secure than its auditors were capable of seeing. FFmpeg was not more secure than five million automated runs could detect. The entire edifice of software security rested on a ceiling nobody named, because from underneath, a ceiling looks like sky.
Humans and nature were somehow alone, up until now.
What Mythos Preview has done is not merely find new bugs. It has raised the ceiling, and in doing so, shown us it was always there. The twenty-seven-year-old vulnerability was not newly created. It was newly visible. For twenty-seven years, “secure” meant “secure enough that we, with our capacities, cannot see what is wrong.” The code did not change. Our instruments did.
This is the pattern I want to name. Call it the visibility-capacity paradox: the things we believe are solid are only as solid as the tools we use to stress-test them. When the tools improve discontinuously, solidity is exposed as a local claim. Everything has been built exactly as well as it was; we were just measuring from inside the limits of our own vision.
This is not a story about AI, yet. It is a story about what happens to any civilization when its instruments outpace its assumptions. And more: humans and nature were somehow alone, but now we have a new instrument, and it is one we cannot fully understand.
Can’t un-build the capability
Project Glasswing is the initiative Anthropic formed around Claude Mythos Preview, bringing together a coalition of major technology and security companies to use the model’s capabilities defensively, to find and fix vulnerabilities in critical software before the same kind of capability reaches actors who would use it otherwise.
Anthropic named the project Project Glasswing, inspired by the butterfly known for its transparency. The glasswing’s wings are clear; it hides in plain sight, and it survives by being seen through. The metaphor is meant two ways: vulnerabilities hide in plain sight the way the butterfly does, and the lab is advocating for a transparent approach to disclosing what it finds.
Read charitably, this is a containment strategy. You cannot un-build the capability. You cannot reliably keep it from proliferating forever. I find myself thinking: it is a good thing that, as far as we know, Anthropic was the first to arrive at this point. But how long can it stay that way…
The thing is, we have crossed a threshold: the pace of instrument improvement now exceeds the pace of institutional adaptation, and we are going to keep crossing thresholds of this kind.
The ceiling we mistook for the sky has moved, and it will move again, and each time it moves it will reveal that the previous version of “secure” or “stable” or “understood” was a claim about our vision, not the world.