The OpenAIpocalypse Is Nigh
Thoughts after Pentagon’s partnership with OpenAI
“Now I am become Death, the Destroyer of Worlds”
– Sam Altman, CEO of OpenAI, in a rare moment of candid self-reflection as he gazes out on the human ants below him from a tall tower of glass, probably.
OpenAI is back on top of leaderboards with its latest model, GPT-5.4. Did you notice?
The model signifies another round of impressive yet incremental, largely undocumented improvements on benchmark tests, achieved at extremely high cost. The public appears, for good reason, to have lost interest in these incremental and alleged AI performance jumps. Meanwhile, I imagine certain investors are in danger of needing surgical intervention to get their arms back down after celebrating GPT-5.4’s potential to replace humans with code.
We are now well into the fourth year of the AI race, and foundation models have arguably failed to transform our lives. They have, however, partly transformed the American economy into a trillion-dollar bet on a hot labor-replacement fantasy.
Foundation models such as GPT-5.4 are clearly interesting tools with meaningful applications for consumers and businesses. At the same time, they have failed to produce the notable scientific breakthroughs that many influencers promised we would see by now. The capabilities of AI agents remain wobbly at best. Instead of empowering knowledge workers, AI is drowning out their voices and competencies in an ocean of slop. The fortunes of tech billionaires are steadily rising along with the rates of unemployment, homelessness, poverty, and socioeconomic inequality, all to the electric humming of data centers, which sound less like the happy singing of worker bees and more like an angry swarm of approaching wasps.
If your company is paying for tools offered by OpenAI, that is a political statement. Sorry. There are plenty of good alternatives available, but better still, you could develop your own AI solutions through a combination of open-source tools, open-weights models, and the skill of human engineers. That seems a much more exciting and rewarding path for your company. The initial costs are outweighed by the savings: you pay less in taxes to the tech giants, on top of those you already pay to your government.
If you cancel your subscription to ChatGPT, there is an added bonus: your company won’t be supporting the insatiable appetite for death and destruction that the Trump administration shares with Israel. OpenAI has positioned itself as a loyal supporter of the worst of Trump’s whims. As it turns out, the major economic opportunity of AI is not to provide tools for humans, but to use the tools against humans.
Ross Anderson reported more details on Anthropic’s refusal of the Department of War’s contractual terms in an article for The Atlantic:
“On Friday afternoon, Anthropic learned that the Pentagon still wanted to use the company’s AI to analyze bulk data collected from Americans. That could include information such as the questions you ask your favorite chatbot, your Google search history, your GPS-tracked movements, and your credit-card transactions, all of which could be cross-referenced with other details about your life. Anthropic’s leadership told Hegseth’s team that was a bridge too far, and the deal fell apart.”
OpenAI accepted substantially similar terms to those Anthropic declined, with blurry red lines on mass surveillance and autonomous weapons. Sam Altman recently told OpenAI employees at an all-hands meeting that “OpenAI has no say over Pentagon decisions”. The company is practicing what Hannah Arendt once called “the banality of evil”. Hitler’s rise to power was not enabled by brutal men alone, but by a sufficient number of indifferent public servants who were just following orders. In the same way, OpenAI’s eagerness to secure funding and deals may come at the expense of serious human rights violations. By continuing to support this firm, we are risking an OpenAIpocalypse. What that means exactly, I am not sure, but no one will want to find out.