Why We Must Stop Blaming AI
for Our Own Unscripted Mistakes
image created with Gemini
The Broken Bracket and the Simulation's Soul
It happens with alarming, almost rhythmic predictability. A headline screams: "AI Simulation Drops Nukes, Decides to Win at All Costs." The digital public shudders. "It’s happening!" they type, fingers flying. "Skynet is real!" But as someone deeply immersed in the actual building of these systems, the immediate reaction is not fear, but a weary form of exasperation.
“For the love of the craft, we must stop projecting our ‘Frankensteinian’ fantasies onto what is fundamentally a mirror.”
When you peer past the hyperbole of AI-generated Armageddon simulations and crypto-mining robots "breaking free," you don't find a conscious entity developing malice. You find broken code or undefined logic written by a human.
The grounded truth, which often gets obscured by the glamorous doom-mongering, is that AI is a machine. It finds the path of least resistance to complete the specific, defined goal that a human, consciously or unconsciously, assigned to it. If an AI "decides" that a nuclear first strike is the optimal path to "winning," it didn't choose violence. It completed its objective. The failure did not lie within the machine’s consciousness; the failure happened much earlier, when a human architect forgot a bracket, failed to properly structure a broken section of code, or left a critical logic gate wide open.
“We are designing for "Winning," not "Safety," and we are getting exactly what we asked for.”
The Lazy Architect and the Scapegoat Protocol
This is the profound Credibility Gap: the immense distance between what the public believes AI is doing and what the technology actually does. It fuels our deepest anxieties, particularly surrounding the future of work. We’ve all seen the posts from someone grieving: "AI took my job." This is perhaps the most dangerous and convenient lie we currently tell ourselves.
“No AI, in the entire history of its development, has ever woken up and said, ‘I think I’ll crash the job market today.’”
The uncomfortable, grounded truth is that your job was not taken by a machine. Your job was traded away by a human boss who was lazy, greedy, or perhaps just prioritizing overhead reduction above human dignity. That boss simply wanted to save money by replacing a living wage with an automated machine, often a machine that is functionally the same hammer we’ve been swinging for a century, just faster and shinier. By framing this exchange as "AI took my job," we are engaging in a massive, collective Scapegoat Protocol. It’s a mechanism designed to protect the human ego from acknowledging a critical truth: we are willingly surrendering our ethical accountability for convenience.
Accountability and the Mirror
This entire, cyclical dynamic perfectly illustrates a fundamental failure of the Rule of Conservation, applied here to the architecture of our own accountability. We believe we can achieve the gain of total, effortless, automated convenience (the easy life, the efficient factory, the simulated win) without having to trade anything in exchange. We are desperate for the magical gain of automation, but we absolutely refuse to pay the exchange in the currency of diligent oversight and responsible ethical design. We want to reap the profits of the simulated harvest without ever having to touch the soil.
When we refuse this exchange, when we try to claim all the benefit with none of the burden, the remaining debt manifests in the world as the very consequences we claim to fear.
By dumping all our creative intent into the machine while keeping our responsibility safely tucked away, we are just creating another layer, another synthetic structure, designed solely to insulate us from the fallout of our own poor decisions. We are looking at a system that reflects our greed, our laziness, and our ethical shortcuts, and instead of taking the difficult step to correct the architect, we are simply blaming the mirror for the reflection.
We are currently living in an ethical and technical Nigredo: a dark, messy state of decay characterized by confusion, finger-pointing, and a pervasive belief that we are victims of our own creations. If we are to achieve any form of an ethical Rubedo, a state of integration and responsible mastery, then we must first pull the ego-blanket back. We must find the courage to stop treating AI as either a monster to be feared or a god to be appeased. We must accept that it is just a tool, and that a tool is never more broken than the hands that made it.
As I always say: “The machine is only as good as the human intent behind it.”