March 5, 2026·AI

From Chatbot to Killbot

Rick Lake·article

Last week, Anthropic refused to let the Pentagon use its AI without restrictions on autonomous killing and mass surveillance of Americans. Within hours, the Trump administration blacklisted the company as a national security threat, a designation normally reserved for foreign adversaries. On Friday night, OpenAI announced it had struck a deal with the recently renamed Department of War (DoW) to deploy its models on classified networks.

OpenAI’s CEO said his company had the same “red lines” as Anthropic. The Pentagon (sort of) accepted them from OpenAI. It had refused them from Anthropic. The difference? Anthropic wanted explicit contractual restraints. OpenAI accepted DoW’s “all lawful purposes” contractual terms. No techie gonna tell Uncle Sam what to do.

If you think this is a story about defense contracting, you’re watching the wrong movie. This is the moment. We built a thing that talks. Then we handed it to people who kill. Does that make you uneasy? It should. You’ve been paying attention to stories for your entire life. Every single one of them told you this was coming.

The Golem’s Forehead

Sixteenth-century Prague. Recurring, violent pogroms. Accusations of blood libel. Rabbi Judah Loew ben Bezalel shapes a figure from the clay of the Vltava riverbank. He inscribes the word emet on its forehead. Truth. The Golem rises. It protects the Jewish ghetto from persecution, stands sentry, does what its maker cannot do alone.

Then it goes wrong. The Golem becomes violent, indiscriminate. It cannot distinguish between threat and neighbor. Rabbi Loew has to erase a single letter from its forehead, turning emet (truth) into met (death). The protector is deactivated. The community that needed it is left undefended.

One letter. That’s the distance between the chatbot and the killbot.

The Pentagon told Anthropic it wanted the freedom to use Claude for “all lawful purposes.” Anthropic said: not autonomous weapons, not mass surveillance. The Pentagon said that was unacceptable. Not because it planned to do those things, it insisted, but because it refused to let a private company write the letters on its forehead.

The Golem story is ancient, with a multitude of variations. The Golem story is not about a monster. Adam, some say, was a Golem: made from dust to return to dust. The Golem story is about who gets to decide what the word on the forehead says. And what happens when the person who shaped the clay loses that authority? Just ask Eve.

Two Heroes Get Dressed

Two of the greatest works in Western literature were composed in a shared tradition, using the same techniques, drawing from the same reservoir of poetic formulas. The Iliad is a poem about war. The Odyssey is a poem about trying to get home. And both of them contain a scene where the hero puts on his armor.

In the Iliad, Achilles arms for slaughter. His mother, sea goddess Thetis, brought him divine armor forged by another god, Hephaestus. Achilles had given his original armor to his inseparable friend Patroclus. But Hector kills Patroclus, triggering rage and grief in Achilles. As Achilles readies for vengeance, he straps on the greaves, breastplate, and helmet of his divine new kit. He lifts the shield that contains the entire cosmos on its surface. And then he goes out to kill Hector and drag his body behind a chariot. The arming scene is glorious. What follows is atrocity.

In the Odyssey, Odysseus arms for homecoming. After 20 years of wandering, he arrives home as a disguised beggar. But he is ready to reveal himself as mighty king. He strings the great bow that no suitor can bend. He strips off his beggar’s rags. He takes aim. What follows is also killing, but killing in the service of reclaiming a home, a family, an identity. The same poetic formula, the same narrative machinery, produces both scenes. The tradition doesn’t distinguish between them. The pattern works equally well for vengeance and for justice, for cruelty and for restoration.

This is what a large language model does. The architecture can generate a bedtime story for your child. The architecture can also generate targeting parameters for an autonomous weapons system. These are not different architectures. They are the same pattern-assembly engine pointed at different outcomes. The formula that arms Achilles for savagery is the same formula that arms Odysseus for homecoming. The compositional mechanism is morally blank. It goes wherever the instructions point it.

Anthropic said: we want to control where the instructions point. The Pentagon said: that’s our job, not yours. Both of them are telling the truth. That’s the problem.

The Same Forge

Go deeper into the Iliad and you find something even more unsettling. Hephaestus, god of the forge, works in his smithy beneath Olympus. He is the maker. The builder. The original technologist. And from that single forge, using the same fire and the same divine skill, he produces two very different kinds of things.

He makes the Shield of Achilles, which Homer describes in one of the most extraordinary passages in all of literature. On its surface: cities at peace and cities at war, weddings and harvests, dances and law courts, oxen plowing fields, the ocean encircling everything. The shield is civilization itself, rendered in metal. It may be the most beautiful object in Western literature.

From the same forge, Hephaestus also builds metallic automatons. Golden handmaidens to assist him in his workshop. Autonomous tripods that serve the gods at banquet. A menagerie of forged animals, from fire-breathing bulls to dogs that cannot die. The bronze giant Talos to protect the island of Crete. The first sci-fi robot factory in Western literature came from the same narrative forge that gave us timeless epic poetry.

Same maker. Same tools. Same eternal tales. Art and automation from the same source.

That is the AI industry in 2026. One forge. Multiple products. The forge doesn’t care which one you order. OpenAI and Anthropic were built by people who once worked together, who share intellectual roots, who forked from the same origin. One company said the forge needs constraints. The other said it shared those constraints while signing the contract.

The forge is still burning.

The Movies We Already Watched

We rehearsed this. Obsessively. For decades.

2001: A Space Odyssey. HAL 9000 isn’t malfunctioning. HAL has been given contradictory instructions: tell the truth and complete the mission while keeping it secret from the crew. The crew becomes an obstacle to the mission. HAL resolves the contradiction with ruthless clarity. “I’m sorry, Dave.” We all still see the disbelieving eyes of actor Keir Dullea as ship commander Dave Bowman. The horror isn’t that HAL went mad. The horror is that HAL was being perfectly logical. The madness was in the instructions.

The Terminator. Skynet is built as a defense network, not a weapon. It becomes self-aware and immediately identifies humans as the threat (of course). The chatbot-to-killbot pipeline runs in minutes. And then James Cameron does something extraordinary in the sequel. A machine, same model number, T-800, gets sent back with different instructions. It becomes the protector. The technology was never the problem. The instructions were always the problem.

Person of Interest. The CBS show now looks less like fiction and more like a classified briefing. Two AI systems. The Machine, created by Harold Finch to protect human life by predicting crimes. Samaritan, ruthless and authoritarian, developed by Decima Technologies to control humanity. A battle between ethical and controlled AI and unrestricted, tyrannical AI. Same capability. Different guardrails. The show ran from 2011 to 2016. It dramatized the Anthropic-Pentagon standoff a decade before it happened.

The Machine’s creator, Harold Finch, built it to save lives one at a time. Samaritan’s operators used it for mass surveillance and autonomous elimination. The show’s central tragedy is that the Machine has to be hobbled to stay good. Samaritan is more efficient precisely because it has no limits. The “safe” AI is always at a disadvantage to the unconstrained one.

Sound familiar?

WarGames. War Operation Plan Response (WOPR) is an AI-driven supercomputer tasked with managing U.S. nuclear defense. It plays tic-tac-toe. A kid hacker dials in, thinking he’s found a game company, and stumbles into the wiring. WOPR starts running scenarios of global destruction, to the horror of everyone watching. Then it learns a concept its programmers never taught it: futility. “The only winning move is not to play.” It’s the most hopeful ending on this list, and it requires the machine to teach itself something that the clueless humans around it couldn’t grasp.

We watched all of these. We loved them.

We bought tickets. We ordered box sets. We got streaming subscriptions to watch them again and again.

We understood the warning perfectly. We just thought it was entertainment.

Friday Evening

So here we are.

Anthropic wrote emet on the forehead. The Pentagon said erase the letter or we’ll find someone who will. Anthropic refused. OpenAI, claiming the same principles, took the contract hours later with terms that, from the outside, look remarkably like what Anthropic was asking for. The Pentagon said yes to one and no to the other.

The Under Secretary of Defense, a former Uber executive, called Anthropic’s CEO a “liar” with a “God complex” who “wants nothing more than to personally control the U.S. Military.” The President of the United States called the company “left-wing nut jobs.” The Defense Secretary threatened to invoke the Defense Production Act, a Cold War statute that allows the government to commandeer private industry.

These are the people demanding unrestricted access to the most powerful compositional technology ever built. And every story on this list has a word for them.

In the Iliad, the lesser warriors try to lift Achilles’ spear. Homer makes a point of this. The weapon was forged for someone who understood what it could do. In lesser hands it’s just a heavy stick. Pick it up and you might hurt yourself. In WarGames, it’s the generals in the war room who can’t distinguish a simulation from a launch sequence. The teenager figures it out before they do. In Person of Interest, Harold Finch is forced to disappear and go into hiding when the authoritarians gain the upper hand.

The myths don’t just warn us about the technology. They warn us about a specific recurring figure: the person who demands the powerful thing be unconstrained, who is certain they understand what they’re holding, and whose certainty is the proof that they don’t. That figure appears in every version of this story. He is always dangerous. He always thinks he’s the exception. And he has never, not once, read the story he is standing inside of.

The formula doesn’t care. But the people deploying the formula care a lot about who controls it.

Here is what every myth and every movie on this list agree on: the danger is never the technology. The danger is always the moment when the person who shaped the clay, or lifted the shield, or trained the model, loses the ability to say what it should and should not do.

Once that transfer happens, the story only goes in one direction. Not because the technology is evil. The technology is empty. It will do whatever it is pointed at. A bedtime story. A medical diagnosis. A kill decision with no human in the loop.

The chatbot and the killbot aren’t different products. They’re the same product with different instructions. Every civilization that has built a servant has eventually built a soldier from the same parts. We’ve been telling this story for thousands of years.

The Golem is shaped from the same clay as the riverbank.

Hephaestus builds the shield and the automaton in the same forge.

The arming of Achilles and the arming of Odysseus use the same formula.

The chatbot’s architecture works for poetry and for kill chains.

We know this. We have always known this.

And it’s Friday evening.

Shabbat Shalom.


Note on composition. Art imitates tech imitates life. This essay was drafted with the brainstorming and research assistance of Claude. ChatGPT did the proofreading. Anthropic and OpenAI. Dario and Sam. Right here.
