In 1971, the Soviet Union launched Salyut 1, the first space station. It worked. Cosmonauts flew to it, ran experiments, proved that extended stays in space were possible. And then nobody came back. The station orbited empty for a few more months until it was de-orbited and burned up in the atmosphere.
Technically a success. Strategically a ghost ship.
Many organisations have their own Salyut sitting in a cupboard: an AI pilot that was launched with enthusiasm, produced an impressive demo, and works perfectly well from a technical standpoint. It now runs somewhere with three people using it, and nobody asks whether it still serves any purpose.
The in-between zone
We call them Zombie Pilots. Too alive to bury, too weak to scale. They exist in a kind of organisational purgatory where nobody takes responsibility for forcing a decision.
The pattern is familiar. Someone had a good idea. Budget was freed up. A team built a prototype. The demo was impressive. The steering committee nodded. And then... silence.
No decision to scale. No decision to stop. The pilot hangs in a state of permanent evaluation. "We're keeping an eye on it." Quarter after quarter.
It costs more than you think. The direct costs (licences, compute, maintenance) are visible. The indirect costs are larger. The team managing the pilot cannot work on anything new. The organisation loses confidence in AI projects generally. And every new pilot starts with the unspoken question: will this be the next zombie?
Why it happens
Three mechanisms keep zombies alive.
No owner. Most pilots are started by a project team. When the project team is done, there is nobody to say: this is now my responsibility. Without an owner, there is nobody to make the hard decision.
Wrong metrics. The pilot measures whether the technology works. That is the wrong evidence. The question that matters: are real people using this voluntarily during their normal working day? If you threaten to switch it off and nobody protests, you have your answer. It also helps to apply the Meaning Test: has the work actually improved, or just become faster?
Stopping feels like failure. In most organisations, it is easier to let a project drift than to explicitly stop it. Stopping requires someone to say: this was not worth the investment. That is politically uncomfortable. Drifting just costs money.
What to do about it
The solution is a firm agreement before you start. Every AI pilot gets a fixed end date and four possible outcomes from the outset: scale, adjust, change direction, or stop. All four are good outcomes. The only bad outcome is taking no decision. A well-structured AI proof of concept forces that decision within two weeks.
Two weeks is enough for a first experiment. At the end of those two weeks, you force a choice. With data. How many people used it? Has quality gone up? Do they complain when you threaten to switch it off?
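To make the gate concrete, here is a minimal sketch of what such a decision rule might look like in code. The field names and the user threshold are hypothetical, not prescribed; the point is that the four outcomes are enumerated up front and "keep an eye on it" is not one of them.

```python
from dataclasses import dataclass

# Hypothetical evidence gathered at the end of the two-week experiment.
@dataclass
class PilotEvidence:
    weekly_active_users: int    # people using it voluntarily in their normal work
    quality_improved: bool      # the Meaning Test: is the work better, not just faster?
    protest_on_shutdown: bool   # did anyone object when switch-off was threatened?

# The four allowed outcomes, agreed before the pilot starts.
SCALE, ADJUST, PIVOT, STOP = "scale", "adjust", "change direction", "stop"

def decide(evidence: PilotEvidence, min_users: int = 10) -> str:
    """Force one of four decisions. 'No decision' is not a return value."""
    if not evidence.protest_on_shutdown and evidence.weekly_active_users < min_users:
        return STOP      # nobody would miss it
    if evidence.quality_improved and evidence.weekly_active_users >= min_users:
        return SCALE     # real usage and real improvement
    if evidence.quality_improved:
        return ADJUST    # the value is there, the adoption is not
    return PIVOT         # people use it, but the work has not improved

print(decide(PilotEvidence(weekly_active_users=3,
                           quality_improved=False,
                           protest_on_shutdown=False)))
# -> "stop"
```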
And if the answer is "stop," celebrate that. Document what you learned. A stopped experiment that produces insights is more valuable than a drifting pilot that proves nothing.
It is about discipline. The discipline to decide based on evidence. The discipline to stop when the evidence is not there. The discipline to refuse "we're keeping an eye on it" as a strategy. Proof before scale: prove the value before you increase the investment.
Salyut 1 was eventually succeeded by Salyut 4, 6, and 7, each station building on what the previous one had learned. The Soviet Union did one crucial thing right: it was willing to let each station go in order to build something better.
How many AI pilots in your organisation would be better served by an honourable burial and a clean start?