In 1986, the Space Shuttle Challenger broke apart 73 seconds after launch. The cause was an O-ring seal that failed in the cold. The striking thing: engineers at Morton Thiokol had warned against launching the evening before. They had data. They had objections. But the decision-making process was so layered, so procedural, so focused on consensus, that the warning was softened at every level it passed through. By the time it reached the decision-makers, it was a footnote.
More governance would not have prevented it. Better governance would have.
The framework problem
AI governance has a reputation problem. Deservedly so. Most frameworks are forty-page documents, written by people who do not run AI projects, for people who do not run AI projects.
They are comprehensive. They are complete. They are unusable.
The effect is predictable. Teams that want to move fast ignore the framework. Teams that are cautious get paralysed by it. In both cases, governance achieves precisely the opposite of what it set out to do.
The fundamental misunderstanding: governance is not a document. Governance is a rhythm.
Two tracks, not one
Organisations that get AI governance right separate two things that almost everyone conflates.
The strategy track. Who sets the direction? What is in and out of scope? Which risks are acceptable? This is the work of the sponsor, leadership, and the governance lead. Their mindset: "safe enough to try."
The experimentation track. Who runs experiments? Who learns from the results? Who builds working things? This is the work of practitioners, domain experts, and builders. Their mindset: "fast enough to learn." Here the principle of proof before scale applies: prove the value at small scale before expanding.
The two tracks run in parallel. They influence each other. But they do not sit in the same meeting, follow the same rhythm, or report in the same way. The strategy track sets the frame. The experimentation track fills it in.
Three principles are enough
Instead of forty pages, you need three principles. They fit on a Post-it.
Advice and authority separated. AI may recommend. Humans decide. The system that provides the recommendation must never be the one that executes the decision. That sounds obvious. In practice it is violated daily.
Never three things in one system. Full access to information, decision authority, and execution power should never sit in the same AI application. Separate them. Or ensure human checkpoints in between.
Human in the loop for high-impact decisions. Every decision touching employment, finances above a threshold, legal exposure, or customer relationships requires human approval. Always. And run the Meaning Test regularly: is the work genuinely getting better, or just faster?
These three principles are the fence around the playing field. Within that fence, the team can move fast. The fence is there so that speed does not come at the cost of safety.
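To make the fence concrete, here is a minimal sketch of the three principles in code. Everything in it is an assumption for the example: the `Recommendation` type, the `advise` and `execute` functions, and the 10,000 threshold are illustrative, not part of any particular framework.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical threshold: any amount above this counts as high-impact.
FINANCIAL_THRESHOLD = 10_000

class Impact(Enum):
    LOW = auto()
    HIGH = auto()  # employment, finances above threshold, legal, customers

@dataclass(frozen=True)
class Recommendation:
    """All the advisory AI may produce: advice, never an action."""
    action: str
    rationale: str
    impact: Impact

def advise(amount: float) -> Recommendation:
    """Principle 1: the AI recommends. It has no handle on execution."""
    impact = Impact.HIGH if amount > FINANCIAL_THRESHOLD else Impact.LOW
    return Recommendation(
        action=f"approve payment of {amount:.2f}",
        rationale="matches invoice and purchase order",
        impact=impact,
    )

def execute(rec: Recommendation, approved_by: str | None = None) -> str:
    """Principles 2 and 3: execution is a separate, narrowly permissioned
    component, and high-impact decisions require a named human approver."""
    if rec.impact is Impact.HIGH and not approved_by:
        raise PermissionError("high-impact decision requires human approval")
    return f"executed: {rec.action} (approved by {approved_by or 'policy'})"

print(execute(advise(250.0)))                        # low impact: runs
print(execute(advise(50_000.0), approved_by="CFO"))  # high impact: a human signed off
```

The point is the seam: the advisor and the executor are separate components with separate permissions, and for high-impact decisions the only bridge between them is a named human.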
The rhythm
Governance is not a one-off document. It is a rhythm of short, regular check-ins.
Every two weeks, thirty minutes. Three questions. What have we learned? What is blocking us? Do we need to adjust scope, timeline, or approach? This rhythm is also the moment to check for AI drift: is quality degrading, are costs creeping up, have users stopped reviewing the output critically?
That is it. No two-hour steering committee meeting. No thirty-slide quarterly report. A short conversation with the right people, on a fixed rhythm, with fixed questions.
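What can the drift check rest on? A minimal sketch, assuming the team logs a quality score and a cost per request and counts how often humans override the AI. The window size, the ten percent tolerance, and the override threshold are illustrative assumptions, not doctrine.

```python
from statistics import mean

def drift_flags(quality: list[float], cost: list[float],
                override_rate: float, window: int = 50,
                tolerance: float = 0.10) -> list[str]:
    """Compare the latest window of logged scores against the previous
    window. Window size and tolerance are illustrative defaults."""
    if min(len(quality), len(cost)) < 2 * window:
        return ["not enough data yet for a drift comparison"]
    flags = []
    q_now, q_then = mean(quality[-window:]), mean(quality[-2 * window:-window])
    c_now, c_then = mean(cost[-window:]), mean(cost[-2 * window:-window])
    if q_now < q_then * (1 - tolerance):
        flags.append(f"quality degrading: {q_then:.2f} -> {q_now:.2f}")
    if c_now > c_then * (1 + tolerance):
        flags.append(f"costs creeping up: {c_then:.2f} -> {c_now:.2f}")
    if override_rate < 0.01:
        flags.append("overrides near zero: are reviewers still reviewing?")
    return flags
```

One signal deserves emphasis: an override rate near zero is not good news. It usually means the humans have stopped looking.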
The Rogers Commission, which investigated the Challenger disaster, concluded that the problem was not too little information, but too many layers between the information and the decision. Each layer filtered, softened, and qualified, until the signal was unrecognisable.
Good AI governance does the opposite. Short lines. Few layers. Clear principles. And the trust that a team can make its own decisions within those principles.