In 2000, London opened the Millennium Dome. It had cost 789 million pounds. It could accommodate 20,000 visitors a day. There was just one problem: nobody wanted to go. The building was spectacular. The reason to visit was missing.
The Dome was the answer to a question nobody had asked.
This pattern has a name in AI projects. We call it Solutioneering: starting with the tool instead of the problem. It is the fastest route to an expensive project that delivers nothing.
The familiar opening
It always starts the same way. Someone on the management team comes back from a conference, a vendor demo, or an article in the business press. "We need to do something with AI." Or more specifically: "We need to do something with agents." "We need a chatbot." "Have you looked at Copilot yet?"
The tool is central. The question of which problem it solves comes later. Or never.
A project team is assembled. The vendor is invited. A proof of concept is launched. The proof of concept succeeds, because proofs of concept with sufficient budget always succeed. The steering committee is pleased. They scale. A good proof of concept works the other way around: it starts with the question, not the tool.
And then it turns out nobody uses it. Because it is a solution to a problem that does not exist. Or a problem that should have been solved differently. Or a problem that does exist, but not for the people who are supposed to use the system.
Why it is so tempting
AI tools are spectacular in a demo. A language model that fluently answers any question. An agent that autonomously executes tasks. An automation that does in seconds what takes a human hours.
That demo creates desire. And desire is a poor advisor in investment decisions.
There is also something organisational at play. "We are doing something with AI" is a signal to the outside world: we are innovative, we are ahead of the curve. That signal can feel more valuable than the actual result. So projects get started that look good in a press release but change nothing in daily work.
How to recognise it
Three signals that an AI project is suffering from Solutioneering.
The technology appears in the brief. "Build a chatbot for customer service" is Solutioneering. "Cut response time on product complaints in half" is a problem. That difference determines whether you find the right solution or stay locked into the one you started with.
Nobody has observed the work. If the project team has not sat alongside an employee to watch how the work is actually done, the solution is almost certainly based on assumptions. Follow the Friction: go and see where things snag first, then build. Assumptions are rarely correct.
The success criteria are vague. "More efficiency" is not a criterion. "20% reduction in complaint handling time while maintaining customer satisfaction" is a criterion. Vague criteria lead to vague results and an inability to decide whether something works.
What to do instead
Start with the problem. State it as a sentence with no technology in it. "We lose too much time manually classifying incoming orders." "Our quotes take three weeks when the competition does it in three days." "The team spends forty percent of its time on work that contributes nothing to the end product."
Then go and check whether that is actually true. Observe the work. Talk to the people doing it. Map the friction. A good AI consultancy helps you do exactly that: look before you build.
Only then, once the problem is sharp and the friction is visible, do you look at which technology fits. Maybe that is AI. Maybe it is a simpler automation. Maybe it is a process change with no technology at all. All three are good outcomes.
The most powerful question in any AI project is: if we solved this problem perfectly with a human expert, would it matter to the user? Would it change their day? If the answer is no, more technology will not fix it either.
The Millennium Dome is now The O2. It eventually became a success, but only after someone asked the right question: where is the demand? Concerts and events, it turned out. That is where the audience was. The building did not need to change. The story behind it did.
With AI projects, it is exactly the same. The technology is rarely the problem. The story behind it is.