Ask most mid-market leadership teams what their AI strategy is and you'll get some version of the same answer.
"We've rolled out Copilot to the senior team. Marketing is using ChatGPT. We're looking at a couple of automation tools for ops. And we've got a pilot running in customer service."
That's not a strategy. That's a shopping list.
A strategy connects what you're doing with AI to why you're doing it — which business problems you're solving, in what order, with what expected return. Most companies skip that part entirely. They go straight to the tools because tools feel tangible. You can buy them, install them, point to them in a board meeting. "We're doing AI" feels like progress.
But doing AI and doing AI well are very different things.
The failure stat everyone quotes — and why it's misleading
You've probably heard this one: "95% of AI projects fail to deliver ROI." It comes from an MIT study published in August 2025. It sounds terrifying. And it's become the go-to stat for every conference speaker, LinkedIn post, and consultant deck in the market.
Here's what almost nobody mentions: that study was based on 52 interviews. It defined "success" as measurable P&L impact within six months — ignoring efficiency gains, cost reductions, and anything that takes longer than half a year to show results. The authors themselves described their findings as "directionally accurate," not definitive.
Compare that with what Harvard Business Review found when they surveyed 1,006 executives with broader success criteria. 90% reported moderate-to-great value from AI. Not 5%. Ninety.
So is AI failing? No. But something is clearly going wrong, because 80% of UK businesses have now adopted AI in some form, and barely a third report a positive return on investment. 71% say they haven't even identified a clear use for AI in their organisation — despite already paying for the tools.
What a tool list looks like
Here's the pattern we see in almost every mid-market company that calls us.
Someone in marketing signed up for ChatGPT because it helps them write copy faster. Finance started experimenting with an AI forecasting tool. The CTO is evaluating Copilot for the dev team. Someone in operations saw a demo of an automation platform and is running a pilot. The CEO has been to a conference and wants to "do more with AI."
None of these are bad ideas in isolation. But nobody chose them as part of a plan. Nobody asked which of these solves the most important business problem. Nobody defined what success looks like. Nobody is measuring whether any of it is working.
The result is what McKinsey calls "AI islands" — disconnected experiments scattered across departments, each running on its own logic, none connected to the business strategy. Research suggests that of every 33 AI pilots a company launches, only about four make it to production. The rest get abandoned, quietly forgotten, or kept running because nobody wants to admit they're not delivering anything.
And the spending adds up. Companies at this stage are typically paying £3,000 to £15,000 a month on AI tool subscriptions, with no unified view of what they're getting for it. Organisations without AI governance have five times more redundant AI subscriptions than those with a framework in place.
What a strategy actually looks like
The companies getting real value from AI — the ones in that 31% reporting positive returns — don't start with tools. They start with questions.
They look at their business and ask: where are we losing money, losing time, or losing customers? They map their processes and identify the three or four places where AI could make a measurable difference. Not everywhere. Not "let's sprinkle AI across the business." A small number of high-impact opportunities, chosen deliberately.
Then they prioritise. Not everything can happen at once, and not everything should. The best AI strategies sequence initiatives so that early wins build confidence and fund later investments. They start with the problem that's costing the most, or the one where the data is cleanest, or the one where the team is most ready — and they do that one properly before moving on.
They define success before they start. Not "let's see how it goes" — a specific outcome, measured in numbers the board cares about. Revenue generated. Costs reduced. Time saved. Customer retention improved. And critically, the HBR research found that when the finance director owns AI value accountability, 76% of companies report significant returns. When the CTO owns it, that drops to 53%. Same tools. Different ownership. A twenty-three-point gap.
They build governance from the start. Not as an afterthought, not as a compliance exercise, but as the foundation that makes everything else possible. Who can use which tools. What data can go where. How results are reported. How decisions are made about what to continue, what to scale, and what to stop.
And they invest in people, not just software. 58% of UK companies have given their teams access to AI tools with zero training. They've handed people a powerful instrument and said "figure it out." The companies seeing returns train both their employees and their executives — and they see a 23-point advantage over those that don't.
The gap isn't technology. It's leadership.
This is the uncomfortable truth that the AI vendor market doesn't want you to hear: the difference between companies getting value from AI and companies burning money on it has almost nothing to do with which tools they chose.
It's about whether someone senior enough owns the strategy. Whether AI investments are connected to business outcomes. Whether there's a framework for deciding what to do, what to stop, and what to scale. Whether the leadership team agrees on what they're trying to achieve.
Research shows that 61% of failed AI projects were treated as IT projects rather than business transformation. 56% lost active C-suite sponsorship within six months. 73% lacked clear executive alignment on what success looked like.
These aren't technology failures. They're management failures. And they're exactly the kind of failures that don't happen when someone with the right experience is in the room. This is what a fractional Chief AI Officer does, and it's the core of what we do at Bramforth AI: helping mid-market companies build an AI strategy that actually connects to outcomes, not just a list of tools that sounds impressive in a board meeting.
The question worth asking
If someone asked your board tomorrow, "What's your AI strategy?" — what would the answer be?
If it starts with a list of tools, you don't have a strategy yet. You have spending. And the data is clear about what happens next: the companies that build a strategy first and invest second see three times higher returns than those that buy first and try to make sense of it later.
The good news is that building a strategy doesn't take years. It takes someone asking the right questions, in the right order, with the right people in the room. Most companies are closer to a real AI strategy than they think. They just haven't had the conversation yet.
Find out where your business stands.
Take our 2-minute AI Readiness Assessment for a clear picture of your strengths, gaps, and next steps. Or book a discovery call to talk it through.
Take the Assessment | Book a Discovery Call

Want more like this? Subscribe to the newsletter — no hype, no jargon.