What the AI Transformation Playbook Gets Right — and Where It Stops
There’s a line that shows up in almost every AI strategy deck, consulting report, and conference keynote:
“AI is not a technology initiative — it’s a business transformation.”
It’s true. I’ve been saying some version of it myself for a couple of years. And every executive I meet nods in agreement. We all get it — this isn’t just about the technology.
And yet, by one recent measure, 98% of organizations still don’t see a return on their AI investments.
A KPMG Canada survey of 753 business leaders (November 2025) found that 93% of Canadian organizations say they’re using AI — but only 2% report seeing a return on their generative AI investments. To be fair, many are still early: 30% expect returns within the year, and 61% say it will take one to five years. But even with that patience factored in, the gap between adoption and results is hard to ignore. BCG’s global research (2024-2025) tells a similar story: 74% of companies struggle to achieve and scale value from AI, and only about 5% are generating meaningful results at scale. The RAND Corporation puts a finer point on it: more than 80% of AI projects fail outright — roughly double the failure rate of traditional IT projects.
So if we all agree it’s not just about technology — if that insight is now universally accepted — why is the gap between adoption and results so wide?
I think the answer is that the playbook is correct but incomplete. It diagnoses the condition accurately, then stops before prescribing the treatment. Saying “this is a business transformation, not a technology initiative” is like a doctor saying “your problem isn’t the symptom, it’s the underlying condition.” True — but what do we do about it?
After 25 years as a technology executive — as CIO, CTO, Chief Architect — across organizations of very different sizes and maturities, I’ve watched this pattern repeat with cloud, with big data, with digital transformation. The technology arrives. The strategy decks get written. Pilots launch. And then the hard part begins — the part the playbook doesn’t cover.
BCG’s research quantifies it nicely: the organizations that succeed with AI at scale allocate roughly 10% of their resources to algorithms, 20% to technology and data, and 70% to people and processes. That 70% — the organizational change, the process redesign, the human side — is exactly the territory the standard playbook acknowledges but doesn’t map.
Here’s what I’ve observed about where things actually break down.
When the Hammer Goes Looking for Nails
The most common failure pattern I see is also the most intuitive one: organizations adopt AI as a solution looking for a problem.
It usually starts with enthusiasm — and that enthusiasm is legitimate. A leadership team sees what AI can do, watches competitors make announcements, and decides the organization needs to be “doing AI.” A team gets assembled, a budget gets allocated, and the mandate is some version of: “Find high-impact use cases for AI across the business.”
This sounds reasonable. It’s also backwards.
The most effective technology initiatives I’ve been part of — the ones that actually delivered measurable value — started with a business problem, not a technology. Someone said, “We’re losing customers because our onboarding takes six weeks” or “Our analysts spend 60% of their time gathering data instead of analyzing it.” The problem came first. The tool selection followed.
When you start with the tool instead of the problem, you end up in a predictable place: use cases that are technically interesting but strategically marginal. AI can summarize meeting notes? Sure — but was that keeping anyone up at night? AI can generate marketing copy? Useful — but if your marketing bottleneck is strategy, not production speed, you’ve automated the wrong thing.
BCG’s research on AI at scale bears this out. Their analysis found that 62% of AI’s value lies in core business functions — areas that directly affect competitive advantage — not in peripheral experiments. The organizations that successfully scale AI deploy it against problems that move the needle on revenue, cost, or competitive position. The differentiator isn’t the sophistication of the AI. It’s whether the problem it’s solving actually matters to the business.
The fix sounds almost too obvious: start every AI conversation with the problem, not the tool. If the best solution turns out to involve AI, great. If it doesn’t, that’s great too. The goal isn’t to use AI. The goal is to solve business problems.
The Expectation Problem
A related, and perhaps less obvious, problem is that we’re still collectively working out what AI is actually good at. In the meantime, the gap between what we expect from AI and what it reliably delivers is generating a quiet wave of disillusionment.
This isn’t a criticism of the technology. It’s an observation about a maturation process we’re in the middle of.
AI is extraordinary at certain things: pattern recognition across massive datasets, generating first drafts, synthesizing information, accelerating code development, classification tasks. And it’s getting better fast. When applied to the right problems, the impact is real.
But AI also has real limitations that we’re still learning to account for. It can hallucinate — confidently presenting fabricated information as fact. Its outputs aren’t always reproducible. It can struggle with tasks that require strict logical reasoning or domain-specific precision. It has a “black box” quality that makes auditability and traceability challenging in contexts where those qualities are non-negotiable — regulatory compliance, financial reporting, medical decisions.
What I observe in many organizations is a mismatch: expectations set at the level of the most impressive demo, deployed into environments that need the reliability of a production system. The demo shows AI drafting a perfect strategic analysis. The production reality is an AI that gets it right 85% of the time — which, depending on the domain, might be exceptional or completely unacceptable.
Gartner’s research aligns with this: they project that at least 30% of generative AI projects will be abandoned after the proof-of-concept stage, citing poor data quality, escalating costs, and unclear business value. Not because the technology failed, but because the expected value didn’t survive contact with reality.
When expectations are miscalibrated, even good AI implementations get labelled as failures. The organization doesn’t learn the right lesson — “we need to apply AI differently” — it learns the wrong one: “AI doesn’t work for us.”
The Boundary We Haven’t Drawn Yet
There’s a deeper issue under the surface of both these problems, and I think it’s one of the most important — and least discussed — challenges in AI adoption right now.
We’re still figuring out what should be done by AI, what should be done by traditional IT systems, and what should remain with humans.
The current reflex in many organizations is to treat AI as a universal solution — to route everything through a language model or an AI agent. But the reality is more nuanced. Some tasks are well-suited to AI: anything involving natural language, pattern recognition, unstructured data, or judgment calls where “good enough” is good enough. Other tasks — the ones requiring perfect consistency, full traceability, deterministic outcomes, regulatory auditability — are better served by traditional rules-based systems. We’ve been building reliable deterministic software for decades. That didn’t stop being useful.
And some tasks — the ones involving empathy, ethical judgment, stakeholder relationships, navigating ambiguity — still need human beings.
The organizations getting into trouble are the ones trying to make everything an AI problem. They build an AI-powered workflow where half of it would work better as a traditional automated process and a quarter of it really needs a human in the loop. The result is a system that’s impressive in demos but fragile in production — because AI was asked to do things it’s not well-suited for.
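To make that boundary concrete, here is a minimal sketch of what deliberate triage might look like: a routing step that sends each task to the simplest handler that still meets its hard constraints. Everything in it is illustrative; the `Task` attributes, the `route_task` function, and the routing rules are my own hypothetical framing, not a reference architecture.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Handler(Enum):
    RULES_ENGINE = auto()  # deterministic, auditable, repeatable
    AI_ASSIST = auto()     # language, patterns, "good enough" drafts
    HUMAN = auto()         # empathy, ethics, ambiguity, accountability

@dataclass
class Task:
    needs_determinism: bool            # must produce the same output every run
    needs_audit_trail: bool            # a regulator must be able to trace the logic
    involves_unstructured_input: bool  # free text, images, messy data
    requires_judgment: bool            # ethical or relational stakes

def route_task(task: Task) -> Handler:
    """Hypothetical triage: route each task to the simplest handler
    that still satisfies its hard constraints."""
    if task.requires_judgment:
        return Handler.HUMAN
    if task.needs_determinism or task.needs_audit_trail:
        return Handler.RULES_ENGINE
    if task.involves_unstructured_input:
        return Handler.AI_ASSIST
    return Handler.RULES_ENGINE  # default to the proven, boring option

# Example: a compliance check must be deterministic and auditable,
# so it stays with traditional automation even inside an "AI" workflow.
print(route_task(Task(True, True, False, False)))  # Handler.RULES_ENGINE
```

The specifics will differ in every organization; the point is that the routing decision is made explicitly, per task, rather than defaulting everything to the model.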
I believe this question — where does AI fit versus traditional systems versus humans? — deserves much deeper exploration than I can give it here. It’s a topic I plan to dedicate a full article to soon, because getting this boundary right is, in my view, one of the most consequential design decisions organizations will make in the next few years. For now, the key point is this: the best AI strategies aren’t the ones that put AI everywhere. They’re the ones that put AI where it belongs.
The Leadership Vacuum
This is perhaps the most consequential pattern I’ve observed — and the one that’s hardest to talk about, because nobody is at fault.
When AI projects launch in most organizations, there’s a natural tendency for business leaders to step back and let the AI specialists take the wheel. It makes sense on the surface: this is a new, complex technology. The people who understand it best should lead. The data scientists, the ML engineers, the AI consultants — they know what’s possible, they know the tools, they know the architecture.
And so, in many organizations, AI projects become technology-led initiatives that happen to touch business processes, rather than business initiatives that happen to use technology.
The early stages of these projects often go well. The AI team builds an impressive prototype — a model that works, a demo that gets the steering committee excited. Progress feels fast. But then the project needs to move from prototype to production — integrate with real workflows, change how people actually do their jobs, handle edge cases that require deep domain knowledge — and momentum stalls.
It stalls because the AI specialists, talented as they are, don’t have the business depth to make the right design decisions at that stage. They don’t know which edge cases matter most. They don’t know why the process works the way it does, or which stakeholders need to be brought along. They can build a brilliant solution — but they can’t always tell whether it’s solving the right problem in the right way.
Meanwhile, the business leaders who do have that depth are sitting on the sidelines, deferring to the “experts.” They’ve mentally categorized this as a technology project — something IT or the AI team owns.
McKinsey’s 2025 research on AI high performers underscores this point: organizations where senior leaders demonstrate active ownership of AI initiatives are three times more likely to achieve meaningful results. And the gap goes beyond leadership posture — 55% of high performers fundamentally reworked their business processes when deploying AI, nearly three times the rate of other organizations. They didn’t just add AI to old workflows. They redesigned the work itself. That kind of redesign doesn’t come from AI specialists. It comes from business leaders who understand the work deeply enough to reimagine it.
I want to be clear: this isn’t about blame. The AI specialists are doing exactly what they were hired to do. And business leaders’ hesitation is rational — this technology is new, the learning curve is real, and nobody wants to be the person who slowed things down.
But the result is a structural gap. The people who know the technology don’t know the business deeply enough. The people who know the business don’t feel empowered to lead. And the project drifts into a space where it’s technically sound but strategically misaligned.
The organizations I’ve seen succeed with AI are the ones where business leaders stay in the driver’s seat — even when they’re uncomfortable, even when they don’t fully understand the technology. They lean on AI specialists for technical guidance, but they own the problem definition, the success criteria, and the decision-making. They treat AI the way previous generations of leaders treated ERP implementations or supply chain redesigns: as a business initiative that requires technical partnership, not a technical initiative that requires business input.
What the Playbook Should Say Next
If the first page of the AI transformation playbook says “this is a business transformation, not a technology initiative,” the pages that follow should say something like this:
Start with problems, not tools. Every AI initiative should begin with a business problem that someone is accountable for solving. If you can’t name the problem in a sentence, you’re not ready to pick the solution. “AI should make us better in this area” is not a problem statement.
Calibrate expectations honestly. AI is powerful and improving rapidly — and it’s not magic. Set expectations based on what the technology reliably delivers today in your specific context, not based on demos or what another organization reported. Build in room for the technology to mature.
Put AI where it belongs — and only where it belongs. Not every process needs AI. Not every AI process needs to replace what came before. The most effective implementations combine AI, traditional systems, and human judgment deliberately — each doing what it does best.
Keep business leaders in the driver’s seat. AI specialists are essential partners, but the project should be led by someone who owns the business outcome. If your AI initiative is being led by the technology team with business “input,” the structural incentives are misaligned.
Start small and learn. This isn’t new advice, but it’s remarkable how often it gets ignored. The organizations that build lasting AI capability are the ones that start with something almost trivially simple, learn from it, and compound from there. The ones that fail often start with the most ambitious use case because that’s what got the funding approved.
None of this is revolutionary. That’s kind of the point.
The AI transformation playbook gets the diagnosis right: this is fundamentally about how organizations operate, not about which models they deploy. But the treatment — the specific, practical work of changing how AI initiatives are conceived, led, and integrated — that part is still being written.
And it’s being written not in strategy decks or keynote presentations, but by the organizations that are doing the work — figuring it out one project at a time, learning what works, and adjusting.
Canada illustrates the stakes: between 2022 and 2024, we dropped from 4th to outside the top five on the Tortoise Global AI Index. We don’t have a talent problem or an investment problem. We have an execution problem: a gap between understanding what needs to happen and actually making it happen.
The playbook told us the “what.” It’s time we figured out the “how.”
About the Author
André Boisvert
CIO & Strategic Consultant
CIO and strategic consultant helping organizations navigate AI, digital transformation, and IT strategy. Sharing weekly strategic perspectives on enterprise technology.
LinkedIn

