Intent Engineering: The Missing Piece in Your AI Strategy?
I’ve been thinking about Klarna a lot lately.
Their AI customer service agent was, by every measurable standard, a success. In its first month, it handled 2.3 million conversations — two-thirds of all customer service chats. Resolution time dropped from eleven minutes to under two. The company projected $40 million in savings. The CEO called it the equivalent of 700 full-time agents. Wall Street loved the story, investors loved the efficiency, and every conference keynote in 2024 had a slide about it.
Fifteen months later, Klarna was hiring humans back.
Customer satisfaction had declined. Complex issues went unresolved. The brand, a company built on making payments feel effortless, was generating friction at the one touchpoint where trust matters most. CEO Sebastian Siemiatkowski admitted the problem publicly, in remarks covered by Bloomberg, Fortune, and Entrepreneur: “Cost unfortunately seems to have been a too predominant evaluation factor… what you end up having is lower quality.”
Klarna’s AI didn’t fail. It did exactly what it was told to do — resolve tickets fast and cut costs. The problem wasn’t the AI’s capability. It wasn’t even the data it had access to. The problem was that nobody had encoded what Klarna actually valued: lasting customer relationships, brand trust, the kind of experience that makes someone choose Klarna over a competitor.
The AI optimized perfectly. For the wrong thing.
The Evolution We’re Not Talking About
I’ve watched the AI conversation mature through two distinct phases — and I think we’re overdue for a third.
Prompt engineering was the first. We learned to talk to AI — to craft instructions that produced useful output. It answered the question: “What can your AI do?” Better prompts meant better results, and an entire discipline emerged around getting the most out of a given model.
Context engineering is the current phase. The insight here is that even perfect prompts fail if the AI doesn’t have the right information. Retrieval-augmented generation, knowledge bases, organizational data pipelines — these are all ways of answering a second question: “What does your AI know?” Give AI the right context, and it performs dramatically better.
Both matter. But after 25 years of leading technology transformations, I can tell you — neither is sufficient.
There is a third question that most organizations aren’t asking yet, and it’s the one that determines whether AI produces real value or just expensive activity:
“What does your AI want?”
This is Intent Engineering — the practice of translating an organization’s goals, values, and decision-making principles into parameters that AI systems can actually act on. Not as aspirational guidelines. As operational constraints.
Intent engineering isn’t a formalized discipline yet. There’s no certification, no established methodology, no textbook. But it is rapidly becoming a very real problem space, one that organizations deploying AI at scale keep stumbling into whether they have a name for it or not.
What the Intent Gap Looks Like
Klarna isn’t an outlier. The pattern repeats across industries whenever organizations deploy AI with clear technical objectives but without encoding their actual values.
Zillow learned this in 2021. Their AI-powered iBuying program was optimized for speed and volume — acquire as many homes as possible, as quickly as possible. The algorithm was technically brilliant at winning bids. It was also systematically overpaying for properties, because nobody had encoded the intent that mattered: accurate valuation with conservative risk margins. The result was a $528 million loss in a single quarter, 2,000 layoffs, and the entire program shut down. The AI succeeded at its objective. The objective was wrong.
The pattern is always the same. The AI hits its technical KPIs — faster resolution, more transactions, higher throughput — while quietly eroding something the organization actually cares about but never explicitly defined: customer trust, valuation accuracy, brand quality, employee morale.
This is the intent gap. And it exists because organizations have gotten comfortable with vague, aspirational definitions of what they value. “Customer-centric.” “Quality-first.” “People-focused.” I’ve seen these phrases on lobby walls in every organization I’ve worked with. They work fine when humans are making the decisions, because humans intuitively fill in the gaps. A veteran support agent who’s been told “we’re customer-centric” knows — without being explicitly programmed — that sometimes the right call is to spend twenty minutes with an upset customer, even though the efficiency metric says keep it under five. That judgment comes from years of experience, from watching a manager make that exact trade-off, from absorbing a culture that was never written down.
AI doesn’t have that luxury. AI needs to know: When speed conflicts with thoroughness, which wins? Under what conditions? By how much?
From Vague Values to Operational Intent
So what does intent engineering actually look like in practice? I’ll be honest — we’re all still figuring this out. But having sat in enough rooms where AI deployments were planned, launched, and occasionally walked back, I can see the shape of it.
It isn’t a technical framework. It’s not a new software layer you can buy. It’s the organizational work of translating implicit decision-making principles into explicit, machine-readable guidance.
This means answering questions that most organizations have never been forced to answer clearly (I’ll sketch what the answers might look like right after the list):
- Decision boundaries: When should AI act autonomously, and when should it escalate to a human? Not as a general policy, but for each type of decision, with specific thresholds.
- Trade-off hierarchies: When two organizational values conflict — speed versus quality, cost versus experience, consistency versus personalization — which takes priority, and under what conditions?
- Feedback mechanisms: How does the organization know when AI is optimizing for a proxy metric rather than the actual goal? What signals should trigger a human review?
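To make that concrete, here is a minimal sketch of what those answers might look like once they leave the values poster and become configuration. Everything here is illustrative: the structure, field names, and thresholds are assumptions made for the sake of the example, not an established standard or product.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionBoundary:
    """When the AI may act alone and when it must escalate, per decision type."""
    decision_type: str        # e.g. "refund"
    autonomous_up_to: float   # dollar threshold below which the AI acts alone
    escalate_if: str          # condition that forces a human decision

@dataclass
class TradeOff:
    """Which value wins when two values conflict, and under what condition."""
    wins: str                 # e.g. "resolution_quality"
    over: str                 # e.g. "handle_time"
    condition: str            # when this priority applies

@dataclass
class FeedbackTrigger:
    """A signal that the AI may be optimizing a proxy rather than the goal."""
    metric: str               # e.g. a 30-day customer satisfaction score
    floor: float              # value below which humans must step in
    action: str               # what happens when the trigger fires

@dataclass
class IntentSpec:
    boundaries: list[DecisionBoundary] = field(default_factory=list)
    trade_offs: list[TradeOff] = field(default_factory=list)
    triggers: list[FeedbackTrigger] = field(default_factory=list)

# Illustrative values only -- choosing them is the organizational work.
customer_service_intent = IntentSpec(
    boundaries=[DecisionBoundary("refund", autonomous_up_to=100.0,
                                 escalate_if="customer disputes the outcome")],
    trade_offs=[TradeOff(wins="resolution_quality", over="handle_time",
                         condition="issue is classified as complex")],
    triggers=[FeedbackTrigger(metric="csat_30d", floor=4.2,
                              action="pause autonomy on complex tickets")],
)
```

None of the hard questions live in the code. They live in the arguments about what numbers like 100.0 and 4.2 should be, and who gets to decide.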
Consider how differently Klarna’s story might have played out with explicit intent engineering. Instead of a single optimization target (resolve tickets as fast as possible), the AI could have operated within a richer set of constraints: resolve routine inquiries quickly, but for complex issues, prioritize resolution quality over speed; when a customer shows signs of frustration, offer a human handoff; never let cost efficiency push customer satisfaction below a defined floor. None of this is technically difficult. The hard part is agreeing on the values, writing them down, and making them operational.
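As a thought experiment, those three constraints reduce to routing logic of roughly a dozen lines. Everything below is hypothetical: the complexity label, the frustration score, and the satisfaction floor all assume upstream signals that are themselves nontrivial to produce.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    complexity: str           # "routine" or "complex" (assumed classifier output)
    frustration_score: float  # 0.0-1.0, from a hypothetical sentiment signal

def route_ticket(ticket: Ticket, rolling_csat: float, csat_floor: float = 4.0) -> str:
    """Decide whether the AI resolves a ticket or a human does."""
    # Constraint 3: cost efficiency never overrides the satisfaction floor.
    # If CSAT has slipped, complex work goes back to humans regardless of cost.
    if rolling_csat < csat_floor and ticket.complexity != "routine":
        return "human"
    # Constraint 2: visible frustration always earns a handoff offer.
    if ticket.frustration_score > 0.7:
        return "offer_human_handoff"
    # Constraint 1: routine issues resolve fast; complex ones run in a
    # slower, quality-first mode instead of being rushed to closure.
    return "ai_fast_path" if ticket.complexity == "routine" else "ai_quality_path"

# e.g. route_ticket(Ticket("complex", 0.8), rolling_csat=4.5) -> "offer_human_handoff"
```

The branches are trivial. What is not trivial is that writing them requires someone with authority to define “complex,” “frustrated,” and “acceptable.”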
AI Forces You to Document Your Culture
Here’s the part that makes executives uncomfortable: you can’t encode intent you haven’t articulated.
Every organization I’ve ever worked with has a culture that lives in the hallways: in how decisions actually get made, as opposed to what the values poster says. I once watched two VPs spend an hour debating what “quality-first” meant for their AI deployment, only to realize they’d been operating under completely different definitions for years. It had never mattered before, because their teams were made of people who navigated the ambiguity through experience and good judgment.
AI exposes that ambiguity at scale.
When you deploy an AI agent and tell it to “be customer-centric,” you’re forced to define what that actually means in operational terms. Does it mean never saying no? Does it mean escalating every edge case? Does it mean absorbing cost to preserve goodwill? Every organization answers these questions differently in practice — the answers just happen to live in the judgment of experienced employees rather than in any document.
Intent engineering forces the conversation that most organizations have been deferring for their entire existence: What do we actually value, not as aspiration but as practice? And are we comfortable encoding that into a system that will execute it consistently, at scale, with no judgment calls?
In my experience, the organizations that do this work honestly discover two things. First, that their real values don’t always match their stated values, and that the gap is worth closing. Second, and this is the unexpected payoff, that the process of defining intent for AI ends up improving decision-making across the entire organization, because it forces a clarity that benefits humans and machines alike.
The Real AI Race
The competitive conversation right now is about models — who has the most capable AI, the largest context window, the best benchmark scores. I think that conversation misses the point entirely.
Models are increasingly commoditized. Every major player has access to frontier capabilities. The organizations that pull ahead won’t be the ones with the smartest AI. They’ll be the ones with the clearest organizational intent — the ones that have done the hard, unglamorous work of defining what they actually value and encoding it into how their AI operates.
Klarna’s story has a good ending. They recognized the gap, course-corrected, and are now running a hybrid model where AI handles routine tasks and humans handle the moments that require judgment and empathy. Siemiatkowski reframed human service as a “VIP” offering — acknowledging, publicly, that human connection has value that AI can’t replicate. That’s not a failure story. It’s a learning story.
But it took a brand credibility hit and a public reversal to get there. The question for every other organization is whether you can learn the same lesson without paying the same price.
The technology question — “What can your AI do?” — is largely answered. The knowledge question — “What does your AI know?” — is being solved. The intent question — “What does your AI want?” — is where competitive advantage lives now. And answering it starts not with a technical initiative, but with an honest conversation about what your organization truly values.
The companies that win the AI era won’t be the ones with the best models. They’ll be the ones that know themselves well enough to direct them. And that self-knowledge? It starts with a conversation most leadership teams haven’t had yet.
About the Author
André Boisvert
CIO & Strategic Consultant
CIO and strategic consultant helping organizations navigate AI, digital transformation, and IT strategy. Sharing weekly strategic perspectives on enterprise technology.