
The Board's Role in AI: Five Questions Every Director Should Be Asking

Artificial intelligence is now a standing item on most board agendas. And yet, in many boardrooms, the conversation hasn’t moved much beyond “what are our competitors doing?” and “should we be doing more?”

That’s a problem.

Not because AI isn’t important — it is — but because the board’s role isn’t to chase the hype cycle. It’s to ensure the organization is approaching AI with the right level of preparation, the right governance, and a realistic understanding of what it takes to deliver real value.

Over the past 25 years, I’ve had the privilege of presenting technology strategies to boards of directors in several organizations. I’ve seen what happens when a board asks sharp, well-informed questions — and what happens when they don’t. The difference in outcomes is significant.

What I’ve observed is that effective AI oversight requires directors to play a dual role: challenging management with the right questions, and honestly assessing whether the board itself is equipped to evaluate the answers.

Here are five questions I believe every director should be asking.

1. “Is our data actually ready — or are we building on sand?”

Ask this of management.

This is the question that separates organizations that will succeed with AI from those that will spend millions learning expensive lessons.

Every AI initiative — whether it’s a recommendation engine, a predictive model, or an intelligent chatbot — depends on data. Not just data that exists, but data that is inventoried, understood, clean, accessible and governed.

In my experience, most organizations overestimate their data readiness. They have data, certainly. But when you actually conduct a thorough inventory and quality assessment, you often discover that a significant portion of it is incomplete, inconsistent, siloed, or simply not structured in a way that AI can use.

The board doesn’t need to understand data architecture. But it does need to hear a credible answer to this question. If management can’t clearly articulate where the organization’s data stands today — what they have, what quality it’s in, and what gaps exist — that tells you something important about the maturity of their AI ambitions.

2. “Can we actually deliver this — and how do we know?”

Ask this of management.

This is the credibility question, and it’s one that boards sometimes hesitate to ask directly.

AI programs are not standard technology projects. They require specific skills (data engineering, machine learning, prompt engineering), a different development mindset (experimentation, iteration, tolerance for uncertainty), and significant organizational change management.

When management presents an AI roadmap, the board should probe the delivery capacity behind it. Do we have the talent? Are we building, buying or partnering? Have we done this before, even at a small scale? What’s the track record?

A well-structured AI program will start with a realistic maturity assessment — an honest evaluation of where the organization truly stands, not where it aspires to be. If the board doesn’t see evidence of that self-awareness in the proposal, it’s a red flag.

3. “What does success actually look like — in numbers we can track?”

Ask this of management.

There’s a pattern I’ve seen repeatedly: boards are impressed by what other organizations are supposedly doing with AI and approve programs based on a general sense that “we need to be in this space.” The business case is built on optimism rather than measurable outcomes.

That’s not good enough for any other major investment, and it shouldn’t be good enough for AI.

Directors should expect a clear articulation of expected returns, tied to specific business outcomes: cost reduction, revenue growth, efficiency gains, customer experience improvements. Not generic promises — specific, measurable targets with realistic timelines.

Equally important: the board should understand the investment sequencing. AI programs often require foundational investments — data platforms, infrastructure, governance frameworks — before any visible business value emerges. A credible program will explain this honestly, rather than front-loading the impressive use cases to secure approval.

4. “What are the risks we’re not seeing?”

Ask this of management — and of yourselves.

This may be the most important question on this list.

The obvious AI risks — cost overruns, failed projects, talent gaps — are usually addressed in any decent business case. The risks that should concern the board are the ones that don’t make it into the presentation.

What is the most plausible scenario where AI causes harm to the organization without anyone noticing right away? Think about it: an AI model that gradually introduces bias into hiring decisions. A chatbot that quietly gives customers incorrect information. An automated process that makes decisions no one fully understands or can explain to a regulator.

These are not science fiction scenarios. They are happening today in organizations that deployed AI without adequate oversight mechanisms.

The board’s role here is to ensure that management has thought about failure modes, not just success scenarios. And this is also where directors need to turn the question inward: does our board have enough understanding of AI to recognize these risks? If every member of the board is relying on management to tell them what to worry about, that’s a governance gap.

5. “Do we have the guardrails in place — before we accelerate?”

Ask this of yourselves as a board.

AI governance isn’t a compliance checkbox. It’s a strategic enabler.

Organizations that establish clear AI policies, ethical guidelines, data privacy frameworks, and usage boundaries early in their AI journey actually move faster than those that don’t. It sounds counterintuitive, but it’s true: when people know the rules, they innovate with confidence instead of hesitating with uncertainty.

The board should ask whether the organization has a formal AI policy. Whether there are clear guidelines on what data can and cannot be used. Whether there is a process for evaluating AI initiatives against ethical and regulatory standards before they launch, not after something goes wrong.

And the board should ask itself: have we defined our own expectations around responsible AI use? Have we articulated to management what level of risk is acceptable? If not, we are implicitly delegating a governance responsibility that belongs at the board level.

The Real Question Behind All Five

Ultimately, these five questions point to a single, deeper question: is this board equipped to provide meaningful oversight of AI, or are we simply approving what we don’t fully understand?

That’s not a comfortable question. But it’s the one that separates boards that govern from boards that rubber-stamp.

AI is too consequential — too full of potential and too full of risk — to be left to management alone. The board doesn’t need to become technical. But it does need to become literate enough to ask the questions that matter, and honest enough to recognize when it needs help answering them.
