Brad Ferris
The Director's Lens · Edition 07 · AI Governance

The AI Strategy That Boards Aren't Actually Governing

Ninety-five percent of organisations have an AI strategy. Only 8% have measurable ROI. That 87-point gap isn't an AI problem — it's a governance problem, and it's showing up on directors' watches.

Published: 28 April 2026
Read: 5 minutes
The Governance Story

In April 2026, KPMG International published its Global AI Pulse survey of 2,110 senior executives across 20 countries and 8 sectors, finding that while 95% of organisations have an AI strategy, only 8% have established measurable ROI. That 87-point gap isn't an AI problem. It's a governance problem — and it's exactly the kind of problem that shows up on a director's watch.

The data buried inside the report is more instructive than the headline. KPMG identifies 11% of organisations as "AI leaders" — the ones actually scaling AI and capturing returns. The differentiating characteristics aren't budget or technology. They're structural: 81% governance readiness among AI leaders versus 63% in the rest; board-level AI expertise at 45% versus 20%; dedicated board oversight of AI at 89% versus 76%. These aren't outcomes — they're inputs. The boards of the companies getting results built oversight structures before they needed them. That sequence matters enormously. Governance embedded in system design from the start is the single clearest differentiator between organisations scaling AI and those stalling.

The second dimension the report surfaces is the divide KPMG draws between orchestrators and operators. Operators use AI tools. Orchestrators build AI-driven systems that coordinate work across the entire organisation. Most boards have approved AI initiatives thinking they're in operator mode. But the systems being built are quietly moving them into orchestrator territory: a different risk profile, different capital exposure, different stakeholder implications. That shift doesn't belong in management's lane. It belongs in front of the board.

Questions I'd Ask in the Boardroom
  • We say we have an AI strategy: how does our governance readiness measure against the 81% benchmark KPMG reports for AI leaders? Can someone map our current framework against those indicators before the next meeting?
  • Our board has digital experience. Does that translate to genuine AI literacy — the kind that lets us challenge management on model risk, data governance, and algorithmic accountability? Or are we using "digital" as a proxy for something it isn't?
  • In every significant AI project currently underway, at what point in the design process was governance applied — at inception, or when something went wrong?
  • Are we an orchestrator or an operator? If the honest answer is "we're not sure," that's a strategic question the board hasn't resolved, not a management one.
  • Seventy-five percent of executives in this survey are concerned about AI risk and security. Where do those risks sit in our risk appetite statement — with specific thresholds and named owners? Or are they described vaguely under "technology risk"?
  • If one of our AI systems triggered a regulatory inquiry or a bias complaint tomorrow, what's the escalation path? Who finds out, in what order, and at what point does the board get informed?
Red Flags & Watch Points
  • Governance as afterthought. If your AI initiatives started with "build first, govern later," you're in the 95% with a strategy, not the 11% getting results. The sequencing isn't a detail — it's the whole game.
  • Proxy expertise on the board. "Digital transformation" experience does not equal AI governance literacy. If your board cannot ask pointed questions about model risk, data lineage, or algorithmic accountability — not conceptually, but in relation to your specific AI systems — that's a structural gap, not a knowledge gap you can fix with a briefing paper.
  • Risk appetite silence on AI. If your organisation's risk appetite statement doesn't explicitly address AI-related risks — model failure, data privacy, bias, regulatory exposure, vendor concentration — it isn't governing AI. A statement that says "we have a prudent approach to technology risk" is not a risk appetite statement. It's a placeholder.
  • ROI without accountability. The report is unambiguous that ROI measurement is where most organisations fail. If no one owns the AI ROI metrics — named, with a reporting obligation to the board — those metrics will not materialise. Measurement without ownership is aspiration, not governance.
Opportunity & Risk Balance

The opportunity is real and the window is narrowing. Becoming an AI orchestrator — a company that uses AI to coordinate and amplify work across the organisation at scale — is a genuine and durable competitive advantage if governance is built correctly. The risk is that boards frame governance as a brake on speed. The KPMG data says the opposite: the organisations with stronger governance structures are the ones making faster, more confident AI bets, because their boards can actually evaluate the risk they're taking on.

The deeper risk isn't being left behind by AI adoption. It's approving investment after investment, watching ROI fail to materialise, and not knowing why — because the board never built the oversight structures to ask the right questions. At that point, the failure isn't management's. It's the board's.

Director's Recommendation
My position

Every board should run one practical diagnostic against the KPMG findings — not a consultant engagement, but a single question asked at the next meeting: "For each significant AI initiative currently underway, at what point in the design process was governance applied?" If the honest answer is "after build" or "we're not sure," the board has an oversight gap. The fix isn't a policy document. It's a structural change: AI governance criteria embedded in project approval, not project post-mortem. The companies delivering measurable ROI from AI didn't get there by accident. They built oversight structures before they needed them. That's the board's job — and this report gives every director a clear, data-backed benchmark to measure their own organisation against.

Researched and drafted by Brad's agentic AI team. Edited and published by Brad Ferris.