Brad Ferris
The Director's Lens · Edition 05 · AI Governance

Your Board Is Funding AI. Is It Asking the Right Questions?

Australian companies are investing heavily in AI and worker adoption is rising — but enterprise productivity gains remain elusive. The problem isn't the technology. It's that boards are approving AI spend without overseeing the organisational transformation that determines whether that investment ever returns value.

Published 26 April 2026
Read: 6 minutes
The Governance Story

On 20 November 2025, the Australian Financial Review published a piece by RMIT economist Sinclair Davidson, drawing on RBA and McKinsey survey data to make a pointed diagnosis: Australian companies are investing heavily in AI, workers are using AI tools, and almost every organisation runs some experiment — but enterprise-level productivity gains remain elusive. Davidson's explanation isn't that the technology is immature. It's that the organisational architecture hasn't been redesigned to use it.

The governance dimension this story doesn't ask about is the board's role in that failure. When boards approve AI investment — and they are, at scale — they are typically approving a technology procurement and implementation program. What they are rarely approving, or overseeing, is the organisational transformation that determines whether that investment ever returns value. Davidson's distinction is precise: companies capturing AI's value are "redesigning workflows, realigning incentives, and pushing decision-making authority to where the information sits." The companies capturing none of it are treating AI as an efficiency add-on — a layer of software dropped onto yesterday's org chart. Boards are not asking which category their organisation falls into.

This is, at its core, a strategy execution and capital allocation failure with an underweighted risk dimension. The AI budget line gets board approval. The organisational redesign — which is the actual value-creation mechanism — doesn't appear as a board agenda item at all. Management reports adoption rates and tool deployments. Those are lag indicators for the wrong thing. The lead indicators — workflow redesign progress, decision authority realignment, incentive restructuring — are not being surfaced, monitored, or held to account. And Davidson adds a structural wrinkle that Australian boards should be naming explicitly: the industrial relations framework constrains the organisational experimentation AI requires. Awards lock in task bundles. Enterprise agreements freeze org design for years. Multi-employer bargaining imposes uniformity where AI demands flexibility. If your AI transformation assumption set doesn't include an IR risk assessment, it's incomplete.

Questions I'd Ask in the Boardroom
  • We've approved material AI investment this year. Can management show the board which workflows have been redesigned — not just augmented — as a result? What's the before and after?
  • What assumptions underpin our AI productivity projections? Specifically, what has to be true about our workforce structure, role design, and decision rights for those returns to materialise — and what evidence have we gathered against each?
  • Has management assessed whether our enterprise agreement or relevant award constrains our ability to redesign roles around AI? Has that analysis reached the board?
  • Are we measuring AI adoption or AI value creation? If adoption is up but productivity is flat, what's our response framework — and at what trigger point does the board reassess capital allocation?
  • What are our lead indicators for AI transformation success — not usage metrics, but organisational metrics? Who is monitoring them and reporting to the board?
  • If our productivity assumptions turn out to be wrong — say we're still in pilot mode in 18 months — at what point do we adjust the strategy? Who makes that call, and what's the process?
Red Flags & Watch Points
  • AI investment approved at board level; organisational redesign case never presented. If the board has signed off on AI spend but has never seen a paper on what workforce structures, decision rights, and incentive systems need to change — it has approved the input without approving the mechanism for returns.
  • Adoption metrics presented as evidence of value. Usage rates, tool deployments, and "AI users" are technology metrics. They have almost no correlation with enterprise productivity gains. A board that accepts these as evidence of AI ROI is not looking at the right numbers.
  • No IR risk assessment in the transformation plan. Davidson's point about awards and EBAs isn't speculative — it's mechanical. If management's AI transformation plan hasn't identified which industrial instruments apply and how they constrain workflow redesign, there's a known unknown the board isn't seeing.
  • "AI" sits on the technology committee, not the strategy committee. If the board's AI oversight runs primarily through a technology or audit lens, the organisation has miscategorised the question. AI transformation is a strategy and capital allocation question first.
Opportunity & Risk Balance

The competitive opportunity for companies that solve the organisational design problem is real and likely compounding. If the RBA data is right — that most Australian firms are still in pilot mode — the gap between bolt-on adopters and genuine transformers is growing. Companies that crack workflow redesign now will build structural productivity advantages that are hard to replicate, because the organisational muscle for transformation takes time to develop.

The risk is more insidious than AI "not working." It's that boards continue approving AI capital expenditure year after year against productivity assumptions that never materialise, because the organisational preconditions were never established. That's capital misallocation at a quiet but serious scale — the kind that never creates a headline moment but steadily erodes competitive position over three to five years while management reports healthy adoption metrics. Boards that don't ask the right questions now are the ones asking "why didn't this work?" in 2028.

Director's Recommendation
My position

Boards should stop asking "are we using AI?" and start asking "are we transforming?" These are different questions requiring different answers from management. The practical step: request a board-level AI transformation paper — not a technology update — covering four things: which workflows have been genuinely redesigned (with evidence, not anecdotes); what the productivity assumption set is and what has been tested against each assumption; whether IR instruments create constraints and how management proposes to navigate them; and what lead indicators the board will receive quarterly to track organisational transformation rather than tool adoption. If management cannot produce this paper, the board is operating on faith rather than governance. Given the scale of AI investment flowing through Australian boardrooms right now, that's not a defensible position.

Researched and drafted by Brad's agentic AI team. Edited and published by Brad Ferris.