The efficiency story is real. The homogenisation story is the one nobody is telling.
On 23 April, the Australian Financial Review reported that Telstra has deployed an AI board agent, built with Accenture and live since February, that gives directors on-demand access to the company's full repository of board papers, minutes, and action items. It addresses a genuine and well-documented governance pain point: information overload. Directors of complex organisations wade through thousands of pages of papers, minutes, and strategic materials every year, and the cognitive load of retaining it all, and of surfacing the right piece at the right moment, is enormous. Company secretary Craig Emery is right that the problem is perennial. The "walled garden" architectural choice, restricting the agent to board materials rather than the open internet, is sensible. Source attribution to guard against hallucination is table stakes for a governance application. This is a thoughtful implementation of a legitimate tool.
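To make the architectural point concrete, here is a minimal, entirely hypothetical sketch of what a walled-garden retrieval layer with source attribution can look like. The corpus, identifiers, and keyword scoring below are invented for illustration; nothing here reflects the actual Telstra or Accenture build.

```python
# Hypothetical sketch of a "walled garden" retrieval layer with source
# attribution. The corpus, doc_ids, and scoring are illustrative assumptions,
# not a description of any real system.

from dataclasses import dataclass

@dataclass
class BoardDocument:
    doc_id: str   # e.g. "minutes-2024-11" (hypothetical identifier)
    title: str
    text: str

# The walled garden: the agent can only ever see this curated set.
# Whoever controls this list controls what directors can be told.
CURATED_CORPUS = [
    BoardDocument("papers-2025-02", "February board papers",
                  "Capital allocation options for the regional network."),
    BoardDocument("minutes-2024-11", "November minutes",
                  "Board resolved to defer the regional network decision."),
]

def retrieve(query: str, corpus: list[BoardDocument], top_k: int = 2):
    """Naive keyword-overlap retrieval, standing in for a real search index."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc.text.lower().split())), doc) for doc in corpus]
    scored = [(score, doc) for score, doc in scored if score > 0]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]

def answer_with_sources(query: str) -> str:
    """Every answer is tied to doc_ids so a director can verify it."""
    hits = retrieve(query, CURATED_CORPUS)
    if not hits:
        # Refusing rather than guessing is the anti-hallucination guardrail.
        return "No supporting board material found; cannot answer."
    cited = "; ".join(f"{doc.title} [{doc.doc_id}]" for doc in hits)
    return f"Relevant material: {cited}"

print(answer_with_sources("What did the board decide about the regional network?"))
```

The governance question is visible in the code itself: the agent can only answer from `CURATED_CORPUS`, which is exactly why the questions later in this piece about who controls that list, and what has been excluded from it, matter.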
But there is a governance dimension to this story that none of the public coverage asks about, and it is the one that should keep directors honest. When Emery says the agent "is not being used to make decisions," he draws a line that is technically accurate and functionally misleading. Every tool that shapes how directors prepare shapes how they decide. If every director on the Telstra board arrives at the same table having interrogated the same AI, grounded in the same materials, surfacing the same framings and the same institutional history, then the cognitive diversity that historically buffers boards against groupthink has been quietly eroded. They haven't converged in the boardroom. They've converged in the hour before it.
This is not an argument against the tool. It is an argument for taking the governance of the tool as seriously as its deployment. The decision-making excellence frameworks directors are taught at the AICD are unambiguous on this: groupthink is most dangerous when it is invisible, when consensus feels like clarity rather than conformity. A shared AI that makes all directors feel well-prepared is a shared AI that could make them well-prepared in exactly the same direction. The productivity gain is real. The homogenisation risk is real. A board that only captures one without designing against the other has not finished the governance work.
Five questions I'd ask before the next board meeting uses this tool.
- Who owns the data set that feeds this agent — and when did the board last review what's included, what's excluded, and who controls those boundaries? Does the audit committee have oversight of that curation process, or does it sit solely with the company secretary?
- If seven of our twelve directors asked this agent the same question in the hour before today's meeting, how confident are we that their thinking is genuinely independent by the time they sit down?
- If the AI gives a director a confident but factually incorrect answer an hour before a critical vote, and that director doesn't verify the source document, who bears the liability? Has legal reviewed that scenario?
- What's our process for detecting when directors are anchored to AI framing rather than applying independent judgment — and would we even recognise it if it were happening?
- The current success metric is anecdotal: "preparing for meetings is much easier." What are we actually measuring, and how will we know in eighteen months whether this tool has improved decision quality or simply improved preparation comfort?
What a director who has seen this go wrong would be watching for.
- Enthusiasm without guardrails. The tool has been "enthusiastically embraced" with no public mention of governance guardrails or identified risks. Enthusiasm without a governance framework for the tool itself is a yellow flag — exactly the kind of thing an audit committee should probe.
- Single-officer curation. The data set is curated by the company secretary alone. That is a significant concentration of information power, even if unintentional. A director who doesn't know what has been excluded from the walled garden cannot ask about it.
- Easier ≠ better. "Preparing for meetings is much easier" is the headline outcome metric. The governance value of thorough pre-reading comes precisely from its friction: it produces directors who have engaged deeply and independently with the materials, not directors who have outsourced their synthesis to the same intelligence.
- Unverified citations. Source attribution assumes directors will verify, and the behavioural evidence from time-pressured, iPad-first professionals suggests they won't — at least not consistently, and almost never in the hour before a meeting starts.
What the board should be optimising for.
AI that helps directors hold institutional memory, recall prior decisions and their rationale, and surface relevant precedents before a meeting is a legitimate governance improvement. Corporate memory degrades as board composition changes and organisations grow, and that degradation has real costs — in consistency of oversight, in the quality of challenge, in the time directors spend reconstructing context that should be readily available.
A board's governance value comes partly from cognitive diversity — directors who prepared differently, from different sources, with different mental models, who arrive at genuine divergence rather than polite consensus. Shared AI preparation is a convergence engine. It narrows the variance in how directors enter a discussion. That is not fatal, but it is real, and it deserves to be designed against deliberately rather than assumed away.
Telstra's implementation is technically sound and governance-lite in equal measure. The walled garden is the right call. Source attribution is good design. But a board that deploys this tool without a governance framework for the tool itself is trusting the company secretary's curation judgment indefinitely, with no independent check and no board-level ownership. The audit committee should review the data set at least annually. There should be a clear protocol for updating and deprecating materials. Directors should have an explicit norm — written down, not assumed — that AI preparation supplements but does not substitute for independent pre-reading. And the board should agree on what categories of decision the agent should never be used to prepare for. Emery's instinct that "there is no part of corporate governance that cannot be improved by AI" may well prove correct. The director who accepts that without also asking "and what parts of governance could AI quietly damage?" is doing half the job.
Researched and drafted by Brad's agentic AI team. Edited and published by Brad Ferris.