On 30 April 2026, APRA Member Therese McCarthy Hockey signed an industry-wide letter on AI governance, addressed to all APRA-regulated entities and accompanied by an attachment titled, with telling directness, AI Supervisory Engagement Debrief: Observations for Executive Management. It draws on a late-2025 deep-dive supervisory engagement with selected large banks, insurers, and superannuation trustees. Private health insurers were not in the sample, but the letter is addressed to them all the same; PHI directors should treat its expectations as applying in full. APRA published no numbers: no entity count, no count of AI use cases, no maturity scores. The reporting is qualitative by design, which means directors cannot benchmark against peers and must instead read the letter itself as the bar.
The attachment names four observations: cyber threats outpacing information security practice; AI adoption moving faster than governance maturity; supplier risk frameworks failing to address concentration and opacity; and traditional change-management and assurance being inadequate for AI's probabilistic, adaptive characteristics. The cover letter adds a separate finding aimed squarely at directors. Boards are showing strong interest in AI's upside but are "still developing the technical literacy required to provide effective challenge on AI related risks and oversight." APRA goes further: it observes "an overreliance on vendor presentations and summaries without sufficient examination of key AI risks such as unpredictable model behaviour and the impact on critical operations." That is unusual prudential prose — pointed, board-specific, and almost guaranteed to be quoted back to directors in their next supervisor interaction.
The letter is also notable for what it does not do. It does not name a single specific prudential standard by number. APRA's framework is described as "principle-based … technology and vendor agnostic" — meaning existing standards in operational risk, information security, governance, and data risk apply to AI without re-citation or grace period. Most boards will read "no new requirements at this stage" as breathing room. They should read it as the opposite. APRA has reserved the right to "take stronger supervisory action and, where appropriate, pursue enforcement," which is more direct language than this regulator usually puts in writing. The next time something goes wrong, the lever will be the standards already on the books — applied through a 2026 AI lens that didn't exist when most boards last benchmarked their compliance.
The other quietly remarkable detail is that APRA names a specific frontier model, Anthropic Mythos, in a public letter. Regulators almost never do this. The framing matters: the model is named not as a productivity risk inside regulated entities, but as an adversarial cyber threat against them, with frontier-model capabilities accelerating attack pathways, vulnerability discovery, and exploit development at speeds that existing patching cycles cannot match. APRA is signalling that its supervisors are now technically literate enough to assess your governance against both your own technology stack and your adversary's. That asymmetry, a regulator more fluent than the boards it supervises, is the real news.
Six questions directors should be putting to management now:
- The letter calls out boards relying on vendor presentations rather than examining model behaviour and critical-operation impact. When was the last time we examined an AI deployment without the vendor in the room, and who in management was qualified to lead that examination?
- APRA expects an AI strategy "consistent with the entity's risk appetite and tolerance settings … with clearly defined triggers aligned to resilience objectives." Can you point me to those triggers in our current papers, or are we exposed if APRA asked tomorrow?
- For AI workloads supporting our critical operations, what are the "credible fallback processes" we'd run if our primary provider became unavailable for geopolitical, commercial, or technical reasons — and have they been tested, not just designed?
- We have supplier risk frameworks. Have we actually tested exit and substitution arrangements for our critical AI providers, as APRA explicitly called for — or do we have them on paper only?
- AI risks cut across operational risk, cyber, data governance, model risk, privacy, conduct, procurement, and third-party risk. Who in our organisation is accountable for integrated assurance across that span, and is that role independent of the executives deploying AI?
- Which directors on this board could read a model card and challenge management on it tonight? If the honest answer is none, what is our 12-month plan to fix that — and is "an AI training session" really the answer?
And five red flags the letter makes hard to defend:
- Vendor presentations being treated as a substitute for independent board examination of AI deployments. APRA has now named this pattern explicitly; it cannot be unseen.
- An "AI strategy" approved by the board with no clearly defined triggers, no resilience objectives, no link to risk appetite, and no monitoring cadence reaching the board.
- "We have a multi-cloud strategy" offered as evidence of AI provider diversification. Cloud diversity is not model diversity, and the letter is specifically about tested exit and substitution arrangements — not contractual ones.
- Internal audit and second-line risk functions that have not been resourced with the specialist AI capability APRA explicitly calls for. The letter says these functions are "challenged", which is APRA's polite word for under-equipped.
- A 90-minute board AI literacy session being treated as the answer to the literacy expectation. The bar is "sufficient understanding and literacy … to provide effective challenge and oversight" — sustained capability, not a slide deck.
The upside is real, and APRA explicitly acknowledges it: AI presents great opportunity for productivity and efficiency, and "failing to embrace AI may put businesses at a strategic disadvantage." Australian financial institutions are well positioned to capture those gains because the regulatory culture is disciplined and customers trust regulated entities. Done well, AI governance is not a brake on adoption; it is the discipline that lets boards approve faster, because they understand what they are approving and trust the controls around it. Boards that internalise this now, and treat the next 12 months as the window to build genuine assurance, are the ones that come out of this period strategically ahead.
The downside is asymmetric. Governance maturity costs money and slows deployment by quarters. Governance failure costs years of regulatory remediation, board reshuffles, and reputational damage on the scale of the 2018 CBA prudential inquiry or the 2017 AUSTRAC action, and APRA has now explicitly reserved the right to "take stronger supervisory action and, where appropriate, pursue enforcement." Boards that read "no new requirements at this stage" as a reprieve will find themselves explaining, two years from now, why their existing prudential obligations were breached and why the board was unable to demonstrate effective challenge.
Three moves in the next 90 days. First, commission an integrated AI assurance review that mirrors APRA's four observation themes — cyber and information security, governance maturity, supplier and concentration risk, and integrated assurance for adaptive systems — with named accountabilities for the CRO, CTO, and CISO and a single board paper that crosses domains rather than four fragmented ones. Second, rewrite the AI strategy and risk appetite together so they meet APRA's explicit board minimum: alignment to risk tolerance, third-party dependency monitoring, and clearly defined triggers aligned to resilience objectives. Third, build genuine director-level AI literacy — at least two directors who can read a model card, interrogate a vendor security and assurance pack, and challenge management on inference cost economics. If the chair cannot name those two directors today, board renewal is the conversation, not deferral. The regulator has been clear, and the language has more teeth than the press release implied. The next test is whether boards meet it before the next incident does.
Researched and drafted by Brad's agentic AI team. Edited and published by Brad Ferris.