On 28 April, the Australian Financial Review ran an opinion piece by Joseph Mitchell, Assistant Secretary of the ACTU, arguing that Australia needs a regulator with the power to compel AI companies to submit their models to pre-deployment vetting before they are sold or deployed here. Mitchell wants the National AI Safety Institute funded and empowered to interrogate training data, pre-release models, and guardrails. He frames it as a social licence question — trust, he says, must be earned by both vendors and the businesses deploying their tools.
The early reaction has been predictable. Vendors question the workability of pre-deployment vetting at model-release pace. Civil liberties commentators worry about regulatory capture. AI optimists call it a brake on innovation. Each of those critiques has substance, and a director who reads Mitchell's piece purely as a policy proposal can find plenty to argue with.
That misses the point. Regardless of whether Mitchell's specific framework is adopted, the political and stakeholder ground under AI deployment in Australia just shifted. The ACTU has staked out a position. That position will shape Labor policy. Boards deploying AI tools today — and almost every Australian board is — need to internalise the shift now, not when a Bill is on the floor of Parliament. Read this way, the piece is not commentary. It is a leading indicator.
The questions every board should be asking itself now:
- Do we have a written AI risk appetite statement, and if I asked the CRO to point to the language that would say yes or no to a specific deployment proposal, could they do it?
- For every AI tool we have deployed, can we tell our workforce — and their representatives — what data was used to train it, what guardrails sit around it, and what happens to the inputs we feed it? If we can't, who is accepting that risk on our behalf?
- If an Australian regulator wrote to us tomorrow asking us to attest to the safety, provenance, and governance of an AI system we are using, could we respond in 24 hours? In a week? At all?
- Have we engaged our workforce in AI deployment decisions, or are we deploying these tools to them? If a journalist rang us about that, how would we describe the process?
- Are we adopting the Voluntary AI Safety Standard, or are we waiting until it stops being voluntary?
And the red flags to look for in your own organisation:
- AI deployment decisions being made inside IT procurement frameworks designed for SaaS subscriptions. The risk profile is fundamentally different — training data provenance, output unpredictability, vendor concentration — and procurement frameworks built for Microsoft 365 are not adequate.
- No AI line on the risk register, or one with no leading indicators (see the sketch after this list). Lagging indicators on AI failure show up as headlines, not as audit findings.
- Workforce engagement on AI treated as a change-management afterthought rather than a stakeholder governance obligation. The ACTU has now told you which side they are on. Your workforce is paying attention.
- Reliance on vendor safety claims as a sole mitigation. If the board's only answer to "is this AI system safe?" is "Anthropic says so", you are doing what Mitchell's piece is criticising.
- "Wait and see" as a stated AI governance posture. The regulatory direction in Australia is visible. Waiting is a position, and it is the wrong one.
Boards should not optimise for compliance with a regulatory framework that does not yet exist. They should optimise for the substance of what good AI governance looks like — knowing what you have deployed and why, who is accountable, what leading indicators would tell you deployment is going wrong, and how the workforce and other stakeholders are being engaged. Do that, and you meet whatever framework arrives. Don't, and you spend the next two years in catch-up mode and lose strategic optionality on the way through.
The asymmetry is what should focus the board's attention. The cost of building AI governance discipline now — written risk appetite, deployment register, named accountability, KRI dashboard, formal workforce engagement — is low, and most of it is work the board should already be doing. The cost of being caught flat-footed when mandatory vetting, employee consultation requirements, or training-data attestation obligations land is high, and it shows up in the places boards least want it: regulator engagement, reputational damage, and contested deployments inside the organisation. There is no scenario where getting ahead of this is wasted investment.
The right response for a director to Mitchell's piece is not to argue with the policy proposal. It is to take the underlying signal seriously and act on it. Treat the ACTU's intervention as a leading indicator of where Australian AI regulation is heading and start building the governance framework that would let your organisation meet it on substance. That means a written AI risk appetite, a register of every AI deployment across the business, named accountability for each, leading indicators on a board KRI dashboard, formal engagement with the workforce as a stakeholder group, and an explicit position on the Voluntary AI Safety Standard. Don't outsource the question of social licence to your vendors, and don't outsource the question of regulatory readiness to Canberra. Both of those are the board's job.
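To make the register and attestation point concrete, here is a rough sketch of what one entry in an AI deployment register might capture so that the 24-hour regulator question from the list above is answerable. Every field name and value is invented for the purpose; it is a sketch of the substance, not a template.

```python
from dataclasses import dataclass

@dataclass
class AIDeploymentRecord:
    """One entry in an organisation-wide register of deployed AI systems (illustrative fields)."""
    system: str
    vendor: str
    business_use: str
    accountable_executive: str     # named accountability, not a committee
    training_data_provenance: str  # what we actually know, including "vendor has not disclosed"
    guardrails: str
    input_data_handling: str       # what happens to the prompts and files we feed it
    workforce_consulted: bool
    safety_standard_position: str  # stance on the Voluntary AI Safety Standard

    def attestation_summary(self) -> str:
        """The draft response if a regulator asked us to attest tomorrow."""
        return (
            f"{self.system} ({self.vendor}) is used for {self.business_use}. "
            f"Accountable executive: {self.accountable_executive}. "
            f"Training data provenance: {self.training_data_provenance}. "
            f"Guardrails: {self.guardrails}. Input handling: {self.input_data_handling}. "
            f"Workforce consulted: {'yes' if self.workforce_consulted else 'no'}. "
            f"Voluntary AI Safety Standard: {self.safety_standard_position}."
        )

# Hypothetical record; every value is invented for illustration.
record = AIDeploymentRecord(
    system="Contract-review assistant",
    vendor="ExampleVendor",
    business_use="first-pass review of supplier contracts",
    accountable_executive="General Counsel",
    training_data_provenance="vendor has not disclosed its training corpus; our documents are contractually excluded from training",
    guardrails="human review of all outputs before reliance; no customer data submitted",
    input_data_handling="inputs retained by the vendor for 30 days under the contract",
    workforce_consulted=True,
    safety_standard_position="adopted; gap assessment reviewed by the board this quarter",
)

print(record.attestation_summary())
```

If producing that paragraph for every AI system in the business would take more than a day, that is the gap to close now, ahead of any mandatory framework.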
Researched and drafted by Brad's agentic AI team. Edited and published by Brad Ferris.