
AI Governance Grows Up: Proxy Statements Signal a New Era of Board Accountability

  • awcollison8
  • Oct 6
  • 2 min read

For years, governance professionals like me have been preaching a simple truth: technology is no longer just an operational tool — it’s a governance issue. We saw it happen with cybersecurity. Then data privacy. Now, artificial intelligence is stepping onto the same stage — and this time, it’s entering through the front door of corporate accountability: the DEF 14A proxy statement.


Traditionally, proxy filings were about electing directors, approving compensation, and checking the usual ESG and risk management boxes. But in the last 18 months, something significant has shifted. Companies are beginning to explicitly disclose how their boards oversee AI — and that changes everything.


Why This Matters: AI Just Became a Board-Level Responsibility


When AI appears in a proxy statement — a regulated document sent to shareholders — it signals that AI is no longer just an innovation initiative. It’s a governance obligation.

Across sectors, we’re now seeing language such as:


“The Audit and Technology Committee oversees key risks related to artificial intelligence, automation, and data governance.”


Or:


“Our Responsible AI Framework is aligned with NIST and ISO standards and is subject to regular board review.”


This isn’t fluff. This is the early formation of AI risk governance models, right in plain view of regulators, shareholders, and stakeholders.
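
You don’t have to take my word for it: the trend is searchable. Below is a minimal sketch in Python, assuming the JSON endpoint behind EDGAR’s full-text search (efts.sec.gov) and its q/forms/date parameters; the response fields are assumptions worth verifying against current SEC documentation, and the contact address is a placeholder.

```python
import requests

# Hypothetical sketch: query the JSON endpoint assumed to back SEC EDGAR
# full-text search for proxy statements (form DEF 14A) that mention AI.
# Endpoint, parameters, and response shape are assumptions to verify
# against current SEC documentation before relying on them.
EDGAR_FTS = "https://efts.sec.gov/LATEST/search-index"

# SEC asks automated clients to identify themselves; address is a placeholder.
HEADERS = {"User-Agent": "governance-research contact@example.com"}

params = {
    "q": '"artificial intelligence"',  # exact-phrase search
    "forms": "DEF 14A",                # proxy statements only
    "startdt": "2023-01-01",           # roughly the window discussed above
    "enddt": "2025-10-06",
}

resp = requests.get(EDGAR_FTS, params=params, headers=HEADERS, timeout=30)
resp.raise_for_status()

# Response is assumed to be Elasticsearch-style JSON: hits.hits[]._source.
for hit in resp.json().get("hits", {}).get("hits", []):
    source = hit.get("_source", {})
    print(source.get("display_names"), source.get("file_date"))
```

Swap in phrases like “Responsible AI Framework” or “oversees key risks related to artificial intelligence” to track the exact disclosure language quoted above.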

The Pattern Is Clear: AI Governance Is Following the Same Maturity Path as Cybersecurity

| Stage | Cybersecurity (2015–2018) | AI Governance (2023–2025) |
| --- | --- | --- |
| Awareness | Cyber was mentioned vaguely as “IT risk.” | AI is referenced as “innovation and efficiency risk.” |
| Assignment of Oversight | Boards assigned cyber to Audit/Risk Committees. | AI oversight now appears under Technology, Audit, or “Innovation” Committees. |
| Early Frameworks | The NIST Cybersecurity Framework emerged. | The NIST AI RMF and ISO 42001 are becoming the reference points. |
| Disclosure Expectations | Cyber risk disclosure became mandatory in SEC reporting. | AI risk disclosures are next — the writing is already on the wall. |

The governance cycle is predictable — but AI raises distinct new challenges that boards cannot treat as a checkbox exercise.

The Governance Questions Boards Must Now Answer

  1. Who is accountable for AI oversight?

    Audit? Risk? Technology? A new AI Committee?


  2. What frameworks are we using?

    NIST AI Risk Management Framework? ISO 42001? Internal ethics guidelines?

  3. Do we have visibility into all AI activity across the enterprise?

    Because what’s more dangerous than AI? Shadow AI: unauthorized, untracked AI use across the business (see the sketch after this list).

  4. How are we balancing innovation with compliance?

    AI is both a growth driver and a regulatory landmine. Governance must enable the innovation while containing the risk.
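
To make question 3 concrete, here is a minimal sketch of one way visibility can start: matching egress or proxy logs against known AI service domains. Everything in it (the log format, user names, and domain list) is an illustrative assumption, not a vendor tool or a complete inventory method.

```python
from collections import Counter

# Hypothetical sketch: flag potential "shadow AI" usage by scanning an
# egress/proxy log for traffic to well-known AI service domains.
KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

# Each log line is assumed to look like: "<timestamp> <user> <destination-host>".
sample_log = [
    "2025-10-06T09:14:02Z jdoe api.openai.com",
    "2025-10-06T09:15:40Z asmith internal.example.com",
    "2025-10-06T10:02:11Z jdoe api.anthropic.com",
]

hits = Counter()
for line in sample_log:
    _, user, host = line.split()
    if host in KNOWN_AI_DOMAINS:
        hits[(user, host)] += 1

for (user, host), count in hits.items():
    print(f"possible shadow AI: {user} -> {host} ({count} request(s))")
```

A real program would pull from SIEM or DNS logs and cover far more endpoints, but the governance point stands: you cannot oversee what you cannot see.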

My Message to Fellow Governance Leaders: Lean In — or Get Left Behind

AI governance isn’t just about risk mitigation. It’s about strategic value protection and value creation. Boards that take AI seriously — that don’t just disclose oversight but demonstrate discipline — will attract investment, retain customer trust, and accelerate responsible innovation.

For those of us in information governance, records management, privacy, or risk roles, this is our moment. We’ve spent years building the foundation: controls, policies, metadata, lifecycle management, and audit defensibility. Now AI has made all of that mission-critical.

Final Thought

Proxy statements are more than compliance filings — they’re signals of what corporate America takes seriously.

And the message is now unmistakable:


AI is officially a governance issue.


Let’s make sure we — the stewards of responsible information — are the ones helping boards define how to govern it.