For the past decade, boards have steadily evolved their oversight of cybersecurity. AI governance has now accelerated that evolution, forcing a visible shift: oversight of AI and cyber risk is no longer simply good governance practice but a standard on which investors and regulators evaluate boards.

Beginning with the 2026 proxy season, major institutional investors and proxy advisors, including BlackRock, ISS, and Glass Lewis, will expect companies to demonstrate how their boards oversee both AI and cyber risk in proxy disclosures. This marks a major turning point in how corporate boards are evaluated, held accountable, and perceived by shareholders.

This isn’t about waiting for AI regulation. It’s about meeting investor expectations, fulfilling fiduciary duty, and ensuring leadership legitimacy.

Why Investor Expectations Now Define AI and Cyber Risk Oversight

Organizations like BlackRock and Glass Lewis shape board governance standards across industries.

BlackRock, the world’s largest asset manager with over $10 trillion under management, influences director elections through its stewardship priorities. A negative voting stance from BlackRock can reshape board composition and signal to the market that a company’s risk management is weak.

Glass Lewis and ISS, the two largest proxy advisory firms, guide shareholder voting on governance, compensation, and oversight. Their recommendations influence whether directors keep their seats, how committees are structured, and how effectively regulatory risk management is implemented.

Ignoring these expectations can make companies appear:

  • Poorly governed
  • Unprepared for AI regulation
  • Opaque about risk oversight

That perception increases both shareholder pressure and regulatory scrutiny.

Why AI Governance Has Become a Board-Level Imperative

AI introduces risk at unprecedented scale and speed. It can:

  • Amplify system failures
  • Accelerate the impact of misuse
  • Obscure decision-making logic
  • Move data beyond organizational boundaries

This elevates AI from a technology initiative to a business continuity and governance issue.

As Chris Hetner, former Senior Advisor for Cybersecurity to the SEC Chair, notes:

“Compliance doesn’t translate to sound risk management. This really boils down to enterprise risk.”

AI changes how decisions are made, how data flows, and how accountability is assigned, all matters of direct concern to investors and regulators. For this reason, AI oversight now falls squarely under board governance and fiduciary responsibility.

What Boards Must Demonstrate in 2026 Disclosures

Boards are expected to show clear ownership, evidence, and control, not just policy documentation. Disclosures should include:

  1. Committee Oversight: Identify which committee (Risk, Audit, or dedicated AI & Technology Oversight) is responsible for AI and cyber risk.
  2. Expertise: Disclose whether the board has AI or cybersecurity expertise, or uses external advisors.
  3. Risk Identification: Outline how the company assesses AI-related risks (operational, ethical, compliance, and reputational).
  4. Governance Practices: Explain data protection, ethical use, transparency, and accountability processes.
  5. Materiality Escalation: Define how AI or cyber incidents are evaluated for materiality and reported.

This shift aligns with emerging AI regulatory frameworks, emphasizing transparency and accountability over documentation volume.

The Silent Threat of Shadow AI in Enterprise Systems

Most organizations already use AI tools, often without formal approval. Employees interact with ChatGPT, Copilot, Notion AI, and other embedded systems. This “shadow AI” creates regulatory risk through:

  • Unmonitored data exposure
  • Unapproved algorithmic decision-making
  • Lack of audit trails

As Hetner emphasizes:

“You can outsource AI capability, but you cannot outsource AI risk.”

Boards must understand where AI operates in their organization, not just where it was intended to operate.
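
To make that mapping concrete, below is a minimal sketch of one common discovery technique: scanning an outbound web-proxy log for traffic to known AI-service domains. The CSV log format, the file name proxy_log.csv, and the short domain list are illustrative assumptions, not a prescribed method or any particular vendor’s tool.

```python
import csv
from collections import Counter

# Illustrative sample of AI-service domains; a real inventory would be
# far larger and maintained alongside the sanctioned-tool list.
AI_SERVICE_DOMAINS = {
    "chatgpt.com",
    "api.openai.com",
    "copilot.microsoft.com",
    "claude.ai",
}

def find_shadow_ai(log_path: str) -> Counter:
    """Count requests per AI-service domain seen in a proxy log.

    Assumes CSV rows of the form: timestamp,user,destination_host.
    """
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.reader(f):
            if len(row) != 3:
                continue  # skip malformed or blank lines
            host = row[2].strip().lower()
            # Match the domain itself or any subdomain of it.
            if any(host == d or host.endswith("." + d)
                   for d in AI_SERVICE_DOMAINS):
                hits[host] += 1
    return hits

if __name__ == "__main__":
    for host, count in find_shadow_ai("proxy_log.csv").most_common():
        print(f"{host}: {count} requests")
```

Findings from a scan like this feed directly into the AI-system mapping and risk classification steps discussed later.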

AI Governance: Why Leadership Can’t Let ChatGPT Run the Show

Generative AI like ChatGPT can simulate reasoning but cannot produce evidence. It lacks the ability to:

  • Map decisions to regulatory frameworks
  • Demonstrate compliance with AI regulations
  • Produce auditable outcomes

Boards are accountable for decisions made with AI. Without explainable and defensible AI systems, companies expose themselves to shareholder litigation and to breakdowns in regulatory risk management.

Why AI Governance Cannot Wait for Regulation

This is no longer a technology or compliance issue. It is a governance and fiduciary risk issue.

AI-driven errors can:

  • Escalate into reportable incidents
  • Cross jurisdictional boundaries
  • Trigger regulatory investigations

Boards must act now to establish AI governance frameworks that align with evolving AI regulations and investor expectations.

Where Boards Should Focus Now

To prepare for 2026 disclosures, boards should prioritize:

  1. Clarifying Ownership: Determine which committee leads AI governance.
  2. Mapping AI Systems: Identify sanctioned and shadow AI in the organization.
  3. Classifying Risk: Define high-, medium-, and low-risk AI systems (a simple register sketch follows this list).
  4. Ensuring Human Accountability: Establish human oversight for high-risk AI decisions.
  5. Reporting Regularly: Create dashboards for ongoing oversight.
  6. Testing for Failures: Run exercises for data leakage, AI misclassification, and incident escalation.
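
To picture how items 2 through 4 might fit together, here is a minimal sketch of a tiered AI-system register, assuming simple illustrative criteria (systems that affect individuals or touch regulated data land in the high-risk tier). The field names, example systems, and tiering rules are assumptions for illustration, not a prescribed framework.

```python
# Minimal sketch of a tiered AI-system register for board reporting.
# Field names and tiering rules below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    owner: str                    # accountable human (item 4)
    sanctioned: bool              # approved vs. shadow AI (item 2)
    affects_individuals: bool     # e.g. hiring, credit, or health decisions
    handles_regulated_data: bool  # e.g. personal or financial data

def risk_tier(system: AISystem) -> str:
    """Assign a high/medium/low tier from simple, auditable criteria."""
    if system.affects_individuals or system.handles_regulated_data:
        return "high"
    if not system.sanctioned:
        return "medium"  # unknown exposure until formally reviewed
    return "low"

# Hypothetical entries, showing how the register would read on a dashboard.
register = [
    AISystem("resume-screener", "VP, HR", True, True, False),
    AISystem("marketing-copy-bot", "CMO", False, False, False),
]
for system in register:
    print(f"{system.name}: {risk_tier(system)} risk (owner: {system.owner})")
```

Keeping the criteria this explicit is the point: a tier assignment a board can explain is more defensible than a score no one can reconstruct.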

Effective regulatory risk management is about explainable, informed, and defensible oversight, not eliminating all risk.

The Leadership Signal: What Investors Will Look For

Boards aren’t expected to eliminate AI risk. They’re expected to know where it exists, how it’s managed, and how quickly it can be contained.

Companies that implement a clear AI governance framework will appear:

  • Well-governed
  • Prepared for AI regulation
  • Trusted by regulators and shareholders

Companies that delay will face questions about oversight, transparency, and accountability, and those questions erode investor confidence and brand credibility.

As Hetner concludes:

“Board oversight of AI is going to be the new normal. The disclosures start to get real in 2026.”

Some boards will lead. Others will catch up. But all will be judged on their ability to govern AI risk responsibly.