AI is reshaping enterprise risk at a pace unmatched by previous technological shifts. For boards and executives, the challenge is not only understanding the technology but preparing for a regulatory environment that is still taking shape. In the latest episode of the Privacy and Compliance Collective, RadarFirst CEO Zach Burnett explores this topic with cybersecurity leaders Edna Conway and Chris Hetner.

Their message is clear. Regulatory pressure is coming. Investor expectations are rising. And the organizations that build defensible frameworks today will be best positioned tomorrow.

Regulation Is Fragmented. The Risk Is Real.

Hetner notes that the United States remains largely deregulatory toward AI compared with global counterparts. There is talk of federal action, but no clear path yet. In the absence of national standards, states may introduce their own AI laws, echoing the patchwork complexity we see in privacy regulation.

This fragmentation means enterprises must plan for varied, evolving requirements. Meanwhile, the SEC already requires disclosure of material incidents, AI-related or otherwise. If an AI agent leaks fifty million records or model drift causes a financial miscalculation, public companies must disclose that exposure.

Proxy advisors like ISS and Glass Lewis are signaling that boards will be evaluated on their oversight of AI governance. The cost of inaction is rising.

Insurance May Lead the Way

Conway argues that insurance markets will likely become the fastest driver of AI governance maturity. Just as cyber insurance reshaped how organizations approached information security, business insurers are already asking how AI risk should be assessed and priced.

The moment underwriting requires demonstrable AI governance, boards will have no choice but to strengthen their frameworks.

Resilience Is the New North Star

For Conway, resilience is the core principle that unifies cybersecurity, AI governance, procurement discipline, and operational continuity. Boards should ask:

  • What is our resilience plan?
  • Where does AI exist in our organization?
  • How is it governed within our ERM framework?
  • How do we conduct drills, tabletops, and scenario testing that include AI incidents?

Tabletop exercises for cyber are routine. Exercises for AI-driven failures are not, but they must become standard practice.

What a Defensible Framework Looks Like

Hetner outlines several governance anchors boards should expect management to establish.

1. Clear accountability and ownership

AI cannot be solely the responsibility of IT or security teams. It requires cross-functional partnership and a defined taxonomy.

2. An AI inventory and classification model

Organizations must document systems, data flows, risk levels, and business owners; a minimal sketch of what one inventory entry might capture follows this list.

3. Continuous monitoring and testing

This includes evaluating bias, drift, data quality, access patterns, and security deviations.

4. Integration into enterprise risk management

AI is not a standalone technology problem. It is a business risk with legal, financial, and operational implications.

5. Incident response and supplier engagement

Suppliers must participate in exercises. Contact protocols must be current. Third-party AI failures can be as damaging as internal ones.
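To make the inventory item above concrete, here is a minimal sketch of what a single entry in an AI inventory might capture. The field names, risk tiers, and example system are illustrative assumptions, not anything prescribed in the episode; the point is simply that every AI system should be traceable to a classification, a business owner, and its place in the ERM framework.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"


@dataclass
class AISystemRecord:
    """One inventory entry: what the system is, who owns it, and how it is classified."""
    name: str                      # e.g., "claims-triage-model" (hypothetical)
    business_owner: str            # an accountable person or function, not just the IT team
    data_flows: list[str] = field(default_factory=list)  # data sources and destinations the system touches
    risk_level: RiskLevel = RiskLevel.MODERATE
    in_erm_register: bool = False  # has this system been pulled into the ERM framework?


# Hypothetical example: a third-party chatbot that handles customer data.
inventory = [
    AISystemRecord(
        name="support-chatbot",
        business_owner="VP, Customer Operations",
        data_flows=["CRM records", "support transcripts"],
        risk_level=RiskLevel.HIGH,
        in_erm_register=False,
    ),
]

# One question a board might ask of this inventory:
# which high-risk systems are not yet covered by the ERM framework?
gaps = [r.name for r in inventory if r.risk_level is RiskLevel.HIGH and not r.in_erm_register]
print(gaps)
```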

Governance as a Leadership Imperative

Both experts stress that competency in AI governance must extend beyond boards. CEOs, CFOs, and operational leaders must elevate their understanding. Every organization is now a technology organization, regardless of industry.

Boards must not assume AI literacy will come naturally over time. They must cultivate it deliberately.

Preparing for What Comes Next

AI-native platforms like those built at RadarFirst offer organizations a way to incorporate governance and regulatory interpretation into their workflows rather than bolting them on later. This is essential as AI accelerates regulatory complexity and amplifies operational risk.

For leaders seeking to build readiness, the path forward is clear. Stay curious. Ask hard questions. Understand where AI lives in your processes. Strengthen your frameworks now before regulatory obligations solidify.

AI will continue to evolve faster than the rules that govern it. Resilient organizations will be those that can adapt with clarity, transparency, and a defensible approach to risk.