AI at the Tipping Point. Why Boards Can No Longer Wait to Govern AI Risk.
Artificial intelligence has moved far beyond hype. After years of quiet progress in machine learning and data infrastructure, the compute capacity is finally in place to unlock transformative value. This is why AI adoption is accelerating across every industry and every function. Yet as AI evolves, so do the risks. And boards are now squarely in the hot seat.
In a recent episode of the Privacy and Compliance Collective, RadarFirst CEO Zach Burnett sat down with two leaders shaping the global conversation on AI governance. Edna Conway of EMC Advisors and Chris Hetner of the National Association of Corporate Directors unpacked the real value of AI, the pressures facing boards, and the emerging expectations from regulators and investors.
The AI Boom Is Real. The Value Is Still Emerging.
Edna Conway reminds us that AI is not new. Machine learning has been at work for years. What is new is the scale. Compute power and storage have finally caught up with ambition. Enterprises can now extract meaningful outcomes from AI rather than experiment at the margins.
But how do we measure success? Conway cautions against equating value with workforce reductions. Efficiency gains may occur, but the real promise lies in reallocating human talent to higher-order innovation. What is still missing is a consistent set of metrics for measuring effectiveness across diverse use cases. Without them, organizations risk misunderstanding the real impact of AI adoption.
Boards Are Entering an AI Minefield
Chris Hetner, who advises thousands of board members through the NACD, describes a governance landscape under strain. At board summits, 70% of the conversations revolve around AI. Directors want to know whether AI will accelerate revenue, streamline operations, or improve time-to-market. At the same time, they worry about skill gaps and a lack of governance structures.
Hetner calls it an AI minefield. Companies are racing to adopt AI but often lack clarity on use cases, controls, or risk boundaries. The mantra of "shoot first, aim second" reflects a marketplace sprinting ahead without guardrails.
Investors Are Raising the Stakes
Major institutional investors and proxy advisors like BlackRock, ISS, and Glass Lewis are all signaling the same expectation. Heading into the 2026 proxy season, public companies will be expected to disclose how boards oversee AI governance, ethics, and risk. Directors cannot simply ask management for updates. They must demonstrate credible, defensible oversight.
What Good AI Oversight Looks Like
Conway offers five foundational questions every board should ask.
1. Who owns AI across the enterprise?
Ownership must be explicit, accountable, and cross-functional.
2. What is the documented governance framework?
A clear, codified model is essential.
3. Is AI integrated into enterprise risk management?
This includes cyber, privacy, procurement, and operational resilience.
4. How is AI monitored and reported to the board?
Directors must see structured reporting, not ad hoc updates.
5. Is the AI program subject to independent challenge?
Not necessarily audits, but periodic validation of assumptions and controls.
Conway notes that as AI becomes deeply embedded across the business, the historic rule of "nose in, fingers out" becomes harder to apply. Directors may need greater technical fluency to assess whether AI is appropriately integrated into cybersecurity, development pipelines, procurement practices, and supply chain risk management.
A Board Competency Gap Is Emerging
NACD surveys show that over 70% of directors still do not fully understand their organizations' AI and cyber exposures. Hetner believes the competency gap is not just a board issue. CEOs and CFOs must elevate their technological literacy because every enterprise is now a digital enterprise.
The challenge ahead is translating complex technical risk into business, financial, and operational terms that directors understand. This is where frameworks such as NIST's provide helpful flexibility. They offer structured guidance without imposing rigid certification mandates.
The Path Forward
Organizations seeking defensible AI governance must start with a resilience mindset. Where does AI live inside your workflows? How is it secured, monitored, and validated? What scenarios could go wrong? How will you respond?
Boards do not need to be technologists. They do need to ask informed, consistent questions. The companies that succeed will be those that embed AI governance into ERM, maintain transparency, and continuously test their assumptions.