
Why AI Risk Demands an Incident-First Approach

Insights from IAPP Global Summit 2026 in Washington, DC

By Zach Burnett, CEO of RadarFirst & Kalinda Raina, Vice President and Chief Privacy Officer at Airbnb

Artificial intelligence is advancing faster than organizations’ ability to govern it. The question is no longer whether AI will introduce risk; it already has. The real challenge is how organizations are preparing to manage the incidents AI inevitably produces.

1. AI Is Driving a Measurable Increase in Incidents

AI is no longer a theoretical risk vector; it is actively reshaping the incident landscape. According to IBM’s 2025 Cost of a Data Breach Report, 1 in 6 data breaches now involves attackers using AI, most often through AI-generated phishing and deepfake impersonation.

What makes this shift significant is not just the presence of AI, but its scalability. AI enables attackers, and increasingly internal systems, to generate incidents at volume. What were once isolated events are becoming repeatable patterns that organizations must manage consistently at scale.

Anthropic’s Mythos model, for instance, saw its rollout shaped by U.S. government engagement over national security concerns. As a result, organizations are facing a steady increase in both the frequency and complexity of incidents tied to AI technologies.

2. There Is No Clear Definition of an “AI Incident”

Despite the rise in AI-related risk, organizations lack a shared understanding of what constitutes an incident. Traditional definitions, centered on data breaches or security failures, do not fully capture the realities of AI systems.

AI introduces a broader category of “events”: harmful outputs, hallucinations, biased decisions, or even market-moving misinformation. These may not trigger traditional incident thresholds, but they can still create real legal, reputational, and financial consequences.

Even fictional AI-generated content has demonstrated the ability to drive real-world impact: in May 2023, a fabricated AI-generated image of an explosion near the Pentagon briefly moved U.S. stock markets before it was debunked.

This ambiguity is not temporary. AI systems will, by their nature, continuously produce unexpected outcomes. Organizations must expand their frameworks to recognize and manage AI-driven events, not just conventional incidents.

3. Litigation and Boards, Not Legislation, Will Drive AI Incident Management

While global AI regulation continues to evolve, it remains fragmented and uncertain. Legal accountability, however, is already established. Organizations are responsible for the outcomes of the technologies they deploy, even when AI generates those outcomes.

This shifts the urgency. Organizations cannot wait for regulatory clarity when liability already exists. In practice, litigation risk is becoming the primary forcing function behind AI governance, accelerating the need for operational readiness today, not years from now.

From Policy to Practice: Incident Management as the Foundation of AI Governance

The gap between AI adoption and governance remains significant. In 2025, 63% of organizations reported they do not yet have formal AI governance policies in place (IBM, Cost of a Data Breach Report 2025).

This lack of preparedness underscores a broader issue. Governance strategies often exist as principles rather than operational capabilities.

To close this gap, organizations must build structured event and incident management programs that operationalize AI governance. This includes:

  • Detecting AI-driven events and anomalies
  • Defining thresholds for escalation
  • Standardizing triage and risk assessment
  • Enabling consistent, auditable response processes
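To make these steps concrete, the skeleton of such a program can be sketched in a few lines of code. This is a minimal illustration, not RadarFirst's implementation: the event categories, the severity scale, and the `ESCALATION_THRESHOLD` value are all hypothetical assumptions chosen for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical threshold at which a logged AI "event" escalates
# into a formally managed incident (an assumption for illustration).
ESCALATION_THRESHOLD = 7

@dataclass
class AIEvent:
    """A broader AI 'event': harmful output, hallucination, biased decision, etc."""
    source: str        # detecting system, e.g. "chatbot", "scoring-model"
    category: str      # e.g. "hallucination", "bias", "misinformation"
    severity: int      # 1 (negligible) .. 10 (critical), assigned at triage
    detected_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def triage(event: AIEvent) -> dict:
    """Apply one standardized, auditable triage decision to an AI event."""
    escalate = event.severity >= ESCALATION_THRESHOLD
    return {
        "source": event.source,
        "category": event.category,
        "severity": event.severity,
        "status": "incident" if escalate else "logged-event",
        "detected_at": event.detected_at.isoformat(),  # auditable timestamp
    }

# A low-severity hallucination is logged for pattern analysis; a
# high-severity biased decision escalates into the incident process.
log = [
    triage(AIEvent("chatbot", "hallucination", severity=3)),
    triage(AIEvent("scoring-model", "bias", severity=9)),
]
```

The point of the sketch is the shape, not the code: every event passes through the same detection, thresholding, and triage path, producing a timestamped record that can later be audited and defended.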

Incident management is no longer a downstream function. It is the mechanism through which AI governance becomes real.

Conclusion: Governance Must Be Built for the Incidents AI Will Create

AI’s greatest constraint is not capability, but operationalization. Organizations that succeed will not be those that simply adopt AI, but those that build the infrastructure to manage its consequences.

In this new era, AI governance is not static. It is continuous, event-driven, and deeply operational. Companies that invest in incident-first governance models will be best positioned to manage risk, meet legal expectations, and maintain trust in an AI-driven world.

Let’s Get Started

Trusted by leading organizations, RadarFirst enables teams to manage incidents with speed, consistency, and defensibility by standardizing how incidents are captured, assessed, and actioned.