If you’ve ever participated in an organized sport, you’re likely well aware of the importance of context when it comes to evaluating your performance as a player. Say, for example, I play soccer every weekend (which I do). Let’s imagine I’m arguably the best defender on my team – or even across all the recreational players involved (it’s fun to pretend). I might start feeling pretty good about myself and how I perform on the pitch. Now imagine I’m suddenly pulled into an MLS game, playing against professionals on the field. I might be a good player in a limited pool – on weekends, against other amateur enthusiasts – but on a larger scale I wouldn’t rank, or even make the cut.

I bring this up not because I want to broadcast my lifelong aspirations to play professional soccer, but as an example of something I see in privacy programs. When it comes to incident response management, there are no established rankings of who is doing well and who is doing poorly. Privacy professionals operate, in many ways, in the dark: you may think you’re doing well, but without knowing the benchmark for how others in the industry are doing, you will never be able to accurately assess the performance of your privacy program. Are you operating at a club sport level, or are you a professional all-star?

Benchmarking helps you know where you rank and make better, more data-driven decisions about improvements. The benchmarking metrics we provide each month in this series are intended to give privacy professionals much-needed context for evaluating incident response performance.

This month, we’re introducing new quarterly benchmarking metrics. At the beginning of each quarter, we will provide the previous quarter’s statistics on key benchmarking metrics, so that you always have the most up-to-date figures to compare your program against, and we can illustrate how incident response is changing over time. The three metrics we will revisit on a quarterly basis are:

  • How many incidents rise to the level of a notifiable data breach?

  • What is the disposition of incidents – are they malicious in intent?

  • How long does it take to discover an incident and provide notification to affected individuals?
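If your organization keeps a structured incident log, all three of these metrics can be computed directly from it. Here is a minimal sketch in Python; the record fields (`is_notifiable`, `disposition`, `discovered`, `notified`) are illustrative names for this example, not the Radar schema:

```python
from datetime import date
from statistics import median

# Hypothetical incident records; field names are illustrative only.
incidents = [
    {"is_notifiable": False, "disposition": "unintentional",
     "discovered": date(2019, 7, 2), "notified": None},
    {"is_notifiable": True, "disposition": "malicious",
     "discovered": date(2019, 7, 10), "notified": date(2019, 8, 1)},
    {"is_notifiable": True, "disposition": "unintentional",
     "discovered": date(2019, 8, 5), "notified": date(2019, 8, 20)},
]

def notifiable_breach_rate(records):
    """Share of incidents that rose to a notifiable data breach."""
    return sum(r["is_notifiable"] for r in records) / len(records)

def disposition_breakdown(records):
    """Count incidents by disposition (e.g., unintentional vs. malicious)."""
    counts = {}
    for r in records:
        counts[r["disposition"]] = counts.get(r["disposition"], 0) + 1
    return counts

def median_days_to_notify(records):
    """Median days from discovery to notification, over notified incidents."""
    spans = [(r["notified"] - r["discovered"]).days
             for r in records if r["notified"]]
    return median(spans)
```

Tracked quarter over quarter, these three functions give you exactly the comparison points discussed in the rest of this article.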

It’s important to have these statistics to know how your organization is doing and to begin establishing what is standard for your incident response program as well as the privacy community. The ultimate goal in sharing this is to help your organization identify areas of opportunity to reduce enterprise risk, demonstrate the efficacy of your incident response program efforts, and derive data-driven insights that can prove return on investments you’ve made into your privacy program – and justify additional budget or headcount as needed.

A note on the data in this series: The incident metadata available in the Radar platform is representative of organizations that use automation best practices to help them perform a consistent and objective multifactor incident risk assessment. Below this article is a list of key definitions and terms used, as well as more information about the metadata pulled from the Radar product.

How many incidents rise to the level of a notifiable data breach?

In last month’s benchmarking article, we explored this question in depth, calling it the “Benchmarking 101” statistic to analyze. So far this year, this is how the measurement of incident vs. notifiable data breach has progressed quarter over quarter:

[Figure: Oct 2019 Benchmarking Graphics-01 – incidents vs. notifiable data breaches, quarter over quarter]

The data has remained fairly flat – which makes sense, as it represents organizations that consistently perform a multi-factor incident risk assessment and score the severity of the incident against the sensitivity of the data involved. This low rate of notifiable breaches is possible when you document and prove sufficient risk mitigation, and you gain efficiency by leveraging a system that reduces the time from discovery to notification.

Here are some ways to view these benchmarking figures to your privacy program’s benefit:

  • Your notifiable breach rate is higher: You may want to consider whether your organization has an objective, consistent way of evaluating risk. Otherwise, you may be in jeopardy of over-reporting, which can have real and lasting impacts in the form of fines and reputational damage, and could even open your organization up to the threat of private right of action, as in California under the CCPA.
  • Your notifiable breach rate is lower: Your risk assessment model may not be scoring the risk of harm to individuals using a consistent and defensible methodology, which can result in missed notifications, enforcement actions and fines from regulators, and, again, brand and reputational damage.

What is the disposition of incidents – are they malicious in intent?

This is a benchmarking statistic that can cause privacy professionals to lose sleep at night. What is causing the bulk of privacy incidents – malicious attacks from hackers? Disgruntled employees taking off with valuable data? Plain old human error? 

[Figure: Oct 2019 Benchmarking Graphics-02 – disposition of incidents]

We see in the figure above that, although ransomware or phishing attacks may take the headlines these days, in reality the majority of incidents are unintentional or inadvertent in nature. These are attributed to human error – for instance, emailing sensitive records to the wrong person. We dove into examples of these three incident designations in a benchmarking article last year, and we find that the breakdown of incident types has held steady since.

Here are some ways to view these benchmarking figures to your privacy program’s benefit:

  • Are you seeing more intentional but not malicious incidents? You may want to consider limiting access to data to only those who strictly need it to conduct day-to-day business. Are you sure you’re only collecting the data you absolutely need? Could it be further safeguarded with improved credentialing?
  • Are more of your incidents intentional and malicious? Obviously, security safeguards need to be a high priority for your organization. Secondarily, and perhaps even more concerning: are these high-profile, high-risk threat vectors overshadowing the more routine but equally important everyday incidents occurring within your organization? A critical element of compliance is that each incident – big or small – requires a documented and consistent incident risk assessment and breach determination, especially when a decision is made that the incident is not reportable. The burden of proof is on the organization to justify its decision.

How long does it take to discover an incident and provide notification to affected individuals?

With regulations imposing shorter and shorter notification timeframes, the minute you discover a privacy incident has taken place, it can feel like you’re racing the clock.

[Figure: Oct 2019 Benchmarking Graphics-03 – time to discover an incident and provide notification]

In another recent benchmarking article we looked at how these timeframes may change further when you examine specific industries – for instance, healthcare vs. financial services. What is critical about the timeframes represented above is that your organization’s ability (or inability) to meet the regulatory requirements for notification can carry very real consequences. In 2017, for example, the Office for Civil Rights issued its first enforcement settlement for lack of a timely breach notification. Since then, countless other enforcements have taken place across industries and around the world. The bottom line is that it very much benefits privacy practitioners and the public alike if we can streamline our incident response programs and reach notification decisions as fast as possible.

Here are some ways to view these benchmarking figures to your privacy program’s benefit:

  • Does your organization take longer to discover incidents? This could indicate an issue with training. Do employees know what constitutes a privacy incident, and whom to alert? Or maybe third-party contractors are the main source of your incidents. Are these third parties aware of their contractual requirements to provide notice? Are timelines specified in these agreements?
  • Does your organization take longer to investigate and then notify of an incident? This phase of incident response is very much within the control of privacy professionals, so there are many areas to consider in accelerating the process. When an incident is internally reported, are all the pertinent details generally included as part of incident intake? Does your team have the resources like libraries of up-to-date legal requirements under the applicable jurisdictions easily available? Do you have regular tabletop exercises and well-defined playbooks and processes for who does what, and when?
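One way to act on the questions above is to split each incident’s timeline into a discovery lag (occurrence to discovery) and a notification lag (discovery to notification), and flag anything that exceeds a regulatory window. A minimal sketch, assuming a hypothetical 30-day notification window (actual deadlines vary by jurisdiction – e.g., 72 hours under GDPR, 30–60 days under various US state laws):

```python
from datetime import date, timedelta

# Hypothetical regulatory window; substitute the deadline that applies
# in your jurisdiction.
NOTIFY_WINDOW = timedelta(days=30)

def timeline_report(occurred, discovered, notified):
    """Break an incident's timeline into discovery and notification lags."""
    discovery_lag = discovered - occurred
    notification_lag = notified - discovered
    return {
        "days_to_discover": discovery_lag.days,
        "days_to_notify": notification_lag.days,
        "window_missed": notification_lag > NOTIFY_WINDOW,
    }

# Illustrative example: 9 days to discover, 35 days to notify –
# which would miss the assumed 30-day window.
report = timeline_report(
    occurred=date(2019, 9, 1),
    discovered=date(2019, 9, 10),
    notified=date(2019, 10, 15),
)
```

Reviewing the two lags separately helps pinpoint whether training (discovery) or process (investigation and notification) is the bottleneck.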

What industry metrics are you tracking?

We will be revisiting these same figures in a couple months for a wrap up of the Q4 2019 metrics. In the meantime, I’m interested in learning from my fellow privacy pros: what metrics are you tracking? How do you measure success currently, and what industry reported figures would help you advance your program? And what can we as privacy professionals all do to establish benchmarks and continuously improve our efforts to safeguard the data entrusted to us?

To quote a famous soccer player, Mia Hamm, “Celebrate what you’ve accomplished, but raise the bar a little higher each time you succeed.”


Key definitions used in this article:

  • Incident: Unauthorized disclosure of personal information where multi-factor risk assessment is performed to decide whether it is a breach
  • External Incident: An incident caused by a third-party processor or service provider
  • Breach: An incident that requires notification to impacted individuals
  • Occurrence Date: Date the incident took place
  • Discovery Date: Date the entity became aware of the incident
  • Notify Date: Date of first notification to regulators or individuals

*About the data used in this series: Information extracted from Radar for purposes of statistical analysis is aggregated metadata that is not identifiable to any customer or data subject. The incident metadata available in the Radar platform is representative of organizations that use automation best practices to help them perform a consistent and objective multifactor incident risk assessment using a methodology that incorporates incident-specific risk factors (sensitivity of the data involved and severity of the incident) to determine if the incident is a data breach requiring notification to individuals and/or regulatory bodies. Radar ensures that the incident metadata we analyze is in compliance with the Radar privacy statement, terms of use, and customer agreements.