Privacy incident response is inherently complex, with a patchwork of regulations, affected populations, and data elements to consider. But there are additional factors in each industry that further tangle this web, from ambiguous PII definitions to conflicting layers of law within a single jurisdiction.

At our recent RadarFirst User Summit Series, privacy officers from Fortune 50 and Fortune 100 financial institutions shared with Summit Series participants how different finance and insurance organizations handle some of these challenges.

Dealing with Ambiguity in PII Definitions

The first determination a privacy team must make in any incident investigation is whether the incident involves PII or PHI and whether there is any risk of harm. Unfortunately, the answers are not always clear, particularly in the finance industry, where the definition of “personal information” can vary by jurisdiction.

For example:

  • In Pennsylvania, it’s defined as an individual’s name in combination with a financial account number plus a required security code, access code, password, etc.
  • In Hawaii, it’s defined as a name in combination with an account number, with no additional data elements needed to qualify as PII.

As our panelists pointed out, in the U.S. alone, privacy teams have to navigate 50 different state data breach laws, each with its own interpretation of what data elements are considered PII.

The complexity multiplies with definitions from U.S. federal regulations such as GLBA and international laws, including GDPR. And guidance from regulators doesn’t necessarily help with the confusion. One of the panelists reported finding ten definitions of PII on one regulatory website, between the site content and featured publications.

Sometimes common sense provides answers.

For example, a Social Security number or financial account number without a name probably presents a low risk, although GLBA considers that personal information. A credit card account number probably presents more risk than an account number for a mortgage or auto loan. (One of the panelists joked, “If someone had access to just my mortgage account number, about all they could do is pay it off. If someone did that, I’d be glad it was disclosed!”)

Cutting through the Ambiguity

For privacy incident response, it is crucial to devise a policy that addresses these puzzles and handles them consistently.

First, our panelists recommended identifying the data elements that are unique to your business and that can cause uncertainty in risk assessments.

83 percent of session participants said they deal with unique data elements that are meaningful to their organization.

Then, develop a strong data classification scheme with clear definitions of what will and won’t be considered PII plus the risk levels associated with different data elements and combinations.
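A classification scheme like this can be expressed as a simple lookup from combinations of exposed data elements to a risk level. The sketch below is a minimal illustration of the idea, assuming hypothetical element names and risk tiers; it is not any specific law's or product's classification.

```python
# A minimal sketch of a data classification scheme: combinations of
# exposed data elements map to a risk level. Element names and tiers
# are illustrative assumptions, not any specific regulation.

# Combinations (as frozensets) that qualify as PII, with a risk level.
CLASSIFICATION = {
    frozenset({"name", "ssn"}): "high",
    frozenset({"name", "account_number", "access_code"}): "high",
    frozenset({"name", "account_number"}): "medium",  # Hawaii-style rule
    frozenset({"account_number"}): "low",             # no name attached
}

def classify(exposed: set) -> str:
    """Return the highest risk level triggered by any qualifying
    subset of the exposed elements, or 'none' if nothing matches."""
    order = {"none": 0, "low": 1, "medium": 2, "high": 3}
    best = "none"
    for combo, level in CLASSIFICATION.items():
        if combo <= exposed and order[level] > order[best]:
            best = level
    return best
```

With a table like this agreed upon in advance, every investigator classifies the same combination the same way, which is the consistency the panelists were after.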

To ensure consistency, aim to automate.

Radar already takes into account data elements regulated under various breach notification laws. A recently-introduced “treats-like” feature lets privacy teams specify custom data elements to be treated in the same way as pre-defined data elements when assessing incident risk. See how Radar works >

For example, a financial institution might specify that an account number in combination with an access code be “treated like” a bank credit card number in combination with a PIN when assessing incident risk.
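Conceptually, a “treats-like” rule is a normalization step: custom elements are mapped onto pre-defined ones before the standard risk rules run. The sketch below illustrates that concept with made-up element names; it is not Radar's actual implementation.

```python
# Hypothetical sketch of a "treats-like" mapping: custom data elements
# are normalized to pre-defined elements before risk assessment.
# Element names are illustrative; this is not Radar's implementation.
TREATS_LIKE = {
    "brokerage_account_number": "bank_credit_card_number",
    "online_banking_access_code": "pin",
}

def normalize(exposed: set) -> set:
    """Replace each custom element with the pre-defined element it is
    treated like, so the standard risk rules apply unchanged."""
    return {TREATS_LIKE.get(element, element) for element in exposed}
```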

This provides the privacy team with a more accurate picture of each incident and more accurate metrics, while allowing Radar’s Breach Guidance Engine™ to automate the incident risk assessment for the unique data elements.

Assessing the Risks of Masked Data Elements

Another point of ambiguity comes from exposure of masked data elements.

While masked personal data elements should theoretically pose no risk of harm, in some cases, personal information could be inferred. One of our participants gave an example where an account number was exposed, together with an alpha code for the account-holder’s name. If the name could be inferred by a determined party, the incident would have to be treated as an unauthorized disclosure.

Different international, federal, and state laws have different rules for masked information when determining data breach notification requirements, but the combination of exposed elements really determines whether masked elements could be linked back to individuals. Those combinations should be considered and reflected in the organization’s data and risk classification scheme.
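The panelists' point about linkable combinations can be reflected in the classification scheme as a rule table: a masked element is treated as risky only when it is accompanied by elements that could re-identify the holder. The inference rules below are illustrative assumptions.

```python
# Sketch: a masked element is flagged only when accompanying elements
# could link it back to an individual (e.g. the alpha-code example).
# The inference rules here are illustrative assumptions.
INFERENCE_RISK = {
    # masked element: accompanying elements that could re-identify it
    "masked_account_number": {"name_alpha_code", "name", "email"},
}

def masked_element_risky(masked: str, accompanying: set) -> bool:
    """True if any accompanying element could re-identify the holder."""
    return bool(INFERENCE_RISK.get(masked, set()) & accompanying)
```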

Using Metrics to Drive Mitigation and Data Classification

One of the benefits of automation in incident response is the ability to easily identify and report on trends. For instance, frequent disclosures of the same data elements, or conversely, a unique disclosure, can point to areas where mitigation or stronger data classification is needed.

Further, if a given data element is exposed frequently through a certain process or document, perhaps that element could be masked or removed from that context, or perhaps training is needed to reduce the chances of exposure. Or perhaps the data element is being treated as higher risk than it needs to be, in which case the data classification may need to be updated.

If you see a new data element suddenly cropping up in incidents, perhaps there’s a new process that needs to be reviewed for security and privacy. There’s also the possibility that people are misreporting the data elements involved in incidents, requiring some clarification in the incident intake form.
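Spotting these trends amounts to counting how often each data element appears across incidents and flagging the outliers. A minimal sketch, assuming incidents are recorded as sets of element names and using an illustrative frequency threshold:

```python
# Sketch: flag data elements that appear unusually often across
# incidents, pointing to processes that may need mitigation or a
# data classification update. The threshold is an assumption.
from collections import Counter

def frequent_elements(incidents: list, threshold: float = 0.5) -> list:
    """Return elements exposed in more than `threshold` of incidents."""
    counts = Counter(element for incident in incidents for element in incident)
    total = len(incidents)
    return sorted(e for e, c in counts.items() if c / total > threshold)
```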

Handling Notification Exceptions from U.S. State Laws

A number of U.S. state laws exempt federally regulated entities from notification requirements when they meet federal notification requirements.

In theory, these exemptions should simplify the notification process and save costs but, in practice, they can add complication because exemptions vary from state to state. For example, Arizona grants a complete exemption from notification requirements where federal regulations apply, whereas New York exempts from notification to affected individuals but still requires notification to the state Attorney General, department of state, state police, and consumer reporting agencies.

So how do organizations track and apply this web of exemptions?

Not all of them do. In an online poll conducted during the User Summit Series session, 29 percent of respondents said they do have a company policy around exemptions, while 43 percent do not. (The rest were not sure.)

Organizations that do leverage exemptions may work with internal and outside counsel to build a “legal grid” of applicable laws and exemptions. However, maintaining a grid is resource-intensive, and applying it consistently is difficult. Radar eliminates the need for such a manual process, automatically accounting for exemptions during the incident risk assessment and notification recommendation phases.
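A legal grid like this boils down to a per-state lookup: given that a federal exemption applies, which notifications (if any) are still owed? The sketch below mirrors the Arizona and New York examples above; all other details are illustrative assumptions, not legal advice.

```python
# Sketch of a "legal grid" lookup: per-state exemption rules for
# federally regulated entities. The AZ and NY entries reflect the
# examples in the text; everything else is an illustrative assumption.
EXEMPTION_GRID = {
    # state: notifications still required despite the federal exemption
    "AZ": set(),  # complete exemption when federal regulations apply
    "NY": {"attorney_general", "department_of_state",
           "state_police", "consumer_reporting_agencies"},
}

def remaining_notifications(state: str, federally_regulated: bool):
    """Return notifications still owed in `state`, or None when no
    exemption applies and the full state law governs."""
    if federally_regulated and state in EXEMPTION_GRID:
        return EXEMPTION_GRID[state]
    return None  # no exemption: follow the state's full requirements
```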

Check out Radar’s free tool – Breach Law Radar >

Judging Incident Severity Levels

Our audience in this session raised another complex point: judging the severity of an incident, especially from the infosec versus the privacy perspective.

Mahmood Sher-Jan, RadarFirst CEO, pointed out that an infosec team will tend to judge incident severity based on the number of records involved—a “severe” incident might involve thousands or millions of records—whereas the privacy team judges based on the risk of harm to individuals, regardless of how many were impacted. (In fact, the Radar community was instrumental in persuading the framers of GDPR not to base its risk assessments simply on volume.)

Our panelists agreed that assessing severity is not always clear cut, giving examples where many people’s information might be exposed, but the data elements involved posed little risk. So, severity has to be assessed based on not only volume, but the combination of data elements involved, looking realistically at the risk of harm to the individuals affected.
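One way to document that in a risk classification scheme is a severity function that weighs element risk first and volume second, so a high-volume exposure of low-risk elements doesn't automatically register as severe. The weights and tiers below are illustrative assumptions, not a regulatory standard.

```python
# Sketch: severity that combines element risk with record volume,
# where low-harm elements stay low-severity even at high volume.
# Weights and tiers are illustrative assumptions.
def severity(element_risk: str, record_count: int) -> str:
    """Combine element risk ('low'/'medium'/'high') with volume."""
    risk_score = {"low": 1, "medium": 2, "high": 3}[element_risk]
    volume_score = 1 if record_count < 100 else 2 if record_count < 10_000 else 3
    if element_risk == "low":
        return "low"  # little risk of harm, regardless of volume
    combined = risk_score * volume_score
    return "severe" if combined >= 6 else "moderate" if combined >= 3 else "low"
```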

Again, incident response can be streamlined if severity factors are reviewed regularly and documented in the organization’s risk classification scheme.

See how Radar assesses privacy incidents > 

Preparing for New Insurance Model (NAIC) Data Security Laws

Financial organizations that are involved with insurance in the U.S. are beginning to be affected by a new set of laws.

In 2017, the National Association of Insurance Commissioners (NAIC) published a model law for insurance data security, and as of August 2020, 11 states had adopted versions of the new NAIC laws and 4 states had new laws pending. In addition to security requirements, these laws create breach notification obligations. As with other regulations, there are subtle differences from state to state. For example, Virginia customized the NAIC model to exempt licensees and expanded the definition of NPI (“nonpublic information”) to include passport numbers and military identification numbers.

While not all financial institutions have an insurance component, it’s helpful to be aware of these new regulations in case an incident ever falls within their scope. (One participant described it as an “in case of emergency, break glass” scenario.)

Fortunately, Radar has the new NAIC-based laws built into its legal engine, so Radar users will have no need to break actual glass.

Handling Incidents as a Data Processor

Handling incidents as a data processor is another process that is routine for some companies and the exception for others.

76 percent of session participants reported that their organization acts as the processor/third party at least some of the time.

Our panelists described several best practices for handling these situations.

  • Determine and document in advance the relationships where your organization acts as a data processor.
  • Update incident reporting forms to ask whether your organization is the processor and, if so, to ask who is the data controller.

By doing this upfront work, our participants using Radar were able to see immediately both regulatory notification requirements and contractual notification obligations to the data controller.

They also recommended using breach notification templates that can be loaded with incident information and sent to the data controller, who can work with their own counsel to determine what information will be sent to their clients.

Taming the Complexity Beast

Working through the complexities of privacy incident response isn’t for the faint of heart.

But the panelists and participants at our User Summit Series outlined approaches that go a long way towards taming the beast:

  • Keep up with laws to anticipate new issues
  • Create a comprehensive data element and risk classification, then review and update it often
  • Automate as much as you can.

To help you stay ahead of regulatory changes, you can take advantage of Radar’s Global Data Breach Notification Law library, a free resource with hundreds of global privacy laws, rules, and regulations that’s kept current on both existing and proposed legislation. It’s our way of supporting our brave privacy defenders, because no beast tamer ever faced a foe more fearsome than the ever-morphing face of incident response.

