The problem with de-identification in the Consumer Privacy Protection Act

Posted by Shaun Brown | December 15, 2020 | Category: Privacy Reform

The recently tabled Consumer Privacy Protection Act (CPPA) would allow organizations to use and disclose de-identified information for certain purposes without consent. This makes sense, but there is a flaw: information that is de-identified according to the law is not even personal information. So privacy legislation shouldn’t apply. Yet, according to the proposed CPPA, de-identified information is personal information, excluded from only some of the CPPA’s requirements. This seems to defeat the purpose of referencing de-identification in the first place, while potentially redefining the concept of personal information.

What is de-identification?

Under the CPPA, to de-identify personal information means the following:

to modify personal information — or create information from personal information — by using technical processes to ensure that the information does not identify an individual or could not be used in reasonably foreseeable circumstances, alone or in combination with other information, to identify an individual.
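
The definition refers to "technical processes" without prescribing any. As a purely illustrative sketch (in Python, with hypothetical field names and techniques that the CPPA does not mandate), such a process might drop direct identifiers, tokenize stable identifiers, and generalize quasi-identifiers:

```python
# Purely illustrative: hypothetical record layout and techniques,
# not anything the CPPA prescribes or endorses.
import hashlib

def de_identify(record: dict, salt: str) -> dict:
    """Return a copy of `record` with direct identifiers removed or masked
    and quasi-identifiers generalized."""
    out = dict(record)

    # Drop direct identifiers outright.
    for field in ("name", "email", "phone"):
        out.pop(field, None)

    # Replace a stable ID with a salted hash so records can still be
    # linked internally without exposing the original identifier.
    if "customer_id" in out:
        out["customer_id"] = hashlib.sha256(
            (salt + str(out["customer_id"])).encode()
        ).hexdigest()[:16]

    # Generalize quasi-identifiers that could identify in combination.
    if "postal_code" in out:
        out["postal_code"] = out["postal_code"][:3]    # forward sortation area only
    if "birth_date" in out:
        out["birth_year"] = out.pop("birth_date")[:4]  # keep year only

    return out

print(de_identify(
    {"name": "Jane Doe", "customer_id": 40213,
     "postal_code": "K1P5J2", "birth_date": "1988-06-02"},
    salt="rotate-me",
))
```

Whether the output clears the "reasonably foreseeable circumstances" bar would still depend on context, including what other information a recipient could combine it with.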

De-identified information appears to be a new category of personal information that would remain within the scope of the CPPA, although certain uses and disclosures can be made without consent. De-identified information can be used by an organization internally for research and development purposes. It can be disclosed to government institutions, health care institutions, post-secondary institutions, or other entities prescribed in regulation, for “socially beneficial purposes”.1

The CPPA does not explicitly state that de-identified information is personal information. However, this is implied, as the CPPA applies only to activities involving personal information according to the sections of the law describing its purpose and application.2 There is nothing to suggest that the law is intended to apply to de-identified information in addition to personal information.

What is personal information?

To understand the problem, it’s necessary to consider the meaning of “personal information”, defined as “information about an identifiable individual”. There are two related and overlapping lines of inquiry under this definition. The first is whether the information is “about” an individual (as opposed to, for example, an object). The second is whether an individual is “identifiable”.

In the absence of statutory guidance, courts have used different language to interpret this definition. In 2007, the Federal Court of Appeal stated that an individual is identifiable if it is “reasonable to expect” that an individual could be identified from the information alone or combined with “sources otherwise available”.3 A year later the Federal Court of Canada adopted the standard put forward by the Privacy Commissioner of Canada: there must be a “serious possibility” of identifying an individual through the information alone or combined with “other available information”.4

More recently, the Federal Court found that “serious possibility” and “reasonable to expect” are effectively the same thing: more than mere speculation or possibility, but not probable on a balance of probabilities.5

The need for a different threshold

De-identification in the CPPA uses effectively the same threshold as personal information, but in reverse. We’ll call this the “serious possibility/reasonably foreseeable” threshold. The courts have said that information is personal if there is a serious possibility that an individual could be identified, which is equivalent to “reasonable to expect.” Under the CPPA, personal information becomes de-identified if there are no “reasonably foreseeable circumstances” in which an individual could be identified. So personal information that is de-identified under the CPPA should not be personal information according to our current understanding of personal information as interpreted by the courts. Except, in the CPPA, it is.

Here’s another way of looking at it. In our current world, information becomes personal when it rises above the threshold of serious possibility/reasonably foreseeable, as seen in figure 1 below. Yet, under the CPPA, information that is personal information becomes de-identified personal information when it crosses below the threshold of serious possibility/reasonably foreseeable, as seen in figure 2.

[Figure 1: information becomes personal information when the likelihood of identification rises above the serious possibility/reasonably foreseeable threshold. Figure 2: under the CPPA, personal information becomes de-identified personal information when it falls below the same threshold.]
An obvious question is when, if ever, does personal information become non-personal? In other words, once information becomes personal and within the scope of the CPPA, is it possible to transform it so that it is outside the scope of the CPPA? Currently, information that is sufficiently de-identified to no longer qualify as personal information is not regulated under PIPEDA (even if it is not truly anonymized). The effect of the CPPA seems arbitrary. If the information had been collected in a manner that never met the threshold for what constitutes personal information, it would never be subject to the law. However, because the information was, at some point, within the scope of the law, it is permanently trapped.
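
To make the structural point concrete, here is a toy model that reduces re-identification risk to a single score (a simplification; neither the courts nor the CPPA quantify the test this way). Both definitions pivot on the same boundary:

```python
# A toy model only: neither the courts nor the CPPA reduce the test to a
# numeric score. The point is structural, not quantitative.

THRESHOLD = 0.5  # arbitrary stand-in for "serious possibility"

def is_personal_information(reident_risk: float) -> bool:
    # Courts: personal if there is a serious possibility of identification.
    return reident_risk > THRESHOLD

def is_de_identified(reident_risk: float) -> bool:
    # CPPA: de-identified if identification is not reasonably foreseeable.
    return reident_risk <= THRESHOLD

risk = 0.3
print(is_personal_information(risk))  # False: should not be personal information
print(is_de_identified(risk))         # True: yet "de-identified personal information"
```

Any value below the boundary fails the courts' test for personal information while satisfying the CPPA's definition of de-identified information, which is precisely the tension described above.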

Even more confusing, does this alter the definition of personal information? If so, where is the new threshold? It seems that this would have to be lower under the CPPA than it already is.

It might be argued that there is a meaningful difference between “serious possibility/reasonable to expect” and “reasonably foreseeable circumstances”. But this isn’t tenable. When comparing “serious possibility” with “reasonable to expect”, the Federal Court said that it may be “impossible” to discern a meaningful difference. There’s no way the rest of us could be expected to differentiate between “reasonable to expect” and “reasonably foreseeable”.

Even less probable is an intentional effort to expand the definition of personal information, and in turn, the scope of the law. The government would have to be more explicit about such a significant change.

Most likely, this is just a well-intentioned idea whose flawed execution would leave the law needlessly confusing.

One potential solution is to modify the definition of “de-identify” by removing the reference to reasonably foreseeable circumstances, as follows:

de-identify means to modify personal information — or create information from personal information — by using technical processes to ensure that the information does not identify an individual. [struck out: or could not be used in reasonably foreseeable circumstances, alone or in combination with other information, to identify an individual]

This would create a threshold for de-identified information that is clearly distinct from the definition of personal information, which would seem to accomplish the objective of including de-identified information in the CPPA.

Another option is simply to remove all references to de-identification from the law. This may not be ideal, but if the threshold for de-identification is not modified to distinguish it from the definition of personal information, the law would be better off without it.

  1. Personal information disclosed for the purpose of a prospective business transaction would have to be de-identified.
  2. The purpose and application of the CPPA are defined in sections 5 and 6.
  3. Information Commissioner v. Transportation Accident Investigation and Safety Board, 2006 FCA 157 (CanLII), [2007] 1 FCR 203.
  4. Gordon v. Canada (Health), 2008 FC 258 (CanLII).
  5. Canada (Information Commissioner) v. Canada (Public Safety and Emergency Preparedness), 2019 FC 1279 (CanLII).


Comments

Chris Howerton, January 8, 2021 at 12:59 pm:

    We have written software for schools for about 30 years. Every once in a while, a bug surfaces that requires us to look at school data. Sometimes, it’s a LOT of data — as in: literally the entire school database.

    We organized things so that the servers are in the schools; it’s not an internet app, it runs on the school or district computers. You can imagine how much we DO NOT want the school database sent to us — but geez, sometimes we NEED it to solve the problem.

    So: what to do?

    We wrote a piece of software that we give to the district IT departments. It anonymizes the school data. Completely. For example, a phone number like 604-534-8293 might get changed to 342-995-1049. Random digit replaces real digit. Same with names. For photos, we have a few dozen pictures of hand-drawn cartoon characters (so a lot of students look the same after anonymization!).

    The IT department exports their database, runs it through the anonymizer and then sends it to us. Now we have a database that has the EXACT structure of the one that doesn’t work, and we can fix the bug without ever accessing their real data.

    It’s kind of like what Facebook does. They have a dossier on every person alive (well, most people). If you don’t use Facebook (I don’t), they’ll find you anyway because you have friends that mention your existence. So they “average” your data: in terms of who you are, you’re kind of an average of your friends.

    So it’s sort of like that: we are able to import the STRUCTURE of the school data, but with ABSOLUTELY ZERO personal data, no averaging can happen. So it’s all good. Almost always, it’s the STRUCTURE that caused the problem, and if it’s data then with everything being anonymized we find those kinds of bugs as well (for example, if the software cannot handle nonsensical phone numbers).

    My point being: complete anonymizing can work.
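
For concreteness, here is a minimal sketch of the digit replacement the commenter describes (in Python; the function is our illustration, not the commenter's actual software):

```python
# A minimal sketch of the digit replacement described above; the function
# name and details are illustrative, not the commenter's actual software.
import random
import string

def randomize_digits(value: str) -> str:
    """Replace every digit with a random digit, preserving format."""
    return "".join(
        random.choice(string.digits) if ch.isdigit() else ch
        for ch in value
    )

print(randomize_digits("604-534-8293"))  # e.g. "342-995-1049"
```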



