Many companies that handle personal information put their users at ease by saying that all the data is “anonymized.” If you don’t know any better, that sounds reassuring.
However, the method most companies use to anonymize data and the size of modern databases make it easy for attackers to re-identify individuals. From medical records to cell phone data sets, it only takes about a dozen pieces of information to find the person behind each “anonymous” record.
Part of our mission at Proton is to make sure people understand the privacy risks of sharing data. Maintaining your data security means sharing your data only with trustworthy organizations that are clear about what data they collect and what they do with it.
Everyone leaves a trace
By definition, truly anonymized data is stripped of every element that could identify an individual. (In this article, we’ll refer to this individual as the “data subject,” borrowing the GDPR term.) The most common method of anonymization is to remove personally identifiable information from a database: your name, your birth date, your phone number, your home address, and so on.
On the surface, this might seem like enough to protect your privacy. However, as you begin overlapping different types of data, you can start to identify people. Indeed, one data anonymization company, Aircloak, even acknowledges that true anonymization is extremely difficult: “as is the case with IT security, no 100% guarantee can be given, and often there is the need for a risk assessment.”
Here’s an example of re-identification from the Journal of Technology Science that can give you an idea of how this might work. In it, an “anonymous” medical record can be cross-referenced with another source of information (in this case a newspaper brief about a motorcycle crash) to identify the patient’s name.
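To make the mechanics concrete, here is a minimal sketch of such a linkage attack in Python. All records, names, and field values are invented for illustration; real attacks work the same way, just at database scale.

```python
# Hypothetical sketch of a linkage attack: an "anonymized" medical
# database still contains quasi-identifiers (ZIP code, birth date, sex)
# that can be joined against a public record naming the person.
# All data here is invented for illustration.

anonymized_medical = [
    {"zip": "02138", "birth_date": "1945-07-31", "sex": "M", "diagnosis": "fracture"},
    {"zip": "02139", "birth_date": "1962-03-14", "sex": "F", "diagnosis": "asthma"},
]

# A newspaper brief about a motorcycle crash names the patient and
# happens to reveal the same quasi-identifiers.
news_item = {"name": "J. Doe", "zip": "02138", "birth_date": "1945-07-31", "sex": "M"}

QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")

def link(record, public):
    """Return True if all quasi-identifiers match exactly."""
    return all(record[k] == public[k] for k in QUASI_IDENTIFIERS)

matches = [r for r in anonymized_medical if link(r, news_item)]
for r in matches:
    # The "anonymous" record now has a name attached to it.
    print(f'{news_item["name"]} -> {r["diagnosis"]}')
```

Note that nothing in the medical records is a name or an ID number; the join succeeds purely on attributes most people would not consider identifying on their own.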
It only takes 15 data points to make 99.98% of people identifiable in a database of 7 million people, according to one paper published in Nature.
Fifteen data attributes may sound like a lot, but it isn’t much in practice. The same paper references the Experian data breach, which leaked an “anonymized” database containing 248 data points on 120 million Americans. Major political campaigns also keep massive databases (and share them with their allies) containing hundreds of data points on their data subjects.
Re-identification becomes substantially easier when a database contains fewer people. One investigation needed only four data points. There are dozens of other examples.
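The relationship between the number of attributes, the population size, and uniqueness can be demonstrated with a small simulation. This is a toy model with invented parameters: attributes are drawn uniformly at random, whereas real attributes are correlated and skewed, which makes real people even easier to single out.

```python
import random

random.seed(0)

def make_population(n, num_attrs, cardinality=10):
    # Each person is a tuple of attribute values, each drawn uniformly
    # from `cardinality` possible values.
    return [tuple(random.randrange(cardinality) for _ in range(num_attrs))
            for _ in range(n)]

def fraction_unique(population, k):
    """Fraction of people whose first k attribute values are unique."""
    counts = {}
    for person in population:
        key = person[:k]
        counts[key] = counts.get(key, 0) + 1
    return sum(1 for p in population if counts[p[:k]] == 1) / len(population)

people = make_population(100_000, num_attrs=8)
for k in (2, 4, 6, 8):
    # Uniqueness climbs steeply as attributes are added.
    print(k, round(fraction_unique(people, k), 3))

# With a smaller population, far fewer attributes are needed to
# single people out:
small = make_population(1_000, num_attrs=8)
print(round(fraction_unique(small, 4), 3))
```

With 10 possible values per attribute, k attributes allow 10^k combinations; once that number dwarfs the population size, almost everyone is unique. This is the same dynamic behind the Nature result, just in miniature.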
Why this matters
Re-identifying data in a supposedly anonymized database is not just a neat statistical trick for academics. It has real-world consequences. Anonymized data is subject to fewer legal restrictions precisely because it is assumed to protect the privacy of its data subjects.
In the US, anonymized medical records can be sold to pharmaceutical companies. A similar practice is allowed in the UK.
Some countries do a better job of requiring effective anonymization. The European Union’s GDPR covers this in Recital 26, which says that data must truly be anonymous to be exempt from the regulation’s data protection rules. And there are methods of anonymization, such as data generalization or perturbation, that are more effective.
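As a rough illustration of those two techniques, here is a hypothetical sketch: generalization coarsens quasi-identifiers (truncating the ZIP code, keeping only the birth year), while perturbation adds random noise to a sensitive numeric value. All records and field names are invented.

```python
import random

random.seed(1)

# Hypothetical records: names already removed, but precise
# quasi-identifiers remain.
records = [
    {"zip": "02138", "birth_date": "1945-07-31", "income": 61000},
    {"zip": "02139", "birth_date": "1945-02-02", "income": 58000},
    {"zip": "02141", "birth_date": "1946-11-09", "income": 90000},
]

def generalize(record):
    """Coarsen quasi-identifiers: truncate the ZIP code, keep only the birth year."""
    return {
        "zip": record["zip"][:3] + "**",
        "birth_year": record["birth_date"][:4],
        "income": record["income"],
    }

def perturb(record, noise=5000):
    """Add bounded random noise to a sensitive numeric value."""
    out = dict(record)
    out["income"] += random.randint(-noise, noise)
    return out

published = [perturb(generalize(r)) for r in records]
for row in published:
    print(row)
```

The trade-off is inherent: the coarser the generalization and the larger the noise, the harder re-identification becomes, but the less useful the data is for analysis. That trade-off is why anonymization in practice so often errs toward utility and away from privacy.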
However, this issue touches on more than just the technical difficulties presented by anonymization. It also raises the misleading promises companies make when they talk about how they treat your data.
Data analysis can provide numerous benefits to citizens, organizations, and governments, and it is legitimate to collect and analyze data for specific purposes. The distributed privacy-preserving contact tracing project is one example of how data collection could be used to trace COVID-19 infections while protecting individuals’ privacy.
However, data collection must always be made clear to the data subject, and people should always have a choice. Many companies present vague or hard-to-decipher privacy policies that make it almost impossible for data subjects to know what data is being collected and who it is being shared with. These companies treat anonymization as a way to sell data while still meeting the minimum requirement for data security.
If malicious actors can re-identify you from anonymized data, that business model becomes ethically questionable. As a user, it means you should evaluate the companies you share data with even more closely. And companies should, at the very least, notify their users of the risk of re-identification before sharing their data. Otherwise, users cannot give informed consent.