Three tests for digital privacy

Stop treating a social problem as a technical one

Ersin Akinci
Jan 2, 2017

Not too long ago, I worked in digital forensics as an investigator for a law firm handling software fraud and digital privacy cases.

Our firm was one of the good guys, taking on companies that made bunk software, stuff with names like Windows Registry Super Duper Cleaner 2000 Pro Gold Platinum Edition that would promise to speed up your computer by one quadrillion percent. Or apps that would send creepy personal information to third parties without any warning in the terms of service or privacy policy.

In this world of Edward Snowdens and Fancy Bears, the kind of work I did was unglamorous and obscure. I didn’t stop elections from being hacked and I didn’t solve murders. But one thing I did do was gain a glimpse into how digital privacy really works.

There’s a process called discovery in civil litigation in the United States where both sides in a lawsuit can request information before going to trial to help build their cases. The scope of the information that can be requested may surprise you. For example, if you were suing a cheese factory for selling 7 oz. of cheese in packages labeled 8 oz., you could request that the factory provide you with all purchasing records for weighing equipment from the past three years. If the factory doesn’t provide enough or satisfactory information, you can file a motion to compel, which, if approved by the court, forces the factory to hand over more evidence.

Now, if you’re a defendant committing fraud or violating users’ privacy, there’s no way to avoid the discovery process altogether. There are techniques, however, that blunt its transparency in practice: for instance, deluging the other party with all manner of data that may or may not contain one or two records relevant to the lawsuit, thereby technically complying with the discovery request while turning it into a needle-in-a-haystack search for the opposing side.

And then that data dump gets assigned to someone technical, who has to sift through it for a tiny nugget of information. Someone gets the haystack, sitting on a USB stick in their pocket. Someone like me.

Do you trust me?

When you sign up for new TV service or purchase a game from an app store,

  • Do you trust not only the company you’re giving your personal information to, but also every employee inside of that company?
  • Do you trust every court that may have jurisdiction over any lawsuits arising from that company’s operations and do you trust every clerk working for each of those courts?
  • Do you trust every law firm that may sue that company and then request data on the company’s customers, and do you trust every investigator and consultant who may be brought on to analyze that data?

Of course not. You don’t trust any of these entities or people because most likely you don’t even know that they exist. Even if you knew they existed, on most days you probably aren’t thinking, “What apps or services did I purchase X years ago whose creators are being sued?” Yet you probably still have the same ██████, the same ████, the same ███████, the same ███████████, the same ██████, and the same ██████. Oh wait, you didn’t give over any of that highly sensitive personal information when you made your purchase or signed up for that service…did you? Because if you did, I 100% guarantee you that it is sitting in a haystack in someone’s pocket right now.

And that, in a nutshell, is why every purely technical approach to digital privacy, like saying “just encrypt everything,” is doomed to fail. The point isn’t that this one particular process known as discovery renders all your efforts to use the most up-to-date browsers and strong passwords irrelevant. The point is that ungodly amounts of private and personally identifiable information legally and regularly change hands between corporations, governments, and individuals, and most people have no idea. The trust chain isn’t just between your computer and the corporation; it’s with every business partner, governmental body, law firm, storage service, etc., and each of their business partners, governmental bodies, law firms, storage services, etc. And all of the people working for, with, or against them. It’s not really a chain at that point; it’s more like an infinitely recursive trust dark web.

By all means, go ahead and encrypt everything. Use strong passwords. Technical fixes are Good Things™ that make it harder to do Bad Things™. But what’s to stop a junior employee from picking one credit card number at random from a database and going on a shopping spree once your number has securely arrived at his employer’s servers? (If you say “but the number is hashed!”, you’re not thinking creatively enough.) Technical fixes may help build confidence, but often that confidence is misplaced because there is no trust.
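For the skeptics, here’s a rough, hypothetical sketch (my own illustration, not drawn from any real case) of why an unsalted hash offers thin protection for a card number. Card numbers live in a tiny search space: the first six digits are a public issuer prefix and the last digit is a checksum, so only around a billion candidates remain, which is routine work for off-the-shelf hash-cracking tools.

```python
import hashlib
import itertools
from typing import Optional

def luhn_check_digit(partial: str) -> str:
    """Compute the final Luhn check digit for the first 15 digits of a card number."""
    total = 0
    # Walking right to left over the partial number, the digit that will sit
    # immediately left of the check digit gets doubled, then every other one.
    for i, d in enumerate(int(c) for c in reversed(partial)):
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def crack(target_hash: str, known_prefix: str = "411111") -> Optional[str]:
    """Brute-force a 16-digit card number from its unsalted SHA-256 hash.

    Only the digits between the known issuer prefix and the Luhn check
    digit are unknown; with a 6-digit prefix that is ~10^9 candidates.
    (Slow in pure Python, but trivial for GPU hash-cracking tools.)
    """
    free_digits = 15 - len(known_prefix)
    for body in itertools.product("0123456789", repeat=free_digits):
        candidate = known_prefix + "".join(body)
        candidate += luhn_check_digit(candidate)
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

if __name__ == "__main__":
    # Demo with a widely published test number and a longer known prefix,
    # so the search finishes in a couple of seconds.
    target = hashlib.sha256(b"4111111111111111").hexdigest()
    print(crack(target, known_prefix="4111111111"))  # prints 4111111111111111
```

Salting and per-record keys raise the cost, but they don’t answer the underlying question of who gets to hold the haystack and why you should trust them.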

We can build trust through culture. That’s what companies like DuckDuckGo are trying to do, making products that explicitly claim not to do the creepy things Google does. Is there any way to verify that DuckDuckGo isn’t selling your IP address and search history to analytics companies? No, but they’ve staked their entire reputation on their privacy mission and created a robust open source community of activists and programmers to advance their cause. At some point, you just trust them, the same way we all once used to just trust Google.

We can also build trust through transparency and processes. In the case of civil lawsuits, discovery is actually a court-mediated process, meaning that if anyone did anything evil with data acquired through that channel, they would theoretically be in trouble. Speaking generally, though, how many legal data channels open to potential abuse is the average person aware of? The answer used to be one: business partners. Thanks to Snowden, we now know that intelligence agencies are a second. Maybe with this article you now know of three. In reality, there are probably dozens. What are those channels? What mechanisms are in place to ensure that data is exchanged properly within each of them? And how can a victim find relief if their privacy is violated during such an exchange?

People often say “there is no such thing as privacy anymore,” as if computers and the internet have made privacy impossible. In reality, privacy only ever exists when we as a society adopt laws and practices to hold the people who violate it accountable. Privacy is a function of our policies and values, not a mathematical proof.

I propose three rough guidelines or tests for gauging whether someone has any meaningful digital privacy:

Ersin’s first digital privacy test:

If someone’s digital privacy were violated, would they know it?

Ersin’s second digital privacy test:

If someone’s digital privacy were violated, would they know where to turn?

Ersin’s third digital privacy test:

Does someone have a broad, end-to-end idea of how justice should theoretically work when their digital privacy has been violated, the same way they would know what should theoretically happen if they went to the police after being robbed?

In reality, privacy isn’t binary; it’s on a spectrum. Yet unless these three tests are met, I find it difficult to argue that someone has strong digital privacy. Actions like restricting what data gets sent out from your phone may enhance security by shrinking the attack surface, but ultimately privacy is an imperfect social phenomenon rather than a technical outcome.
