Mental models for security and privacy (Part I)

Mitchell P. Krawiec-Thayer, PhD
11 min read · Dec 13, 2020

Introduction

Builders of privacy tools (such as encrypted messengers, secure email services, and fungible digital money) are tasked with a particularly difficult and high-stakes challenge. It is not enough for their app to be engaging or useful; it must also keep its users and their information safe.

In this article, I’ll loosely use the term ‘security’ to mean keeping something safe from manipulation or unauthorized access, and the term ‘privacy’ to mean keeping some data or information concealed. You can have security without privacy (for example, a tall chain-link and barbed-wire fence that anybody can see through), and you can have privacy without security (for example, a curtain between two rooms).

There are many legitimate reasons why a user may desire security or privacy — they might be a journalist, a victim of domestic abuse, a political activist, or perhaps simply exasperated with the AdTech industry’s dragnet surveillance and sale of personal data to unscrupulous highest bidders. For some users it may be a matter of preference; for others, a matter of life and death. (Stay tuned for an upcoming field guide to help you decide which concerns make sense for your situation, and learn simple steps to proportionally and preemptively address those risks.)

This article introduces some mental models and philosophies surrounding the use and design of secure/private technology. This document is essentially two articles combined for narrative continuity:

  • The first half is geared towards a non-technical audience, and uses familiar examples to introduce fundamental concepts that anybody can use to begin assessing whether their digital habits are aligned with the level of security their circumstances warrant.
  • From “Designing future-resistant security” onward, the second half of this article is geared towards anybody with a voice in the design of secure or private technology, especially systems that require long-term security. We’ll examine some key questions for defining technical requirements, explore worst practices and common pitfalls, and end with a thought experiment that I use to center my own contributions.

User segmentation and threat models

Whether something is ‘secure’ or ‘private’ cannot be determined in a vacuum; it must always be assessed with respect to specific threat models. A handwritten journal is safe from hackers but not from physical snoops, whereas the opposite is true for a password-protected electronic journal. Which one is ‘safe’ depends on whether your main concern is a nosy roommate or a company harvesting your notes for advertising keywords.

For this reason, it is crucial for designers of secure systems to ask themselves “secure against what?”. While that might at first sound flippant, completely answering this question requires much more than considering one attack surface or threat model. It requires anticipating *every* attack surface and threat model, and then assessing which are relevant.

There is no one-size-fits-all answer, and even flawlessly implemented security features are not useful if they are misaligned with a user’s needs and threat models (pepper spray might protect you from a bear but won’t help you in a flash flood). In many cases there is no universally “right” or “wrong” answer: pepper spray and life jackets are each useful to different people at different times, or even to the same person at different times. Returning from analogy to a relevant example, consider two email providers: Gmail and Protonmail.

Gmail provides a high-powered email suite with an extremely convenient search function that can, within milliseconds, surface every time that a keyword appeared in the body of an email over my last 15 years of inbox history. Naturally, for this functionality to exist, Gmail’s algorithms must have complete access to the full text of every email I’ve sent or received. While Google has a best-in-class security team and practices, there is always the possibility of technical mistakes, greedy or blackmailed employees, clever hackers, and old-fashioned human error.

Protonmail provides an email suite with encrypted storage on the backend, where private keys and decryption are handled device-side. This means that even if a clever hacker or greedy employee absconded with a copy of the data on Protonmail’s servers, the message bodies would be indecipherable ciphertext. Threat models involving a third party accessing plaintext email messages from the service provider’s storage are thus mitigated; however, this comes with convenience and efficiency tradeoffs. A Gmail-style search function is out of the question, since Protonmail cannot search for keywords in encrypted emails (that’s the whole point!), and client-side decryption adds a fraction of a second to opening each message.
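To make the distinction concrete, here is a minimal sketch of the client-side encryption idea using PyNaCl sealed boxes. This is not Protonmail’s actual implementation (they build on OpenPGP); it only illustrates why a provider that stores ciphertext, and never the device-side private key, cannot read or index your messages.

```python
# Minimal sketch of client-side ("end-to-end") encryption using PyNaCl.
# Illustrative only; Protonmail actually builds on OpenPGP, not this exact scheme.
from nacl.public import PrivateKey, SealedBox

# The private key is generated and kept on the user's device.
device_private_key = PrivateKey.generate()
public_key = device_private_key.public_key  # safe to share with the server

# Mail is encrypted to the public key; this ciphertext is all the provider stores.
ciphertext = SealedBox(public_key).encrypt(b"Meet at the usual place at 6pm.")

# The provider cannot search or read the blob. Only the device holding the
# private key can recover the plaintext.
plaintext = SealedBox(device_private_key).decrypt(ciphertext)
assert plaintext == b"Meet at the usual place at 6pm."
```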

Without context (boating or camping?) a life jacket is no more “right” or “wrong” than pepper spray, and I would argue that the same is true for Gmail and Protonmail. In fact, I prefer to use Gmail for my business email (where I place a high priority on bottomless storage, integrated chat, account recovery, and a thorough search function), and I prefer Protonmail for my personal email (which is connected to my bank accounts, pharmacy, etc., and thus warrants a more private solution).

The point of this ramble about Gmail and pepper spray is that detailed threat model evaluation is 1) often quite difficult, 2) absolutely crucial, and 3) only meaningful when analyzed in the context of users’ needs. Since the goal of designing secure systems is to protect your users from the threat models that actually impact them, the most important question at the core of every project must be “who are our users?” (note that I mean their characteristics rather than their identities). This introspection to identify the relevant threat models (i.e. to answer “secure against what?”) is a necessary prerequisite for evaluating whether a system offers sufficient privacy or security. There is a reason why user research teams often start by crafting customer personas: they provide a starting point for understanding which pain points to address.

Security for today or tomorrow?

Some systems only require in-the-moment security for the duration of some process, whereas other situations require assurances of long-term security against today’s and tomorrow’s adversaries.

For an example where non-permanent security is acceptable, consider a password that you use to log in to some online service. Suppose that today’s attackers could crack a 10-character password, but a 15-character password is considered reasonably secure. Of course, computers get faster, and 30 years from now we expect 25-character passwords to be vulnerable. Does that mean you need to immediately change your password to be 26 characters long? No! In a situation like this (where we have a reasonable understanding of, and projections for, adversary capabilities), it’s fine to use a shorter 15-character login password for the time being. As long as you update to a longer password before computers get fast enough to crack your old one, your account remains secure. Even if the old, shorter password is later cracked, there is no harm, since it no longer grants access to the service. In this case it is sufficient to use “good enough for today” security that simply blocks current attackers. (Disclaimer: the above numbers are for illustration only and should not be taken as advice.)
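To see why “good enough for today” can be rational here, it helps to look at the arithmetic of brute-force search. The numbers below (a 70-symbol alphabet and a hypothetical guess rate) are made up purely for illustration:

```python
# Back-of-the-envelope brute-force cost. All parameters are illustrative
# assumptions, not advice or real attacker capabilities.
CHARSET_SIZE = 70            # hypothetical symbols available per character
GUESSES_PER_SECOND = 1e12    # hypothetical attacker throughput today
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for length in (10, 15, 25):
    keyspace = CHARSET_SIZE ** length
    years = keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{length:2d} characters: ~{years:.1e} years to exhaust the keyspace")

# Each added character multiplies attacker cost by ~70, so rotating to a
# longer password before attackers catch up preserves the security margin.
```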

By contrast, imagine a whistleblower communicating with a journalist about corruption in a violent and vindictive organization. The whistleblower’s life may depend on the encrypted email or messenger app keeping their reports and identity private. Unlike the previous example, temporary “good enough for today” security is *not* good enough! In these situations, developers must adopt security features and parameters that will stand the test of time.

Determining which category (present-only or long-term) a particular use case falls into is important for identifying the appropriate security requirements. Security and privacy hardening often comes with efficiency tradeoffs (key sizes, verification times) that can impact user experience. Since there are downsides to weak security (risk to users) and downsides to over-engineering (reduced efficiency), it’s necessary to thoughtfully assess which measures address users’ threat models without being overkill.
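As a small (and deliberately simplistic) illustration of that tradeoff, the snippet below times RSA signing at two key sizes using the Python cryptography library. The exact numbers depend on your hardware, but larger keys reliably cost more per operation:

```python
# Rough timing of RSA signing at two key sizes. Illustrative of the general
# "more security margin costs more per operation" pattern, not a rigorous benchmark.
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

for bits in (2048, 4096):
    key = rsa.generate_private_key(public_exponent=65537, key_size=bits)
    start = time.perf_counter()
    for _ in range(50):
        key.sign(b"example message", padding.PKCS1v15(), hashes.SHA256())
    per_op_ms = (time.perf_counter() - start) / 50 * 1000
    print(f"RSA-{bits}: ~{per_op_ms:.1f} ms per signature")
```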

The remainder of this article will focus on applications that require long-term security or privacy, where today’s users could be harmed if some mechanism is compromised later in their lifetime. These use cases raise a specific class of problems with unique challenges and technical design principles. This tends to be the case for products such as encrypted messaging and anonymous digital currencies, where subsequent decryption or retroactive deanonymization could place users at risk. The following also applies to companies storing sensitive (presumably encrypted) personal data.

Designing future-resistant security

One of the trickiest parts about designing a system to be secure in the long run is the fact that it must remain safe when attacked by adversaries and techniques that won’t even be invented for decades to come. For an encrypted message or cryptocurrency transaction broadcast today to remain private in 2060, its mechanisms must remain secure under attack from all of the hardware and tools that will be created during the intervening years (e.g. the state-of-the-art methods that will be available to an adversary in 2055).

The potential capabilities of future adversaries fall into two categories: 1) more powerful implementations of existing attacks, and 2) attacks leveraging some new technique or paradigm. Consider a few possible future threats:

  1. Stronger / faster / parallelized computation increasing practical resources for brute force attacks. (For example, cracking 2048-bit RSA keys is computationally intractable today, but will almost certainly be feasible in 50 years)
  2. New algorithmic abilities (For example, statistical methods for graph analysis are rapidly evolving for the purpose of analyzing cryptocurrency transaction trees)
  3. Quantum computing (For example, a quantum-enabled adversary could use Shor’s algorithm to factor RSA keys or extract cryptocurrency wallet private keys from public addresses)

It’s easiest to intuit more powerful computers (#1) because we’ve all witnessed the continuous increase in processor speeds, memory, bandwidth, etc. over the last few decades. Imagining these attackers merely requires imagining existing threat models with faster execution, and we have historical trends and heuristics like Moore’s law to help estimate adversary capabilities. Planning for fundamentally new attacks like novel algorithms (#2) and quantum computing (#3) is much more challenging, and often requires consulting with domain experts.
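For category #1, even a crude extrapolation sets useful expectations. The sketch below assumes, purely for illustration, that attacker throughput doubles every two years and asks how much brute-force security margin that erodes over a given horizon:

```python
# Crude Moore's-law-style projection of brute-force capability.
# The two-year doubling period is an illustrative assumption, not a prediction.
DOUBLING_PERIOD_YEARS = 2.0

def margin_eroded_bits(years: float) -> float:
    """Each doubling of attacker throughput erodes ~1 bit of brute-force margin."""
    return years / DOUBLING_PERIOD_YEARS

for horizon in (10, 30, 50):
    print(f"After {horizon} years: ~{margin_eroded_bits(horizon):.0f} bits of margin eroded")
```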

Let’s be honest: trying to imagine attack surfaces relative to adversaries that don’t exist yet is really challenging. However, it is absolutely necessary for teams building tools whose users are counting on long-term security. This assessment should be centered on informed projections and educated estimates, not intuition or stabs in the dark.

For example, privacy coin developers anticipating future algorithmic advances can infer that transaction tree analysis will leverage highly-parallelized graph matching and metadata-based heuristics. Or, when considering threat models that include a quantum computer, we can base decisions on a rich body of academic literature and experiments describing their expected capabilities. For example, we already know that Shor’s algorithm can factor large integers or solve the discrete logarithm problem in time polynomial in the key length, compromising RSA- and ECC-based cryptography, and that Grover’s algorithm can search N possible function inputs in roughly O(N^1/2) queries. These insights make it possible to rigorously define formal adversary models, audit today’s technology against future attackers, and even design plausibly future-proof systems.
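Those asymptotics translate into simple planning rules of thumb. Here is a rough sketch using the standard textbook approximations (Grover roughly halves the effective bit-strength of brute-force search, while Shor breaks RSA and ECC outright at any practical key size):

```python
# Rule-of-thumb quantum impact on today's primitives, using standard textbook
# approximations. Not a substitute for a proper post-quantum audit.

def grover_effective_bits(classical_bits: int) -> int:
    """Grover searches 2^n candidates in ~2^(n/2) queries,
    roughly halving effective symmetric security."""
    return classical_bits // 2

for bits in (128, 256):
    print(f"{bits}-bit symmetric key vs. Grover: ~{grover_effective_bits(bits)} effective bits")

# Shor's algorithm factors integers and solves discrete logs in time polynomial
# in the key length, so RSA-2048 and 256-bit ECC offer essentially no long-term
# protection against a large, fault-tolerant quantum computer.
```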

As an aside: every privacy project where retroactive deanonymization could impact today’s users must soon have discussions about whether to include quantum computers in its threat model. To learn more about this specific topic, especially in the context of privacy-preserving cryptocurrencies, check out our MoneroTalk episodes about quantum computing (pre-audit and post-audit) that partially inspired this article. If you have questions, feel free to drop by our corresponding r/Monero AMA.

Worst practices for long-term security

In general, when it comes to designing future-proof security, there are a few mental traps that it’s easy to fall into.

The first mistake is the tendency to picture paradigmatically different adversaries as simply bigger or faster versions of current technology. The difference between a traditional computer and a quantum computer is analogous to the difference between a typewriter and a laptop: sure, both can be used to take notes during a meeting, but a laptop can carry out functions (e.g. video editing) that a typewriter fundamentally cannot.

It’s also important to notice the implicit influences on your design. Decisions like anonymity set size should be data-informed, not based on intuition (e.g. “ring size 11 feels okay to me and seems to work so far?”). The Dunning-Kruger effect is especially prevalent in conversations about quantum computing; individuals who have read a few popular science articles about qubits are often more vocally confident in their predictions for when large-scale quantum computers will (or won’t) arrive than the experts who actually work with the technology. When in doubt, consult a domain expert.

Another logical fallacy is to ignore what we do know because of the things we don’t. Without a crystal ball, we will never know for sure when threats have been exhaustively enumerated. However, this should not deter developers from taking advantage of the information that is available. There are a million ways that a car accident could unfold, and no list will contain every possibility. Nonetheless, tens of thousands of engineers work every day to create safety mechanisms (seat belts, airbags, firewalls) designed to minimize damage in the conditions that we can anticipate.

Lastly, I want to highlight that there is a very natural temptation to avoid these challenging conversations! Evaluating future threat models is technically difficult and often stressful. However, the fact that something is hard is no excuse for not doing it, and we cannot throw our hands up in the air and hang our users out to dry! Avoiding conversations about whether to protect users from future adversaries will not keep them safe — as Rush once observed, “if you choose not to decide, you still have made a choice.”

Who are your users?

Let’s end with a thought experiment. Consider two private cryptocurrencies with different security timelines and comically exaggerated differences in performance (orders of magnitude larger than real tradeoffs).

  • SpeedyCoin provides fast and cheap transactions with a $0.001 fee; however, the privacy features only have a 5–10 year shelf life.
  • TankCoin transactions are slow and cost nearly $20 each, due to extremely hardened cryptographic mechanisms expected to remain secure for the next 100–200 years.

Consider now two hypothetical users:

  • User 1 is a DeFi enthusiast who likes to demonstrate the utility of cryptocurrencies by showing how fast and easy it is to buy a cup of coffee with a QR code and a single click.
  • User 2 is a member of a persecuted minority in an authoritarian country, slowly saving money over the years with the hope of one day starting over somewhere safe. If their government learns that they’re saving up for an escape, they (and their family) will be arrested and tortured.

User 1 would find TankCoin effectively unusable (due to low speed and high fees), whereas User 2 would find SpeedyCoin effectively unusable (because it would put their family at risk once the privacy features are cracked in a few years). This brings us full circle back to the notion that security can only be meaningfully evaluated in the context of specific use cases and threat models. SpeedyCoin and TankCoin cannot be ranked or objectively labeled “right” or “wrong”, because the correct approach is a relative matter determined by the users’ needs.

I’ll leave you now to consider the most important question for privacy tech designers: Who are your users?

Thanks for reading! You can find me on Twitter, LinkedIn, and GitHub.

If you have questions/comments about this article, or would like to discuss security/privacy design, shoot me a message at:


Mitchell P. Krawiec-Thayer, PhD

Chief Scientist & President of Geometry Labs // Data Science and Protocol PrivEng for Monero Research Lab // aka Isthmus