Selfhood in the Shadows: Rethinking Whistleblowing Infrastructure with Psst
This article represents Nora Trapp’s deep-dive outline of the collaboration between the Applied Social Media Lab and Psst, why we took this approach, and what our outcomes were.
Imagine discovering a serious wrongdoing at your workplace. You want to speak up, but doing so feels like stepping off a cliff alone. This is the plight of many whistleblowers today. Blowing the whistle can be a lonely and risky act: careers are jeopardized, reputations attacked, and personal lives upended. Even when digital whistleblowing platforms are available, they usually leave individuals isolated, with no real way to bring issues forward collectively. It’s no wonder that over 70% of whistleblower complaints end up dismissed or withdrawn without action. A lone voice is easy to ignore—or discredit.
From the technical perspective, building support systems for whistleblowers is hard. For example, consider the challenge of building initial trust between a whistleblower and a journalist. The journalist must somehow verify just enough of a whistleblower’s identity to create confidence in the whistleblower’s complaint; however, revealing too much detail about a whistleblower’s identity is risky because, for example, the journalist might be subpoenaed and forced to reveal their sources. Whistleblower authentication challenges the dominant model of online identity in which platforms and employers control the data, credentials, and accounts that define who we are online, and pushes us toward something more user-controlled: selective, anonymous, and verifiable on one’s own terms.
Whistleblowing is also one of the highest-stakes applications of digital identity being tested today. If infrastructure can prove secure enough for whistleblowers, it's easy to imagine it safely empowering many communities that need to prove things about themselves without exposing themselves. Journalists, activists, vulnerable groups, and anyone who has to build trust under conditions of anonymity or fear could use these building blocks.
Imagine proving “I am over 18” without revealing your date of birth, or “I am a doctor” without revealing your name, or “I live in this region” without giving your address—all lend weight to a statement or could define access to systems within social spaces while putting user safety first.
This work is a small piece of a larger movement to redesign digital identity and communication with privacy, safety, and user agency at the core.
About the Collaboration
In 2025, we partnered with Psst, a nonprofit creating safer, more collective channels for whistleblowers to speak up. Their goal is to “change the face of whistleblowing, making it something that can be done anonymously if needed, in collaboration with others.”
“Had I been able to connect with other Boeing employees… a more comprehensive picture of what was going wrong across the company might have been made public earlier. A collective approach using Psst Safe could have… potentially saved lives.”
Ed Pierson, Boeing 737 MAX whistleblower and Psst board member
Unlike many technical efforts that start from abstract ideas, this collaboration was grounded in real-world whistleblower experiences. Psst’s fieldwork helped define user types (e.g., the Cautious Collaborator, the Passive Observer, the Solicited Insider, and the Determined Activist), threat models, and the exact moments in the whistleblowing workflow when technical tools could help the most. We weren’t starting from zero: we were testing real solutions in a space where safety is non-negotiable.
Our primary work with Psst was providing them the technical infrastructure to improve their digital safe. The goal of the safe is to be “a kind of information escrow, discerning patterns as they emerge and enabling information to be matched, when appropriate, for collective disclosures.” For example, one report about financial irregularities at Company C might stay sealed until another report with a matching company and subject matter appears, at which point both are revealed together.
What We Built
In our collaboration with Psst, we set out to solve two hard problems at the heart of collective whistleblowing:
How can someone prove they belong to an organization without revealing additional identifying information?
How can multiple people speak up safely together, without putting the first voice at risk?
These aren’t just technical challenges: they’re questions of trust, timing, and control. And in high-stakes settings like whistleblowing, even small design flaws can have real consequences.
At a high level, the digital safe follows a simple workflow:
a whistleblower submits a disclosure;
the system verifies that the disclosure comes from a legitimate insider;
the disclosure is stored in encrypted escrow so no one can read it prematurely;
only when a threshold of additional disclosures with a matching company and subject matter has been submitted are the reports unlocked and shared for review.
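The four steps above can be sketched as a small escrow class. This is a toy illustration only: in the real system, disclosures are end-to-end encrypted and the insider check is cryptographic, while here both are stubbed with placeholders.

```python
from collections import defaultdict

class DisclosureSafe:
    """Toy sketch of the safe's escrow-and-threshold workflow.

    Real disclosures are end-to-end encrypted and the insider check is
    cryptographic; both are stubbed here for illustration.
    """

    def __init__(self, threshold=2):
        self.threshold = threshold        # matching reports needed to unlock
        self.escrow = defaultdict(list)   # (company, topic) -> sealed reports

    def submit(self, company, topic, report, credential):
        # Step 2: verify the disclosure comes from a legitimate insider.
        if not self._verify_insider(company, credential):
            raise PermissionError("could not verify organizational affiliation")
        # Step 3: hold the report in escrow, keyed by company and subject.
        key = (company, topic)
        self.escrow[key].append(report)
        # Step 4: unlock all matching reports once the threshold is met.
        if len(self.escrow[key]) >= self.threshold:
            return self.escrow.pop(key)
        return None  # still sealed

    def _verify_insider(self, company, credential):
        # Stand-in for the verification tools discussed in this article.
        return credential == "token-for-" + company
```

With a threshold of two, the first submission stays sealed and a second matching submission releases both reports together.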
Given the real-life whistleblowing challenges that Psst identified, we focused on building two critical layers of infrastructure to strengthen this workflow:
Verification tools, which aid in step 2, allowing individuals to prove organizational affiliation (e.g., that they really work at the organization in question) without compromising their anonymity or requiring steps that feel complicated or inaccessible to the user types Psst identified.
Matching tools, which aid in steps 3 and 4, allowing disclosures to be held in encrypted escrow until a safe threshold is reached, in a way that's automated and resistant to tampering, leaks, or human error.
Each tool is designed to be modular and adaptable to contexts beyond whistleblowing: journalism, online communities, and privacy-preserving identity verifications more broadly.
Verification Tools
We tackled the first problem, verification, by designing and prototyping several methods for covertly verifying a whistleblower's identity using existing aspects of their digital identity. Having multiple methods of verification is important, since each one has limitations that make it not universally applicable to all whistleblowers. That said, none of the methods we explored relies on a central authority issuing an ID, and each avoids exposing unnecessary personal details.
Each of these verification methods leverages existing Internet infrastructure in ways that were never originally intended.
It was also important to us that each verification method could be utilized outside of whistleblowing cases. For example, a journalist looking to verify a source, or a content creator looking to independently verify their identity on a social media platform, might benefit from our authentication infrastructure.
We now discuss the three approaches:
Covert IP Verification
Whenever you’re browsing the internet, you have a unique identifier assigned to you, called an IP (internet protocol) address. IP addresses enable Internet servers to route network traffic to the correct destinations, including to and from your personal devices. IP addresses aren’t specific to a given device, but rather change as a device leaves an old network and joins a new one. Every new network you visit will likely assign you a new address, in the same way that traveling to a new physical location changes your physical address.
Large organizations generally have well-known blocks of IP addresses (e.g., Apple's 17.0.0.0/8 or Google's 8.8.8.0/24). So, it seems like a whistleblowing service could verify someone's organizational affiliation just by checking whether a purported whistleblower from Company C is currently connected to a well-known network belonging to Company C. However, corporate networks are often owned and operated by the company itself, meaning that, while you're connected to the corporate network, the company can generally see what websites you're browsing, including whistleblowing sites.
That’s where domain fronting comes in. Domain fronting disguises the true destination IP address of a network message by leveraging existing content delivery networks (CDNs) to relay our lookup request. Using this technique, a whistleblower message that is ultimately destined for a whistleblower verification site might appear to the corporate network to be destined for Google, Reddit, Pinterest, or another large website that relies on content delivery networks.
At a more technical level, suppose that you visit a website through HTTPS (which you hopefully see at the start of every web address you browse to). Further suppose that the website uses domain fronting. When you type real.example.com into your browser, the browser:
looks up the IP address for real.example.com using something called the Domain Name Service (or DNS: think of it like a phonebook for the internet);
opens an encrypted connection to the resulting IP address, which, because the site is using domain fronting, belongs to a fronting server rather than a real.example.com server (the connection is encrypted using a protocol called Transport Layer Security, or TLS);
sends a web request inside the encrypted TLS stream, with the request including a special field (called a header) telling the content delivery network which ultimate server you actually want to contact (e.g., safe.psst.org).
So, with domain fronting:
A user device performs DNS resolution for, and opens a TLS connection to, a “front domain” like real.example.com, which looks innocuous to a corporate network observer.
Inside the encrypted stream, a header asks the fronting CDN to route the communication to a different domain (e.g., safe.psst.org).
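To make the two layers concrete, here is a minimal sketch of which name appears where in a fronted request. It makes no network calls and simply constructs the request bytes; both domain names are the placeholders used in the walkthrough above.

```python
# No network calls here: this just shows which name appears at which layer
# of a domain-fronted request. Both domains are placeholders from the text.

FRONT_DOMAIN = "real.example.com"  # visible to observers: DNS query and TLS SNI
TRUE_DOMAIN = "safe.psst.org"      # hidden inside TLS: the HTTP Host header

def build_fronted_request(path="/verify"):
    """Return (hostname the network observer sees, encrypted-side request)."""
    request = (
        "GET " + path + " HTTP/1.1\r\n"
        + "Host: " + TRUE_DOMAIN + "\r\n"   # the CDN routes on this header
        + "Connection: close\r\n"
        + "\r\n"
    ).encode()
    return FRONT_DOMAIN, request
```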
From the corporate network’s point of view, it looks like the user just connected to the fronting domain via a DNS query and an encrypted TLS connection to that fronting domain. Nothing suspicious! But inside the encrypted tunnel, the user is interacting with a third-party server that, in the Psst use case, is a whistleblower validation server that can validate a user’s corporate affiliation based on their IP address.
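The validation server's IP check itself is straightforward; here is a minimal sketch using Python's standard ipaddress module. The lookup table is a hypothetical stand-in: a real deployment would populate it from WHOIS or routing-registry data (Apple's 17.0.0.0/8 is the example block mentioned above).

```python
import ipaddress

# Hypothetical lookup table; a real service would populate it from WHOIS
# or routing-registry data. Apple's 17.0.0.0/8 is from the example above.
CORPORATE_BLOCKS = {
    "Apple": [ipaddress.ip_network("17.0.0.0/8")],
}

def affiliated_org(client_ip):
    """Return the organization whose published block contains client_ip, if any."""
    addr = ipaddress.ip_address(client_ip)
    for org, blocks in CORPORATE_BLOCKS.items():
        if any(addr in block for block in blocks):
            return org
    return None
```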
Once the Psst server verifies a user’s corporate affiliation, the server can return a token, cryptographically signed by Psst, that says “This anonymous user was recently seen inside Corporation X’s network”. The user can later use that token to prove their affiliation—without revealing who they are, when they connected, or what they disclosed to Psst.
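A sketch of what issuing and checking such a token might look like. For a stdlib-only illustration this uses an HMAC, which only the issuer itself can verify; a deployed system would instead use an asymmetric signature (e.g., Ed25519) so that third parties can check the token. All names here are hypothetical, and, matching the design above, the claim deliberately carries no identity or timestamp.

```python
import hashlib
import hmac
import json
import secrets

# Placeholder issuer key. An HMAC keeps this sketch stdlib-only, but it means
# only the issuer can verify tokens; a real deployment would use an
# asymmetric signature so anyone can verify.
SIGNING_KEY = secrets.token_bytes(32)

def issue_token(org):
    """Sign a claim that deliberately carries no identity or timestamp."""
    claim = json.dumps({"claim": "seen inside " + org + "'s network"})
    tag = hmac.new(SIGNING_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": tag}

def verify_token(token):
    expected = hmac.new(SIGNING_KEY, token["claim"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])
```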
This technique flips surveillance infrastructure on its head. Instead of a company using network-level data to track employees, the employee uses that same metadata—their IP address—to prove they belong to the organization that they need to whistleblow about.
Of course, there are limitations to this approach. Smaller organizations don't have dedicated IP blocks that can be easily looked up, and there are many reasonable cases where non-employees have access to corporate infrastructure. However, the veracity of the verification is high enough for some whistleblowers, especially those at government agencies, big tech companies, and universities. From Psst's fieldwork, we learned that the risk of outsiders posing as insiders in these settings is relatively low, and that any spurious reports would be filtered out later when they reach humans (e.g., lawyers, journalists), making the approach useful in practice for actual employees.
Platform-Verified Emails
It’s common practice for users to verify their email address when they sign up for a website. You’ve probably done this dozens of times. At first glance, Psst could use this workflow to verify a whistleblower’s organizational affiliation: you sign up for the Psst “safe”, Psst sends a confirmation request to your work email, and you prove ownership by clicking the link in the email.
But just like IP address verification, this approach fails the privacy test. Your employer has access to your corporate email account, and could infer your intent to whistleblow!
However, many social platforms (e.g., LinkedIn, GitHub) already verify work emails as part of their standard user flows. That’s something we can leverage: when a platform like LinkedIn or GitHub verifies a work email, it means the platform itself has already confirmed that the user controls that corporate account. In other words, the platform’s verification can serve as proof of organizational affiliation, without having to ask the employer directly.
To do so, we built a prototype browser extension called CredSnap that uses verified emails from these platforms, along with an open source tool called TLSNotary, to cryptographically prove that a user controls an @company.com email, while redacting the full address or name.
Think of TLSNotary, and similar proprietary tools like the Reclaim Protocol, as a digital notary public. Instead of a person checking your documents, your device creates a special cryptographic transcript of a browsing session. For example, when you log into a site like LinkedIn, the tool can capture and prove that LinkedIn itself has verified your corporate email. In other words, it “observes” the site by recording exactly what your browser received from the service and then produces a cryptographic proof that this specific information was provided. No human notary is required; the process happens entirely on your own device, privately and securely. This approach also isn't limited to current employees: former employees who verified their email before leaving can use it too.
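TLSNotary's actual proofs are far more involved, but the idea of "redacting the full address" can be illustrated with a toy commit-and-reveal sketch: disclose only the domain of a verified address, while a hash commitment binds the claim to the full address. This is not TLSNotary's protocol, just an illustration of selective disclosure.

```python
import hashlib
import secrets

# Toy commit-and-reveal, NOT TLSNotary's protocol: disclose only the domain
# of a platform-verified address, while a hash commitment binds the claim to
# the full address (the salt and full address stay with the user).
def redact_email(verified_email):
    _, domain = verified_email.split("@", 1)
    salt = secrets.token_hex(16)
    commitment = hashlib.sha256((salt + ":" + verified_email).encode()).hexdigest()
    return {"revealed_domain": domain, "commitment": commitment}
```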
While we’re excited about this concept, there’s still more work to do before it’s ready for whistleblowing and beyond. Open source TLS notarization solutions, in particular TLSNotary, are not fully ready for wider adoption. TLSNotary is still in its alpha phase with a variety of known limitations, bugs, and issues—a number of which specifically affect our use cases. ASML staff are also investigating and evaluating other existing solutions for readiness and suitability. We will continue to monitor progress with these tools and help to advance easy-to-use solutions as they improve.
Email Signatures
Every email you send carries more than just its message. It also includes metadata that can help prove where it came from. One such feature is DKIM (DomainKeys Identified Mail), a security standard originally developed to combat spam and phishing. It works by having the sender’s mail server attach a cryptographic signature to each outgoing email at the moment it is sent, which the recipient’s mail provider can then verify against the sender’s domain.
While these signatures weren’t built for verifying the sender’s personal identity, they can attest that the sender holds a valid organizational (e.g., corporate) email address. To make email signatures useful for whistleblowing, it isn’t enough to show that a message came from an organization’s domain; we also need to verify that it was authored by the individual making the disclosure. Otherwise, someone could reuse any legitimate email they’ve received and impersonate the sender, since the signature only proves that the message passed through an organization’s mail server, not that it was written by the person presenting it.
To close that gap, we require the individual to include something that makes it undeniable they were the author, without revealing to the mail server that they are whistleblowing. Many people send innocuous messages from their work email to their personal email every day: grocery lists, reminders, or notes. If the verifying software knew the “innocuous” message before you sent it, the message could serve as a secret handshake to verify it’s really you on the other side.
Our prototype has the ability to generate a randomized grocery list (though it could work with many other messages) for you to send to yourself. Once you’ve done so, you can download the email from your personal inbox and upload it to the tool. The server then verifies the DKIM signature, checks a number of other cryptographic protections that your email provider may or may not provide, and provides a verifiable credential that you can use to prove to Psst, or anyone else, that you own an @company.com email address.
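A simplified sketch of the checks just described. Full DKIM verification, which fetches the signer's public key over DNS and validates the RSA signature, is elided; the item list and function names are illustrative, not our prototype's actual code.

```python
import re
import secrets

# Illustrative item pool for the randomized "secret handshake" message.
ITEMS = ["oat milk", "rye bread", "lentils", "paprika", "walnuts", "basil"]

def generate_handshake_list(n=4):
    """Randomized grocery list the user emails from work to personal inbox."""
    return secrets.SystemRandom().sample(ITEMS, n)

def verify_upload(raw_email, expected_list, org_domain):
    """Check the secret handshake and the DKIM signing domain.

    Full DKIM verification would also fetch the signer's public key via DNS
    and check the cryptographic signature; that step is elided here.
    """
    # 1. The pre-agreed list must appear in the body: proves authorship.
    if not all(item in raw_email for item in expected_list):
        return False
    # 2. The DKIM-Signature d= tag must name the organization's domain.
    match = re.search(r"DKIM-Signature:.*?\bd=([^;\s]+)", raw_email, re.S)
    return bool(match) and match.group(1) == org_domain
```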
Matching Tools
Whistleblowers gain strength in numbers. Thus, Psst wants its sources to have confidence that their disclosures won’t be revealed (to any human!) until there are one or more “matching” disclosures from different sources—in other words, multiple reports about the same organization and the same issue, such as two employees independently flagging fraud. But how can this be done without trusting some person to manually compare reports?
Rendezvous Protocol
We created a cryptographic Rendezvous Protocol to automatically handle the matching in a distributed, secure way. Every submitted disclosure is end-to-end encrypted on the whistleblower’s device, using a one-time key derived from the intended recipient’s public key (e.g., Psst, but also investigative journalists or lawyers), and then split into multiple fragments that are distributed across independent servers run by different entities (or otherwise isolated with trusted hardware) so that no single party can collude or compromise the system. Think of it like breaking a secret into pieces and stashing them in different vaults.

Each fragment on its own is opaque, meaning that no single server can read the fragmented message. Only by combining a quorum of these fragments can the message be reconstructed, and that reconstruction happens solely on the intended recipient’s device, where it is then decrypted with their private key. Not even the Psst team or any single administrator can decrypt the message or determine its contents or sender prematurely.
When a whistleblower submits a concern, they additionally must include a verifying credential (possibly obtained through one of the systems we previously discussed) which the server can utilize to categorize the report by the relevant organization (e.g., “Company Y, Fraud”).
The magic happens when a second (or third, and so on; the threshold is configurable by the recipient, e.g., Psst) whistleblower submits a matching concern (e.g., another report tagged “Company Y, Fraud”). At that moment, each server independently sends its fragments to the recipient. If a quorum of servers agree that a match has been achieved, the recipient can combine the fragments and decrypt the underlying disclosure.
Until that threshold is met, everything remains encrypted, unlinkable, and deniable. No one can peek at a single report in isolation, and outsiders can’t tell if one or many people have lodged concerns about the same issue.
And unlike a normal puzzle, you don’t need all the fragments to reconstruct your secret: the protocol builds in redundancy and fault tolerance. If one server is malicious, whether it tries to withhold its fragment, provide a forged fragment, or release it early, your secret remains protected. And the recipient can cryptographically identify (and potentially ban) misbehaving servers.
Technically, the protocol achieves this through a form of “threshold cryptography” (inspired by secret-sharing schemes), along with decentralized storage of fragments across multiple parties. This decentralized design means there is no single point of failure or trust. Neither a rogue admin nor an outside attacker could compromise whistleblower secrets without breaking into multiple independent servers, and even if such extreme collusion were to occur, the colluding parties would only gain premature access to the disclosure ciphertext. The actual message remains encrypted to the recipient.
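The threshold idea can be illustrated with a minimal Shamir-style secret-sharing sketch. This is only the core mathematical primitive: the real protocol additionally layers end-to-end encryption, fragment distribution across independent servers, and server accountability on top.

```python
import secrets

# Minimal Shamir-style secret sharing over a prime field: any `threshold`
# fragments reconstruct the secret; fewer reveal nothing about it.
PRIME = 2**127 - 1  # Mersenne prime; the field must exceed the secret

def split(secret, shares, threshold):
    """Split secret into `shares` fragments; any `threshold` of them suffice."""
    # Random polynomial of degree threshold-1 with the secret as constant term.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]

    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

    return [(x, poly(x)) for x in range(1, shares + 1)]

def reconstruct(fragments):
    """Lagrange interpolation at x=0 recovers the constant term (the secret)."""
    total = 0
    for i, (xi, yi) in enumerate(fragments):
        num, den = 1, 1
        for j, (xj, _) in enumerate(fragments):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total
```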
A prototype end-to-end implementation of this protocol is available now on GitHub, alongside a white paper that outlines in detail how someone like Psst could put it to use.
Trusted Execution Environments
In addition to cryptographic and distributed approaches, we also explored using Trusted Execution Environments (TEEs): secure enclaves within a computer’s processor that isolate sensitive computations from the rest of the system. Intel SGX and AMD’s SEV (Secure Encrypted Virtualization) are example implementations of TEEs. TEE hardware stores a secure computation’s dynamic state in memory that is encrypted with keys that are not revealed to the server’s operating system or to the server’s human administrators; the memory is also integrity protected with cryptographic hashes, meaning that inappropriate modifications can also be detected by the TEE hardware.
A server-side TEE allows a remote client to verify what software the TEE is running, such that a client will only share sensitive data with trusted server-side TEE code. This verification ability, called remote attestation, means that whistleblowers can guarantee that their data is entering a trusted disclosure-matching enclave. The matching algorithm would run inside the enclave, inaccessible even to the machine’s owner, with the enclave state having both confidentiality (no one can inspect the raw submissions) and integrity (no one can tamper with submission data or the matching code).
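Conceptually, the client-side decision in remote attestation reduces to comparing the enclave's reported code measurement against an expected value. The sketch below is a toy with hypothetical names: real attestation also verifies a hardware-rooted signature over the quote and several additional fields.

```python
import hashlib

# Toy sketch: in real remote attestation the quote is signed by vendor
# hardware keys and covers more fields; here the client checks only that the
# reported code measurement matches the audited build it expects.
EXPECTED_MEASUREMENT = hashlib.sha256(b"audited-matching-code-v1").hexdigest()

def client_trusts(enclave_quote):
    """Share sensitive data only with an enclave running the expected code."""
    return enclave_quote.get("measurement") == EXPECTED_MEASUREMENT
```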
TEEs require trust in the hardware vendors that manufacture and provision the TEE hardware. Some vendors have a history of vulnerabilities, and while many of those (known) flaws can be mitigated, relying on a single provider introduces a central point of risk. Nonetheless, by combining TEEs from multiple vendors, and combining them with distributed cryptographic systems like the Rendezvous protocol, we can build resilient, privacy-preserving systems that don’t rely on any one actor to behave perfectly.
Looking to the Future
This work is not a product—it’s infrastructure. We’ve built modular components that can be adopted, forked, or extended by the communities that need them most. We’re interested in hearing from folks who might find the infrastructure helpful, such as:
newsrooms that want to empower source-safe reporting;
labor and advocacy orgs that want to support collective disclosure;
academic or legal researchers working on privacy, anonymity, or digital identity;
platforms and toolmakers looking to add privacy-preserving identity verification to existing workflows.
If you’re building systems where standard authentication systems don’t work, or where safety in numbers could unlock a story that would otherwise stay hidden, we’d love to hear from you at asml@cyber.harvard.edu.