The Privacy Compliance Toolkit: How Privacy Protocols Actually Stay AML-Compliant
Most public arguments about privacy and compliance get stuck on a false binary — either you have privacy and regulators hate you, or you have compliance and users get surveilled. Anyone who has actually shipped a privacy protocol knows it does not work like that.
Compliance is not one thing. It is six or seven different problems, each with its own toolkit, each with different trade-offs. A privacy protocol that takes compliance seriously does not pick one tool and call it solved. It assembles a stack.
This article walks through the building blocks: what each one does, what it does not do, and where the open problems still are. The goal is to give builders, integrators, and compliance officers a clearer map of the space, so the next conversation can start somewhere more useful than “but how do you stop bad actors?”
Why compliance needs to be unbundled
A regulator looking at a privacy protocol cares about a small number of concrete questions. Can sanctioned addresses deposit funds? If known stolen funds enter the protocol, can they be stopped before they reach an honest user? If illicit funds are discovered after the fact, can they be removed from the privacy set? Can a compliance officer or auditor get visibility when legally required? Does the protocol create a clear audit trail?
These are different questions, and a tool that answers one does not automatically answer the others. Pre-transaction screening is no help against funds that turn out to be stolen six hours after the deposit cleared. A delay window cannot catch a hack that goes unnoticed for two weeks. Viewing keys are useful for audits and irrelevant to blocking illicit deposits in the first place.
Most public discussion of privacy compliance treats all of this as one problem. Once you split it apart, the design space gets a lot clearer.
The building blocks
There are six widely used compliance primitives in privacy protocols today. Each one maps to a different question above.
1. Pre-transaction screening (KYT at the door)
The most basic block. Before a deposit is accepted into the privacy set, the source address is screened against a real-time risk database. If the address is sanctioned, tied to a known hack, or flagged by the analytics provider, the deposit is rejected.
This is the same model that Wallet-as-a-Service providers use through their integration with Global Ledger, and it is the model Curvy uses for entry into the privacy aggregator. Global Ledger’s KYT engine evaluates the source against a database of attributed addresses and returns a risk score in roughly 500ms, fast enough that the user does not feel it.
Pre-transaction screening solves the easy case: known-bad funds at the moment of deposit. It is cheap, fast, and well understood.
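As a concrete sketch, the deposit gate an integrator runs looks roughly like this. The `screenAddress` endpoint, the `KytResult` shape, and the threshold value are illustrative assumptions, not Global Ledger's actual API:

```typescript
// Hypothetical KYT response shape; real providers differ.
interface KytResult {
  riskScore: number; // 0 (clean) .. 100 (confirmed illicit)
  flags: string[];   // e.g. ["sanctions", "hack", "mixer"]
}

// Stand-in for a KYT provider call. The endpoint is an assumption,
// not Global Ledger's actual API.
async function screenAddress(address: string): Promise<KytResult> {
  const res = await fetch(`https://kyt.example.com/v1/screen/${address}`);
  return (await res.json()) as KytResult;
}

const RISK_THRESHOLD = 70; // policy knob; integrators tune this per use case

async function gateDeposit(sourceAddress: string): Promise<void> {
  const result = await screenAddress(sourceAddress);
  if (result.flags.includes("sanctions") || result.riskScore >= RISK_THRESHOLD) {
    // Reject before the funds ever enter the privacy set.
    throw new Error(`deposit rejected: risk ${result.riskScore} [${result.flags.join(", ")}]`);
  }
  // Otherwise the deposit proceeds into the privacy set as normal.
}
```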
What it does not solve is the recency problem. If a hack happened an hour ago and the stolen address is not yet in the analytics provider’s blocklist, screening returns a clean score and the funds get in. Which brings us to the next block.
2. Delayed or extended screening
A deposit delay window, typically minutes to an hour, gives analytics vendors time to update their attribution data before the funds become spendable inside the privacy set.
Curvy’s approach, called Extended Screening, runs asynchronously on the same principle. Funds sent to a Curvy user are temporarily locked while the screening runs. The recipient sees the funds appear as pending, no action required, and they unlock once the extended check clears. From the user’s perspective there is almost nothing to notice.
The block closes the gap between a hack happening and the analytics database catching up; a longer window catches more of those cases.
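A minimal sketch of that lifecycle, reusing the same hypothetical KYT call as above; the state names and the window length are illustrative, not Curvy's actual internals:

```typescript
type NoteState = "pending" | "spendable" | "rejected";

interface IncomingNote {
  id: string;
  sourceAddress: string;
  receivedAt: number; // unix ms
  state: NoteState;
}

const EXTENDED_WINDOW_MS = 60 * 60 * 1000; // one hour; a policy choice, not a protocol constant
const RISK_THRESHOLD = 70;

// Stand-in for the same hypothetical KYT call as in the first sketch.
async function screenAddress(address: string): Promise<{ riskScore: number }> {
  const res = await fetch(`https://kyt.example.com/v1/screen/${address}`);
  return res.json();
}

// Re-screen once the window has elapsed, so the analytics provider has had
// time to attribute a freshly discovered hack to the source address.
async function settlePendingNote(note: IncomingNote): Promise<IncomingNote> {
  if (Date.now() - note.receivedAt < EXTENDED_WINDOW_MS) return note; // still locked
  const { riskScore } = await screenAddress(note.sourceAddress);
  return { ...note, state: riskScore >= RISK_THRESHOLD ? "rejected" : "spendable" };
}
```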
It does not solve hacks that go unnoticed for longer than the window. The Upbit incident in November 2025 is the case everyone in this space points to: a hacker laundered stolen funds through Railgun because the deposit address was not yet on any blocklist when the one-hour delay expired. The funds were inside the pool, indistinguishable from honest user balances, before anyone realized.
That failure mode is what drove the next wave of compliance design.
3. Association sets (Privacy Pools)
Privacy Pools, drawing on the original paper by Buterin, Illum, and their co-authors, takes a different angle. Instead of screening at the door, users prove that their withdrawal is associated with a chosen set of “good” deposits, and not with any flagged ones.
The clever part is that the user controls which association set they prove against. A privacy-conscious user can prove association with a wide set. A compliance-conscious user can prove association with a narrower whitelist. The protocol does not pick a single policy; it gives users the cryptographic tools to make their own provenance claims.
Association sets are good at provable disassociation from bad actors. A withdrawal can carry a cryptographic proof that the funds are not part of a flagged subset.
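As a sketch of the data flow, with a toy in-memory prover standing in for the real ZK circuit (every name and value below is illustrative):

```typescript
type Commitment = string;

// Toy data: in practice these come from chain state and an analytics feed.
const allDeposits: Commitment[] = ["c1", "c2", "c3", "c4"];
const flagged = new Set<Commitment>(["c3"]);
const myDeposit: Commitment = "c2";

interface AssociationProofInput {
  myDeposit: Commitment;        // the user's own deposit commitment
  associationSet: Commitment[]; // deposits the user claims association with
}

// Toy stand-in for a ZK prover: a real one emits a succinct proof that
// myDeposit is in associationSet without revealing which element it is.
function proveMembership(input: AssociationProofInput): Uint8Array {
  if (!input.associationSet.includes(input.myDeposit)) {
    throw new Error("cannot prove membership in a set that excludes the deposit");
  }
  return new Uint8Array(32); // placeholder proof bytes
}

// The user, not the protocol, picks the policy by picking the set:
const wideSet = allDeposits.filter((c) => !flagged.has(c)); // privacy-leaning
const proof = proveMembership({ myDeposit, associationSet: wideSet });
// A verifier checks the proof against the set's Merkle root and learns only
// "this withdrawal descends from some deposit in this set", nothing more.
```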
The honest limitation is that this was originally a deposit-and-withdraw scheme. Private transfers between users inside the pool are not part of the original Privacy Pools v1 design, though v2 work explores how to enable them.
4. Retroactive deposit address tainting
This is the building block that directly answers the Upbit failure mode. If illicit funds slip past pre-screening and the delay window, you need a way to flag them after they are already inside the privacy set, and stop them from being aggregated, transferred, or withdrawn.
Here is how the mechanism works. Every note (the UTXO unit inside the privacy aggregator) carries an encrypted lineage of the deposit addresses that contributed to it. When a compliance officer adds a deposit address to an on-chain blocklist, every note descended from that deposit becomes unable to perform any further actions inside the protocol. The blocklist is implemented as a Sparse Merkle Tree for efficient non-inclusion proofs. The only path out for the holder of a tainted note is to withdraw to a known address, sacrificing privacy for that exit.
Curvy implements this as Deposit Address Tainting. The tricky part is that the lineage itself has to stay private; otherwise the deposit IDs become permanent markers that destroy fungibility, as Michael Connor’s recent ethresear.ch post on tracing bad funds through shielded pools spelled out in detail. Curvy’s design encrypts the OriginAddresses field with the note’s shared secret, so only the owner can prove non-inclusion against the blocklist. Outside observers see nothing.
There is a real engineering constraint here. Without a cap, the lineage list grows exponentially with each transfer. Connor’s post calls this out explicitly, and earlier research at EYBlockchain hit the same wall in 2019. Curvy caps the OriginAddresses set at 16 entries. Once a note’s lineage hits that cap, the user has to do a regular withdrawal before continuing. This is the standard withdrawal flow, where the user can pick any address they like, and is distinct from a privacy-sacrificing rage-quit. Sixteen is a deliberate balance: large enough that ordinary users rarely hit it, small enough that proving non-inclusion stays tractable inside a ZK circuit.
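A sketch of that bookkeeping, with the cap enforced when notes are combined. This is a plain-TypeScript reading of the design described above: the lineage is shown in plaintext for clarity, whereas in the real protocol it is encrypted, and the spendability check is a ZK non-inclusion proof against the Sparse Merkle Tree, not a set lookup.

```typescript
const ORIGIN_CAP = 16; // Curvy's documented cap on lineage entries

interface Note {
  value: bigint;
  // Deposit addresses this note descends from. In the real protocol this
  // field is encrypted under the note's shared secret; plaintext here for clarity.
  originAddresses: Set<string>;
}

// Merging two notes unions their lineages; that union is what grows with
// every transfer and what the cap keeps bounded.
function mergeNotes(a: Note, b: Note): Note {
  const origins = new Set([...a.originAddresses, ...b.originAddresses]);
  if (origins.size > ORIGIN_CAP) {
    // The holder must do a regular withdrawal (to any address they choose)
    // and re-deposit before this note can keep circulating.
    throw new Error("lineage cap exceeded: withdraw before continuing");
  }
  return { value: a.value + b.value, originAddresses: origins };
}

// Taint check: spendable only if no ancestor deposit is on the blocklist.
// In the real design this is a non-inclusion proof inside a ZK circuit.
function isSpendable(note: Note, blocklist: Set<string>): boolean {
  return [...note.originAddresses].every((addr) => !blocklist.has(addr));
}
```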
The block solves the retroactive case: illicit funds can be removed from the privacy set even when they entered before anyone knew they were bad.
It does not solve a real-world question: honest users whose notes happen to be downstream of a bad deposit will discover their funds are partially tainted only when the blocklist is updated. The tainted portion can be cleanly separated from the good portion, but the question of whether someone who unknowingly received stolen money gets to keep it is a legal one. There is at least one Scottish precedent that says they do. Other jurisdictions vary.
5. Viewing keys for selective disclosure
Most privacy protocols support some form of viewing key, a separate cryptographic key that grants read-only visibility into a user’s transactions. Aleo’s account model includes account view keys and transaction view keys. Zcash has had viewing keys for years. Integrators of Curvy can hold their users’ viewing keys to support audits.
The use case is real. A user under investigation can grant a regulator a viewing key to satisfy a subpoena. An institution operating a private wallet can give its compliance team read access without exposing data to the public. A merchant integration can audit its own settlement flows.
Viewing keys are good for selective disclosure on demand and for producing audit trails for a specific party without making everything public.
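In code terms, the capability split looks roughly like the sketch below; the key shapes and function names are illustrative assumptions, not Aleo's, Zcash's, or Curvy's concrete schemes:

```typescript
// Illustrative capability split; the shapes and names are assumptions,
// not any specific protocol's key scheme.
interface SpendKey { kind: "spend"; secret: Uint8Array } // can spend and view
interface ViewKey  { kind: "view";  secret: Uint8Array } // read-only, derivable from SpendKey

interface DecryptedTx {
  amount: bigint;
  counterparty: string;
  memo: string;
}

// A view key is enough to reconstruct the owner's transaction history...
function decryptHistory(vk: ViewKey, ciphertexts: Uint8Array[]): DecryptedTx[] {
  // Stub: a real implementation trial-decrypts each on-chain ciphertext
  // with vk and keeps the ones that decrypt successfully.
  return [];
}
// ...but there is deliberately no spend(viewKey, ...) anywhere in the API:
// handing an auditor a ViewKey grants a read-only trail, never control of funds.
```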
The limitation, and it is an important one, is that they are a post-hoc tool. Regulators in most major jurisdictions are not satisfied with “we can show you the data later if you ask” as a compliance answer. They want risk evaluation before funds move. Viewing keys belong in the toolkit, but they cannot carry the whole compliance story on their own. This is the part of the public conversation that has shifted significantly in the last twelve months.
6. Optional KYC at the registry layer
Some privacy protocols are pseudonymous all the way down. Others let integrators ask for identity verification at the point where a user registers their handle.
Curvy’s Name Registry includes an optional KYC hook: an integrator deploying Curvy can require that a user complete KYC before registering a Curvy name. The hook is configurable per integrator. A consumer payments app might require it. A developer-facing SDK might not.
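A sketch of what that per-integrator hook could look like; the field and function names are hypothetical, not Curvy's actual Name Registry interface:

```typescript
// Hypothetical per-integrator registry configuration.
interface RegistryConfig {
  requireKyc: boolean;
  // Invoked before a name is registered when requireKyc is true.
  kycCheck?: (userId: string) => Promise<boolean>;
}

// Stand-in for the integrator's own KYC provider call.
async function verifyWithKycProvider(userId: string): Promise<boolean> {
  return true; // stub
}

const consumerPaymentsApp: RegistryConfig = {
  requireKyc: true,
  kycCheck: (userId) => verifyWithKycProvider(userId),
};

const developerSdk: RegistryConfig = {
  requireKyc: false, // pseudonymous registration allowed
};

async function registerName(name: string, userId: string, cfg: RegistryConfig) {
  if (cfg.requireKyc && !(await cfg.kycCheck?.(userId))) {
    throw new Error("KYC required before registering a name");
  }
  // ...proceed with on-chain registration of `name`.
}
```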
The block solves the problem regulated VASPs and institutional integrators face: getting the identity layer they need to satisfy the Travel Rule and similar obligations, without forcing every privacy protocol user globally to do KYC.
It does not cover anonymous usage outside the registry. Like every other block here, it is optional and integrator-dependent.
What is still unsolved
The toolkit is good. It is not complete. Anyone selling it as complete is not paying attention.
Chargebacks and dispute resolution. Traditional payments have reversibility built in. A fraudulent charge gets disputed, the merchant or the network absorbs the loss, the user is made whole. Privacy protocols inherit blockchain’s irreversibility. Tainting can stop further movement of bad funds, but it does not refund the user who unknowingly received them. There is no good cryptographic answer to this yet, and the legal answer varies by jurisdiction.
Jurisdictional clarity for decentralized infrastructure. Who is the “operator” of a permissionless privacy protocol? The team that wrote the code? The smart contracts themselves? The integrator deploying them? The compliance officer adding addresses to a tainting blocklist? These questions are not resolved. Different jurisdictions are going to reach different answers. The Tornado Cash sanctions and the subsequent legal proceedings made this concrete in a way the industry is still digesting.
Multiple, conflicting blocklist maintainers. A protocol used across jurisdictions may need to honor blocklists from different authorities that do not agree. The current generation of designs assumes a single blocklist maintainer. Real-world deployment will eventually need pluralism here.
What this means for builders
If you are integrating a privacy protocol into a wallet, a payments app, or an agentic system, the practical takeaway is that you should not expect a single feature to handle compliance. You should expect a stack.
A reasonable stack for a consumer-facing integration looks something like this. Pre-transaction screening at the deposit. An extended screening window for funds sent to your users. Retroactive tainting available for funds that turn out to be bad after the fact. Viewing keys for compliance officer access. Optional KYC at the registration layer for regulated jurisdictions. And documentation that lets your auditors trace each of these.
For an institutional or B2B integration, the priorities shift. KYC becomes mandatory. Viewing key custody is more important. The threshold for screening risk scores moves down. For an agentic or automated payments use case, the screening latency becomes critical: 500ms is workable, five seconds is not.
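Expressed as configuration, those two profiles might look something like this; every field name below is illustrative, not an actual SDK surface:

```typescript
// Illustrative compliance-stack configuration; not a real SDK surface.
interface ComplianceStack {
  preScreening: { enabled: boolean; riskThreshold: number };
  extendedScreeningWindowMs: number;
  retroactiveTainting: boolean;
  viewingKeyCustody: "user" | "integrator";
  kycAtRegistration: boolean;
}

const consumerProfile: ComplianceStack = {
  preScreening: { enabled: true, riskThreshold: 70 },
  extendedScreeningWindowMs: 60 * 60 * 1000, // one hour
  retroactiveTainting: true,
  viewingKeyCustody: "user",
  kycAtRegistration: false, // flip on in regulated jurisdictions
};

const institutionalProfile: ComplianceStack = {
  preScreening: { enabled: true, riskThreshold: 40 }, // stricter threshold
  extendedScreeningWindowMs: 24 * 60 * 60 * 1000,     // longer window
  retroactiveTainting: true,
  viewingKeyCustody: "integrator", // compliance team holds view keys
  kycAtRegistration: true,         // mandatory for B2B
};
```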
Curvy’s design supports all building blocks above as configurable hooks, with Global Ledger as the integrated KYT provider. The full architecture, including the encrypted lineage approach for retroactive tainting, is documented at docs.curvy.box.
What will determine whether privacy protocols become a real infrastructure for serious money is not which tool they pick. It is whether they assemble the toolkit honestly, name what is and is not solved, and give integrators the building blocks to put together a compliance posture that fits their actual use case.