Thinking in Constraints: Notes from Building Zero-Knowledge Systems
Lucia Romano
The first mistake most engineers make when approaching zero-knowledge proofs is assuming they are learning a new cryptographic tool. In reality, what they are encountering is a different computational model.
Zero-knowledge systems do not extend conventional programming paradigms; they replace them. What looks, on the surface, like an unfamiliar stack — Circom, Halo2, custom DSLs — is actually the visible layer of a deeper shift: from executing instructions to satisfying constraints.
At the lowest level, every proof system I have worked with reduces to the same structure. You define a relation over some public inputs and private witnesses, and you prove that this relation holds. The proving system enforces that all constraints are satisfied; nothing more, nothing less. There is no runtime, no hidden control flow, no implicit behavior. If a condition is not encoded as a constraint, it does not exist.
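To make that concrete, here is a minimal Circom sketch of such a relation (the template and signal names are mine, chosen for illustration): the prover claims to know two private factors of a public product. The single constraint below is the entire statement; anything not written as a constraint is not part of the proof.

```circom
pragma circom 2.0.0;

// Minimal sketch: the relation "I know private a and b such that a * b = c".
// Inputs to main are private by default; outputs are public.
template Multiplier() {
    signal input a;   // private witness
    signal input b;   // private witness
    signal output c;  // public value the verifier sees

    c <== a * b;      // assigns the witness value AND adds the constraint a*b - c = 0
}

component main = Multiplier();
```

Nothing here rules out a = 1, for instance; if trivial factorizations need to be excluded, that exclusion has to be written as a constraint too.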
This sounds obvious in theory, but it has practical consequences that are easy to underestimate.
In conventional software, correctness is often distributed across multiple layers: control flow, state transitions, invariants that are sometimes enforced and sometimes assumed. In a zero-knowledge circuit, that ambiguity is not allowed. The system only knows what you explicitly constrain. As a result, the burden shifts from writing logic that executes correctly to designing a system that cannot be satisfied incorrectly.
This is where many implementations fail — not because the cryptography is wrong, but because the constraints are incomplete.
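The classic example is the gap between computing a witness and constraining it. In Circom, the <-- operator assigns a value without adding any constraint. The sketch below is essentially the IsZero gadget from circomlib: the hint on inv is only sound because of the two constraints that follow it.

```circom
pragma circom 2.0.0;

// Sketch of the classic gap: <-- computes a witness value but adds NO constraint.
// This mirrors circomlib's IsZero; it is sound only because the two constraints
// after the hint force inv to behave.
template IsZero() {
    signal input in;
    signal output out;
    signal inv;

    inv <-- in != 0 ? 1 / in : 0;   // unconstrained hint: the prover can put anything here
    out <== -in * inv + 1;          // constraint: out = 1 - in*inv
    in * out === 0;                 // constraint: forces out = 0 whenever in != 0
}

component main = IsZero();
```

Drop the final constraint and a dishonest prover can set inv to 0 and prove out = 1 for a nonzero input. The cryptography is untouched; the statement is simply weaker than intended.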
The second shift is performance-related, but it is not about optimizing code in the usual sense. In zero-knowledge systems, performance is primarily a function of constraint complexity. Every additional constraint increases proving cost, often non-linearly depending on the proving system. Operations that are trivial in standard computation — division, conditional branching, bit decomposition — become expensive, and each use of them has to be justified.
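Bit decomposition is the usual illustration. Splitting a field element into n bits costs roughly n + 1 constraints: one booleanity check per bit plus one recomposition check. The sketch below follows the standard Num2Bits pattern; the exact cost depends on the proving system and on compiler optimizations.

```circom
pragma circom 2.0.0;

// Sketch: decomposing a field element into n bits, in the style of
// circomlib's Num2Bits. Each bit costs one constraint, plus one more
// to tie the bits back to the input.
template Num2Bits(n) {
    signal input in;
    signal output out[n];
    var lc = 0;
    var e2 = 1;

    for (var i = 0; i < n; i++) {
        out[i] <-- (in >> i) & 1;       // witness hint: extract bit i
        out[i] * (out[i] - 1) === 0;    // constraint: each bit is 0 or 1
        lc += out[i] * e2;
        e2 = e2 + e2;
    }
    lc === in;                           // constraint: the bits recompose the input
}

component main = Num2Bits(32);
```

A comparison or a range check built on top of this inherits that cost, which is why "just check that x fits in 32 bits" is not a throwaway line inside a circuit.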
This changes how you design even simple functionality. You start asking different questions. Not “is this readable?” or “is this modular?”, but “is this minimal?” and “can this be expressed with fewer constraints?” The tradeoffs resemble hardware design more than software engineering. Redundancy is costly, abstraction has a price, and elegance is measured in efficiency rather than clarity.
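A small example of that mindset is branching. There is no control flow inside a circuit, so "if s then a else b" becomes a single select constraint, on the assumption that s is constrained to be boolean somewhere else. The template below is an illustrative sketch, not library code.

```circom
pragma circom 2.0.0;

// Sketch: a branch becomes a select. One quadratic constraint replaces
// "if (s) out = a; else out = b;", assuming s is already constrained to 0 or 1.
template Select() {
    signal input s;   // selector, assumed boolean
    signal input a;
    signal input b;
    signal output out;

    out <== s * (a - b) + b;   // s = 1 gives a, s = 0 gives b
}

component main = Select();
```

Note what is not here: both branches are always paid for, because in a circuit every path exists at once.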
An interesting consequence of this is that idiomatic code and optimal circuits rarely align. A circuit that is easy to read is often not the one you want to deploy. Over time, you develop an intuition for where structure can be compressed, where signals can be reused, and where entire classes of checks can be removed without weakening the statement being proven. That intuition is difficult to teach and usually comes from debugging performance bottlenecks rather than studying theory.
Witness generation is another area that tends to be underestimated. There is a tendency to focus on the proving system itself — its security assumptions, its polynomial commitments, its recursion capabilities. In practice, however, generating the witness can dominate execution time, especially in systems that integrate with general-purpose languages like Rust. Poorly structured witness logic introduces overhead that is not always visible at the circuit level but becomes evident in production environments.
Tooling choices reflect these tradeoffs. Circom offers a relatively direct mapping between logic and constraints, which makes it effective for prototyping and for systems where the circuit structure is not deeply nested. Halo2, on the other hand, exposes a more expressive abstraction, allowing for fine-grained control over how constraints are constructed and reused. That flexibility is powerful, but it comes with a cognitive cost. You are no longer just writing constraints; you are designing how those constraints are composed.
The distinction matters when systems scale. Small circuits tolerate inefficiencies. Large systems do not.
One recurring pattern I have observed is that engineers bring assumptions from traditional software into zero-knowledge design. They prioritize modularity in ways that duplicate constraints, or they encode logic defensively, adding checks that are redundant within the proof model. The result is a circuit that is correct but unnecessarily expensive. Over time, the discipline becomes one of subtraction: identifying what is strictly required for the statement to hold, and eliminating everything else.
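A sketch of that pattern, with hypothetical template names: each instance of a "self-contained" module re-checks that the same selector is boolean, so the circuit pays for the same fact twice. Constraining s once at the boundary proves exactly the same statement with fewer constraints.

```circom
pragma circom 2.0.0;

// Sketch: modular habits that duplicate constraints. SelectChecked is written
// defensively, so every instance re-pays the booleanity check on a selector
// that the enclosing circuit already controls.
template SelectChecked() {
    signal input s;
    signal input a;
    signal input b;
    signal output out;

    s * (s - 1) === 0;          // defensive check inside the "module"
    out <== s * (a - b) + b;
}

template Pair() {
    signal input s;
    signal input a[2];
    signal input b[2];
    signal output out[2];

    component m0 = SelectChecked();   // first instance pays the check
    m0.s <== s;
    m0.a <== a[0];
    m0.b <== b[0];
    out[0] <== m0.out;

    component m1 = SelectChecked();   // second instance pays it again for the same s
    m1.s <== s;
    m1.a <== a[1];
    m1.b <== b[1];
    out[1] <== m1.out;
}

component main = Pair();
```

Correct, but more expensive than the statement requires.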
This leads to a more precise formulation of the problem you are solving. Instead of asking “how do I implement this feature?”, the question becomes “what is the minimal statement I need to prove for this feature to be valid?” The difference is subtle but important. It shifts the focus from implementation to specification.
Not all systems benefit from this approach. Zero-knowledge proofs introduce overhead, both in development complexity and computational cost. Their value appears when you need to compress trust assumptions, enforce privacy, or make computation verifiable across boundaries where direct execution is not feasible. In those contexts, the constraint model becomes an advantage rather than a limitation.
What makes zero-knowledge engineering interesting is not the novelty of the cryptography, but the discipline it imposes. It forces explicitness. It removes ambiguity. It exposes inefficiencies that would otherwise remain hidden in layers of abstraction.
For engineers willing to engage with that model, the challenge is not learning new tools, but unlearning familiar habits. The systems we build in this space are not just different in implementation; they are different in how they are conceived.
And that difference is where most of the real work begins.