
In the blockchain industry, a theme commonly discussed is the blockchain trilemma: between decentralisation, security, and scalability, it is difficult to optimise for any two of these properties without sacrificing the third.
Decentralisation ensures increased availability and minimises censorship: no single entity can control the protocol, because acquiring the necessary computational power is prohibitively expensive.
Security is the ability to withstand attacks as well as maintain data integrity. Transactions are immutable and final.
Scalability is the capacity to handle high volumes of traffic with low latency. Transactions need to be settled quickly, and in a way that does not limit participation to a small subpopulation of the internet.
Ethereum and Bitcoin are both decentralised and secure but have sacrificed scalability to achieve this. Both have low rates of transactions per second when compared with other protocols like Solana. Ethereum has recently started to scale, as Vitalik has highlighted, but limits remain even with the increased gas consumed per block.
Most layer-two chains offer security and scalability but lack decentralisation (a weakness Vitalik has also recently highlighted). Their heavy reliance on centralised entities puts their availability at risk, especially if those entities all use the same cloud infrastructure, such as AWS or Google Cloud.
All three of these pillars rely in part on the relatively small keys, signatures, and encrypted data offered by modern cryptography.
Post-quantum cryptography does not offer that same flexibility: the cost of increased cryptographic security is increased computational difficulty, which means larger keys, larger signatures, and larger encrypted data.
Last week, two papers were published that brought quantum resistance to the forefront for everyone working in the blockchain industry. The papers did not indicate when a cryptographically relevant quantum computer will be built, but they did suggest that the timeline has accelerated.
Whether you believed it would happen in years or decades, both papers showed improvements to Shor's algorithm. There are suggestions that it could happen as soon as 2029, which would be before the first of the migration deadlines set by NIST.
Hash-based schemes are still considered relatively safe, since Grover's algorithm only offers a quadratic speedup for brute-force search against a hash function, effectively halving its security level rather than breaking it.
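To make that concrete, the rule of thumb can be sketched in a few lines: Grover's algorithm searches N possibilities in roughly sqrt(N) steps, so an n-bit hash retains about n/2 bits of preimage security against a quantum attacker.

```python
# Grover searches N possibilities in ~sqrt(N) steps, so an n-bit hash
# keeps roughly n/2 bits of brute-force (preimage) security.
def quantum_preimage_security_bits(hash_output_bits: int) -> int:
    return hash_output_bits // 2

for bits in (160, 256, 512):
    print(f"{bits}-bit hash -> ~{quantum_preimage_security_bits(bits)} bits of quantum security")
```

This is why a 256-bit hash, which still offers around 128 bits of quantum security, is considered safe, while shorter outputs become questionable.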
“eXtended Merkle Signature Scheme” (“XMSS”) is a contender for replacing the cryptography used in the execution layer, but it is stateful: each leaf in its Merkle tree can be used only once, and it is up to the signer to keep track of which leaves have been used. This makes keys difficult to share between multiple devices, since the state would need to be synchronised; using any leaf twice breaks the security model and makes the entire account vulnerable.
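A minimal sketch of the statefulness problem, with the actual signing stubbed out (the placeholder signature stands in for a real one-time signature), shows why the leaf counter itself becomes security-critical:

```python
# Sketch of XMSS-style statefulness: the signer must persist which leaf
# indices have been used, and reusing a leaf is a fatal error. Signing
# itself is mocked; only the state management is illustrated.
class StatefulSigner:
    def __init__(self, num_leaves: int):
        self.num_leaves = num_leaves
        self.next_index = 0  # this counter IS the critical state

    def sign(self, message: bytes) -> tuple[int, bytes]:
        if self.next_index >= self.num_leaves:
            raise RuntimeError("key exhausted: every one-time leaf has been used")
        index = self.next_index
        # The updated index must be durably persisted BEFORE the signature
        # is released; a crash here, or a second device holding a stale
        # copy of the counter, leads to leaf reuse and key compromise.
        self.next_index += 1
        placeholder_sig = b"sig-for-leaf-%d" % index  # stand-in for a real one-time signature
        return index, placeholder_sig
```

If two devices each hold a copy of `next_index` and one falls out of sync, both will happily sign with the same leaf, which is exactly the failure mode described above.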
SPHINCS+ is another hash-based contender that could be adopted by Bitcoin because of its small key sizes (smaller than existing keys); however, its signatures are massive compared with the existing cryptography as well as with other post-quantum algorithms like XMSS, Dilithium, and Falcon. Verification times for SPHINCS+ are also a little too slow when compared with those alternatives.
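For a rough sense of scale, approximate sizes from the NIST round-3 parameter sets can be set against today's ECDSA (exact figures vary by parameter choice, and the ECDSA numbers assume a compressed key and DER-encoded signature):

```python
# Approximate public-key and signature sizes in bytes, from the NIST
# round-3 parameter sets; ECDSA shown for comparison. Values are
# indicative only and vary with parameter choices.
sizes = {
    # scheme: (public_key_bytes, signature_bytes)
    "ECDSA (secp256k1)": (33,   71),    # compressed key, DER signature (approx.)
    "Falcon-512":        (897,  666),   # signature size is an average
    "Dilithium2":        (1312, 2420),
    "SPHINCS+-128s":     (32,   7856),
}

for scheme, (pk, sig) in sizes.items():
    print(f"{scheme:>18}: pk {pk:>5} B, sig {sig:>5} B")
```

The pattern is visible at a glance: SPHINCS+ has the smallest keys but by far the largest signatures, while the lattice schemes trade larger keys for far more compact signatures.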

Dilithium is currently favoured to replace elliptic-curve cryptography in the execution layer of Ethereum. It has larger keys than the hash-based schemes but a smaller signature size and much faster signing and verification times. It is built on lattice problems (“learning with errors”), and Falcon is lattice-based as well, though it uses a different lattice construction. The mathematics of these methods is still comparatively young and has not been as rigorously tested as hash- and elliptic-curve-based approaches.
Falcon uses floating-point arithmetic to generate keys and signatures, which complicates implementation on hardware devices. Its key and signature sizes are smaller than Dilithium's, and it is possible to extract a public key from a signature; this makes it a good candidate for account abstraction, since instead of storing a public key in the contract, a hash of the key can be stored and then compared with the key extracted from a signature.
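The account-abstraction pattern just described can be sketched as follows. This is a minimal illustration, not a real Falcon integration: the public key that would be recovered from a Falcon signature is mocked, and only the commit-and-compare logic is shown.

```python
# Sketch of the hash-commitment pattern for account abstraction: the
# contract stores a 32-byte hash of the public key rather than the key
# itself; verification hashes the key recovered from a signature and
# compares. The Falcon recovery step is mocked here for illustration.
import hashlib

def commit_to_key(public_key: bytes) -> bytes:
    # The contract stores only this 32-byte digest, not the ~897-byte key.
    return hashlib.sha3_256(public_key).digest()

def verify_against_commitment(stored_hash: bytes,
                              recovered_public_key: bytes) -> bool:
    # Hash the key recovered from the signature and compare with the
    # on-chain commitment.
    return hashlib.sha3_256(recovered_public_key).digest() == stored_hash

# Usage with a mock key (a real flow would recover this from a Falcon
# signature; Falcon-512 public keys are 897 bytes):
pk = b"\x07" * 897
stored = commit_to_key(pk)
assert verify_against_commitment(stored, pk)
```

The design saves on-chain storage: the contract holds a fixed 32-byte commitment regardless of how large the post-quantum public key is.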
Falcon can also be vulnerable to side-channel attacks, where an external system measures factors such as the time required to sign a transaction in order to recover the private key; these attacks require a large number of signing events as data and can be mitigated by rotating keys.
Whether various blockchains adopt a hash-based approach, a lattice-based approach, or a combination of the two, the trilemma will be affected.
Whether key sizes or signature sizes increase, the result is more data moving between nodes and more data being stored on chain. This will make it more expensive to run a node, which could result in fewer nodes in the network and a reduction in decentralisation.
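A back-of-the-envelope calculation shows the scale of this overhead. Assuming approximate sizes (Dilithium2 versus ECDSA with a compressed key and DER signature) and, for illustration, one signature plus one public key transmitted per transaction:

```python
# Rough per-block overhead if Dilithium2 replaced ECDSA, for a block of
# 1,000 transactions. Sizes are approximate and the one-signature-plus-
# one-key-per-transaction model is a simplification.
ECDSA_SIG, ECDSA_PK = 71, 33      # bytes (DER signature, compressed key, approx.)
DIL_SIG, DIL_PK = 2420, 1312      # bytes (Dilithium2)

txs = 1_000
before = txs * (ECDSA_SIG + ECDSA_PK)
after = txs * (DIL_SIG + DIL_PK)
print(f"signature+key data per block: {before / 1024:.0f} KiB -> "
      f"{after / 1024:.0f} KiB (~{after / before:.0f}x)")
```

Even under these simplified assumptions the cryptographic payload grows by well over an order of magnitude, which is the bandwidth and storage pressure described above.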
The movement of larger chunks of data between nodes will also affect scalability, as will the longer times required to sign and verify that data. Some of the new methods are relatively fast at verification but are still slower than the existing pre-quantum methods in use.
It was recently reported that Solana “ran about 90% slower than it does today” when it was tested using quantum-safe signatures.
Security is going to be affected as well. Hardware devices may not support all of the methods in a safe manner: true security requires entropy, and small hardware devices may not be capable of generating true entropy or anything approaching it. Stateful systems will require careful controls to ensure there is no key exposure.
In the last ten years, a lot of progress has been made to address the blockchain trilemma. The existing systems may not be perfect but they are certainly more decentralised, scalable, and secure than they were years ago.
With the introduction of post-quantum cryptography, a lot of those gains may be lost. That could be the price of protecting digital assets.
The Post-Quantum Trilemma was originally published in Coinmonks on Medium, where people are continuing the conversation by highlighting and responding to this story.