Hoskinson might be wrong about the future of decentralized compute
Cardano’s founder recently made an argument about hyperscalers that needs to be addressed, says Fan.
By Leo Fan | Edited by Betsy Farber | Mar 14, 2026, 6:30 p.m.
The blockchain trilemma reared its head once more at Consensus in Hong Kong in February, putting Charles Hoskinson, the founder of Cardano, on the back foot: he had to reassure attendees that hyperscalers like Google Cloud and Microsoft Azure are not a risk to decentralization.
The point was made that major blockchain projects need hyperscalers, and that one shouldn’t be concerned about a single point of failure because:
- Advanced cryptography neutralizes the risk
- Multi-party computation distributes key material
- Confidential computing shields data in use
The argument rested on the idea that ‘if the cloud cannot see the data, the cloud cannot control the system,’ and it was left there due to time constraints.
But there’s an alternative to Hoskinson’s argument in favor of hyperscalers that deserves more attention.
MPC and Confidential Computing Reduce Exposure
This was something of a strategic bastion in Hoskinson’s argument – that technologies like multi-party computation (MPC) and confidential computing ensure that hardware providers cannot access the underlying data.
They are powerful tools. But they do not dissolve the underlying risk.
MPC distributes key material across multiple parties so that no single participant can reconstruct a secret. That meaningfully reduces the risk of a single compromised node. However, the security surface expands in other directions. The coordination layer, the communication channels and the governance of participating nodes all become critical.
Instead of trusting a single key holder, the system now depends on a distributed set of actors behaving correctly and on the protocol being implemented correctly. The single point of failure does not disappear. In fact, it simply becomes a distributed trust surface.
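A minimal sketch of additive secret sharing, one of the simplest MPC building blocks, makes the trade-off concrete: no single share reveals the key, but reconstruction now depends on every party and every channel behaving correctly. The modulus and function names here are illustrative, not drawn from any production MPC library.

```python
import secrets

P = 2**127 - 1  # a large prime modulus (illustrative choice)

def split_secret(secret: int, n: int) -> list[int]:
    """Additively share `secret` among n parties: shares sum to secret mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Reconstruction requires every share -- the distributed trust surface."""
    return sum(shares) % P

key = 0xC0FFEE
shares = split_secret(key, 5)
assert reconstruct(shares) == key
# Dropping even one share yields an unrelated value (with overwhelming probability):
assert reconstruct(shares[:-1]) != key
```

No individual share leaks the key, which is the security win; but the `reconstruct` step is exactly the coordination layer the article describes – it only works if all parties, and the protocol wiring between them, hold up.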
Confidential computing, particularly trusted execution environments, introduces a different trade-off. Data is encrypted during execution, which limits exposure to the hosting provider.
But Trusted Execution Environments (TEEs) rely on hardware assumptions. They depend on microarchitectural isolation, firmware integrity and correct implementation. Academic literature has repeatedly demonstrated that side-channel and architectural vulnerabilities continue to emerge across enclave technologies. The security boundary is tighter than in a traditional cloud deployment, but it is not absolute.
More importantly, both MPC and TEEs often operate on top of hyperscaler infrastructure. The physical hardware, virtualization layer and supply chain remain concentrated. If an infrastructure provider controls access to machines, bandwidth or geographic regions, it retains operational leverage. Cryptography may prevent data inspection, but it does not prevent throughput restrictions, shutdowns, or policy interventions.
Advanced cryptographic tools make specific attacks harder, but they still do not remove infrastructure-level failure risk. They simply replace a visible concentration with a more complex one.
The ‘No L1 Can Handle Global Compute’ Argument
Hoskinson made the point that hyperscalers are necessary because no single Layer 1 can handle the computational demands of global systems, referencing the trillions of dollars that have helped build such data centers.
Of course, Layer 1 networks were not built to run AI training loops, high-frequency trading engines, or enterprise analytics pipelines. They exist to maintain consensus, verify state transitions and provide durable data availability.
He is correct on what Layer 1 is for. But global systems mainly need results that anyone can verify, even if the computation happens elsewhere.
In modern crypto infrastructure, heavy computation increasingly happens off-chain. What matters is that results can be proven and verified on-chain. This is the foundation of rollups, zero-knowledge systems and verifiable compute networks.
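The asymmetry can be illustrated with a deliberately toy example that assumes nothing about any particular proof system: integer factoring stands in for expensive off-chain work, while the on-chain check is a single multiplication. The function names are hypothetical.

```python
def offchain_prove(n: int) -> tuple[int, int]:
    """Expensive: trial-division factoring stands in for heavy off-chain work."""
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return d, n // d
    raise ValueError("no nontrivial factor found")

def onchain_verify(n: int, witness: tuple[int, int]) -> bool:
    """Cheap: one multiplication, no matter how hard the witness was to find."""
    p, q = witness
    return p * q == n and 1 < p < n

N = 1_000_003 * 999_983
assert onchain_verify(N, offchain_prove(N))
```

The work of producing the witness can happen anywhere – including on centralized hardware – while anyone can run the cheap check. That is the sense in which global systems need verifiable results more than they need a Layer 1 that executes everything itself.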
Focusing on whether an L1 can run global compute misses the core issue of who controls the execution and storage infrastructure behind verification.
If computation happens off-chain but relies on centralized infrastructure, the system inherits centralized failure modes. Settlement remains decentralized in theory, but the pathway to producing valid state transitions is concentrated in practice.
The issue should be about dependency at the infrastructure layer, not computational capacity inside Layer 1.
Cryptographic Neutrality Is Not the Same as Participation Neutrality
Cryptographic neutrality is a powerful idea, and one Hoskinson leaned on in his argument. It means rules cannot be arbitrarily changed, hidden backdoors cannot be introduced and the protocol remains fair.
But cryptography runs on hardware.
That physical layer determines who can participate, who can afford to do so and who ends up excluded, because throughput and latency are ultimately constrained by real machines and the infrastructure they run on. If hardware production, distribution, and hosting remain centralized, participation becomes economically gated even when the protocol itself is mathematically neutral.
In high-compute systems, hardware is the game-changer. It determines cost structure, who can scale, and resilience under censorship pressure. A neutral protocol running on concentrated infrastructure is neutral in theory but constrained in practice.
The priority should shift toward cryptography combined with diversified hardware ownership.
Without infrastructure diversity, neutrality becomes fragile under stress. If a small set of providers can rate-limit workloads, restrict regions, or impose compliance gates, the system inherits their leverage. Rule fairness alone does not guarantee participation fairness.
Specialization Beats Generalization in Compute Markets
Competing with AWS is often framed as a question of scale, but this too is misleading.
Hyperscalers optimize for flexibility. Their infrastructure is designed to serve thousands of workloads simultaneously. Virtualization layers, orchestration systems, enterprise compliance tooling and elasticity guarantees – these features are strengths for general-purpose compute, but they are also cost layers.
Zero-knowledge proving and verifiable compute are deterministic, compute-dense, memory-bandwidth constrained, and pipeline-sensitive. In other words, they reward specialization.
A purpose-built proving network competes on proofs per dollar, proofs per watt and latency per proof. When hardware, prover software, circuit design, and aggregation logic are vertically integrated, efficiency compounds. Removing unnecessary abstraction layers reduces overhead. Sustained throughput on persistent clusters outperforms elastic scaling for narrow, constant workloads.
In compute markets, specialization consistently outperforms generalization for steady, high-volume tasks. AWS optimizes for optionality. A dedicated proving network optimizes for one class of work.
The economic structure differs as well. Hyperscalers price for enterprise margins and broad demand variability. A network aligned around protocol incentives can amortize hardware differently and tune performance around sustained utilization rather than short-term rental models.
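As a sketch with made-up numbers – none of these rates are measured benchmarks – the comparison reduces to simple unit economics: rental pricing carries margin and overhead per hour, while owned hardware spreads capital cost over sustained utilization.

```python
def amortized_cost_per_hour(capex: float, lifetime_hours: float,
                            utilization: float, opex_per_hour: float) -> float:
    """Owned hardware: spread capital cost over the hours it is actually busy."""
    return capex / (lifetime_hours * utilization) + opex_per_hour

def proofs_per_dollar(proofs_per_hour: float, cost_per_hour: float) -> float:
    """Proofs produced per dollar of infrastructure spend."""
    return proofs_per_hour / cost_per_hour

# Illustrative parameters only -- not measured data:
rented = proofs_per_dollar(proofs_per_hour=800,   # virtualization overhead included
                           cost_per_hour=12.0)    # on-demand rental rate
owned = proofs_per_dollar(
    proofs_per_hour=1000,                         # bare-metal, tuned proving pipeline
    cost_per_hour=amortized_cost_per_hour(
        capex=150_000, lifetime_hours=26_280,     # roughly three years of hours
        utilization=0.9, opex_per_hour=1.5))
assert owned > rented
```

Under these assumed inputs the dedicated cluster wins on proofs per dollar precisely because the workload is constant: amortization only beats rental when utilization stays high, which is the article’s point about steady, narrow workloads.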
The competition becomes about structural efficiency for a defined workload.
Use Hyperscalers, But Do Not Be Dependent on Them
Hyperscalers are not the enemy. They are efficient, reliable, and globally distributed infrastructure providers. The problem is dependence.
A resilient architecture uses major vendors for burst capacity, geographic redundancy, and edge distribution, but it does not anchor core functions to a single provider or a small cluster of providers.
Settlement, final verification and the availability of critical artifacts should remain intact even if a cloud region fails, a vendor exits a market, or policy constraints tighten.
This is where decentralized storage and compute infrastructure become a viable alternative. Proof artifacts, historical records and verification inputs should not be withdrawable at a provider’s discretion. Instead, they should live on infrastructure that is economically aligned with the protocol and structurally difficult to turn off.
Hyperscalers should be used as an optional accelerator rather than something foundational to the product. Cloud can still be useful for reach and bursts, but the system’s ability to produce proofs and persist what verification depends on should not be gated by a single vendor.
In such a system, if a hyperscaler disappears tomorrow, the network would only slow down, because the parts that matter most are owned and operated by a broader network rather than rented from a big-brand chokepoint.
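A failover sketch captures the architectural point – the hyperscaler is an accelerator in the path, not the only path. The provider names and simulated outage below are hypothetical.

```python
from typing import Callable

def submit_with_fallback(job: str, providers: list[Callable]) -> str:
    """Try providers in order; core capability survives any single outage."""
    for run in providers:
        try:
            return run(job)
        except ConnectionError:
            continue  # provider down, rate-limiting, or region-blocked: fall through
    raise RuntimeError("all providers unavailable")

# Hypothetical backends: a hyperscaler burst pool and a protocol-owned cluster.
def hyperscaler(job: str) -> str:
    raise ConnectionError("region unavailable")  # simulate a vendor outage

def decentralized_cluster(job: str) -> str:
    return f"proved:{job}"  # slower, but structurally hard to switch off

assert submit_with_fallback("batch-42", [hyperscaler, decentralized_cluster]) == "proved:batch-42"
```

When the simulated vendor outage hits, the job still completes on the protocol-owned cluster – degraded throughput rather than total failure, which is exactly the "the network would only slow down" property described above.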
This is how to fortify crypto’s ethos of decentralization.
Note: The views expressed in this column are those of the author and do not necessarily reflect those of CoinDesk, Inc. or its owners and affiliates.