Hoskinson might be wrong about the future of decentralized compute

The blockchain trilemma reared its head once more at Consensus in Hong Kong in February, putting Charles Hoskinson, the founder of Cardano, somewhat on the back foot: he had to reassure attendees that hyperscalers like Google Cloud and Microsoft Azure are not a risk to decentralization.

The point was made that major blockchain projects need hyperscalers, and that one shouldn’t be concerned about a single point of failure because:

  • Advanced cryptography neutralizes the risk
  • Multi-party computation distributes key material
  • Confidential computing shields data in use

The argument rested on the idea that ‘if the cloud cannot see the data, the cloud cannot control the system,’ and it was left there due to time constraints.

But there’s an alternative to Hoskinson’s argument in favor of hyperscalers that deserves more attention.

MPC and Confidential Computing Reduce Exposure

This was something of a strategic bastion in Hoskinson’s argument: that technologies like multi-party computation (MPC) and confidential computing ensure hardware providers would not have access to the underlying data.

They are powerful tools. But they do not dissolve the underlying risk.

MPC distributes key material across multiple parties so that no single participant can reconstruct a secret. That meaningfully reduces the risk of a single compromised node. However, the security surface expands in other directions. The coordination layer, the communication channels and the governance of participating nodes all become critical.

Instead of trusting a single key holder, the system now depends on a distributed set of actors behaving correctly and on the protocol being implemented correctly. The single point of failure does not disappear. In fact, it simply becomes a distributed trust surface.
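To make that trade-off concrete, here is a minimal sketch of additive secret sharing (illustrative only, not any production MPC protocol): no single share reveals the key, but reconstruction now depends on every participant and on the channels between them.

```python
import secrets

PRIME = 2**127 - 1  # field modulus for this toy example

def split_secret(secret: int, n_parties: int) -> list[int]:
    """Additively share `secret` across n parties: the shares sum to the secret mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    last = (secret - sum(shares)) % PRIME
    return shares + [last]

def reconstruct(shares: list[int]) -> int:
    """Every share is required; a missing or tampered share breaks reconstruction."""
    return sum(shares) % PRIME

key = secrets.randbelow(PRIME)
shares = split_secret(key, 5)

# No single share is the key on its own...
assert all(s != key for s in shares)
# ...but correctness now depends on all five parties, their channels,
# and the reconstruction protocol being run faithfully.
assert reconstruct(shares) == key
```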

Confidential computing, particularly trusted execution environments, introduces a different trade-off. Data is encrypted during execution, which limits exposure to the hosting provider.

But Trusted Execution Environments (TEEs) rely on hardware assumptions. They depend on microarchitectural isolation, firmware integrity and correct implementation. Academic literature has repeatedly demonstrated that side-channel and architectural vulnerabilities continue to emerge across enclave technologies. The exposure is narrower than in a traditional cloud deployment, but it is not eliminated.
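As a simplified illustration of where that trust ends up (hypothetical helper names, not any vendor’s attestation SDK; real schemes use asymmetric signatures and a certificate chain), the remote verifier ultimately checks a hardware-signed measurement against an expected value, so the guarantee is only as strong as the silicon and firmware producing that signature.

```python
import hashlib
import hmac

# Hypothetical stand-ins for a vendor attestation flow.
HARDWARE_ROOT_KEY = b"baked-into-silicon"  # assumption: only the CPU can sign with this
EXPECTED_MEASUREMENT = hashlib.sha256(b"enclave-binary-v1").hexdigest()

def sign_quote(measurement: str) -> bytes:
    """What the hardware does: sign the enclave measurement with its root key."""
    return hmac.new(HARDWARE_ROOT_KEY, measurement.encode(), hashlib.sha256).digest()

def verify_quote(measurement: str, signature: bytes) -> bool:
    """What the remote verifier does: check the signature and the expected measurement.
    Real schemes verify an asymmetric signature against the vendor's certificate chain;
    a shared key just keeps the sketch short."""
    expected = hmac.new(HARDWARE_ROOT_KEY, measurement.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected) and measurement == EXPECTED_MEASUREMENT

quote = sign_quote(EXPECTED_MEASUREMENT)
assert verify_quote(EXPECTED_MEASUREMENT, quote)
# The check says nothing about side channels or firmware bugs: if the isolation
# or the root key is compromised, verification still passes.
```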

More importantly, both MPC and TEEs often operate on top of hyperscaler infrastructure. The physical hardware, virtualization layer and supply chain remain concentrated. If an infrastructure provider controls access to machines, bandwidth or geographic regions, it retains operational leverage. Cryptography may prevent data inspection, but it does not prevent throughput restrictions, shutdowns, or policy interventions.

Advanced cryptographic tools make specific attacks harder, but they still do not remove infrastructure-level failure risk. They simply replace a visible concentration with a more complex one.

The ‘No L1 Can Handle Global Compute’ Argument

Hoskinson made the point that hyperscalers are necessary because no single Layer 1 can handle the computational demands of global systems, referencing the trillions of dollars invested in building such data centers.

Of course, Layer 1 networks were not built to run AI training loops, high-frequency trading engines, or enterprise analytics pipelines. They exist to maintain consensus, verify state transitions and provide durable data availability.

He is correct on what Layer 1 is for. But global systems mainly need results that anyone can verify, even if the computation happens elsewhere.

In modern crypto infrastructure, heavy computation increasingly happens off-chain. What matters is that results can be proven and verified on-chain. This is the foundation of rollups, zero-knowledge systems and verifiable compute networks.
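A toy illustration of that pattern, assuming a simple Merkle commitment rather than any specific rollup or zero-knowledge system: the heavy work of hashing the full dataset happens off-chain, while verification needs only a handful of hashes.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Off-chain: the prover hashes the full dataset down to one 32-byte commitment."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Off-chain: collect the sibling hashes needed to re-derive the root for one leaf."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    """'On-chain': a handful of hashes, regardless of how large the dataset was."""
    node = h(leaf)
    for sibling, leaf_is_left in proof:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

data = [f"tx-{i}".encode() for i in range(1024)]
root = merkle_root(data)
proof = merkle_proof(data, 7)
assert verify(data[7], proof, root)  # ~10 hashes instead of re-processing 1,024 items
```

The same verify-not-recompute shape underpins rollups and proof systems, only with far stronger guarantees than a bare Merkle commitment.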

Focusing on whether an L1 can run global compute misses the core issue of who controls the execution and storage infrastructure behind verification.

If computation happens off-chain but relies on centralized infrastructure, the system inherits centralized failure modes. Settlement remains decentralized in theory, but the pathway to producing valid state transitions is concentrated in practice.

The issue should be about dependency at the infrastructure layer, not computational capacity inside Layer 1.

Cryptographic Neutrality Is Not the Same as Participation Neutrality

Cryptographic neutrality is a powerful idea, and one Hoskinson leaned on in his argument. It means rules cannot be arbitrarily changed, hidden backdoors cannot be introduced and the protocol remains fair.

But cryptography runs on hardware.

That physical layer determines who can participate, who can afford to do so and who ends up excluded, because throughput and latency are ultimately constrained by real machines and the infrastructure they run on. If hardware production, distribution, and hosting remain centralized, participation becomes economically gated even when the protocol itself is mathematically neutral.

In high-compute systems, hardware is the deciding factor. It determines cost structure, who can scale and how resilient the system is under censorship pressure. A neutral protocol running on concentrated infrastructure is neutral in theory but constrained in practice.

The priority should shift toward cryptography combined with diversified hardware ownership.

Without infrastructure diversity, neutrality becomes fragile under stress. If a small set of providers can rate-limit workloads, restrict regions, or impose compliance gates, the system inherits their leverage. Rule fairness alone does not guarantee participation fairness.

Specialization Beats Generalization in Compute Markets

Competing with AWS is often framed as a question of scale, but this too is misleading.

Hyperscalers optimize for flexibility. Their infrastructure is designed to serve thousands of workloads simultaneously. Virtualization layers, orchestration systems, enterprise compliance tooling and elasticity guarantees – these features are strengths for general-purpose compute, but they are also cost layers.

Zero-knowledge proving and verifiable compute are deterministic, compute-dense, memory-bandwidth constrained, and pipeline-sensitive. In other words, they reward specialization.

A purpose-built proving network competes on proofs per dollar, proofs per watt and latency per proof. When hardware, prover software, circuit design, and aggregation logic are vertically integrated, efficiency compounds. Removing unnecessary abstraction layers reduces overhead. Sustained throughput on persistent clusters outperforms elastic scaling for narrow, constant workloads.

In compute markets, specialization consistently outperforms generalization for steady, high-volume tasks. AWS optimizes for optionality. A dedicated proving network optimizes for one class of work.

The economic structure differs as well. Hyperscalers price for enterprise margins and broad demand variability. A network aligned around protocol incentives can amortize hardware differently and tune performance around sustained utilization rather than short-term rental models.
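A back-of-the-envelope sketch of that difference, with entirely hypothetical figures chosen only to show the shape of the calculation:

```python
# Hypothetical numbers; the structure of the comparison is the point, not the values.
rental_rate_per_gpu_hour = 3.00      # on-demand cloud price, includes vendor margin
owned_gpu_capex = 25_000.00          # purchase price of a comparable accelerator
amortization_hours = 3 * 365 * 24    # three-year depreciation at full utilization
power_and_hosting_per_hour = 0.40    # electricity plus colocation for the owned machine
proofs_per_gpu_hour = 120            # identical workload on either setup

rental_cost_per_proof = rental_rate_per_gpu_hour / proofs_per_gpu_hour
owned_cost_per_proof = (
    owned_gpu_capex / amortization_hours + power_and_hosting_per_hour
) / proofs_per_gpu_hour

print(f"rented: ${rental_cost_per_proof:.4f} per proof")  # ~$0.0250
print(f"owned:  ${owned_cost_per_proof:.4f} per proof")   # ~$0.0113
```

The specific numbers matter less than the shape: amortized, fully utilized hardware sheds the rental margin and the cost of elasticity that a narrow, constant workload never uses.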

The competition becomes about structural efficiency for a defined workload.

Use Hyperscalers, But Do Not Be Dependent on Them

Hyperscalers are not the enemy. They are efficient, reliable, and globally distributed infrastructure providers. The problem is dependence.

A resilient architecture uses major vendors for burst capacity, geographic redundancy, and edge distribution, but it does not anchor core functions to a single provider or a small cluster of providers.

Settlement, final verification and the availability of critical artifacts should remain intact even if a cloud region fails, a vendor exits a market, or policy constraints tighten.

This is where decentralized storage and compute infrastructure become a viable alternative. Proof artifacts, historical records and verification inputs should not be withdrawable at a provider’s discretion. Instead, they should live on infrastructure that is economically aligned with the protocol and structurally difficult to turn off.
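A minimal sketch of that posture, with hypothetical backend names standing in for real storage networks: artifacts count as durable only once a quorum of independent providers holds them, and any surviving provider can serve reads.

```python
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    store: dict  # stands in for a real storage client

    def put(self, key: str, blob: bytes) -> bool:
        self.store[key] = blob
        return True

    def get(self, key: str) -> bytes | None:
        return self.store.get(key)

# Hypothetical mix: two independent decentralized networks plus one cloud region for reach.
backends = [
    Backend("decentralized-net-a", {}),
    Backend("decentralized-net-b", {}),
    Backend("cloud-region-x", {}),
]
QUORUM = 2  # durable only when a majority of independent providers hold the artifact

def persist_artifact(key: str, blob: bytes) -> bool:
    acks = sum(1 for b in backends if b.put(key, blob))
    return acks >= QUORUM

def fetch_artifact(key: str) -> bytes | None:
    for b in backends:  # any surviving backend can serve the read
        blob = b.get(key)
        if blob is not None:
            return blob
    return None

assert persist_artifact("proof-123", b"...proof bytes...")
backends.pop()  # a vendor exits the market or shuts the region down
assert fetch_artifact("proof-123") == b"...proof bytes..."  # the artifact survives
```

A real deployment would add integrity checks and repair, but the structural point stands: durability becomes a property of the quorum, not of any one vendor.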

Hyperscalers should be used as an optional accelerator rather than as a foundation of the product. Cloud can still be useful for reach and bursts, but the system’s ability to produce proofs and to persist what verification depends on should not be gated by a single vendor.

In such a system, if a hyperscaler disappears tomorrow, the network would only slow down, because the parts that matter most are owned and operated by a broader network rather than rented from a big-brand chokepoint.

This is how to fortify crypto’s ethos of decentralization.


