Understanding the Merge, Surge, Verge, Purge, and Splurge
“The Merge”, Ethereum’s switch to a Proof of Stake network, is scheduled for mainnet by the end of September. The objective is to unlock blockchain accessibility at scale. At its core, the Merge transitions Ethereum from a Bitcoin-style Proof of Work consensus mechanism to a Proof of Stake system. Ethereum’s pivot from execution sharding to a rollup-centric roadmap is a critical step towards scaling blockchains for the next billion users. As discussed in our previous research piece, Modular Blockchains: A Deep Dive, data availability and sharding in a modular architecture allow blockchains to scale throughput without sacrificing decentralization. That piece dives into data availability, rollups, and fault/validity proofs, all of which are necessary for understanding the context and goals of the Merge. This article offers a deeper analysis of the technical specifics of the Merge, Ethereum’s new roadmap, and what this change means for users and developers.
A Rollup-Centric Roadmap
Initially, the plan for Ethereum 2.0 (terminology since abandoned) was to achieve scalability by partitioning mainnet into 64 shards, each with separate miners/validators. Users would send transactions routed to a specific shard depending on congestion, utilization, and throughput. As a result of the rise and adoption of rollups, combined with the complexity of implementing execution sharding, the original scalability roadmap centered on execution sharding has been abandoned in favor of data sharding. Since the Ethereum team now believes that scaling Ethereum to world demand will happen through rollups, the plan for post-merge Ethereum is to become a robust settlement and data availability layer from which rollups derive security.
The Beacon Chain (The Merge)
Contrary to popular belief, the Merge is not intended to reduce transaction costs, but rather to transform Ethereum into a powerful underlying infrastructure layer for rollups. The first core step towards this goal is the Beacon Chain, which transforms Ethereum from its previous Proof of Work system to a Proof of Stake system, where stakers must post collateral in order to produce blocks, with dishonest actors having their collateral slashed. Moving consensus to Proof of Stake introduces validation committees as a primitive, which in turn strengthens network consensus and paves the way for an efficient in-protocol data availability layer. The Beacon Chain coordinates the network of stakers; it does not process or execute transactions like the Ethereum of today. More concretely, the Merge joins the old execution layer of Ethereum with the new consensus engine provided by the Beacon Chain, replacing proof-of-work miners with a coordinated network of proof-of-stake validators. Switching consensus algorithms also lays the groundwork for sharding: previously, under proof-of-work mining, there was no registry of miners, and miners could arbitrarily stop their duties and leave the network. Under proof-of-stake, the Beacon Chain has a registry of all approved block producers, and can coordinate and parallelize the votes of validators.
Groups of validators, called committees, are a key innovation provided by the Beacon Chain. Committees are randomly assigned by the Beacon Chain to vote on blocks and form consensus. A committee’s aggregated vote is known as an attestation, allowing the state of the Beacon Chain to be verified by checking the vote of a committee rather than each validator individually, minimizing block size and data growth. Attestation committees also strengthen consensus: under this model, a relatively large number of validators must collude to create a fork. Additionally, the validator set is shuffled periodically, making it difficult for malicious validators to coordinate in time for an attack.
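The shuffle-and-aggregate mechanics above can be sketched in a few lines. This is a toy model with made-up committee sizes and a simple 2/3 supermajority check, not the Beacon Chain specification; in the real protocol, the randomness comes from RANDAO and attestations are aggregated BLS signatures.

```python
import random

def assign_committees(validators, committee_size, seed):
    """Pseudorandomly shuffle the validator set and partition it into committees."""
    rng = random.Random(seed)  # stand-in for the protocol's RANDAO randomness
    shuffled = validators[:]
    rng.shuffle(shuffled)
    return [shuffled[i:i + committee_size]
            for i in range(0, len(shuffled), committee_size)]

def aggregate_attestation(committee, votes):
    """Aggregate a committee's votes; the block is attested to
    if a 2/3 supermajority of the committee voted for it."""
    yes = sum(1 for v in committee if votes.get(v))
    return yes * 3 >= len(committee) * 2

validators = list(range(128))
committees = assign_committees(validators, committee_size=32, seed=42)
votes = {v: True for v in validators}  # everyone votes for the block
print(all(aggregate_attestation(c, votes) for c in committees))  # True
```

Checking one aggregate per committee, instead of one vote per validator, is what keeps verification cheap as the validator set grows.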
Consensus & MEV (The Splurge)
After the Merge, Ethereum will implement Proposer-Builder Separation (PBS) for the consensus layer. Vitalik reasons that the endgame of all blockchains is centralized block production with decentralized block verification. Since post-sharding Ethereum blocks are extremely data dense, some centralization of block production is inevitable due to the high data availability requirements. At the same time, there must be a way to maintain a decentralized validator set that validates blocks and performs data availability sampling.
The new builder role constructs Ethereum execution payloads from user transactions and submits them, each along with a bid, for proposers (a randomly selected subset of the validator set) to accept. Once a proposer accepts a payload, they sign off on the block and propagate it through the network. Since payloads sent to proposers are stripped of transaction content, this structure eliminates the possibility of frontrunning by validators. In an efficient market, the introduction of a blockspace market also incentivizes builders to bid up to the full value of the MEV extracted, allowing the decentralized validator set to reap a majority of MEV rewards. Compared to Ethereum today, this setup prevents block producers from potentially destabilizing consensus and mitigates harmful MEV. Proposer-Builder Separation is still an open design space; you can read more about the risks of existing MEV here and the current research and implementation here.
Danksharding (The Surge)
While Proposer-Builder Separation was initially designed to counteract the harmful externalities and centralizing forces of MEV, the Ethereum core team realized that it could also serve the purposes of data sharding.
The main innovation of Danksharding, named after core contributor Dankrad Feist, is a merged fee market: instead of a fixed number of shards with distinct blocks and proposers, a single proposer selects all transactions and data for each slot. A randomly chosen committee of validators then performs data availability sampling on the block’s data. This provides a decentralized way of maintaining data availability for light clients, which would not be possible with individual full validation given the large size of post-sharding blocks. Since consensus nodes also perform data availability sampling, this model unifies the settlement, consensus, and data availability layers.
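The intuition behind data availability sampling is probabilistic. Under the commonly described scheme, block data is extended with 2x Reed-Solomon erasure coding, so an attacker must withhold more than half of the extended chunks to make the data unrecoverable; each uniformly random sample then has at least a 1/2 chance of hitting a withheld chunk. The sketch below, with illustrative numbers, shows how quickly confidence compounds.

```python
def detection_probability(samples: int, withheld_fraction: float = 0.5) -> float:
    """Chance that at least one of `samples` random chunk queries hits a
    withheld chunk. With 2x erasure coding, an attacker trying to hide data
    must withhold over 50% of chunks, so withheld_fraction >= 0.5."""
    return 1 - (1 - withheld_fraction) ** samples

# A light client needs only a handful of samples for near-certainty:
for n in (10, 20, 30):
    print(n, detection_probability(n))
```

Thirty samples already push the failure probability below one in a billion, which is why sampling lets light clients secure data availability without downloading blocks.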
A unified settlement and data availability layer unlocks exciting capabilities for rollups utilizing validity proofs: ZK-rollups will be able to make synchronous calls to the execution layer on Ethereum. This enables new L2 primitives like distributed liquidity and fractal scaling, setting the stage for innovative, next-generation dapps built on ZK-rollups.
Despite its promising consequences for the future of Ethereum, danksharding will not be available in its full capacity immediately after the Merge. Proto-danksharding (EIP-4844) is a primitive version of full danksharding scheduled for release before the full implementation. This proposal creates a new primitive called a blob-carrying transaction: as the name suggests, a transaction which carries a data payload called a blob. Blobs are the data standard for post-sharding Ethereum: they are bundled with KZG polynomial commitments and are a much more efficient format than calldata because they are decoupled from EVM execution. Today, rollups use calldata to post transaction data back to Ethereum, resulting in high gas costs. In a sharded future, rollups will use blobs, saving users the gas fees associated with EVM execution. The goal of proto-danksharding is to provide this forward-looking data format for developers while offering temporary relief to rollups facing expensive calldata costs, by introducing a separate format and fee market for their soon-to-be-sharded data. While proto-danksharding does not implement sharding itself, introducing a standardized specification for a post-sharding data format is the first of many steps towards an efficient native data availability layer.
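The separate blob fee market mentioned above works like EIP-1559: the blob base fee rises exponentially when blocks consume more blob gas than targeted and decays when they consume less. The integer-exponential helper below follows the shape of the `fake_exponential` function in the EIP-4844 draft; the constants are illustrative placeholders, not final mainnet parameters.

```python
def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer Taylor-series approximation of factor * e**(numerator / denominator),
    in the style of the EIP-4844 draft's blob pricing helper."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

# Illustrative parameters (assumptions, not the EIP's final values):
MIN_BLOB_BASE_FEE = 1
UPDATE_FRACTION = 3_338_477

def blob_base_fee(excess_blob_gas: int) -> int:
    """Fee per unit of blob gas, growing exponentially with excess usage."""
    return fake_exponential(MIN_BLOB_BASE_FEE, excess_blob_gas, UPDATE_FRACTION)

print(blob_base_fee(0))  # 1: the minimum fee when usage is at or below target
```

Because this fee responds only to blob demand, rollup data costs stop competing with ordinary EVM execution for the same gas.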
History & State (The Verge & The Purge)
Ethereum state and its storage are also a consideration. Ever-growing state can threaten decentralization, as validators must be able to perform their duties on consumer hardware. Proto-danksharding blobs are kept separate from the EVM execution layer and are pruned after about a month. Additionally, EIP-4444 allows clients to prune, and stop serving on the peer-to-peer layer, historical data older than about a year. Some form of mandatory, protocol-level history expiry is necessary regardless, as full sharding will add about 40 TB of historical blob data per year. Active blockchain state must be kept in RAM or on SSDs, but historical storage (data Ethereum has already come to consensus on) can live on cheap HDDs. Since historical storage operates on an honest-minority (1-of-N) trust model, nodes performing real-time consensus do not need to store it. Danksharding specifications have validators store, and guarantee data availability for, the data they come to consensus on for a few months; afterwards, this pruned history would be stored by third parties such as application-specific protocols, BitTorrent, the Portal Network, block explorers, individual hobbyists, or indexing protocols.
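The 40 TB per year figure is easy to sanity-check with back-of-the-envelope arithmetic, assuming 12-second slots and a full-danksharding target of roughly 16 MB of blob data per block (the per-block target is our assumption here).

```python
SECONDS_PER_YEAR = 365 * 24 * 3600
SLOT_TIME = 12                  # seconds per slot under proof-of-stake
BLOB_DATA_PER_SLOT_MIB = 16     # assumed full-danksharding blob target per block

slots_per_year = SECONDS_PER_YEAR // SLOT_TIME          # ~2.6 million slots
data_per_year_tib = slots_per_year * BLOB_DATA_PER_SLOT_MIB / (1024 * 1024)
print(round(data_per_year_tib, 1))  # ~40.1 TiB of blob history per year
```

At that growth rate, expecting every consensus node to retain full history indefinitely is clearly untenable, which is what motivates protocol-level expiry.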
Stateless Ethereum is another goal on the roadmap. Block producers constructing a block will include a witness, a proof consisting of the relevant data required to execute the transactions contained within that block. Clients then use this witness to validate the state root resulting from executing the block, requiring only the affected portions of state rather than the full state. The two main obstacles to this design are witness size and witness availability. The first can be solved by changing Ethereum’s state data structure from Merkle Patricia Tries to Verkle Tries, a much more efficient data structure for the polynomial commitments used in post-merge Ethereum. The second can be solved by enshrining block witnesses as a protocol-level specification. Following Vitalik’s conclusions in Endgame, relying on centralized block producers with specialized hardware while retaining decentralized validation is the key design framework for scaling Ethereum.
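A stateless client's workflow can be sketched as follows. This is a toy model: state is a flat dict, "execution" is a simple balance update, and `verify_proof` is a hypothetical placeholder standing in for Verkle proof verification against the pre-state root.

```python
class MissingWitnessError(Exception):
    """Raised when a transaction touches state the witness did not include."""

def verify_proof(pre_state_root, key, value, proof):
    # Hypothetical placeholder: a real client would verify a Verkle proof here.
    return True

def execute_stateless(block_txs, witness, pre_state_root):
    """Execute a block using only the witness, never the full state.
    `witness` maps each touched state key to a (value, proof) pair."""
    state = {}
    for key, (value, proof) in witness.items():
        # Each witnessed value must be proven against the pre-state root.
        assert verify_proof(pre_state_root, key, value, proof)
        state[key] = value
    for tx in block_txs:
        if tx["key"] not in state:
            raise MissingWitnessError(tx["key"])  # witness is incomplete
        state[tx["key"]] += tx["delta"]           # toy stand-in for EVM execution
    return state

witness = {"alice": (100, b"proof-bytes")}
txs = [{"key": "alice", "delta": -10}]
print(execute_stateless(txs, witness, b"pre-state-root"))  # {'alice': 90}
```

The witness-size obstacle shows up here directly: the witness must carry a proof per touched key, which is why the compact proofs of Verkle Tries matter.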
Takeaways (The Splurge cont.)
Danksharding supercharges rollups inheriting security from Ethereum. Upgrading the underlying infrastructure by tightly coupling data availability with the consensus and settlement layers allows rollups to utilize native data availability solutions, forgoing the security assumptions of validiums and volitions. This paves the way for architectures like enshrined rollups, which eliminate governance and smart contract risk by allowing the deployment of entire rollups in-protocol. Enshrined rollups utilizing SNARKs and making synchronous calls in-protocol are a promising design for the future of blockchain scaling. In-protocol rollups have several benefits: the fixed per-block gas costs that smart contract rollups face today are eliminated; the need for validators to re-execute transactions to verify a block is removed, as compute is decoupled from consensus; and witnesses no longer need to be downloaded by stateless clients, since state diffs are guaranteed by the properties of validity proofs. These benefits allow lower settlement latency, better syncing, higher bandwidth for validators (and thus a higher EVM gas limit), and safer cross-chain bridging. The Ethereum Foundation is currently working on implementing this design directly into Ethereum’s roadmap, with plans to upgrade the EVM into a SNARK-compatible enshrined rollup.
In our previous piece, we discussed the benefits of modular off-chain architecture and the solutions third-party protocols are developing for data availability, settlement, and execution. The main goal of Ethereum’s roadmap is to minimize trust assumptions and provide in-protocol scalability through native solutions. The base layer of Ethereum hosts an entire ecosystem of decentralized applications promising a fundamental shift in the way we think about identity, storage, search, reputation, and privacy in the digital age. Upgrading Ethereum as a base layer also elevates this application layer, benefiting users and developers by providing highly secure, robust infrastructure to scale these use cases globally. Ethereum’s vision is a digital future on a global scale; its adherence to the principles of credible neutrality, together with its network effects, decentralization, and security, firmly cements its role in the future of the decentralized web. The Merge is the first step in stewarding Ethereum towards this vision.
Special thanks to Raul Jordan, Sreeram Kannan, and the team at Coinbase Cloud (Viktor Bunin, Ben Rodriguez) for reviewing and providing feedback!