Surge (Ethereum)

What Is the Surge (Ethereum)?

The Ethereum Surge is a development stage of the Ethereum network. It includes a set of upgrades, most notably sharding. The original Ethereum roadmap targeted partitioning the network into a set of 64 shard chains: execution would have been split across these chains to increase throughput through parallel computation, and each shard chain would have had its own set of validators.
Moreover, the network will scale by outsourcing transaction execution to layer-2 blockchains. Since transactions are cheaper on many layer-2 chains, the Ethereum mainnet will focus on consensus, settlement and data availability, while layer-2 chains provide the execution layer.
Rollups currently use calldata for storage when posting their state roots back to the mainnet. Although calldata is Ethereum's cheapest form of storage, it is still comparatively expensive, considering the data passes through the EVM and is permanently recorded on the blockchain. However, rollups do not need permanent data storage: it is sufficient if the data is temporarily available and guaranteed not to be withheld or censored by a malicious actor. This is why calldata is not optimized for rollups and does not scale well enough for their data availability needs.
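For a sense of the numbers, the sketch below estimates the gas cost of posting a rollup batch as calldata, using the post-EIP-2028 pricing of 16 gas per non-zero byte and 4 gas per zero byte. The batch size, byte composition and gas price are illustrative assumptions only.

```python
# Rough estimate of the cost of posting rollup data as calldata.
# Gas costs per EIP-2028: 16 gas per non-zero byte, 4 gas per zero byte.
# Batch size, byte composition and gas price are illustrative assumptions.

NONZERO_BYTE_GAS = 16
ZERO_BYTE_GAS = 4

def calldata_gas(nonzero_bytes: int, zero_bytes: int) -> int:
    """Gas consumed by calldata alone (ignores the 21,000 base tx cost)."""
    return nonzero_bytes * NONZERO_BYTE_GAS + zero_bytes * ZERO_BYTE_GAS

# Assume a 125 kB rollup batch where ~90% of bytes are non-zero
# (compressed data has few zero bytes).
batch_size = 125_000
nonzero = int(batch_size * 0.9)
zero = batch_size - nonzero

gas = calldata_gas(nonzero, zero)
gas_price_gwei = 20  # assumed gas price
fee_eth = gas * gas_price_gwei * 1e-9

print(f"calldata gas: {gas:,}")  # ~1.85 million gas for a single batch
print(f"fee at {gas_price_gwei} gwei: {fee_eth:.4f} ETH")
```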

Danksharding

Ethereum's plan to institute danksharding, on the other hand, can achieve meaningful scaling benefits more quickly. Specifically, the Surge will introduce proto-danksharding with EIP-4844, which adds a new transaction type called a blob-carrying transaction. These transactions resemble regular transactions but provide data availability guarantees for a blob of data without committing to permanent storage. Rollups will be able to post more data, as each blob is roughly 125 kilobytes, much bigger than the average Ethereum block.
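The roughly 125-kilobyte figure follows from the blob layout: a blob consists of 4,096 field elements of 32 bytes each, and since every element must stay below the BLS12-381 field modulus, slightly less than the raw 128 KiB is usable for data. A back-of-the-envelope check, assuming about 31 usable bytes per element:

```python
# Blob capacity arithmetic (EIP-4844): a blob is 4,096 field elements of
# 32 bytes each. Every element must stay below the BLS12-381 field
# modulus, so assume ~31 safely usable bytes per element.

FIELD_ELEMENTS_PER_BLOB = 4096
BYTES_PER_FIELD_ELEMENT = 32
USABLE_BYTES_PER_ELEMENT = 31  # conservative assumption

raw_kib = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT / 1024
usable_kb = FIELD_ELEMENTS_PER_BLOB * USABLE_BYTES_PER_ELEMENT / 1000

print(f"raw blob size:   {raw_kib:.0f} KiB")    # 128 KiB
print(f"usable capacity: ~{usable_kb:.0f} kB")  # ~127 kB, i.e. roughly 125 kB
```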
The Ethereum Virtual Machine cannot access blob data itself, but it can verify a commitment proving the data's existence. Each blob is broadcast alongside a block. Blob transactions will have a separate gas market, with prices adjusting exponentially to the demand for blobs. As a consequence, the cost of data availability will be decoupled from the cost of execution. This will lead to a more efficient gas market in which individual resources are priced independently, so a demand spike such as a popular NFT mint does not drive up rollup data costs. In addition, blobs are expected to be pruned from nodes after a retention period, further alleviating data storage requirements.
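The exponential adjustment can be sketched with the fake_exponential routine described in the EIP-4844 draft: the blob base fee grows exponentially in the "excess" blob gas, i.e. cumulative usage above the per-block target. The constants below are taken from a draft of the EIP and may change before deployment.

```python
# Blob base fee computation, after the fake_exponential routine in the
# EIP-4844 draft. Prices rise exponentially in the "excess" blob gas,
# i.e. cumulative usage above the per-block target. Constants are taken
# from a draft of the EIP and may change.

MIN_BASE_FEE_PER_BLOB_GAS = 1
BLOB_BASE_FEE_UPDATE_FRACTION = 3_338_477

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator / denominator)
    via a Taylor expansion, as specified in the EIP."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def base_fee_per_blob_gas(excess_blob_gas: int) -> int:
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS,
                            excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

# Sustained demand above the target pushes the blob fee up exponentially,
# independently of the ordinary execution gas price.
for excess in (0, 10_000_000, 50_000_000):
    print(excess, base_fee_per_blob_gas(excess))
```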
However, proto-danksharding is only a stepping stone toward full danksharding. The two will be compatible with each other, and full danksharding will increase the throughput of rollups by several multiples. Although rollups need to adjust to the new transaction type, once danksharding is in place, they will not have to be adjusted again. At the time of writing, the plan is to include proto-danksharding in the Shanghai hard fork, approximately six to twelve months after the Merge.
The idea of danksharding is that the work of checking data availability is spread among validators. Even though the implementation details are still unclear, the shard data will be encoded with erasure coding to enable data availability sampling. Erasure coding extends the dataset in a way that mathematically guarantees the full data can be recovered as long as a certain threshold of samples is available. The data is split into blobs, or shards, and each validator has to attest to the availability of their assigned shards once per epoch, splitting the load across the validator set.
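A toy example makes the erasure-coding idea concrete: treat k data chunks as evaluations of a degree-(k-1) polynomial over a prime field and extend them to 2k evaluations, after which any k of the extended chunks suffice to reconstruct the original data. The sketch below uses plain Lagrange interpolation over a small prime; the real design relies on KZG commitments over the BLS12-381 field instead.

```python
# Toy erasure coding: k chunks become evaluations of a degree-(k-1)
# polynomial, extended to 2k evaluations. Any k of the 2k points
# reconstruct everything. (A stand-in for the real KZG-based scheme.)

P = 2**31 - 1  # a small Mersenne prime; stand-in for a real field modulus

def lagrange_eval(points: list[tuple[int, int]], x: int) -> int:
    """Evaluate at x the unique polynomial through the given points (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

data = [5, 17, 8, 42]                 # k = 4 original chunks
k = len(data)
base = list(enumerate(data))          # evaluations at x = 0..k-1
extended = [lagrange_eval(base, x) for x in range(2 * k)]  # 2k chunks

# Lose any k of the 2k chunks -- here we keep only the odd positions --
# and the original data is still fully reconstructible.
samples = [(x, extended[x]) for x in range(1, 2 * k, 2)]
recovered = [lagrange_eval(samples, x) for x in range(k)]
assert recovered == data
print("reconstructed:", recovered)
```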

The original data can be reconstructed, provided that a sufficient number of samples is available and the majority of validators honestly attest to their data. The long-term plan foresees the implementation of private random sampling, which would allow individuals to verify data availability without trust assumptions about validators, although its challenging implementation keeps it out of the initial upgrade.
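The strength of random sampling follows from a simple probability argument. With a 2x extension, anything less than half the chunks missing can be repaired, so an attacker who wants data to be unavailable must withhold more than half of them; each uniformly random sample then fails with probability greater than 1/2, and the chance that k independent samples all succeed against unavailable data falls below 2^-k:

```python
# Why a handful of random samples gives strong availability guarantees:
# with a 2x erasure-coded extension, anything less than 50% withheld can
# be reconstructed, so an attacker must withhold over half the chunks.
# Each uniformly random sample then hits a missing chunk with probability
# > 1/2, so k independent samples all succeed with probability < 2^-k.

def max_fooling_probability(num_samples: int) -> float:
    """Upper bound on the chance a client is fooled into accepting
    unavailable data after num_samples independent random samples."""
    return 0.5 ** num_samples

for k in (10, 20, 30):
    print(f"{k} samples: fooled with probability < {max_fooling_probability(k):.2e}")
```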

Danksharding also raises the target number of blobs per block to 128, with a maximum of 256 blobs per block. This significantly increases the target blob storage per block from 1 MB to 16 MB. However, it also introduces a centralizing force for block builders, who will have to compute the blob encoding and distribute the data. For validator nodes, the increased block size will not be an issue, since nodes can verify the block efficiently with data availability sampling. To prevent the increased builder requirements from having an adverse impact on network diversity, an upgrade called Proposer-Builder Separation (PBS) will have to be completed.
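For a sense of scale, assuming roughly 125 kB per blob, the storage figures above work out as follows:

```python
# Back-of-the-envelope blob throughput, assuming ~125 kB per blob.
BLOB_KB = 125

dank_target_blobs = 128   # danksharding target per block
dank_max_blobs = 256      # danksharding maximum per block

target_mb = dank_target_blobs * BLOB_KB / 1000
max_mb = dank_max_blobs * BLOB_KB / 1000

print(f"danksharding target:  {dank_target_blobs} blobs ~= {target_mb:.0f} MB per block")
print(f"danksharding maximum: {dank_max_blobs} blobs ~= {max_mb:.0f} MB per block")
# Compared with proto-danksharding's ~1 MB target, the 16 MB target is a 16x increase.
```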

Summary

The Ethereum Surge focuses on scaling the network and improving its transaction throughput, leveraging the strengths of rollups for layer-2 scalability. Sharding is no longer a scaling solution for execution on the Ethereum base layer; instead, it prioritizes making data availability cheaper. Ideally, danksharding could even invert the blockchain trilemma by enabling a highly decentralized set of validators to split data into smaller pieces while preserving its availability guarantees, increasing scalability without giving up security.