With everyone claiming that they get lower and lower block times on L2s, I thought it was about time to describe what’s going on. Specifically, let’s check what L2 blocks are, how they differ from L1 blocks and why we really shouldn’t care that much about L2 block times, even if they’re an interesting engineering metric.
Back when the term blockchain actually meant something (circa 2009), the concept of blocks was introduced because we needed a unit for a group of transactions submitted to consensus.
For example, on Bitcoin, each block producer tries to find an arrangement of transactions that satisfies the PoW requirements, then broadcasts this block to the network. Other nodes will verify that this block does indeed meet the PoW requirements. On Ethereum, which is now PoS and accounts-based, block producers compute a hash of the blockchain state after the execution of each block (the state commitment), and validators recompute that value for easy validation of the block. The overall process is always the same: a producer assembles a block, and the rest of the network independently verifies it.
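As a minimal sketch of the Bitcoin-style half of this process: a block is valid only if the hash of its header falls below the network's difficulty target, which any node can check cheaply. The target value and payload here are toy numbers, not real network parameters.

```python
import hashlib

def meets_pow_target(block_header: bytes, target: int) -> bool:
    # Bitcoin double-SHA256 hashes the block header; the block is valid
    # only if the resulting hash, read as an integer, is below the target.
    digest = hashlib.sha256(hashlib.sha256(block_header).digest()).digest()
    return int.from_bytes(digest, "big") < target

def mine(payload: bytes, target: int) -> int:
    """Brute-force a nonce that makes the header satisfy the target."""
    nonce = 0
    while not meets_pow_target(payload + nonce.to_bytes(8, "big"), target):
        nonce += 1
    return nonce

# Deliberately easy target so this toy search finishes instantly.
target = 2 ** 248
nonce = mine(b"block data", target)
assert meets_pow_target(b"block data" + nonce.to_bytes(8, "big"), target)
```

Finding the nonce is expensive; re-running `meets_pow_target` is one hash. That asymmetry is what makes block-level verification practical for every node.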
On layer-1 chains, blocks matter: you verify the integrity of the chain at the level of blocks, you manage forks at the level of blocks, etc.
TL;DR: blocks are an essential primitive of consensus.
L2s exist because consensus is slow and decentralization requires supporting slow computers and networks. The L2 approach offloads transaction processing to the fastest available machine, then posts a verifiable execution summary to L1 for consensus. Simply put, most L2s today are centralized systems LARPing as blockchains. And that’s perfectly fine.
This is where block times get fuzzy. L2s continue building blocks mainly for compatibility with L1 software, but the block boundaries are largely artificial. When posting summaries to L1, L2s typically batch multiple blocks together to reduce costs. While state commitments are needed occasionally for fraud/validity proofs, they aren’t required for every block. Therefore, L2 blocks are essentially useless.
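The batching described above can be sketched roughly as follows. This is a hypothetical batcher, not any specific L2's implementation: it accumulates L2 blocks and posts them to L1 as a single compressed blob, so individual L2 block boundaries never reach L1. The batch-size policy is an assumption for illustration.

```python
import zlib
from dataclasses import dataclass, field

@dataclass
class Batcher:
    pending: list = field(default_factory=list)
    max_batch: int = 10  # assumed policy: post once every 10 L2 blocks

    def add_block(self, block_bytes: bytes):
        self.pending.append(block_bytes)
        if len(self.pending) >= self.max_batch:
            return self.post_to_l1()
        return None  # nothing posted; this block boundary is invisible to L1

    def post_to_l1(self) -> bytes:
        # Real systems submit this as calldata or a blob transaction.
        blob = zlib.compress(b"".join(self.pending))
        self.pending.clear()
        return blob

batcher = Batcher()
posted = [batcher.add_block(f"block {i}".encode()) for i in range(10)]
# Only the 10th block triggers an L1 posting; the other 9 are batched silently.
assert posted[:9] == [None] * 9 and posted[9] is not None
```

The point of the sketch: from L1's perspective there is one posting per batch, so how the L2 slices that batch into "blocks" is an internal bookkeeping choice.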
When someone claims they have fast L2 blocks, they’ve simply adjusted a system configuration value to decrease block times. While they still need to process a meaningful number of transactions per block, that’s the extent of it.
As a user, you care about one thing, timing-wise: the round-trip time. In more words, how much time will it take for my transaction to reach the L2 sequencer, get executed, and for the result to be visible on the RPC node I’m using? Let’s focus on that last part: how much time does it take to communicate the execution of a transaction to RPCs?
Slower blockchains typically wait for the end of a block before sending it to peers. Solana pioneered the idea that you could stream blocks instead: just send transactions to the other validators as soon as you’ve processed them. Solana splits these into entries (groups of max. 64 transactions), which are themselves split into shreds for transfer on the network. We have an in-depth article on this topic if you’re curious. These are streamed continuously from the leader node to others, meaning that you get info about the execution of your transactions before the block is even over.
L2s have now decided to reuse this mechanism: Base, with Flashblocks, goes from a block time of 2 seconds to smaller 200 ms sub-blocks. MegaETH has a concept of “mini-blocks”, produced every 15 ms on their testnet (most of the time). Eclipse uses the Solana entry/shred system. This way, users have to wait less for their transactions to execute. That’s pretty good for UX!
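A back-of-the-envelope way to see the UX gain: if results are communicated every T milliseconds, a transaction lands at a uniformly random point in the interval and waits T/2 on average before its result is visible (ignoring execution and propagation time, which are independent of the interval).

```python
def avg_wait_ms(interval_ms: float) -> float:
    # Average time from transaction arrival to the next communication
    # boundary, assuming uniform arrival within the interval.
    return interval_ms / 2

assert avg_wait_ms(2000) == 1000.0  # Base, 2 s blocks
assert avg_wait_ms(200) == 100.0    # Base with Flashblocks, 200 ms sub-blocks
assert avg_wait_ms(15) == 7.5       # MegaETH mini-blocks (testnet figure)
```

Shrinking the interval by 10x shrinks the average wait by 10x, which is the entire effect being marketed as "faster blocks."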
But let’s be clear: the real feature here is “reduced intervals of communication across the network.” It has nothing to do with some blocks being inherently better than others. We’re just dividing blocks into smaller pieces and streaming them in parallel with execution. Whether you call these pieces blocks, mini-blocks, or shreds doesn’t matter. The end goal is faster communication, not better blocks.