
Report: The State of Ethereum Protocol

Scalability — the Story So Far

From the very start of 2018, it was clear that the theme of the year would be “scalability.” On January 2nd, the Ethereum Foundation made a call for applications for grants to be awarded to researchers and devs working on increasing the transaction processing capability of the network. And in a presentation on March 4th, titled “The Road Ahead for 2018,” Vitalik Buterin said “For 2018 we really believe that scaling is the primary focus.”

So, what’s been happening so far in 2018, the “Year of Scalability”?

As outlined in the Ethereum Foundation’s call for applications, there are two complementary approaches to increasing the processing capacity of the Ethereum system.

Layer 1 is the “on chain,” protocol layer: how can we fundamentally increase the capacity of the Ethereum blockchain?

Layer 2 covers “off chain” solutions in which most transactions are not recorded on the blockchain. Nonetheless, the underlying blockchain remains able to guarantee the safety and security of Layer 2 systems.

The Issue: The Need for Speed

The Ethereum network has become more successful than perhaps anyone could have anticipated in such a short time. The chart below shows the percentage of maximum capacity that Ethereum has been running at since its inception. After a couple of years of running at low capacity (give or take some spikes around the DAO hack and the network spam attacks of summer 2016), utilization has been at over 80% for much of 2018, with over 99% average utilization on a few days this year.

[Chart: Ethereum network utilization as a percentage of maximum capacity since launch]

This huge demand has sometimes resulted in undesirable user experience issues such as lengthy waits to get transactions included in the chain and volatile transaction fee (gas) prices.

Massive scalability — the ability to process thousands of transactions per second rather than the current 15-or-so tps — has long been part of the plan for Ethereum. The approach to implementing this has become known as “sharding.” Currently, as in every other blockchain platform in public release, every node in the Ethereum network processes every transaction, which is a huge limitation. In a sharded network, transaction processing and the associated storage (the state) are split into separate, independent shards, so that each node only needs to handle a fraction of the total system load. This sounds fairly straightforward. The real challenge is to do this while maintaining the full security of the network: if we have a thousand shards, say, how do we avoid making a network attack a thousand times easier?
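To make the core idea concrete, here is a minimal sketch (in Python, and not the actual protocol mechanism) of how accounts could be spread across shards by hashing, so that each node only handles a fraction of the total state. The function name and shard count are illustrative assumptions.

```python
import hashlib

NUM_SHARDS = 1000  # hypothetical count, matching the "thousand shards" example above

def shard_for_account(address_hex: str, num_shards: int = NUM_SHARDS) -> int:
    """Map an account address to a shard index by hashing it.

    Hashing spreads accounts (and hence load) roughly uniformly across
    shards, so each node needs to track only ~1/num_shards of the state.
    """
    digest = hashlib.sha256(bytes.fromhex(address_hex)).digest()
    return int.from_bytes(digest, "big") % num_shards
```

The security question the text raises follows directly: if validators could pick their own shard, an attacker would only need enough power to dominate one shard, not the whole network.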

Taipei, March 2018: Building the Sharded Network

At the start of 2018 a specification for a sharded Ethereum protocol already existed and had been reasonably stable for a while. On the basis of this spec, a workshop was planned for March 2018 in Taipei City, Taiwan, to bring together all the parties planning to work on implementing sharding within the various Ethereum clients. As the workshop approached, the pace of research activity accelerated, until, just three days before the event, a brand-new outline spec was published, bursting with new ideas and ambition.

So, with much to digest on the long plane journey from Europe, my PegaSys/ConsenSys colleague Nicolas Liochon and I set out for Taipei and the inaugural meetup of the global sharding community.

A Sharding Architecture & Wider Innovation

It was great to see the strength of the teams represented in Taipei. Among the participants:

  • A team from Status developing a mobile client in the Nim language.
  • The Prysmatic Labs team — working on a sharding implementation in Go.
  • The Ethereum Foundation research team, of course. Much of the thought-leadership in sharding research is coming from Vitalik Buterin and Justin Drake, as well as others like Karl Floersch, Hsiao-Wei Wang, and Vlad Zamfir.
  • The Geth client development team.
  • The Parity and Web3 Foundation teams.
  • The Trinity (Py-EVM) team, also from the Ethereum Foundation.
  • Other individual researchers such as Phil Daian and Leonardo Bautista-Gomez.

The workshop discussion ranged far and wide over the three days.

On the sharding front, we had in-depth discussion of the concepts from the new specification. As an example, in today’s Ethereum network, every node is responsible for three distinct functions: (1) participating in consensus on ordering transactions, (2) executing those transactions to update the state, and (3) making those transactions and the updated state available to the rest of the network (data availability).

In a sharded network, these functions could be split among different participants so as to optimize various features of the network as follows:

  • Executor nodes could be responsible for updating clients on the state of the blockchain (e.g. their account balances) on demand. This allows for a kind of “lazy evaluation” in which only calculations related to data that is actually needed are executed, and also perhaps for “alternative execution engines.”
  • Proposer nodes could be responsible for assembling transactions into blocks which they propose as the canonical history.
  • Collator nodes check that the data in the blocks offered by proposers is available and then add them to the shard’s blockchain.

This is quite different from today’s Ethereum Mainnet, but something like this is likely necessary in order to balance efficiency and security in a network where not every node can be a client of every shard.

To maintain efficiency, the idea is that executor and proposer nodes could remain synced with a small number of shards; to maintain security, the collator nodes (which actually write to the blockchain) are shuffled between shards quite frequently. This prevents shard takeovers by a small subset of participants.
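The shuffling idea above can be sketched as follows. This is a toy illustration, not the specified mechanism: the function name, seed handling, and assignment scheme are assumptions made for clarity.

```python
import random

def assign_collators(collators, period_seed, num_shards):
    """Deterministically shuffle collators into shards for one period.

    Every node can recompute the same assignment from the shared seed,
    but a collator cannot choose its own shard, so colluding
    participants cannot concentrate on one shard to take it over.
    """
    rng = random.Random(period_seed)  # seed unpredictable before the period starts
    pool = list(collators)
    rng.shuffle(pool)
    # Deal shuffled collators round-robin into shards.
    return {s: pool[s::num_shards] for s in range(num_shards)}
```

Because the assignment changes every period, an attacker would need to control a large fraction of *all* collators, not just those of one shard — which is exactly the security property the text describes.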

We also discussed the various infrastructure needed to make all this work: the shard manager contract, stateless clients, and the peer-to-peer network layer, among other things.

In addition to working on the scalability infrastructure, it was also clear that there is a significant pent-up demand for innovation on Ethereum, unrelated to scalability.

Perhaps the implementation of sharding could be a chance to bring in some other big innovations. So we also spent time on other long-standing topics like the replacement of the Ethereum Virtual Machine (the EVM) with eWasm, older topics like account abstraction, and controversial ideas like storage rent.

Berlin, June 2018: Sharding Meets Proof-of-Stake

Many of the concepts discussed in Taipei were new, and teams continued to evaluate them after the workshop. Over the following weeks, a couple of trends emerged. First, there were some weaknesses in the specifics of the proposals discussed (we published one critique).

Second, there were some very interesting developments on other fronts, most notably in cryptography, that could enable a big refactoring of the sharding model without losing efficiency or security.

With all the new developments to discuss, it was time to hold another sharding workshop. This time, we were kindly hosted in Berlin in June 2018 by the team from Status as part of the client developers’ conference they organized.

We were happy to be unexpectedly joined at the workshop by the Casper FFG (proof-of-stake) team. During the four weeks leading up to the event, another huge change to the specification had been proposed: why don’t we build Sharding & Casper together on a common platform?

It was becoming obvious that some of the new Sharding design choices had commonalities with the planned Casper FFG work that had been progressing independently (as per the now-abandoned EIP-1011). Both require validator deposits (stakes), both rely on access to random numbers, both have fault proofs and slashing mechanisms, and both make use of aggregate signatures. In view of these commonalities, it was proposed that both Sharding and Casper be built on a common infrastructure known as the Beacon Chain. An additional advantage would be taking the work of running Sharding and Casper off the existing Mainnet, which might struggle to sustain the extra load.
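One of the shared mechanisms, fault proofs with slashing, can be illustrated with a toy double-vote check. This is a simplified sketch and not Casper FFG's actual slashing conditions; the `Vote` fields and function name are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vote:
    validator: str     # identity of the staked validator
    target_epoch: int  # the checkpoint epoch the vote attests to
    block_hash: str    # the block being voted for

def is_double_vote(v1: Vote, v2: Vote) -> bool:
    """Toy fault proof: a validator who signs two conflicting votes for
    the same target epoch is provably misbehaving. Presenting both
    signed votes lets anyone trigger slashing of the validator's
    deposit, which is what makes the stake an economic guarantee."""
    return (v1.validator == v2.validator
            and v1.target_epoch == v2.target_epoch
            and v1.block_hash != v2.block_hash)
```

The same deposit-plus-fault-proof pattern underlies both the proof-of-stake consensus and the sharding design, which is what made a common Beacon Chain infrastructure attractive.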

Discussions in Berlin confirmed that we all agreed that this was a positive and practical approach to getting both projects delivered.

Beyond the project planning, once again, a wide range of new ideas were discussed at the workshop. We had sessions on new cryptographic primitives such as zkSTARKs and alternative hash functions, we discussed proofs-of-custody, and we looked at options for random number generation, with the current front-runner being a RANDAO with a verifiable delay function (VDF).
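The RANDAO idea mentioned above can be sketched as a commit-reveal scheme. This is a minimal illustration, not the protocol's actual construction; the VDF step is only described in a comment, since a real verifiable delay function is far beyond a short sketch.

```python
import hashlib

def commit(secret: bytes) -> bytes:
    """Phase 1: each participant publishes a hash commitment to a secret."""
    return hashlib.sha256(secret).digest()

def randao_mix(reveals):
    """Phase 2: revealed secrets are hashed and XOR-ed into one value.

    XOR means no single participant controls the output on their own.
    The remaining weakness is that the *last* revealer can see the
    partial mix and choose whether to reveal; applying a verifiable
    delay function (VDF) to the mix, as discussed in Berlin, removes
    that bias because the output cannot be computed quickly enough.
    """
    out = bytes(32)
    for secret in reveals:
        h = hashlib.sha256(secret).digest()
        out = bytes(a ^ b for a, b in zip(out, h))
    return out
```

Unbiased randomness matters here because, as in the collator-shuffling discussion above, the random numbers decide which validators are assigned where.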

Today: Towards Ethereum 2.0

So where does all this leave us in 2018?

I hope you get a sense from the above that these last six months have seen an explosion in research into scaling Ethereum, and to a large extent the dust has yet to settle.

But the general direction is clear. Development and delivery of both Sharding and Proof of Stake will take place on a new blockchain platform (Ethereum 2.0), coupled back to the current Main Chain which will continue to run as-is.

[Diagram: Ethereum 2.0 — the new Beacon Chain coupled to the existing Main Chain]

Building on a new platform like this, the Beacon Chain, allows us to introduce huge innovation free of the constraints of today’s Mainnet, and, crucially, without having to do open-heart surgery on the currently running network. Timelines are quite speculative, but expectations are that the Beacon Chain (the coordination layer, including Casper FFG) will be implemented during 2019, the shard chains (the data layer) in 2020, and the execution layer in 2021.

For Ethereum 2.0, we’re looking at new consensus mechanisms, new crypto-economic models, new execution engines such as eWASM (and possibly, even, delayed execution), and new cryptographic primitives.

To quote Vitalik Buterin from the first sharding workshop, “Ethereum 1.0 is a couple of people’s scrappy attempt to build the world computer; Ethereum 2.0 will actually be the world computer.”
