[Horn: Collecting signatures for faster finality - Consensus - Ethereum Research](https://ethresear.ch/t/horn-collecting-signatures-for-faster-finality/14219)
Two-layer signature aggregation protocol that allows the Ethereum consensus layer to aggregate attestations from the entire validator set, every slot, even with 1 million validators.
A massive increase over the status quo, where the consensus layer aggregates attestations from only 1/32 of the validator set per slot
## Motivation
Ethereum uses BLS signatures to aggregate consensus votes in every slot
Currently, every validator votes once per epoch (32 slots)
If every validator could vote in every slot:
- Faster finality: with one full voting round per slot, we could finalize in two slots. With two, in a single slot.
- LMD-GHOST becomes reorg resilient and provably secure
## Proposal Overview
Keeps most of current sig aggregation logic intact while adding another layer of aggregation on top that reduces the communication costs required to process attestations
Validators are still organized into committees of 1/32 of the validator set
Every committee votes in every slot instead of once per epoch
### Introduction to the current aggregation scheme
Assume validator set size of 1 million ($2^{20}$). Single committee is 1/32 of that, 32k validators
Currently, 32k validators in each committee are organized in 64 subcommittees of size 512
16 aggregators in every subcommittee aggregate signatures and publish the aggregates to the global p2p topic
The block proposer takes the best aggregate (highest total balance of participants) from each subcommittee and includes it in the block
In the [[view-merge]] fork choice, the proposer also includes an object containing the other aggregates, which protects the mechanism from dishonest aggregators as long as at least one aggregator per committee is honest
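The within-subcommittee aggregation step can be modeled roughly as below (illustrative names, not from the post; a plain XOR stands in for BLS signature aggregation, which in reality is elliptic-curve point addition on BLS12-381):

```python
# Toy model of subcommittee aggregation: an aggregate is a participation
# bitfield plus a signature; merging ORs the bitfields and combines sigs.
from dataclasses import dataclass

SUBCOMMITTEE = 512  # validators per subcommittee

@dataclass
class Aggregate:
    bitfield: list[bool]  # which of the 512 validators participated
    sig: int              # stand-in for an aggregated BLS signature

def merge(a: Aggregate, b: Aggregate) -> Aggregate:
    # Only disjoint participant sets can be merged naively; otherwise a
    # validator's signature would be counted twice in the aggregate.
    assert not any(x and y for x, y in zip(a.bitfield, b.bitfield))
    return Aggregate(
        bitfield=[x or y for x, y in zip(a.bitfield, b.bitfield)],
        sig=a.sig ^ b.sig,  # stand-in for BLS point addition
    )

def best(aggregates: list[Aggregate]) -> Aggregate:
    # The post weighs aggregates by total participating balance;
    # participant count stands in for that here.
    return max(aggregates, key=lambda a: sum(a.bitfield))
```

For example, merging two single-signer aggregates yields an aggregate with two participants, which `best` then prefers over either input.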
Aggregation is great for improving bandwidth and verification time
**Minimizing the global topic bandwidth is a central part of this proposal**
#### A Strawman Proposal
If we wanted to scale the current scheme to 1M validators, we'd need 2048 subcommittees ($2^{20}/512$); the bitfields and signatures on the global topic would add up to about 8 MB
Horn adds another layer of aggregation to reduce the amount of messages on the global topic.
512 collections, each with a bitfield of $2^{15}$ bits (4 KB), on the global topic
Number of messages reduced by a factor of 64 (32,768 → 512)
Only ~2.1 MB needed instead of ~8 MB
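The 64x and ~2.1 MB figures can be checked with a quick back-of-the-envelope calculation. This sketch assumes 16 collectors per committee (consistent with 512 collections) and counts only bitfield + signature bytes, so the strawman total lands below the post's ~8 MB, which presumably includes per-message metadata:

```python
# Back-of-the-envelope global-topic bandwidth: strawman vs. Horn.
VALIDATORS = 2**20            # 1M validators
SUBCOMMITTEE = 512            # validators per subcommittee
COMMITTEE = VALIDATORS // 32  # 32,768 validators per committee
PER_UNIT = 16                 # aggregators/collectors per unit
SIG_BYTES = 96                # compressed BLS12-381 signature

# Strawman: every subcommittee aggregate goes straight to the global topic.
subcommittees = VALIDATORS // SUBCOMMITTEE       # 2048
strawman_msgs = subcommittees * PER_UNIT         # 32,768 messages
strawman_bytes = strawman_msgs * (SUBCOMMITTEE // 8 + SIG_BYTES)

# Horn: collectors publish one collection per (committee, collector) pair.
committees = VALIDATORS // COMMITTEE             # 32
horn_msgs = committees * PER_UNIT                # 512 messages
horn_bytes = horn_msgs * (COMMITTEE // 8 + SIG_BYTES)

print(strawman_msgs // horn_msgs)   # 64 -> 64x fewer messages
print(round(horn_bytes / 1e6, 1))   # 2.1 -> ~2.1 MB on the global topic
```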
## Horn Protocol
![[Pasted image 20230804101338.png]]
Add a new aggregation layer managed by a new network entity called *collectors*
Collectors get aggregates from each subcommittee (each with 512 validators) and aggregate those further into collections that represent entire committees (32k validators)
Collections are sent to the global topic
#### Collectors
16 collectors are assigned to each committee, the same way that aggregators are assigned to subcommittees
A collection consists of an aggregated BLS signature and a bitfield of size $2^{15}$ bits
Collectors are asked to aggregate the bitfields they receive from subcommittee aggregators
Their job is to produce the best possible collection representing the entire committee
Strategy is to pick the best aggregate from each subcommittee and aggregate them further
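The pick-and-splice strategy above can be sketched as follows (my naming, not the post's; sets of validator indices stand in for BLS signatures, which a real collector would aggregate cryptographically):

```python
# Sketch of a collector: take the best aggregate from each of the 64
# subcommittees and splice them into one collection that covers the
# entire 32k-validator committee.
SUBCOMMITTEES_PER_COMMITTEE = 64
SUBCOMMITTEE_SIZE = 512

def collect(per_subcommittee: list[list[tuple[list[bool], set[int]]]]):
    """per_subcommittee[i] holds the (bitfield, sig) aggregates received
    from subcommittee i's aggregators."""
    committee_bitfield: list[bool] = []
    committee_sig: set[int] = set()
    for aggs in per_subcommittee:
        # Best aggregate = most participants (the post weighs by balance).
        bitfield, sig = max(aggs, key=lambda a: sum(a[0]))
        # Subcommittees cover disjoint validator ranges, so their
        # bitfields can simply be concatenated.
        committee_bitfield.extend(bitfield)
        committee_sig |= sig
    # The collection's bitfield covers the whole committee: 2**15 bits.
    assert len(committee_bitfield) == SUBCOMMITTEES_PER_COMMITTEE * SUBCOMMITTEE_SIZE
    return committee_bitfield, committee_sig
```

Concatenation works here only because subcommittee participant sets are disjoint by construction; overlapping sets would require the union/deduplication logic that aggregators already handle at the lower layer.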
#### Slot time increase
Requires an additional round of communication for the new aggregation layer
Increase slot time by 10 seconds, from 12 to around 22
The argument is that the single-slot finality benefits are worth it
#### Reward griefing attacks against Horn
Collectors choosing the best aggregate opens the protocol to a reward griefing attack.
A malicious aggregator that controls N validators can always produce the winning aggregate: steal the second-best aggregate, remove N-1 honest public keys, and add its own N public keys
The removed honest validators lose attestation rewards, hence "reward griefing"
This attack is already possible in the current system; the post's appendix argues it is not harmful
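A toy calculation (participant-index sets standing in for public keys; the stolen aggregate is modeled generically) shows why the manipulated aggregate ends up one participant ahead of the aggregate it was built from:

```python
# Reward-griefing arithmetic: start from a stolen aggregate with k
# participants, remove N-1 honest keys, add the attacker's N keys,
# and the result has k+1 participants.
def grief(stolen: set[int], attacker_keys: set[int]) -> set[int]:
    n = len(attacker_keys)
    # Evict any N-1 honest participants from the stolen aggregate.
    victims = set(list(stolen - attacker_keys)[:n - 1])
    return (stolen - victims) | attacker_keys

honest = set(range(100))           # stolen aggregate: k = 100 participants
attacker = set(range(1000, 1005))  # attacker controls N = 5 validators
out = grief(honest, attacker)
print(len(out))  # 101: one more participant than the stolen aggregate
```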
## Future Research
Are Horn's bandwidth requirements acceptable?
Can stateless/stateful compression techniques reduce bandwidth usage on the global topic?
Are there other election mechanisms for collectors that would result in fewer collections on the global topic?
Other bitfield aggregation schemes or a proof-based scheme
Tuning committee and subcommittee sizes
**What are the minimal conditions for deploying a protocol like Horn and switching to everyone voting at once? In particular, what changes are needed at the consensus protocol level, if any? Interaction between FFG and GHOST: we justify more frequently, so the surface area of interaction increases**