💬General Discussions Discussion: Privacy

Published: Sep 23, 2024


As the cryptocurrency market continues to evolve, user privacy remains a critical concern. Advancements in privacy technologies like zero-knowledge proofs (ZK) make it possible for exchanges to stay ahead of the curve. Privacy is fundamental to safeguarding personal and transactional information, enhancing security, and boosting user confidence. We are therefore curious whether Hop Exchange has been exploring options for integrating advanced privacy solutions; specifically, whether technologies such as zero-knowledge proofs, ring signatures, or mixing services are being considered.

We kindly request insights from Hop Exchange regarding their current and future plans for implementing these privacy enhancements. What are the potential timelines and challenges associated with adopting such technologies? Thank you for your attention and consideration.

1 post - 1 participant

Read full topic

🐰Hop Ecosystem RFC - Supporting new coins such as EURe

Published: Sep 23, 2024


We propose the inclusion of the EURe stablecoin by Monerium in Hop Exchange. The EURe is gaining significant traction and offers numerous benefits that would enhance Hop’s offerings.

Adding EURe to Hop Exchange will diversify its offerings, attract more users, and strengthen its market position. We request Hop Exchange to consider this proposal and look forward to potential discussions.

1 post - 1 participant

Read full topic

Automated 🤖 AUTOMATED: New OP Merkle Rewards Root Wed, 18 Sep 2024 01:00:00 +0000

Published: Sep 19, 2024


This is an automated post by the merkle rewards worker bot :robot:

A new merkle root has been published to GitHub:

Merkle root hash: 0xc418f55b40b1d6eb75ff848e2340fbf515a2e1181f6b673248f05aca1232d047
Merkle root total amount: 290399.799537088822848506 (290399799537088822848506)
Start timestamp: 1663898400 (2022-09-23T02:00:00.000+00:00)
End timestamp: 1726621200 (2024-09-18T01:00:00.000+00:00)
Rewards contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Rewards contract network: optimism

Instructions to verify merkle root:

docker pull hopprotocol/merkle-drop-framework
docker run --env-file docker.env -v ~/Downloads/op_merkle_rewards_data:/tmp/feesdb hopprotocol/merkle-drop-framework start:dist generate -- --network=mainnet --rewards-contract=0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73 --rewards-contract-network=optimism --start-timestamp=1663898400 --end-timestamp=1726621200

Supply RPC urls in docker.env:

ETHEREUM_RPC_URL=https://example.com
POLYGON_RPC_URL=https://example.com
OPTIMISM_RPC_URL=https://example.com
ARBITRUM_RPC_URL=https://example.com
BASE_RPC_URL=https://example.com
GNOSIS_RPC_URL=https://example.com
NOVA_RPC_URL=https://example.com
LINEA_RPC_URL=https://example.com
POLYGONZK_RPC_URL=https://example.com
USE_API_FOR_ON_CHAIN_DATA=true

Web app to publish root:

Contract information for multisig signers:

Contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Method: setMerkleRoot
Parameters:
_merkleRoot: 0xc418f55b40b1d6eb75ff848e2340fbf515a2e1181f6b673248f05aca1232d047
totalRewards: 290399799537088822848506
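
For anyone double-checking the figures above before signing, here is a minimal, stdlib-only Python sketch (an illustrative aid, not part of the official Docker-based verification flow) that confirms the human-readable amount matches the raw 18-decimal value and that the timestamps decode to the dates shown:

from decimal import Decimal
from datetime import datetime, timezone

# Values copied from this post
total_raw = 290399799537088822848506
total_tokens = Decimal("290399.799537088822848506")
assert Decimal(total_raw) / Decimal(10) ** 18 == total_tokens

start_ts, end_ts = 1663898400, 1726621200
print(datetime.fromtimestamp(start_ts, tz=timezone.utc).isoformat())  # 2022-09-23T02:00:00+00:00
print(datetime.fromtimestamp(end_ts, tz=timezone.utc).isoformat())    # 2024-09-18T01:00:00+00:00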

1 post - 1 participant

Read full topic

Automated 🤖 AUTOMATED: New ARB Merkle Rewards Root Tue, 03 Sep 2024 03:00:00 +0000

Published: Sep 04, 2024


This is an automated post by the merkle rewards worker bot :robot:

A new merkle root has been published to GitHub:

Merkle root hash: 0x03d3b5684a37912cca9036f8f4da73f628e3a304b3c6496661d877383b27cc5a
Merkle root total amount: 18378.89 (18378890000000000000000)
Start timestamp: 1718150400 (2024-06-12T00:00:00.000+00:00)
End timestamp: 1725332400 (2024-09-03T03:00:00.000+00:00)
Rewards contract address: 0xb3c18710fE030a75A3A981a1AbAC0db984e51853
Rewards contract network: arbitrum

Instructions to verify merkle root:

docker pull hopprotocol/merkle-drop-framework
docker run --env-file docker.env -v ~/Downloads/op_merkle_rewards_data:/tmp/feesdb hopprotocol/merkle-drop-framework start:dist generate -- --network=mainnet --rewards-contract=0xb3c18710fE030a75A3A981a1AbAC0db984e51853 --rewards-contract-network=arbitrum --start-timestamp=1718150400 --end-timestamp=1725332400

Supply RPC urls in docker.env:

ETHEREUM_RPC_URL=https://example.com
POLYGON_RPC_URL=https://example.com
OPTIMISM_RPC_URL=https://example.com
ARBITRUM_RPC_URL=https://example.com
BASE_RPC_URL=https://example.com
GNOSIS_RPC_URL=https://example.com
NOVA_RPC_URL=https://example.com
LINEA_RPC_URL=https://example.com
POLYGONZK_RPC_URL=https://example.com
USE_API_FOR_ON_CHAIN_DATA=true

Web app to publish root:

Contract information for multisig signers:

Contract address: 0xb3c18710fE030a75A3A981a1AbAC0db984e51853
Method: setMerkleRoot
Parameters:
_merkleRoot: 0x03d3b5684a37912cca9036f8f4da73f628e3a304b3c6496661d877383b27cc5a
totalRewards: 18378890000000000000000

1 post - 1 participant

Read full topic

Automated 🤖 AUTOMATED: New OP Merkle Rewards Root Wed, 21 Aug 2024 01:00:00 +0000

Published: Aug 22, 2024


This is an automated post by the merkle rewards worker bot :robot:

A new merkle root has been published to GitHub:

Merkle root hash: 0xf54b03f2bb750ca31371b8d2f7c579fe4471b10579dfe9bdd556a067da1f27fc
Merkle root total amount: 289596.009537088822848506 (289596009537088822848506)
Start timestamp: 1663898400 (2022-09-23T02:00:00.000+00:00)
End timestamp: 1724202000 (2024-08-21T01:00:00.000+00:00)
Rewards contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Rewards contract network: optimism

Instructions to verify merkle root:

docker pull hopprotocol/merkle-drop-framework
docker run --env-file docker.env -v ~/Downloads/op_merkle_rewards_data:/tmp/feesdb hopprotocol/merkle-drop-framework start:dist generate -- --network=mainnet --rewards-contract=0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73 --rewards-contract-network=optimism --start-timestamp=1663898400 --end-timestamp=1724202000

Supply RPC urls in docker.env:

ETHEREUM_RPC_URL=https://example.com
POLYGON_RPC_URL=https://example.com
OPTIMISM_RPC_URL=https://example.com
ARBITRUM_RPC_URL=https://example.com
BASE_RPC_URL=https://example.com
GNOSIS_RPC_URL=https://example.com
NOVA_RPC_URL=https://example.com
LINEA_RPC_URL=https://example.com
POLYGONZK_RPC_URL=https://example.com
USE_API_FOR_ON_CHAIN_DATA=true

Web app to publish root:

Contract information for multisig signers:

Contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Method: setMerkleRoot
Parameters:
_merkleRoot: 0xf54b03f2bb750ca31371b8d2f7c579fe4471b10579dfe9bdd556a067da1f27fc
totalRewards: 289596009537088822848506

1 post - 1 participant

Read full topic

Automated 🤖 AUTOMATED: New Merkle Rewards Root Wed, 21 Aug 2024 00:00:00 +0000

Published: Aug 22, 2024


This is an automated post by the merkle rewards worker bot :robot:

A new merkle root has been published to GitHub:

Merkle root hash: 0x4ccf7e860ea9d959a962f775ac74ea66691fb7a90f5d1a0f253123aa03b15c41
Merkle root total amount: 16814.46 (16814460000000000000000)
Start timestamp: 1718150400 (2024-06-12T00:00:00.000+00:00)
End timestamp: 1724198400 (2024-08-21T00:00:00.000+00:00)
Rewards contract address: 0xb3c18710fE030a75A3A981a1AbAC0db984e51853
Rewards contract network: arbitrum

Instructions to verify merkle root:

docker pull hopprotocol/merkle-drop-framework
docker run --env-file docker.env -v ~/Downloads/op_merkle_rewards_data:/tmp/feesdb hopprotocol/merkle-drop-framework start:dist generate -- --network=mainnet --rewards-contract=0xb3c18710fE030a75A3A981a1AbAC0db984e51853 --rewards-contract-network=arbitrum --start-timestamp=1718150400 --end-timestamp=1724198400

Supply RPC urls in docker.env:

ETHEREUM_RPC_URL=https://example.com
POLYGON_RPC_URL=https://example.com
OPTIMISM_RPC_URL=https://example.com
ARBITRUM_RPC_URL=https://example.com
BASE_RPC_URL=https://example.com
GNOSIS_RPC_URL=https://example.com
NOVA_RPC_URL=https://example.com
LINEA_RPC_URL=https://example.com
POLYGONZK_RPC_URL=https://example.com
USE_API_FOR_ON_CHAIN_DATA=true

Web app to publish root:

Contract information for multisig signers:

Contract address: 0xb3c18710fE030a75A3A981a1AbAC0db984e51853
Method: setMerkleRoot
Parameters:
_merkleRoot: 0x4ccf7e860ea9d959a962f775ac74ea66691fb7a90f5d1a0f253123aa03b15c41
totalRewards: 16814460000000000000000

1 post - 1 participant

Read full topic

Automated 🤖 AUTOMATED: New OP Merkle Rewards Root Wed, 24 Jul 2024 01:00:00 +0000

Published: Jul 25, 2024


This is an automated post by the merkle rewards worker bot :robot:

A new merkle root has been published to GitHub:

Merkle root hash: 0x89a6ba7c04ea39f1dc5af3f4f66919e4bff834c28b04177efcc5a5a0bc62bd9d
Merkle root total amount: 288219.289537088822848506 (288219289537088822848506)
Start timestamp: 1663898400 (2022-09-23T02:00:00.000+00:00)
End timestamp: 1721782800 (2024-07-24T01:00:00.000+00:00)
Rewards contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Rewards contract network: optimism

Instructions to verify merkle root:

docker pull hopprotocol/merkle-drop-framework
docker run --env-file docker.env -v ~/Downloads/op_merkle_rewards_data:/tmp/feesdb hopprotocol/merkle-drop-framework start:dist generate -- --network=mainnet --rewards-contract=0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73 --rewards-contract-network=optimism --start-timestamp=1663898400 --end-timestamp=1721782800

Supply RPC urls in docker.env:

ETHEREUM_RPC_URL=https://example.com
POLYGON_RPC_URL=https://example.com
OPTIMISM_RPC_URL=https://example.com
ARBITRUM_RPC_URL=https://example.com
BASE_RPC_URL=https://example.com
GNOSIS_RPC_URL=https://example.com
NOVA_RPC_URL=https://example.com
LINEA_RPC_URL=https://example.com
POLYGONZK_RPC_URL=https://example.com
USE_API_FOR_ON_CHAIN_DATA=true

Web app to publish root:

Contract information for multisig signers:

Contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Method: setMerkleRoot
Parameters:
_merkleRoot: 0x89a6ba7c04ea39f1dc5af3f4f66919e4bff834c28b04177efcc5a5a0bc62bd9d
totalRewards: 288219289537088822848506

1 post - 1 participant

Read full topic

Automated 🤖 AUTOMATED: New Merkle Rewards Root Wed, 24 Jul 2024 00:00:00 +0000

Published: Jul 25, 2024


This is an automated post by the merkle rewards worker bot :robot:

A new merkle root has been published to GitHub:

Merkle root hash: 0x094d0e49c267784244e4e3b8be0d4b83b65b73952e047af16016f2abdb20ddfe
Merkle root total amount: 12083.09 (12083090000000000000000)
Start timestamp: 1718150400 (2024-06-12T00:00:00.000+00:00)
End timestamp: 1721779200 (2024-07-24T00:00:00.000+00:00)
Rewards contract address: 0xb3c18710fE030a75A3A981a1AbAC0db984e51853
Rewards contract network: arbitrum

Instructions to verify merkle root:

docker pull hopprotocol/merkle-drop-framework
docker run --env-file docker.env -v ~/Downloads/op_merkle_rewards_data:/tmp/feesdb hopprotocol/merkle-drop-framework start:dist generate -- --network=mainnet --rewards-contract=0xb3c18710fE030a75A3A981a1AbAC0db984e51853 --rewards-contract-network=arbitrum --start-timestamp=1718150400 --end-timestamp=1721779200

Supply RPC urls in docker.env:

ETHEREUM_RPC_URL=https://example.com
POLYGON_RPC_URL=https://example.com
OPTIMISM_RPC_URL=https://example.com
ARBITRUM_RPC_URL=https://example.com
BASE_RPC_URL=https://example.com
GNOSIS_RPC_URL=https://example.com
NOVA_RPC_URL=https://example.com
LINEA_RPC_URL=https://example.com
POLYGONZK_RPC_URL=https://example.com
USE_API_FOR_ON_CHAIN_DATA=true

Web app to publish root:

Contract information for multisig signers:

Contract address: 0xb3c18710fE030a75A3A981a1AbAC0db984e51853
Method: setMerkleRoot
Parameters:
_merkleRoot: 0x094d0e49c267784244e4e3b8be0d4b83b65b73952e047af16016f2abdb20ddfe
totalRewards: 12083090000000000000000

1 post - 1 participant

Read full topic

🐰Hop Ecosystem Collaboration Request with Contrax Finance

Published: Jul 23, 2024


Hello Hop DAO community,

This is Soheeb with Contrax, and I’m also a casual Hop DAO member (I’ve “hopped” on a few calls before :blush:). Contrax is a new DeFi yield aggregator focused on making access to DeFi yield earning as easy as using a fintech app. We use account abstraction, gas coverage, routing into all strategies with USDC or ETH, and auto-compounding to enhance this experience.

We are currently on Arbitrum and have received the LTIPP grant. We integrated a few Hop vaults before we were even live, and today, almost 3/4 of our $400k in TVL comes from Hop vaults, with the most popular being the Hop hETH vault.

The vault can achieve such a high APY because it is on top of your current vault, which already has LTIPP rewards, with further rewards distributed by us.

Given that we are LTIPP partners and have already done the integration while bringing liquidity to your vaults, we would love to collaborate further. The exact ways to collaborate are open to discussion. At a high level, the goal would be to increase exposure of both protocols to the communities of the other and find ways to work together as we build out the product.

Looking forward to hearing from the community!

1 post - 1 participant

Read full topic

Automated 🤖 AUTOMATED: New Merkle Rewards Root Tue, 25 Jun 2024 18:00:00 +0000

Published: Jun 26, 2024


This is an automated post by the merkle rewards worker bot :robot:

A new merkle root has been published to GitHub:

Merkle root hash: 0x81004d8b0ff10d4a1eab01cb31356a13301cf600b155e112a363ff17a4e749de
Merkle root total amount: 3937.1 (3937100000000000000000)
Start timestamp: 1718150400 (2024-06-12T00:00:00.000+00:00)
End timestamp: 1719338400 (2024-06-25T18:00:00.000+00:00)
Rewards contract address: 0xb3c18710fE030a75A3A981a1AbAC0db984e51853
Rewards contract network: arbitrum

Instructions to verify merkle root:

docker pull hopprotocol/merkle-drop-framework
docker run --env-file docker.env -v ~/Downloads/op_merkle_rewards_data:/tmp/feesdb hopprotocol/merkle-drop-framework start:dist generate -- --network=mainnet --rewards-contract=0xb3c18710fE030a75A3A981a1AbAC0db984e51853 --rewards-contract-network=arbitrum --start-timestamp=1718150400 --end-timestamp=1719338400

Supply RPC urls in docker.env:

ETHEREUM_RPC_URL=https://example.com
POLYGON_RPC_URL=https://example.com
OPTIMISM_RPC_URL=https://example.com
ARBITRUM_RPC_URL=https://example.com
BASE_RPC_URL=https://example.com
GNOSIS_RPC_URL=https://example.com
NOVA_RPC_URL=https://example.com
LINEA_RPC_URL=https://example.com
POLYGONZK_RPC_URL=https://example.com
USE_API_FOR_ON_CHAIN_DATA=true

Web app to publish root:

Contract information for multisig signers:

Contract address: 0xb3c18710fE030a75A3A981a1AbAC0db984e51853
Method: setMerkleRoot
Parameters:
_merkleRoot: 0x81004d8b0ff10d4a1eab01cb31356a13301cf600b155e112a363ff17a4e749de
totalRewards: 3937100000000000000000

1 post - 1 participant

Read full topic

Automated 🤖 AUTOMATED: New Merkle Rewards Root Tue, 25 Jun 2024 17:00:00 +0000

Published: Jun 26, 2024


This is an automated post by the merkle rewards worker bot :robot:

A new merkle root has been published to GitHub:

Merkle root hash: 0xb03f629bfeb677f50c2a3116643f277a2c89c31a7475e7c71bbc70b073c42a65
Merkle root total amount: 286227.099537088822848506 (286227099537088822848506)
Start timestamp: 1663898400 (2022-09-23T02:00:00.000+00:00)
End timestamp: 1719334800 (2024-06-25T17:00:00.000+00:00)
Rewards contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Rewards contract network: optimism

Instructions to verify merkle root:

docker pull hopprotocol/merkle-drop-framework
docker run --env-file docker.env -v ~/Downloads/op_merkle_rewards_data:/tmp/feesdb hopprotocol/merkle-drop-framework start:dist generate -- --network=mainnet --rewards-contract=0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73 --rewards-contract-network=optimism --start-timestamp=1663898400 --end-timestamp=1719334800

Supply RPC urls in docker.env:

ETHEREUM_RPC_URL=https://example.com
POLYGON_RPC_URL=https://example.com
OPTIMISM_RPC_URL=https://example.com
ARBITRUM_RPC_URL=https://example.com
BASE_RPC_URL=https://example.com
GNOSIS_RPC_URL=https://example.com
NOVA_RPC_URL=https://example.com
LINEA_RPC_URL=https://example.com
POLYGONZK_RPC_URL=https://example.com
USE_API_FOR_ON_CHAIN_DATA=true

Web app to publish root:

Contract information for multisig signers:

Contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Method: setMerkleRoot
Parameters:
_merkleRoot: 0xb03f629bfeb677f50c2a3116643f277a2c89c31a7475e7c71bbc70b073c42a65
totalRewards: 286227099537088822848506

1 post - 1 participant

Read full topic

🐰Hop Ecosystem Discontinue MAGIC Arbitrum Nova LP mining HOP rewards

Published: Jun 21, 2024


Unnecessary selling pressure (214.33 HOP / day), useless incentive (who tf bridges to Nova in MAGIC…?).

7 posts - 4 participants

Read full topic

🗳Meta-Governance Grants Committee Election Nomination Thread

Published: Jun 14, 2024


Fortunately, the Snapshot vote to renew the Hop grants committee passed with 99.88% voting in favor, so the next step is to commence the nomination thread.

The base responsibilities for each committee member will be to:

  • Promote the grant program to attract grant applicants (e.g., hosting Discord calls or X Spaces).
  • Share quarterly participation and voting metrics regarding grant applicants.

If you would like to nominate yourself to join the three-person grants committee please share:

  • Background on yourself.

  • Why are you a great candidate to join the Hop grants committee?

  • Describe a successful grants program.

  • Please share a short list of RFPs you would like for the grants program to target.

  • Past experience with grants programs

  • Reach within crypto community

14 posts - 9 participants

Read full topic

💬General Discussions Introduction - Spike - Avantgarde finance

Published: Jun 03, 2024


Hello, Hop! :dizzy:

Just wanted to introduce myself: I’m Spike, and I work for Avantgarde Finance. Before becoming a blockchain maxi I used to be an investment banker, believe it or not! :laughing:

I’ve been around since 2016, saw the first DAO formation, and witnessed all the fun with Ethereum Classic (how is it still alive?).

At Avantgarde I cover everything related to governance - voting and decision making - and we are a large delegate in a number of protocols, including Compound and Uniswap.

Big big big thanks to @francom for having a chat with me recently. It was very helpful for getting a better understanding of how the community functions and what the key pain points are.

I’ll be joining the calls every now and then to better understand where the community is moving.

And of course great to meet everyone!

Cheers!

1 post - 1 participant

Read full topic

Automated 🤖 AUTOMATED: New Merkle Rewards Root Tue, 28 May 2024 17:00:00 +0000

Published: May 29, 2024


This is an automated post by the merkle rewards worker bot :robot:

A new merkle root has been published to GitHub:

Merkle root hash: 0x7aed64b8e489d8e85bc11b1d503068774969d12d1a30e4d9eac2b27def0dedb0
Merkle root total amount: 284306.499537088822848506 (284306499537088822848506)
Start timestamp: 1663898400 (2022-09-23T02:00:00.000+00:00)
End timestamp: 1716915600 (2024-05-28T17:00:00.000+00:00)
Rewards contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Rewards contract network: optimism

Instructions to verify merkle root:

docker pull hopprotocol/merkle-drop-framework
docker run --env-file docker.env -v ~/Downloads/op_merkle_rewards_data:/tmp/feesdb hopprotocol/merkle-drop-framework start:dist generate -- --network=mainnet --rewards-contract=0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73 --rewards-contract-network=optimism --start-timestamp=1663898400 --end-timestamp=1716915600

Supply RPC urls in docker.env:

ETHEREUM_RPC_URL=https://example.com
POLYGON_RPC_URL=https://example.com
OPTIMISM_RPC_URL=https://example.com
ARBITRUM_RPC_URL=https://example.com
BASE_RPC_URL=https://example.com
GNOSIS_RPC_URL=https://example.com
NOVA_RPC_URL=https://example.com
LINEA_RPC_URL=https://example.com
POLYGONZK_RPC_URL=https://example.com
USE_API_FOR_ON_CHAIN_DATA=true

Web app to publish root:

Contract information for multisig signers:

Contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Method: setMerkleRoot
Parameters:
_merkleRoot: 0x7aed64b8e489d8e85bc11b1d503068774969d12d1a30e4d9eac2b27def0dedb0
totalRewards: 284306499537088822848506

1 post - 1 participant

Read full topic

Automated 🤖 AUTOMATED: New Merkle Rewards Root Tue, 30 Apr 2024 16:00:00 +0000

Published: May 01, 2024


This is an automated post by the merkle rewards worker bot :robot:

A new merkle root has been published to GitHub:

Merkle root hash: 0xf6e9ddfc2e29427f49ddedb817044cee70a4652da1429eca1e84db25cdce7ad1
Merkle root total amount: 282326.709537088822848506 (282326709537088822848506)
Start timestamp: 1663898400 (2022-09-23T02:00:00.000+00:00)
End timestamp: 1714492800 (2024-04-30T16:00:00.000+00:00)
Rewards contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Rewards contract network: optimism

Instructions to verify merkle root:

docker pull hopprotocol/merkle-drop-framework
docker run --env-file docker.env -v ~/Downloads/op_merkle_rewards_data:/tmp/feesdb hopprotocol/merkle-drop-framework start:dist generate -- --network=mainnet --rewards-contract=0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73 --rewards-contract-network=optimism --start-timestamp=1663898400 --end-timestamp=1714492800

Supply RPC urls in docker.env:

ETHEREUM_RPC_URL=https://example.com
POLYGON_RPC_URL=https://example.com
OPTIMISM_RPC_URL=https://example.com
ARBITRUM_RPC_URL=https://example.com
BASE_RPC_URL=https://example.com
GNOSIS_RPC_URL=https://example.com
NOVA_RPC_URL=https://example.com
LINEA_RPC_URL=https://example.com
POLYGONZK_RPC_URL=https://example.com
USE_API_FOR_ON_CHAIN_DATA=true

Web app to publish root:

Contract information for multisig signers:

Contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Method: setMerkleRoot
Parameters:
_merkleRoot: 0xf6e9ddfc2e29427f49ddedb817044cee70a4652da1429eca1e84db25cdce7ad1
totalRewards: 282326709537088822848506

1 post - 1 participant

Read full topic

🪳Bugs & Feedback I found a bug. Do you have a bug bounty program?

Published: Apr 30, 2024


Hello
I’m curious to know if Hop Protocol offers a bug bounty program.

Additionally, I’d like to inquire about the rewards for bug reports, as I’ve discovered a critical bug.

Thank you!

2 posts - 2 participants

Read full topic

🗳Meta-Governance Hop Protocol & DAO Report

Published: Apr 10, 2024


Hop Protocol & DAO Q1 ‘24

Welcome to our Quarterly report on the progress of Hop Protocol, the leading crypto bridge focused on enhancing blockchain modularity. In this update, we’ll highlight key advancements achieved over the past quarter, from technical upgrades to protocol and DAO growth.

Hop Protocol

In Q1 2024, Hop Protocol’s cumulative volume surpassed the $5 billion mark. Hop Protocol currently supports transfers between the following networks: Ethereum Mainnet, Polygon, Gnosis, Optimism, Arbitrum One, Arbitrum Nova, Base, Linea, and Polygon zkEVM.

Total volume this quarter was $478 million, a 63.62% increase over the $292 million of Q4 2023, and higher than in any single quarter of 2023.
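
As a quick arithmetic check on the quarter-over-quarter figure (a sketch using the rounded volumes quoted above; the reported 63.62% presumably comes from unrounded data):

q1_2024 = 478_000_000  # rounded Q1 2024 volume from the report, in USD
q4_2023 = 292_000_000  # rounded Q4 2023 volume from the report, in USD

growth_pct = (q1_2024 - q4_2023) / q4_2023 * 100
print(f"{growth_pct:.2f}%")  # -> 63.70%, consistent with the reported 63.62%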

While Hop V1 has been live since 2021 and has successfully bridged over $5 billion, Hop V2 is around the corner as development inches closer to mainnet. Significant progress has been made on off-chain infrastructure: the V2 front-end, explorer, and related tooling are complete following ongoing testing. On-chain contract development is also progressing well, with a focus on transaction simulations on testnet.

The bonder network continues to perform and is preparing for changes coming up with V2 and its push to decentralize the bonder role.

Hop supports liquidity pools on Ethereum Mainnet, Polygon, Gnosis, Optimism, Arbitrum One, Arbitrum Nova, Base, Linea, and Polygon zkEVM. The TVL of Hop’s liquidity program is roughly $28.4 million, with 76.2% of TVL in ETH; USDC.e comes in second with 6.3% of TVL and DAI third with 5.5%. The chains with the greatest liquidity mining TVL are Arbitrum One with 33.6%, Optimism with 26.2%, Base with 19.9%, and Polygon with 12.5%. The upgrade of the USDC bridge to CCTP has launched; this supports cheaper, more efficient native USDC transfers and the upgrade path to Hop V2.

The list below shows each source chain’s most frequented destination chains during Q1 ‘24

Ethereum > Polygon

Polygon > Ethereum and Base

Optimism > Base

Arbitrum One > Ethereum and Base

Base > Ethereum

Linea > Optimism, Arbitrum, and Base

The lifetime transfer count is roughly 3.7 million, and 74.10% of transfers have been in ETH. USDC is second with around 13.64% of transfers. The average weekly transfer count this quarter was roughly 29k.

The current circulating Hop token supply is around 75 million which represents around 7.5% of the total token supply.

The table below demonstrates that the Hop token’s current liquidity is around $682,711 in various exchanges and chains.

Hop DAO

The DAO has approximately 1.47k governance participants and 78 delegates who have been actively participating recently, according to karmahq.xyz/dao/delegates/hop.

During this quarter the DAO has had five passing Snapshot votes while Tally has had three passing votes and one failed vote. The failed vote was [HIP-39] Community Multisig Refill (4) which failed due to lack of quorum.

This quarter the DAO passed the following proposals: Treasury Diversification for Ongoing DAO Expenses (HIP44), Treasury Diversification and Protocol Owned Liquidity, Delegate Incentivization Trial (Third Cycle), Community Moderator Role and Team Compensation, and finally, Head of DAO Ops Role and Election.

There are three main proposals that are currently live in the forum: Grants Committee Renewal and Redesign, Protocol Financial Stability Part 1, and Hop Single Sided Liquidity.

Topics for future discussion: Migrating to L2 for Voting and Token Redelegation.

This report solely represents the views and research of the current Head of DAO Operations which could be subject to errors. Nothing in this report includes financial advice.

2 posts - 1 participant

Read full topic

🐰Hop Ecosystem [RFC] HOP Single Sided Liquidity

Published: Apr 08, 2024


Summary

Hop DAO should LP 25,000,000 HOP as single sided liquidity in the HOP/ETH 0.30% Univ3 pool on Ethereum. This will increase HOP liquidity and market depth while providing a natural source of diversification for the DAO.

Motivation

As Hop prepares for the launch of v2 and hopefully the ensuing growth of the protocol, now is the time to improve HOP liquidity and position the DAO for increased demand. Hop has not engaged any market makers or incentivized liquidity for HOP to date. I believe that this is the correct approach, however HOP liquidity is extremely low. This makes it difficult to enter positions and leads to significant price volatility. As of this past week, ~$500k of total buy orders would basically exhaust the entire Univ3 pool. This is a relatively small amount of liquidity and could prove problematic if Hop v2 significantly increases demand. I understand that some might say, “wow token shortage good, price go up big”, but that approach is not sustainable. The current TVL of the HOP pool is approximately $280k. This proposal would increase the TVL by $1.2m which would put HOP in line with similar assets. Much of the TVL will be well above the current spot price of HOP which should limit potential adverse effects. As a community, I believe that having well aligned HOP holders is in our long-term best interest. Increasing HOP liquidity will create avenues for more participants to get on board in a reasonable manner.

Recent efforts to increase HOP liquidity are positive but still leave a ways to go. Single sided liquidity lets us LP meaningful amounts of HOP solely to the “upside” of the price range. As the price of HOP increases, the DAO is gently selling HOP for ETH based on market demand. If the price of HOP decreases once this position is in range, there will be significantly more market depth to absorb selling. From my perspective, the biggest downside of this proposal is that it effectively creates resting sell orders for HOP that will need to be filled for the price to increase in the pool. I have tried to size the proposal appropriately to increase liquidity while not overburdening the market for HOP. The impact of additional HOP appears to be very reasonable. The next section explains the mechanics and practical implications for the position.

Mechanics of Execution

I will caveat this by saying that it is difficult to model Univ3 positions and that this should be generally accurate but may be slightly off – please keep in mind that everything is priced in ETH terms so that is an additional variable that makes precision challenging. If there is a great tool for modeling Univ3 positions, please let me know because this was all done manually.

This proposal would take 25,000,000 HOP held by the DAO and LP in a range from just above the current spot price to tick #6400 which equates to roughly a $6 HOP price. To provide context for the additional liquidity, this will add ~$250k of depth between the current spot price and $0.10 HOP. As the price of HOP increases, the dollar value of the depth increases as well (e.g. there is about 2x as much additional depth between $0.10 and $0.20, etc.). The current liquidity is fairly concentrated near the spot price; this proposal would greatly increase the longer tail of liquidity throughout this range. For the purpose of illustration, if the entire range were to be filled it would yield about $31m in ETH for the DAO. This proposal will not provide any liquidity for people to “dump” on at the current prices and only comes into range if the price of HOP increases. If we determine that the liquidity is having negative impacts on HOP or unintended consequences, we can pull the liquidity at any time (I assume this would require a subsequent vote).
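
For readers who want to reason about the tick range themselves, below is a minimal sketch of the canonical Uniswap v3 tick-to-price relation (price = 1.0001^tick). It returns the raw pool ratio of token1 per token0; translating that into a dollar HOP price, as done above for tick #6400, additionally depends on which token is token0 in the HOP/ETH pool, token decimals, and an assumed ETH/USD price, none of which are specified here:

def tick_to_raw_price(tick: int) -> float:
    # Uniswap v3: raw pool price (token1 per token0, before decimal adjustment)
    return 1.0001 ** tick

print(tick_to_raw_price(6400))   # ~1.896, the raw ratio at the proposal's upper tick
print(tick_to_raw_price(-6400))  # ~0.527, the same boundary if the token order is flipped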

I believe that this proposal is a positive step towards creating a more robust environment for HOP ahead of v2 while also providing a gentle means of diversification for the DAO. Please let me know if you have any comments, suggestions or concerns.

Voting Options

  • LP 25,000,000 HOP in specified range
  • No action
  • Abstain

12 posts - 6 participants

Read full topic

Automated 🤖 AUTOMATED: New Merkle Rewards Root Tue, 02 Apr 2024 16:00:00 +0000

Published: Apr 03, 2024


This is an automated post by the merkle rewards worker bot :robot:

A new merkle root has been published to GitHub:

Merkle root hash: 0xed1cb21099c17c1fd6e0b72240dd652b5811b74df3ca568aa8f8a98a7fb9daea
Merkle root total amount: 280084.479537088822848506 (280084479537088822848506)
Start timestamp: 1663898400 (2022-09-23T02:00:00.000+00:00)
End timestamp: 1712073600 (2024-04-02T16:00:00.000+00:00)
Rewards contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Rewards contract network: optimism

Instructions to verify merkle root:

docker pull hopprotocol/merkle-drop-framework
docker run --env-file docker.env -v ~/Downloads/op_merkle_rewards_data:/tmp/feesdb hopprotocol/merkle-drop-framework start:dist generate -- --network=mainnet --rewards-contract=0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73 --rewards-contract-network=optimism --start-timestamp=1663898400 --end-timestamp=1712073600

Supply RPC urls in docker.env:

ETHEREUM_RPC_URL=https://example.com
GNOSIS_RPC_URL=https://example.com
POLYGON_RPC_URL=https://example.com
OPTIMISM_RPC_URL=https://example.com
ARBITRUM_RPC_URL=https://example.com
BASE_RPC_URL=https://example.com
NOVA_RPC_URL=https://example.com
LINEA_RPC_URL=https://example.com
POLYGONZK_RPC_URL=https://example.com
USE_API_FOR_ON_CHAIN_DATA=true

Web app to publish root:

Contract information for multisig signers:

Contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Method: setMerkleRoot
Parameters:
_merkleRoot: 0xed1cb21099c17c1fd6e0b72240dd652b5811b74df3ca568aa8f8a98a7fb9daea
totalRewards: 280084479537088822848506

1 post - 1 participant

Read full topic

🗳Meta-Governance [RFC] Protocol Financial Stability - Part 1

Published: Mar 27, 2024


Useful Links

Summary

  • This RFC outlines the importance of prioritizing financial stability for HopDAO and introduces the first part of a three-part proposal aimed at effective treasury management.
  • The core of the Framework emphasizes the importance of a structured approach to managing the treasury to guarantee resilience and promote sustainable growth. This includes transparent operations, a clear financial plan, and strategies to uphold token stability.
  • Prior to the snapshot, active engagement with the community is essential to gather feedback and refine the proposed framework.

Intro

After discussing with delegates and reviewing poll results, along with understanding the DAO’s potential from a developer’s perspective, it’s clear that prioritizing Hop Financial Stability is crucial. Managing the treasury involves many aspects, from basic budgeting to dealing with complex issues like protocol-owned liquidity and spreading out investments.

Our plan is straightforward:

First, we’ll identify the main problems and decide what’s most important. We’ll use feedback from the community and look at the DAO’s current financial situation, especially with the upcoming V2 launch. This will help us figure out where to focus our efforts.

Next, we’ll create a simple plan for managing the treasury. We’ll outline basic rules and goals for how to handle our money. This will give us a clear path forward and make sure everyone knows what to do. This is the initial part and it will be covered in this post.

Then, we’ll look at how to manage the liquidity of our token. We’ll talk about why it’s important and come up with ideas to keep our token stable and attractive to investors. This will be the second part of our RFC.

Lastly, we’ll set up a plan for how to actually do all this. We’ll decide who’s in charge, what goals we want to meet, and how we’ll measure our success. This will be like a roadmap for the DAO to follow and it will be the third and the last part of our RFC.

With this plan, we’ll be ready to manage our treasury effectively and make sure Hop stays strong and successful.

Let’s work together to make it happen!

Problem Statement

Improving our financial stability is essential for the DAO’s success, just like it is for any other business. We need to consider all aspects of our finances, but we also need to prioritize what’s most important.

The Hop Protocol makes a lot of money, but not all of it goes into our HopDAO Treasury. We have a great community, and it’s important to listen to them to understand how they see the financial situation and what’s important to them when making investment decisions.

Recently, we did a survey to find out what the community thinks we should focus on. From the feedback we received from delegates, the survey, and my own experience, here are the things that HopDAO needs:

a) Hop requires a systematic and well-thought-out approach to managing its treasury to ensure the DAO’s financial resilience and create a platform for sustainable growth.

b) To achieve this, we need a Treasury Management Framework. This framework should include:

  • a transparent operational model with actions, accountable persons, and governance structure,
  • a well-defined financial plan that aims to establish a sustainable runway, liquidity targets, supplemented by proactive budgeting, and regular reporting,
  • a HOP token liquidity management plan to ensure Protocol’s token stability and attractiveness for current and future investors.

These priorities form the foundation for the Treasury Management Framework that I’m about to introduce.

Updated Hop Treasury Management Framework

We have laid out the basic structure needed for a successful treasury management plan here.

Now, let’s refine and tailor it to fit the priorities of our DAO. As mentioned earlier, the key is to focus on setting clear principles and objectives from the start. Once we have this solid foundation, our governance process will become transparent and flexible enough to adjust to changes in the market and our financial needs over time.

TMF Initial Components

The structure of the content within the Treasury Management Framework is designed to provide a comprehensive approach to treasury management for Hop. It encompasses the following key components:

1. Treasury Management Principles

Following these principles helps us manage our DAO’s finances well. We focus on being open, inclusive, and responsible. These principles help us handle our funds in a decentralized way, aiming for transparency, reducing risks, and aligning with our goals.

  • Transparency: We believe in open communication and sharing relevant information about the DAO’s finances. Everyone should have access to know how funds are allocated and investments are made. Transparency builds trust and ensures that everyone is on the same page. We commit to monthly financial reporting, accessible via the Hop Community Forum.
  • Simplicity: “Simplicity is the ultimate sophistication”. We believe in simplicity over complexity. Treasury management doesn’t have to be convoluted and confusing. We aim to streamline our processes, making them accessible and easy to understand for everyone involved. By keeping it simple, we reduce risks and avoid unnecessary complications.
  • Diversification: We encourage spreading the risk by diversifying our investments. Instead of relying on HOP, we explore various opportunities across stablecoins, ETH, and WBTC, deployed across multiple DeFi protocols and strategies. Diversification keeps us balanced and safeguards against unexpected exploits.
  • Accountability and Decentralisation: We recognize the importance of having a dedicated individual or team responsible for driving decisions forward. This ensures that inertia is overcome and progress is made. While we value efficiency, we also advocate for a democratic process where the DAO collectively votes on fundamental guidelines. This strikes a balance between swift action and maintaining oversight to prevent reckless trading practices.
  • Risk Management: Our foremost priority is safeguarding our financial resources. We meticulously assess various risk metrics, from market-related risks to strategy-specific vulnerabilities, ensuring our treasury remains robust and sustainable.
  • Focus: Our treasury management decisions should align with our DAO’s objectives, focusing on ensuring the long-term sustainability of the DAO and its token. By keeping our objectives in sight, we make purposeful and impactful decisions.

2. Treasury Management Objectives

As framed in the problem statement section, the goal of the Treasury Management is to build:

“…a systematic and well-thought-out approach […] to ensure the DAO’s financial resilience and create a platform for sustainable growth.”

In other words, this translates into five key objectives that inform our work on the framework:

  1. Meeting Operational Needs: One of the primary objectives of the DAO’s treasury management is to ensure that a DAO has sufficient cash and liquidity to meet its financial obligations and fund its day-to-day operations. This involves optimizing cash flows, managing working capital effectively, and maintaining appropriate levels of liquidity to mitigate the risk of cash shortages. The rest of the longer-term oriented capital, e.g., not needed within the next 24-36 months for operational needs, should be used to take advantage of investment opportunities with different risk profiles.
  2. Provide sustainable liquidity for the HopDAO token: Liquidity is a foundational element for the success and stability of any token-based project. It ensures that new investors can seamlessly enter the market while providing an exit path for those looking to divest. The HopDAO Treasury should be able to identify mid-term liquidity improvement strategies and propose long-term solutions to enhance protocol liquidity.
  3. Data-Driven Decision Making: By leveraging data, we aim to optimize investment choices, risk management strategies, and operational efficiency. This principle ensures that our DeFi strategies are grounded in solid analysis of on-chain and off-chain data, as well as state-of-the-art statistical concepts, enhancing the effectiveness of our treasury management.
  4. Overseeing Risks, related to Treasury: Treasury management aims to identify, assess, and manage various financial risks faced by a DAO. This includes protocol risk, liquidity risk, credit risk, market risk, and operational risk. The objective is to implement strategies and/or hedging techniques to minimize the impact of these risks on the DAO’s financial performance.
  5. Regulatory Considerations: We stay updated on DeFi regulations to adapt our strategies and partnerships accordingly. This helps us avoid legal risks and adjust our investments as needed. We stay resilient and flexible amid regulatory changes.

Once the DAO approves these initial components, we can move forward with implementing and executing the framework.

TMF Top Priority Components Overview and Next Steps

Now, we’re at a critical point where we need to discuss the practical execution of our objectives. The proposal will be divided into two main areas: Protocol Liquidity Management and Treasury Management execution. Here’s what you can expect in Part 2 and Part 3:

Part 2 - Protocol Owned Liquidity Management Overview:

  1. POL Research: Every decision made by the DAO should be rational and data-driven. I aim to provide the Hop community with an overview of common practices used to manage protocol liquidity. The goal is to assess various protocols and strategies, ranking them based on our specific needs.
  2. Protocol Liquidity Management Framework: In this section, I’ll introduce the principles I plan to use to achieve deep liquidity and price stability.

Part 3 - Treasury Management Mandate:

This final part of the RFC will be structured as a mandate, which will be submitted to the snapshot after gathering all necessary feedback from the community and delegates. Here are the topics we will discuss:

  1. Treasury Management Operational Model & Governance: This section outlines the central components of the framework, aligning all stakeholders and ensuring clarity in the treasury management process. It details procedures, policies, and tools for efficient cash flow management, risk mitigation, and decision-making within the DAO.
  2. Financial Management and Budgeting: Emphasizing cash flow forecasting and liquidity management, this section explores strategies to ensure adequate liquidity for operational needs. It aligns budgeting practices with the DAO’s financial goals.
  3. Treasury Asset Allocation: Efficient allocation of treasury funds is vital for maximizing returns while managing risk. This section covers the selection and allocation process for deploying the DAO’s financial resources, including diversification, rebalancing, and investment strategies.
  4. Reporting: Transparency is essential for the success of a DAO. This section focuses on reporting tools and mechanisms to ensure full transparency in treasury management, emphasizing timely and accurate reporting for stakeholders.
  5. KPIs, Deliverables, and Terms: This section outlines the specific terms, expectations, timeline, and compensation associated with the mandate.

Conclusion

By implementing the Treasury Management Framework, HopDAO can establish a robust and sustainable approach to managing its financial resources. This framework guides every aspect of treasury management, from principles to governance, empowering DAOs to make informed decisions and achieve their financial goals while maintaining transparency and mitigating risks.

I’m thrilled to be part of this journey and excited to contribute to HopDAO’s Treasury success. Your feedback on this proposal is invaluable, and I welcome any thoughts or suggestions you may have. Feel free to reach out to me directly through DMs for a more detailed conversation. Let’s work together to shape the future of HopDAO!

2 posts - 2 participants

Read full topic

🗳Meta-Governance October 2023 - March 2024 HIP 4 Delegate Compensation Reporting

Published: Mar 27, 2024


Hey Hop DAO,

Since [HIP-46] Renewal of Hop Delegate Incentivization Trial (Third Cycle) passed, it’s time to create a new Delegate Compensation Reporting Thread for this period (October 11, 2023 – March 27, 2024). The last delegate compensation reporting period ended October 10, 2023; therefore anything from then up until March 27, 2024 falls within this reporting period.

To make matters easier for delegates who are eligible for incentives, the Head of DAO Operations will verify the voting and communication requirements for each delegate. Each delegate is still expected to share in this thread their voting and communication ratio, their lowest HOP delegated during the period, and their incentive rewards amount for the period based on the calculation. Please use the calculation from the recent Snapshot vote to renew the delegate incentivization program. Please share your communication publicly in each proposal’s governance forum thread, or create your own voting and communication thread for the Head of DAO Operations to verify. Please include your Ethereum address as well.

Delegates can utilize this Dune Query to find their lowest level of HOP within the time period.

Delegates can also use this graph to determine their compensation based on their lowest HOP for the specified period.

Delegates can go ahead and report below in this thread.

Below are the snapshot votes and Tally votes since the last reporting period.

Snapshot Votes Since Last Delegate Reporting Thread

  • [Temperature Check] Treasury Diversification & Protocol Owned Liquidity (multichain HOP/ETH LPs)
  • [HIP-41] Incentivize Hop AMMs on Supported and Upcoming Chains
  • Hop Community Moderator Compensation
  • [HIP-43] Proposal to create Head of DAO Operations
  • [HIP-44] Treasury Diversification for Ongoing DAO Expenses
  • [HIP-45] Head of DAO Operations Election
  • [HIP-46] Renewal of Hop Delegate Incentivization Trial (Third Cycle)

Tally Votes Since Last Delegate Reporting Thread (August, September, October)

  • [HIP-39] Community Multisig Refill (4) on Feb 5th, 2024 was defeated
  • [HIP-40] Treasury Diversification & Protocol Owned Liquidity on Feb 5th 2024 passed
  • [HIP-39] Community Multisig Refill (5) on Feb 15th, 2024 passed
  • [HIP-44] Treasury Diversification for Ongoing DAO Expenses on Mar 4th 2024 passed

For example:
francom.eth
Voting: 11/11
Communication: 9/9 (HIP 40 & HIP 44 had to be voted on in snapshot and tally but you only have to communicate rationale once for each of these proposals).

lowest Hop during period: x
incentive rewards during this period: x

20 posts - 7 participants

Read full topic

🐰Hop Ecosystem [Request for Comment] Launch Hop on NEO X (NEO EVM) Mainnet

Published: Mar 18, 2024


Launch Hop on NEO X (NEO EVM) Mainnet

Point of Contact: Tony Sun

Proposal summary

We propose to the Hop community that the Hop Bridge protocol be deployed to the NEO Ethereum Virtual Machine rollup known as “NEO X” on behalf of the community.

We believe this is the right moment for Hop to deploy on NEO X, for several major reasons:

  • NEO X is a new zk-rollup that provides Ethereum Virtual Machine (EVM) equivalence (opcode-level compatibility) for a transparent user experience and compatibility with the existing NEO ecosystem and tooling. Additionally, the speed of fraud proofs allows for near-instant native bridging of funds (rather than waiting seven days).
  • It is an Ethereum L2 scalability solution utilizing cryptographic zero-knowledge technology to provide validation and fast finality of off-chain transaction computations.
  • A new set of tools and technologies has been created and engineered, contained in this organization, to address the required recreation of all EVM opcodes for transparent deployment of, and transactions with, existing Ethereum smart contracts.
  • NEO X is aligned with NEO and its values.
  • Hop is already deployed on POS with good success.
  • Hop can gain market share through an early-mover advantage.

NEO X mainnet will launch between May and June. Our aim is to have Hop as one of our early-stage bridge partners, as we view Hop as a crucial product for users bridging their assets across various chains.

About NEO X

Neo was founded in 2014 and has grown into a first-class smart contract platform. NEO is one of the most feature-complete L1 blockchain platforms for building decentralized applications.

Neo X is an EVM-compatible sidechain incorporating Neo’s distinctive dBFT consensus mechanism. Serving as a bridge between Neo N3 and the widely adopted EVM network, we expect Neo X to significantly expand the Neo ecosystem and provide developers with broader pathways for innovation.

In this pre-alpha version of the TestNet, we have aligned the Engine and dBFT interfaces. The main features are as follows:

dBFT consensus engine support has been added to Ethereum nodes, with the Geth Ethereum node implementation taken as the basis.

A set of pre-configured standby validators act as dBFT consensus nodes. All the advantages, features, and mechanics of dBFT consensus are precisely preserved.

The Ethereum P2P protocol is extended with dBFT-specific consensus messages.

Invasive modifications to the existing Ethereum block structure are avoided as much as possible to stay compatible with existing Ethereum ecosystem tools. The MixHash block field is reused to store NextConsensus information, and the Nonce field is reused to store the dBFT Primary index.

The multisignature scheme used in Neo N3 is adapted to the existing Ethereum block structure, so that Neo X consensus nodes can add an M-out-of-N signature to the block and properly verify signature correctness. The Extra block field is reused for this purpose.

Secp256k1 signatures are used for block signing.

The multi-node dBFT consensus mechanism, enveloped transactions, and a seamless native bridge connecting Neo X with Neo N3 will be introduced in subsequent versions.

*Please be aware that this pre-alpha version of NeoX is in the active development phase, meaning that ALL data will be cleared in future updates.

Motivation

There’s significant value in Hop being available on an EVM chain. Deploying early on NEO X helps solidify Hop’s place as a leading bridge and a thought leader.

Additionally, given the community and user uptake Hop has seen on NEO X PoS, it’s only natural to make its deployment on NEO X a priority.

Partner Details

Neo Global Development

This proposal is being made by Tony Sun, an employee of Neo Global Development. Neo Global Development is a legal entity focused on the ecosystem growth and maintenance of the suite of NEO.

Partner Legal

The legal entity that is supporting this proposal is Neo Global Development Ltd, a British Virgin Islands corporation known as “NGD”.

Delegate Sponsor

There is no delegate co-authoring or sponsoring this proposal. Instead, this is a proposal submitted by Tony Sun of NGD to support the growth of NEO as part of the overall NEO community.

Conflict of Interest Declaration

There are no existing financial or contractual relationships between NGD and any of Sushiswap’s legal entities, including Sushiswap, SUSHI tokens, nor investments of Sushiswap…

What potential risks are there for this project’s success? How could they be mitigated?

Deploying on NEO X should pose minimal risks, relative to deploying on alternate blockchains. As an Ethereum Layer Two, it uses Zero Knowledge proofs to inherit NEO’s core safety, while allowing developers to easily deploy existing EVM codebases. The bridge has been disintermediated, and Sushiswap can expect reputable Oracle providers to be available as data providers from Day One. NEO X’s EVM testnet has been running for the past two months. Additionally, the deployment has been audited multiple times, by auditors including Red4Sec. Welcome to NEO

1 post - 1 participant

Read full topic

Automated 🤖 AUTOMATED: New Merkle Rewards Root Tue, 05 Mar 2024 16:00:00 +0000

Published: Mar 06, 2024


This is an automated post by the merkle rewards worker bot :robot:

A new merkle root has been published to GitHub:

Merkle root hash: 0x04ee319bdff2f925dacd5b7b5e7e565de0ca4319c31ee30eb77f1cc0b8b7a1d8
Merkle root total amount: 275744.119537088822848506 (275744119537088822848506)
Start timestamp: 1663898400 (2022-09-23T02:00:00.000+00:00)
End timestamp: 1709654400 (2024-03-05T16:00:00.000+00:00)
Rewards contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Rewards contract network: optimism

Instructions to verify merkle root:

docker pull hopprotocol/merkle-drop-framework
docker run --env-file docker.env -v ~/Downloads/op_merkle_rewards_data:/tmp/feesdb hopprotocol/merkle-drop-framework start:dist generate -- --network=mainnet --rewards-contract=0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73 --rewards-contract-network=optimism --start-timestamp=1663898400 --end-timestamp=1709654400

Supply RPC urls in docker.env:

ETHEREUM_RPC_URL=https://example.com
GNOSIS_RPC_URL=https://example.com
POLYGON_RPC_URL=https://example.com
OPTIMISM_RPC_URL=https://example.com
ARBITRUM_RPC_URL=https://example.com
BASE_RPC_URL=https://example.com
NOVA_RPC_URL=https://example.com
LINEA_RPC_URL=https://example.com
POLYGONZK_RPC_URL=https://example.com
USE_API_FOR_ON_CHAIN_DATA=true

Web app to publish root:

Contract information for multisig signers:

Contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Method: setMerkleRoot
Parameters:
_merkleRoot: 0x04ee319bdff2f925dacd5b7b5e7e565de0ca4319c31ee30eb77f1cc0b8b7a1d8
totalRewards: 275744119537088822848506

1 post - 1 participant

Read full topic

🗳Meta-Governance [RFC] Treasury Diversification for Ongoing Expenses

Published: Feb 15, 2024


This RFC is made with the same goals as the original [RFC] Treasury Diversification. Updated parameters are below.

Summary

Sell 25% of Hop DAO’s ARB holdings (209,251 ARB) for USDC. This should raise approximately $440k for Hop DAO at current prices (~$2.10).
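
A quick back-of-the-envelope check of the quoted proceeds (a sketch based on the figures in this summary; actual proceeds will depend on execution prices):

arb_to_sell = 209_251   # 25% of Hop DAO's ARB holdings, per the summary
arb_price_usd = 2.10    # approximate ARB price cited above
print(f"${arb_to_sell * arb_price_usd:,.0f}")  # -> $439,427, i.e. roughly $440k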

Motivation

Hop DAO will need stablecoins to cover ongoing expenses. The current and future ongoing expenses are:

Execution

The onchain execution of the proposal will send a message through the Arbitrum messenger to trigger a transfer of the ARB currently in the Hop Treasury alias address to the Community Multisig. The Community Multisig can then complete the sale in a series of transactions.

Voting Options

  • Sell 25% of Hop DAO’s ARB holdings for USDC
  • No action
  • Abstain

11 posts - 9 participants

Read full topic

🗳Meta-Governance Nominations for Head of DAO Ops Election

Published: Feb 13, 2024


The DAO has voted to create a Head of DAO Ops role, with 2.5 million HOP tokens voting in favor. The Head of DAO Operations will connect different aspects of the DAO and make sure there is cross-collaboration and communication to propagate personal responsibilities across the DAO’s subgroups. To qualify for this role, one must have been materially active in the DAO for at least six months, having attended community calls, posted on the forum, used the Hop bridge, and held HOP.

If you would like to take on this role, please share your nomination below with a short excerpt on who you are and why you would make a great Head of DAO Ops.

For the Head of DAO Ops discussion thread

For the Snapshot Vote

6 posts - 4 participants

Read full topic

Automated 🤖 AUTOMATED: New Merkle Rewards Root Tue, 06 Feb 2024 16:00:00 +0000

Published: Feb 07, 2024


This is an automated post by the merkle rewards worker bot :robot:

A new merkle root has been published to GitHub:

Merkle root hash: 0x66029da6eccca2631353cb14a4bc878d774c0ca304104df2720f41bd304e6de8
Merkle root total amount: 270774.119537088822848506 (270774119537088822848506)
Start timestamp: 1663898400 (2022-09-23T02:00:00.000+00:00)
End timestamp: 1707235200 (2024-02-06T16:00:00.000+00:00)
Rewards contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Rewards contract network: optimism

Instructions to verify merkle root:

docker pull hopprotocol/merkle-drop-framework
docker run --env-file docker.env -v ~/Downloads/op_merkle_rewards_data:/tmp/feesdb hopprotocol/merkle-drop-framework start:dist generate -- --network=mainnet --rewards-contract=0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73 --rewards-contract-network=optimism --start-timestamp=1663898400 --end-timestamp=1707235200

Supply RPC urls in docker.env:

ETHEREUM_RPC_URL=https://example.com
GNOSIS_RPC_URL=https://example.com
POLYGON_RPC_URL=https://example.com
OPTIMISM_RPC_URL=https://example.com
ARBITRUM_RPC_URL=https://example.com
BASE_RPC_URL=https://example.com
NOVA_RPC_URL=https://example.com
LINEA_RPC_URL=https://example.com
POLYGONZK_RPC_URL=https://example.com
USE_API_FOR_ON_CHAIN_DATA=true

Web app to publish root:

Contract information for multisig signers:

Contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Method: setMerkleRoot
Parameters:
_merkleRoot: 0x66029da6eccca2631353cb14a4bc878d774c0ca304104df2720f41bd304e6de8
totalRewards: 270774119537088822848506

1 post - 1 participant

Read full topic

🗳Meta-Governance RFC: Renewal of the Hop Delegate Incentivization Trial (3rd Period)

Published: Jan 27, 2024

View in forum →Remove

References

Original Delegate Incentivisation Trial 18

Delegate Amendment 11

Renewal of hop delegate incentivization trial

Simple Summary

Over the past year, Hop DAO has trialed compensating delegates for actively participating in Hop DAO governance. This is a proposal to extend the Hop Delegate Incentivization Program for a further twelve months.

Motivation

Through the delegate incentivisation trial over the past year, delegates have been incentivised to actively participate in Hop DAO governance by taking part in discussions on the forum, voting on proposals, and sharing their rationale for voting a particular way. This program has created a healthy culture of governance-related engagement in the Hop DAO.

Renewing this Hop Delegate Incentivization trial will ensure that the quality of Hop DAO governance does not decline.

This Incentivisation Program will retain the delegate talent that the Hop DAO has attracted; its continued existence will allow future delegates to join the Hop DAO and allocate resources to improving the Hop Protocol.

How did the program work?

Delegates are required to vote on proposals and communicate their rationale for voting in their delegate thread on the Hop DAO Forum.

Under the current Delegate Compensation Program, delegates are compensated using a formula where delegate incentives increase with the amount of HOP delegated, but at a decreasing rate. This means that a small delegate receives a greater incentive per unit of voting weight than a large delegate, while a larger delegate still receives more in total.

Delegates use this Dune Query 3 to ascertain their lowest level of HOP delegated for that month and this visualization graph 1 to ascertain the incentives they are due according to that lowest amount of HOP.

Finally, delegates would self-report under a thread dedicated to reporting delegate eligibility each month. The following information would be required of a delegate.

Vote participation percent = (Number of proposals voted upon ÷ all proposals) * 100

Communication participation percentage = (Number of proposals referenced with voting position and reasoning for that position ÷ all proposals) * 100

Lowest amount of HOP delegated for the period

Incentives to be paid out.

Previous Changes to the Program

Following the feedback provided on the previous proposal, the following changes were implemented in the previous renewal proposal.
There shall be a new formula used to calculate incentives, where:
I = Incentives to be received
h = lowest level of HOP delegated that month
M = Multiplier based on consecutive participation periods

To calculate the multiplier (M), we can use the following formula:

M = 1 + (0.1 * P)

Where:
P = Number of consecutive completed 6-month participation periods (capped at a certain value, e.g., 5)

This multiplier starts at 1 for new delegates and increases by 0.1 for each consecutive completed 6-month participation period, capped at a certain value (e.g., 1.5 after five periods).
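
A minimal sketch of this multiplier in 1e18 fixed point (illustrative only; the five-period cap is the example value above):

pragma solidity ^0.8.24;

// Illustrative only: M = 1 + 0.1 * P, with P capped at five consecutive periods,
// expressed in 1e18 fixed point so it can scale a HOP incentive amount.
library DelegateMultiplier {
    uint256 internal constant MAX_PERIODS = 5;

    function multiplier(uint256 consecutivePeriods) internal pure returns (uint256) {
        uint256 p = consecutivePeriods > MAX_PERIODS ? MAX_PERIODS : consecutivePeriods;
        return 1e18 + p * 1e17; // 1.0 + 0.1 * P, e.g. P = 5 gives 1.5e18
    }
}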

The Participation rate would be removed; therefore, new delegates who reach the 90,000 threshold could join as delegates at any time.

The incentive formula would be amended to include a Multiplier based on consecutive participation periods.

The primary import of these changes is that new delegates can join Hop Governance at any time, while old delegates are incentivized to continue participating.

There is no opt-in or opt-out mechanism. Delegates will have to self-report to receive compensation.

Specification

This proposal requests that the Hop Delegate Incentivization Program be renewed for another twelve months; the program will retain all the guidelines and procedures laid down by the original proposal 18, its subsequent amendment 11, and the amendment in the previous renewal.

Next Steps

If this proposal passes, the Hop DAO Delegate Incentivization Program will continue for another twelve months.

25 posts - 10 participants

Read full topic

🗳Meta-Governance [RFC] V2 Open questions (financial perspective)

Published: Jan 24, 2024

View in forum →Remove

Related discussion(s)

Rationale

We have a V2 launch coming up soon, and we need to ensure that we have all financial aspects covered. The market could be in our favor, but as we discussed before, we should be proactive. Here is my list of open questions for the upcoming V2 launch from financial, ops, and growth perspectives. Feel free to add your questions; the aim is to keep this as a record of things we should consider.

Open questions:

  • Protocol-Owned-Liquidity
    • Single-sided vs. bonding vs. PALM vs. Tokemak V2 etc.
  • HOP Valuation
    • What would be the parameters of the floor price, etc.? (For valuation I would suggest a competitor analysis combined with secondary market sentiment analysis)
  • Treasury
    • Circulation, payments, principles, diversification (Will create another Temp Check following this post to gather feedback)
  • Risk management
    • Liquidity risk, e.g. stablecoin reserves, HOP volatility, etc.
    • Market risk
    • Governance risks, e.g. governance attacks
  • Growth
    • Do we plan to have grant programs or any other incentives?
  • Token Utility
    • How do we ensure we have enough intrinsic value to hold and use HOP?
    • What would be the tokenomics?

Next Steps

  1. Define a rough timeline for the V2 launch so we can prepare accordingly
  2. POL research: @fourpoops is rocking with his proposal. I would love to double down on it and create a comparison between different options as part of a larger Risk Management initiative (plan to post it this week)
  3. I strongly encourage the community to start discussions on Growth and Token Utility
  4. Treasury and risk management setup
  5. Fast-track the Head of DAO nomination. I think this volume of work shouldn’t be on the devs’ shoulders but rather on one person who can handle these aspects

I have created a working file where I am already preparing the research on all topics covered here. Feel free to comment and participate!

1 post - 1 participant

Read full topic

🗳Meta-Governance [RFC] Head of DAO Governance and Operations Role

Published: Jan 11, 2024

View in forum →Remove

DAOs can be chaotic due to their unstructured nature, but there are DAO service providers who create governance frameworks and push along the day-to-day operations for the benefit of the respective DAOs. Hop DAO has had help from several groups since its inception. GFX Labs was the first to help with DAO operations, given their extensive experience in governance and ops. When they exited the DAO, StableLab took on the role of helping the DAO with operations and governance initiatives. Unfortunately, StableLab has decided to refocus its resources elsewhere during this prolonged bear market and has recently exited the DAO.

While it is hard during this extensive bear market to retain talent… the show must go on. With that in mind, I propose creating a new role titled Head of DAO Operations, where an individual is responsible for the day-to-day operations, acting as a liaison between the different participants and subgroups of the DAO such as the core developer team, grants committee, ambassadors, delegates, multisig signers, and more.

The Head of DAO Operations is not to be construed as a central figure of authority but more like the glue that connects different aspects of the DAO, makes sure there is cross-collaboration and communication, and helps propagate personal responsibilities for each of the DAO’s subgroups. It is imperative for the Head of DAO Ops to be fully aligned. Therefore, to qualify for the role one must have been materially active in the DAO for at least six months: attending community calls, posting on the forum, using the Hop bridge, and holding HOP.

The Head of DAO Ops will be in charge of community calls, the governance forum (pushing proposals from start to finish with the respective author), and pushing along the grants committee, ambassador program and multisig signers.

Additional responsibilities:
⁃ Evaluating and defining compensation for existing and new committees and their members
⁃ Assigning budgets to committees
⁃ Verifying the data posted each month for the delegate compensation thread
⁃ Providing an overview of DAO ops at a regular cadence
⁃ Overseeing and managing the transition from old committee members to new committee members when appropriate
⁃ Iterating more rapidly on HIP amendments when they are needed
⁃ Helping reform the grants committee and handling some of the ops side
⁃ Expected time commitment: 30 hours a month (roughly 1.5 hours per working day)
⁃ If a good-faith effort to accomplish the tasks set forth as the Head of DAO Ops is not made, the DAO will not pay the compensation.

This role should go through a 6-month initial term to make sure DAO operations continue to run smoothly in the short term while preparing for a long-term solution regarding ongoing operations. Since this role requires constant participation and time commitment, I believe compensation for this role should be $3k/month with a 1-year vesting period.

Compensation is to be made in HOP tokens. Vesting starts the day after the election, when the role officially begins and the work commences. Payment is to be made retroactively every three months.

22 posts - 11 participants

Read full topic

Core (2)
Ethereum Magicians
EIPs EIP-7773: Amsterdam Network Upgrade Meta Thread

Published: Sep 26, 2024

View in forum →Remove

discussion-to thread for the Amsterdam Meta EIP

1 post - 1 participant

Read full topic

EIPs interfaces E.I.P.-8900: Transparent Financial Statements

Published: Sep 25, 2024

View in forum →Remove

Discussion about E.I.P.-8900: Transparent Financial Statements

Abstract

This proposal defines a standard API that enables EVM blockchain-based companies (also called “protocols”) to publish their financial information, specifically income statements and balance sheets, on-chain in a transparent and accessible manner through Solidity smart contracts. The standard aims to emulate the reporting structure used by publicly traded companies in traditional stock markets, such as SEC 10-Q filings. The financial statements include key figures such as Revenue, Cost of Goods Sold, Operating Expenses, Operating Income, Earnings Before Interest, Taxes, Depreciation, and Amortization (EBITDA), and Earnings Per Token, allowing investors to assess the financial health of blockchain-based companies in a standardized, transparent, clear, and reliable format.
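
As a non-normative illustration of what such an on-chain API could look like (field names, scaling, and period indexing are assumptions, not the proposal’s actual interface):

pragma solidity ^0.8.24;

// Illustrative sketch only; not the interface proposed by this EIP.
interface ITransparentFinancialStatements {
    struct IncomeStatement {
        uint256 revenue;
        uint256 costOfGoodsSold;
        uint256 operatingExpenses;
        int256 operatingIncome;
        int256 ebitda;
        int256 earningsPerToken; // assumed 1e18 fixed point
    }

    struct BalanceSheet {
        uint256 totalAssets;
        uint256 totalLiabilities;
        int256 totalEquity;
    }

    // Reporting periods are assumed to be indexed sequentially (e.g. by quarter).
    function incomeStatement(uint256 period) external view returns (IncomeStatement memory);
    function balanceSheet(uint256 period) external view returns (BalanceSheet memory);
    function latestPeriod() external view returns (uint256);
}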

Motivation

The motivation of this EIP is to bring seriousness to the cryptocurrency investment market. Currently, the situation is as follows:

Most ERC-20 tokens representing EVM blockchain-based companies (also called “protocols”) DO NOT work the same way as a publicly traded stock, which represents a share of ownership in the equity of the company (so the buyer of the stock becomes a shareholder and co-owner of the business, its profits, and/or its dividends). Instead, they function as “commodities” such as oil: consumable items created by the protocol to be spent on its platform. They are publicly traded and advertised as representing the underlying protocol like a share, while in practice working like a commodity, without any public, transparent, and clear financial information of the kind publicly traded stocks provide.

Added to that, most token research reports currently found on the internet are informal Substack or Twitter posts, full of abstract explanations of the protocol’s features, lacking transparent financial numbers and factual financial information, and written by anonymous users with no real reputation at stake.

This EIP improves on that by giving users and investors transparent, clear, and factual financial information to work with when analyzing, as a potential investment, an EVM blockchain-based company that implements this EIP in its Solidity smart contracts. Over the long term, that will generate trust, transparency, and seriousness in the cryptocurrency investment market.

FEEDBACK:

Please, everyone is invited to respectfully and constructively provide useful feedback. Have a nice day! :smiley:

1 post - 1 participant

Read full topic

ERCs ERC-4D: Dimensional Token Standard (DTS)

Published: Sep 25, 2024

View in forum →Remove

ERC: 4D
Author: ĂŚkiro
Type: Standards Track
Category: ERC
Status: Draft
Created: 2024-09-24
Requires: 20, 721, 1155, 6551, 404


Abstract

ERC-4D introduces dimensional tokens that combine ERC-20 and ERC-6551, creating tokens that function both as tradable assets and wallets. Each token holds assets within its account, enabling multi-layered assets, recursive token structures, and cyclic ownership. This standard adds liquidity to various asset classes while providing advanced management capabilities.

Motivation

Current token standards like ERC-20 and ERC-721 have paved the way for diverse use cases in the blockchain space, but they operate independently with limited interaction. As decentralized applications evolve, we need a token standard that can handle more complex scenarios. ERC-4D introduces a multi-dimensional approach, merging fungible and non-fungible behaviors. This enables each ERC-4D token to function both as a tradable asset and as a multi-layered asset container. It unlocks new possibilities for asset management, liquidity, and innovative financial instruments.

Specification

The keywords “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119 and RFC 8174.

  1. Dual-Root Structure: Each ERC-4D token has both an ERC-20 root and an ERC-6551 root.
    • ERC-20 Root: Enables traditional fungible token behavior.
    • ERC-6551 Root: ERC-721 NFT serving as a wallet with its own token account.
  2. Multi-Layered Assets (Dimensions): ERC-4D tokens can hold other assets, including other tokens, NFTs, or even more ERC-4D tokens.
  3. Cyclical Ownership: The ERC-6551 token account is designed to own the original ERC-20 token that created it. This creates a loop in ownership, adding complexity and novel functionalities to the token’s behavior.
  4. Deque Architecture: The ERC-6551 root allows double-ended queue operations, enabling flexible asset management through both LIFO and FIFO approaches.

Some core functionalities can be defined as follows:

// Creates an account of a specific NFT. If the account exists, returns the address
function createAccount(address implementation, bytes32 salt, uint256 chainId, address tokenContract, uint256 tokenId) external onlyOwner returns (address)

// Excludes an account from the ownership of NFTs. If the account possesses NFTs they are sent to the contract
function setERC721TransferExempt(address account, bool value) external onlyOwner

// Withdraws an NFT from addresses whose balance drops below the threshold
function withdrawAndStoreERC721(address from) internal virtual 

// Mints an NFT to addresses whose balance exceeds a predefined threshold
function retrieveOrMintERC721(address to) internal virtual
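
For illustration only, a transfer hook in the style of the functions above could keep NFT ownership in sync with a full-unit ERC-20 balance threshold (the UNIT constant, exemption mapping, and hook name are assumptions, not normative):

pragma solidity ^0.8.24;

// Illustrative sketch: ties NFT ownership to a one-full-unit ERC-20 threshold,
// using the hooks named in the specification above.
abstract contract ERC4DThresholdSketch {
    uint256 internal constant UNIT = 1e18; // assumed "one whole token" threshold
    mapping(address => bool) internal erc721TransferExempt;

    function balanceOf(address account) public view virtual returns (uint256);
    function withdrawAndStoreERC721(address from) internal virtual;
    function retrieveOrMintERC721(address to) internal virtual;

    // Called after every ERC-20 balance change.
    function _afterTokenTransfer(address from, address to) internal virtual {
        if (from != address(0) && !erc721TransferExempt[from] && balanceOf(from) < UNIT) {
            withdrawAndStoreERC721(from); // sender no longer holds a full unit
        }
        if (to != address(0) && !erc721TransferExempt[to] && balanceOf(to) >= UNIT) {
            retrieveOrMintERC721(to); // receiver crossed the threshold
        }
    }
}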

Rationale

By combining the capabilities of ERC-20 and ERC-6551, ERC-4D creates a new standard that facilitates more complex asset interactions, addressing the limitations of existing token models in managing nested or recursive assets.

Applications

Including, but not limited to:

  1. Multi-layered Asset Management: Recursive ownership of assets, enabling advanced portfolio or index tokenization.
  2. Liquid Wallets: Tradable wallets containing multiple assets.
  3. RWA Tokenization: Real-world asset tokenization for easier trade and management.
  4. On-Chain Artifacts: Can be utilized to hold digital data or on-chain artifacts such as intellectual property or historical data.
  5. Security Protocols: Self-owning tokens introduce automated governance mechanisms.

Backward Compatibility

ERC-4D remains fully compatible with existing ERC-20 and ERC-721 standards.


This spec is a WIP and will be updated as implementation progresses.

1 post - 1 participant

Read full topic

Protocol Calls & happenings ePBS breakout #10, September 27 2024
Protocol Calls & happenings All Core Devs - Consensus (ACDC) #144, October 3 2024

Published: Sep 21, 2024

View in forum →Remove

Agenda

Consensus-layer Call 143 · Issue #1158 · ethereum/pm · GitHub moderated by @ralexstokes

Stream

1 post - 1 participant

Read full topic

Protocol Calls & happenings Stateless implementers call #24, September 23 2024
Protocol Calls & happenings EOF Implementers call #58, September 18 2024

Published: Sep 20, 2024

View in forum →Remove

Agenda

EOF Implementers Call #58 · Issue #1146 · ethereum/pm · GitHub

Notes

Notes by @shemnon [Copied from ethereum/pm]

  • Client Discussion
  • Discussed Split
    • pt 2 should follow w/in 3-6 months
    • mild preference for one merge, but not enough to block
    • concern about scope creep and moving actual shipment to 1 year
  • Compiler Updates
    • None
  • Specs
    • Tracing
      • evmone will look into implementing, but may have changes to the proposal
    • HASCODE/ISCONTRACT
      • Discussed AA concern in discord
        • AA is concerned about the pattern where non-EOA accounts are barred; HASCODE could be used to perpetuate that and slow AA adoption
        • Also, 721 could be solved better with ERC-165 interface
        • counter: AA is slowed by lack of smart contract signatures
        • counter: Banning EOAs possible w/o HASCODE
        • No conclusion yet
      • Could pectra split allow it to be added in V1?
        • Some preference to be in a follow-on fork, but preference may have been driven by time to gather data
        • Split is because of EIP bloat, adding a new EIP would counter the solution
        • At least 1 client wants to include it for V1
          • Absence could slow adoption of EOF (Any ERC-721/ERC-1155 or flashloan project for example)
        • There is concern that 721 and 1155 are badly designed, and so this pattern won’t re-occur. An update of 721 could provide the same protections and conform to modern practices.
        • AA accounts could implement 165, but then they would have to have the 721 callbacks active.
        • See note below about EXTDELEGATE and proxies
      • EXTCALL return codes
        • intent
        • 1 - gas was not burned as part of the violation
          • User reverts
          • Some failures related to call process
        • 2 - all gas consumed as part of the failure
          • Out of gas
          • RETURNDATA copy oob in legacy
          • static call violation
          • data stack overflow
        • No action today
      • Allow EXTDELEGATECALL to legacy
        • This is another use case for HASCODE, to ensure EOF proxies won’t delegate to a legacy contract
          • This could be solved with a “handshake” method or a trial delegate call
  • Testing
    • PRs will be reviewed
    • 7702 testing
      • many clients were rejecting incorrectly
      • execute mode in EEST can address this problem - uses JSON-RPC only to interact with node
  • Bikeshedding
    • Can we rename types to stack-io in the spec? types was not terribly clear.
      • stack-io
      • section-info or section-spec
      • code-info
      • signature(s)
  • Standing agenda should move testing to the first items

1 post - 1 participant

Read full topic

ERCs ERC-7772: Partial Gas Sponsorship Interface

Published: Sep 20, 2024

View in forum →Remove

Discussion topic for EIP-7772 https://github.com/ethereum/ERCs/pull/649

This proposal defines the necessary interface that decentralized applications (dApps) must implement to sponsor a portion of the required gas for user operations utilizing a Paymaster that supports this standard. The proposal also provides a suggested code implementation that Paymasters can include in their current implementation to support dApp sponsorship. Partial sponsorship across more than one dApp may also be achieved through this proposal.

1 post - 1 participant

Read full topic

ERCs ERC-7771: Router Proxy

Published: Sep 19, 2024

View in forum →Remove

Abstract


The Router Proxy introduces a streamlined approach to managing multiple implementations behind a single proxy, similar to the Diamond Proxy Standard (ERC-2535). Unlike the latter, this method hardcodes module addresses within the proxy’s implementation contract, offering a simpler, more explicit, and gas-efficient mechanism. This design reduces complexity, making it easier to reason about and improving overall efficiency while retaining the flexibility to manage multiple modules.
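
A rough sketch of the general router pattern being described (illustrative module names, selectors, and addresses; not the ERC’s normative code):

pragma solidity ^0.8.24;

// Illustrative router: module addresses are hardcoded in the implementation and
// chosen per function selector, then reached via delegatecall from the proxy.
contract ExampleRouter {
    address private constant MODULE_A = 0x1111111111111111111111111111111111111111; // placeholder
    address private constant MODULE_B = 0x2222222222222222222222222222222222222222; // placeholder

    function _route(bytes4 selector) internal pure returns (address) {
        // In practice this dispatch table would be generated to cover every module selector.
        if (selector == bytes4(keccak256("transfer(address,uint256)"))) return MODULE_A;
        return MODULE_B;
    }

    fallback() external payable {
        address module = _route(msg.sig);
        assembly {
            calldatacopy(0, 0, calldatasize())
            let ok := delegatecall(gas(), module, 0, calldatasize(), 0, 0)
            returndatacopy(0, 0, returndatasize())
            switch ok
            case 0 { revert(0, returndatasize()) }
            default { return(0, returndatasize()) }
        }
    }
}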

ERC Pull Request


1 post - 1 participant

Read full topic

Protocol Calls & happenings PeerDAS breakout #9, October 1 2024
ERCs ERC-7769: JSON-RPC for ERC-4337 Account Abstraction

Published: Sep 18, 2024

View in forum →Remove

This ERC describes the JSON-RPC calls used to communicate with an ERC-4337 bundler.
It was previously included in ERC-4337 itself, but was extracted into a separate document.

This specification is currently identical to the one previously defined in ERC-4337 and does not require any modifications to existing bundlers or deployed ERC-4337 contracts.

This is the PR for this new ERC: ERC-4337 extension: New JSON-RPC APIs by forshtat ¡ Pull Request #628 ¡ ethereum/ERCs ¡ GitHub

1 post - 1 participant

Read full topic

ERCs ERC-7766: Signature Aggregation for Account Abstraction

Published: Sep 18, 2024

View in forum →Remove

The core ERC-4337 previously included the specification for signature aggregation.
However, as this feature is not required for the functioning of the Account Abstraction, it is being extracted from the core ERC-4337 into a standalone specification.
This specification is currently identical to the one previously implemented in ERC-4337 and does not require any modifications to the deployed ERC-4337 contracts.
However, being a standalone proposal, “ERC-7766: Signature Aggregation” will continue evolving separately from ERC-4337 on its own timeline.

This is the PR to create the new ERC:

This is the PR to remove the Signature Aggregation specification from ERC-4337:

1 post - 1 participant

Read full topic

Protocol Calls & happenings Pectra testing call #5, 16 September 2024

Published: Sep 17, 2024

View in forum →Remove

Summary

Update by @parithosh (from Eth R&D Discord )

pectra:
eof :
  • Testing is going well and more clients are achieving readiness for a devnet
  • Devnet would depend on fork split decision, likely at ACD this week
  • We’d do interleaved devnets, so eof-devnet-0… and a decision needs to be made on whether it’s done as Prague or as Osaka
  • We’d need to be careful not to trigger the same bugs as pectra devnets to reduce debugging overhead. One approach would be to remove the faucet (we plan this for peerDAS), but the downside is that EOF would benefit a lot from public testing
peerDAS :
  • Update on local devnet and its issues
  • Lodestar endianness bug
  • Public devnets once local issues are triaged
  • Devnet cycle would depend on fork split decision made likely during ACD this week
fuzzing :
  • Marius would focus on pectra fuzzing and bad block generators (also open call for other clients to implement this and take some load off of him/speed up the fork)
general :
  • We had a brief temp check on fork splitting and which open tasks would block shipping quickly

1 post - 1 participant

Read full topic

Protocol Calls & happenings CFP Ethereum Zurich 2025

Published: Sep 17, 2024

View in forum →Remove

Hey magicians,

I am excited to announce the Call for Papers (CFP) for the Community Track of Ethereum Zurich, dedicated to fostering collaboration, innovation, and alignment within the Ethereum community. This track is open for you, magicians! You can suggest any topic that you’d like to see and then collect the result of your proposal by watching the talks in Zurich. Get ready to travel to Switzerland on 30-31 January 2025; more info at EthereumZuri.ch.

To kick-start the community track I propose the following:

CFP Proposal - community track

  1. Localization and Accessibility
  • Ensuring Accessibility in Ethereum Applications (IETF style)
  • Case Studies on Successful Localization Projects

Ethereum Zurich already has two tracks completed, so you can start preparing your paper already!

CFP - Research/Academia

1. P2P Networking and Storage

  • Data Propagation and Gossip Protocols
  • Distributed Storage Systems
  • Network Topology and Optimization
  • Resource Allocation and Incentives

2. Consensus Mechanisms, Tokenomics, and Game Theory

  • Novel Consensus Algorithms
  • Tokenomics and Incentive Design
  • Game Theoretical Models in Blockchain
  • Staking and Governance Mechanisms

3. Formal Methods and Security

  • Formal Verification of Smart Contracts
  • Security Protocol Design and Analysis
  • Attack Vectors and Defense Mechanisms
  • Privacy-Preserving Technologies

4. Policy, Regulations, and Ethics

  • Legal and Regulatory Frameworks
  • Ethical Implications of Decentralized Systems
  • Blockchain and Governance
  • Data Sovereignty and Decentralization

5. Algorithms, Zero-Knowledge Proofs, and Computational Efficiency

  • Zero-Knowledge Proofs (ZKPs) and Cryptographic Primitives
  • Algorithmic Optimization for Blockchain Environments
  • Virtual Machine and Code Execution Optimization
  • Computational Complexity in Blockchain
  • Efficient Hashing & Verification Algorithms

CFP Proposal - Industry

Decentralized Applications (DApps)

  • DeFi: Decentralized Finance and Lending Protocols
  • DeSci, Social Networks and Impact Platforms Powered by Blockchain
  • Blockchain in Gaming, Metaverse, and Virtual Economies

Infrastructure

  • Layer 1 and Layer 2 Solutions: Scaling and Performance Optimization
  • Interoperability Between Blockchain Networks and Bridging Solutions
  • DePin and Supportive Infrastructure

Tokenization and Bridging Real-World Assets (RWA)

  • Tokenizing Physical Assets
  • Legal and Regulatory Implications of Tokenized Economies
  • Building Digital Marketplaces and Ecosystems for Tokenized Assets

Security and Audits

  • Smart Contract Audits and Security Best Practices
  • Managing and Mitigating Cybersecurity Threats in Blockchain
  • Incident Response and Fraud Prevention in Decentralized Systems

Startup Challenges

  • Fundraising in Blockchain: VCs, ICOs, and DAOs
  • Navigating Legal and Regulatory Frameworks for Blockchain Startups
  • Building and Scaling a Blockchain Team: Talent, Culture, and Growth

See you in Zurich!

4 posts - 3 participants

Read full topic

EIPs Meta EIP-7768: No-Ether transactions with free-for-all tips

Published: Sep 16, 2024

View in forum →Remove

Here is sandwiching as a way to do third-party-pay transactions.


eip: 8888
title: No-Ether transactions with free-for-all tips
author: William Entriken (@fulldecent)
status: Draft
type: Meta
created: 2024-09-14


Abstract

A technique is introduced where an externally-owned account having no Ether can send transactions and pay tips using a new “free-for-all” bucket while keeping its own tx.origin. This requires no client changes and is compatible with existing ecosystem parts.

Motivation

There is much interest in third-party-pay transactions on Ethereum and competing networks.

Other proposals require changes to the Ethereum client, require that transactions be sent to the network from a separate account (i.e. a different tx.origin), and/or involve other additional machinery.

In contrast, this proposal introduces and standardizes a solution to this problem that works with only existing clients and technology, and which preserves the tx.origin of the originator of a transaction.

Specification

End user process

  1. An end user who controls an externally-owned account, say Alice, prepares the transaction(s) she would like to execute and signs this (series of) transactions.
  2. If Alice would like to provide consideration for executing these transactions, she ensures that a well-known address on the network, “the free-for-all bucket”, will control tokens (such as ERC-20, ERC-721, or ERC-1155 tokens) at the end of her series of transactions.
  3. Alice orders her transaction nonces carefully, considering that what will eventually be executed may be:
    1. None of them;
    2. Only the first;
    3. The first then the second;
    4. The first, then the second, … then the Nth transaction, which is not the last in her series of transactions; or
    5. All her transactions, in order.
  4. Alice sends this series of transactions to a service that communicates with block proposers.
    1. Currently, mempools in baseline clients would not propagate such transactions.

For example, if consideration is sent to the free-for-all address, this would typically be the last in her series of transactions.

Block preparer process

  1. Sign a transaction (from any origin) to send Ether to Alice representing the current gas price times the current block size.
  2. (Optional) Prepare and sign a transaction to the free-for-all account, to preload any necessary responses.
  3. Start an execution context and include this send-Ether transaction and all of Alice’s transactions.
  4. In the execution context, identify tokens (e.g. ERC-20, ERC-721, ERC-1155) sent to the free-for-all contract address or other valuable consideration accrued to the free-for-all account.
  5. Sign a transaction (from any origin) to take custody of the consideration from the free-for-all account and include this transaction in the execution context.
  6. Evaluate the total gas spent.
  7. Roll back the execution context, then repeat steps 1 through 4 with these changes:
    1. Step 1: use the actual required gas amount (in Ether).
    2. Step 4: abort if the consideration received in this second iteration is not the expected amount from the first iteration.
  8. Use some local business logic to compare the Ether spent in step 1 (second iteration) versus the consideration received in step 4, and classify the result as favorable or not.
  9. If the result is favorable, commit this execution context to the mainline. Or if the result is not favorable, rollback this execution context.
    1. The result of this decision may feed into a reputation tracking system to avoid evaluating future unfruitful transaction(s).
  10. Continue execution, and publish the block.

Free-for-all bucket

This approach requires that the end user must be able to send consideration to the block proposer without knowing who they are, and the block proposer must be able to realize this consideration.

This EIP proposes to use a well-known contract account deployment for this purpose. Here is the required interface:

interface FreeForAll {
  // Performs a call
  function execute(address recipient, bytes memory data, uint256 gasLimit, uint256 value) external;

  // Prepare return values for the next N times this contract is called only in this block
  // [TODO: spell this out]
  function preloadExecutions(bytes[] memory responses) external;

  // Return the next return value in this block from preloadExecutions
  fallback() external;
}

Rationale

This approach can be useful for end users that do not want to or are not able to add Ether to their account.

This approach allows use of the correct tx.origin, which may be required for important transactions like ERC-721 setApprovalForAll.

This approach may use more gas than other approaches where the consensus client is changed or where transactions can execute from a different account (tx.origin).

Alternatives considered

  • Update EIP-1559 so that transactions with gasPrice = 0 are legal, but only if the commensurate amount of gas will be burnt by the block preparer in that same block.
  • Create a new transaction type that encapsulates another signed transaction.
  • Create a new opcode to get the coinbase of the next block.

Backwards Compatibility

…

Reference Implementation

…

Security Considerations

…

Copyright

Copyright and related rights waived via CC0.

2 posts - 2 participants

Read full topic

EIPs interfaces EIP-####: SSZ Transaction / Receipt proofs
ERCs ERC-7770: Fractional Reserve Token

Published: Sep 16, 2024

View in forum →Remove

Abstract

We propose a new token standard for synthetic assets that are only partially redeemable to their underlying asset, but fully backed by other collateral assets.

The standard defines an interface to mint fractional reserve assets, and a standard way to reflect economic risk-related data to token holders and lenders.

Motivation

The Cambrian explosion of new L1s and L2s gave rise to bridged assets, which are synthetic by nature. Indeed, ETH on Arbitrum L2, or WETH on Binance Smart Chain, is not fully fungible with its mainnet counterpart. However, these assets are fully backed by their mainnet counterpart and guaranteed to be redeemable to their mainnet underlying asset, albeit with a certain time delay.

Fractional reserve tokens can allow an ecosystem (chains, L2s, and other networks of economic activity) to increase its supply by allowing users to mint the asset not only by bridging it to the ecosystem, but also by borrowing it (typically against a collateral).

As an example, consider a fractional reserve token, namely, frDAI, that represents a synthetic DAI.
Such a token allows users to mint 1 frDAI upon deposit of 1 DAI, or by providing collateral worth more than 1 DAI.
Quick redemption of frDAI to DAI is available as long as there is still some DAI balance in the frDAI token; otherwise, the price of frDAI may temporarily fluctuate until borrowers repay their debt.

Fractional reserve tokens may delegate minting capabilities to multiple risk curators and lending markets. Hence, a uniform standard for fractional reserve minting is needed.
Fractional reserve banking does not come without risks, such as insolvency or a bank run.
This standard does not aim to dictate economic risk management practices, but rather to standardise how the risk is reflected to token holders.

Specification

The proposed standard has the following requirements:

Interface

interface IERCXXX is IERC20 {
    // events
    event MintFractionalReserve(address indexed minter, address to, uint256 amount);
    event BurnFractionalReserve(address indexed burner, address from, uint256 amount);
    event SetSegregatedAccount(address account, bool segregated);

    // functions
    // setters
    function fractionalReserveMint(address _to, uint256 _amount) external;
    function fractionalReserveBurn(address _from, uint256 _amount) external;
 
   // getters
    function totalBorrowedSupply() external view returns (uint256);
    function requiredReserveRatio() external view returns (uint256);
    function segregatedAccount(address _account) external view returns (bool);
    function totalSegregatedSupply() external view returns (uint256);
}

Reserve ratio

The reserve ratio reflects the ratio between the token that is available as cash, i.e., available for immediate redemption (or alternatively, token that was not minted via fractional reserve minting), and the total supply of the token. Segregated accounts MUST be subtracted from the cash balance.
A lower reserve ratio gives rise to higher capital efficiency; however, it increases the likelihood of a depeg or a run on the bank, where token holders cannot immediately redeem their synthetic token.

Formally, the reserve ratio is denoted by $$\frac{totalSupply() - totalBorrowedSupply() - \sum_{a \in \text{Segregated Accounts}} \text{balanceOf}(a)}{totalSupply()}$$.
Additional fractional reserve minting MUST NOT occur when the reserve ratio, multiplied by 1e18 is lower than requiredReserveRatio().

Mint and burn functionality

The fractionalReserveMint and fractionalReserveBurn functions SHOULD be called by permissioned addresses, e.g., risk curators or lending markets. These entities SHOULD mint new tokens only to addresses that already locked collateral in a dedicated contract.

Using the reserve ratio defined above, fractionalReserveMint MUST revert if, after the mint, the reserve ratio multiplied by 1e18 falls below requiredReserveRatio().

A successful call to fractionalReserveMint(_to, _amount) MUST increase the value of totalSupply(), totalBorrowedSupply(), and the token balance of address _to, by _amount units.
A call to fractionalReserveMint MUST emit a MintFractionalReserve event.
A call to fractionalReserveMint MUST revert if, after the mint, the reserve ratio, multiplied by 1e18, falls below the value of requiredReserveRatio().

Similarly, a successful call to fractionalReserveBurn(_from, _amount) MUST decrease the value of totalSupply(),totalBorrowedSupply(), and the token balance of address _from by _amount units.
A call to fractionalReserveBurn MUST emit a BurnFractionalReserve event.

Segregated accounts

Increasing the total supply could be a concern if a token is used for DAO votes and/or if dividends are distributed to token holders.
In order to mitigate such concerns, segregated accounts are introduced, with the premise that money in these accounts is not counted towards the reserve, and therefore, additional token supply cannot be minted against them.

At every point in time, it MUST hold that the sum of token balances for segregated addresses equals totalSegregatedSupply().

Account balance

The fractionalReserveMint SHOULD be used in conjunction with a lending operation, where the minted token is borrowed. The lending operation SHOULD come with an interest rate, and some of the interest proceeds SHOULD be distributed to token holders that are not in segregated accounts.
This standard does not dictate how distribution should occur.

Rationale

The proposed standard aims to standardise how multiple lending markets and risk providers can interact with a fractional reserve token. The actual lending operation should be done carefully by trusted entities, and it is the token owner’s responsibility to make sure the parties who have fractional reserve minting credentials are reliable.

At the core of the coordination lies the need to understand how much additional supply is available to borrow, and at what interest rate. The additional borrowable supply is deduced from the required reserve ratio and the total, borrowed, and segregated supply.
The interest rate SHOULD be monotonically increasing with the current reserve ratio.

The standard does not dictate how the accrued interest is distributed. One possible distribution is to make the token a rebasing token. Alternatives are to introduce staking, or simply to airdrop the proceeds.

While a fractional reserve is most useful when it is backed by a known asset, e.g., frDAI and DAI, it can also be used in isolation. In such a case, a token will have a fixed initial supply; however, additional supply can be borrowed. In such cases the supply temporarily increases, but the net holdings (totalSupply() - totalBorrowedSupply()) remain unchanged.

Backwards Compatibility

Fractional reserve tokens should be backwards compatible with ERC-20.

Reference Implementation

// The code below is provided only for illustration, DO NOT use it in production
contract FractionalReserveToken is ERC20, Ownable {

    event MintFractionalReserve(address indexed minter, address to, uint256 amount);
    event BurnFractionalReserve(address indexed burner, address from, uint256 amount);
    event SetSegregatedAccount(address account, bool segregated);

    /// @notice token supply in these accounts is not counted towards the reserve, and
    /// therefore, additional token supply cannot be minted against them.
    mapping(address => bool) public segregatedAccount;

    /// @notice ratio between the token that is available as cash (immediate redemption)
    /// and the total supply of the token.
    uint256 public requiredReserveRatio;

    uint256 public totalBorrowedSupply;

    constructor(
        string memory _name,
        string memory _symbol
    ) ERC20(_name, _symbol) Ownable(msg.sender) {}

    function fractionalReserveMint(address to, uint256 amount) external onlyOwner {
        _mint(to, amount);
        totalBorrowedSupply += amount;
        emit MintFractionalReserve(msg.sender, to, amount);

        uint256 reserveRatio = (totalSupply() - totalBorrowedSupply - segregatedSupply) * 1e18 / totalSupply();
        require(reserveRatio >= requiredReserveRatio, "reserveRatio");
    }
    function fractionalReserveBurn(address from, uint256 amount) external onlyOwner {
        _burn(from, amount);
        totalBorrowedSupply -= amount;
        emit BurnFractionalReserve(msg.sender, from, amount);
    }

    // ------------------------------------------------------------------------------
    // Code below is not part of the proposed standard
    // ------------------------------------------------------------------------------
    uint256 internal segregatedSupply; // supply of segregated tokens

    function _update(address from, address to, uint256 value) internal override {
        // keep the reserve up to date on transfers
        if (!segregatedAccount[from] && segregatedAccount[to]) {
            segregatedSupply += value;
        }
        if (segregatedAccount[from] && !segregatedAccount[to]) {
            segregatedSupply -= value;
        }
        ERC20._update(from, to, value);
    }

    function mint(address account, uint256 value) external onlyOwner {
        _mint(account, value);
    }

    function burn(address account, uint256 value) external onlyOwner {
        _burn(account, value);
    }

    function setSegregatedAccount(address account, bool segregated) external onlyOwner {
        if (segregated) {
            require(!segregatedAccount[account], "segregated");
            segregatedSupply += balanceOf(account);
        } else {
            require(segregatedAccount[account], "!segregated");
            segregatedSupply -= balanceOf(account);
        }
        segregatedAccount[account] = segregated;
        emit SetSegregatedAccount(account, segregated);
    }

    function setRequiredReserveRatio(uint256 value) external onlyOwner {
        requiredReserveRatio = value;
    }
}

Security Considerations

Fractional reserve banking comes with many economic risks. This standard does not aim to provide guidelines on how to properly mitigate them.

Copyright

Copyright and related rights waived via CC0.

3 posts - 2 participants

Read full topic

Protocol Calls & happenings All Core Devs - Consensus (ACDC) #142, September 19 2024

Published: Sep 14, 2024

View in forum →Remove

Agenda

Consensus-layer Call 142 · Issue #1154 · ethereum/pm · GitHub moderated by @ralexstokes

Summary

Summary by @ralexstokes [copied from Eth R&D Discord]

  • Began the call with Pectra
    • Touched on status of pectra-devnet-3 ; it has been launched and generally going well
      • deployed a “bad block” fuzzer which surfaced some bugs; relevant teams are debugging
    • Turned to discuss how to handle scoping of the current Pectra fork into a more manageable size
      • Lots of convo, with many different perspectives across core devs here; catch the recording for the full discussion
      • Landed on two key decisions to make
        • Do we focus on the EIPs currently deployed to pectra-devnet-3 as a target for Pectra (next hard fork)?
        • Assuming we split off pectra-devnet-3 from the rest of development for the Pectra hard fork, do we want to determine the scope of the hard fork after Pectra?
      • Again, many inputs and ideas, but we agreed to define the Pectra hard fork as the EIPs currently deployed to pectra-devnet-3.
        • There’s still some polish for this EIP set remaining, but the timeline from devnet-3 to mainnet is on the order of a few months as we move to a ‘spec freeze’ for Pectra and keep iterating devnets along the way to testnet and ultimately mainnet.
      • There was a lot less consensus around determining the scope of the fork after Pectra. Obvious candidates are EOF and PeerDAS (having already been scheduled for Pectra so far), but there is some uncertainty around other features like Verkle, or additional EIPs with benefits like EIP-7688.
      • We agreed to move ahead with pushing pectra-devnet-3 to production, and tabling the conversation around the scope of the next fork until a later ACD call.
  • Next, we looked at a number of open PRs concerning the “polish” of the devnet-3 feature set so that we can get to a spec freeze ASAP.
  • After the Pectra discussion, we moved to look at the status of PeerDAS and a consideration of the blob parameters
    • A quick check-in on PeerDAS devnets: teams are still working on implementing the latest specs and debugging local issues
    • Then, had a presentation to support raising the target and/or max blob count in Pectra (with consideration for these changes going into the fork after Pectra as well)
    • This proposal touches on the fork scheduling conversation above, and as we can expect there are lots of views/inputs to this decision
    • An interesting point was raised around the deployment of IDONTWANT in the gossip layer, as this feature should save some bandwidth for a node and give us more room to consider raising the blob parameters, even ahead of PeerDAS
      • Implementation is under way, but clients have different amounts of progress here
  • Consensus on the call was that raising the blob target in Pectra could be reasonable, especially pending further mainnet analysis that supports the headroom for an increase
    • Otherwise, it seemed too risky to raise the maximum blob count without PeerDAS

Recording

Additional Info

1 post - 1 participant

Read full topic

Protocol Calls & happenings All Core Devs - Execution (ACDE) #197, September 26 2024

Published: Sep 14, 2024

View in forum →Remove

Agenda

Execution Layer Meeting 197 · Issue #1153 · ethereum/pm · GitHub moderated by @timbeiko

Stream

2 posts - 2 participants

Read full topic

Protocol Calls & happenings Verkle implementers call #24, September 9 2024
Protocol Calls & happenings ePBS breakout #9, September 13 2024

Published: Sep 12, 2024

View in forum →Remove

Agenda

EIP-7732 breakout room #9 · Issue #1150 · ethereum/pm · GitHub

Notes

Notes by @terence [copied from X]

  • Julian presented an argument that slot auction gives an out-of-protocol trusted advantage by running an MEV-Boost auction at the execution stage
  • Mark presented new engine API methods for retrieving payloads
  • We talked about whether withdrawals could be moved to the execution payload and what the blockers for this are

Recap by @potuz [copied from Eth R&D Discord]

  • @JulianMa presented an analysis pretty much showing that on slot auctions there’s a “trusted advantage” that does not seem to be possible to avoid. This seems like a serious problem on slot auctions and it’d be nice to weigh against the known problems of block auctions. We decided to keep building on block auctions for the time being cause it’s an easy switch to slot auctions if they are decided later.
  • @ethDreamer proposed a new method in the Engine API to request payloads by range. We agreed that it was sensible to have this method. I raised that we shouldn’t even need to have the payload to sync. Would like to talk to @m.kalinin about this. Perhaps we can schedule an informal call, Misha? I think once the requests are sent outside of the payload and the block is hashed as SSZ, we should not need to get the payloads ever on the CL, and would only need to have the payload HTR.
  • @terence proposed that we move the processing of withdrawals to the execution phase. Mark seemed to strongly prefer it; I don’t oppose it and even like it, but acknowledge that it’s different on the beacon chain and would want someone else besides me vouching for it. We decided to request signals here from teams on whether they would rather move this to the EL payload processing.

Recording

ePBS (EIP-7732) breakout room #9

Additional Info

1 post - 1 participant

Read full topic

ERCs ERC7765: Privileged Non-Fungible Tokens Tied To RWA

Published: Sep 10, 2024

View in forum →Remove

Here is my PR:

This EIP defines an interface to carry a real world asset with some privileges that can be exercised by the holder of the corresponding NFT. The EIP standardizes the interface for non-fungible tokens representing real world assets with privileges to be exercised, such as products sold onchain which can be redeemed in the real world.
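
For illustration only, an interface in the spirit of this description might look like the following (the function and event names here are assumptions, not necessarily the ERC’s final API):

pragma solidity ^0.8.24;

// Illustrative sketch only: an NFT whose holder can exercise privileges tied to
// a real-world asset (e.g. redeeming a product sold onchain).
interface IPrivilegedRWANFT /* is ERC-721 */ {
    event PrivilegeExercised(uint256 indexed tokenId, uint256 indexed privilegeId, address indexed operator);

    // Exercises a privilege attached to `tokenId`; callable by the token holder.
    function exercisePrivilege(uint256 tokenId, uint256 privilegeId, bytes calldata data) external;

    // Returns whether the given privilege has already been exercised for the token.
    function isExercised(uint256 tokenId, uint256 privilegeId) external view returns (bool);
}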

4 posts - 4 participants

Read full topic

Protocol Calls & happenings Pectra testing call #4, 9 September 2024

Published: Sep 10, 2024

View in forum →Remove

Summary

Update by @parithosh (from Eth R&D Discord)

pectra-devnet-3:

Huge thanks to the testing team for providing reusable execution spec tests on devnets!

Status update posted on interop: ⁠interop-:night_with_stars:⁠

Devnet launch slated for tomorrow morning, clients passing tests will be added at genesis (clients with non-consensus issues would be included)

Discussion about why local testing yielded passing tests but devnet revealed issues (Mainly due to the RPC/EngineAPI inclusion route used by devnets)

Nethermind spoke about the issues they faced with the latest tests

EOF:

3 clients are mostly ready for devnet-4, in a good place for a devnet in ~weeks timeline after devnet-3

Discussions have already been had about changes for devnet-5

peerdas-devnets:

Lighthouse and Prysm seem to be stable on a local devnet. Issues with Teku were discussed; some issues only show up after many epochs and debugging is still ongoing

fuzzing:

no update given this week

1 post - 1 participant

Read full topic

Wallets Why does Sign in With Ethereum have such bad UX?

Published: Sep 09, 2024

View in forum →Remove

On the weekend I hacked at ETHWarsaw and we used “Sign in with Farcaster” to build an app. Today, knowing the difficulties of signing in with Ethereum, I’m wondering why Ethereum wallets are so much worse: what’s stopping us from reaching parity?

Just for those who are unfamiliar, here’s an example

My naive understanding is that the Sign in with Farcaster flow just authenticates to the app that it is really me signing in. But that’s the same for SIWE, right? Does SIWE give any extended information about the user that Farcaster doesn’t, which would justify the SIWE dialogues being THAT MUCH worse than Sign in with Farcaster?

For comparison, here is the completely awful flow using Ethereum. This is a huge point of churn. It makes me furious that this hasn’t notably improved over the last year despite many dapp builders very vocally complaining about this:

And this is only slightly better when using a different wallet; I consider Rainbow to be one of the best ones, yet it is still really bad. Notice how, for example, I had to look for the SIWE request in the notifications tab as it didn’t open by default. Other wallets have this and other issues too. It is not isolated to Rainbow. But the real question is why Connect Wallet and SIWE are even split into two actions.

A few things that I don’t understand:

  • Why can’t we combine “Connect Wallet” and “SIWE” in one dialogue?
  • Why can’t we automate the SIWE flow?
  • Why can’t we parse the SIWE flow and make it look pretty instead of signing an unformatted string that looks sketchy as hell to a consumer?
  • Are there security concerns for auto-signing SIWE? And if so, why don’t these security concerns apply to Warpcast? If there are security concerns, can we maybe have a two-pronged approach of “Sign in with Ethereum” and “Step 2: allow the dapp to spend money after the user has signed in”?
  • Why is Connect Wallet always so much worse than whatever Warpcast is doing here? It feels way more laggy somehow, why?

I’m a dapp builder so I don’t really have any meaningful power to change these things through building on my platform, so this is mostly a call to action for wallet providers etc. What’s stopping us here from getting to parity? If we don’t get to parity then Ethereum wallets will simply not be the developer’s choice when it comes to consumer dapps. I don’t think we want all consumer dapps to be logged in through Warpcast, a closed-source app, do we?
Sorry if this is rage bait for you but I’m enraged when I see this. I have complained about this for months and nothing ever changes. We really need to get our shit together here, otherwise we won’t be able to compete. Thanks for reading

4 posts - 4 participants

Read full topic

RIPs RIP-7767: Gas to Ether Precompile

Published: Sep 09, 2024

View in forum →Remove

Description

A precompile that allows the caller to burn a specified amount of gas and returns a portion of the value of the gas burned to the caller as Ether.

Motivation

Some EVM chains include a form of contract secured revenue (CSR), where a portion of the gas used by the contract is returned to the owner of the contract as Ether via some mechanism. CSR, in its straightforward form, creates a perverse incentive for developers to write bloated code, or run loops of wasteful compute / state writes in order to burn gas.

This proposal describes a way for CSR to be implemented in a computationally efficient and scalable way. Additionally, it provides a standardized interface that improves cross-chain compatibility of smart contract code.

This proposal is also designed to be flexible enough to be implemented either as a predeploy, or a precompile.

Benefits

  • Enable baked-in taxes for token transfers (e.g. creator royalties).
  • Increase sequencer fees, so that L2s can feel more generous in accruing value back to L1.
  • Promote development on L2s, which are perfectly suited for novel incentive mechanisms at the core level.
  • Tax high frequency trading on L2s (e.g. probabilistic MEVs) and route value back to authentic users.

Details

For nomenclature, we shall refer to this precompile as the Gasback precompile.

The behavior of the precompile can be described as follows; a rough sketch follows the list.

  • Caller calls Gasback precompile with abi.encode(gasToBurn).
  • Upon successful execution:
    • The precompile MUST consume at least gasToBurn amount of gas.
    • The precompile MUST force send the caller Ether up to basefee * gasToBurn. The force sending MUST NOT revert. This is to accommodate contracts that cannot implement a fallback function.
    • The precompile MUST return abi.encode(amountToGive), where amountToGive is the amount of Ether force sent to the caller.
  • Else, the precompile MUST return empty returndata.
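
A minimal Solidity sketch consistent with this behavior (hypothetical names and return ratio; not the canonical implementation, which would live at the node level):

pragma solidity ^0.8.24;

// Helper that force-sends Ether so the transfer cannot revert, even if the
// recipient has no payable fallback function.
contract ForceSend {
    constructor(address payable recipient) payable {
        selfdestruct(recipient);
    }
}

// Hypothetical, prefunded predeploy-style sketch of the Gasback behavior above.
contract GasbackSketch {
    uint256 internal constant RETURN_BPS = 8_000; // example: return 80% of the burned gas value

    fallback(bytes calldata input) external returns (bytes memory) {
        uint256 gasToBurn = abi.decode(input, (uint256));
        uint256 startGas = gasleft();

        // Burn at least `gasToBurn` gas with throwaway work.
        uint256 acc;
        while (startGas - gasleft() < gasToBurn) {
            acc = uint256(keccak256(abi.encode(acc)));
        }

        uint256 amountToGive = (block.basefee * gasToBurn * RETURN_BPS) / 10_000;
        if (address(this).balance < amountToGive) {
            return ""; // unsuccessful execution: empty returndata
        }
        new ForceSend{value: amountToGive}(payable(msg.sender));
        return abi.encode(amountToGive);
    }
}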

Suggested Implementation (op-geth level)

Security Considerations

As long as the contract always returns no more than the value of the gas burned, the L2 chain implementing it can never become insolvent.

To make DDoS infeasible, the amount of Ether returned by a call can be adjusted to a ratio (e.g. 50-90%). Alternatively, each call to the contract can burn a flat amount of gas that will not be returned as Ether.

To manage the basefee, the precompile can be dynamically configured to switch to a no-op if the basefee gets too high.

5 posts - 2 participants

Read full topic

Uncategorized Security concerns when deploying contracts with the same account on different chains

Published: Sep 08, 2024

View in forum →Remove

When deploying contracts with the plain old CREATE opcode, the contract addresses depend on the deploying account’s current nonce.

When I use the same account / EOA to deploy contracts on different chains, completely unrelated contracts can end up at the same address on those chains.

Regardless of the UX implications (multichain explorers, e.g. Tenderly, would display unrelated transactions across chains), might that lead to security-related issues? Could someone, e.g., find a transaction cast for one contract on L1 and replay it on an L2 (since the contract addresses are the same)?
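
For context, the sketch below (my own illustration, using the Python `rlp` and `eth_utils` packages) shows how CREATE derives a contract address purely from the deployer and its nonce, which is why the same EOA at the same nonce yields the same address on every chain:

```python
# CREATE address derivation: last 20 bytes of keccak256(rlp([deployer, nonce])).
# Because no chain identifier enters the hash, the same (deployer, nonce) pair
# produces the same contract address on L1 and on any L2.
import rlp
from eth_utils import keccak, to_checksum_address

def create_address(deployer: bytes, nonce: int) -> str:
    return to_checksum_address(keccak(rlp.encode([deployer, nonce]))[-20:])

deployer = bytes.fromhex("1111111111111111111111111111111111111111")
print(create_address(deployer, 0))  # identical output regardless of the chain
```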

2 posts - 2 participants

Read full topic

ERCs Decentralized Data Bond Proposal

Published: Sep 08, 2024

View in forum →Remove

Decentralized Data Bonds Standard

Introduction: I believe that data will become the next big asset class in the coming decades. As this transformation occurs, financial instruments will be built around it, such as ETFs, bonds, and much more.

Why Decentralized Data Bonds?

  1. Democratization of Data Ownership: Data is perhaps the most inclusive asset class in the world. Anyone with internet access can generate valuable data. By building a permissionless and decentralized solution, we empower individuals to reclaim ownership of their data.
  2. Collective Bargaining Power: Individual data points often hold limited value, but collectively, they become immensely valuable. Our standard allows users to pool their data, increasing their bargaining power and potential returns.

How It Works

  1. Data Pools / Bonds: Users can contribute their data to specific pools or bonds. Each bond represents a collection of similar or complementary data types.
  2. Verifiable Data Generation: Privately generate proofs for data with protocols such as:
    • Multiparty Computation (MPC) TLS
    • TLS Proxy
    • Trusted Execution Environments (TEE)
  3. Tokenization: Contributors to a bond receive transferable tokens representing their share of the data pool. These tokens serve as both proof of contribution and a means to receive rewards.
  4. Data Utilization: Companies, DAOs, and decentralized protocols can purchase or subscribe to access the aggregated data. Every time the data is accessed or purchased, token holders receive a yield proportional to their contribution.
  5. Governance: Token holders have voting rights on decisions related to their specific data bond, such as pricing, access controls, and data usage policies.

Example: Social Media Data Pool for LLM Training

Imagine a Decentralized Data Bond called “Social Bond” where users can contribute their Reddit and Twitter data:

  1. Users connect their Reddit and Twitter accounts to the platform.
  2. The platform uses MPC-TLS to securely gather verifiable social data.
  3. Contributors receive “Social Bond” tokens proportional to the quality and quantity of their data.
  4. An AI company developing a new language model purchases access to this data pool for training purposes.
  5. The revenue from this purchase is distributed to token holders based on their contribution.
  6. Token holders can vote on data usage policies, such as restricting access to non-commercial research only.

The proposed architecture consists of several key components:

  1. Smart Contracts: Manage token issuance, data access rights, and reward distribution.
  2. Decentralized Storage: Utilize solutions like IPFS or Filecoin to store encrypted data off-chain.
  3. Prover Layer: MPC-TLS, TLS Proxy, TEE

Challenges
Design a secure architecture where none of the parties involved, besides the data owner and the buyer, can access the stored data.

Next Steps

  1. Architect the standard
  2. Validate with security researchers
  3. Build the first open-source proof of concept

I am very happy to share my first ever ERC proposal, and hopefully we can bring a new standard to Ethereum that allows users to leverage and take back ownership of their own data.

4 posts - 3 participants

Read full topic

Protocol Calls & happenings PeerDAS breakout #8, September 17 2024

Published: Sep 07, 2024

View in forum →Remove

Agenda

PeerDAS Breakout Room #8 · Issue #1145 · ethereum/pm · GitHub

Notes & chat log

[Copied from Eth R&D Discord]

  • Teams progressing well and will continue with local testing and iron out forking issues before peerdas-devnet-2 launch :rocket:
  • Distributed Block Building breakout call later today :hammer_and_wrench:
  • The main remaining spec change before we can ship PeerDAS is validator custody

Recording

PeerDAS Breakout Room #8

1 post - 1 participant

Read full topic

Wallets Interest in a Chain-Specific Transaction Hashes EIP?

Published: Sep 06, 2024

View in forum →Remove

[Continuing the discussion from Chain-specific addresses]

@vid mentioned in the post above that an addressing standard for transaction hashes which includes the chain it’s on would be helpful. I answered above that there is a draft “CAIP” (chain-agnostic IP, using the CASA chain-specification URI format) which might be useful to revive for cross-VM use-cases, or just useful to copy-paste into a simpler EVM-only EIP achieving the same ends. Comment if you’ve been looking for such a thing or would be interested in prototyping/coauthoring an EIP and/or a CAIP on the subject!

1 post - 1 participant

Read full topic

ERCs ERC-7764: Buyer-Seller Negotiable Pricing

Published: Sep 06, 2024

View in forum →Remove

PR:

This proposal introduces a new smart contract mechanism that allows buyers and sellers to freely negotiate and determine transaction prices on the Ethereum network. It adds a new trading mode in which prices for goods can be negotiated rather than fixed. The seller sets an initial price, the buyer can propose a new price, and once both parties reach an agreement the transaction completes.
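
As a purely hypothetical illustration of that flow (the names and semantics below are mine, not the ERC’s interface), the negotiation can be thought of as a small state machine: the seller’s initial price is recorded, either party may counter-propose, and the trade settles at the last proposed price once the counterparty accepts.

```python
# Hypothetical sketch of the negotiation flow (not the ERC-7764 interface).
class Negotiation:
    def __init__(self, seller: str, buyer: str, initial_price: int):
        self.seller, self.buyer = seller, buyer
        self.price, self.proposer = initial_price, seller   # seller sets the initial price
        self.settled = False

    def propose(self, party: str, new_price: int) -> None:
        assert not self.settled and party in (self.seller, self.buyer)
        self.price, self.proposer = new_price, party        # counter-offer replaces the price

    def accept(self, party: str) -> int:
        # only the counterparty of the latest proposal can accept it
        assert not self.settled and party != self.proposer and party in (self.seller, self.buyer)
        self.settled = True
        return self.price                                   # agreed transaction price

deal = Negotiation("alice", "bob", initial_price=100)
deal.propose("bob", 80)
print(deal.accept("alice"))  # 80
```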

1 post - 1 participant

Read full topic

Ethereum Research
Sharding Steelmanning a blob throughput increase for Pectra

Published: Sep 26, 2024

View in forum →Remove

Steelmanning a blob throughput increase for Pectra

With the discussions about the Pectra hardfork scope continuing, I want to provide some empirical input on the current state of the network.
I’ll try to do so by answering some questions commonly raised in discussions of the proposed blob target/limit increase for Pectra.

The arguments for shipping EIP-7691 in Pectra are:

  • Continue scaling DA - with EIP-4844, we have only set the foundation.
    • Provide existing L2s and their apps enough blob space for further scaling.
    • Avoid creating a precedent of “blob fees can explode and are unpredictable” (h/t Ansgar); this harms future adoption if rollups have to account for rare fee spikes over extended periods.
  • The number of reorgs has been trending down since Dencun.
  • The impact of blobs on reorgs has decreased as well.

How did the number of reorgs evolve over time?

reorged = “nodes saw a block from the proposer of the respective slot, but it did not end up in the canonical chain”
missed = “no sign that the proposer was online”

  • Within the last 365 days, 5,900 blocks were reorged.
  • This equates to 0.225% of the blocks in that time interval.
  • At the same time, 14,426 slots were missed, representing 0.549%.
  • On average, we observe 492 reorgs and 1,202 missed slots per month.

The number of reorgs has been decreasing, which is a positive development, though not surprising, as core devs continuously improve client software. Interestingly, contrary to expectations that the most recent hard fork (= Dencun) would lead to a significant rise in reorgs, we actually observed the opposite trend.

Since the Dencun upgrade, the number of reorgs has halved.

It’s challenging to identify the exact reason for the change in trend, but it may be attributed to the ongoing improvements made by core devs to their client software.

What’s the impact of blobs on reorgs?

Initial analysis conducted a few months after the Dencun hardfork showed that blocks with 6 blobs were reorged 3 times more frequently than 0-blob blocks. In general, we observed that the reorg rate increased steadily with the number of blobs.

Updating this analysis presents a different picture today. Although 6-blob blocks are still reorged more frequently than 0-blob blocks, the rates have decreased significantly, and there is no longer a substantial difference between blocks carrying one blob and those carrying six.
We still observe a small difference in the reorg rate between 0-blob blocks and x-blob blocks (where x > 0).

[Figure: reorg rate by blob count (animation)]

How well are blobs distributed over blocks?

Plotting the distribution, we can see that most blocks contain either 0 or 6 blobs, with blocks containing 1 to 5 blobs representing the minority. However, the situation has improved since the last study, with fewer slots at the extremes of 0 blobs and 6 blobs.

[Figure: daily distribution of blobs per block]

Related work

Title Url
On Attestations, Block Propagation, and Timing Games ethresearch
Blobs, Reorgs, and the Role of MEV-Boost ethresearch
Big blocks, blobs, and reorgs ethresearch
On Block Sizes, Gas Limits and Scalability ethresearch
The Second-Slot Itch - Statistical Analysis of Reorgs ethresearch

2 posts - 2 participants

Read full topic

Uncategorized Proposal: Delay stateRoot Reference to Increase Throughput and Reduce Latency

Published: Sep 25, 2024

View in forum →Remove

Proposal: Delay stateRoot Reference to Increase Throughput and Reduce Latency

By: Charlie Noyes, Max Resnick

Introduction

Right now, each block header includes a stateRoot that represents the state after executing all transactions within that block. This design requires block builders and intermediaries (like MEV-Boost relays) to compute the stateRoot, which is computationally intensive and adds significant latency during block production.

This proposal suggests modifying the Ethereum block structure so that the stateRoot in block n references the state at the beginning of the block (i.e., the state after executing the transactions of block n - 1) rather than the state at the end of the block.

By delaying the stateRoot reference by one block, we aim to remove the stateRoot calculation from the critical path of block verification at the chain tip, thereby reducing L1 latency and freeing up capacity to increase L1 throughput.

Technical Specification (High-Level)

When validating block n, nodes ensure that the stateRoot matches the state resulting from executing block n-1 (i.e., the pre-state root of block n).

To be clear, there is no change to execution ordering. Transactions in block n are still applied to the state resulting from block n-1.
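
A pseudocode-style sketch of the rule, assuming simple header and execution helpers (this is my own illustration, not client code):

```python
# Block n's header commits to the pre-state of block n, i.e. the post-state of
# block n-1. Execution order is unchanged; only where the resulting root is
# checked moves back by one block, off the critical path at the tip.
def validate_block(block_n, parent_post_state_root: bytes, execute) -> bytes:
    # 1. Header check at the tip: no execution is needed to verify this field.
    assert block_n.header.state_root == parent_post_state_root, "pre-state root mismatch"
    # 2. Execute block n's transactions on top of the parent's post-state, as today.
    post_state_root = execute(block_n.transactions, parent_post_state_root)
    # 3. This root is only compared against block n+1's header, so it can be
    #    computed in parallel with the next slot.
    return post_state_root
```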

Motivation

stateRoot calculation and verification are unnecessary work on the critical path of block production. A builder cannot propose a block via MEV-Boost without first calculating the stateRoot, and the attestation committee cannot verify a block without computing the stateRoot to compare with the proposed stateRoot. stateRoot calculation itself accounts for approximately half of the time spent by all consensus participants working at the tip. Moreover, whatever latency the stateRoot calculation imposes is paid twice on the critical path: once at the block building stage and then again during verification.

    • When block builders submit blocks to relays, they are required to provide the calculated stateRoot. A survey of three of the four largest builders found that each spends on average only 40%-50% of its time actually building each block, with the rest spent on stateRoot calculation.
    • When MEV-Boost relays receive blocks from builders, they are supposed to verify their correctness. In Flashbots’ relay, approximately half of the ~100ms (p90) verification time is likewise spent on the stateRoot calculation.
    • When validators receive a new block, or when non-MEV-Boost validators (“home builders”) produce a block, they are also required to re-verify its execution and its stateRoot. Commodity-hardware Reth nodes spend approximately 70% of their time in live-sync on the stateRoot (the remainder on execution).
~70% of Reth's block processing time is spent on `stateRoot` calculation.

These participants - builders, relays, and validators - are highly latency sensitive. They operate under tight timing constraints around slot boundaries (particularly with the increasing prevalence of timing games).

The latency introduced by stateRoot verification at the tip is unnecessary, and removing it would improve the health of the block production pipeline and network stability.

Benefits of Delaying the stateRoot

  • Higher L1 throughput, because the time currently spent verifying the stateRoot can be re-allocated to execution. stateRoot verification would be pipelined to occur in parallel with the next slot (i.e. during time that nodes are currently idle). Increases in bandwidth requirements and state growth would also need to be acceptable before activating a throughput increase.
  • Time saved by pipelining the stateRoot could also be allocated towards lowering slot times - improving L1 Ethereum UX, and likely resulting in tighter spreads for users of decentralized exchanges.
  • Builders and relays avoid an unnecessary latency speedbump. Both are highly latency-sensitive actors. We want to minimize the sophistication it takes to be a relay or validator. Removing stateRoot latency from the critical path of block verification means they will no longer have to worry about optimizing it, improving the health and efficiency of the block production pipeline.

Potential Downsides and Concerns

Impacted Applications

  1. Light Clients and SPV Clients
    • Impact: These clients rely on the latest stateRoot to verify transactions and account balances without downloading the entire blockchain. A one-block delay introduces a latency in accessing up-to-date state information. Cross-chain communication protocols (like bridges that utilize light clients) would also experience this delay.
    • Consideration: We do not see an obvious issue with light clients being delayed by a single block.
  2. Stateless Client Protocols
    • Impact: Stateless clients rely on the latest stateRoot to verify transaction witnesses. A one-block delay could affect immediate transaction validation.
    • Consideration: If these clients can tolerate a one-block delay, the impact may be minimal. This aligns with ongoing discussions in the statelessness roadmap.

Rationale

Why This Approach?

  • Efficiency: Removing stateRoot computation from the critical path significantly reduces block verification time.
  • Simplicity: The change is straightforward in terms of protocol modification, affecting only the placement of the stateRoot reference. This is backwards-compatible with the existing block production pipeline (i.e., native building and MEV-Boost). Other proposals which include execution pipelining, like ePBS, are significantly broader in scope and complexity. Delaying the stateRoot is a simpler change we can make with immediate benefit and little risk.
  • Minimal Disruption: While some applications may be affected, we think most (all?) can tolerate a one-block delay without significant issues. We should collect feedback from application developers to validate this.

Backwards Compatibility and Transition

  • Hard Fork Requirement: This change is not backwards compatible and would require a network hard fork.
  • Application Adaptations: Affected applications (light clients, Layer 2 solutions, stateless clients) may need to adjust their protocols or implementations.

Request for Feedback

We invite the community to provide feedback on this proposal, particularly:

  • Feasibility: Are there technical challenges that might impede the implementation of this change?
  • Upside: How much throughput will we be able to eke out from pipelining stateRoot calculation and reallocating the time to execution?
  • Affected Applications: We do not see an obvious class of widely used applications that would be affected. We hope any developers whose applications do depend on same-block stateRoot will let us know.

Next Steps

We plan to formalize this proposal into an EIP for potential inclusion in Pectra B.

Acknowledgements

Thanks to Dan Robinson, Frankie, Robert Miller, and Roman Krasiuk for feedback and input on this proposal.

14 posts - 11 participants

Read full topic

Layer 2 Understanding Minimum Blob Base Fees

Published: Sep 25, 2024

View in forum →Remove

Understanding Minimum Blob Base Fees


by Data Always - Flashbots Research

Special thanks to Quintus, Sarah, Christoph, and Potuz for review and discussions.

tl;dr

The claim that blobs pay zero transaction fees is a myth. Depending on the type of data being posted and the state of gas prices, it costs submitters between $0.10 and $3.00 per blob in mainnet execution fees. EIP-7762, the implementation of a ~$0.01 minimum blob base fee, should have a minimal impact on the market, yet vastly reduce the time the blob market spends in PGAs during surges of demand while blob usage remains below the blob target.


Proposals to set a blobspace reserve price are controversial in the community, but this may stem from a misunderstanding of how blobs find their way on chain. A common impression is that blobs are currently contributing zero fees to the protocol, but this is misguided and only true when we restrict our analysis to blobspace fees.

Although the blobspace fee market has been slow to reach the targeted level of demand, thus suffering from the cold-start problem initially predicted by Davide Crapis a year before Deneb, blob-carrying transactions still pay mainnet gas fees, both for execution and priority. The current concern, raised by Max Resnick, is that the hard limit of six blobs per block and the slow response time of the blobspace fee market create the potential for long-lasting priority gas auctions (PGAs) when the network sees periods of high demand. During these PGAs it becomes much harder for L2s to price their transactions, and when coupled with the current strict blob mempool rules, blob inclusion becomes less predictable.

EIP-7762 aims to minimize future dislocations between the price of blobspace and blob demand until the adoption of L2s pushes us past the cold-start problem. The current configuration, with the minimum blobspace base fee set to 1 wei, requires at least 30 minutes of fully saturated blocks for blobspace fees to reach $0.01 per blob and to begin to influence blob pricing dynamics. Under the current system, when surges of demand arise the network sees a reversion to unpredictable PGAs as L2s fight for timely inclusion.

As an example, on June 20th the network saw its second blob inversion event, stemming from the LayerZero airdrop. It took six hours of excess demand for blobs until the network reached equilibrium.

Source: https://dune.com/queries/4050212/6819676


The State of Blob Transaction Fees

Six months post-Deneb blobspace usage remains below the target. As a result, the blobspace base fee has remained low and the majority of blobs have incurred negligible blobspace gas fees. To date, there have only been three weeks where the average cost of blobspace rose above $0.01 per blob: the weeks of March 25 and April 1 during the blobscription craze and the week of June 17th during the LayerZero airdrop.

Source: https://dune.com/queries/4050128/6819454

In contrast to fees in blobspace, blob-carrying transactions (also known as Type-3) are still required to pay gas fees for execution on mainnet. Despite gas prices falling to a multi-year low, the average blob pays between $0.50 and $3.00 in execution fees. When compared to the price of the calldata historically posted by L2s these costs are insignificant and blobs are essentially fully subsidized by the network, yet this small cost is important when framing a minimum base fee for blobs.

Source: https://dune.com/queries/4050088/6819431

If we go a step further and segment the execution cost of blob-carrying transactions by the number of blobs they carry, we see that the market is highly heterogeneous. Transactions that carry only one blob pay the highest fees per blob, while transactions that carry five or six blobs pay little-to-no fees per blob. In fact, these five- and six-blob transactions pay significantly lower total fees than single-blob transactions.

Source: https://dune.com/queries/4053870/6825747

A large factor in this discrepancy is the variance in blob submission strategies across entities: Base, OP Mainnet, and Blast, as well as many smaller L2s, are extremely financially efficient because they post their data to an EOA, which requires only 21,000 mainnet gas for execution regardless of blob count, but these transactions are not well suited for fraud proofs. These chains account for the vast majority of transactions that carry five or more blobs, pushing down the perceived price of submitting many blobs in one transaction. By contrast, L2s that post more complex data to better enable fraud proofs (for instance Arbitrum, StarkNet, Scroll, zkSync Era, Taiko, and Linea) use significantly more mainnet gas and tend to submit fewer blobs (often only a single blob) per transaction.
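
A back-of-the-envelope illustration of why these EOA-style submissions look so cheap per blob; the 5 gwei base fee and $2,500 ETH price below are my assumptions, not figures from this post:

```python
# Flat 21,000 gas per transaction regardless of blob count, so the per-blob
# execution cost shrinks as more blobs are packed into one transaction.
GAS_EOA_TRANSFER = 21_000
BASEFEE_GWEI = 5            # assumed L1 base fee
ETH_USD = 2_500             # assumed ETH price

tx_cost_usd = GAS_EOA_TRANSFER * BASEFEE_GWEI * 1e-9 * ETH_USD   # ~$0.26 per transaction
for blobs in (1, 5, 6):
    print(f"{blobs} blob(s): ${tx_cost_usd / blobs:.3f} per blob in execution fees")
```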


Following from the statistics above, if we combine blobspace and execution fees on a per-transaction basis, we see that outside of the brief surges in blob demand (which would not have been affected by adding a minimum base fee), the current distribution of fees paid is almost entirely concentrated in execution fees. This demonstrates that the blobspace fee market is currently non-functional and that there is room to raise the minimum cost of blob gas without meaningfully raising the total cost paid by blobs.

Source: https://dune.com/queries/4034097/6792385

By contrast, if we focus on the periods when the blobspace fee market entered price discovery we see that the majority of fee density rapidly transitions into blobspace fees. When the market works, it appears to work well. As such, the most valuable issue to address is the repeated cold-start problem—where the market currently finds itself.

Source: https://dune.com/queries/4060561/6837143

When the blobspace fee market is in an execution-fee-dominant environment, it benefits blob submitters who post less execution data—mostly OP Stack chains. It also complicates the block building process: historically, many algorithms decided blob inclusion by priority fee per gas, but since the mainnet gas usage of these transactions varied greatly, it forced the L2s that submit higher-quality proofs to pay higher rates for the entirety of much larger transactions, further amplifying the advantage of submitting less execution data. By moving closer to a blobspace-fee-dominant environment we decrease this advantage.


The Impact of a Minimum Fee

At the current value of ether, Max’s original proposal opted to price the minimum fee at $0.05 per blob. Supplementing the cost of execution with this new minimum fee, the proposal would have increased the average cost per blob by 2%.

The revised proposal has decreased the minimum blob base fee to 2^25 wei, about 1/5th the originally proposed value, or $0.01 per blob under the same assumptions. Since the beginning of July, this implies an average increase in cost of 0.7% for blobs, but due to the dispersion of financial efficiencies amongst blob submitters the percentage changes are not uniform across entities.

| Blob Submitter | Dataset Size | Current Cost per Blob | Proposed Cost | Historic Impact |
| --- | --- | --- | --- | --- |
| Base | 385,077 | $0.0687 | $0.0797 | 16.0% |
| Taiko | 271,786 | $3.0152 | $3.0262 | 0.4% |
| Arbitrum | 178,127 | $1.0099 | $1.0209 | 1.1% |
| OP Mainnet | 106,979 | $0.0830 | $0.0940 | 13.3% |
| Blast | 78,430 | $0.1655 | $0.1765 | 6.6% |
| Scroll | 49,632 | $2.1304 | $2.1414 | 0.5% |
| Linea | 37,856 | $0.5817 | $0.5927 | 1.9% |
| zkSync Era | 11,837 | $2.6971 | $2.7081 | 0.4% |
| Others | 233,494 | $0.6273 | $0.6384 | 1.8% |
| Total | 1,354,218 | $1.5734 | $1.5844 | 0.7% |

Table: Blob submission statistics by entity from July 1, 2024 to September 17, 2024, assuming an ETH/USD rate of $2,500. Source: https://dune.com/queries/4089576
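
For reference, the ~$0.01-per-blob figure for a 2^25 wei minimum follows directly from the blob size of 2^17 blob gas (EIP-4844) and the table’s $2,500 ETH/USD assumption:

```python
# Sanity check of the ~$0.01-per-blob figure under the stated assumptions.
MIN_BLOB_BASE_FEE_WEI = 2**25   # proposed minimum blob base fee
GAS_PER_BLOB = 2**17            # blob gas per blob (EIP-4844)
ETH_USD = 2_500                 # assumption used in the table above

usd_per_blob = MIN_BLOB_BASE_FEE_WEI * GAS_PER_BLOB / 10**18 * ETH_USD
print(f"${usd_per_blob:.4f} per blob")  # ≈ $0.011
```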

Modifying the earlier per-transaction breakdown to account for a 2^25 wei minimum blobspace base fee, and only considering transactions where the original blobspace base fee was less than the proposed new minimum, we see that although the profile begins to meaningfully shift, the blob base fee remains a minority component for all affected blob-carrying transactions. The highly efficient transactions submitted by Base and OP Mainnet that carry five blobs would see an increase of 10% to 30% depending on the state of L1 gas prices, which should be easily absorbed. Less efficient transactions, particularly those carrying one to three blobs, would see total fee increases of less than 10%.

There have been no blob-carrying transactions to date where a minimum blob base fee of 2^25 wei would have accounted for the majority of the cost paid by the transaction.

Source: https://dune.com/queries/4034254/6792625


Blobspace Response Time

Under EIP-4844, the maximum interblock update to the blobspace base fee is 12.5%. Starting from a price of 1 wei, it takes 148 blocks at max capacity, over 29 minutes with 12-second block times, for the base fee to rise above 2^25 wei. This updating period has been framed as the response time of the protocol, but it still only represents a minimum amount of time. Due to market inefficiencies, blocks do not end up full of blobs, vastly increasing the duration of price discovery.
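
The 148-block / ~29-minute figure can be reproduced directly from the 12.5% per-block cap (a quick check under those same assumptions):

```python
import math

MAX_UPDATE = 1.125      # maximum interblock blob base fee increase (+12.5%)
TARGET_WEI = 2**25      # proposed minimum blob base fee
SLOT_SECONDS = 12

# Starting at 1 wei, fee(n) = 1.125**n, so we need the smallest n with 1.125**n > 2**25.
blocks = math.ceil(math.log(TARGET_WEI) / math.log(MAX_UPDATE))
print(blocks, "fully saturated blocks ≈", round(blocks * SLOT_SECONDS / 60, 1), "minutes")
# -> 148 fully saturated blocks ≈ 29.6 minutes
```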

Leading into the LayerZero airdrop on June 20th, the blob base fee was sitting at its minimum value of 1 wei. At its peak, the blob base fee reached 7471 gwei ($3,450 per blob). Although this level could have theoretically been reached in under 51 minutes, the climb took nearly six hours. Under Max’s proposal this maximum could have theoretically been reached in 21 minutes, but it’s clear that these theoretical values are not accurate approximations.

Rather than focusing on time, the goal of the proposal is to price the minimum blob base fee below, but close to, the inflection point where blobspace fees begin to form a measurable share of total fees paid by blobs. On June 20th, despite the surge in blobs beginning just after 11:00 UTC, it wasn’t until 15:17 UTC that blobspace fees began to contribute 0.1% of total fees paid by blobs, and it wasn’t until 15:41 UTC that a base fee of 2^25 wei (0.0335 gwei) was exceeded.

Source: https://dune.com/queries/4050166/6819510

By contrast, had the minimum base fee been 2^25 wei during the LayerZero airdrop, the network may have leapfrogged the cold-start problem and minimized the dislocation between price and demand. We might expect the distribution of blob fees to have behaved as follows, with the blob market still taking an hour or longer to normalize.

Source: https://dune.com/queries/4050746/6820583

In summary, raising the minimum blobspace base fee is not a magic bullet, but it should be viewed as a welcome change to the protocol. The market impact from the leading proposal should be minimal, with only the cheapest and lowest quality blobs seeing a price increase larger than 1%, while still remaining significantly cheaper than their competitors.


Open Questions

  • Will the blobspace fee market reach an equilibrium before the Pectra hardfork(s)?
  • Will we see additional cold-start problems each time the blob limit is increased with future hard forks?
  • Will the blob market move towards private mempools?
  • How have block building algorithms changed to better handle blobs since the LayerZero airdrop?
  • Should revenue from these PGAs be captured by proposers or by the protocol?

3 posts - 3 participants

Read full topic

Applications The Portable Web: Hackable, No Data Lock-in, and Crypto-native Web World

Published: Sep 25, 2024

View in forum →Remove

About this post

In the current web environment, users find it difficult to manage their own data and are often locked into specific services. The Portable Web operates as a parallel web alongside the existing one, aiming to provide users with greater control over their data and the ability to make choices. Applications on the Portable Web are primarily envisioned to serve as public infrastructure.

In this post, I will introduce the core ideas of the Portable Web. Detailed specifications unrelated to its feasibility are not included. This is still a rough draft, but I’m submitting this because if I waited for it to be perfect, I’d never finish.

Summary

  • Hackable: Users Can Customize Web Applications

    • A cluster represents a single application unit.
    • Anyone can create a cluster, and within the cluster, entities other than the creator can create their own clients, provide servers, define API schemas, and write migration scripts.
    • Clients and servers are loosely coupled and connected through an API schema, allowing different developers to create them independently.
    • For example, users can create customized UIs to tailor applications to their specific needs, making them easier to use. Additionally, they can develop their own API schemas and host servers to extend particular features. In this way, the Portable Web allows not only developers but also regular users to actively contribute to the evolution of applications.
  • No Data Lock-In: Users Have Control Over Their Data

    • A client caches the user’s data.
    • By using a server that conforms to the API schema, clients can share cached data across different servers.
    • Cached data on a client can also be migrated using migration scripts.
    • A client caches the data that a user sends and receives, but by transmitting this data to a server chosen by the user, it is managed in a decentralized manner. If needed, users can migrate their data to other servers, ensuring that their data is not locked into any particular entity.
  • Crypto-Native: Crypto-Economics as an Incentive Mechanism

    • In the Portable Web, cluster providers issue tokens and are incentivized by offering clusters that create real demand for those tokens.
    • All payments within a cluster are made using the issued tokens.
    • The presence of new participants contributes to the growth of the cluster, so the original cluster providers do not exclude them.
    • While Web2 operates as a monopoly and winner-takes-all game, the Portable Web promotes a collaborative and inclusive approach.

Background

Web2

The Web3 community has extensively discussed the problems of Web2, so I won’t delve deeply into that here. However, it is important to emphasize that the root of Web2’s problems lies in its architecture—specifically, the way browsers directly access target URLs.

In the Web2 architecture, users submit the content they generate directly to the service, without retaining ownership or local copies. User accounts and content exist within the service, and the service accumulates this data. This accumulation accelerates the creation of new data. It is extremely difficult for users to switch to another service and achieve the same level of utility. To do so, users would need to transfer their content, and other users would also need to migrate en masse.

The existing Web architecture leads to content lock-in and account lock-in, which in turn fosters the concentration of power and a winner-takes-all dynamic.

Web3

While Web3 often claims to challenge existing power structures and maximize user rights, in reality, it is currently just adding a blockchain layer on top of Web2.

Although blockchain is decentralized, the fact that existing Web3 applications are built on top of the current Web architecture undermines that potential.

Portable Web Architecture

To solve the above issues and achieve a decentralized web while maximizing user rights, it is necessary to build a new architecture. The proposed solution is the Portable Web. This new web architecture provides an environment where users have complete control over their data and identity and enables developers and service providers to collaboratively evolve a single application.

Components of the Portable Web

Portable Web Browser

The browser plays several key roles in enabling the Portable Web.

  1. Controlled Server Communication: It limits the servers with which the client can communicate. Clients cannot interact with servers unless explicitly intended by the user.
  2. Currency Restriction: It restricts the currency used for payments in applications. The browser contains a wallet, ensuring that payments can only be made using the currency initially set by the cluster. By default, the browser interacts with an internal exchange (DEX or CEX), so the user is unaware of the currency being used.
  3. Identity Management: It manages the user’s identity as a Self-Sovereign Identity (SSI), preventing servers or clients from locking in the user’s identity.
  4. Built-In Support for Bootstrapping: It comes with built-in client and server information for the index cluster to support bootstrapping. Users can later connect to other clients or servers.
  5. Data Migration and Updates: It executes migration scripts specified by the client to transfer data and manages client updates.

Cluster

A cluster represents a single application, identified by its purpose document.

The components that make up a cluster are:

  • Purpose Document
  • API Schema
  • Migration Script
  • Client
  • Server

Anyone can contribute components other than the purpose document to help develop and evolve the cluster.

Index cluster

The index cluster functions like an App Store within the Portable Web (although anyone can provide it).

Providers of cluster components register their data with the index cluster. The index cluster hosts this registered data, offering users information and software. Additionally, the information includes details such as version and compatibility.

The index cluster knows which components belong to which clusters and understands the relationships between servers and API schemas, clients and API schemas, as well as clients and migration scripts.

Components of a Cluster

Purpose Document

The purpose document serves to enable and promote community-driven development. It defines:

  1. The Ultimate Goal: The overarching objective that the cluster aims to achieve.
  2. Tokens Used: The specific tokens to be utilized within the cluster.

This document is made public upon the cluster’s creation and remains immutable thereafter. While the ultimate goal stated in the purpose document does not have any systemic function, the community uses this document as a basis for improving and adding features.

API Schema

The API schema is a protocol that defines the communication methods between clients and servers. It needs to be in a developer-readable format. By adhering to this schema, clients and servers created by different developers can communicate with each other.

If there is compatibility between API schemas, servers and clients can support multiple Web API schemas.

Migration Script

A migration script assumes that the client has a specific data model. It allows data transfer and synchronization between clients that refer to the same migration script.

Client

A client consists of static content like HTML or JavaScript and can operate independently without relying on constant internet connectivity or specific servers. The client can only communicate with destinations specified by the user. It should not be implemented to depend on a specific server.

The client can cache data that the user sends to or receives from the server. It must specify a particular migration script and cache data in a data structure that allows data migration by executing that script.

Server

A server provides APIs that conform to the API schema. Any functionality that can be defined in the API schema can be provided.

Versioning

In the Portable Web, a cluster is a single application unit, but it can behave differently depending on which components are used. Since anyone can create components such as migration scripts, API schemas, clients, and servers, various versions coexist within a cluster.

Migration Script

Version management of migration scripts is represented using a Directed Acyclic Graph (DAG) and can be updated by anyone. When creating a new migration script, you must specify a backward-compatible migration script. The new migration script must be able to migrate data by transforming the data structure, even when executed from clients that supported the specified backward-compatible migration script. Since anyone can create migration scripts, they may branch but can also merge.

By executing the appropriate number of migration scripts, data can be migrated from older clients to clients that support the latest migration script. For example, a client that supports migration script ‘a’ can migrate data to a client that supports migration script ‘e’ by executing migration scripts three times (b→c→e or b→d→e).
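
As a hypothetical illustration of that example (the data structure and function below are mine, not part of the proposal), finding the scripts to execute is a simple path search in the DAG:

```python
# Each migration script names the earlier script it is backward-compatible with,
# forming a DAG; migrating an old client means executing every script along a
# path, e.g. a -> b -> c -> e (three executions: b, c, e).
from collections import deque

successors = {"a": ["b"], "b": ["c", "d"], "c": ["e"], "d": ["e"], "e": []}

def migration_path(src: str, dst: str):
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path[1:]                 # scripts to execute, in order
        for nxt in successors[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None                             # no compatible path exists

print(migration_path("a", "e"))             # ['b', 'c', 'e'] - three executions
```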

API Schema

A new API schema does not carry information about relationships with other API schemas, such as backward compatibility. Clients and servers can support multiple API schemas, so compatibility management is handled individually by clients and servers. They can support additional API schemas as long as compatibility is not broken.

Client

Client updates are mainly divided into three types. For all types of updates, the user can choose whether to accept the update.

  1. Type 1: Updates that do not change either the migration script or the API schema.
  2. Type 2: Updates that change the API schema.
  3. Type 3: Updates that change the migration script.

Type 1 does not affect other components.

In the case of Type 2, compatibility with the servers that the user usually uses may be lost unless the server also updates to the corresponding API schema.

In the case of Type 3, the client can update to a new migration script that specifies the current migration script as backward-compatible. Data can be migrated from a client supporting the previous migration script, but since it’s only backward-compatible and not fully compatible, data cached in the client that has updated the migration script cannot be migrated back to other clients still using the older migration script. In such cases, as shown in the diagram below, other clients need to either support the updated migration script or create a new one to ensure compatibility.

Server

A server update means changing or adding the corresponding API schema. You can update by registering the updated API schema information in the index cluster.

The Economics of the Portable Web

For the Portable Web to function sustainably and for developers and service providers to actively participate, economic incentives are essential. This section explains the economic system that supports this architecture.

Incentives for Participants

Cluster Creators

Cluster creators launch new applications within the Portable Web ecosystem. They can issue tokens specific to their clusters, which become the foundation of the cluster’s economy. By designing tokens that encourage widespread adoption of their applications, cluster creators can earn revenue through seigniorage (profit from token issuance).

As the cluster gains popularity and more users join, the demand for these tokens increases. This heightened demand raises the value of the tokens, providing economic incentives for cluster creators to continue developing and improving their applications. The success of the cluster is directly linked to the value of the tokens, aligning the interests of cluster creators with those of users and other participants.

Server Providers

Servers within the Portable Web host data and provide APIs that conform to the cluster’s API schema. Server providers can monetize their services through various billing models, such as subscription fees, pay-per-use charges, or offering premium features. Since users manage their own data and can choose which servers to interact with, service providers are encouraged to offer high-quality, reliable services to attract and retain users.

By accepting payments in the cluster’s tokens, service providers also participate in the cluster’s economy. If the token’s value increases, the potential revenue for service providers grows as well. In this way, a symbiotic relationship is formed where service providers contribute to the cluster’s growth while profiting from its success.

Client Developers

Client developers create software that provides the cluster’s user interface and caches data sent to and received from servers. They can monetize their efforts by selling premium clients or offering additional features for a fee—all transacted in the cluster’s tokens.

Anyone can provide clients, and because of the interoperability within the cluster’s ecosystem, developers are encouraged to innovate and offer more valuable user experiences. They are motivated to continuously improve the products they provide.

Users

By using the Portable Web, users enjoy greater control over their data and the ability to customize their application experience. They participate in the cluster’s economy by using tokens to access premium features and more. Additionally, users who hold tokens may see their value increase as the cluster grows, providing an incentive to support and promote the cluster.

By engaging in the cluster’s economy, users have more opportunities to actively participate, provide feedback, and contribute to the community. Their involvement is expected to help develop the ecosystem further.

Discussions

Here, I briefly outline concerns and future challenges.

Versioning

With the current version management method, there’s a risk that migration scripts and API schemas could proliferate uncontrollably, negatively impacting user experience and data portability.

At present, it might be desirable for the initial cluster creator to have initiative over specifications while still allowing anyone to customize.

Incentives for Data Lock-in

The initial cluster creators have a disincentive against implementing data lock-in. This is because their goal is to profit from token seigniorage rather than from data lock-in (if they aimed to profit from data lock-in, they would not choose the Portable Web architecture). To profit from token seigniorage, they need to increase the real demand for the token, thereby boosting its price. To increase this demand, cluster creators must offer more attractive applications to users. Applications that appeal to users typically offer:

  • No data lock-in
  • Customizability by anyone, fostering diversity and rapid development.

Given this, cluster creators are likely to see remaining open as more beneficial than implementing data lock-in.

In other words, within the cluster, at least one component set (server, API schema, client, and migration script) must support data portability.

Service providers who join later and are not token stakeholders have a positive incentive for data lock-in, similar to conventional web environments. However, since users can choose components from the cluster, the most user-preferred components will be utilized. In an environment without data lock-in, if users still choose a locked-in component, it is a result of their own decision. This is also part of the value the Portable Web offers, and it cannot deny this choice.

Economics

If payments can be made through methods other than those provided by the browser’s standard, the system’s economy could collapse, rendering this architecture unviable.

When the Purpose of a Cluster Changes

The components of a cluster must align with its purpose. If functionalities that do not follow the cluster’s purpose are implemented, the cluster will lose its distinct identity—the symbol that differentiates it from other clusters. This would be similar to Facebook and LinkedIn—which have different purposes—losing their boundaries and becoming inconvenient applications. Moreover, if a feature does not align with users’ objectives, it is unlikely to gain their support.

Q&A

  • Is the Portable Web feasible?
  • What is the difference from the Fediverse?
  • Why am I posting this here?
  • Is this post the final version?

I welcome your feedback and collaboration to further develop and refine the Portable Web concept.

1 post - 1 participant

Read full topic

ZK Rollup Using FRI for DA with Optimistic Correctable Commitments in Rollups

Published: Sep 21, 2024

View in forum →Remove

Abstract

Scaling blockchains and transitioning from Web2 to Web3 require efficient solutions for storing and accessing large volumes of data. We present a new technique for using FRI commitments combined with optimistic correctable commitments to implement Data Availability (DA). This allows for a significant reduction in the volume of data stored on-chain, by hundreds to thousands of times compared to the volume of data served by the distributed network: for each multi-megabyte cluster of data, it is sufficient to store only a short 32-byte hash on-chain. In the context of rollups, we introduce a new commitment construction that ensures reliable storage and data availability when used in rollups instead of the standard commitment C. Combined with recursive rollups, our solution paves the way for unlimited blockchain scaling and the transfer of Web2 to Web3, ensuring reliability, security, and data availability.

Introduction

The Problem of Scaling and Data Storage in Blockchain

Blockchains produce enormous amounts of data, and efficient management of this data is critical for their scaling and widespread application in Web3. Traditional solutions, such as Filecoin, do not provide reliable data storage at the consensus level. Other solutions, like Arweave, while offering storage reliability, are not suitable for dynamic Web3 applications due to the impossibility of modifying or deleting data.

Modern solutions, such as EthStorage and 0g, aim to solve these problems but face limitations:

  • EthStorage uses data replication, requiring the storage of multiple copies of the same volume of data to ensure reliability.
  • 0g applies Reed-Solomon codes for data sharding but uses KZG10 commitments, which are not optimal for building recursion — a key mechanism for scaling through recursive rollups.

The Potential of Our Solution

  • Scaling through recursive rollups: Support for recursive rollups allows for a significant increase in network throughput. This is achieved by processing large volumes of data off-chain and storing only minimal commitments on-chain. This approach provides unlimited scaling without compromising security and decentralization.

  • Ease of Use through Account Abstraction: Despite the technical complexity of recursive rollups, they remain transparent for end users and developers. With the implementation of account abstraction, users don’t need to concern themselves with which specific rollup stores their funds — they see a total balance and can perform operations without additional complications. Similarly, developers of decentralized applications can create services without worrying about which rollup processes their data. This is akin to how Bitcoin wallets use UTXO abstraction: users see their total balance without delving into technical details. Thus, our solution provides scalability without compromising convenience and accessibility for users and developers.

  • Economical data storage: Our solution allows storing hundreds or thousands of times less information on-chain compared to the volume of data processed off-chain. For each megabyte of data distributed in the network, only a short 32-byte hash is stored on the blockchain. This significantly reduces storage and transaction costs, making the technology more accessible for mass adoption.

  • Flexibility through off-chain block transfer: Using recursive rollup technology, blocks can subsequently be moved off-chain without losing data integrity and availability. This provides flexibility in network architecture and optimizes resource usage.

  • Transformation of Web2 to Web3: Our technology provides infrastructure for transferring existing Web2 applications and services to the decentralized Web3 environment. This opens up new opportunities for companies, allowing them to leverage the advantages of blockchain — security, transparency, and decentralization — without the need to completely rethink their business models.

  • Competitive advantage: Unlike existing solutions, our proposal combines efficiency, scalability, and compatibility with advanced technologies such as recursive rollups. This creates a significant competitive advantage and sets new standards in the industry.

Our Goal

We propose a solution that allows the use of FRI (Fast Reed-Solomon Interactive Oracle Proofs of Proximity) for DA with optimistic correctable commitments. This provides:

  • Efficient data storage: Storing only a small commitment (e.g., 32 bytes) on-chain for data clusters of several megabytes.
  • Compatibility with recursive rollups: Reducing the volume of data required for on-chain storage, contributing to unlimited blockchain scaling.
  • Data reliability and availability: Ensuring data correctness and availability even in the presence of errors or malicious actions.

Preliminary Information

FRI and Its Application

FRI is a method used in zkSNARK protocols to verify the proximity of data to a Reed-Solomon code. It allows creating compact commitments to large volumes of data and provides efficient verification of their correctness.

Problems When Using FRI for DA

Direct implementation of FRI for DA faces problems:

  • Incorrect commitments: A malicious participant can present a commitment with errors in correction codes. If these errors are few enough, such a commitment will be accepted by the network.
  • Lack of connection between commitment and data: Even if the data is recovered, establishing a connection with the original commitment is difficult due to possible errors in the original commitment.

Our Solution: Optimistic Correctable Commitments

We propose an extension over FRI that solves these problems by introducing a new commitment construction \mathcal{C} = (C, \chi, \{a_i\}), where

\mathbf{D} - data we want to store in the network,

C=\mathrm{Commit}(\mathbf{D}) - commitment to the data,

\mathrm{Shard}_j - shards into which the data is divided using Reed-Solomon codes,

H_j = \mathrm{Hash}(\mathrm{Shard}_j) - hash of the shard,

\chi = \mathrm{Challenge}(C, \{H_j\}) - pseudorandom challenge,

\{a_i\} - result of opening the commitment C at \chi.

This construction provides:

  • Connection between data and commitment: Ensures that the data corresponds to the commitment and can be verified.
  • Possibility of error correction: The system is capable of detecting and correcting errors without trusting validators or clients.
  • Use in rollups: Provides mechanisms for integration with rollups, allowing them to use \mathcal{C} instead of the standard commitment C.


Implementation Details

Data Structure and Sharding

Data Representation

Let the data D be represented as a matrix of size T \times K, where |D| = T \cdot K. We consider the matrix as a function of columns.

Domain Extension

We apply domain extension to the matrix, increasing the number of columns from K to N using Reed-Solomon codes and the Discrete Fourier Transform (DFT). This introduces the redundancy needed for error correction.

Data Sharding

The resulting matrix has a size of T \times N. Each column \text{Shard}_j (for j = 1, \dots, N) is considered as a separate data shard. These shards are distributed among network nodes for storage.

Shard Hashing

Each shard is hashed using a cryptographic hash function:

H_j = \mathrm{Hash}(\text{Shard}_j).
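
A minimal toy sketch of these steps (my own illustration: a tiny prime field and naive polynomial evaluation instead of a DFT/NTT, with SHA-256 standing in for the protocol’s hash function):

```python
# Extend a T x K data matrix to T x N by evaluating each row (read as the
# coefficients of a degree < K polynomial) at N distinct points; any K of the
# N columns then suffice to recover the row, which is the Reed-Solomon redundancy.
import hashlib

P = 2**31 - 1
T, K, N = 4, 4, 8

def eval_poly(coeffs, x):
    acc = 0
    for c in reversed(coeffs):      # Horner's rule, coefficients low-to-high
        acc = (acc * x + c) % P
    return acc

data = [[(31 * i + j) % P for j in range(K)] for i in range(T)]                # T x K matrix D
extended = [[eval_poly(row, x) for x in range(1, N + 1)] for row in data]      # T x N matrix
shards = [[extended[i][j] for i in range(T)] for j in range(N)]                # column j = Shard_j
shard_hashes = [hashlib.sha256(repr(s).encode()).digest() for s in shards]     # H_j = Hash(Shard_j)
```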


Commitment Generation and Opening

Commitment to Data

We apply FRI to the matrix, considering it as a function of rows. This allows obtaining a commitment C = \mathrm{Commit}(D), compactly representing all the data.

Random Point Generation

We use the Fiat-Shamir heuristic to calculate a pseudorandom point \chi, dependent on the commitment and shard hashes:

\chi = \mathrm{Challenge}(C, H_1, H_2, \dots, H_N).
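
A sketch of this step (the hash function and the reduction into the field are my assumptions, not the protocol’s concrete choices):

```python
# chi = Challenge(C, H_1, ..., H_N): hash the commitment and all shard hashes,
# then reduce into the field, so chi is fixed once the commitment and shards are fixed.
import hashlib

def challenge(commitment: bytes, shard_hashes: list, field_modulus: int) -> int:
    h = hashlib.sha256(commitment)
    for H_j in shard_hashes:
        h.update(H_j)
    return int.from_bytes(h.digest(), "big") % field_modulus
```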

Commitment Opening

We perform a polynomial opening of the commitment C at point \chi, obtaining proof \{a_i\}:

\{a_i\} = \mathrm{Open}(C, \chi).

New Commitment Construction for Rollups

Use in Rollups

When applied in rollups, \mathcal{C} is used instead of the standard commitment C.

Shard Correctness Verification

For each shard \text{Shard}_j, a node can verify it without the need to store all the data:

  1. Calculate the shard value at point \chi:

    s_j = \mathrm{Eval}(\text{Shard}_j, \chi).

  2. Calculate the opening value at the point corresponding to the shard:

    s'_j = \mathrm{Eval}(\{a_i\}, P_j),

    where P_j is the domain point associated with shard j.

  3. Compare values:

    s_j \stackrel{?}{=} s'_j.

If the equality holds, shard \text{Shard}_j is considered correct.
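
The following toy example (my own, greatly simplified: a tiny prime field and coefficient arithmetic, with no FRI or Merkle layer) shows why the two evaluations agree for a correct shard: both are evaluations of the same bivariate polynomial D(x, y) at (\chi, P_j).

```python
# D(x, y) has a T x K coefficient matrix c[i][k]. Column j of the extended data,
# i.e. the row evaluations at P_j, is also the coefficient vector of the
# univariate polynomial x -> D(x, P_j); the opening {a_i} at chi is the
# coefficient vector of y -> D(chi, y). Both evaluate to D(chi, P_j).
import random

P = 2**31 - 1
T, K = 4, 4
random.seed(1)
coeffs = [[random.randrange(P) for _ in range(K)] for _ in range(T)]   # c[i][k]

def eval_poly(cs, x):
    acc = 0
    for c in reversed(cs):
        acc = (acc * x + c) % P
    return acc

P_j = 7                              # domain point associated with shard j
chi = random.randrange(P)            # pseudorandom challenge

shard_j = [eval_poly(coeffs[i], P_j) for i in range(T)]                          # Shard_j
opening = [eval_poly([coeffs[i][k] for i in range(T)], chi) for k in range(K)]   # {a_i}

s_j = eval_poly(shard_j, chi)        # Eval(Shard_j, chi)
s_j_prime = eval_poly(opening, P_j)  # Eval({a_i}, P_j)
assert s_j == s_j_prime              # a tampered shard fails this check with high probability
```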


Correctness Lemma

Lemma: If for shard \text{Shard}_j the equality s_j = s'_j holds, then with high probability \text{Shard}_j is a correct shard of data D.

Proof:

Using the Schwartz-Zippel lemma, we know that two different polynomials of degree d can coincide in no more than d points out of |F|, where F is the field. Since \chi is chosen randomly, the probability that an incorrect shard will pass the check is at most \frac{d}{|F|}, which is negligibly small for large fields.

System Architecture

Data Writing Process

  1. Transaction Initiation:

    • The client forms a transaction with data \mathbf{D} and metadata \mathcal{M}:
      • Commitment C.
      • Shard hashes \{H_j\}.
      • Opening \pi = \{a_i\} at point \chi.
      • Construction \mathcal{C} = (C, \chi, \{a_i\}).
  2. Transaction Validation:

    • Validators check the client’s signature.
    • Verify the correctness of metadata and opening \pi.
    • Verify the correctness of \mathcal{C}.
  3. Transaction Signing:

    • If all checks are successful, validators sign the transaction and transmit metadata and shards to network nodes.
  4. Shard Distribution:

    • Data \mathbf{D} is sharded, and shards \{\text{Shard}_j\} are distributed among network nodes.
  5. Node Verification:

    • Nodes receive their shards and check:
      • Correspondence of shard hash H_j and received shard \text{Shard}_j.
      • Correctness of opening \pi and construction \mathcal{C}.
      • If all checks are successful, nodes sign the transaction, after which the transaction can be included in a block.
  6. Metadata Storage:

    • Nodes parse the verified metadata into fragments, sign them using threshold signature, and distribute these fragments for storage among themselves.

Optimistic Error Correction

If errors or inconsistencies are detected:

  • Decentralized Recovery: Nodes jointly recover correct data using the redundancy of Reed-Solomon codes.
  • Fraud Proofs: Nodes can form fraud proofs if they detect incorrect actions.
  • Use of SPoRA Mining: In the process of SPoRA mining, nodes find in their data structure the data for which they can receive a reward. We have improved SPoRA mining to incentivize the node to recover all original data and commitments for the sectors involved in mining. Thus, to check the correctness of shard hashes and commitment, the miner only needs to compare several hashes, which minimally uses their resources but stimulates the miner to actively check and recover data.

Classification and Elimination of Errors

Types of Defects

Uncorrectable Defects

  • Incorrect metadata structure: Inability to interpret C, \pi, \{H_j\}, \mathcal{C}.
  • Mismatch between hash and shard: H_j \neq \mathrm{Hash}(\text{Shard}_j).

Reaction: The transaction is rejected by validators and nodes.

Metadata Inconsistency

  • Incorrect opening: Checking \pi returns false.
  • Mismatch in shard verification: s_j \neq a_j.
  • Incorrect calculation of \chi: \chi does not correspond to the calculated value from C and \{H_j\}.

Reaction:

  • Nodes form fraud proofs, proving incorrectness.
  • Validators who proposed such a transaction are subject to penalties.

Correctable Defects

  • Errors in correction codes: Small errors in shards that can be corrected.

    Circuit for this fraud proof for 1 MiB cluster requires \sim 3 \cdot 2^{15} (16 \to 8) poseidon2 hashes.

  • Partial incorrectness of shards: Some shards are damaged but can be recovered from others.

    Circuit for this fraud proof for 1 MiB cluster requires \sim 2^{15} (16 \to 8) poseidon2 hashes.

Reaction:

  • Nodes use data redundancy to recover correct shards in the mining process.
  • Miners will form fraud proofs consisting of a zkSNARK that recalculates this data correctly. Since the data of one block is just hundreds of thousands of field elements, there is no difficulty for the miner to generate such a zkSNARK. The proof and verification of the zkSNARK will be paid from the penalty of validators who proposed such a transaction.

Security Guarantees

It’s important to note that the optimistic elements of the protocol do not reduce the security of the system: they rest on the same assumption of a sufficient number of honest nodes that the rest of the protocol relies on. A node that misses defects or refuses to store data still won’t be able to mine and receive rewards.

  • Data preservation with an honest minority: If there is a sufficient number of honest nodes in the network, the data will be correctly stored and correspond to the corrected code close to the original code used to generate the commitment. Even if the commitment or shards were generated with errors, the network will be able to correct them and restore the data without errors.

  • Protection against incorrect changes: In the process of error correction, a dishonest majority will not be able to substitute shard hashes with incorrect ones or replace the commitment with one that does not correspond to the corrected code. This guarantees the immutability of data and their correspondence to the stated commitment. If there is already an accepted commitment in the network, it is impossible to introduce errors into it during the error correction process, even if the client, all validators, and all nodes are malicious.

Examples and Scenarios

Example 1: Data Recovery with Errors

Situation: Several shards are damaged due to failures.

System Actions:

  1. Error Detection: Nodes detect incorrect shards during verification.
  2. Data Recovery: Using Reed-Solomon code redundancy, nodes recover correct shards.
  3. Metadata Update: Update corresponding shard hashes H_j.
  4. Continued Operation: The system functions without interruptions, data remains available.

Example 2: Use in Rollups

Situation: A developer implements our system in a recursive rollup.

Actions:

  1. Integration of \mathcal{C}: The rollup uses the construction \mathcal{C} = (C, \chi, \{a_i\}) instead of the standard commitment C.
  2. Additional Opening: The rollup performs commitment opening at point \chi and includes this in the state proof.
  3. State Verification: Using \mathcal{C}, the rollup proves the correctness of its state in zkSNARK or zkSTARK.
  4. Scaling: Data corresponding to \mathcal{C} is not stored on-chain, which allows for a significant reduction in the volume of data stored on-chain.

Explanation of Key Concepts

Fiat-Shamir Heuristic

A method of transforming interactive protocols into non-interactive ones using hash functions. In our case, it is used to generate a pseudorandom point \chi dependent on the commitment and shard hashes.

Schwartz-Zippel Lemma

A theorem stating that two different polynomials of degree d can coincide in no more than d points from field F. It ensures that the probability of successful forgery of verification is negligibly small.

Reed-Solomon Codes

Error correction codes that allow data recovery in the presence of errors or losses. Used to create redundancy and ensure reliability of data storage.

\mathcal{C} Construction in Rollups

  • Why it’s needed: Provides a link between data and commitment, allowing rollups to prove the correctness of their state.
  • How it’s used: The rollup includes \mathcal{C} in its proofs, which guarantees data availability and its correspondence to the commitment.
  • Advantages:
    • Reduction of on-chain data volume.
    • Increasing proof efficiency.
    • Ensuring data reliability and availability.

Conclusion

We have presented a technique for using FRI for DA with optimistic correctable commitments, introducing a new construction \mathcal{C} = (C, \chi, \{a_i\}), which is especially useful in the context of rollups. Our system allows for a significant reduction in the volume of data stored on-chain and ensures data reliability and availability. It provides developers with new tools for creating scalable decentralized applications, integrating with recursive rollups, and incentivizing nodes to behave honestly through SPoRA-mining mechanisms and fraud proofs.

Appendices

Detailed Explanation of Commitment and Opening

Commitment Generation

  1. Data: Matrix D of size T \times K.
  2. Function: consider the rows of D as values of a polynomial function f(x) of degree T-1 over the field F.
  3. FRI Application: Apply FRI to f(x) to obtain commitment C.

Generation of Random Point \chi

  • Use a hash function to generate \chi:
    \chi = \mathrm{Hash}(C, H_1, H_2, \dots, H_N).

Opening at Point \chi

  • Calculate values \{a_i\} necessary to prove that f(\chi) corresponds to the data.

Use in Rollup

  • Additional Opening: The rollup includes in the proof the opening of the commitment at point \chi, i.e., \{a_i\}.
  • \mathcal{C} Construction: The rollup publishes \mathcal{C} = (C, \chi, \{a_i\}).
  • Advantages:
    • Provides data verifiability without the need for access to all data.
    • Guarantees that data is available and stored in the network.
    • Allows the rollup to reduce the volume of data needed for on-chain storage.

Proof of the Correctness Lemma

Assumption: Let \text{Shard}_j be incorrect but pass the check s_j = a_j.

Probability:

  • The probability that an incorrect shard will coincide at point \chi with a correct one is \frac{d}{|F|}, where d is the degree of the polynomial.
  • For a large field F, the probability is negligibly small.

Conclusion: With high probability, if a shard has passed the check, it is correct.

1 post - 1 participant

Read full topic

Proof-of-Stake Vorbit SSF with circular and spiral finality: validator selection and distribution

Published: Sep 20, 2024

View in forum →Remove

Thanks to Francesco D’Amato and Barnabé Monnot for feedback, and to participants of the RIG + friends (1, 2) meetup where parts of Orbit came together—including Barnabé’s finality stairwell.

By Anders Elowsson

1. Introduction

Ethereum will transition to single-slot finality (SSF) to provide fast and strong economic guarantees that transactions included in a block will not revert. This requires upgrades to the consensus algorithm (1, 2, 3, 4) and the signature aggregation scheme (1, 2, 3), as previously outlined. A third requirement is to upgrade Ethereum’s validator economics and management, with current progress on rotating participation presented in the Orbit SSF proposal. A few considerations in this area are how to incentivize validator consolidation (1, 2), how to temper the quantity of stake (1, 2, 3, 4), and how to select validators for the active set.

With SSF, the consensus mechanism will still consist of an available chain (e.g., RLMD GHOST) and a finality gadget (e.g., resembling Casper FFG or Tendermint). It remains unlikely that all validators will be able to participate in every slot, even though validator consolidation from EIP-7251 can offer a tangible improvement. This means that validators must be partitioned into committees, with each committee voting on the head of the available chain and/or finalizing successive checkpoints. Committees voting on the available chain must rotate slowly, but such strict requirements do not apply to the finality gadget (after validators have finalized their checkpoint).

This post will take a closer look at cumulative finality when finality committees rotate quickly or moderately, proposing strategies for validator selection and distribution. A forthcoming post is intended to review the dynamics of slower validator rotations, with a focus on the available chain. To properly model the impact of consolidation, equations for generating a “pure Zipfian” distribution for a specific quantity of stake are first presented in Section 2, with modeled staking sets generated at varying levels of purity. A method for generating committees in a new type of “epoch” is then presented in Section 3 and cumulative finality analyzed under different levels of validator consolidation and stake quantities—applying various committee selection criteria. A good evaluation measure is the “aggregate finality gap”, tallying missing finality for a block during its progression to full finality. It turns out that the activity rate of a validator should not strictly be determined by its size. Ideally, it varies with the quantity of stake and the composition of the validator set, hence “Vorbit”, as in variable Orbit.

Cumulative finality is impeded at epoch boundaries when committees are shuffled. Circular finality is therefore suggested in Section 4, wherein successive epochs are repeated across a longer era, such that finality accrues in a circular fashion. A mechanism for shuffling the validator set in a spiral fashion is also introduced, to improve finality at shuffling boundaries. The impact of various selection and distribution methods is analyzed in Section 5, and the effect on finality across deposited stake is presented in Section 6. Section 7 reviews methods for predicting the optimal number of validating committees, and Section 8 reviews features related to consensus formation and staking risks.

2. Zipfian staking sets used for modeling

2.1 Pure Zipfian distribution

To model committee-based SSF, it is necessary to define the expected level of consolidation in the validator set, including a realistic range. The idea is to generate validator sets across this range and then explore how to optimally partition each set into committees. Optimization criteria relate to for example cumulative finality. An achievable consolidation level will also serve as a healthy bound when exploring consolidation incentives in a forthcoming study.

Vitalik reviewed the distribution of stakers in the early days of Ethereum and established that it was roughly Zipfian. The relationship between the staking deposit size D and the quantity of stakers N was then stipulated to D=32N\log_2{N} under a “pure” Zipfian distribution. A straightforward procedure for generating a “pure” Zipfian staking set is to distribute stakers’ balances as

\frac{32N}{1}, \frac{32N}{2}, ..., \frac{32N}{N}.

When N is large (as in this case), the associated harmonic series

1 + \frac{1}{2} + ... + \frac{1}{N}

approaches \ln(N)+\gamma, where \gamma is the Euler–Mascheroni constant, approximately 0.577. The total quantity of stake is then

D = 32N(\ln(N)+\gamma),

which is close to Vitalik’s approximation. Appendix A.1 shows that N therefore can be determined as

N = e^{ W \left( \frac{D}{32} e^\gamma \right) - \gamma},

where W denotes the Lambert W function. These equations provide the blueprint for generating a pure Zipfian staking set given any specific D. The equation is first applied to D to determine N, and the harmonic series involving N is used to create the distribution. The corresponding two lines of Python code are provided in Appendix A.1.

2.2 Modeled validator sets

Figure 1 shows the resulting distribution of staker balances in cyan. In purple is a second distribution (“1/2 Zipfian”) created by removing half the stakers (every other staker in the sorted set, starting with the second largest), and reallocating the removed ETH across 32-ETH validators. This aims to capture a scenario where many larger stakers maintain 32-ETH validators. Even if they eventually consolidate, it could still represent an intermediate distribution of “nominal” staker set sizes over the next few years as consolidation slowly progresses. This post uses several such distributions, including also a 9/10 Zipfian distribution (removing every tenth staker), a 4/5 Zipfian distribution, and a 2/3 Zipfian distribution.

Figure 1. Log-log plot of distributions of staker set sizes used for modeling in this post, at D= 30M ETH. The set sizes in cyan follow a “pure” Zipfian distribution, and the set sizes in purple remove every other staker and reallocates the stake to 32-ETH validators.

The pure Zipfian distribution has N\approx79\,000 at D= 30M ETH staked. Ethereum’s node count is hard to estimate; crawlers can only provide lower bounds. But it would appear that the node count is a bit below the staker set size for this hypothetical distribution. The 1/2 Zipfian distribution in purple has N\approx481\,000. This is a point that hopefully will be passed through on the way to a consolidated validator set; yet it is uncertain how quickly progress will be made.

The staking sets are converted to validator sets \mathcal{V} by having stakers with more than 2048 ETH (excluding those already reallocated to 32-ETH validators) divide their stake into validators of the maximum allowed size (s_{\text{max}}=2048), thus capturing the ideal outcome. The last two validators in this procedure are set to an equal size below 2048. For example, a staker with 5048 ETH will have validators of size {2048, 1500, 1500}.

Most of the stakers hold less than 2048 ETH under the Zipfian distributions, so this only adds around 9000 validators for the pure Zipfian distribution and around 5000 validators for the 1/2 Zipfian distribution. For the Zipfian staking set, Appendix A.2 shows that the corresponding Zipfian validator set size V=|\mathcal{V}| can be estimated quite precisely as

V = \frac{N}{64} \left(63+\ln(N/64) + 2\gamma \right).

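As a sanity check, both closed-form expressions can be evaluated directly in Python (assuming numpy and scipy are available; D = 30M ETH is used here to match the figures quoted in the text and Appendix A.2):

import numpy as np
import scipy.special

eg = np.euler_gamma
D = 30_000_000                                         # total stake in ETH
N = round(np.exp(scipy.special.lambertw(D * np.exp(eg) / 32).real - eg))
V = N / 64 * (63 + np.log(N / 64) + 2 * eg)            # validator-set size estimate
print(N, round(V))                                     # roughly 79 000 stakers and 88 000 validators
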
The distribution of validator counts and sums across consolidation levels is shown in Figure 2.

Figure 2. Distribution of validator count and sum at 30M ETH staked in the five modeled validator sets. Axes are log-scaled.

3. Committees, cumulative finality, and aggregate finality gap

3.1 Generation of committees

Define \hat{V_a} as a desirable upper limit for the active validator set size V_a. The protocol can allow (and wants) V_a to increase up to \hat{V_a}, but not beyond this limit. This post sets \hat{V_a}=31250, which corresponds to the committee size when 1 million validators are split up into 32 committees (reflecting approximately the current committee size). There has been some progress in enabling clients to handle larger committees, yet the finality gadget may have a slightly different profile than today (e.g., subject the network to twice the signature load). Smaller committees such as \hat{V_a}=4096 could therefore also be modeled using the same framework if required.

Let C denote the number of committees in a new form of “epoch” constituting a full rotation of the validator set. The validator set is first split up into C disjoint regular committees, ensuring V/C<\hat{V_a}. As an example, the 4/5 Zipfian staking set at D= 30M ETH consists of around 233 thousand (k) validators. An epoch must therefore be split up into at least C=8 committees, with each regular committee in that case consisting of around 29100 validators. Setting C=8 leaves room to include around V_{\mathrm{aux}}=\hat{V_a}-29\,100=2150 auxiliary validators in each committee—validators that also have been assigned to participate in some other regular committee. Once these 2150 validators have been added, the final full committees consist of \hat{V_a} validators.

To select auxiliary validators for the committees, each validator of size s ETH is assigned a weight w. The baseline weighting is

w(s)=\frac{s}{s_{\mathrm{max}}},

where s_{\mathrm{max}}=2048 as previously discussed. This is similar to the thresholding operation of Orbit SSF, but it uses s_{\text{max}} rather than 1024. This change differentiates validators in the range 1024-2048. Vorbit performs optimally under full differentiation, and the change also makes individual consolidation incentives (discussed in Section 8.3) reasonable above 1024. Section 5.1 discusses how Orbit can adopt full differentiation. The probability P(s) for a validator of size s ETH to be drawn as the next auxiliary validator to be included in a committee is given by:

P(s) = \frac{w(s)}{\sum_{v \in \mathcal{V}_{¢}} w(s_v)},

where v represents each validator in the complementary set \mathcal{V}_{¢} not already part of the committee, and s_v is the size of validator v. The smallest validators will then tend to participate in roughly 1/C of the slots and larger validators more frequently, with outcomes depending on the quantity of stake and consolidation level (see also Figure 22).

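A minimal sketch of this weighted draw (assuming numpy, an array of validator sizes in ETH, and the index set of the regular committee; the sampling routine and names are illustrative, not a reference implementation):

import numpy as np

def draw_auxiliary(sizes: np.ndarray, regular: np.ndarray, n_aux: int,
                   s_max: float = 2048.0, p: float = 1.0, rng=None) -> np.ndarray:
    # Draw n_aux auxiliary validators from the complementary set, weighted by w(s) = (s / s_max)^p.
    if rng is None:
        rng = np.random.default_rng()
    candidates = np.setdiff1d(np.arange(sizes.size), regular)   # validators not already in the committee
    w = (sizes[candidates] / s_max) ** p
    return rng.choice(candidates, size=n_aux, replace=False, p=w / w.sum())

Raising p above 1 (Section 5.1) tilts the draw further towards larger validators.
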
3.2 Cumulative finality

Figure 3 shows the stepwise committee-based cumulative finality for the 4/5 Zipfian staking set, with committees finalizing consecutive slots. For transactions included in block n, aggregate finality is visually accounted for at the conclusion of the slot that the committee voted in. Finality when only using the regular committee is illustrated using a dashed blue line. Since each regular committee is completely disjoint and proportionally reflects the overall distribution, each committee adds an equal marginal cumulative finality to non-finalized transactions/blocks. The solid blue line in Figure 3 shows cumulative finalization when each regular committee in the example has been supplemented by auxiliary validators up to \hat{V_a} (“Full”).

Figure 3. Cumulative finalization of block n for the 4/5 Zipfian staking set. The finality gap (blue arrow) gradually falls. The aggregate finality gap is the sum of all finality gaps until full finality (cyan area).

3.3 Aggregate finality gap

Let D_f be the quantity of stake that has finalized a block, and D the total quantity of stake deposited for staking. The finality gap F_g is the proportion of the stake that has not yet finalized a block:

F_g = \frac{D-D_f}{D}.

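For instance, using the round numbers from Section 3.4 below: if D = 30M ETH is staked and roughly D_f = 20M ETH finalizes a block in its first slot (as for the pure Zipfian set with C_{\mathrm{aux}}=4), then F_g = (30-20)/30 = 1/3 at that point.
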
While D_f is a relevant measure of economic security in isolation, D-D_f is less useful if D is unknown. A block’s finality gap will fall with each new slot as long as new validators participate in the finalizing committees. The example with full committees has a lower finality gap due to the additional finality afforded by the auxiliary validators. Since they are selected in a weighted fashion, the effect is rather pronounced even though only around 2150 additional validators were added in this example. The difference in the finality gap diminishes as full finality approaches. At this point, most validators will have been present as part of their regular allocation anyway, and repeating a validator does not, in this comparison, improve upon finality (an argument could potentially be made for higher economic security when repeating a validator, but this is beyond the scope of this post).

A useful utility measure when dealing with cumulative finality is the aggregate finality gap \widetilde{F}_{\!g} that a block is subjected to during consensus formation, until full finalization. It is represented by the cyan area in Figure 3 and is calculated as

\widetilde{F}_{\!g} = \sum_{i=1}^{C} F_{g}(i).

3.4 Auxiliary committees

What happens if the 4/5 Zipfian set is divided into 9 committees instead of 8? The added auxiliary committee (C_{\mathrm{aux}}=1) results in \hat{V_a}/9\approx3470 auxiliary validators in each committee, facilitating a further reduction in \widetilde{F}_{\!g}. A comparison between epochs of 8 committees (blue) and 9 committees (purple) is shown in Figure 4. The difference in the finality gap \Delta F_{\!g} for block n is indicated in green for the slots when there is a reduction in the gap, and in red when there is an increase. Cumulative finality first improves due to the additional auxiliary validators, and this is the most pronounced effect. As the number of duplicated validators increases, the reduction diminishes. At the beginning of slot n+7, the validator set divided into 8 committees instead has a lower F_g, and it reaches full finality one slot earlier, at the start of slot n+8.

Figure 4. Cumulative finalization of block n for the 4/5 Zipfian staking set. When adding an auxiliary committee, there is more room for auxiliary validators with high balances in each committee (purple line), and finality therefore accrues faster during the initial phase.

The aggregate finality gap continues to fall when more auxiliary committees are added, as indicated in Figure 5. In the comparison between C_{\mathrm{aux}}=3 and C_{\mathrm{aux}}=4, \Delta F_{g} is negative starting at the beginning of slot n+5, and continues to fall all the way up to slot n+12. As a result, the aggregate finality gap \widetilde{F}_{\!g} is about equal for these two configurations.

Figure 5. Cumulative finalization of block n for the 4/5 Zipfian staking set, comparing the outcome between different numbers of auxiliary committees.

Figure 6 shows the same example for a purely Zipfian staking set. At C_{\mathrm{aux}}=4, almost 20M ETH (2/3 of the stake) will finalize the block in the first slot. As in the previous example, the benefit of adding auxiliary committees diminishes as more are added (\widetilde{F}_{\!g} stops decreasing and eventually reverses).

Figure 6. Cumulative finalization of block n for a pure Zipfian staking set, comparing the outcome between different numbers of auxiliary committees.

Figure 7 instead shows the outcome with a 1/2 Zipfian staking set. The approximately 486k validators need to be split into at least

\left\lceil \frac{486000}{\hat{V}_{\!a}} \right\rceil = 16

committees. With no full committees, only 1.875 million ETH will finalize each round. This might seem problematic since a committee that fails to finalize will hold up finality until a sufficient amount of stake in the committee has been replaced through an inactivity leak or a similar mechanism. An accelerated inactivity leak could be considered under such circumstances. From an accountability perspective, this level of stake has however been argued to be totally sufficient.

Figure 7. Cumulative finalization of block n for the 1/2 Zipfian staking set, comparing the outcome between different numbers of auxiliary committees.

4. Circular and spiral finality

The figures in the previous subsection have all captured cumulative finality over one epoch and are representative for the first block in an epoch. During each epoch, the complete validator set is iterated over, so full finality is reached at the end of the epoch. However, if the validator set is shuffled between epochs, then only the first block of the epoch will achieve full finality by the end of the epoch. For blocks in later slots of the epoch, full finality will not be reached until the end of the next epoch. Marginal cumulative finality decreases markedly at epoch boundaries, because committees on each side of the boundary will have more validator overlaps (even when using only regular committees). Thus, when stating that full finality can be reached within eight slots for the 4/5 Zipfian staking set in Figures 3-5, this is a qualified statement. For the second block of the epoch, full finality is not reached until after 15 slots (7 slots in the first epoch and 8 slots in the subsequent epoch). Recall that C denotes the number of committees in an epoch, which is also then the number of slots in an epoch during regular operation. The average number of slots to full finality \bar{S}_{\!f} then becomes

\bar{S}_{\!f}=C + \frac{C-1}{2}.

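One way to see where this expression comes from (under the same assumption as above, that a freshly shuffled epoch must be traversed in full): the first block of an epoch reaches full finality after C slots, while a block proposed in slot k, for k = 2, \dots, C, must wait for the C-k+1 remaining committees of its own epoch plus all C committees of the next epoch, i.e., 2C-k+1 slots. Averaging over the C blocks of the epoch gives

\bar{S}_{\!f} = \frac{1}{C}\left(C + \sum_{k=2}^{C}(2C-k+1)\right) = C + \frac{C-1}{2}.
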
While full finality might be more of an ideational concern, degradation from shuffling begins already at the second slot/committee if the block was proposed in the last slot of the epoch. There are two ways to improve on this: circular finality and spiral finality. Both provide benefits starting from a block’s second slot of accruing finality.

4.1 Circular finality

The most straightforward solution is to avoid shuffling the validator set each epoch. Instead, the validator set is shuffled in eras, where each era can consist of multiple epochs. The number of epochs per era E_{\text{era}} is determined from the desired number of slots per era \hat{S}_{\text{era}} and C, rounded to the nearest integer:

E_{\text{era}}=\lfloor\hat{S}_{\text{era}}/C\rceil.

With this change, the first (E_{\text{era}}-1)C blocks of the era will be finalized in C slots, whereas the last C blocks will finalize in accordance with the previous equation C + (C-1)/2. Furthermore, and perhaps more importantly, cumulative finality will not degrade when crossing epoch boundaries within the era. The average number of slots to full finality among the E_{\text{era}}\times C blocks of an era becomes:

\bar{S}_{\!f}=\frac{C(E_{\text{era}}-1)C + C(C + (C-1)/2)}{E_{\text{era}}\times C},

which simplifies to

\bar{S}_{\!f}=C + \frac{C-1}{2E_{\text{era}}}.

As an example, set \hat{S}_{\text{era}}=64. The 4/5 Zipfian staking set with no auxiliary slots will then finalize 57 out of 64 blocks in 8 slots, with one block each among the remaining finalizing in 9, 10, 11 slots, etc. Furthermore, only the last 7 out of 64 slots in the era will suffer degraded cumulative finalization, whereas 56 out of 64 will do so without circular finality. The average number of slots to full finality becomes \bar{S}_{\!f} \approx 8.4. In contrast, without circular finality, the result is \bar{S}_{\!f}=C+(C-1)/2 = 11.5.

4.2 Spiral finality

While circular finality is effective in reducing \bar{S}_{\!f} and the proportion of blocks with degraded cumulative finality, it does not reduce the maximum time to full finality, which remains S_{f} = 2C-1. This maximum applies to the block proposed in the second slot of the last epoch of the era. A method to reduce this maximum is spiral finality, where limits are placed on how many slots validators may shift forward within the epoch when they are shuffled. This is controlled by the variable C_{\text{shift}}. Setting C_{\text{shift}}=2 means that validators may only shift two slots forward, but they can always shift back to the start of the epoch. The regular validators located in the first committee of the epoch \mathcal{C}_n can then be reassigned between committees \mathcal{C}_n and \mathcal{C}_{n+2}, the regular validators in committee \mathcal{C}_{n+1} can be reassigned between committees \mathcal{C}_n and \mathcal{C}_{n+3}, etc. If \hat{V}_a is set relatively low, it might be reasonable to make further stipulations on the random selection, to ensure an even distribution of large and small validators.

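A minimal sketch of this constrained reshuffle (the uniform draw within the allowed range and the zero-indexed committee offsets are illustrative assumptions; as noted above, further stipulations would be needed to keep committee sizes and compositions balanced):

import numpy as np

def spiral_reassign(offsets: np.ndarray, C: int, C_shift: int = 2, rng=None) -> np.ndarray:
    # offsets[v] is validator v's committee offset (0..C-1) within the old epoch.
    # Each validator may shift at most C_shift slots forward, but may always
    # move back towards the start of the epoch.
    if rng is None:
        rng = np.random.default_rng()
    upper = np.minimum(offsets + C_shift, C - 1)    # inclusive upper bound on the new offset
    return rng.integers(0, upper + 1)               # new offset drawn from [0, upper]
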
Circular and spiral finality can be combined to achieve a low average time to full finality, as well as a lower upper bound on it. In this setup, spiral finality is applied to the last epoch of an era.

5. Optimized selection and distribution of auxiliary validators

This section reviews two different methods for optimizing the distribution of auxiliary validators. The plots will, as previously, disregard epoch boundaries (circular finality is presumed). In fact, to provide more stable results in light of the randomness inherent in the validator selection process, finality is evaluated in a circular fashion in all plots of cumulative finality in this post. This involves computing results for C consecutive slots across all C different starting positions. Additionally, the approach ensures that spacing and distribution of validators are not attuned to epoch boundaries, and is particularly useful in Section 5.2, which introduces equally spaced validators.

5.1 Adjusted weighting

The most straightforward modification is to adjust the weighting scheme by adding the power p to the original equation:

w(s)=\Big(\frac{s}{s_{\mathrm{max}}}\Big)^p.

If p>1, larger auxiliary validators are further prioritized over smaller validators. This can be useful since the smaller validators are still guaranteed to be included in one committee, and C can be relatively small (short epochs). A potential change to the Orbit slow-rotation paradigm, when validators are selected directly from the weighting and there is no regular committee, is that p instead can be set below 1. This reduces the “slope” of the thresholding mechanism, allowing smaller validators to be selected with a higher probability than for example 1/32 or 1/64. This can be beneficial for reasons discussed in Section 8.3, and will be further explored in a post covering the slowly rotating validator set.

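As a quick numeric illustration: under p = 1, a 512-ETH validator has weight 512/2048 = 0.25, so a 2048-ETH validator is 4 times more likely to be drawn; under p = 2, the weight becomes (512/2048)^2 = 0.0625 and the ratio grows to 16.
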
Figure 8 shows the difference in the finality gap in terms of finalized stake \Delta D_{f} when changing p from 1 to 2. The average outcome across the five validator sets (from 1/2 Zipfian to fully Zipfian) was used. The reader may also wish to review Figure 22 in Section 8.2, which shows how the change in weighting alters the probability for a validator of size s to be active.

Figure 8. Change in D_{f} at 30M staked when p is changed from 1 to 2, during a block’s progression to full finality.

As evident, D_{f} is on average almost 5M ETH higher for the first slot, when C_{\text{aux}} is between 3-4 (those lines are somewhat overlapping in the graph). This is a significant improvement, reducing the finality gap at the first slot by almost 1/6. The examples with 2-4 auxiliary committees then experience a slight reduction starting at n+5. This is because validators with the most stake become included in almost every committee: repeated validators do not increase the cumulative finality, and they occupy space in the committees, preventing new validators from finalizing the block.

5.2 Equal spacing

Since repeated validators do not increase cumulative finality, it is advantageous to equally space repeated auxiliary validators across the epoch so that repetitions occur as far apart as possible. The distribution of auxiliary validators can then be done slightly differently. The number of auxiliary inclusions \lambda can be set for a validator with stake s as

\lambda(s) = \frac{V_{\text{aux}}(C-1)w(s)-\widetilde{V}_{\!\text{aux}}}{\sum_{v \in \mathcal{V}} w(s_v)}.

In this equation, \widetilde{V}_{\!\text{aux}} sums the auxiliary validator instances added across the full epoch among validators that are present in every slot. It is initially set to zero. For any validator v with \lambda_v > C-1, an iterative procedure sets \lambda_v = C-1, removes the validator from \mathcal{V}, adds C-1 to \widetilde{V}_{\!\text{aux}}, and recomputes \lambda for the remaining validators. The iterative procedure relying on \widetilde{V}_{\!\text{aux}} is necessary because a validator can never be included more than once per slot (\lambda \not > C).

Given \lambda, each validator is guaranteed inclusion in \lfloor \lambda \rfloor committees, with any remaining fraction used when drawing validators that will be included in one additional auxiliary committee. The final outcome is denoted \lambda_f. Auxiliary inclusions for validators are equally spaced at intervals of C/(\lambda_f+1) slots. The spacing procedure starts from the regular committee position, rounding the computed distance to the nearest integer, and wrapping around epoch boundaries using the modulo operation.

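A minimal sketch of the spacing rule just described (assuming zero-indexed slots within an epoch of C committees; lambda_f and the regular position are taken as given):

def spaced_auxiliary_slots(regular_slot: int, lambda_f: int, C: int) -> list:
    # Place lambda_f auxiliary inclusions at intervals of C / (lambda_f + 1) slots,
    # starting from the regular committee position and wrapping around the epoch.
    step = C / (lambda_f + 1)
    return [(regular_slot + round(k * step)) % C for k in range(1, lambda_f + 1)]

For example, with C = 12, \lambda_f = 2 and a regular position in slot 3, the auxiliary inclusions land in slots 7 and 11.
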
Due to randomness in the distribution of validators, slots will with this procedure generally end up slightly below or above \hat{V}_a. In general, this should not be an issue because \hat{V}_a would typically allow for some flexibility. However, to maintain consistency with the random draw in the evaluation, an iterative procedure reallocated validators from committees with more than \hat{V}_a validators to committees with fewer, still ensuring no duplications of validators within a committee.

Let \equiv represent equal spacing and \not\equiv the spacing achieved due to random draw. The change \Delta D_f at 30M ETH staked, computed as D_f(\equiv) - D_f(\not\equiv), is shown in Figure 9. The variable p was set to 2 both for random and equally spaced validators.

The most significant improvement from equal spacing occurs in the second slot after the block has been proposed. By definition, the first slot will not contain any repetitions anyway. The improvements are most pronounced when there are fewer auxiliary committees, as these are the circumstances where 2048-ETH validators are not included in nearly every committee.

Figure 9. Change in D_{f} at 30M staked when validators are equally spaced across the epoch, during a block’s progression to full finality.

The outcome for the 4/5 Zipfian staking set using p=2 and equal spacing is shown in Figure 10. It can be compared with the previous plot in Figure 5, that shows the outcome with p=1 and random spacing. The changes increase D_f from 15M ETH to 20M ETH in the first slot when C_{\text{aux}} is 3-4 and in the second slot when C_{\text{aux}}=1.

Figure 10. Cumulative finalization of block n for the 4/5 Zipfian staking set, with p=2 and equal spacing \equiv.

6. Analysis across D

The analysis in this and the next section relies on p=2 and the randomized distribution of auxiliary validators \not\equiv. Figure 11 shows how the aggregate finality gap varies with C_{\text{aux}} across deposit size for the 4/5 Zipfian set. At lower quantities of stake, C_{\text{aux}}=0 gives the lowest \widetilde{F}_{\!g}. At higher quantities of stake, C_{\text{aux}}=5 gives the lowest among those plotted. But increasing C_{\text{aux}} all the way up to 7 will spuriously give the lowest results above 70M ETH staked. However, outcomes are very tightly overlapping at higher settings (hence they were not plotted), implying that in terms of \widetilde{F}_ {\! g}, venturing above C_{\text{aux}}=4 will not offer significant improvements.

The characteristic shark-fin pattern emerges when validators are redistributed due to changes in C. As D increases while the distribution is kept fixed, V also increases. Each “fin” represents the addition of one committee. This addition gives room for more auxiliary validators, which reduces \widetilde{F}_{\!g} when C_{\text{aux}} is relatively low. However, if C_{\text{aux}} is too high for the given validator set, the outcome is reversed, and the addition of one committee instead increases \widetilde{F}_{\!g}. This is evident in Figure 12, which zooms in on the outcome below D= 35M ETH.

Figure 11. Aggregate finality gap for the 4/5 Zipfian set across D for various numbers of auxiliary committees.

Figure 12. Aggregate finality gap for the 4/5 Zipfian set with D\leq 35M ETH for various numbers of auxiliary committees.

Define \widetilde{F}^*_{\!g} as the minimum aggregate finality gap, achieved at the associated minimum number of auxiliary committees C^*_{\text{aux}}. This corresponds to the lowest line at any specific D in Figures 11-12. Figure 13 plots \widetilde{F}^*_{\!g} for all five staking sets. As evident, there are two fundamental factors that degrade committee-based cumulative finality: a higher quantity of stake and a lower level of consolidation.

Figure 13. The minimum aggregate finality gap across stake. Fast finality is degraded both by a higher quantity of stake and a lower level of consolidation.

Figure 14 instead focuses on how the optimal number of committees at \widetilde{F}^*_{\!g} varies. However, the optimal number of committees will fluctuate greatly due to the fin-like pattern evident in Figure 12, and it is also a discrete measure. Therefore, parabolic interpolation (see Appendix B.3) was applied to three points around the minimum, resulting in a smoother representation of total committees, here denoted C^y. Both the aggregate finality gap and the total number of committees rise linearly with an increase in the quantity of stake, keeping the distribution fixed.

Figure 14. Interpolated total number of committees that minimizes the aggregate finality gap.

7. Predicting the optimal number of auxiliary committees

7.1 Overview

How should the number of auxiliary committees (or any other setting such as p) be determined during operation if the suggested variant of committee-based finality is pursued? Five options can be highlighted:

  1. Generate committees and compute \widetilde{F}_ {\! g} (or some more appropriate measure, as further discussed in Appendix B.2) for various C_{\text{aux}}-settings (e.g., 0-6), selecting the one that minimizes \widetilde{F}_ {\! g}. This solution might have a high computational load if there are several hundred thousand validators.
  2. Run the process for only the current and adjacent C_{\text{aux}}-settings. If the analysis is performed frequently, store the results and rely on hysteresis, switching only if a clear majority of recent evaluations are in favor. For example, a threshold of 80% over the last week could be used.
  3. If the computational load in (2) is still too high, a reduced validator set \mathcal{V}_r can be relied upon, with validators in \mathcal{V}_r drawn evenly spaced from the ordered full set \mathcal{V}. The setting for \hat{V_a} should then be reduced in proportion to the reduction in the validator set before generating committees.
  4. Compute some simple features of the validator set related to for example variability, and determine an appropriate number of auxiliary slots based on these features.
  5. Specify a fixed number of auxiliary committees so that the mechanism performs reasonably well under a wide range of validator sets, e.g., C_{\text{aux}}=2.

When it comes to implementation complexity, all options are rather straightforward. A benefit of Options 1-3 is that clients will need to implement the function for assigning validators to committees anyway. The remaining process is then the evaluation function described in Appendix B.1. This process does not have a high time complexity, but it might still be too computationally intensive if there are many hundreds of thousands of validators.

Option 2 is then rather appealing, with some parallels to how hysteresis is leveraged when updating validators’ effective balance. Option 3 can further reduce the computational requirements by at least an order of magnitude (10x), perhaps up to two (100x). A question then is what accuracy can be achieved if the validator set is reduced to, for example, |\mathcal{V}_r| = 1000 or |\mathcal{V}_r| = 5000. This is studied in Section 7.2. Another question is of course the viability of Option 4, which could further reduce the computational requirements. This is studied in Section 7.3, and Option 5 is reviewed in Section 7.4. The conclusion of the experiment, expanded on in Section 9, is that Options 2, 3, or 5 seem the most viable, with Option 5 as a natural starting point.

The ground truth for modeling was not based on the optimal number of auxiliary committees C_{\text{aux}} but instead on the optimal number of auxiliary validators V_{\text{aux}}, mainly to circumvent the fin-like pattern in Figures 11-12. To achieve a smoother target, a more refined point V^{y}_{\text{aux}} between neighboring V_{\text{aux}} values was derived via parabolic interpolation—as previously illustrated also for C^y in Figure 14. A further small adjustment was made before interpolation, slightly weighting up \widetilde{F}_ {\! g} when a large number of auxiliary slots relative to the number of regular slots was applied. Appendix B.2-3 explains the full procedure for generating the ground truth. Appendix B.4 then describes the generation of an additional log-normal validator set to provide a greater spread in the evaluated examples. It is shown in yellow in Figures 15-20. One thousand validator sets were generated for each of the six different distributions for the analysis with D in the range 10M-80M ETH, giving a total of 6000 examples.

7.2 Prediction accuracy with a reduced validator set

How accurate can the predictions be with a reduced validator set \mathcal{V}_r from Option 3, if \mathcal{V}_r consists of only 1000 or 5000 validators? To test this, the predicted optimal, V^{x}_{\text{aux}}, was computed on the reduced set, using the same evaluation procedure as when setting the ground truth V^{y}_{\text{aux}} on the full set (Appendix B.1). The outcome is shown in Figure 15 for 1000 validators (R^2=0.893) and in Figure 16 for 5000 validators (R^2=0.960). The broader black diagonal line represents perfect predictions, while the thinner black lines indicate the range where predictions fall within \hat{V}_{\!a}.

Figure 15. Predictions of the optimal number of auxiliary validators compared to ground truth, based on a reduced set of 1000 validators.

Figure 16. Predictions of the optimal number of auxiliary validators compared to ground truth, based on a reduced set of 5000 validators.

As evident, at higher values for V_\text{aux}, predictions become increasingly less accurate. This is related to the phenomenon shown in Figure 11, wherein the relative difference in \widetilde{F}_{\!g} will not be that large between the best settings for C_{\text{aux}} (and thus V_{\text{aux}}). The broader implication is that getting C_\text{aux} slightly wrong will then not matter much. At the other end, getting it wrong at lower C_\text{aux} towards the bottom left corner is more of a concern. Note also that only examples where D> 40M ETH are problematic (review Figure 20 for the ground truth range 25M-35M).

The errors in the predictions stem from how the random selection influences the composition of the committees. Repeating the experiment several times and averaging the outcomes will therefore serve to improve accuracy (as would equally spaced validators described in Section 5.2). An example of the average outcome for four reduced validator sets with |\mathcal{V}_r| = \{1000, 2000, 3000, 5000\} respectively is shown in Figure 17. The predictions are now all within the \hat{V}_a boundary, and R^2=0.981. It must also be remembered that while V^{y}_{\text{aux}} by definition is the ground truth ideal outcome, it will also itself reflect a random division of validators during generation.

Figure 17. Predictions of the optimal number of auxiliary validators compared to ground truth, based on the average of four reduced sets.

7.3 Prediction accuracy using general features

Can general features be used to determine the optimal number of auxiliary validators? To explore this, features were generated for the simulated validator sets, capturing basic properties such as validator count, deposit size, and various measures of variability. Polynomial feature expansion of degree 2 was used to generate all monomials of the original features, capturing interactions and non-linear relationships. Predictions were then made using multiple linear regression. The final features were selected through a semi-automatic forward feature selection process, manually choosing among top predictors to favor those that are easier to interpret (a key requirement is a simple model). This process resulted in a linear regression model consisting of three features: \{V\sigma, V_w\delta, D^2\}.

The first feature is the number of validators V multiplied by the standard deviation of the validator set \sigma. If the standard deviation is high, auxiliary validators become particularly useful for reducing the finality gap. The second feature multiplies the average absolute deviation, denoted \delta, with a weighted count of validators V_w. The weighting assigned validators of size 32 and 2048 a weight of 1, with the weight then log-linearly falling to 0 at the mean validator size \bar{\mathcal{V}}. Specifically, each validator holding s ETH received a weighting of:

\text{Weighted count}(s) = \begin{cases} 1 - \frac{\log(s) - \log(32)}{\log(\bar{\mathcal{V}}) - \log(32)} & \text{if } s \leq \bar{\mathcal{V}} \\ 1 - \frac{\log(2048) - \log(s)}{\log(2048) - \log(\bar{\mathcal{V}})} & \text{if } s > \bar{\mathcal{V}} \end{cases}.

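Read literally, and with the split point at the mean validator size \bar{\mathcal{V}} as the prose indicates, the weighted count could be computed as in the sketch below (an illustrative reading, not the exact code used for the study):

import numpy as np

def weighted_count(s: np.ndarray, mean_size: float,
                   s_min: float = 32.0, s_max: float = 2048.0) -> float:
    # Weight 1 at 32 and 2048 ETH, falling log-linearly to 0 at the mean validator size.
    low = 1 - (np.log(s) - np.log(s_min)) / (np.log(mean_size) - np.log(s_min))
    high = 1 - (np.log(s_max) - np.log(s)) / (np.log(s_max) - np.log(mean_size))
    return float(np.sum(np.where(s <= mean_size, low, high)))
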
Predictions with V^{x}_{\text{aux}}<0 were set to 0 (the number of auxiliary validators cannot be negative). The predictions had R^2=0.975 and are shown in Figure 18. The wider dispersion in the lower left corner is somewhat problematic, as previously discussed. Option 4 therefore gives slightly worse outcomes than Option 3. Also note that since there was no training/test split, quite a bit of overfitting can be assumed. If Option 4 is to be pursued seriously, there would need to be a test set and a wider variety in training examples.

Figure 18. Predictions of the optimal number of auxiliary validators compared to ground truth, based on general features capturing, e.g., variability in the validator set.

7.4 Prediction accuracy using a fixed number of auxiliary committees

It may also be interesting to review the outcome with a fixed number of auxiliary committees. The outcome when setting all examples to a fixed C_{\text{aux}}=2 is shown in Figure 19. It generates predictions in a vertical band that is \hat{V}_a wide, with deviations from the ground truth extending well beyond \hat{V}_a. Figure 20 instead focuses on the range 25-35M ETH. In this case predictions tend to fall within \hat{V}_a, except for the log-normal distribution, which has several examples with little to no variability in validator balances (in that case, auxiliary slots bring no benefit). This illustrates that if D is kept in a narrow range, the variation in the optimal number of auxiliary committees/validators is reduced considerably. Starting with a fixed number of auxiliary committees is therefore a viable baseline strategy.

Figure 19. Predictions of the optimal number of auxiliary validators compared to ground truth, with a fixed C_{\text{aux}}=2.

Figure 20. Predictions of the optimal number of auxiliary validators compared to ground truth, with a fixed C_{\text{aux}}=2, for validator sets in the range D= 25M-35M ETH.

8. Properties related to consensus formation

8.1 Committee rotation ratio

Define the committee rotation ratio R as the proportion of the stake that is replaced in successive committees following finalization. If all validators are replaced, R=1, and if all remain, R=0. Figure 21 shows how R changes across V_{\text{aux}} at 30M ETH staked. Aside from the relevance to the slow-rotation regime of the available chain, a ceiling has previously been discussed for a finality gadget leveraging Casper FFG. That suggestion, R=0.25, is indicated by a black horizontal line. The circles indicate the point where the aggregate finality gap is minimized (V^*_{\text{aux}}). This happens at rather modest rotation ratios and can of course readily be adjusted. Rotation becomes comparatively slow after adding around 150k auxiliary validator instances (C_{\text{aux}}\approx5), where 90% of the stake remains whenever a committee finalizes and rotates.

Figure 21. Committee rotation ratio R across auxiliary validators. Circles indicate the point where the aggregate finality gap is minimized.

8.2 Activity rate

The activity rate a captures the proportion of the committees that a validator is active in (defined as p in the Orbit post). The reciprocal a^{-1} captures the average number of slots until the validator has participated in one of them and will be referred to as a validator’s “apsis”—its orbital distance.

Figure 22 shows how a varies with validator size s when the aggregate finality gap is minimized (at V^*_{\text{aux}} marked by circles in Figure 21). As evident, a(s) is not a fixed property across validator sets, and will vary with, e.g., consolidation level and deposit size. Leveraging a variable orbit (“Vorbit”) seems natural, because a multitude of features that Ethereum wishes to optimize for vary with the composition of the validator set (a dynamic threshold has also previously been suggested).

Figure 22. The activity rate a across validator balances s at the minimized aggregate finality gap and 30M staked for the five sets. Note that the x-axis is log-scaled.

8.3 The activity ratio and its implications on staking economics

The activity ratio a_r = a(s_{\mathrm{min}})/a(s_{\mathrm{max}}) captures how often validators with a small balance are active, relative to validators with larger balances. The apsis ratio again denotes the reciprocal a^{-1}_r and can sometimes be easier to interpret: a^{-1}_r=32 means that small validators are present 32 times less frequently than big validators. When a_r is small (and a^{-1}_r thus big), stakers running validators with a lower balance close to s_{\mathrm{min}} will bear a lower slashing risk than stakers running validators with higher balances close to s_{\mathrm{max}}. Inactive validators can hardly commit slashable actions (by mistake or otherwise). Someone running many small validators can therefore catch a faulty setup early so that a lower proportion of their validators are affected. Likewise, a small validator is less likely to get caught up in a catastrophic slashing event. Even if such an event only takes place every 100 years, it still meaningfully impacts the expected value of staking, particularly if the total staking yield decreases in the future.

In a slow-rotating mechanism, a_r is particularly relevant, given that stakers with a high average apsis on their validators can have even more time to, e.g., adjust faulty setups for inactive validators before they return as active. Yet a_r is relevant also in a fast-rotating mechanism. To encourage consolidation, individual incentives should compensate for the additional risks that large validators take on, relative to small validators. Individual incentives can potentially also be combined with collective consolidation incentives. The individual incentives will generally need to be higher when a_r is smaller, because the benefit of running smaller validators increases. It is therefore desirable to keep a_r closer to 1, whenever possible. This minimizes the yield differential between small and large validators, reducing “tensions” among stakers. Such tensions emerge under high yield differentials, where Ethereum will favor (or at least appear to favor) stakers with more capital.

Even if the additional yield is intended to compensate for increased risks, only stakers with high capital will have the option to choose between higher and lower risk. Stakers with more capital will also disproportionately benefit from the ability to adjust faulty setups among one of their many validators, should they decide to split up their stake. Tensions may therefore also emerge if a large staking service provider (SSP) relies on small validators to reduce its risk profile, and collective incentives as a result bring down everyone’s yield. There may for example be calls to discouragement attack the specific SSP’s validators, introducing an unhealthy dynamic to consensus formation. There are similarities to the type of issues that may emerge in MEV burn when using an English auction, where SSPs will need to specifically target each other through early builder bids to remain competitive.

In Figure 22, a_r is approximately 1/6 for the Zipfian staking set at p=2. Validators with 2048 ETH are always present, and validators with 32 ETH are only present as regular validators in one of the 6 committees of the epoch. This situation is generally better than if the smallest validators only are present once every 32 slots, as with the Orbit thresholding mechanism. The Orbit thresholding mechanism can however be adjusted by setting p<1. In a subsequent post, addressing slow rotation for the available chain, that avenue is intended to be explored further, together with other related consensus issues beyond the scope of this post.

Allowing 1-ETH validators would further reduce the activity ratio, requiring an increased yield differential. Smaller validator balances such as 1 ETH will thus require a communication effort to explain why these validators receive a markedly lower yield, and why large staking service providers cannot be prevented from relying on 1-ETH validators to lower their risk profile (but certainly nudged in the opposite direction via public discourse).

9. Conclusion and discussion

Cumulative committee-based finality has been reviewed under fast-rotating validator committees. A good measure for evaluating cumulative finality is the aggregate finality gap \widetilde{F}_{\!g}, which aggregates the finality gap for a block during its progression to full finality. The four main avenues for reducing (improving) \widetilde{F}_{\!g} are:

  • adding a few auxiliary committees (around 2-4) beyond those required for all validators to cast one vote in an epoch,
  • including the largest validators in almost every committee,
  • equally spacing auxiliary validators to minimize successive repetitions,
  • pursuing “circular finality” (repeating epochs over a longer era) and “spiral finality” (constrained shuffling) to mitigate degradation in cumulative finality during shuffling.

Five validator sets were used in the analysis, capturing various levels of Zipfianness. Section 6 made it clear that both insufficient validator consolidation and a higher quantity of stake (keeping the distributions fixed) impede finality, thereby increasing \widetilde{F}_{\!g}. When considering tempering the quantity of stake, one argument has been that it will generate a higher quantity of small validators. Regardless of the merits of this theory, it should be noted that a higher quantity of stake and a large proportion of small validators combine to produce the worst conditions for accruing finality, as shown in Figure 13.

Methods for dynamically adjusting the number of auxiliary committees were reviewed. The best method is to simply simulate and evaluate the outcome with the same number of auxiliary committees or one more/fewer. This can be done on a reduced validator set to improve performance, if necessary. However, it is not a strict requirement that the number of auxiliary slots should change dynamically. The optimal setting for a given point in time is likely to remain viable for quite a while.

Properties related to consensus formation are important to keep in mind. As shown in Section 8, the committee rotation ratio R falls rather quickly with added auxiliary validators. It would be beneficial to map out more specific requirements on R from a consensus perspective going forward, both for the available chain and the finality gadget. Requirements regarding the activity ratio a_r are easier to understand in some respects; a higher ratio is better when considered in isolation, as it reduces tensions and yield differentiation.

The assumption of a pure Zipfian staking distribution becomes rather dubious if the range is extended much further. If the minimum staking balance is reduced from 32 ETH to 1 ETH, there will not necessarily be an exponential increase in stakers. One reason is that fixed costs for running a validator eventually surpass yield revenue as the staking balance decreases. For example, when focusing exclusively on fixed costs, if running a 32-ETH validator requires a 1% yield to remain profitable, then running a 1-ETH validator would require a 32% yield. Another point to keep in mind when considering a move to allow for 1-ETH validators is the decreased activity ratio a_r that it would bring. At the same time, allowing users with less capital to become active participants in the consensus process is of course fundamentally valuable and something to strive for.



Appendix A: Zipfian distribution

A.1 Quantity of stakers under a pure Zipfian distribution

For large N, the harmonic series

1 + \frac{1}{2} + ... + \frac{1}{N}

approaches \ln(N)+\gamma, where \gamma is the Euler–Mascheroni constant, approximately 0.577. The total quantity of stake is

D = 32N(\ln(N)+\gamma)

The task now is to deduce N, given a specific D. Let u = \ln(N) + \gamma. Then N = e^{u - \gamma}, and the equation can be rearranged as follows:

\frac{D}{32}e^\gamma = u e^{u}

Use the definition of the Lambert W function, which gives u = W(z), where
z = \frac{D}{32} e^\gamma:

u = W \left( \frac{D}{32} e^\gamma \right)

Recall that u = \ln(N) + \gamma. Substituting this in gives

\ln(N) + \gamma = W \left( \frac{D}{32} e^\gamma \right),

and thus

\ln(N) = W \left( \frac{D}{32} e^\gamma \right) - \gamma.

Both sides are finally exponentiated to solve for N:

N = e^{ W \left( \frac{D}{32} e^\gamma \right) - \gamma}

The equation provides a simple way to deduce N given D, such that the baseline Zipfian distribution in the form of the harmonic series can be used in accordance with the previous specification. The following two lines of Python code generate the staking set S for a specific deposit size, where eg=np.euler_gamma and numpy and scipy.special are assumed to be imported as np and scipy.special:

N = round(np.exp(scipy.special.lambertw(D*np.exp(eg)/(32)).real - eg))  # lambertw returns a complex value; take the real part
S = 32*N/np.arange(1, N + 1)

A.2 Quantity of validators under a pure Zipfian distribution

Recall that the number of stakers is as previously derived

N = e^{ W \left( \frac{D}{32} e^\gamma \right) - \gamma}.

Among these N stakers, 1/64 will have a stake of 2048 or higher:

2048 = \frac{32N}{N_{h}},
N_{h} = \frac{N}{64}.

Under ideal circumstances, that stake will be divided up into

V_{h} = \frac{1}{2048}\sum_{n=1}^{N_h} \frac{32N}{n} = \frac{32N}{2048} \cdot (\ln(N/64) + \gamma) = \frac{N}{64} \cdot (\ln(N/64) + \gamma)

validators. However, the \frac{N}{64} stakers will roughly add an expected \gamma validators each to that figure, due to waste when splitting up stake into its last validators (i.e., the cumulative effect of the fractional parts). There are 63N/64 stakers with less than 2048 ETH. The total number of validators under a pure Zipfian distribution, where stakers maximize consolidation, is thus

V=\frac{N}{64} \left(63+\ln(N/64) + 2\gamma \right).

The equation is a very accurate approximation. For example, at 30M ETH, the outlined procedure gives 88065 validators, and the equation (after rounding N in the first step) gives 88065.385 validators.
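
A short sketch evaluating the closed-form count for a given deposit size (reusing N from Appendix A.1; the printed value is illustrative, not asserted):

import numpy as np
import scipy.special

def zipfian_validator_count(D):
    # V = (N/64) * (63 + ln(N/64) + 2*gamma) for a pure Zipfian staking set
    # with maximal consolidation, with N taken from Appendix A.1.
    eg = np.euler_gamma
    N = round(np.exp(scipy.special.lambertw(D * np.exp(eg) / 32).real - eg))
    return N / 64 * (63 + np.log(N / 64) + 2 * eg)

print(zipfian_validator_count(30_000_000))  # the text reports roughly 88065 at 30M ETH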

Appendix B: Prediction and evaluation

B.1 Evaluation procedure

Cumulative finality of a block is simulated by a boolean mask that iterates through the committees, entering every validator seen up to and including a particular slot/committee. Thus, the starting point is a binary mask of all validators in the committee of the first processed slot. The operation then progresses through the slots of the epoch, entering previously unseen validators from a committee into the binary mask applicable to a specific slot. Finalized validators are then summed at each slot, from which D_f and thus \widetilde{F}_ {\! g} are computed. For best accuracy, the evaluation is performed in a circular fashion, as previously discussed. If S_{\text{ep}} is high, there can be an upper limit on how many starting points are evaluated, e.g., starting at every other or every third slot. The evaluation for Section 6 used a limit of ten different starting points.

A potential optimization is to also compute a separate mask (or list of indices) that specifies only the newly unseen validators for each slot, i.e., validators present in the current committee AND NOT present in the cumulative finality mask computed up to the previous committee. The benefit is that summation can be restricted to the newly added validators, with the cumulative sum derived at the previous slot/committee then added on.
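
A minimal sketch of the evaluation loop, assuming committees is a per-slot list of validator-index arrays and balances holds each validator's stake (the exact definitions of D_f and \widetilde{F}_ {\! g} follow Section 6 and are not reproduced here):

import numpy as np

def cumulative_finality(committees, balances, start):
    # Iterate circularly over the epoch's committees from `start`, marking
    # validators as seen and recording the cumulatively finalized stake per slot.
    n_slots = len(committees)
    seen = np.zeros(len(balances), dtype=bool)
    finalized = []
    for i in range(n_slots):
        committee = committees[(start + i) % n_slots]
        seen[committee] = True                  # enter previously unseen validators
        finalized.append(balances[seen].sum())  # contribution to D_f at this slot
    return np.array(finalized)

# Circular evaluation: average the curves over several starting points, e.g.
# curves = [cumulative_finality(committees, balances, s) for s in range(0, n_slots, 3)]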

B.2 Weighted aggregate finality gap

The aggregate finality gap \widetilde{F}_ {\! g} generally seems like a well-balanced optimization criterion. Yet in some scenarios, it may be desirable to prioritize high finality in the initial slots (thus also slowing down rotation), while in others, a short time to full finality overall may be preferred (thus also increasing the activity ratio discussed in Section 8.3). A weighted aggregate finality gap can therefore be useful. This post used a weighting that provides a slightly shorter time to full finality on average. This helped resolve edge cases in the log-normal set (Appendix B.4) where the mechanism was brought very close to full finality, but the last fraction of finality took several slots. However, the opposite direction can also be explored, factoring in, for example, requirements concerning rotation speed.

Define a scenario where full finality can be achieved in 2 regular slots but two auxiliary slots are added as {2|4}. The weighting was designed to affect the following outcomes equally: {2|4}, {4|7}, {10|15} and {21|28}. This weighting for {a|b} was:

w = \frac{b\sqrt[3]{b-a}}{ka}.

The constant k was set to 2^6 and the weight applied to each evaluated number of auxiliary slots. This can be done in two ways: \widetilde{F}_ {\! gw} = \widetilde{F}_ {\! g}(1+w) or \widetilde{F}_ {\! gw} = \widetilde{F}_ {\! g} \ +w, with the first being used in this work.
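
For illustration, the weight for the four anchor cases (k = 2^6; this quick check is ours, not from the original analysis):

def weight(a, b, k=2**6):
    # w = b * cuberoot(b - a) / (k * a), applied as F_gw = F_g * (1 + w)
    return b * (b - a) ** (1 / 3) / (k * a)

for a, b in [(2, 4), (4, 7), (10, 15), (21, 28)]:
    print((a, b), round(weight(a, b), 4))  # all four come out near 0.04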

B.3 Interpolated ground truth

The ground truth for modeling was not auxiliary committees C_{\text{aux}} but instead the number of auxiliary validators V_{\text{aux}} to be added. The main reason why C_{\text{aux}} is generally undesirable as a ground truth is related to the fin-like pattern in Figures 11-12. The optimal C_{\text{aux}} will shift at the boundaries where regular committees require one more regular slot, causing the ground truth to oscillate as D rises. Targeting auxiliary validators avoids this issue. The total number of committees C shown in Figure 14 could have been used as well, but this precludes parabolic interpolation with \widetilde{F}_ {\! gw} for the regular committee as one of the interpolation points.

A remaining issue is that the V_{\text{aux}} that minimizes \widetilde{F}_ {\! gw} is discretized into steps differing by \hat{V}_ {\!a}, since the minimum can only be defined at integers in C_ {\text{aux}}. Define the number of auxiliary committees that minimizes \widetilde{F}_ {\! gw} as C^*_{\text{aux}}. Parabolic interpolation was performed across the three neighboring points, \widetilde{F}_ {\! gw}(C^*_{\text{aux}}-1), \widetilde{F}_ {\! gw}(C^*_{\text{aux}}) and \widetilde{F}_ {\! gw}(C^*_{\text{aux}}+1) to derive a relative position:

w_{V} = \frac{\widetilde{F}_ {\! gw}(C^*_{\text{aux}}+1)-\widetilde{F}_ {\! gw}(C^*_{\text{aux}}-1)}{2(2\widetilde{F}_ {\! gw}(C^*_{\text{aux}}) - \widetilde{F}_ {\! gw}(C^*_{\text{aux}}-1) - \widetilde{F}_ {\! gw}(C^*_{\text{aux}}+1))}.

Define V_{\text{aux}}(C^*_{\text{aux}}) as the number of auxiliary validators at the optimal number of auxiliary committees. The ground truth V^{y}_{\text{aux}} is given by the w_{V}-weighted average of the neighboring values. Thus, if w_{V}<0, it becomes

V^{y}_{\text{aux}}=V_{\text{aux}}(C^*_{\text{aux}}-1)\,|w_{V}| + V_{\text{aux}}(C^*_{\text{aux}})\,(1-|w_{V}|),

with the corresponding weighting toward V_{\text{aux}}(C^*_{\text{aux}}+1) applied if w_{V}>0. Predictions for the number of auxiliary validators in Section 7 are correspondingly denoted V^{x}_{\text{aux}}.
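
A sketch of the interpolation step, assuming f_prev, f_opt, f_next hold \widetilde{F}_ {\! gw} at C^*_{\text{aux}}-1, C^*_{\text{aux}}, C^*_{\text{aux}}+1 and v_prev, v_opt, v_next the corresponding numbers of auxiliary validators (hypothetical variable names):

def interpolated_ground_truth(f_prev, f_opt, f_next, v_prev, v_opt, v_next):
    # Relative vertex position from parabolic interpolation around the discrete optimum.
    w = (f_next - f_prev) / (2 * (2 * f_opt - f_prev - f_next))
    if w < 0:    # vertex lies toward C*_aux - 1
        return v_prev * abs(w) + v_opt * (1 - abs(w))
    return v_next * w + v_opt * (1 - w)   # vertex lies toward C*_aux + 1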

B.4 Generation of a log-normal distributed validator set

To provide a wider variety of examples of validator sets, an additional set with a log-normal distribution was generated. The mean \mu_{\mathcal{V}} was first drawn from a normal distribution centered at 400 ETH with a standard deviation of 128 ETH, restricted to lie within the range s_{\text{min}} = 32 ETH to s_{\text{max}} = 2048 ETH. Next, the standard deviation \sigma of the log-normal distribution was drawn uniformly from the interval [0, 3]. To provide edge cases with validator sets that have no variability at all, any \sigma below 0.2 was set to 0.

Given the selected mean \mu_{\mathcal{V}} and standard deviation \sigma, the mean of the log-normal distribution in the logarithmic space \mu was computed as

\mu = \ln(\mu_{\mathcal{V}}) - \frac{\sigma^2}{2},

with the goal of keeping the mean in the original space close to \mu_{\mathcal{V}}. Validators were then generated up to the sought quantity of stake D by sampling from a log-normal distribution defined by the parameters \mu and \sigma, ensuring that each generated validator remained within the bounds s_{\text{min}} to s_{\text{max}}.
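
A sketch of the generation procedure as described (numpy only; whether out-of-range draws are clipped or resampled is not specified in the text, so clipping is assumed here):

import numpy as np

def lognormal_validator_set(D, rng=None):
    rng = rng or np.random.default_rng()
    s_min, s_max = 32.0, 2048.0
    mean_v = float(np.clip(rng.normal(400, 128), s_min, s_max))  # mean in the original space
    sigma = rng.uniform(0, 3)
    if sigma < 0.2:                     # edge case: no variability at all
        sigma = 0.0
    mu = np.log(mean_v) - sigma**2 / 2  # keeps the mean in the original space near mean_v
    validators, total = [], 0.0
    while total < D:
        s = float(np.clip(rng.lognormal(mu, sigma), s_min, s_max))
        validators.append(s)
        total += s
    return np.array(validators)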

1 post - 1 participant

Read full topic

Networking Privacy Problems in the P2P Network and What They Tell Us

Published: Sep 20, 2024

View in forum →Remove

Authors: Lioba Heimbach, Yann Vonlanthen, Juan Villacis, Lucianna Kiffer, Roger Wattenhofer

Preprint: [2409.04366] Deanonymizing Ethereum Validators: The P2P Network Has a Privacy Issue

TL;DR

The messages exchanged in the Ethereum P2P network allow a silent observer to deanonymize validators in the network, i.e., map a validator to an IP address of a node in the network. Our deanonymization technique is simple, cost-effective, and capable of identifying over 15% of Ethereum’s validators with only three days of data. This post discusses the technique, its implications, and potential mitigations to protect validators’ privacy in the P2P network.

Background

Ethereum’s P2P network is what allows validators to exchange important messages like blocks and attestations, which keeps the blockchain running. With over a million validators, it’s not practical for each one to send a vote (attestation) to every node for every block, especially to keep Ethereum accessible to solo stakers. To make things manageable, voting (i.e., attestations by validators) is divided in two main ways:

Time Division Across Slots: Validators only need to vote once per epoch (i.e., once every 32 slots), in a random slot. Thus, only a fraction are voting in a given slot.

Network Division Across Committees: Validators are split into 64 committees. Within each committee, a set of validators is assigned as aggregators that collect and combine attestations into a single aggregate. This division of attestations into committees is further mirrored in the network layer, which is also divided into 64 attestation subnets (overlay sub-networks). Each committee is assigned to one of these 64 subnets, and the corresponding attestations are broadcast only within the respective subnet. These subnets are also referred to as topics in the context of GossipSub, the underlying P2P implementation used by Ethereum.

Attestation Propagation in GossipSub: When a validator signs an attestation, it gets published to its specified subnet by sending it to a subset of peers that are part of the subnet. The node hosting a validator does not need to be subscribed to this specific subnet, since their committee changes every epoch. Instead, each node in the network subscribes to two subnets by default, and participates in propagating attestations only in these two subnets - these are known as backbone duties. Additionally, each node maintains a connection with at least one peer in each subnet so that their own attestations can be sent to the correct subnet in one hop.

Deanonymization Approach

Given the background on Ethereum node behavior, we describe how an ideal peer (a peer who gives us perfect information) would behave. Let us assume we are connected to a peer running V validators who is a backbone in two subnets. The peer’s validators will attest V times per epoch. Let us assume we receive perfect information from this peer, meaning they forward all attestations they hear about in their two backbones to us. In each epoch, we will receive V attestations from our peer for their validators, and N\cdot \frac{2}{64} for all other N validators.

Observation: An ideal peer will only send us an attestation in a subnet they are not a backbone of if they are the signer of the attestation.

Thus, in this scenario, we receive all attestations for the V validators of the peer and can distinguish them as the only attestations we do not receive from the two backbones of the peer. Thus, linking validators to peers in this scenario is trivial.
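
A toy simulation of this idealized setting (our illustration only; committee assignment is simplified to a uniform random subnet per validator per epoch, and all parameter values are made up):

import random

def suspects_after_one_epoch(n_validators=10_000, peer_validators=frozenset({0, 1, 2, 3}),
                             backbones=frozenset({12, 13}), n_subnets=64):
    # Which validators does an observer flag as hosted on an ideal peer after one epoch?
    flagged = set()
    for v in range(n_validators):
        subnet = random.randrange(n_subnets)       # the validator's committee/subnet this epoch
        forwarded = subnet in backbones or v in peer_validators
        if forwarded and subnet not in backbones:  # seen outside the peer's backbone subnets
            flagged.add(v)
    return flagged   # a subset of peer_validators; repeating over epochs recovers them all

print(suspects_after_one_epoch())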

Case Study

In practice, however, network message data is not perfect. To showcase this, we plot the attestations received from an example peer across time and subnets. For this peer, we identify four validators hosted on it; their respective attestations are highlighted in red, blue, yellow, and green, while the remaining attestations are shown in pink. Notice that the attestations from these four validators, who happen to have consecutive identifiers, appear equally distributed across subnets. In contrast, the vast majority of attestations come from the two subnets where the peer acts as a backbone (subnets 12 and 13 for the sample peer). Thus, we can locate validators on our peers by observing how the attestations belonging to a validator, as received from the peer, are distributed across subnets.

Additionally, and this is where the imperfect information comes into play, the validators hosted on the peer are occasionally tasked with being aggregators in a subnet approximately every 30 epochs per validator. During these times, they temporarily become a backbone (smaller pink horizontal strips) for these subnets and receive attestations from multiple validators belonging to the subnet.

Heuristics

Based on the above observations and other network behaviors which lead to imperfect information such as temporary disconnects, we develop a set of heuristics to link a validator with a node. We verify our results (see pre-print for more details).

Comparison to Other Approaches

We are aware of three existing approaches to deanonymize peers in the P2P network that, similarly to us, only rely on observing messages.

A research post explores mapping validators to peers by observing which peer consistently first broadcasts a block. There also exists a medium post that discusses using attestation arrival times in a similar fashion. The presented analysis is based on data collected on the Gnosis Beacon Chain. Finally, in parallel to our work, a further research post discussed using dynamic subscriptions to deanonymize validators in the P2P network.

We believe that compared to these approaches our method requires significantly less data or concurrent network connections (in the case of timing analyses). Further, it is less prone to noise in comparison to those approaches based on arrival times and also works if a node hosts more than 62 validators (this is the limit of the approach based on dynamic subscriptions). Thus, we suspect it to be able to more precisely deanonymize a larger proportion of the network in less time.

Measurement Results

By deploying our logging client across four nodes over a period of three days, we were able to deanonymize more than 15% of Ethereum’s validators in the P2P network. Our nodes were located in Frankfurt (FR), Seoul (SO), Virginia (VA), and Zurich (ZH). By deploying a greater number of nodes and running the measurement for longer, we presume this figure would increase.

With the data we collected, we can also make additional observations about the geographic decentralization and hosting of validators, as well as the behavior of staking pools.

Geographic Decentralization

We show the distribution of validators across countries in the following figure both overall and separately for the four nodes we ran. We locate the largest proportion (around 14%) in the Netherlands. Further, 71.53% of the validators we locate are in Europe, 11.95% are in North America, 11.52% are in Asia, 4.90% are in Oceania, 0.06% are in Africa and 0.03% are in South America.

Additionally, we notice geographical biases, e.g., the SO node’s high relative proportion of deanonymizations in Australia and South Korea. Thus, we presume that the skew towards Europe could be a result of us running two out of the four nodes in Europe.

Cloud Hosting

We perform a similar analysis to understand how peers are run - whether they are hosted on cloud providers or run through residential ISPs (likely home stakers). Overall, around 90% of the validators we locate are run through cloud providers, with the other 10% belonging to residential ISPs. We plot the distribution across organizations and find that eight out of the ten largest organizations are cloud providers. Further, we locate the largest number of validators in Amazon data centers, i.e., 19.80% of the validators we locate.

Staking Pools

We also take a deeper look at the practices of the five largest staking pools (Lido, Coinbase, Ether.Fi, Binance, and Kraken). On average, we observe 678 validators on a given peer for staking pools, with the largest node running 19,263 validators (!).

Additionally, many staking pools utilize node operators and many of the node operators run validators for various staking pools. This creates a dependency between the staking pools. In particular, we find five instances of validators from two different staking pools that utilize the same node operators being located on the same machine.

Security Implications

Taking Out Previous Block Proposers: One security issue that’s been discussed is the incentive for the proposer of slot n+1 to prevent the proposer of slot n from publishing a block. If successful, the slot n+1 proposer can include both the missed transactions and new ones in their block, earning more in fees. Since proposers are known in advance (about six minutes), an attacker could deanonymize the proposer for slot n and launch a temporary DoS or BGP hijack attack, preventing them from submitting their block. Importantly, this attack only needs to last for four seconds - the window for making a block proposal.

Breaking Liveness and Safety: Extending this attack, an attacker could continuously target the upcoming proposers to stop the network’s progress. If more than one-third of block proposals are missed, Ethereum’s finality gadget won’t be able to finalize blocks, halting the network. Even worse, safety could be compromised as many Ethereum light clients assume the chain head is finalized. By breaking network synchrony through DoS or network partitioning, attackers could cause serious issues.

Mitigations

To mitigate these security risks one can either improve privacy in the P2P network or protect against potential attacks. We discuss both avenues.

Providing anonymity

Increase Subnet Participation: Validators could subscribe to more subnets than the default, making it harder for adversaries to link specific attestations to validators. This increases the communication overhead on the network, potentially undermining Ethereum’s goal of enabling solo stakers to run validators with minimal resources. However, given the increase of MAX_EFFECTIVE_BALANCE in the upcoming hard fork, there might be room for a slight increase in the number of P2P messages.

Run Validators Across Multiple Nodes: Validators could distribute their attestation broadcasts across multiple nodes, making it harder to deanonymize them. While this increases operational costs, it can enhance privacy by spreading validator responsibilities across different IP addresses.

Private Peering Agreements: Both Lighthouse and Prysm clients allow validators to set up private peering agreements, where a group of trusted peers helps relay gossip messages. While this improves performance and reliability, it also provides some privacy, making it harder to trace validators to a single IP. Instead, an attacker would have to target multiple peers in the agreement. However, finding trusted peers can be costly and difficult, especially for smaller stakers.

Anonymous Gossiping: Protocols like Dandelion and Tor have been proposed to enhance anonymity. Dandelion, for example, sends messages through a single node first (the “stem” phase) before broadcasting to the network (the “fluff” phase), which helps conceal the message origin. However, these methods introduce delays and might not be fast enough for the Ethereum P2P network.

Defending Against DoS

Network Layer Defenses: The libp2p framework used for the Ethereum P2P layer already includes some defenses like limiting the number of connections, rate-limiting incoming traffic, and auto-adjusting firewalls. However, these aren’t foolproof, and manual intervention might still be needed during attacks.

Secret Leader Election: Another potential defense against DoS attacks is keeping the identity of block producers secret until they propose blocks. This idea, called secret leader election, avoids other issues and looks promising. Some proposals have been made for Ethereum, but they’re still in the early design phase as far as we are aware.

1 post - 1 participant

Read full topic

Block proposer FOCIL Resource Design Considerations

Published: Sep 19, 2024

View in forum →Remove

Special thanks to Thomas, Julian, Barnabe, and Jihoonsong for reviewing it

This document was motivated by our work on the FOCIL consensus spec, where we realized that the protocol required more thoughtful consideration around resource constraints since certain details were not explicitly specified in the FOCIL Ethereum research post.

Prerequisite

Before we begin, we assume the following setup to establish a clean baseline for our considerations:

  • The setup is based on the Electra hard fork. It also makes sense to revisit this on top of EIP-7732 (ePBS) for comparison
  • We are assuming solo block building and releasing, where the proposer is not running MEV-Boost. This is the first key component to get right, while the Builder API is a secondary consideration
  • We are assuming a solo staker setup with typical compute, memory requirements, and bandwidth that you can easily follow on the Ethereum chain today

Actors

Before we proceed, we assume the following actors are part of the protocol and analyze their responsibilities:

  • Inclusion List (IL) committee members, who are responsible for constraining the next slot proposer by its set of inclusion list transactions
  • The proposer, who is responsible for proposing the next slot
  • Attesters, who are attesting to the next slot for the head of the chain
  • Nodes, which are verifying and following the chain. Proposers and attesters are part of nodes that have staked Ether

Timeline

We assume the following timeline in which the IL committee, proposer, and attesters perform some honest actions:

  • Slot n-1, t=6: The IL committee publishes their local Inclusion Lists (ILs) over a global topic after learning the contents of block n-1
  • Slot n-1, t=9: Attesters and honest verifying nodes lock in their view of the local ILs
  • Slot n, t=0: The block proposer for slot n releases block B, which includes the payload that should satisfy the IL requirement
  • Slot n, t=4: Attesters for slot n vote on block B, verifying the IL aggregation by comparing it to their local IL views and confirming whether block B is “valid”
    • We overload the word “valid” when referring to a block; it could mean “importable,” “canonical,” or something else. See the open questions for further clarification

Interval 1: IL Committee Releases Local IL

Actor: Inclusion List Committee

IL committee members retrieve a list of IL transactions from the EL client given the head (CL → EL call), then sign the local IL (transactions + summaries) and release it to the gossip network.

Resource Considerations

  • Retrieving IL transactions from the EL mempool → CPU/MEM
  • Signing the inclusion list → CPU
  • Uploading the inclusion list to the gossip network → Bandwidth (Upload)

Actor: Nodes (including Attesters)

Nodes following the chain will download the IL, verify it for anti-DOS (not importing it to EL yet), and forward it to other peers. Nodes also import the IL into fork choice and track which ILs have been seen using an aggregate cache. Attesters and nodes following the chain should have the same view of the chain.

Resource Considerations

  • Downloading the IL → Bandwidth (Download)
  • Forwarding the IL → Bandwidth (Upload)
  • Verifying the IL for anti-DOS → CPU/MEM
  • Caching seen and aggregate ILs → MEM
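
A minimal sketch of the seen/aggregate IL cache mentioned above (a hypothetical structure for illustration; the actual FOCIL spec may track different fields):

from collections import defaultdict

class ILCache:
    """Tracks which committee members' local ILs have been seen per slot,
    plus the deduplicated union of their transactions."""
    def __init__(self):
        self.seen = defaultdict(set)        # slot -> committee member indices already seen
        self.aggregate = defaultdict(dict)  # slot -> {tx_hash: tx}

    def add(self, slot, member_index, transactions):
        if member_index in self.seen[slot]:
            return False                    # duplicate (or equivocating) IL; ignore
        self.seen[slot].add(member_index)
        for tx_hash, tx in transactions:    # assume (hash, tx) pairs for illustration
            self.aggregate[slot].setdefault(tx_hash, tx)
        return True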

Actor: Proposer

The proposer for the next slot actively monitors the IL gossip network, collecting and aggregating the local ILs. At the IL aggregation cutoff (interval #2), the proposer updates the block-building process with the list of IL transactions to be included in its block. This requires a CL → EL call.

Resource Considerations

  • Inherits the same costs as nodes following the chain

Proposer Edge Case

If the next slot’s proposer observes a sufficient number of inclusion lists based on a parent hash it hasn’t seen, the proposer will need to manually request the missing beacon block, import the block, and build on top of that block.

Conclusion

Based on the above, we can identify the potentially resource-intensive areas and narrow in on them:

  • IL Committee’s CPU impact: IL transaction retrieval from EL & signing: while there are resource demands here, this is presumed to be relatively inexpensive and not a major concern.
  • Nodes’ bandwidth impact: Nodes downloading and uploading ILs may consume significant bandwidth, especially since the research post currently states that the inclusion list size is flexible/unbounded. This introduces a potential DOS risk, as a malicious IL committee member could flood the network with a large number of transactions, even if they are invalid. Nodes would still gossip the ILs before importing them. Anti-DoS measures need to be considered carefully.

Interval 2: Nodes Lock Their View, Proposer Imports IL Transactions

Actor: Proposer

The proposer updates the block-building process with the list of inclusion list transactions. This is a CL → EL call.

Resource Considerations

  • Updating the block-building process with the list of inclusion list transactions → CPU/MEM

Actor: Nodes (including Attesters)

Nodes lock their inclusion list view and stop accepting local inclusion lists from this point onward.

Resource Considerations

  • Lock local inclusion list view → None

Conclusion

  • Proposer’s CPU impact: Importing the IL transactions could disrupt the ongoing block-building process, potentially straining the execution layer client’s CPU during transaction simulation. This may become complicated under account abstraction, as transactions may invalidate each other. This should be further analyzed.

Interval 3: Proposer Releases Block

Actor: Proposer

The proposer retrieves the execution payload from the EL client (CL → EL call), and releases it to the beacon block gossip network. Everyone else then verifies the block.

Resource Considerations

  • Retrieving the payload from the EL client → CPU/MEM

Actor: Nodes

Nodes receive the beacon block and verify it. The new verification steps include checking the inclusion list aggregate construction and confirming whether the inclusion list satisfies the evaluation function, which is completed on the CL. The checking of IL conditions (whether they can be skipped due to conflicts or not) is performed on the EL.

Resource Considerations

  • Verifying that the inclusion list is satisfied on CL → CPU
  • Verifying inclusion list conditions on EL → CPU

Conclusion

The additional duties for the proposer do not seem to be a significant concern. The new verification steps for nodes, i.e., verifying that the inclusion list satisfies the evaluation function, may introduce some additional CPU load, but this does not appear to be a major issue.

Interval 4: Attester Committee

Actor: Attester

The attester votes for the beacon block using the LMD GHOST fork choice rule. Attesters will only vote for a beacon block that satisfies the inclusion list evaluation function, based on their observations from Interval 1.

Resource Considerations

  • Attesters voting for a block that satisfies the inclusion list evaluation function → No additional cost

Conclusion

There is no difference from today.

Resource Consideration Summary

As seen above, the most significant resource concerns revolve around inclusion list upload, download, and the potential for spamming from a node’s perspective. Another key concern is the overhead on nodes for verifying and importing the inclusion list, as well as the proposer’s need to update its block-building process to satisfy the inclusion list. These aspects require careful consideration and design to ensure efficiency and security.

Open Questions

Based on the above, we outline several open questions that will influence how the specification is written:

  1. Block Not Satisfying the Evaluation Function: How should a block that fails the inclusion list evaluation function be handled, and what design considerations come into play for such conditions?

    • Should it be treated similarly to blobs and not be importable?
    • Should it not be filtered by fork choice?
    • Should it not be valid in the state transition function?
  2. Inclusion List Equivocations: If an inclusion list committee member sends different versions of the inclusion list to different nodes, and they are all propagated across the network, what are the consequences of this action? How could such behavior negatively impact the proposer building the next block?

  3. Proposer Already Building on a Different Head: If the proposer builds on a different head than the one sent by the inclusion list committee, and thus needs to change its head view, what are the consequences of this action for block validity and proposer behavior?

  4. Inclusion List Transactions Invalidations: Local inclusion list transactions can be invalidated in a few ways. Even if these transactions are invalidated, the block should still be able to satisfy the evaluation function. Transactions may be invalidated as multiple inclusion lists merge with each other or with transactions in the block. Besides typical nonce checking, account abstraction introduces new ways for transactions to be invalidated, as balance can be drained with a static nonce. How much additional simulation a block builder needs to perform due to transaction invalidation and how much this affects its CPU compute remains to be seen for both MEV-Boost actors and local builders.

  5. Proposer’s Observation of the IL Committee Subnet: The proposer monitors the inclusion list committee subnet to know when it is ready to construct the aggregate. There are two design approaches here, and it’s worth considering them further. The first approach is a greedy proposer, where the proposer waits until t=9, gathers as many ILs as possible, sends them to the EL, and the EL updates its block. The second approach is a selective proposer, where the proposer waits until it has a sufficient inclusion list to satisfy the eval function, sends them to the EL, and can do this in less than t=9s or even earlier. The question is whether the second approach justifies the optimization to allow the proposer to release the inclusion list aggregate earlier. The second approach may only be well suited for an IL with its own dedicated gas limit.

1 post - 1 participant

Read full topic

Economics Trusted Advantage in Slot Auction ePBS

Published: Sep 19, 2024

View in forum →Remove

Thanks to Thomas Thiery, Anders Elowsson and Barnabé Monnot for reviewing this post. Also, thanks to everyone in the Robust Incentives Group and the ePBS Breakout Call #9 attendees for discussions. This post’s argument was presented during the ePBS Breakout Call #9 (recording; slides).

ePBS facilitates the fair exchange between a beacon proposer and a block producer. A block producer is the party that constructs the execution payload, which could be a sophisticated builder or a proposer. The beacon proposer sells the rights to build an execution payload to a block producer in exchange for a bid. The Unconditional Payment and Honest Builder Safety are two desiderata of this fair exchange. The former means that a proposer must receive the bid’s value regardless of the block producer’s actions. The latter roughly implies the block producer must receive the execution payload rights if it paid for them. The final desideratum is that there is No Trusted Advantage. In this context, we define this final desideratum as follows.

(Definition) No Trusted Advantage: The beacon proposer is incentivized to use the in-protocol commitment to commit to the block producer whose in-protocol bid value maximizes the beacon proposer’s utility.

Since most beacon proposers run MEV-Boost, it has become apparent that they want to outsource block building to sophisticated block producers. The No Trusted Advantage desideratum ensures that the commitment that beacon proposers use to outsource to block producers benefits from the Unconditional Payment and Honest Builder Safety guarantees that ePBS provides. If an ePBS design does not satisfy No Trusted Advantage, then a rational beacon proposer may be incentivized to sell the execution payload construction rights, or to receive payment for the sale, via other commitments that do not have the trustless fair exchange properties of ePBS. The trust a rational beacon proposer must then place in a third party hurts the credible neutrality of the network and defeats the purpose of ePBS.

This post argues that the slot auction Payload-timeliness committee (PTC) ePBS design does not satisfy No Trusted Advantage. We will present an informal model in which a rational proposer outsources execution payload construction via another commitment than the commitment facilitated via ePBS. In practice, this attack means that a proposer will not use ePBS as intended. Instead, the beacon proposer waits until the execution payload must be revealed and sells the execution payload construction rights via an out-of-protocol block auction, like how MEV-Boost currently works.

In the slot auction Payload-timeliness committee (PTC) ePBS design, a beacon proposer commits to a block producer at the beginning of the slot (t = 0). The block producer must reveal its execution payload halfway into the slot (t = 6). This time difference exists because a block producer must be able to observe attestations for the beacon block so that it knows that the beacon block will likely become canonical, ensuring Honest Builder Safety.

Consider a proposer that decides whether to sell the execution payload construction rights Early (t = 0) or Late (t = 6). The proposer must decide which auction it will use at time t = 0, and it can only choose one auction format. If the proposer is incentivized to sell Early, slot auction ePBS satisfies No Trusted Advantage. If the proposer is incentivized to sell Late, No Trusted Advantage is violated since the proposer would commit to itself in the beacon block and run a MEV-Boost-like auction just before t = 6. Assume the proposer maximizes its payoff.

Builders also bid to maximize payoffs. At the beginning of the slot, builders only know the distribution of values they could extract by building the execution payload. Just before t = 6, builders learn the precise value they could extract by building an execution payload, called the realized value.

If the proposer were to host an auction Early, builders would bid based on their distribution of values; specifically, risk-neutral builders would bid according to the expected value. If the proposer hosts an auction Late, builders bid based on their realized value. This post assumes bids are monotonically increasing in value. The critical insight is that, under many circumstances, the expected auction revenue is likely higher when bids are based on realized values than when they are based on expected values. This is because only the highest order statistics of the realized values are relevant.

Considering an ascending-bid first-price auction, the winning builder pays (roughly) the value of the builder with the second-highest valuation. Hence, if the second-highest expected value in the Early auction is lower than the expectation of the second-highest realized value, then the proposer will choose to sell via the Late auction.
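
A quick Monte Carlo sketch of this comparison (our illustration; the lognormal value distribution and all parameters are assumptions, not part of the original argument):

import numpy as np

rng = np.random.default_rng(0)
n_builders, n_trials, sigma = 5, 100_000, 1.0

# Builders draw values i.i.d. from a lognormal distribution (illustrative choice).
expected_value = np.exp(sigma**2 / 2)       # each builder's expected value
early_revenue = expected_value              # second-highest expected value (all equal here)

realized = rng.lognormal(0.0, sigma, size=(n_trials, n_builders))
late_revenue = np.sort(realized, axis=1)[:, -2].mean()  # E[second-highest realized value]

print(early_revenue, late_revenue)          # the Late auction revenue comes out clearly higher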

(Main Result) Slot Auction ePBS does not satisfy No Trusted Advantage: In an ascending-bid first-price auction, a rational beacon proposer will auction the execution payload construction rights via the Late auction if the second-highest expected value (Early auction revenue) is lower than the expectation of the second-highest realized value (Late auction expected revenue). In this case, slot auction ePBS does not satisfy No Trusted Advantage.

This result relies on two key assumptions:

  1. The beacon proposer has access to a secondary market to sell the execution payload construction rights to block producers in the Late auction.
  2. Block producers have less access to a secondary market for selling the execution payload construction rights to other block producers in the Late auction than the beacon proposer.

If the first assumption does not hold, then the beacon proposer effectively has no option to sell in the Late auction. A secondary market will likely be available to the beacon proposer since this market already exists via MEV-Boost. On the other hand, Terence has argued that ePBS makes same-slot unbundling attacks easier if a beacon proposer were to sell Late, because a beacon proposer could release equivocating execution payloads without losing fork-choice weight or being slashed.

The second assumption could be motivated by the complexity of facilitating trust between two sophisticated parties. Perhaps relays are less willing to facilitate fair exchange between two sophisticated parties. Perhaps block producers will continuously request bids from bidders in the secondary market, whereas today, proposers do not. Perhaps adverse selection is worse if the auctioneer is sophisticated.

Suppose the second assumption does not hold, and the beacon proposer and builders have equal access to the secondary market. In that case, the beacon proposer’s decision to sell in the Early or the Late auction is a risk-reward trade-off. This is because a block producer would then incorporate the value it could get by reselling the item in the secondary market in its bid, which would be equivalent to the value a beacon proposer could get. If the beacon proposer is less risk-averse than builders, it will sell in the Late auction; otherwise, it will sell in the Early auction.

The key point is that whether ePBS is used—whether No Trusted Advantage is satisfied—depends on the risk appetite of beacon proposers and the state of the secondary market if ePBS were deployed with slot auctions. Yet it is entirely unclear what these proposers’ risk appetite or the secondary market’s state will be. In contrast, MEV-Boost has given the ecosystem a lot of information on how block auctions work, providing confidence in it satisfying No Trusted Advantage.

Satisfying No Trusted Advantage in slot auction ePBS is challenging. This desideratum could be achieved by forcing a sale in the Early auction, for example, using MEV-Burn; Execution Auctions use MEV-Burn. Whether MEV-Burn is sufficient to satisfy No Trusted Advantage is still understudied.

This argument could be pivotal in whether ePBS is deployed with block or slot auctions. The difference between the two is huge in terms of market structure. Therefore, it would be very valuable to have numerical validation of the theoretical argument presented here. If you want to create a simulation that tests this theory, please get in touch with Julian Ma (julian.ma@ethereum.org)

3 posts - 2 participants

Read full topic

Layer 2 Decentralized Anti-MEV sequencer based on Order-Fairness Byzantine Fault-Tolerant (BFT) consensus

Published: Sep 13, 2024

View in forum →Remove

by KD.Conway

TL;DR

  • This post introduces a Decentralized Anti-MEV Sequencer based on Order-Fairness Byzantine Fault-Tolerant (BFT) Consensus, a mechanism designed to counteract MEV and ensure transaction fairness.

Order Fairness

Received-Order-Fairness [1]: with parameter 1/2 < 𝛾 ≤ 1, this property dictates that if a 𝛾 fraction of honest nodes receive a transaction tx before tx′, then tx should be ordered no later than tx′.
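
A minimal sketch of checking this property for a proposed ordering, given each honest node's local receive order (illustrative names; every node is assumed to have received every transaction):

from itertools import permutations

def satisfies_receive_order_fairness(ordering, receive_orders, gamma):
    # ordering: the final transaction order; receive_orders: one list per honest node.
    pos = {tx: i for i, tx in enumerate(ordering)}
    for tx_a, tx_b in permutations(ordering, 2):
        saw_a_first = sum(ro.index(tx_a) < ro.index(tx_b) for ro in receive_orders)
        if saw_a_first / len(receive_orders) >= gamma and pos[tx_a] > pos[tx_b]:
            return False   # a gamma fraction saw tx_a first, yet tx_a is ordered later
    return True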

Introducing the Anti-MEV Sequencer

Our proposed solution is a Decentralized Anti-MEV Sequencer that leverages an Order-Fairness Byzantine Fault-Tolerant (BFT) consensus mechanism. This system provides:

  1. Decentralization: Instead of a centralized sequencer, we will build a sequencer network with multiple nodes contributing to transaction ordering and batching.

  2. Order-Fairness: Transactions are processed based on the time they were received by the nodes in the sequencer network, ensuring no one participant can manipulate transaction ordering.

  3. Byzantine Fault Tolerance: The consensus protocol ensures the system remains operational even if some of the participants behave maliciously.

Workflow

  1. When a user wants to send a transaction on a layer 2 blockchain, they submit the transaction to the sequencer network.

  2. The Order-Fairness BFT consensus is employed to determine the correct order of transactions. This guarantees that, even if a minority of nodes act maliciously, the system can still reach consensus on a fair transaction order.

  3. After reaching consensus, the sequencer batches the transactions and submits them to the Rollup smart contract on Ethereum, where they are executed in the agreed-upon order.

For details on the system implementation of the Order-Fairness BFT consensus, please refer to the corresponding references at the end of this post.

References

[1] Kelkar, Mahimna, et al. “Order-fairness for byzantine consensus.” Advances in Cryptology–CRYPTO 2020: 40th Annual International Cryptology Conference, CRYPTO 2020, Santa Barbara, CA, USA, August 17–21, 2020, Proceedings, Part III 40. Springer International Publishing, 2020.

[2] Kelkar, Mahimna, et al. “Themis: Fast, strong order-fairness in byzantine consensus.” Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security. 2023.

3 posts - 2 participants

Read full topic

Sharding PANDAS: A Practical Approach for Next-Generation Data Availability Sampling

Published: Sep 13, 2024

View in forum →Remove

Authors: Onur Ascigil (1), Michał Król (2), Matthieu Pigaglio (3), Sergi Reñé (4), Etienne Rivière (3), Ramin Sadre (3)

TL;DR

  • PANDAS is a network layer protocol that supports Danksharding with 32 MB blobs and beyond.
  • PANDAS aims to achieve a 4-second deadline for random sampling (under the tight fork choice model).
  • Following the Proposer-Builder Separation (PBS), resourceful builders perform the initial distribution (i.e., seeding) of samples to the nodes.
  • PANDAS proceeds in two phases during each slot: 1) Seeding Phase, where the chosen builder of a slot distributes subsets of rows and columns of a 2-D encoded blob to the validator nodes, and 2) Row/Column Consolidation and Sampling phase, where nodes sample random cells and at the same time retrieve and reconstruct assigned rows/columns to boost the data availability of cells.
  • PANDAS uses a direct communication approach, which means 1-hop, i.e., point-to-point communications, for both seeding and sampling phases rather than a gossip-based, multi-hop approach or a DHT.

We make the following assumptions when designing PANDAS:

Assumption 1) Resourceful Builders: Following the Proposer-Builder Separation (PBS) scheme, in PANDAS, a set of resourceful builders — e.g., cloud instances with sufficiently high upload bandwidth such as 500 Mbps or more — undertake the distribution of seed samples to the network.

Assumption 2) Builder Incentives: The builders have an incentive for the blob data to be available since the block will be accepted only if DAS succeeds. However, different builders can have different amounts of resources. The interest of rational builders is to guarantee that data will be considered available while spending a minimal amount of resources.

Assumption 3) Validator Nodes (VNs) are the primary entities of DAS protocol: A single Validator Node (VN) performs only a single sampling operation (as one entity), independent of the number of validators it hosts.

Assumption 4) Dishonest Majority: A majority (or even supermajority) of VNs and builders can be malicious and, therefore, may not follow the protocol correctly.

Assumption 5) Sybil-resistant VNs: An honest VN can use a Proof-of-Validator scheme to prove that it hosts at least one validator. If multiple nodes attempt to re-use the same proof, they can be blocklisted by other honest nodes and builders.

Below are the objectives of PANDAS:

Objective 1) Tight fork choice: Honest validator nodes (VNs) complete random sampling before voting for a block, even when the majority of VNs are malicious. Therefore, we target the tight fork choice model, which means that honest VNs in a slot’s committee must complete random sampling before voting within four seconds into that slot.

Objective 2) Flexible builder seeding strategies: Given that different builders can have different resources, our design allows the block builder the flexibility to implement different blob distribution strategies, each with a different trade-off between security and resource usage. For higher security, the builder can send more copies of the blob’s cells to validator nodes, ensuring higher availability. Conversely, to minimise resource usage, the builder can distribute at most a single copy of each cell, reducing bandwidth usage at the expense of lower security. This flexible approach allows the builder to navigate the trade-off between ensuring data availability and optimising bandwidth, while remaining incentivized to have the block deemed available by validator nodes so that it is accepted.

Objective 3) Allowing Inconsistent Node Views: Our objective is to ensure that the VNs and the builders are not required to reach a consensus on the list of VNs. While we aim for the VNs and builders to generally agree on the set of VNs in the system, it is not necessary for the VNs to maintain strictly consistent views or for the builders’ and VNs’ views to be fully synchronised.

PANDAS Design

Continuous Peer Discovery: To achieve Objective 3, the nodes in the system perform continuous peer discovery in parallel to the protocol phases below to maintain an up-to-date “view” containing other peers. The builder and the VNs aim to discover all the VNs with a valid Proof-of-Validator. We expect both the builder and VNs to have a close but not perfect view of all the VNs in the system.

A membership service running the peer discovery protocol inserts new (verified) VNs to the view and eventually converges to a complete set of VNs. Peer discovery messages are piggybacked to sample request messages to reduce discovery overhead.

PANDAS protocol has two (uncoordinated) phases, which repeat during each slot:

Phase 1) Seeding,

Phase 2) Row/Column Consolidation and Sampling

In the seeding phase, the builder pushes subsets of row/columns directly to the VNs where row/column assignment is based on a deterministic function. Once a VN receives its samples from the builder, it consolidates the entire row/column it is assigned to (by requesting missing cells from other VNs assigned to the corresponding row/column) and simultaneously performs random sampling.

VNs do not coordinate to start consolidating and sampling. Therefore, a node finishing phase 1 can begin phase 2 immediately without coordinating with other nodes. The VNs who are the committee members of a slot must complete seeding and random sampling within 4 seconds into the slot.

Below, we explain the two phases of our protocol in detail.

Phase 1- Seeding: The builder assigns VNs to individual rows/columns using a deterministic function that uses a hashspace as we explain below. This mapping of VNs to individual rows/columns is dynamic and changes in each slot. The mapping allows nodes to locally and deterministically map nodes to rows/columns without requiring the number or full list of nodes to be known.

The Builder prepares and distributes seed samples to the VNs as follows:

1.a) Mapping Rows/Columns to static regions in the hashspace: The individual rows and columns are assigned static regions in the hashspace as shown in the upper portion of Figure 1.

1.b) Mapping VNs to the hashspace: The builder uses a sortition function FNODE(NodeID, epoch, slot, R) to assign each VN to a key in the hashspace. The function takes parameters such as NodeID, which is the identifier of the node (i.e., peer ID), the epoch and slot numbers, and a random value R derived from the block header of the previous slot.


Figure 1: Assignment of Row samples to VNs. The Column samples are mapped similarly.

A VN assigned to a row’s region will receive a subset of the cells belonging to that row from the builder. As the VNs are re-mapped to the hashspace during each slot using FNODE, their row/column assignments can also change.
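
A sketch of what such a mapping could look like (a hypothetical construction for illustration; the actual FNODE and region layout in PANDAS may differ). Each VN is hashed to a point in [0, 1), and each row owns a static, equally sized region of that interval; columns are mapped analogously:

import hashlib

def f_node(node_id: bytes, epoch: int, slot: int, r: bytes) -> float:
    # Hash (NodeID, epoch, slot, R) to a point in the unit interval [0, 1).
    digest = hashlib.sha256(node_id + epoch.to_bytes(8, "big") +
                            slot.to_bytes(8, "big") + r).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def assigned_row(node_id: bytes, epoch: int, slot: int, r: bytes, n_rows: int) -> int:
    # The row whose static region contains the VN's key for this (epoch, slot, R).
    return int(f_node(node_id, epoch, slot, r) * n_rows)

Because any node can evaluate such a function locally for any peer it knows about, both the builder and the VNs can work out who should custody a given row or column without coordination, and the assignment automatically reshuffles every slot.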

NOTE: A dynamic, per-slot assignment of rows and columns to VNs is impossible in a gossip-based seeding approach where per-row and per-column gossip channels must remain relatively stable over time.

1.c) Row/Column Sample Distribution: For each row and column, the builder applies a best-effort distribution strategy to push subsets of each row/column to the VNs mapped to the corresponding row/column’s region. The builder uses a direct communication approach, particularly a UDP-based protocol, to distribute the cells for each row/column directly to the VNs.

Rationale for direct communication: We aim to complete the seeding phase as quickly as possible to give time for committee members to complete random sampling before voting (Objective 1).

Row/Column Distribution Strategies: We allow the builders to choose distribution strategies based on resource availability in line with Objective 2. A trade-off between resource usage and data availability exists for different distribution strategies. Consider the example in Figure 2 for distributing two rows. In one extreme case (on the left), the builder distributes the entire row 1 to each VN in the row’s region for improved data availability at the expense of higher resource usage. In another extreme case, the builder sends non-overlapping row pieces of row 6 to each VN in that row’s region, which requires fewer resources but results in less availability of individual cells.

We are currently evaluating different distribution strategies, including ones that can deterministically map individual cells of rows/columns to individual VNs in the row/column’s region.

NOTE: The builder is only involved in the Seeding phase.


Figure 2: Two (extreme) strategies to distribute row samples to the VNs in the corresponding row’s region.

Phase 2- Row/Column Consolidation and Sampling: VNs that are part of the current slot’s committee aim to complete random sampling within the slot’s first four seconds (i.e., voting deadline). To boost the availability of cells, particularly for the committee members of the slot who must perform (random) sampling within four seconds, the VNs also consolidate, i.e., retrieve the full row and column they are assigned to based on the FNODE mapping as part of row/column sampling.

2.a) VN Random Sampling: The VNs in the current slot’s committee attempt to retrieve 73 randomly chosen cells as soon as they receive their seed samples from the builder.

Using the deterministic assignment FNODE, VNs can locally determine the nodes expected to eventually custody a given row or column.

Sampling Algorithm: Some of these nodes may be offline or otherwise unresponsive. Sequentially sending requests for cells risks missing the 4-second deadline for the committee members.


Figure 3: Sample Fetching Example: The rows and columns assigned to each VN are shown on the top of the corresponding VN. VN14 knows to send a request to VN78 to retrieve cell one based on the knowledge of the mapping FNODE.

At the same time, sending requests to all peers holding copies will lead to an explosion of messages in the network and bear the risk of congestion. Fetching must, therefore, seek a tradeoff between the use of parallel and redundant requests on the one hand and latency constraints on the other hand. Our approach employs an adaptive cell-fetching strategy using direct communication between nodes through a UDP-based (connectionless) protocol. The fetching algorithm can tolerate losses and offline nodes.

2.b) VN Row/Column Consolidation: If a VN receives fewer than half of the cells of its assigned row or column from the builder (as a consequence of the builder’s chosen distribution strategy), it requests the missing cells from other VNs. During row/column consolidation, a VN requests cells only from the VNs assigned to the same row/column’s region. Once a VN has half of the cells of a row or column, it can locally reconstruct the entire row or column.

The Rationale for Consolidating Row/Column:

  • Reconstructing missing cells: while performing row/column sampling, VNs reconstruct missing cells.
  • To boost the availability of cells: Given the deterministic mapping (FNODE), the builder can choose any distribution strategy to send subsets of rows and columns to the VNs. Row/Column consolidation aims to improve the availability of samples so that random sampling can be completed on time.

Ideally, the builder should select a seed sample distribution strategy that enables VNs to consolidate rows and columns efficiently. To facilitate this, the builder can push each VN a map (together with the seed samples) that details how individual cells of a row/column are assigned to VNs within that row/column’s region as part of the builder’s distribution strategy. With this map, VNs can quickly identify and retrieve missing cells to reconstruct a complete row, thereby improving the availability of the data.

NOTE: In some DAS approaches, the term ‘row/column sampling’ refers to nodes retrieving multiple rows and columns before voting on the availability of the blob. In our approach, nodes retrieve rows and columns to enhance data availability, supporting validators who must perform random sampling before they vote.

We refer to this as ‘row/column consolidation’ instead of ‘row/column sampling’ because in PANDAS, committee members vote based on random sampling, and they do not directly sample entire rows or columns.

What about Regular Nodes (RNs)?

Unlike VNs, RNs do not obtain seed row/column samples from the builder. The builder sends initial seed samples to a Sybil-resistant group of VNs that use the Proof-of-Validator scheme. There is currently no mechanism for RNs to prove that they are not Sybils; therefore, the initial distribution of samples from the builder only uses VNs.

Using the public deterministic function FNODE, RNs can be similarly mapped to individual row/column regions. Once mapped to a region, RNs can (optionally) perform row/column consolidation to retrieve entire rows and columns and respond to queries for cells within their assigned region.

Like other nodes, RNs must perform peer discovery. In general, RNs aim to discover all the VNs and can also seek to discover other RNs. Given the knowledge of other peers through peer discovery, RNs can perform random sampling through direct communication. Unlike VNs, RNs are not under strict time constraints to complete sampling — they can start sampling after the VNs, for instance, after receiving the block header for the current slot.

Discussion & On-going Work

We assume rational builders have an incentive to cut costs (and under-provision) but, at the same time, aim to make blocks available (to be rewarded). This implies that builders will want row/column consolidation to be as efficient as possible: with efficient consolidation, which boosts the availability of cells, the builder can send fewer copies of each cell during the seeding phase and thereby cut costs.

We are currently experimenting with different distribution strategies, with malicious VNs withholding samples and attempting to disrupt peer discovery. Our DAS simulation code is available in the DataHop GitHub repository.

(1) Lancaster University, UK

(2) City, University of London, UK

(3) Université Catholique de Louvain (UCLouvain)

(4) DataHop Labs

5 posts - 4 participants

Read full topic

Block proposer AUCIL: An Auction-Based Inclusion List Design for Enhanced Censorship Resistance on Ethereum

Published: Sep 12, 2024

View in forum →Remove

By @sarisht @kartik1507 @voidp @soispoke @Julian
In collaboration with @barnabe @luca_zanolini @fradamt - 2024-09-12T04:00:00Z UTC

TLDR;

In this post, we introduce an AUCtion-based Inclusion List design, AUCIL, that leverages competition within an inclusion list committee consisting of rational parties. The protocol design relies on two key components: (i) an input-list creation mechanism allowing committee members to pick non-overlapping transactions while maximizing their fees, and (ii) an auction mechanism allowing parties to ensure most of these input lists are included in the final output inclusion list. The former ensures that many censored transactions are considered for inclusion, and the latter uses competition to incentivize including as many of the input lists as possible in the output inclusion list.

Introduction

The centralized builder ecosystem of Ethereum today has led to ~2 builders with the power to decide which transactions are posted on Ethereum. This centralization leads to censorship concerns since the builders have complete authority over which transactions are included. The current solution proposed (and rejected) by Ethereum (EIP 7547) requires the current proposer to determine the inclusion list (or the set of censored transactions) to be included by the next proposer. Such a proposer also acts as a single point of failure, which can easily be bribed to exclude transactions. This has led to proposals such as COMIS and FOCIL that require inputs from multiple proposers to be aggregated to form the inclusion list.

Intuitively, using multiple proposers implies the need to bribe multiple parties for a transaction to be excluded. However, do all parties include the transaction in the first place? Since the resulting inclusion list is finite (limited to the block size), how does each of these parties decide which transactions to include in its local list such that maximizing its utility also increases the system’s throughput? Moreover, when aggregating the transactions to produce the inclusion list, how many points of failure can be bribed to exclude transactions? This post introduces a multi-proposer design called AUCIL to address these questions.

Motivation

Let’s first motivate the first part: how the inclusion lists should be created. In existing inclusion list designs, the implicit assumption is that an IL Proposer can include as many transactions as it sees. While FOCIL and COMIS leave the selection of transactions for the Local Inclusion List underspecified, Fox et al. assume that there is no network congestion. However, including all the transactions could lead to a scenario where the size of the inclusion list is larger than the block size. In such a scenario, the builder (constrained by the transactions in the Inclusion List) would add as many transactions as possible, dropping any leftover transactions in the inclusion list.

The first thing to note is that it never makes sense for an IL Proposer to add more transactions than fit in the block, and thus there could be an implicit block space size constraint (\mathcal{L}) on the Local Inclusion List (we will refer to these as Input Lists).

Now, consider that the proposer is passive (i.e., rational but does not accept a bribe). Since each input list could be of size \mathcal{L}, the resulting union of lists could be of size \geq \mathcal{L}. The builder (or the proposer without PBS) is constrained to pick transactions from the Inclusion List; it would pick the top \mathcal{L} paying transactions, and the rest would not execute. Thus, the inclusion list proposers would only want to include the top \mathcal{L} transactions, and all the previous analysis made for inclusion lists holds in this case, scaled by the number of inclusion list proposers (Fox et al., FOCIL, COMIS).

However, things look very different in the presence of a bribing adversary. Consider that one party is bribed enough (we will quantify this at the end of this paragraph) to exclude a top-\mathcal{L}-paying transaction and instead replace it with the (\mathcal{L}+1)^{th} transaction. The builder now receives an inclusion list with \mathcal{L}+1 transactions and can choose any transaction to exclude. The adversary can further bribe the builder to exclude the target transaction. Since there is one extra transaction in the list, the block can be formed without violating the properties of an inclusion list (all transactions are executed, or the block space is full). Coming back to the incentives for the party: if it is the only party that deviates from picking the top \mathcal{L} transactions, then it would be the only recipient of the fee from the (\mathcal{L}+1)^{th} transaction. This may be larger than the utility it gives up (if f_t for the target transaction is not n times larger than f_{\mathcal{L}+1} for the inserted transaction). Even in the worst case, the bribe required would be only slightly larger than f_t/n.

All in all, the property of inclusion lists that allows a transaction to be excluded when the block is full is one the design in this post wishes to avoid. Thus, we restrict the size of input lists to at most \mathcal{L}/n, so that even if all parties propose unique transactions, the size of the inclusion list is less than the available block space.[1]

Other solutions to this problem exist, such as cumulative non-expiring inclusion lists and unconditional inclusion lists; however, these require additional state support, since parties would have to keep track of previous inclusion lists.[2]

As for the other question of how many points of failure exist in multi-proposer designs, the aggregation of lists from all parties is the most critical point of failure, and it has not yet been adequately studied. Fox et al. sidestep this by never truly aggregating, assuming the proposers’ inputs are included without analyzing the problem. In COMIS, the aggregator role is formalized, and the analysis assumes this role is trusted. FOCIL removes this assumption by using the proposer of the next block and keeping the point of failure in check with the committee of attesters. However, relying on attesters comes with its share of problems. Attesters are not incentivized to verify; as long as they vote with other attesters, they receive rewards without the risk of a penalty. Using attesters to compute is thus less reliable than relying on attesters to confirm the existence of the block or to verify a proof, as done in this post.

Model

In this post, we consider all parties involved in consensus as rational, i.e., trying to maximize the value they receive through transaction fees, consensus rewards, or bribery. We call each party collectively proposing the inclusion list an IL Proposer and its input an input list. We refer to the aggregator as the party that computes a union of these input lists to create an inclusion list. Differing from previous proposals, we assume that the input list size of each party is constrained: an input list can contain at most k \leq \mathcal{L}/n transactions, as mentioned in the previous section. The total number of IL Proposers is n. Each transaction tx_i pays a fee of f_i for inclusion in the inclusion list, which is paid to the IL Proposer(s) that include it (chosen by the user independently from the base fee and the Ethereum transaction fee). If a transaction repeats across multiple input lists, the fee is divided equally, traceably on-chain, among all the IL Proposers that included it.

We assume an external adversary with a budget such that it can bribe parties to take adversarial actions.

Problem Statement

The problem setting consists of n rational parties who locally have access to a set of censored transactions (M_i) that is continually updated (their mempool). Let M = \cap_i M_i. The problem is to create a list of valid transactions with each party contributing a share of the transactions it observes.

Adversarial model. We assume each of the n parties is rational, i.e., they maximize their utility. We assume a bribing adversary will bribe these parties to censor one or more transactions.

Definition ((b,p,T)-Censorship Resistance.) We say that a protocol is (b,p,T)-censorship resistant if given a budget b to an external adversary for bribing parties, for all transactions t \in T(M) at least p parties output a list which contains all the transactions in T(M).

The protocol design aims to maximize b for a fixed p and |T(M)|. More concretely, in non-multi-proposer inclusion list design schemes, b is typically O(f), but our protocol aims to obtain b = O(n\cdot f).

To facilitate understanding of the goal, T(M) can be considered the “feasible” subset of transactions in M, e.g., those paying sufficiently high fees subject to a space limit. The definition of T depends on the protocol we implement, and we justify why such a T is used.

In our protocol, we assume that M_i = M. When M_i \neq M, our protocol does not satisfy the definition, since it may output a higher paying transaction that appears in some M_i at the expense of some lower paying transaction in the intersection.

Input List Creation Mechanism

The first question we address is how IL Proposers select transactions for their input lists. A simple approach is for IL Proposers to naively choose the transactions that pay the highest fees, regardless of the actions of others. However, this greedy approach is not a Nash equilibrium. If all other IL Proposers are greedily selecting transactions, the rational choice for any IL Proposer might not be to do the same. Table 1 illustrates this point.

Strategy Objects Picked Utility
Pick Top Paying (o_1,o_2) 7
Alternate (o_3,o_4) 15

Table 1: Picking top-paying objects is not a Nash equilibrium. Consider transactions (\{o_1,o_2,o_3,o_4,o_5,o_6\}) with utilities (\{11, 10, 9, 6, 4, 3\}) respectively, and three players each with a maximum input list size of 2. The other players are assumed to follow the strategy of picking the top-paying transactions.
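
To make the arithmetic behind Table 1 concrete, here is a minimal Python sketch (our own illustration, using the utilities and player count from the caption, and assuming fees are split equally among players that pick the same object):

# Worked example for Table 1: three players, input lists of size 2,
# object utilities {o_1..o_6} = {11, 10, 9, 6, 4, 3}.
utilities = [11, 10, 9, 6, 4, 3]

# If every player greedily picks the top two objects (o_1, o_2),
# each object's fee is split three ways.
greedy_utility = (utilities[0] + utilities[1]) / 3
print("Pick top paying:", greedy_utility)  # 7.0

# If one player deviates and picks (o_3, o_4) while the other two
# keep picking (o_1, o_2), the deviator keeps those fees in full.
deviate_utility = utilities[2] + utilities[3]
print("Alternate:", deviate_utility)  # 15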

A more viable approach is to use mixed strategies, where each party selects transactions based on a predefined probability distribution. Deviating from this distribution would result in lower expected revenue. However, a mixed Nash equilibrium may not be sufficient, especially in games where players can wait to observe others’ actions before deciding. Thus, this post explores a correlated equilibrium instead.

A correlated equilibrium is a situation where each player is suggested specific actions, and deviating from these suggestions leads to lower utility, assuming others follow the suggestions. To prevent centralization (by asking a single known party to send recommendations), we propose a well-known algorithm that each party can run locally to simulate these suggested actions. Deviating from the algorithm would result in lower utility for the deviating party.

Algorithm 1: A Greedy Algorithm for Transaction Inclusion

Input: ( n \geq 0 ), ( m \geq 0 ), ( k \geq 0 ) (number of players, transactions, input list size)

Output: ( L_i ) arrays for all ( i \in P ) (final inclusion lists for each player)

  1. P \gets [1,\dots,n]
  2. U \gets [u_1,\dots, u_m]
  3. N \gets [1,\dots,1]
  4. \forall i \in P: L_i \gets [1,\dots,1]
  5. l \gets 0
  6. while l < k do
    1. i \gets 0
    2. while i < n do
      1. U_{curr} \gets (U \otimes L_i) \oslash N
      2. s \gets argmax(U_{curr})
      3. L_{i}[s] \gets 0
      4. N[s] \gets N[s] + 1
      5. i \gets i + 1
    3. end while
    4. l \gets l + 1
  7. end while
  8. return \forall i \in P: L_i

This algorithm iteratively updates each player’s transaction inclusion status. Each player’s input list (L_i) indicates whether a transaction has been included (0) or not (1). The algorithm aims to maximize utility values greedily, including transactions based on their current utility and the number of times each transaction has been included.

Description of the algorithm

Consider the following simulation protocol. All parties are first numbered randomly. Since the randomness needs to be the same across all parties, a random seed is agreed upon before the start of the protocol. All parties are assigned items greedily, one at a time. Each party picks the item that gives the maximum utility at that instant. To do so, it computes the current utility of all objects yet to be chosen, \left((U \otimes L_i) \oslash N\right). The first factor (U \otimes L_i) sets the utility of all objects already chosen by i to 0, and then \oslash N divides by the number of parties that would share the object if party i decides to pick it. The list of objects the party picks is updated (0 implies the object is chosen), and the number of parties picking the object is also updated. The procedure is repeated k times so that each party picks k objects. This protocol achieves a correlated equilibrium. Note that while the protocol assigns objects to parties one at a time, in practice the output recommends all transactions to the parties at once.
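
To make the description concrete, here is a minimal Python sketch of Algorithm 1 (our own code; the function name and the seed argument are ours, with the seed standing in for the pre-agreed shared randomness):

import numpy as np

def greedy_input_lists(utilities, n_players, k, seed=0):
    """Simulate Algorithm 1: each player greedily picks k objects,
    where an object's current utility is its fee divided by the
    number of parties that would share it."""
    rng = np.random.default_rng(seed)        # agreed-upon shared seed
    order = rng.permutation(n_players)       # players are numbered randomly
    U = np.asarray(utilities, dtype=float)   # fees of the m transactions
    N = np.ones(len(U))                      # divisor: 1 + players already holding each object
    L = {i: np.ones(len(U)) for i in range(n_players)}  # 1 = not picked, 0 = picked

    for _ in range(k):                       # each player ends up with k objects
        for i in order:
            current = (U * L[i]) / N         # zero out own picks, split by sharers
            s = int(np.argmax(current))      # best marginal object for player i
            L[i][s] = 0
            N[s] += 1
    # return the picked object indices per player
    return {i: [j for j, v in enumerate(L[i]) if v == 0] for i in range(n_players)}

# Example with the Table 1 numbers: 3 players, lists of size 2.
print(greedy_input_lists([11, 10, 9, 6, 4, 3], n_players=3, k=2))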

This protocol provably achieves a correlated equilibrium while also achieving a notion of game-theoretic fairness (an almost equal distribution of fees); a paper will follow soon. The set of all transactions chosen by the input list creation algorithm is T(M), for which we achieve (b,p,T)-censorship resistance through AUCIL, which follows.

Aggregation of input lists

After creating input lists, the next step is to aggregate them into an inclusion list for the next block. If a transaction appears in the inclusion list, it is constrained to appear in the next block. Since the space occupied by the input lists is fixed, the scheme cannot suffer from spam transactions, because each transaction is confirmed valid (with an adequate base fee) right before the block that includes it.

A standard way to approach this problem is to assign a party the role of an aggregator. This aggregator would compute the union of all the input lists and add it to the inclusion list. However, this aggregator is now a single point of failure. For instance, the aggregator may not receive input lists from all IL Proposers and thus cannot be expected to include all of them. However, if we account for this and only require it to include some threshold number of input lists, then the aggregator can strategically omit specific input lists and significantly reduce the budget required to censor transactions.

So, what can be done in this case? FOCIL requires the proposer of the following block to include an inclusion list that is a superset of the local input lists. However, it still allows some transactions to be left off the inclusion list (due to the threshold). Instead, we look at a different way to deal with this problem: we auction off the role of the aggregator; however, instead of paying a bid to win the role of the aggregator, the bids are the size of the inclusion list. Thus, if a party P proposes a larger inclusion list than all other parties, then P is awarded the aggregator role and reward.

Algorithm: AUCIL Outline

Participants: All IL proposers P_1, P_2, \ldots, P_n

Step 1: IL Proposers Broadcast Input Lists

  • For each proposer P_i:
    • P_i \rightarrow_B (broadcasts to all parties): \text{inpL}_i

Step 2: Parties Aggregate Input Lists into an Inclusion List and Broadcast It

  • For each party P_j:
    • \text{incL}_j = \bigcup_{i=1}^{n} \text{inpL}_i
    • P_j \rightarrow_B (broadcasts to all parties):\left(\text{incL}_j, \ell_j = \text{size}(\text{incL}_j)\right)

Step 3: Proposer Selects the Highest Bid Inclusion List

  • Proposer receives: \{(\text{incL}_1, \ell_1), (\text{incL}_2,\ell_2), \ldots, (\text{incL}_n,\ell_n)\}
  • Proposer selects the highest bid.

While Step 2 has clear incentives once aggregation rewards (u_a) are introduced, Step 1 and Step 3 are not incentive compatible. If all other parties broadcast their input lists, then it is dominant for a party not to broadcast its own input list; this way, it can create the largest inclusion list and thus win the auction. Thus, Step 1 is not incentive compatible. Similarly, the proposer is not incentivized to pick the largest bid. Censorship in auctions (Fox et al.) has been studied and is directly applicable here. Thus, Step 3 is also not incentive compatible.

Recall the definition of censorship resistance. If some protocol satisfies the definition of (b,p, T)-censorship resistance, then at least p parties output a non-censored inclusion list. Thus, we require the proposer to include proof of the included bid being greater than n-p other bids (e.g., including n-p bids). If the proposer fails to add such proof, the block would be considered invalid, thus making Step 3 incentive compatible.

We make the auction biased to deal with the problem of parties not broadcasting. First, observe that if no party broadcasts its input list, then the probability of winning the auction for any given party is very low; broadcasting its input list therefore at least yields the reward for having that input list included in the inclusion list. Thus, if parties believe that keeping their input lists private does not significantly increase their probability of winning, they are incentivized to broadcast them.

Algorithm: AUCIL

Participants: All IL proposers P_1, P_2, \ldots, P_n

Step 0: IL Proposers Generate Their Auction Bias

  • For each proposer P_i:
    • P_i generates a random bias: \text{bias} \gets \text{VRF}(P_i, \text{biasmax})
    • (The bias is uniformly distributed between 0 and \text{biasmax} and is added to the bid.)

Step 1: IL Proposers Broadcast Input Lists

  • For each proposer P_i:
    • P_i \rightarrow_B (broadcasts to all parties): \text{inpL}_i
    • (Proposers broadcast their input lists to all parties.)

Step 2: Parties Aggregate Input Lists into an Inclusion List and Broadcast It

  • For each party P_j:
    • \text{incL}_j = \bigcup_{i=1}^{y_j} \text{inpL}_i
      • (where y_j is the number of input lists party P_j receives.)
    • P_j \rightarrow_B (broadcasts to all parties): \left(\text{incL}_j, \ell_j = y_j + \text{bias}\right)
    • (Parties declare their bid with the added bias.)

Step 3: Proposer Selects the Highest Bid Inclusion List

  • Proposer receives: \{(\text{incL}_1, \ell_1), (\text{incL}_2,\ell_2), \ldots, (\text{incL}_n,\ell_n)\}
  • Proposer selects the highest bid and adds it to the block (\text{incL},\ell).
  • Proposer adds proof that the highest bid is greater than n-p other bids.

Step 4: Attesters Vote on the Validity of the Block

  • For each attester:
    • Attester receives: \{(\text{incL}_1, \ell_1), (\text{incL}_2,\ell_2), \ldots, (\text{incL}_n,\ell_n)\} and (\text{incL},\ell)
    • Attester verifies the attached proof and votes only if the proof is correct.
  • Block is considered valid if it receives more than a threshold of votes.

With the above algorithm, we claim that a party is incentivized to broadcast its input list unless the bias drawn is greater than \text{biasmax}-1. Even when the bias is greater than \text{biasmax}-1, a mixed Nash equilibrium still exists, and parties could still choose to broadcast.
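
For intuition only, here is a toy Python simulation of Steps 0-3 under honest behavior (our own sketch: the VRF is replaced by a plain uniform draw, every party is assumed to receive every broadcast list, and the "proof" is simply the set of n-p losing bids):

import random

def aucil_round(input_lists, biasmax, p, rng=random.Random(42)):
    """Toy run of AUCIL: every party aggregates the input lists it received,
    bids the number of lists plus a random bias, and the proposer picks the
    highest bid together with n - p losing bids as 'proof'."""
    n = len(input_lists)
    biases = [rng.uniform(0, biasmax) for _ in range(n)]       # Step 0 (VRF stand-in)

    # Steps 1-2: here every party receives every broadcast list (y_j = n),
    # so each inclusion list is the union of all input lists.
    inclusion_lists = [set().union(*input_lists) for _ in range(n)]
    bids = [n + biases[j] for j in range(n)]                   # ell_j = y_j + bias

    # Step 3: proposer picks the highest bid and attaches n - p other bids.
    winner = max(range(n), key=lambda j: bids[j])
    proof = sorted((b for j, b in enumerate(bids) if j != winner), reverse=True)[: n - p]
    assert all(bids[winner] >= b for b in proof)               # what attesters check in Step 4
    return inclusion_lists[winner], bids[winner], proof

# Example: 5 IL proposers with partially overlapping input lists.
lists = [{"tx1", "tx2"}, {"tx2", "tx3"}, {"tx4"}, {"tx1", "tx5"}, {"tx6"}]
incl, bid, proof = aucil_round(lists, biasmax=5 ** 0.5, p=3)
print(sorted(incl), round(bid, 2), [round(b, 2) for b in proof])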

Censorship Resistance

Censorship by bribery to IL Proposers

The first attack step an adversary can take is removing a transaction from the input lists. For this, assume that a bribe is given to those IL Proposers who are assigned to include the target transaction. This bribe should be enough to ensure that the target transaction is excluded from each input list with probability 1. It is assumed (for now) that each of these IL Proposers would compute the union of all observed input lists in Step 3.

Fox et al. analyze the bribe required in a multi-proposer scenario. In their case, it is assumed that the transaction repeats across all proposers. If a transaction pays a fee of f_i (a higher fee, in their setting), then the adversary would have to pay n times the fee to censor the transaction.

In our case, the analysis is similar. If the transaction repeats across \kappa_i input lists, then the expected bribe required is \kappa_i f_i. The parameter \kappa_i is directly proportional to \frac{n\cdot f_i\cdot k}{\sum f_i}, where \sum f_i is the sum of fees paid by all transactions chosen by the protocol. As an intuition for this number, one of our results ensures that the revenue distribution from each transaction is fair, so each transaction gives the same utility. (Say there exist two transactions paying fees of 15 and 5, respectively; then the former transaction would be included in thrice as many input lists as the latter, so the revenue from each is the same.) Here n\cdot k represents the total number of available slots, of which a transaction with fee f_i would occupy a fraction \frac{f_i}{\sum f_i} of the total space to maintain the equal-revenue assumption. Thus, if bribing the IL Proposers to exclude the transaction from the input lists is the dominant action (compared to the bribery of the aggregator that we discuss next), then the protocol would be (b=O(\frac{nkf_i^2}{\sum f_i}),n, T)-censorship resistant.

Censorship by bribery to aggregator

In an alternate bribery attack, the adversary could bribe a party to reduce its bid by excluding all input lists that contain the target transaction. The bid of such a party thus decreases by \kappa_i, which is the same as drawing a bias \kappa_i lower than what was actually drawn. A bias of \text{biasmax}-1 is supposed to have an almost 0 probability of winning, and thus the reduction of a party’s bias to \text{biasmax}-\kappa_i essentially means the adversary is bribing the party not to participate in the auction. From our analysis, the adversary would have to pay, in expectation, \frac{\kappa_i n}{\text{biasmax}} parties (each with a bias greater than \text{biasmax}-\kappa_i) a bribe of u_a each in order for them not to include the input lists containing the target transaction. Setting \text{biasmax} and u_a to \sqrt n and \sqrt n \cdot u_{il} \geq \sqrt n \cdot f_i respectively, we achieve (b = O(\frac{n^2kf_i^2}{\sum f_i}),n-\kappa_i\sqrt n+1,T)-censorship resistance.

Conclusion

We outline an input list building scheme that all parties are incentivized to follow. Working within the confines of limited-size inclusion lists, we achieve significant censorship resistance guarantees (proportional to the number of parties including the transaction). We then presented an aggregation scheme, AUCIL, that uses auctions to incentivize parties to include the largest inclusion list. AUCIL ensures that the aggregator is incentivized to add all input lists to the inclusion list. We are also analyzing how coalitions affect the censorship resistance guarantees and will publish the results soon. Meanwhile, it would be amazing to hear thoughts on AUCIL and the inclusion list building mechanism.


  1. Note that with EIP-1559, the cost to fill the block scales when the block space is full. And so, if the network is not congested, and the adversary is inserting artificial transactions to raise the congestion, then the cost of bribery would be high across multiple blocks. ↩︎

  2. We achieve the same “unconditional” property as Unconditional ILs without assigning exclusive Inclusion List space. ↩︎

3 posts - 2 participants

Read full topic

Economics Pricing Ethereum Blocks with Vol Markets with Implications for Preconfirmations

Published: Sep 12, 2024

View in forum →Remove

Ethereum Block Pricing in the Context of Vol Markets

by Lepsoe (@ETHGas)

With thanks to the Commit Boost and Titan teams for making Preconfs a near-term, open, and scalable possibility, and to Drew for prompting the market sizing exploration

TL;DR

  • With the forthcoming gas markets and the ability to buy Entire Blocks, we look at how to price these taking into account prevailing market Volatility, Token prices, Transaction Fees, and Liquidity
  • Treating the Blockchain/Network as a financial instrument, Block purchases are effectively Options on this network. If one can buy 5 blocks of Ethereum (e.g. 1 minute), one can observe prices in CEXs over this time with an option to monetize the difference between CEX and DEX prices (e.g. latency arb trade)
  • Buying a block is analogous to buying a Straddle on the Network, and all its DEXs. Taking into account transaction fees, liquidity and slippage, however, this is more analogous to a Strangle.
  • We then employ an arbitrage trade that involves Shorting European Strangles in CEX (e.g. Deribit, Binance, OKX), and Buying Blocks or Preconfs of Ethereum. This implies a minimum or floor price for one or many consecutive blocks
  • We can then draw a direct, real-time connection between the current implied Vol for ETH, BTC, SOL, etc… and Preconfs prices
  • We conclude that if ETH Vol is 75% and transaction fees are 0.10%, then the price of buying 5 consecutive blocks of Ethereum should be no lower than 6.9 Gwei
  • Historically, very short-end vol appears to rise dramatically higher than 75% with a Mean of 273%, although the median remains at 75% over the last 2 years
  • With the current PBS flow and prior to blockspace commitment contracts, this strategy is possible but limited to only the current/next block. With the ability to buy two or more blocks, it becomes easier to execute on and thus price Preconfs with confidence
  • Connecting the two markets, Vol and Macro traders may therefore trade the Preconf markets, in some cases, with little care as to how these instruments are used or valued with respect to the underlying physical gas markets themselves (e.g. typical orderflow, MEV)
  • The terms Preconfs and Blocks are used interchangeably for readability

Background

How much are Ethereum’s blocks worth?

Arbitrage, often referred to as ‘arb’ trading, typically involves quantitative strategies that exploit pricing discrepancies or minor imbalances between closely related financial instruments. These instruments may be similar in nature or expected to exhibit similar behaviors over time - they can be priced with models or priced using dynamic replication (such as options replicated through dynamic hedging).

One such arb is statistical arbitrage (‘stat arb’), which frequently employs mean reversion models to capitalize on short-term pricing inefficiencies. Another is latency arbitrage, which takes advantage of minute price variations across different trading venues. In cryptocurrency markets, a common form of arbitrage is known as CEX/DEX arb, a type of latency arbitrage where decentralized exchanges (DEXs) respond more slowly to market changes than centralized exchanges (CEXs), largely due to differing block or settlement times. In such scenarios, traders engage in relative-value or pairs trading between centralized exchanges (such as Binance and OKX) and decentralized exchanges (such as Uniswap and Curve).

The Network As a Financial Instrument

In this article, we look to delineate and quantify such an arbitrage trade between two seemingly different instruments: the Vol markets on CEXs vs the Ethereum Blockchain itself (i.e. the Network, not DEXs).

The purpose of this article is to introduce a closed-form floor price for Ethereum Blocks, drawing a direct relationship between the Vol markets and the minimum price one should pay for Ethereum Blocks. More specifically, we will look at the effect of selling Strangles on ETH (and other tokens) in CEX while buying Blockspace Commitments (or Preconfirmations) on Ethereum.

While this type of relationship may exist with limited effect today for 12 seconds, the burgeoning space of preconfirmations and validator commitments will enable this to exist for much longer periods turning what may be a theoretical exercise today into a practical exercise tomorrow.

Through this exercise, we position the Blockchain or Network itself as a financial instrument that can be used for macro hedging or relative value trading purposes.

What is a Strangle?

The building blocks of options markets or ‘Vol’ markets are ‘vanilla’ options known as calls and puts. Combining such vanilla options together at the same strike produces a ‘V-shaped’ payoff known as a ‘Straddle’. A Straddle will always have a positive intrinsic value or payoff enabling the buyer to monetize any movement of the underlying instrument.


Figure 1: Straddles vs Strangles

When the strikes are apart from one another, in the above example by a distance of ‘z’, they are called a ‘Strangle’. For example:

  • A Put and Call both with strikes of 100 (i.e. X) would collectively be called a Straddle
  • A Put and Call with strikes of 90 (i.e. X - z) and 110 (i.e. X + z) respectively, would collectively be a Strangle

Strangles payoff or have an intrinsic value only when the underlying spot price has moved by a sufficient distance, in this case ‘z’.

What Are Preconfirmations?

Preconfirmations and Blockspace Commitments are part of a new field of Ethereum research and development focused on giving Validators (called Proposers, i.e. those that propose the upcoming blocks) expanded abilities to sell blockspace in a way that gives them more flexibility than they are currently afforded within the current PBS (Proposer-Builder Separation) flow.

Such an initiative is intended broadly to bring more control in-protocol (as opposed to externally with Block Builders), and streamline scaling technology for the new field of Based Rollups.

While there are different forms of Blockspace Commitments, the general form has Proposers providing commitments to buyers - typically Searchers, Market Makers, Block Builders, and others looking to use the blockspace for transactions, among other purposes. For example, there are:

  • Inclusion Preconfirmations: Where Proposers issue guarantees to include transactions within a specified block, anywhere in the block
  • Execution Preconfirmations: Where Proposers issue guarantees to include transactions within a specific block, with a specific state or result
  • Whole Block Sales which may be called Entire Blocks or Execution Tickets: Where Proposers sell their block en masse to an intermediary who then engages in some form of pseudo block building consisting perhaps of a mix of their own trades, Inclusion Preconfirmations, Execution Preconfirmations, private order flow, and public order flow.

For the purposes of this paper, we will be referring to Whole Block Sales by Proposers, but may refer to them generically as Preconfirmations or Preconfs for ease of reading and consistency with some current nomenclature.

Current Preconf and Blockspace Pricing

The value of Ethereum blocks is often associated with the Maximum Extractable Value (MEV), that is, the largest amount of value that one could extract or monetize within a 12 second period. This may include a mix of the public’s willingness to pay for transactions (financial and non-financial), private order flow, as well as other MEV trades including sandwich attacks, atomic arbitrage, CEX/DEX arb, or other.

Extending to Multi-block MEV (MMEV) or consecutive-block valuation, MMEV valuation is often performed in the context of TWAP oracle manipulation attacks that produce forced liquidations via price manipulation. While there is an intersection between the longer-term CEX/DEX arb captured in single-block pricing discussions and the relative-value vol markets, we prefer the simplicity and forward-looking nature of the vol markets for the purpose of our pricing exercise.

Putting this together, there are multiple ways to value a single Ethereum block or a set of consecutive blocks. From our analysis, we present a floor price for Ethereum blocks driven by no-arbitrage pricing and the Vol markets in CeFi. From this floor price, one may additionally consider other forms of value capture to arrive at a true mid-market price of an Ethereum Block.

The Trade

Historical Background

Buying a block, or multiple blocks, of Ethereum gives one more control over order execution and state. Simply put, if it were possible to buy 12.8 minutes of Ethereum (i.e. 64 blocks or two epochs), one could watch prices as they move in CEX during this time, and at any point during this 12.8-minute period one could put on a relative value trade capturing the difference in prices between the CEXs and DEXs. If, for example, prices rose 5% in CEX during this time, one could sell assets in CEX and buy those same assets in DEX (where the prices haven’t moved), earning 5% in the process. While this may not be currently feasible, it is the starting point for discussion.

Historically, we can look at these dynamics by measuring the maximum price movements over 12 secs, 1 min, or more. We can then take into account the liquidity on DEXs and calculate a historical breakeven between the profitability of such transactions and the number of blocks for a given period. For more on this, see this article from Greenfield.

While possible to calculate, we’re more interested in looking forward, not backward. Enter the Vol markets.

Vol Markets & Strangles

To execute the trade above, one must cross the bid-offer, paying transaction fees on both the CEX and DEX side, as well as ‘time’ the market accordingly to maximize the arbitrage. One furthermore has to factor in the liquidity or depth of the market. That is, for the strategy to pay off, prices need to move beyond a certain minimum threshold, or in our case, a Strike price different from the current Spot price.

Let us assume that the “sum of transaction fees and slippage between CEX/DEX” (our ‘threshold’, or the distance to the Strikes) is 0.10%. If we have the Vol of the asset and a time horizon, we can now price this using Black-Scholes as a simple Strangle.

Assume the following:

  • Trade Size: $10mm
  • Token: ETH
  • Spot Price: 100 || to keep things simple
  • Interest Rates: 4.00%
  • Dividend Yield: 0.00%
  • Vol: 75%
  • Expiry: 32 Blocks (12.8mins)
  • Fees: 0.10 as accounted for in the following Strikes:
    • Strike 1: 100 + 0.10 = 100.10 - for the Call Option
    • Strike 2: 100 - 0.10 = 99.90 - for the Put Option

Result:

  • Call Price: 0.0620%
  • Put Price: 0.0619%
  • Strangle Price: 0.0620% + 0.0619% = 0.1239%
  • Price in USD Terms: $12,388
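
As a sanity check, the figures above can be approximately reproduced with a few lines of Python (our own sketch; it assumes 32 slots of 12 seconds and an ACT/365 day count, so small differences versus the quoted numbers may come from rounding or day-count conventions, and it reuses the 2,500 ETH price and 30M gas figures used later in the post to convert the premium into Gwei per gas):

from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_price(S, K, T, r, q, vol, kind):
    """Plain Black-Scholes price of a European call or put."""
    d1 = (log(S / K) + (r - q + 0.5 * vol ** 2) * T) / (vol * sqrt(T))
    d2 = d1 - vol * sqrt(T)
    if kind == "call":
        return S * exp(-q * T) * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
    return K * exp(-r * T) * norm_cdf(-d2) - S * exp(-q * T) * norm_cdf(-d1)

S, r, q, vol, fees = 100.0, 0.04, 0.0, 0.75, 0.10   # assumptions from the post
T = 32 * 12 / (365 * 24 * 3600)                     # 32 slots of 12s, in years (our assumption)
call = bs_price(S, S + fees, T, r, q, vol, "call")
put = bs_price(S, S - fees, T, r, q, vol, "put")
strangle_pct = (call + put) / S                     # premium as a fraction of spot

notional_usd = 10_000_000
premium_usd = strangle_pct * notional_usd
eth_price, gas_per_block = 2_500, 30_000_000        # figures used in the Gwei conversion below
gwei_per_gas = premium_usd / eth_price * 1e9 / gas_per_block
print(f"strangle: {strangle_pct:.4%}  premium: ${premium_usd:,.0f}  {gwei_per_gas:.0f} Gwei/gas")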


Figure 2: A Strangle on Ethereum and all its DEXs combined

Per the diagram above, if one could trade this Strangle in CEX for $12,388 (see spreadsheet for calculations), one should equivalently be able to trade Preconfs on Ethereum for the same price. If the underlying spot market in CEX moves up or down more than 0.10, whilst DEX prices stay the same, then these options become in-the-money…

Putting CEX and DEX together below, one would sell the Strangle on ETH in CEX but buy Preconfs on Ethereum giving them an almost identical payoff where z represents both the expected transaction fees and the distance to the Strike price for pricing purposes:


Figure 3: Short CEX Strangle + Long Ethereum Preconf

If the Vol markets imply a price of $12,399 for 12.8mins (i.e. 32 blocks) then this is the amount (less one dollar) that one would be willing to pay to buy up 32 consecutive blocks (i.e. 12.8mins) of Ethereum. Given the assumptions above, the expected value is always positive and we thus have a closed-form solution to Floor pricing for Preconfs.

The arbitrage carries two scenarios:

  • Prices are between 99.90 and 100.10: Both the Strangle and Preconf Expire ‘out-of-the-money’ without any cash settlement
  • Prices are beyond 99.90 and 100.10 with options expiring ‘in-the-money’. The Trader incurs a loss on the CEX Strangle, but then monetizes the gain in DeFi by entering into an off-market spot trade (with respect to CEX) crystallizing the in-the-money value of the option

Vol Traders do this 1000s of times a day, with automated systems and razor-sharp precision. Trading Vol vs Preconfs opens up an entirely new relative-value asset class for them to potentially buy vol or gamma much more cheaply.

Scenario Analyses and Sensitivities

Turning to Gas Market terminology, the price of $12,399 translates into a Gwei price of 165 Gwei ($12,399 / 2,500 * 1e9 / 30e6) assuming the ETH price is 2,500 in this example. Using the Strangle pricing method, we can then infer from the ETH Vol markets (75% vol in this case) the price of 1 block, all the way up to 32 consecutive blocks or slots as follows:


Figure 4: Price for N-Consecutive Blocks of Ethereum

Comparing the Strangle price for the period N(0,1) with the Strangle price for the period N(0,2), we can then price the Strangle for Slot 2 alone, N(1,2), and so on for the entire curve. We can furthermore take the ‘average preconf price’ for N slots.

Figure 5: Slot N Price vs Avg Price for N-Slots

The following table highlights the fees in Gwei that validators would get paid for specific blocks/slots with 5.16 Gwei as the average. This may be compared, for example, to historical Priority Fees that one receives via MEV-Boost where 4.04 Gwei is the average:


Figure 6: Historical Priority Fees from MEV-Boost. Priority Fees from 24 Jan 2024 to 9 Sep 2024.

Transaction Costs Impact on Pricing

The difference between the Strike Prices and the Spot Price, i.e. the transaction costs above, is taken to be uniform at 0.10%. In practice, however, transaction costs encompass i) actual transaction fees, and ii) liquidity/slippage in execution. Below, we see that Transaction Costs have a significant impact on Preconf pricing, especially at shorter times to maturity.


Figure 7: Preconf Pricing for varying levels of Transaction Costs

Volatility Impact on Pricing

Finally, as the CEX leg of the trade uses Volatility as the primary market input, we now consider the impact that volatility has on Preconf pricing, with Vega close to 0.1 Gwei at the 4th slot and ~0.06 Gwei at the 32nd slot. That is, at Slot 4, a 10% change in Vol impacts Block prices by 1 Gwei.


Figure 8: Preconf Prices for Different levels of Volatility

Refinements & Market Sizing

For market sizing, we look exclusively at the CEX Strangle vs Preconf on Ethereum L1.

Consecutive Blocks

The exercise considers buying multiple blocks, potentially up to 32 or 64 blocks depending on the lookahead window. In reality however, this is extremely difficult due to the diversity of Validators.

There is a subset of Validators that, for ideological or other reasons, do not adopt MEV-Boost and would be unlikely to adopt a framework that captures more MEV. In economic terms, they are not rational. It could be that they do not ‘believe’ in MEV, or they could simply be at-home stakers that haven’t upgraded to MEV-Boost. Either way, these Vanilla or self-built blocks account for slightly less than 10% (and decreasing) of blocks (see realtime data with ETHGas’ GasExplorer, and research with Blocknative).

Let’s assume the other 90% are rational (i.e. they are economically motivated) and that they are somehow able to coordinate with one another through some unifying medium for the sale of consecutive blocks. In this case, we can then model the frequency of single vs consecutive blocks, where about half of the time there are fewer than 7 consecutive blocks, and the other half somewhere between 8 and 32 consecutive blocks.


Figure 9: Frequency of Consecutive Blocks

Historical Volatility Analysis

Looking at almost 2 years of trades from 10 Sep 2022 to 10 Sep 2024 on Deribit, we uncover some fascinating dynamics for short-dated transactions.

1 Hour to Expiry

For those transactions with less than 1 hour to expiry, we find approximately 13,500 trades over this period, with a mean Vol of 107.52%, a median of 63%, and a 75th percentile of 102%. Note that Deribit’s Vols are capped at 999, suggesting that the mean may be higher than indicated.


Figure 10: Distribution of Implied Vol on ETH Options with less than 1 Hour to Expiry

12 Mins to Expiry

For transactions with less than 12 mins to expiry (or approx 64 blocks), we find almost 1,400 trades over this period, with a mean of 273% Vol, a median of 75% Vol, and a 75th percentile of 395% Vol.


Figure: 11: Distribution of Implied Vol on ETH Options 12 Mins to Expiry

<12 Minutes to Expiry

Across these 1,400 trades, we then split them into their 1-minute buckets to view distributions across times more closely associated with Preconf Block timeframes.


Figure 13: Distribution of ETH Implied Vol for the last 12 mins to Expiry

The Vol numbers are far larger than we expected, warranting further research into this area. While liquidity will need to be analyzed, we have provided some Preconf-implied pricing for Vols of much higher magnitude for convenience:


Figure 14: Preconf Implied Prices for very high levels of Volatility

Vol Smile

As you may recall, we’re not looking for at-the-money Vol (used for a Straddle) but rather for Vol as it relates to Strangles. The Vol for out-of-the-money options is almost always higher than for at-the-money options. To this effect, we have provided a heat map below giving some color on the smile.

Figure 15: Vol Smile for 0 to 12 minutes

Market Sizing

Bringing the above information together, we take the combined Vol set and use it as a proxy for Strangle pricing. To account for illiquidity, we then provide different scenarios at lower volatilities, assuming that as we sell more Strangles, the Vol decreases accordingly.

We can now size the market considering:

  • The historical mean Vol: 275%
  • The frequency of Consecutive Blocks: Per the above
  • The implied preconf Floor pricing as a function of Vol: Black-Scholes
  • And, making some adjustment for Liquidity: Reducing Vol by up to 200%


Figure 16: Preconf Pricing Based on Frequency of Consecutive Blocks, Historical Volatility and adjusted for Liquidity

Historically, the annual market size for Blockspace could equal approximately 419,938 ETH per year (~$1bln equiv); with approx 33 million ETH staked, this amounts to 5.33 Gwei per block, or an extra 1.25% in Validator Yields as a floor above Base Fees.

                    275% Vol        225% Vol        175% Vol        125% Vol        75% Vol
Gwei Total          282,615         218,322         155,081         93,997          38,350
Gwei per Block      39.25           30.32           21.54           13.06           5.33
ETH Total Fees      3,094,638       2,390,631       1,698,137       1,029,270       419,938
Increase to APYs    9.10%           7.03%           4.99%           3.03%           1.24%
$ Total Fees        7,736,594,273   5,976,577,160   4,245,342,208   2,573,176,209   1,049,844,310

Other Considerations

Liquidity

On the CEX side, we would like to assume there is infinite liquidity, but this is not realistic. In the example immediately above, we bump the Vol downward to adjust for this, but in reality we would need more order book information. Looking forward, this market could also be illiquid simply because there has never been another market, such as Preconfs, to trade it against. We would furthermore need to run the analysis considering tokens other than ETH.

Every day there is a 12-minute direct overlap where a set of option expiries for BTC, ETH, SOL, and XRP on Deribit (and other exchanges) roughly matches the time-frame for preconfs, enabling one to recalibrate and reconcile any intraday Vol positions against the actual Preconf markets with more accuracy. For the rest of the day, traders would need to run basis risk between the Vol positions on their books and their Preconf positions. As such, execution in the Vol markets and direct one-for-one pairs trading may be limited on a regular basis and only possible sporadically.

As an alternative to directly offsetting the Short Strangle positions with Long Preconfs, a trader may approach this on a portfolio basis and trade the greeks. In this instance, a preconf buyer may consider selling longer-dated, more liquid straddles, and buying them back up to 12 mins later or whenever the preconf is exercised. The gamma profile there is much less sharp meaning any moves in Spot will have a lesser impact on option price. There is additional Vol/Vega to consider (although less impactful for a short-dated option) and the time decay (which is in the arbitrageur’s favor here as they would be Short the options and theta decays faster closer to expiry). If one could seemingly buy Vol 5-10% cheaper via Preconfs over time, then this would indeed be attractive to options traders.

On the DEX side, liquidity across ETH and other tokens is limited to about $4-5mm at the time of this article. Taking into account the total volume on major DEXs, we would additionally expect about $200k of extra demand every block from general order flow. Although most of this may not typically be seen in the public mempool, over 32 blocks this would be $6.4mm, which one could use to estimate option expiration liquidity and/or capture via other conventional MEV approaches (i.e. front/back-runs).

More research on liquidity, and execution is warranted.

Inventory

To execute trades on two different venues, traders will need to hold sufficient inventory on both locations. For this reason, an additional cost of capital is not considered in this exercise.

For example, if the Call part of the Strangle ends up in-the-money (ITM), when the Preconf is exercised, the user will:

  • Buy, let’s say, ETH in the DEX and sell it in the CEX. That is, the user needs USDT/C inventory onchain, and ETH inventory in the CEX, to avoid any transfer lag.

Larger market makers should have sufficient liquidity on both sides, making this less of an issue.

European vs American Options

The CEX Strangle (i.e. where the Arbitrageur is ‘Short’) is a European Option unlike the Preconfirmation (i.e. where the Arbitrageur is ‘Long’) which is more an American Option. This gives the Arbitrageur positive basis such that the instrument they are ‘Long’ has more optionality or upside built into it. If the Preconf is early exercised, the trader receives the intrinsic value while the Strangle still has some time value (although minimal), therefore, the PNL is equal to the Net Premium minus the time value difference.

What About Other MEV and MMEV?

While there is some intersection between conventional MEV and the Strangle strategy as highlighted above, there is still value in the everyday deal-flow, alongside significant other forms of MEV that are not captured. Monetization of such flows would be separate from, and in addition to, the Floor price.

The Strangle exercise above suggests that some types of single-block MEV may currently be constrained by transaction costs which would indicate a non-linear MMEV for when multi-block purchases are possible (at least within the first few blocks).

Conclusions

The purpose of this paper is to open up a discussion and illustrate a novel approach for the pricing of preconfs - one that importantly responds in real-time to prevailing market conditions. While the execution of such a strategy is difficult, it is not insurmountable for sophisticated players to automate.

Perhaps the most important consideration is that the Price of the Preconfs is a function of the Size of the Markets. If both the Options markets on Deribit and DEX liquidity are 10x larger than they are today, the Preconf Price Floors would be 10x those indicated above. Financial markets often look for inflection points where trades that were almost-possible suddenly become mainstream. With Gas Markets opening up, Macro traders now able to hedge Vol with Preconfs, Based Rollups increasing liquidity, and a trend towards lower transaction fees, this is indeed an interesting area of research.

We believe that highlighting a seemingly odd relationship between token Vol and the Ethereum Blockchain itself will help to further the study of risk-neutral block pricing and are excited to discuss and explore this, and other approaches, with any other parties who may be interested.

References

[ 1 ] Pascal Stichler, Does multi-block MEV exist? Analysis of 2 years of MEV Data

[ 2 ] Öz B, Sui D, Thiery T, Matthes F. Who Wins Ethereum Block Building Auctions and Why?. arXiv preprint arXiv:2407.13931. 2024 Jul 18.

[ 3 ] Jensen JR, von Wachter V, Ross O. Multi-block MEV. arXiv preprint arXiv:2303.04430. 2023 Mar 8.

[ 4 ] Christoph Rosenmayr, Mateusz Dominiak - Statistical Arbitrage on AMMs and Block Building On Ethereum - Part 1

1 post - 1 participant

Read full topic

Decentralized exchanges Resolving the Dichotomy: DeFi Compliance under Zero Knowledge

Published: Sep 11, 2024

View in forum →Remove

This is a summary and enumeration of relevant research questions based on the recent EEA Article by the same title as this post.

Bulleted Summary

  • DeFi protocols face a compliance challenge due to the type of assets traded and their often decentralized governance.
  • A solution is leveraging blockchain-native compliance mechanisms, specifically smart contracts, and onchain verifiable zero-knowledge proofs.
  • This approach ensures regulatory compliance, weighted risk management, and required transaction reporting while preserving user privacy.
  • The framework attaches Compliance-Relevant Auxiliary Information (CRAI) to onchain transactions, enabling real-time compliance monitoring/verification, in a privacy-preserving way using zero-knowledge proofs.
  • The framework also specifies compliance-safe DeFi interaction patterns involving using smart contract wallets, DeFi compliance contracts, a compliance smart contract system, and zero-knowledge proofs to enforce compliance rules specified in the compliance smart contract system that defines compliance policies, attestation providers, and compliant assets.
  • The framework offers benefits like regulatory compliance, risk management, privacy protection, security, versatility, transparency, and accountability.
  • By adopting such a framework, DeFi protocols could navigate the regulatory landscape while maintaining their core principles.
  • Some of this solution already exists (compliance smart contract system, compliant assets, etc.) and needs to be further expanded (smart contract wallets, compliance wrapper contracts, DeFi-specific custom hooks, etc.).

Below is a list of open research questions in no particular order:

  • What are the potential challenges and limitations of implementing this framework in existing DeFi protocols?
  • How can the framework’s privacy features be further enhanced to accommodate complex compliance scenarios with many compliance assertions as zkps e.g. using proof aggregation and proof recursion?
  • How can the framework be extended to support a broader range of compliance requirements beyond KYC/AML e.g. incorporated DAOs, Power-of-Attorney?
  • What are the potential governance challenges associated with managing and updating compliance policies within the framework?
  • How can the framework’s transparency and accountability features be leveraged to further enhance DeFi e.g. custom hooks?
  • How can the framework be adapted to different regulatory environments and jurisdictions?
  • What are the economic implications of implementing this framework for DeFi users and protocols?

Given that part of the framework already exists, this post is intended to stimulate further discussion on the framework itself and its suggested open research questions.

Looking forward to the feedback from the Ethereum research community.

1 post - 1 participant

Read full topic

zk-s[nt]arks Lookup argument and its tweaks

Published: Sep 11, 2024

View in forum →Remove

In building the Placeholder proof system for =nil; Foundation, we use a lookup argument based on the Plookup paper by Aztec researchers. We took the Plookup technique as a starting point and then made some practical improvements for writing large PLONK circuits with complex logic.

A lookup argument allows the prover to prove that some table over a prime field (hereafter the assignment table) satisfies specific constraints: some cells computed from the assignment table (the lookup input) belong to a list of values that is also computed from the assignment table (the lookup table).

Join-and-split algorithm

The core of the Plookup technique is a sorting algorithm. We call it join-and-split because it includes two steps:

  • join — the lookup table columns are joined together with the input columns into a single large vector using a special reordering algorithm.
  • split — the constructed vector is split again into parts of the original size.

The case with a single lookup table and a single input column is described in detail in the Plookup paper. But it wasn’t enough for our use cases. We needed lots of efficiently packed lookup tables and lookup constraints applied to arbitrary rows and columns, and we didn’t want to repeat the lookup argument for each (input, table) pair.

So, we modified the join-and-split algorithm to be able to join more than two columns. This allows us to use multiple lookup constraints even if they are applied to the same rows, and to use a large lookup table even if its size is greater than the number of assignment table rows, by appending columns to the assignment table instead of rows. Balancing the number of assignment table rows against the number of columns helps find the best trade-off between prover performance and verification cost.
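
For readers unfamiliar with the underlying step, here is a toy Python sketch of the basic single-input, single-table join-and-split from the original Plookup construction (our own simplified illustration, not the multi-column generalization described above): the input and table are joined into one vector sorted by the table’s order and then split into two overlapping halves.

def join_and_split(lookup_input, lookup_table):
    """Toy single-column join-and-split: concatenate input and table,
    sort the result by the table's order, and split it back into two
    halves that overlap in one element, as in Plookup."""
    assert all(v in lookup_table for v in lookup_input), "input value missing from table"
    order = {v: i for i, v in enumerate(lookup_table)}                    # table defines the ordering
    joined = sorted(lookup_input + lookup_table, key=lambda v: order[v])  # 'join' step
    half = len(lookup_table)                                              # 'split' step
    h1, h2 = joined[:half], joined[half - 1:]
    assert h1[-1] == h2[0]                                                # halves share one element
    return h1, h2

# Example: prove every input cell lies in a small range-style table.
table = [0, 1, 2, 3, 4, 5, 6, 7]
inputs = [3, 3, 5, 0, 7, 1, 4, 2]
print(join_and_split(inputs, table))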

Selector columns

The original paper contains a technique for looking up tuples of values placed in the same or neighboring rows. It constructs linear combinations of columns with a random factor. By combining this approach with polynomial expressions for both the lookup tables and the input columns, we achieved full support for selector columns. The circuit designer can now control exactly which rows are constrained and which rows are reserved for storing lookup tables.

The Plookup paper also describes a technique for supporting multiple lookup tables. The authors propose associating each lookup table with a unique identifier and filling a tag column to mark which rows contain the lookup table with which identifier. A tag column for the input marks which constraints are applied to each marked row. The tag columns are included in the random linear combinations constructed for the lookup table and input columns, respectively. This approach is obviously limited: the sum of the lookup table sizes must be less than the total number of rows in the table.

We combined the lookup table identifier approach with our selector columns construction and the algorithms for large lookup tables. These modifications allow lookup tables to be stored and used without regard to the lookup argument’s restrictions, following whatever circuit design is best. This makes our lookup argument a universal and flexible tool.

A detailed description of our modifications can be found on our HackMD page. Feel free to share your comments!

1 post - 1 participant

Read full topic

Economics The Shape of Issuance Curves to Come

Published: Sep 10, 2024

View in forum →Remove

In this post we will analyze the consequences that the shape of the issuance curve has for the decentralization of the validator set.

The course of action is the following:

First, we will introduce the concept of effective yield as the yield observed after taking into account the dilution generated by issuance.

Second, we will introduce the concept of real yield as the effective yield that a validator obtains post expenses (OpEx, CapEx, taxes…).

Armed with these definitions we will be able to make some observations about how the shape of the issuance curve can result in centralization forces, as the real yield observed can push out small uncorrelated stakers at high stake rates.

Then, we will propose a number of properties we would expect the issuance curve to satisfy in order to minimize these centralization forces, and explore some alternative issuance curves that could address the aforementioned issues.

Finally, some heuristic arguments on how to fix a specific choice of issuance and yield curves.

Source Code for all plots can be found here: GitHub - pa7x1/ethereum-issuance

Effective Yield

By effective yield we mean the yield observed by an Ethereum holder after taking into account circulating supply changes. For instance, if everyone were a staker, the yield observed would effectively be 0%. As the new issuance is split evenly among all participants, the share of the circulating supply held by each staker would not change. Before taxes and other associated costs, this situation resembles a token re-denomination or a fractional stock split more than a real return. So we would expect the effective yield to progressively approach 0% as the stake rate grows to 100%.

On the other hand, non-staking holders are being diluted by the newly minted issuance. This causes holders to experience a negative effective yield due to issuance. We would expect this effect to be more and more acute as stake rates grow closer and closer to 100%.

These ideas can be put very simply in math terms.

Let’s call s the amount of ETH held by stakers, h the amount of ETH held by non-stakers (holders), and t the total circulating supply. Then:

s + h = t

After staking for a certain period of time, we will reach a new situation s' + h' = t', where s' and t' have been inflated by the new issuance i, which is related to the nominal staking yield y_s:

s' = s + i = s \cdot y_s

h' = h

t' = t + i = t + s \cdot (y_s - 1)

Now, let’s introduce the normalized quantities s_n and h_n. They simply represent the proportion of total circulating supply that each subset represents:

s_n \equiv \frac{s}{t}

h_n \equiv \frac{h}{t}

We can do the same for s'_n and h'_n:

s'_n \equiv \frac{s'}{t'} = \frac{sy_s}{s(y_s - 1) +t}

h'_n \equiv \frac{h'}{t'} = \frac{t-s}{s(y_s - 1) + t}

With these definitions we can now introduce the effective yield as the change in the proportion of the total circulating supply observed by each subset.

y_s^{eff} \equiv \frac{s'_n}{s_n} = \frac{y_s}{\frac{s}{t}(y_s-1) + 1}

y_h^{eff} \equiv \frac{h'_n}{h_n} = \frac{1}{\frac{s}{t}(y_s-1) + 1}
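
As a quick numerical illustration of these two formulas (a sketch of ours; the 30% stake rate and 3% nominal yield are arbitrary example values):

def effective_yields(stake_fraction, nominal_yield):
    """Gross effective yields for stakers and holders, where
    nominal_yield is the gross y_s (e.g. 1.03 for +3%)."""
    dilution = stake_fraction * (nominal_yield - 1) + 1   # t'/t, the supply growth factor
    return nominal_yield / dilution, 1 / dilution

# Example: 30% of supply staked, 3% nominal staking yield.
y_staker, y_holder = effective_yields(0.30, 1.03)
print(f"staker: {y_staker - 1:+.3%}, holder: {y_holder - 1:+.3%}")
# staker: +2.081%, holder: -0.892% (approximately)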

Net Yield

Staking has associated costs. A staker must acquire a consumer-grade PC, pay some amount (albeit small) for electricity, and have a high-speed internet connection. They must also put in their own labor and time to keep the system operational and secure, or pay someone to do that job for them. Stakers also face other forms of costs that eat away at the nominal yield they observe, e.g. taxes. We would like to model the net yield observed after all forms of costs, because it can give us valuable information on how different stakers are impacted by changes in the nominal stake yield.

To model this we will introduce two types of costs: costs that scale with the nominal yield (e.g. taxes or fees charged by an LST would fit under this umbrella), and costs that do not (i.e. HW, electricity, internet, labor…).

With our definitions, after staking for a reference period stakers would have earned s' = y_s s = s + s(y_s - 1)

But if we introduce costs that eat away at the nominal yield (let’s call them k) and costs that eat away at the principal (let’s call them c), we arrive at the following formula for the net stake:

s' = s(1-c) + s(y_s - 1) - \max(0, sk(y_s - 1))

NOTE: The max simply prevents a cost that scales with yield from becoming a profit if the yield goes negative. For instance, if the yield goes negative, it’s unlikely that an LST will pay the LST holders 10%, and you may not be able to recoup taxes as if it were negative income. In those cases we set the term to 0. This will become useful later on when we explore issuance curves with negative issuance regimes.

This represents the net stake our stakers observe after all forms of costs have been taken into account. Note that this formula can easily be modified to take into account other effects such as validator effectiveness (which acts as a multiplicative factor on the (y_s - 1) terms) or correlation/anti-correlation incentives (which alter y_s).

To fix ideas, let’s estimate the net yield observed by 3 different types of stakers: a home staker, an LST holder, and an institutional large-scale operator. The values proposed are only indicative and should be tuned to best reflect the realities of each stakeholder.

A home staker will have to pay for a PC that costs around 1000 USD and is amortized over 5 years, so 200 USD/year; for internet at 50 USD per month, around 600 USD/year; and something extra for electricity, less than 100 USD/year for a typical NUC. Let’s assume they are a hobbyist who does this in their spare time, valuing their time at 0 USD/year. Their staking operation then costs around 1000 USD/year. If they have 32 ETH, at current ETH prices we can round that to ~100K USD. This means that for this staker c = \frac{1}{1000}, as their costs represent around 1/1000 of their stake value.

Now for the costs that scale with the yield. They will have to pay taxes, which are highly dependent on their tax jurisdiction but may vary between 20% and 50% in most developed countries. Let’s pick 35% as an intermediate value. In that case, their stake after costs looks like:

s' = s\left(1-\frac{1}{1000}\right) + s(1-0.35)(y_s - 1)
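
To make this concrete (again an illustrative calculation, not from the post): at a 3% nominal yield (y_s = 1.03), this home staker’s net yield is approximately

(1-0.35) \cdot 0.03 - \frac{1}{1000} \approx 1.85\% \quad \text{per year}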

We can do the same exercise for a staker using an LST. In this case, c=0 and k is composed of staking fees (10-15%) and taxes (20-50%) which depend on the tax treatment. Rebasing tokens have the advantage of postponing the realization of capital gains. If we assume a 5 year holding period, equivalent to the amortization time we assumed for solo staking, it could look something like this:

  • Fixed costs: 0
  • Staking fees: 10%
  • Capital gains tax: 20%
  • Holding period: 5 years

s' = s(1-0) + s(1-0.14)(y_s - 1)

Finally, for a large-scale operator. They have higher fixed costs, they will have to pay for labor, etc… But they also run a much larger number of validators, so c can get much smaller as it’s a proportion of s, perhaps 1 or 2 orders of magnitude smaller. And taxes will be typical corporate tax rates (20-30%).

s' = s\left(1-\frac{1}{10000}\right) + s(1-0.25)(y_s - 1)

Net Effective Yield (a.k.a Real Yield)

Finally, we can blend the two concepts together to understand the real yield a staker or holder obtains, net of all forms of costs and after supply-change dilution. I would suggest calling this net effective yield the real yield because, well, that’s the yield you are really getting.

y_s^{real} = \frac{(1-c) + (y_s - 1) - \max(0,k(y_s - 1))}{\frac{s}{t}(y_s-1)+1}

y_h^{real} = y_h^{eff} = \frac{1}{\frac{s}{t}(y_s-1) + 1}

In the second equation we are simply stating the fact that there is no cost to holding, so the real yield (after costs) of holding is the same as the effective yield of holding.
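
The formulas above translate directly into code. Below is a minimal Python sketch (my own, using the illustrative cost parameters of the three staker profiles above as assumptions) that evaluates the real yield of staking and holding for a given nominal yield and stake rate:

def real_yield_staking(y_s, s, t, c, k):
    # numerator: net stake multiplier after fixed costs (c) and yield-scaling costs (k)
    net = (1 - c) + (y_s - 1) - max(0.0, k * (y_s - 1))
    # denominator: supply growth factor, which dilutes all ETH holders equally
    return net / ((s / t) * (y_s - 1) + 1)

def real_yield_holding(y_s, s, t):
    return 1.0 / ((s / t) * (y_s - 1) + 1)

# illustrative (fixed cost c, yield-scaling cost k) pairs from the text
profiles = {"home staker": (1e-3, 0.35), "LST holder": (0.0, 0.14), "large operator": (1e-4, 0.25)}

y_s, s, t = 1.03, 30e6, 120e6  # 3% nominal yield at a 25% stake rate (example values)
for name, (c, k) in profiles.items():
    print(f"{name}: {real_yield_staking(y_s, s, t, c, k):.4f}")
print(f"holder: {real_yield_holding(y_s, s, t):.4f}")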

The Issuance Curve and Centralization

Up to here all the equations presented are agnostic of Ethereum’s specificities and in fact are equally applicable to any other scenario where stakeholders observe a yield but that yield is coming from new issuance.

To bring this analysis back to Ethereum-land it suffices to substitute y_s by Ethereum’s issuance yield as a function of the total amount staked s. And substitute t by the total circulating supply of ETH.

t \approx 120\cdot 10^6 \quad \text{ETH}

i(s) = 2.6 \cdot 64 \cdot \sqrt{s} \quad \text{ETH}\cdot\text{year}^{-1}

y_{s}(s) = 1 + \frac{2.6 \cdot 64}{\sqrt{s}} \quad \text{year}^{-1}

We can plot the real yield for the 4 different types of ETH stakeholders we introduced above, as a way to visualize the possible centralization forces that arise due to economies of scale and exogenous factors like taxes.

We can make the following observations, from which we will derive some consequences.

Observations

  • Observation 0: The economic choice to participate as a solo staker, LST holder, ETH holder or any other option is made on the gap between the real yields observed and the risks (liquidity, slashing, operational, regulatory, smart contract…) of each option. Typically higher risks demand a higher premium.

  • Observation 1: Holding always has a lower real yield than staking, at least for the assumptions taken above for costs. But the gap shrinks with high stake rates.

  • Observation 2: Different stakers cross 0% real yield at different stake rates. At around 70M ETH staked, solo validators start to earn negative real yield. At around 90M ETH, institutional stakers start to earn negative real yield. At around 100M, LST holders start to earn negative real yield.

  • Observation 3: When every staker and every ETH holder is becoming diluted (negative real yield), staking is a net cost for everyone.

  • Observation 4: There is quite a large gap between the stake levels where different stakers cross the 0% real yield.

  • Observation 5: Low nominal yields disproportionately affect home stakers compared to large operators. From the cost-structure formula above we can see that, as long as nominal yields are positive, the only term that can make the real yield negative is c. This term is affected by economies of scale, and small operators will suffer a larger c.

Implications

Observation 0 and Observation 1 imply that as the gap between real yields becomes sufficiently small, participating in the network as some of those subsets may become economically irrational. For example, solo staking may be economically irrational given its operational, liquidity, and slashing risks if the yield premium over holding becomes sufficiently small. In that case solo stakers may become holders or switch to other forms of staking (e.g. LSTs) where the premium still compensates for the risk.

Together with Observation 2 and Observation 4, this implies that as stake rates become higher and higher the chain is at risk of becoming more centralized, as solo stakers (the most uncorrelated staking set) must continue staking even when it may be economically irrational to do so. Given the above assumptions, LSTs will always observe a real yield at least 1% higher than holding even at extreme stake rates (~100%), which may mean there is always an incentive to hold an LST instead of ETH. Furthermore, when solo stakers cross into negative real yield but other stakers do not, those other stakers slowly but steadily gain greater weight.

From Observation 3 we know that the very high stake rate regime, where everyone is observing a negative real yield, is costly for everyone. Everyone observes dilution. The money is going to the costs that were included in the real yield calculation (tax, ISPs, HW, electricity, labor…).

Observation 5 implies that nominal yield reductions need to be applied with care, and certainly not in isolation without introducing uncorrelation incentives at the same time, as they risk penalizing home solo stakers disproportionately.

Recommendations

Given the above analysis, we can put forward a few recommended properties the yield curve (respectively the issuance curve) should have. The idea of establishing these properties is that we should be able to discuss them individually and agree or disagree on their desirability. Once agreed they constrain the set of functions we should consider. At a minimum, it will make the discussion about issuance changes more structured.

  • Property 0: Although this property is already satisfied with the current issuance curve, it is worth stating explicitly. The protocol should incentivize some minimum amount of stake, to ensure the network is secure and the cost to attack much larger than the potential economic reward of doing so. This is achieved by defining a yield curve that ramps up the nominal yield as the total proportion of stake (s_n) is reduced.

  • Property 1: The yield curve should contain uncorrelation incentives such that stakers are incentivized to spin-up uncorrelated nodes and to stake independently, instead of joining large-scale operators. From a protocol perspective the marginal value gained from another ETH staked through a large staking operation is much smaller than if that same ETH does so through an uncorrelated node. The protocol should reward uncorrelation as that’s what allows the network to achieve the extreme levels of censorship resistance, liveness/availability and credible neutrality the protocol expects to obtain from its validator set. The economic incentives must be aligned with the expected outcomes, therefore the yield curve must contain uncorrelation incentives.

  • Property 2: The issuance curve (resp. yield curve) should have a regime where holding is strictly more economically beneficial than staking, at sufficiently high stake rates. This means that the real yield of holding is greater than the real yield of staking if the stake rate is sufficiently high. As explained above, it is the real-yield gap that determines the economically rational choice to join one subset or another. If the holding real yield can be greater than the staking real yield at sufficiently high stake rates, there is an economic incentive to hold instead of continuing to stake. To be noted: up to here we are not arguing about the exact stake rate at which this should happen. It suffices to agree that a 99.9% stake rate is unhealthy for the protocol (it’s a cost for everyone, LSTs would displace ETH as pristine collateral, etc…). If that’s the case, then we can prevent this outcome by setting the holding real yield to be higher than staking at that level. Unhealthy levels are likely found at much lower stake rates.

  • Property 3: To prevent centralization forces, the gap between the stake rates at which uncorrelated and correlated validators cross into negative real yield should be as small as possible. A large gap between the negative-real-yield thresholds of uncorrelated sets (e.g. home stakers) and correlated sets (e.g. large operators) creates a regime where the validator set can become more and more centralized. To make the case clearer: if uncorrelated validators reach 0 real yield at 30M ETH staked, while holding an LST composed of large operators (e.g. cbETH, wstETH) does so at 100M ETH, then in the regime where the stake ranges between 30M and 100M solo stakers will tend to disappear, either quickly (they stop staking) or slowly (they become more and more diluted); the outcome in either case is a more centralized validator set.

  • Property 4: The yield curve should taper down relatively quickly when entering the regime of negative real yields. From Property 2 and Property 3 we know we should build in a regime where the real yield from issuance goes negative, but we want this regime to occur at approximately the same stake rate for the different types of stakers, to prevent centralization forces. Observation 5 implies that if the slope of this nominal-yield reduction is shallow, stakers with different cost structures will be pushed out at very different stake rates. Hence, we need to make this yield reduction quick.

  • Property 5: The issuance yield curve should be continuous. It’s tempting to play with discontinuous yield curves, but yield is the main incentive to regulate the total stake of the network. We would like that the changes induced to the total stake s are continuous, therefore the economic incentive should be a continuous function.

Exploring Other Issuance Curves

The desired properties can be summarized very succinctly:

  • The yield curve should be continuous.
  • The yield curve should go up as stake rate goes to 0.
  • The yield curve should go to 0 as the stake rate goes up, crossing 0 at some point that bounds from above the desired stake rate equilibrium.
  • The yield curve should have uncorrelation incentives such that spinning up uncorrelated validators is rewarded and incentivized over correlated validators.
  • The real yield curves of correlated and uncorrelated stakers should become negative relatively close to each other.

A very simple way to meet the above is to introduce a negative term into Ethereum’s issuance yield, together with uncorrelation incentives.

The negative term should grow faster than the issuance yield as the stake grows, so that it eventually overcompensates issuance and makes the yield go negative quickly at sufficiently high stake rates. This negative term can be thought of as a stake burn, and should be applied on a slot or epoch basis such that it’s unavoidable (thanks to A. Elowsson for this observation).

Uncorrelation incentives are being explored in other posts. We will simply leave here the recommendation of adopting them as part of any issuance tweaks. Read further: Anti-Correlation Penalties by Wahrstatter et al.

Ethereum’s Issuance with Stake Burn

The following is an example of how such a negative term can be introduced.

i(s) = 2.6 \cdot 64 \cdot \sqrt{s} - 2.6 \cdot \frac{s \ln s}{2048} \quad \text{ETH} \cdot \text{year}^{-1}

y_{s}(s) = 1 + \frac{2.6 \cdot 64}{\sqrt{s}} - \frac{2.6 \ln s}{2048} \quad \text{year}^{-1}

The negative stake-burn term eventually dominates issuance and can make it go negative. There is complete freedom in deciding where this threshold occurs, simply by tweaking the constant pre-factors. In this particular case, the parameters have been chosen to be round powers of 2 and so that the negative issuance regime happens roughly around a 50% stake rate.
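
As a small numerical aid (my own sketch, not part of the original analysis), the zero crossing of this example curve can be located with a simple bisection on the issuance function as written above; where exactly it lands depends only on the chosen pre-factors:

import math

def issuance(s):
    # example stake-burn issuance curve from the text: s in ETH, result in ETH/year
    return 2.6 * 64 * math.sqrt(s) - 2.6 * s * math.log(s) / 2048

# bisection for the zero crossing, bracketed inside the circulating supply
lo, hi = 1e6, 120e6
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if issuance(mid) > 0 else (lo, mid)
print(f"issuance crosses zero near {lo / 1e6:.1f}M ETH staked")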

This negative issuance regime induces a positive effective yield on holders, which gives the protocol an economic incentive to limit the stake rate, since the real yield of holding ETH eventually becomes greater than that of staking. It also serves to protect the network from overloading its consensus layer, as it provides the protocol with a mechanism to charge exogenous sources of yield that occur on top of it. If priority fees, MEV, or restaking provide additional yield that would push stake rates above the desired limit, the protocol starts charging those extra sources of yield by making issuance go negative, hence redistributing exogenous yield onto ETH holders.

To understand better the impact that this stake burn has on the different stakeholders we can plot the real yield curves.

We can see how the introduction of a negative issuance yield regime has helped achieve most of the properties we desired to obtain. Particularly, we can notice the stake rates at which different stakeholders reach 0 real yield have compressed and are much closer to each other. And we can appreciate how when stake rates get close to 50% (given the choice of parameters) holders start to observe a positive real yield which disincentivizes additional staking. Holding real yields can become quite large so even large exogenous sources of yield can be overcome.

Given that we haven’t touched the positive issuance term, this results in a large reduction of the staking yield. We can increase the yield trivially while keeping the same yield-curve shape. Here is the same curve with a larger yield:

i(s) = 2.6 \cdot 128 \cdot \sqrt{s} - 2.6 \cdot \frac{s \ln s}{1024} \quad \text{ETH} \cdot \text{year}^{-1}

y_{s}(s) = 1 + \frac{2.6 \cdot 128}{\sqrt{s}} - \frac{2.6 \ln s}{1024} \quad \text{year}^{-1}

This shows that the target yield observed at a specific stake rate is a separate consideration from the curve-shape discussion. So if you dislike this particular example because of the resulting yield at current stake rates, fear not: that has an easy fix.

Adding Uncorrelation Incentives to the Mix

We will not cover the specifics of how uncorrelation incentives should be introduced nor how they should be sized, but we will illustrate how the introduction of a correlation penalty can help align the economic incentives with the network interest of maintaining an uncorrelated validator set.

To do so we will simulate what would happen to the real yields observed by the following stakeholders:

  • Home Validator (Very Uncorrelated): -0.0% subtracted to the nominal yield through correlation penalties
  • LST Holder through a decentralized protocol (Quite Uncorrelated): -0.2% subtracted to the nominal yield through correlation penalties
  • LST Holder through staking through large operators (Quite Correlated): -0.4% subtracted to the nominal yield through correlation penalties
  • Large Institutional Operator (Very Correlated): -0.6% subtracted to the nominal yield through correlation penalties

The following figure zooms in on the area where negative real yields are reached:

Important Note: The above values for correlation penalties are not based on any estimation or study. They have been chosen arbitrarily to showcase that the inclusion of uncorrelation incentives in the issuance curve can be used to disincentivize staking through large correlated operators. We refer the analysis of the right incentives to other papers.

Fixing the Issuance Yield Curve

Up until now the focus has been on the shape of the yield curve (respectively the issuance curve), but very little has been said about the specific yield we should target at different stake rates. As illustrated above, by simply applying a multiplicative factor we can keep the same curve shape but make yields higher or lower as desired.

In this section we will provide some heuristic properties to address this problem and be able to specify the prefactors that allow us to define a concrete yield curve.

These heuristic properties are indicative. There is no hard science behind them, just some soft arguments that provide reasonable justification for these choices.

Heuristic 0: The nominal issuance yield should become negative at a 50% stake rate or lower. Higher stake rates start to become problematic: above those levels the majority of the circulating supply is staked, and in the case of a supermajority bug the majority of ETH holders could be incentivized to break the consensus rules. The negative yield regime can be seen as a protection mechanism to prevent this type of situation, setting an economic incentive that aligns the social layer with the protocol’s interests.

Heuristic 1: Target 3% yield at a 25% stake rate. When PoS was released there was no telling what staking yield the market would consider appetizing. Would 5% be enough? Or 3%?

Now we have data points: the current staking yield is 3% as measured by https://beaconcha.in (issuance, MEV, and priority fees included). So we know the market certainly has appetite for ETH yield at 3%. There are also soft arguments by V. Buterin, J. Drake et al. that a 25% stake rate should provide enough security.

And finally, the current issuance curve happens to provide 3% yield at 25% stake rate. So by fixing the new curve to meet that same yield at 25% we anchor the same yield (and issuance) at the target rate. But any extra amount of stake will be met with a reduction in yield and issuance that makes it go to 0 before hitting 50%.
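
As a quick check of that last point: with the current curve, at s = 30 \cdot 10^6 ETH (25% of ~120M),

y_s - 1 = \frac{2.6 \cdot 64}{\sqrt{30 \cdot 10^6}} \approx 3.0\% \quad \text{year}^{-1}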

As the current stake rate is a tad over 25%, the proposed change to the issuance curve would imply a bit of an issuance reduction, nothing very significant. But most importantly it avoids the ever-growing issuance as stake rates climb higher.

In conjunction with well-designed uncorrelation incentives, it could help the protocol ensure that it does not overpay for security, that stake rates are self-limiting, and that the validator set stays very uncorrelated.

Final Words

The analytic form of the yield curve or the issuance curve matters much less than we may think. It might be tempting to spend time tinkering with its concrete analytic form, but for all it matters it could equally be defined with a piece-wise continuous function.

Its purpose is to provide an economic incentive to get stake rates where the protocol needs them to be (not too high, not too low) and to maintain a large, uncorrelated validator set.

This post is an invitation to steer the discussion towards said properties instead of getting lost with the fine details. If we nail down the properties we will constrain the solution space enough so that almost any function we choose will do the job.

1 post - 1 participant

Read full topic

Applications Introducing CCTP Express: a faster and cheaper way to use CCTP

Published: Sep 10, 2024

View in forum →Remove

By Wel and Alan on behalf of CCTP Express
For most recent information about CCTP Express, please visit our X.

Motivation

We recognize the vital role stablecoins play in the Web3 ecosystem, especially within DeFi. Among them, USDC stands out for its high transparency and regulatory compliance. Circle, the issuer of USDC, introduced the Cross-Chain Transfer Protocol (CCTP) to securely transfer USDC across chains using a native burn-and-mint mechanism.

CCTP is a game-changing tool that drives USDC adoption in the multichain world, allowing developers to create applications that offer secure, 1:1 USDC transfers across blockchains. This eliminates the added risks of using bridges.

However, CCTP has a key limitation: wait time. Its off-chain attestation service requires block confirmations on the source chain to ensure finality before minting USDC on the destination chain. This process can take anywhere from 20 seconds to 13 minutes, which is not ideal for users needing instant transfers. To address this, CCTP Express was designed to provide instant USDC bridging while leveraging CCTP. We position CCTP Express as a booster tool of CCTP, enabling users to benefit from faster and cheaper transactions.

We believe CCTP Express is an essential tool to achieve chain abstraction by providing an instant USDC bridging experience.

TL;DR

  • CCTP Express is positioned as a booster tool for CCTP, giving users a faster and cheaper experience;
  • It is an intent-based bridging system built upon CCTP; instant USDC bridging is enabled by the “Filler-Pay-First” mechanism;
  • CCTP Express is a trustless design, allowing anyone to participate as a filler or datadaemon without permission;
  • To mitigate the reorg risk borne by fillers, CCTP Express introduces an insurance fee that varies based on the user-defined initiateDeadline;
  • To lower transaction costs, repayment and rebalancing transactions are bundled, and cross-chain messages are transmitted as hashes to reduce data size.

Primary principles

1. CCTP Dependency
CCTP Express is specifically designed to enhance CCTP. All fund rebalancing must be done exclusively through CCTP to avoid exposure to potential risks associated with other bridges.

2. Decentralization
The system must be trustless to ensure maximum protection for everyone’s assets. Players in the system, including Fillers and Datadaemons, are permissionless.

3. Win-Win-Win
The design should benefit all stakeholders — users, fillers, and CCTP. Users gain a faster and more cost-effective experience, fillers receive satisfactory rewards while their funds are safeguarded, and CCTP grows stronger through the support of CCTP Express.

Key concepts

CCTP Express is an intent-based cross-chain bridging system built upon CCTP. The key to speeding up transactions is the adoption of the “Filler-pay-first” mechanism.

When a user submits a bridging intent, fillers initiate an order on the origin chain, then immediately call fillOrder on the destination chain and transfer funds to the user accordingly.

The system periodically validates the payments and repays fillers in batches. Rebalancing across domains is done through CCTP if needed. This settlement process happens behind the scenes from the user’s perspective, and the repayments and rebalancing are bundled to save costs.

Dive Deeper

CCTP Express adopts a Hub-and-Spoke architecture and can be broken down into a 3-layered system: a request-for-quote mechanism to obtain users’ bridging intents, a filler network that claims and fills those orders, and lastly a settlement layer that periodically repays fillers through CCTP, using Iris (Circle’s off-chain attestation service).

Our design adheres to ERC-7683, emphasizing the importance of aligning with industry standards. This ensures that cross-chain intent systems can interoperate and share infrastructure like order dissemination services and filler networks. By fostering this interoperability, we enhance the end-user experience by increasing competition for fulfilling user intents. Below is a diagram of the architecture of CCTP Express:

Order initiation

  1. User signs an off-chain message defining the parameters of an order:
 function deposit(
        bytes32 recipient,
        bytes32 inputToken,
        bytes32 outputToken,
        uint256 inputAmount,
        uint256 outputAmount,
        uint32 destinationDomainId,        
        bytes32 exclusiveFiller,
        uint32 exclusivityDeadline,
        uint32 initiateDeadline,
        uint32 fillDeadline,
        bytes calldata message
    ) external;
  2. The order is disseminated to Fillers. The Filler calls initiate on the origin chain SpokePool. A CrossChainOrder will be created and the user’s funds are transferred to the SpokePool for escrow.
  3. The SpokePool on origin chain submits a Deposit message to Circle’s off-chain attestation service, Iris, for attestation and subsequently a DepositAttestation will be generated.

Filler Network Fills Order

  1. Fillers call fillOrder on the destination SpokePool with their own assets which are then transferred to the user from the SpokePool.

  2. The SpokePool on destination chain submits a Fill message to Iris and a FillAttestation will be generated.

Settlement

  1. A permissionless Datadaemon retrieves the DepositAttestation and FillAttestation and relays them to the Hub Pool on the Settlement Chain.

  2. Periodically, the Datadaemon calls repayFunds and rebalanceFunds at the Hub Pool, which would collect all the attestations and perform the following steps:

  • Iterate through the list of attestations; a valid filled order is supported by both a Deposit and a Fill attestation.

  • Determine the aggregate settlement sum from all valid fills for each filler.

  • If there are sufficient funds on the SpokePool to repay the filler, a repayFunds message in the form of a merkle root hash is sent to Iris.

  • For the remaining outstanding payment, the Hub Pool will send a rebalanceFunds message in the form of merkle root hash to Iris, which indicates how much a SpokePool with surplus funds would send to another pool in deficit to fulfill the need for repayment.

  3. Once the repayFunds and rebalanceFunds messages get attested by Iris, they are sent to the respective SpokePools. The Datadaemon will call repayFunds and rebalanceFunds on the SpokePools with the merkle root hash and the respective transaction details. Accordingly, funds will be repaid to fillers and sent to other SpokePools to ensure sufficient funds for handling repayments.

  4. Repay funds to fillers from the SpokePool on destination chain, and rebalance funds across SpokePools on different chains via CCTP.

CCTP Fill Settlement

  1. In case an order initiated by Fillers is not filled, anyone can call cctpFill and mark the order status on the destination chain SpokePool as RequestCctpFill, blocking any filler from filling it. At the same time, the SpokePool will emit a CctpFill message to Iris for attestation.

  2. The CctpFillAttestation will be used in place of the FillAttestation mentioned above, allowing the user’s funds to be transferred via the CCTP route.

Risk and solutions

Reorg risk
The reorg risk is borne solely by fillers. If the filler fills the intent too fast, without waiting for finality on the source chain, the source chain may reorg and cause a loss to the filler: the intent has already been filled on the destination chain and the filler would end up empty-handed.

The reorg risk is effectively mitigated by the Insurance Fee, which varies based on the initiateDeadline specified by the user. If the initiateDeadline is sufficiently long, the filler can re-initiate the CrossChainOrder on the origin chain in the event of a reorg, ensuring the user’s funds are transferred again. The insurance fee is calculated using the formula below:

Formula of Insurance Fee

Where:
f(t) is the insurance fee, a function that varies with t
V is the trading volume, representing the maximum insurance fee
e is the base of the natural logarithm
k is a constant that controls the descending rate of the fee
t is the time between the order creation time and the initiateDeadline
T is the time required for finality on the origin chain

The insurance fee varies with the initiateDeadline: it decreases as the time between the order creation time and the initiateDeadline increases:

Since the insurance fee decreases significantly when the initiateDeadline is long (it drops to nearly zero if it is 2x of the time needed for finality on the origin chain), a normal user is likely to set a long initiateDeadline to avoid paying the fee, minimizing the reorg risk for the filler.
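
The exact formula is only given as an image in the original post. Purely as an illustration, the sketch below uses one functional form consistent with the description (the fee stays at its maximum V until the finality time T, then decays exponentially at a rate set by k); the shape and all constants here are assumptions, not the authors’ formula:

import math

def insurance_fee(t, T, V, k):
    # hypothetical fee curve: maximal (V) up to finality time T, then exponential decay
    return V * math.exp(-k * max(0.0, t - T))

# illustrative numbers: ~13-minute finality; fee is negligible once t is ~2x finality
T, V, k = 13 * 60, 100.0, 0.01
for t in (T, int(1.5 * T), 2 * T):
    print(t, round(insurance_fee(t, T, V, k), 4))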

High system costs
The complexity of the design naturally implies higher costs compared to bridging directly with CCTP. To stay aligned with our goal of providing a faster and cheaper way to use CCTP, we mitigate costs through two key strategies: transaction bundling and data compression.

Transaction bundling -

Datadaemon works periodically to call repayment and rebalancing on the hub pool. This interval is adjustable to make sure a sufficient number of transactions are processed in each batch.

In this architecture design, gas costs are primarily incurred in rebalancing via CCTP and fund transfers. By processing rebalancing in batches and handling repayments in aggregate sums to the fillers, these costs are distributed across multiple transactions, reducing the costs on any single transaction.

Data compression -

Cross-chain messages are transmitted between spoke pools and the hub pool via Iris, Circle’s off-chain attestation service. To minimize data size and reduce gas costs, these messages are sent in the form of a hash.

For a detailed comparison of gas consumption between CCTP and CCTP Express, check out this article.

FAQ

1. What does it mean to the end user?
When using CCTP Express’s front end or applications integrated with CCTP Express, users benefit from a significantly faster and cheaper way to bridge USDC across chains. By leveraging CCTP as the underlying asset bridge, the system enhances user experience while maintaining robust security.

2. What are the possible use cases?
We believe CCTP Express is essential to achieving chain abstraction by providing an instant USDC bridging experience. Possible use cases include:

USDC-denominated dApps
USDC is widely adopted in various dApps, e.g. dYdX and Polymarket. dApps can integrate the CCTP Express SDK to offer their users instant transfers in and out of all CCTP-supported chains without the usual waiting time.

Payment Network
CCTP Express can offer an instantly settled transaction experience for users across chains, enabling them to pay USDC for a coffee from any CCTP-supported chain.

Money Lego
Arbitrageurs and solvers can use CCTP Express as the backbone of their cross-chain actions. Waiting a long time is highly undesirable in the high-speed crypto world, and CCTP Express offers them superior speed without compromising security, since CCTP is the underlying bridge.

3. With a similar idea of providing cross chain bridging powered by off chain agents, how is CCTP Express different from other intent-based bridges, say Across?

The primary distinctions between CCTP Express and Across are positioning and settlement mechanism.

Positioning -

While both protocols are intent-based bridges powered by fillers/relayers, CCTP Express is positioned as a booster tool for CCTP.

Given this focus, CCTP Express is closely integrated with CCTP and evolves in tandem with it. For instance, if CCTP supports EURC, CCTP Express will promptly support it as well.

This alignment also applies to the choice of which chains CCTP Express supports: it aims to cover all EVM and non-EVM chains on which CCTP operates. And like CCTP, CCTP Express adopts the bytes32 address format, instead of the 20-byte address used in the EVM, to handle 32-byte addresses on many non-EVM chains.

In contrast, Across is limited to EVM chains only.

Settlement mechanism -

In CCTP Express, the Hub Pool smart contract utilizes the Iris attestation service used in CCTP to relay and verify messages. Deposit and Filled messages from various Spoke Pools are sent to Iris for attestation and then collected in the Hub Pool, which processes repayments on-chain.

In contrast, Across uses canonical bridges to relay messages and utilizes UMA to optimistically verify fill events off-chain. Since UMA works off-chain, an interval is needed as a dispute window.

Discuss with Us

To shape a better product, we are keen to discuss with users, fillers, and dApp teams who need instant USDC bridging. If anyone is interested in CCTP Express, we have a public Telegram group to discuss it: Join Group Chat

1 post - 1 participant

Read full topic

zk-s[nt]arks Fake GLV: You don't need an efficient endomorphism to implement GLV-like scalar multiplication in SNARK circuits

Published: Sep 09, 2024

View in forum →Remove

 _____     _           ____ _ __     __
|  ___|_ _| | _____   / ___| |\ \   / /
| |_ / _` | |/ / _ \ | |  _| | \ \ / /  
|  _| (_| |   <  __/ | |_| | |__\ V /   
|_|  \__,_|_|\_\___|  \____|_____\_/   

You don’t need an efficient endomorphism to implement GLV-like scalar multiplication in SNARK circuits

Introduction

P-256, also known as secp256r1 and prime256v1, is a 256-bit prime field Weierstrass curve standardized by the NIST. It is widely adopted in internet systems, which explains its myriad use cases in platforms such as TLS, DNSSEC, Apple’s Secure Enclave, Passkeys, Android Keystore, and Yubikey. The key operation in elliptic-curve cryptography is the scalar multiplication. When the curve is equipped with an efficient endomorphism it is possible to speed up this operation through the well-known GLV algorithm. P-256 unfortunately does not have an efficient endomorphism (see parameters) to enjoy this speedup.

Verifying ECDSA signatures on Ethereum through precompiled contracts, i.e. smart contracts built into the Ethereum protocol (there are only 9), is only possible with the secp256k1 curve and not P-256.
Verifying ECDSA signatures on P-256 requires computing scalar multiplications in Solidity and is especially useful for smart-contract wallets, enabling hardware-based signing keys and safer, easier self-custody. Different solutions can bring P-256 signatures on-chain. There are primarily three interesting approaches: (zk)-SNARK based verifiers, smart contract verifiers (e.g. [Dubois23], Ledger/FCL (deprecated), smoo.th/SCL and daimo/p256verifier), and native protocol precompiles (EIP/RIP 7212).

Using SNARK (succinctness) properties provides a great way to reduce gas costs for computation on Ethereum (e.g. ~232k gas for Groth16, ~285k gas for PLONK and ~185k gas for FFLONK). This is very competitive with (and sometimes better than) the currently gas-optimal smart contract verifier. Moreover one can batch many ECDSA verifications in a single proof, amortizing the gas cost. However, verifying P-256 signatures in a SNARK circuit can be very expensive, i.e. long proving time. This is because the field where the points of the P-256 curve live is different from the field where the SNARK computation is usually expressed. To be able to verify the proof onchain through the precompile, the SNARK field needs to be the BN254 scalar field. Different teams have tried to implement ECDSA verification on P-256 in a BN254 SNARK circuit efficiently. Among these: zkwebauthn/webauthn-halo2, https://github.com/zkwebauthn/webauthn-circom and PSE/circom-ecdsa-p256.

If P-256 had an efficient endomorphism we could have optimized the proving time a great deal!

In this note we show a way to implement a GLV-like scalar multiplications in-circuit without having an efficient endomorphism.

Other applications

Background

Standard scalar multiplication

Let E be an elliptic curve defined over the prime field \mathbb{F}_p and let r be a prime divisor of the curve order \#E(\mathbb{F}_p) (i.e. the number of points).
Let s \in \mathbb{F}_r and P(x,y) \in E(\mathbb{F}_p), we are interested in proving scalar multiplication s\cdot P over the r-torsion subgroup of E, denoted E[r] (i.e. the subset of points of order r).

The simplest algorithm is the standard left-to-right double-and-add:

INPUT: s = (s_{t−1},..., s_1, s_0), P ∈ E(Fp).
OUTPUT: sP.
1. Q ← ∞.
2. For i from t−1 downto 0 do
    2.1 Q ← 2Q.
    2.2 If s_i = 1 then Q ← Q + P.
3. Return(Q).

If/else branching is not possible in SNARK circuits so this is replaced by constant window table lookups inside the circuit. This can be achieved using polynomials which vanish at the constants that aren’t being selected, i.e. a 1-bit table lookup Q ← s_i * (Q+P) + (1 - s_i) * Q. Hence this double-and-add algorithm requires t doublings, t additions and t 1-bit table lookups.
This can be extended to windowed double-and-add, i.e. scanning more than a bit per iteration using larger window tables, but the multiplicative depth of the evaluation increases exponentially. We use affine coordinates for doubling/adding points because inverses cost as much as multiplications, i.e. instead of checking that 1/x is y we provide y out-circuit and check in-circuit that x\cdot y = 1. However, since we start with Q ← ∞ it is infeasible to avoid conditional branching, because affine formulas are incomplete. Instead, we scan the bits right-to-left and assume that the first bit s_0 is 1 (so that we start at Q ← P), we double the input point P instead of the accumulator Q, and finally conditionally subtract (using the 1-bit lookup) the original P if s_0 was 0.

INPUT: s = (s_{t−1},..., s_1, s_0), P ∈ E(Fp).
OUTPUT: sP.
1. Q ← P.
2. For i from 1 to t−1 do
    2.1 P ← 2P.
    2.2 If s_i = 1 then Q ← Q + P.
3. If s_0 = 0 then Q ← Q − P_0 (the original input point).
4. Return(Q).
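
As a sanity check of the bit-scanning logic (my own toy model, not part of the post): replacing group operations by plain integer arithmetic, the right-to-left scan above reproduces s·P for every scalar:

def scalar_mul_rtl(s: int, P: int, t: int) -> int:
    # toy model: "points" are integers, point addition is + and doubling is *2
    bits = [(s >> i) & 1 for i in range(t)]
    Q, R = P, P                # assume s_0 = 1; R tracks 2^i * P
    for i in range(1, t):
        R = 2 * R              # double the running multiple of the input point
        if bits[i] == 1:
            Q = Q + R
    if bits[0] == 0:
        Q = Q - P              # correct for the assumption on s_0
    return Q

assert all(scalar_mul_rtl(s, 7, 8) == s * 7 for s in range(1, 256))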

GLV scalar multiplication

However it is well known that if the curve is equipped with an efficient endomorphism then there exists a faster algorithm known as [GLV].

Example 1 : suppose that E has Complex Multiplication (CM) with discriminant -D=-3, i.e. E is of the form y^2=x^3+b, with b \in \mathbb{F}_p. This is the case of the BN254, BLS12-381 and secp256k1 elliptic curves used in Ethereum. There is an efficient endomorphism \phi: E \rightarrow E defined by (x,y)\mapsto (\omega x,y) (and \mathcal{O} \mapsto \mathcal{O}) that acts on P \in E[r] as \phi(P)=\lambda \cdot P. Both \omega and \lambda are cube roots of unity in \mathbb{F}_p and \mathbb{F}_r respectively, i.e. \omega^2+\omega+1 \equiv 0 \pmod p and \lambda^2+\lambda+1 \equiv 0 \pmod r.

Example 2 : suppose that E has Complex Multiplication (CM) with discriminant -D=-8, meaning that the endomorphism ring is \mathbf{Z}[\sqrt{−2}]. This is the case of the Bandersnatch elliptic curve specified in the Ethereum Verkle trie. There is an efficient endomorphism \phi: E \rightarrow E whose kernel is generated by a 2-torsion point. The map can be found by looking at 2-isogenous curves and applying Vélu’s formulas. For Bandersnatch it is defined by (x,y)\mapsto (u^2\cdot \frac{x^2+wx+t}{x+w},u^3\cdot y\cdot \frac{x^2+2wx+v}{(x+w)^2}) for some constants u,v,w,t (and \mathcal{O} \mapsto \mathcal{O}) that acts on P \in E[r] as \phi(P)=\lambda \cdot P where \lambda^2+2 \equiv 0 \pmod r.

The GLV algorithm starts by decomposing s as s = s_0 + \lambda s_1 and then replacing the scalar multiplication s \cdot P by s_0 \cdot P + s_1 \cdot \phi(P). Because s_0 and s_1 are guaranteed to be \leq \sqrt{r} (see Sec.4 of [GLV] and Sec.4 of [FourQ] for an optimization trick), we can halve the size of the for loop in the double-and-add algorithm. We can then scan simultaneously the bits of s_0 and s_1 and apply the Strauss-Shamir trick. This results in a significant speed up, but only when an endomorphism is available. For example the left-to-right double-and-add would become:

INPUT: s and P ∈ E(Fp).
OUTPUT: sP.
1. Find s1 and s2 s.t. s = s1 + 𝜆 * s2 mod r 
    1.1 let s1 = (s1_{t−1},..., s1_1, s1_0) 
    1.2 and s2 = (s2_{t−1},..., s2_1, s2_0)
2. P1 ← P, P2 ← 𝜙(P) and Q ← ∞.
3. For i from t−1 downto 0 do
    3.1 Q ← 2Q.
    3.2 If s1_i = 0 and s2_i = 0 then Q ← Q.
    3.3 If s1_i = 1 and s2_i = 0 then Q ← Q + P1.
    3.4 If s1_i = 0 and s2_i = 1 then Q ← Q + P2.
    3.5 If s1_i = 1 and s2_i = 1 then Q ← Q + P1 + P2.
4. Return(Q).

Using the efficient endomorphism in-circuit is also possible (see [Halo, Sec. 6.2 and Appendix C] or [gnark implementation] for short Weierstrass curves and [arkworks] and [gnark] implementations for twisted Edwards). But one should be careful about some extra checks of the decomposition s = s_0 + \lambda s_1 \mod r (not the SNARK modulus). The integers s_0, s_1 can possibly be negative in which case they will be reduced in-circuit modulo the SNARK field and not r.

The fake GLV trick

Remember that we are proving that s\cdot P = Q and not computing it. We can “hint” the result Q and check in-circuit that s\cdot P - Q = \mathcal{O}. Now, if we can find u,v \leq \sqrt{r} such that v\cdot s = u \pmod r then we can check instead that

(v\cdot s)\cdot P - v\cdot Q = \mathcal{O}

which is equivalent to

u\cdot P - v\cdot Q = \mathcal{O}

The thing now is that u and v are “small” and we can, similarly to the GLV algorithm, halve the size of the double-and-add loop and apply the Strauss-Shamir trick.

How do we find such u and v? Running the half-GCD algorithm (i.e. running the extended Euclidean algorithm half-way) is sufficient. We can apply the exact same trick for finding the lattice basis as in the GLV paper (Sec. 4). For completeness we recall the algorithm hereafter.
We apply the extended Euclidean algorithm to find the greatest common divisor of r and s (This gcd is 1 since r is prime.) The algorithm produces a sequence of equations

w_i \cdot r + v_i \cdot s = u_i

for i = 0, 1, 2, \dots where w_0 = 1, v_0 = 0, u_0 = r, w_1 = 0, v_1 = 1, u_1 = s, and u_i \geq 0 for all i. We stop at the index m for which u_m \geq \sqrt{r} and take u = u_{m+1} and v = -v_{m+1}.
Note: By construction u is guaranteed to be a positive integer but v can be negative, in which case it would be reduced in-circuit modulo the SNARK modulus and not r. To circumvent this we return in the hint u, v and a \texttt{b}=1 if v is negative and \texttt{b}=0 otherwise. In-circuit we negate Q instead when \texttt{b}=1.
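
The hint computation described above is easy to prototype. Here is a minimal Python sketch (my own; the function and variable names are mine, not gnark’s) of the half-run of the extended Euclidean algorithm that returns small u, v with v·s ≡ u (mod r), together with the sign bit b for a negative v:

import math

def fake_glv_hint(s: int, r: int):
    # invariant: w_i*r + v_i*s = u_i, starting from (r, 0) and (s, 1)
    u0, v0 = r, 0
    u1, v1 = s % r, 1
    bound = math.isqrt(r)
    while u1 >= bound:         # stop at the first remainder below sqrt(r)
        q = u0 // u1
        u0, u1 = u1, u0 - q * u1
        v0, v1 = v1, v0 - q * v1
    b = 1 if v1 < 0 else 0     # in-circuit, Q is negated instead when b = 1
    return u1, abs(v1), b

# toy check on a small prime standing in for the subgroup order
r, s = 1000003, 123456
u, v, b = fake_glv_hint(s, r)
assert ((-v if b else v) * s - u) % r == 0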

Implementation

A generic implementation in the gnark library is available at gnark.io (feat/fake-GLV branch). For Short Weierstrass (e.g. P256) look at the scalarMulFakeGLV method in the emulated package and for twisted Edwards (e.g. Bandersnatch/Jubjub) look at the scalarMulFakeGLV method in the native package.

Benchmark

The best algorithm to implement scalar multiplication in a non-native circuit (i.e. circuit field ≠ curve field) when an efficient endomorphism is not available is an adaptation of [Joye07] (implemented in gnark here).
Next we compare this scalar multiplication with our fake GLV in a PLONKish vanilla (i.e. no custom gates) circuit (scs) over the BN254 curve (Ethereum compatible). We also give benchmarks in R1CS.

P-256              | Old (Joye07)                 | New (fake GLV)
[s]P               | 738,031 scs / 186,466 r1cs   | 385,412 scs / 100,914 r1cs
ECDSA verification | 1,135,876 scs / 293,814 r1cs | 742,541 scs / 195,266 r1cs

Note here that the old ECDSA verification uses Strauss-Shamir trick for computing [s]P+[t]Q while the new version is merely two fake GLV multiplications and an addition.

Comparison

p256wallet.org is an ERC-4337 smart contract wallet that leverages zk-SNARKs for WebAuthn and P-256 signature verification. It uses PSE/circom-ecdsa-p256 to generate the WebAuthn proof and, underneath, PSE/circom-ecdsa-p256 to generate the ECDSA proof on the P-256 curve. The GitHub README reports 1,972,905 R1CS constraints. Compiling our circuit in R1CS results in 195,266 constraints. This is more than a 10x reduction, which is due not only to the fake GLV algorithm but also to optimized non-native field arithmetic in gnark.

Other curves

Similar results are observed for other curves in short Weierstrass form, e.g. P-384 and the STARK curve:

P-384              | Old (Joye07)  | New (fake GLV)
[s]P               | 1,438,071 scs | 782,674 scs
ECDSA verification | 2,174,027 scs | 1,419,929 scs

STARK curve        | Old (Joye07)  | New (fake GLV)
[s]P               | 727,033 scs   | 380,210 scs
ECDSA verification | 1,137,459 scs | 732,131 scs

and also in twisted Edwards e.g. Jubjub vs. Bandersnatch:

Jubjub       | Old (2-bit double-and-add) | New (fake GLV)
[s]P         | 5,863 scs / 3,314 r1cs     | 4,549 scs / 2,401 r1cs

Bandersnatch | Old (GLV)                  | New (fake GLV)
[s]P         | 4,781 scs / 2,455 r1cs     | 4,712 scs / 2,420 r1cs

EDIT: Thanks to Ben Smith for reporting that a similar idea was proposed in [SAC05:ABGL+] for ECDSA verification. We note that, in our context, the trick applies to a single scalar multiplication and that the half GCD is free through the hint.

Acknowledgement

I would like to thank Arnau Cube, Aard Vark, Holden Mui, Olivier Bégassat, Thomas Piellard and Ben Smith for fruitful discussions.

6 posts - 3 participants

Read full topic

Economics Embedded fee markets and ERC-4337 (part 2)

Published: Sep 05, 2024

View in forum →Remove

by: Davide Rezzoli (@DavideRezzoli) and Barnabé Monnot (@barnabe)

Many thanks to Yoav Weiss (@yoavw) for introducing us to the problem, Dror Tirosh (@drortirosh) for helpful comments on the draft, and the 4337 team for their support. Reviews ≠ endorsements; all errors are the authors’ own.

This work was done for ROP-7.


Introduction

In our previous post, we introduced the ERC-4337 model. This model outlines the fee market structure for bundlers and details the cost function related to the on-chain publishing cost and the off-chain (aggregation) costs of a bundle.

We also introduced the concept of the “Bundler Game”. This game will be the primary focus of the second part. Given a set of transactions, a bundler can choose which transactions to include in their bundle. This creates an asymmetry of information between the bundlers and the user, as the user doesn’t know how many transactions will be included in the bundle. This leads to a zero-sum game where the user is at a clear disadvantage.

This research aims to explore methods to improve the UX by ensuring that users do not need to overpay for inclusion in the next bundle. Instead, users should be able to pay a fee based on the actual market demand for inclusion.

Current state of ERC-4337

In today’s market, the P2P mempool is not live on mainnet and is being tested on the Sepolia testnet. Companies building on ERC-4337 are currently operating in a private mode: users connect via an RPC to a private bundler, which then works with a builder to publish their user operations onchain. The Bundle Bear app, developed by Kofi, provides some intriguing statistics on the current state of ERC-4337.

In the Weekly % Multi-UserOp Bundles metric, we observe the percentage of bundlers creating bundles that include multiple userops. From the beginning of 2024 to June 2024, this percentage has not exceeded 6.6%. This data becomes even more interesting when considering that many bundlers run their own paymasters, entities that sponsor transactions on behalf of users. Notably, the two largest bundlers that also operate as paymasters, in terms of user operations published, sponsored 97% of the user operations using their services. The paymaster pays for part of the user operation and the rest is paid by the dApp or another entity.

The question that arises is why paymasters, dApps, etc. are paying for the user operations. Will users pay them back in the future? We can’t be sure what will happen, but my personal guess is that, currently, dApps are covering the fees to increase usage and adoption of their apps. Once adoption is high, users will likely have to pay for the transactions themselves. It’s worth mentioning that having the user pay for a user operation is not attractive under the current model, since a basic ERC-4337 operation costs ~42,000 gas, while a normal transaction costs ~21,000 gas.

Variations on ERC-4337

Overview of ERC-4337

The mempool is still in a testing phase on Sepolia and is not live on the mainnet. Without the mempool, users have limited options for using account abstraction. Users interact with an RPC, which may be offered by a bundler that bundles UserOps, or with an RPC service that doesn’t bundle, similar to services like Alchemy or Infura, which receive and propagate transactions to other bundlers.

High level of a transaction in ERC-4337 without the mempool

Once the mempool is live, the transaction flow will resemble the diagram below, which is similar to the current transaction flow. A mempool enhances censorship resistance for users because, unlike the RPC model, it reduces the chances of a transaction being excluded. However, even with a mempool, there is still a risk that an RPC provider might not forward the transaction, but the mempool model is particularly beneficial for users who prefer to run their own nodes, as it mitigates this risk.

High level of a normal transaction using an EOA

High level of an userop type of transaction

While bundlers have the potential to act as builders, we prefer to keep the roles separate due to the competitive landscape. Bundlers would face significant competition from existing, sophisticated builders, making building less attractive and potentially less profitable. As a result, bundlers are more incentivized to collaborate with established builders rather than building independently and risking losses.

Combining the roles of bundler and builder into a single entity implies significant changes to the current system. Bundlers would need to compete with existing sophisticated builders, or alternatively, current builders will need to horizontally integrate and assume the bundler role as well. The latter scenario, while more plausible, raises concerns about market concentration and the potential negative impact on censorship resistance.

Bundlers and builders as two different entities

With users connecting directly to an RPC, everything runs in a more private environment, which doesn’t help market competition. In the near future the mempool will be live on mainnet, increasing competition.

Using a mempool, in which userops are public to different bundlers, increases competition. In the case of non-native account abstraction, a separation between bundler and builder is needed; in the case of native account abstraction, the separation might not be needed since the builder can interpret the userops as normal transactions.

For our model we believe that having a separation between the bundler and the builder also offers some advantages, especially in terms of competition and censorship resistance. Imagine a scenario where all the bundlers are offering a cost \textbf{v} for inclusion in their bundle. A bundler who wants to attract more users to achieve higher profits will offer a cost \textbf{v'} with \textbf{v'} < \textbf{v}. With enough competition among bundlers, \textbf{v'} will get close to \omega, the aggregation cost for the bundle. In this case, the bundlers who can search more efficiently and have better hardware to include more transactions in a bundle will earn higher fees and, in return, make user operations cheaper for the user.

This could lead to the following outcome: In a competitive environment, bundlers will lower their prices to be selected by users, who will, in turn, seek the lowest price for the inclusion of their user operation in a bundle. This competition will create a system where the bundler who offers the best price will be selected more often than the bundler who is only trying to maximize their profit by creating smaller bundles. Separating the roles of the bundler and builder can also enhance censorship resistance. A bundler can create a bundle of aggregated user operations and send it to different builders. If the bundle includes operations that could be censored, a non-censoring builder can accept it and proceed with construction. However, it’s worth noting that from a user’s perspective, this setup could increase costs, as the introduction of a bundler adds an additional party, leading to higher expenses.

RIP-7560

Native account abstraction isn’t a novel concept; it’s been under research for years. While ERC-4337 is gaining traction, its implementation outside the protocol offers distinct advantages alongside trade-offs. Notably, existing EOAs can’t seamlessly transition to SCWs, and various types of censorship-resistance lists are harder to utilize. As previously mentioned, the gas overhead of a userOp is significant compared to a normal transaction. RIP-7560 won’t inherently resolve the ongoing issue concerning off-chain costs, but it substantially reduces transaction expenses: from the initial ~42,000 gas, it’s possible to reduce the cost by ~20,000 gas.

High level of a type4 transaction with RIP-7560

Layer2s Account Abstraction

Account abstraction can be utilized in Layer 2 (L2) solutions. Some L2s already implement it natively, while others follow the L1 approach and are waiting for a new proposal similar to RIP-7560. In L2, the L1 is used for data availability to inherit security, while most of the computation occurs off-chain on the L2, providing cheaper transactions and scalability.

High level of Account abstraction in Layer 2

In scenarios where computation on L2 is significantly cheaper than the cost of calldata for data availability (DA) on the mainchain, the use of signature aggregation proves highly beneficial. For instance, pairings for BLS on mainnet go through the EVM’s 0x08 precompile, which costs approximately 45,000·k gas (for k pairings). Consequently, using BLS on L1 is more expensive than traditional transactions.

Compression techniques on L2s are already being used, such as 0-byte compression, which reduces the cost from ~188 bytes to ~154 bytes for an ERC20 transfer. With signature aggregation, the compression efficiency can be further enhanced by using a single signature, reducing the size to ~128 bytes.
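
To put those figures in perspective (simple arithmetic on the numbers quoted above): going from ~188 to ~154 bytes is roughly an 18% reduction in published data per transfer, and going down to ~128 bytes with an aggregated signature is roughly a 32% reduction:

(188-154)/188 \approx 18\%, \qquad (188-128)/188 \approx 32\%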

In Layer 2s, signature aggregation is a crucial innovation that enhances both transaction efficiency and cost-effectiveness. By combining multiple signatures into a single one, the overall data payload is significantly reduced, which lowers the costs associated with data availability on Layer 1. This advancement not only improves scalability but also reduces transaction costs for users, making the system more economical and efficient.

Signature Aggregation economics in Layer2s

When using an L2 service, the user incurs several costs, including a fee for the L2 operator, a cost based on network congestion, and the cost of data availability on L1.

From previous research on ”Understanding rollup economics from first principles”, we can outline the costs a user faces when interacting with an L2 as follows:

  • User fee = L1 data publication fee + L2 operator fee + L2 congestion fee
  • Operator cost = L2 operator cost + L1 data publication cost
  • Operator revenue = User fees + MEV
  • Operator profit = Operator revenue - Operator cost = L2 congestion fee + MEV

In the case of non-native account abstraction, an additional entity, the bundler, may introduce a fee for creating bundles of userops.

Considering the bundler, the costs and profits are extended as follows:

  • User fee = L1 data publication fee + L2 operator fee + L2 congestion fee + Bundler Fee
  • Bundler Cost = Quoted(L1 data publication fee + L2 operator fee + L2 congestion fee)
  • Bundler Revenue = User fee
  • Bundler profit = Bundler Revenue - Bundler cost = Difference between L1 and L2 costs and quoted prices from the bundler + Bundler fee
  • Operator Cost = L1 data publication fee + L2 operator fee
  • Operator profit = Operator revenue - Operator cost = L2 congestion fee + MEV

The bundler earns its fee from the user for its services, while the remainder of the user’s payment covers the L2 operator’s costs. If the user is unaware of the bundle size, estimating the actual cost of sending userops becomes challenging, potentially leading to the bundler charging higher fees than necessary to cover the operator cost.

Incentive Alignment in L2

The interaction between the bundler and L2 helps address this issue, as L2s are incentivized to keep user costs low due to competition. Overcharging users can drive them to switch to other L2s offering fairer prices.

Let’s redefine our model by introducing the operator. The user bids a value V to the bundler for inclusion in the next L2 block. The user aims to minimize the data publication fee, while the bundler seeks to maximize its fee or gain a surplus between L2 interaction costs and user fees.

The costs associated with creating a bundle and publishing it on-chain can be divided into two parts:

On-chain cost function: A bundler issuing bundle \mathbf{B} when the base fee is r expends a cost:

C_\text{on-chain}(\mathbf{B}, r) = F \times r + n \times S \times r

Aggregated cost function: The bundler has a cost function for aggregating n transactions in a single bundle \mathbf{B} with base fee of r:

C_\text{agg}(\mathbf{B}, r) = F' \times r + n \times S' \times r + n \times \omega

with S' < S the reduced size of a transaction, and F' > F the pre-verification gas, which now also covers publishing and verifying the single aggregated signature on-chain.
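The following sketch evaluates both cost functions for growing n to find the bundle size at which aggregation starts to pay off; the values chosen for F, F', S, S', \omega and the base fee r are assumptions for illustration only:

def cost_onchain(n, r, F=21000, S=7000):
    # C_on-chain(B, r) = F*r + n*S*r
    return F * r + n * S * r

def cost_aggregated(n, r, F_prime=150000, S_prime=4000, omega=1e-6):
    # C_agg(B, r) = F'*r + n*S'*r + n*omega, with S' < S and F' > F
    return F_prime * r + n * S_prime * r + n * omega

r = 20e-9  # base fee of 20 gwei expressed in ETH per gas (illustrative)
for n in (1, 5, 10, 20, 50, 100):
    plain, agg = cost_onchain(n, r), cost_aggregated(n, r)
    print(f"n={n:3d}  plain={plain:.6f} ETH  aggregated={agg:.6f} ETH  "
          f"{'aggregate' if agg < plain else 'do not aggregate'}")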

If the user can obtain a reliable estimate for n, they can calculate their cost using the estimateGas function available in most L2 solutions, which determines the cost necessary to ensure inclusion. With a good estimate of n, the user can bid accordingly without overestimating their bid for inclusion and without paying a higher preVerificationGas than needed. In the next section, we will explore various mechanisms to ensure a reliable estimation of n.

Layer2s operate an oracle

The oracle’s role is to monitor the mempool and estimate the number of transactions present. The process works as follows: the Layer 2 deploys an oracle to check the mempool and then informs the user about the number of transactions in the mempool. This enables the user to estimate their bid for inclusion in a bundle. The Layer 2 can request the bundler to include at least a specified number of transactions (n) in a bundle, or else the bundle will be rejected. Once the bundler gathers enough transactions to form a bundle, it sends the bundle to the Layer 2, which then forwards it to the mainnet as calldata for data availability.
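A minimal sketch of this flow, with a hypothetical in-memory mempool and an L2-side check that rejects undersized bundles (the names and the slack threshold are illustrative assumptions):

class MempoolOracle:
    """Hypothetical L2-operated oracle that reports the mempool size to users."""
    def __init__(self, mempool):
        self.mempool = mempool

    def estimated_userops(self):
        return len(self.mempool)

def l2_accepts_bundle(bundle, oracle, slack=0.8):
    """The L2 rejects bundles much smaller than the oracle's current estimate."""
    required = int(slack * oracle.estimated_userops())
    return len(bundle) >= required

mempool = [f"userop_{i}" for i in range(10)]
oracle = MempoolOracle(mempool)
print("oracle sees", oracle.estimated_userops(), "userops")
print("bundle of 5 accepted?", l2_accepts_bundle(mempool[:5], oracle))
print("bundle of 9 accepted?", l2_accepts_bundle(mempool[:9], oracle))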

Layer2s with shared sequencer

An interesting approach is to have multiple Layer 2 (L2) networks running a shared sequencer. This setup can provide a more accurate estimate of the mempool, as the participating networks reach agreement on its contents through the consensus facilitated by the shared sequencer.

In this configuration, different L2 networks operate independently but share a common sequencer. At regular intervals, these networks check the number of user operations (userops) in the shared mempool. The shared sequencer helps synchronize and aggregate data from these networks. Once they reach an agreement, the information is communicated to the user, allowing them to bid based on the number of userops present.

This approach offers several advantages. Firstly, it provides a decentralized method to determine the number of userops in the mempool, enhancing resistance to collusion. Secondly, it eliminates the single point of failure that could occur if only one system were managing the communication between the user and the mempool. Thirdly, the shared sequencer ensures consistency and reduces discrepancies between the different L2 solutions.

By leveraging the shared sequencer, this method ensures a robust and reliable system for estimating and communicating the state of the mempool to users, thus improving the overall efficiency and security of the process.

In the two oracle-based approaches described above, there is a potential attack vector: an adversary could generate multiple user operations in the mempool knowing that they will revert if aggregated together. The oracle then reports n transactions and requires a large bundle, but the bundler cannot construct it. This issue could stall the network for many blocks.

Layer2s operate their own bundler

In this proposal, the Layer 2 itself assumes the role of the bundler, while another entity handles the aggregation of signatures (this could be current bundler services). The process works as follows: the Layer 2 operates its own bundler, and users send their operations (userops) to the mempool. The Layer 2 selects some of these userops from the mempool and sends them “raw” to the aggregator, compensating the aggregator for aggregating the signatures. Once the aggregator produces the bundle, it sends it to the bundler, which then forwards it to the mainnet as calldata for data availability.

The main idea is that the Layer 2 handles the collection of userops and then outsources the aggregation to another entity. The Layer 2 pays for the aggregation and charges the user a fee for the service.

There are two different options:

  1. Flat Fee Model: The bundler (sequencer) selects some transactions and charges the user a flat fee. This flat fee is calculated similarly to current Layer 2 transactions, by predicting the future cost of L1 data publication. Alternatively, the Layer 2 could charge a flat fee based on the cost of bundling n aggregated userops; the Layer 2 still has to predict how many transactions will be present in the bundle it will construct in order to quote the user correctly. This can be done the same way it is done today, where the L2 charges the most competitive price it can, since it is in the Layer 2’s best interest to keep prices as competitive as possible for the user.

  2. Requesting Refunds: If the Layer 2 wants to enhance its credibility, it could enable automatic refunds. This would involve a mechanism that checks how many userops are published in a single block and whether those transactions could have been aggregated (a rough sketch of this check follows the list). If a userop that could have been aggregated wasn’t, and no automatic refund was issued, the user can request a refund. In this scenario, the Layer 2 could stake some assets, and if the refund isn’t provided, the user could enforce it, ensuring fairness and accountability.
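A rough sketch of the refund check in option 2, under the simplifying assumption that the per-userop saving from aggregation is (S - S') \times r as in the cost functions above (the data model and names are hypothetical):

def refund_owed(userops_in_block, r, S=7000, S_prime=4000):
    """Return per-user refunds for userops posted unaggregated even though
    they shared an aggregator and could have been bundled together."""
    refunds = {}
    by_aggregator = {}
    for op in userops_in_block:
        by_aggregator.setdefault(op["aggregator"], []).append(op)
    for aggregator, ops in by_aggregator.items():
        could_aggregate = len(ops) > 1
        for op in ops:
            if could_aggregate and not op["was_aggregated"]:
                refunds[op["sender"]] = (S - S_prime) * r
    return refunds

block = [
    {"sender": "0xA", "aggregator": "bls", "was_aggregated": False},
    {"sender": "0xB", "aggregator": "bls", "was_aggregated": False},
    {"sender": "0xC", "aggregator": None,  "was_aggregated": False},
]
print(refund_owed(block, r=20e-9))  # 0xA and 0xB are owed a refund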

Conclusion

In these two posts, we outlined the difficulties users experience when bidding to be included in the next bundle. In the first part, we presented the ERC-4337 model, explaining the costs a bundler incurs when posting a bundle on-chain and the associated off-chain costs. We also outlined the fee markets for bundlers and began discussing the problem of forming the bundle. Users struggle with bidding because they lack knowledge of how many transactions are present in the mempool at the time of bundling.

In the second part, we explained ERC-4337 and RIP-7560. We then discussed why signature aggregation is more likely to occur on Layer 2 solutions rather than directly on Layer 1, and showed how Layer 2 solutions could address the asymmetric knowledge users face in different ways. The first is to use oracles to signal to the user how many transactions are present in the mempool; with this approach users know how much they should bid and can force the bundler to build larger bundles. The second is to let multiple L2s running a shared sequencer agree on the state of the shared mempool. The third and simplest approach is for the L2 to act as a bundler, outsource the aggregation to a third party, and charge users a fee for it.

2 posts - 2 participants

Read full topic

Block proposer Timestamp Ordering in MCP for Timing Games

Published: Sep 03, 2024

View in forum →Remove

Thanks to @Julian and @denisa for the corrections, suggestions and discussions!

Multiple Concurrent Proposers (MCP) has recently become a significant topic of discussion within the community, particularly following the introduction of the BRAID protocol and the rise of DAG consensus. Max’s argument in favor of MCP for Ethereum centers on the monopoly created by leader-based consensus mechanisms, where the leader for a given slot is granted substantial monopolistic power. This concentration of power leads to issues such as short-term censorship of some transactions.

In leader-based consensus, the designated leader for each slot has the exclusive authority to propose blocks, which allows them to exploit their position for profit maximization, such as through transaction reordering or frontrunning. MCP aims to mitigate these issues by decentralizing the block proposal process, reducing the influence any single proposer can exert over the network during a given slot.

Multiple Concurrent Proposers Economic Order

Let n represent the number of validators in the network, of which a subset of k < n validators each maintains a local chain. At some step, the protocol needs to take the union of all local blockchains at slot i, and an ordering rule must be applied across the transactions of the local chains.

Deterministic Block Ordering: A deterministic rule is applied to order the blocks and their transactions. In the context of the MEV-SBC ’24 event, Max proposes two approaches (a combined sketch follows the list):

  1. Sorting by Priority Fee: Blocks are sorted based on the priority fee of transactions. MEV (Maximal Extractable Value) taxes can be applied, where a percentage of the priority fee is extracted and redistributed by the application. This approach is detailed in the proposal “Priority is All You Need”.
  2. Execution Flags: Transactions can set an “execution flag” that indicates specific actions, such as interacting with a particular liquidity pool (e.g., trading ETH/USDC in the UNIv5 pool). When the block ordering rule encounters a transaction with such a flag, it pulls all flagged transactions attempting to interact with that pool and executes them as a batch.
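A minimal sketch combining the two rules, under an illustrative transaction model (a dict with a priority_fee and an optional execution_flag); this is one possible interpretation, not a specification:

# Sketch of the two deterministic ordering rules above (data model is hypothetical).
def order_transactions(local_chains):
    """Merge transactions from all local chains at a slot and order them."""
    txs = [tx for chain in local_chains for tx in chain]
    # Rule 2: pull transactions flagged for the same pool into contiguous batches.
    flagged, rest = {}, []
    for tx in txs:
        if tx.get("execution_flag"):
            flagged.setdefault(tx["execution_flag"], []).append(tx)
        else:
            rest.append(tx)
    # Rule 1: within each group, sort by priority fee (descending).
    ordered = []
    for pool, batch in flagged.items():
        ordered.extend(sorted(batch, key=lambda t: t["priority_fee"], reverse=True))
    ordered.extend(sorted(rest, key=lambda t: t["priority_fee"], reverse=True))
    return ordered

chains = [
    [{"id": 1, "priority_fee": 5, "execution_flag": "ETH/USDC"},
     {"id": 2, "priority_fee": 9, "execution_flag": None}],
    [{"id": 3, "priority_fee": 7, "execution_flag": "ETH/USDC"}],
]
print([tx["id"] for tx in order_transactions(chains)])  # flagged batch first, then the rest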

Timing Games with Frontrunning Incentive

Let p be a proposer participating in the MCP protocol, who is responsible for proposing a block in their local chain during slot i. We acknowledge that there exists an inherent delay and processing time required to propose this block. Specifically, the protocol permits a maximum allowable delay of \Delta time units before p incurs penalties.

p may strategically opt to delay their block proposal until \Delta - \epsilon (where \epsilon > 0 ) time units. This delay enables p to potentially exploit a frontrunning opportunity by observing and computing a partial order of the transactions submitted by other proposers. By placing their block proposal just before the misslot penalty (no block has been proposed and none is going to be accepted for slot i), p can include transactions with higher gas fees, a situation that provides a clear incentive for frontrunning and the main motivation for the timing games discussed in this post.

Under the current deterministic protocol rules, such a timing strategy is incentivized as it allows proposers to maximize their rewards through manipulation of transaction order. This situation underscores the need for an effective mechanism. However, a more robust solution may involve revisiting the transaction ordering rules to eliminate this concrete incentive for timing games that lead to such exploitative behaviors, thereby ensuring a fairer and more secure protocol.

Partially Ordered Dataset (POD)

One of the main concerns regarding MCP is the absence of a clearly defined method for determining the order of transactions. It remains uncertain how the sequence and the underlying criteria for ordering will be established, as well as how the influence of clients will be exercised—whether through mechanisms such as auctions, latency considerations, or the risk of spam attacks, as highlighted by Phil at SBC '24.

The Common Prefix team has conducted a thorough analysis of various consensus protocols, including leader-based, inclusion-list, and leaderless consensus models, with a focus on their resistance to censorship. As a result of their research, they developed the concept of a Partially Ordered Dataset. In this model, the order of transactions is determined by the timestamps recorded by the clients, which may lead to a lack of strict ordering when two transactions are recorded simultaneously. The implications of relinquishing strict ordering in transaction processing have not been extensively explored in the existing literature, or at least I am not aware of any comprehensive studies on the matter.

A POD is a finite sequence of pairs \{(r, T), …, (r’, T’)\} s.t. r is a round (slot) and T a set of transactions.

A round is perfect r_{perf} if no new transactions can appear with recorded round r_{rec} \leq r_{perf}, which means there is no conflict in the ordering before r_{perf}.

A POD protocol exposes the following methods.

  • input event write(tx) : Clients call write(tx) to write a transaction tx .
  • output event write_return(tx, π) : after write(tx) the protocol outputs write_return(tx, π), where π is a record certificate.
  • input event read_perfect(): Clients call read_perfect() to read the transactions in the bulletin.
  • output event read_perfect_return(r, D, Π) : after read_perfect() the protocol outputs read_perfect_return(r, D, Π), where r is a round, called the past-perfect round, D is a POD, and Π is a past-perfect certificate. For each entry (r', T) in D, we say that transactions in T became finalized at round r'.
  • input event read_all() : returns all transactions up to the current round without past-perfection guarantees, hence it can return faster than read_perfect().
  • output event read_all_return(D, Π)
  • identify(π, Π) → P' ⊆ P : Clients call identify(π, Π) → P' ⊆ P to identify the set P' of parties who vouched for the finalization of a transaction, where Π is a POD and π is the certificate returned by write_return(tx, π).

The properties of Liveness and Security are detailed in the original work, and the following will be utilized in subsequent arguments:

Fair punishment: No honest replica gets punished as a result of malicious operation. If identify(π, Π) → P', where π is a record certificate for transaction tx and Π is a past-perfect certificate for a POD D, then π and Π can only have been created if all parties in P' signed tx and D.

The construction of the POD is as follows: the client sends a transaction to all the validators in the network and has to wait for n - f signatures confirming that the transaction has been received by the network, where f is the number of allowed Byzantine validators. Once the client has received the signatures, it records the median of the rounds attached to them, since latency means validators receive the transaction at slightly different times.
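A minimal sketch of this write path: the client collects n - f signed acknowledgements and records the median of the rounds they report (the validator interface and data shapes are hypothetical):

from statistics import median

class Validator:
    def __init__(self, name, local_round):
        self.name, self.local_round = name, local_round
    def acknowledge(self, tx):
        # Each acknowledgement carries the validator's local round and a signature.
        return {"round": self.local_round, "signature": f"sig({self.name},{tx})"}

def record_transaction(tx, validators, f):
    """Client-side write: collect n - f acknowledgements and record the median round."""
    n = len(validators)
    acks = []
    for v in validators:
        acks.append(v.acknowledge(tx))
        if len(acks) >= n - f:
            break
    recorded_round = median(ack["round"] for ack in acks)
    certificate = [ack["signature"] for ack in acks]
    return recorded_round, certificate

validators = [Validator(f"v{i}", local_round=100 + i) for i in range(4)]
print(record_transaction("tx1", validators, f=1))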

For reading the set of transactions of some round, the client has two options:

  • Believe in synchrony on the txs received: Request all the recorded transactions from the validators for some specific round r. Once the client obtains n - f signatures for all the transactions, it computes the median of the set of transactions based on their timestamps.
  • Past-perfect guarantees, no-synchrony believer: Take r_{perf} to be the minimum of the received r values; then no transaction with a lower timestamp can appear. The client then takes the union of all the upcoming transactions and waits some \delta time to ensure, through the gossip mechanism, that there is no lower r_{perf} and no further transaction for the upcoming round.

PODs mitigating MEV in MCP

Adopting Partially Ordered Datasets (PODs) as the primary data structure for MCP introduces a novel approach that hasn’t been extensively studied, particularly regarding its potential to mitigate the types of MEV games previously described.

In PODs, transactions are ordered deterministically based on their timestamps. While this approach necessitates handling cases where multiple transactions share the same timestamp (or evaluating the likelihood of such occurrences), it fundamentally alters the dynamics of the frontrunning incentive of the timing games previously described against transactions in other proposers’ blocks.

Consider a scenario in slot m where a malicious proposer attempts to front-run or sandwich another transaction. Under the previous deterministic ordering, based on auctions and priority fees, such attacks were feasible because proposers could manipulate their position in the ordering by outbidding others or exploiting latency. With timestamp-based ordering as implemented in PODs, this strategy changes significantly. An open question remains which strategies can still be applied with PODs or timestamp ordering to extract MEV, and whether they are worse for the welfare of the network than the game described above.

In this new setup, being the last proposer in a slot would actually place that proposer in the final position within the transaction order, limiting their ability to engage in front-running or sandwiching, assuming all nodes are honest. Instead, they would only be able to perform back-running, which is generally considered less harmful than front-running or sandwiching. This shift in ordering strategy could effectively reduce the risk of these more dangerous forms of MEV exploitation.

If a malicious validator attempts to manipulate the order of transactions by bribing proposers, slashing should be applied to the validator. By imposing such penalties, the protocol discourages malicious behavior and ensures that the integrity of the transaction ordering process is maintained. One of the next open questions is how to detect bad behaviour in the transaction record; applying Tukey’s method and treating outliers as malicious records is one possible option.
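As a rough illustration of how Tukey’s method could flag suspicious records, here is a sketch using Tukey’s fences (the 1.5 x IQR rule); treating flagged records as malicious is exactly the assumption being discussed, not an established rule:

from statistics import quantiles

def tukey_outliers(recorded_rounds, k=1.5):
    """Flag recorded rounds outside Tukey's fences (Q1 - k*IQR, Q3 + k*IQR)."""
    q1, _, q3 = quantiles(recorded_rounds, n=4)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [r for r in recorded_rounds if r < low or r > high]

rounds = [100, 101, 101, 102, 103, 100, 140]  # one suspiciously late record
print(tukey_outliers(rounds))  # -> [140]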

However, the situation is more complex than it appears. The shift to a new game for validators, where transaction ordering is influenced by latency, introduces additional challenges. Validators may now engage in latency games, where geographical proximity to other validators or network nodes becomes a crucial factor in gaining an advantage. To mitigate this, it is essential to ensure that validators are well decentralized across different regions.

Decentralizing validators geographically helps reduce the impact of latency-based advantages. Validators clustered in the same location could lead to centralization risks, where a few validators might dominate the network due to their low-latency connections. This centralization could undermine the fairness of transaction ordering and potentially reintroduce the risk of censorship.

Moreover, validators are incentivized to avoid sharing the same location, because doing so decreases the uniqueness of the transactions they can access for possible back-running opportunities. The more validators operate from the same region, the fewer unique transactions each can capture, leading to lower profits from transaction fees, as these would have to be split among more validators. This dynamic encourages validators to spread out, fostering a more decentralized and resilient network that is better protected against latency-based games and the centralization of power. However, this incentive is still weak, and future work will focus on how to provide stronger incentives against centralization.

3 posts - 2 participants

Read full topic

Data Structure Interpreting MPT branch node values

Published: Aug 31, 2024

View in forum →Remove

Consider a branch node for an MPT.
Suppose the 17th item in the branch node list is supposed to be NULL, because the branch node is not a “terminator” node. Ethereum documentation says NULL is encoded as the empty string.
Suppose the 17th item in the list is supposed to be a value because the branch node is a terminator node. Suppose this value happens to be the empty string.
How to distinguish these two cases?
Note this question should be independent of RLP encoding, which only concerns how we encode the list. I’m asking what’s in the list itself, before considering how the list is subsequently encoded.
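To make the ambiguity concrete, here is a sketch of the two raw branch-node lists in question, before any RLP encoding (the byte-string representation is illustrative):

# A branch node is a 17-item list: 16 child slots plus a value slot.
EMPTY = b""

# Case 1: non-terminator branch node, 17th item is NULL (encoded as the empty string).
branch_without_value = [EMPTY] * 16 + [EMPTY]

# Case 2: terminator branch node whose stored value happens to be the empty string.
branch_with_empty_value = [EMPTY] * 16 + [EMPTY]

print(branch_without_value == branch_with_empty_value)  # True: the raw lists are indistinguishable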

1 post - 1 participant

Read full topic

Layer 2 Exploring Verifiable Continuous Sequencing with Delay Functions

Published: Aug 30, 2024

View in forum →Remove

Thanks to Conor, Lin and Swapnil from the Switchboard team, Cecilia and Brecht from the Taiko team, Alex Obadia, Justin Drake, Artem Kotelskiy and the Chainbound team for review.

Abstract

Agreeing on time in a decentralized setting can be challenging: wall clocks may drift between machines, agents can lie about their local times, and it is generally hard to distinguish between malicious intent and just unsynchronized clocks or network latencies.

Ethereum can be thought of as a global clock that ticks at a rate of 1 tick per ~12 seconds. This tick rate is soft-enforced by the consensus protocol: blocks and attestations produced too early or too late will not be considered valid. But what should we do in order to achieve a granularity lower than 12 seconds? Do we always require a consensus protocol to keep track of time?

We want to explore these questions in the context of untrusted L2 sequencers, who don’t have any incentive to follow the L2 block schedule that is currently maintained by trusted L2 sequencers, and will likely play various forms of timing games in order to maximize their revenue.

In this article, we introduce mechanisms to enforce the timeliness, safety and non-extractive ordering of sequencers in a decentralized rollup featuring a rotating leader mechanism, without relying on additional consensus, honest majority assumptions or altruism. To do so, we use three key primitives:

  1. Client-side ordering preferences,
  2. Ethereum as a global 12s-tick clock,
  3. Verifiable Delay Functions.

Lastly, we show the case study of MR-MEV-Boost, a modification of MEV-Boost that enables a variation of based preconfirmations, where the same construction explored can be applied to reduce the timing games of the proposer.

Rationale

Rollup sequencers are entities responsible for ordering (and in most cases, executing) L2 transactions and occasionally updating the L2 state root on the L1. Currently, centralized sequencers benefit from the reputational collateral of the teams building them to maintain five properties:

  • Responsiveness: responding to user transactions with soft commitments / preconfirmations in a timely manner. We want to highlight that this definition includes the timely broadcast of unsafe heads on the rollup peer-to-peer network.
  • Non-equivocation (safety): adhering to preconfirmation promises when submitting the ordered batch on the L1, which is what will ultimately determine the total ordering of transactions.
  • Non-extractive ordering: not extracting MEV from users by front-running or sandwiching, or by accepting bribes for front-running privileges.
  • Liveness: posting batches to L1 and updating the canonical rollup state regularly.
  • Censorship-resistance: ensuring that no valid transactions are deliberately excluded by the sequencer regardless of the sender, content, or any external factors.

In this piece we are concerned with how the first four properties can be maintained in a permissionless, untrusted setting. Note that censorship-resistance is ensured by construction: by introducing multiple organizationally distinct sequencers in different geographies and jurisdictions we have a strong guarantee that any transaction will be accepted eventually.

Consider a decentralized sequencer set S := \{S_1,\dots,S_n\} with a predictable leader rotation mechanism and a sequencing window corresponding to a known amount of L1 slots. For simplicity, let’s assume S_{i} is the current leader and S_{i+1} is the next one. At any point in time, only one sequencer is active and has a lock over the rollup state.

Here are two strategies that sequencer S_i can explore to maximize its expected value:

1. Delaying the inclusion of transactions

Suppose a user sends a transaction to S_i at a certain L2 slot. Then, the sequencer could wait some time before inserting the transaction into a block in order to extract more MEV with sandwich attacks in collaboration with searchers or by directly front-running the user. In particular, since MEV grows superlinearly with time, it’s not in the sequencer’s best interest to commit early to a transaction. The worst case scenario would be the sequencer delaying inclusion until the sequencer rotation ^1.

2. Not publishing unsafe heads in the rollup peer-to-peer network

In this setting the sequencer has low incentives to publish the unsafe heads in the rollup network: since L2 blocks are signed by the sequencer (e.g. in Optimism), they act as a binding commitment which can be used by users to slash it in case of equivocations.

This has a major downstream consequence on the UX of the rollup: both the next sequencer and users need to wait until a batch is included to see the latest transactions. For users it means they won’t know the status of their transactions in a timely manner, while the next sequencers risks building blocks on invalid state.

We will now explore mechanisms to mitigate these behaviours and introduce slashing conditions for sequencers.

Primitive 1: Transaction Deadlines

We introduce a new EIP-2718 transaction type with an additional field:

  • deadline - uint256 indicating the last L2 block number for which the transaction is considered valid.

This idea is not entirely new. For instance, the LimeChain team has explored this in their Vanilla Based Sequencing article. However, in our variant the deadline field is signed as part of the transaction payload and it is not expressed in L1 slots.

The reasoning behind it is that the sequencer cannot tamper with either the deadline field or block.number (because it is a monotonically increasing counter), and therefore it is easy to modify the L2 derivation pipeline to attribute a fault in case the sequencer inserts the user transaction in a block where block.number > deadline.
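A minimal sketch of how a derivation pipeline could attribute this fault (the transaction and block shapes are hypothetical, not the actual EIP-2718 encoding):

from dataclasses import dataclass

@dataclass
class DeadlineTx:
    sender: str
    payload: bytes
    deadline: int  # last L2 block number for which the tx is valid (signed by the user)

def check_inclusion(tx: DeadlineTx, included_block_number: int) -> bool:
    """Return True if inclusion respects the user's deadline; False marks a sequencer fault."""
    return included_block_number <= tx.deadline

tx = DeadlineTx(sender="0xabc", payload=b"...", deadline=1_000)
print(check_inclusion(tx, included_block_number=999))    # True: included in time
print(check_inclusion(tx, included_block_number=1_001))  # False: attributable sequencer fault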

This approach mitigates problem #1. However, it does not in any way solve the responsiveness issue, since sequencers can still delay proposing the block in order to extract more MEV.

Primitive 2: Ethereum as a Global Clock

A simple rotating sequencer design would be one where S_i loses the power to settle batches after the end of its sequencing window W_i, which is dictated by an L1 smart contract. However, the sequencer still needs some time to post the batch with the latest L2 blocks. We therefore introduce an inclusion window that is shifted n \geq 1 slots ahead of W_i, where S_i still has time to land rollup batches on L1 with the last L2 blocks, even if the responsibility of sequencing has shifted to S_{i+1}.

In case of any safety fault, the sequencer should be slashed. If the sequencer has not managed to post all their assigned L2 blocks by the end of its inclusion window, it will forego all associated rewards. Optionally, there could also be penalties for liveness faults. This also helps with the problem of collaboration with the next sequencer, by ensuring that the latest blocks will be known to it within n\cdot12 seconds. Ideally, we’d like to keep n as small as possible with a value of 1.

There are still some potential issues here: getting a transaction included on Ethereum is probabilistic, meaning that you can’t be sure that a transaction you send will actually be included in time. In this context it means that the last batch sent by an honest leader may not be included in the L1 by the end of its inclusion window. This can be helped with two approaches:

  • A “based” setup, where the sequencer is also the L1 block proposer and can include any transactions right up to the point they have to propose, or
  • Using proposer commitments with a protocol like Bolt. We expand more on this in the ”Further work” section below.

Note that we assume there is a registry smart contract that can be consulted for the currently active sequencer, i.e. it implements some leader election mechanism and takes care of sequencer bonds along with rewards and penalties. It is up to the rollup governance to decide whether the registry can be fully permissionless or if it should use an allowlist. In case of any misbehaviour, governance would be used to temporarily or permanently remove the sequencer from the allowlist.

Primitive 3: Verifiable Delay Functions

Verifiable Delay Functions (VDFs henceforth) are a cryptographic primitive that allows a prover to show a verifier that a certain amount of time was spent running a function, and do it in a way that the verifier can check the result quickly.

For instance, consider a cryptographic hash function h and define the application

H(n,s) := (h \circ \underset{n\ times}\dots \circ h)(s),

where s is a byte array and n is a natural number.

Composing (or chaining) hash functions like SHA-256 cannot be trivially sped up using parallel computations, but the solution lacks efficient verification ^2 as the only way to verify the result is to recompute the composition of functions. This solution appeared as a naïve VDF in Boneh’s paper, and for this reason it is referred to as weak.
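A minimal sketch of this hash-chain construction, where verification simply recomputes the chain, which is exactly what makes it a weak VDF:

import hashlib

def hash_chain(seed: bytes, iterations: int) -> bytes:
    """H(n, s): apply SHA-256 n times; inherently sequential."""
    out = seed
    for _ in range(iterations):
        out = hashlib.sha256(out).digest()
    return out

def verify(seed: bytes, iterations: int, claimed: bytes) -> bool:
    """Weak VDF: the only way to verify is to redo the same sequential work."""
    return hash_chain(seed, iterations) == claimed

seed = b"previous block hash"
proof = hash_chain(seed, 100_000)
print(verify(seed, 100_000, proof))  # True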

Another example of VDF is iterated squaring over a group of hidden order, with which it is possible to construct time-lock puzzles. We’ll explore the usage of the latter in the next sections.

Why VDFs tho?

VDFs are very useful in the context of sequencing because they can act as a proof of elapsed time for the duration of the block (specifically block_time / max_adversary_speedup, see “Security Considerations”). Consider the following algorithm for the block production pipeline:

  1. At the beginning of L2 block N, the sequencer starts computing a VDF that takes an L2 block time (or slightly less) to compute for honest players, using the previous block hash as its input.
  2. After the end of the L2 slot the sequencer builds a block B_N where the header contains the result of the VDF, denoted V_N. We call this sealing a block. This means the block hash digest contains V_N.

This algorithm has the nice property of creating a chain of VDF computations, in some sense analogous to Solana’s Proof of History from which we inherit the security guarantees. What does this give us in the sequencer context? If we remember that a sequencer has a certain deadline by which it has to post batches set by the L1 slot schedule, we can have the L1 enforce that at least some number of L2 blocks need to be settled. This has two downstream results:

  • The sequencer must start producing and sealing blocks as soon as their sequencing window starts. Pairing this with the transaction deadline property results in an upper bound of time for when a transaction can be confirmed. If they don’t follow the block schedule set by the VDF and the L1, they risk not being able to post any batch.
  • We mitigate problem #2 by taking away the incentive to withhold data (not considering pure griefing attacks): this is because the sequencer cannot tamper with an existing VDF chain, which would require recomputing all the subsequent VDFs and result in an invalid batch.

In general, for the sake of this post we will consider a generic VDF, provided as a “black box”, while keeping the hash-chain example in mind, which currently has stronger guarantees against ad-hoc hardware such as ASICs. See “Security Considerations” below for more insights.
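Treating the VDF as a black box, here is a rough sketch of the sealing loop described above, where each block header commits to the VDF output computed from the previous block hash (the header layout is illustrative):

import hashlib

def vdf(previous_block_hash: bytes, iterations: int) -> bytes:
    """Black-box VDF stand-in: the SHA-256 chain sketched above."""
    out = previous_block_hash
    for _ in range(iterations):
        out = hashlib.sha256(out).digest()
    return out

def seal_block(previous_block_hash: bytes, txs, iterations: int) -> dict:
    """Produce block N: compute V_N over the previous block hash, then seal it in the header."""
    v_n = vdf(previous_block_hash, iterations)
    header = previous_block_hash + v_n + hashlib.sha256(b"".join(txs)).digest()
    return {"hash": hashlib.sha256(header).digest(), "vdf_output": v_n, "txs": txs}

genesis = hashlib.sha256(b"genesis").digest()
block_1 = seal_block(genesis, [b"tx1", b"tx2"], iterations=50_000)
# Chained: reorging block 1 would force recomputing block 2's VDF as well.
block_2 = seal_block(block_1["hash"], [b"tx3"], iterations=50_000)
print(block_2["hash"].hex())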

Proving correct VDFs

If a sequencer provides an invalid VDF in an L2 block header it should be slashed, and ideally we’d like to ensure this at settlement time. However, recalculating a long hash chain on the EVM is simply unfeasible due to gas costs.

How, then, to show that the number of iterations of the VDF is invalid? One way could be to enforce it optimistically (or at settlement, in the case of ZK-rollups) by requiring a valid VDF chain output in the derivation pipeline of the rollup. In case of equivocation in an optimistic rollup, the sequencer can be challenged using fraud proofs.

Hardware requirements

Since by definition VDFs cannot be sped up using parallelism, computing a VDF only requires a single CPU core, and that is exactly what our block production algorithm uses.

This makes it different and way more lightweight compared to most Proof-of-Work consensus algorithms such as Bitcoin’s which requires scanning for a value such that, when hashed with SHA-256, the hash begins with a certain number of zero bits.

It’s also worth noting that modern CPUs are optimized to compute the SHA-256 hash function. Since 2016, starting with the Goldmont family of chips, Intel has offered SHA Extensions on selected models of the Core and Xeon line-ups, introducing three new instructions specialized in computing different steps of the hash function more efficiently.

Lastly, single-core performance has stagnated over the years, indicating that there is only a minor benefit in investing in the latest generation of CPUs, which lowers the system requirements.

Case Study: MR-MEV-Boost

Multi-Round-MEV-Boost (MR-MEV-Boost) is a modification of MEV-Boost that enables based preconfirmations by running multiple rounds of MEV-Boost auctions within a single L1 slot. Each round outputs a based rollup block built by L2 block builders. As shown in the article, this approach inherits the L1 PBS pipeline and, as a result, mitigates some of the negative externalities of based preconfirmations.

Like MEV-Boost, this fork relies on the opted-in proposer to be an auctioneer which ends the sealed auction by calling the getHeader (Builder-API) endpoint of the relays. After having signed the sealed bid, the getPayload (Builder-API) is called by the proposer to receive the actual content of the winning bid and to publish the block in the based rollup network.

In the original protocol, the end of the auction usually coincides with the end of the L1 slot (more precisely, around one second after it); delaying it results in a high risk of not being able to broadcast the block in time to gather all the needed attestations, forgoing all its associated rewards. As a result, a block is consistently proposed every twelve seconds, enforced by Ethereum consensus.

In contrast, given that it consists of multiple rounds happening during the slot, in MR-MEV-Boost an untrusted proposer is incentivized to end the auction seconds later or earlier ^{3} depending on the incoming bids, in order to extract more MEV. In the worst case, MR-MEV-Boost will reflect L1 block times. Another consequence is an inconsistent slot time for the based rollup. This can be seen as a much more serious form of timing games.

In the article, the discussed possible solutions to this problem are the following:

  1. Introduce user incentives: if users determine that a proposer is misbehaving, they stop sending transactions to said proposer.
  2. Introduce a committee (consensus) to attest to timeliness and maintain slot durations.

We now argue that a trustless solution that strongly limits the proposer without requiring actions from the user does exist, and it leverages the same construction we used for the VDF-powered block production algorithm in the context of decentralized sequencing.

The construction is fairly simple and consists of computing a VDF that lasts x := 12/r seconds, where r is the number of rounds in an L1 slot (the L2 block time). The proposer must calculate this VDF using the previous based rollup block hash as public input and, at the end of the round, send it along with the body of a modified getPayload call. The output of the VDF is then stored in the rollup block header and, if invalid, can result in slashing the proposer after a successful fraud proof.

With this approach the amount of time a proposer can delay the end of a round is limited: for instance, if the first auction ends one second late, then during the last round the proposer will only be able to provide two seconds of VDF computation instead of three, resulting in an invalid block and consequent slashing ^4. This is because starting to compute a valid VDF requires the previous block hash as input, which implies a sealed block.

Security Considerations

Are VDFs really safe for this purpose?
Suppose an adversary owns hardware which is capable of computing the VDF faster compared to the baseline of honest players without getting noticed (otherwise the number of iterations for the VDF is adjusted by the protocol). Then, the faster the attacker (max_adversary_speedup), the less our construction would constrain the space of its possible actions. In particular, the sequencer would be able to commit a bit later to blocks and be able to re-organize some of them for extracting more value.

However, given we don’t need the “fast proving” property, hash-chains have proven to be robust with Solana’s Proof of History and will continue to be at least in the short-term. Also, our security requirements will not be as strict as something that needs to be enshrined in Ethereum forever.

Some solutions and directions to get stronger safety guarantees can be found in the ”Further work” section below.

Current limitations

Sequencer credibility

As with many new services which leverage (re)staking, the credibility of the sequencer has an upper bound which is the amount it has staked: if a MEV opportunity exceeds that, then a rational untrusted actor would prefer to get slashed and take the MEV reward.

Leader rotation can be a critical moment

As discussed in the batcher and registry smart contract section, the inclusion window is shifted forward by at least one slot compared to the sequencing window. This is needed because of the time required to settle the last batch before rotating leader, but it leaves an additional slot time of at least 12 seconds in which the sequencer has room to re-organize the last L2 blocks before publishing them on the rollup peer-to-peer network. As a consequence, liveness is harmed temporarily because S_{i+1} might be building blocks on invalid state if it starts to sequence immediately.

Lastly, one additional slot might not be enough to settle a batch according to recent data on slot inclusion rates for blobs. This can be mitigated by leveraging new inclusion preconfirmation protocols, as explained below.

Sequencer last-look

Our construction makes it very difficult for a sequencer to reorg a block after it has been committed to, but it doesn’t solve front-running in its entirety. In particular, the sequencer may extract value from users’ transactions while building the block with the associated deadline field. A possible solution, along with its limitations, is explored in the section below.

Conclusion

In this article, we explored mechanisms to enforce the timeliness, safety, and non-extractive ordering of untrusted L2 sequencers in a decentralized rollup environment.
The primitives discussed ensure that sequencers can act more predictably and fairly, mitigating issues such as transaction delays and data withholding. Moreover, these techniques can reduce trust assumptions for existing single-sequencer rollups, aligning with the concept of rollups functioning as “servers with blockchain scaffolding”. These findings provide a robust framework for the future development of decentralized, secure rollup architectures.

Further work

Trusted Execution Environments (TEEs) to ensure the sequencer is not running an ASIC

A Trusted Execution Environment is a secure area of a CPU, often called enclave, that helps the code and data loaded inside it be protected with respect to confidentiality and integrity.
Its usage in blockchain protocols is an active area of research, with the main concerns being trusting the hardware manufacturer and the various vulnerabilities found in the past of some implementations (here’s the latest).
Depending on the use case these trust assumptions and vulnerabilities might be a deal-breaker. However, in our setting we just need a guarantee that the sequencer is not using specialized hardware for computing the VDF, without caring about possible leakage of confidential data from the enclave or manipulation of the wall clock / monotonic clock.

Adapt existing anti-ASICs Proof-of-Work algorithms

The Monero blockchain, launched in 2014 as a privacy and untraceable-focused alternative to Bitcoin, uses an ASIC-resistant Proof-of-Work algorithm called RandomX. Quoting their README:

RandomX is a proof-of-work (PoW) algorithm that is optimized for general-purpose CPUs. RandomX uses random code execution (hence the name) together with several memory-hard techniques to minimize the efficiency advantage of specialized hardware.

The algorithm however leverages some degree of parallelism; it is an interesting research direction whether it can be adapted into a single-core version, leading to a new weak VDF.
This approach, while orthogonal to using a TEE, can potentially achieve the same result which is having a guarantee that the sequencer is not using sophisticated hardware.

Time-lock puzzles to prevent front-running

As mentioned in the “Current limitations” section, our construction doesn’t address the problem of the sequencer front-running users. Luckily, this can be solved by requiring users to encrypt sensitive transactions using time-lock puzzles, as we will show in more detail in a separate piece. However, this solution doesn’t come for free: encrypted transactions or encrypted mempools can incentivize spamming and statistical arbitrage, especially when protocol fees are not very high.

Inclusion Preconfirmations and Data Availability layers

Batch submissions to an L1 contract could be made more efficient by leveraging some of the new preconfirmation protocols, such as Bolt by Chainbound or MEV-Commit by Primev, to get guaranteed inclusion in the same slot. In particular, sequencing windows should end precisely in the slot before the one where the proposer is running the aforementioned protocols, in order to leverage inclusion commitments.

Additionally, the batch could be posted to an efficient and lightweight Data Availability layer run by proposers, enforcing a deadline a configurable number of seconds into the slot, after which the sequencer would be slashed.


Footnotes

  1. More precisely, if an operator controls multiple subsequent sequencers it could delay inclusion until the last sequencer rotation.
  2. In Solana, the verification of a SHA-256 chain is actually parallelised but requires dividing a block associated to a ~400ms computation into 32 shreds which are forwarded to the rest of the validators as soon as they’re computed. As such, verification is sped up by computing the intermediate steps of the hash chain in parallel.
  3. In general, the proposer will end some rounds earlier as a side effect of delaying other rounds. For example, it could force a longer last round to leverage possible L1 <> L2 arbitrage opportunities.
  4. There is an edge case where the proposer might not be able to compute all the VDFs even if honest, and it is due to the rotation mechanism: since the public input of the VDF must be the previous rollup block hash, during rotation the next leader will need some time before hearing the block from the rollup network, potentially more than 1s. This could lead the next proposer to be late in computing the VDFs.
    To reduce this risk, the next proposer could rely on various parties to receive this information such as streaming services and/or trusted relays.

1 post - 1 participant

Read full topic

Sharding PeerDas Documentation

Published: Aug 30, 2024

View in forum →Remove

Joint work with @b-wagn, A Documentation of Ethereum’s PeerDAS

The long-term vision of the Ethereum community includes a comprehensive data availability protocol using polynomial commitments and tensor codes. As the next step towards this vision, an intermediate solution called PeerDAS is about to be integrated to bridge the way to the full protocol. With PeerDAS soon becoming an integral part of Ethereum’s consensus layer, understanding its security guarantees is essential.

The linked document aims to describe the cryptography used in PeerDAS in a manner accessible to the cryptographic community, encouraging innovation and improvements, and to explicitly state the security guarantees of PeerDAS. We focus on PeerDAS as described in Ethereum’s consensus specifications [Eth24a, Eth24b].

Our intention is two-fold: first, we aim to provide a description of the cryptography used in PeerDAS that is accessible to the cryptographic community, potentially leading to new ideas and improvements that can be incorporated in the future. Second, we want to explicitly state the security and efficiency guarantees of PeerDAS. In terms of security, this document justifies the following claim:
Theorem 1 (Main Theorem, Informal): Assuming plausible cryptographic hardness assumptions, PeerDAS is a secure data availability sampling scheme in the algebraic group model, according to the definition in [HASW23].

We hope to receive feedback from the community to make further improvements to this document.

1 post - 1 participant

Read full topic

Layer 2 Accessible Encryption for Ethereum Rollups with Fairomon

Published: Aug 28, 2024

View in forum →Remove

Co-authored by @pememoni and @shakeshack. With special thanks to the rest of the Fairblock team!

Fairomon is a special fairy type pokemon that combines the work of Fairblock and Monomer - a framework that enables builders to create Ethereum rollups with built-in encryption with minimal lift.

Background

Monomer is a rollup framework that enables Cosmos SDK app chains to be deployed as rollups on Ethereum. Internally, Monomer is built on top of the OP stack relying on it for chain derivation and settlement while supporting an ABCI interface for a Cosmos SDK app chain to be deployed on top. Fairblock provides threshold MPC encryption that can be utilized in Monomer rollups through a module built for Cosmos SDK chains.

Fairblock enables blockchain developers to integrate pre-execution encryption. This pre-execution encryption is made possible through their threshold MPC network that delivers identity-based encryption (IBE), and soon custom encryption schemes, to partner chains. Fairblock’s MPC network, called Fairyring, generates threshold encryption and decryption keys for each supported Monomer rollup, while the rollups themselves receive and process encrypted transactions natively.

How it Works

FairyRing uses decentralized key generation to issue a master secret key (MSK) for each epoch (every 100 blocks). From each MSK, a master public key (MPK) can be derived. Once derived, the MPK is relayed to a Monomer chain, where it is used to encrypt each requested transaction. In parallel, the MSK is split into equal shares, one for each FairyRing validator participating in the network. For each decryption request, FairyRing validators use their share of the MSK to collectively derive the associated private keys.

In threshold IBE, users or developers can program the decryption conditions for transactions. Onchain conditions that could trigger decryption could be a block height, the price of an asset, a smart contract call, verification of a ZK proof, or the end of a governance poll, for example. Identity-based encryption allows for the programmability of decryption and allows for decryption to be triggered by “IDs,” which can be either onchain conditions or on/offchain identifiers or attributes that certain wallets prove ownership of.

What’s Possible with Fairomon

MPC encryption can make a number of previously inaccessible applications possible within rollups, most notably encrypted mempools, censorship-resistant sequencing, and DeFi and gaming apps such as encrypted orders, leaderless NFT auctions, ID-gated content, and highest-hand-wins card games like blackjack.

The transaction flow for an application is as follows (a rough sketch of this loop follows the list):

  • User submits an encrypted tx and decryption condition (e.g. target height) to an app
  • Chain receives encrypted txs in mempool
  • Encrypted txs are sorted by target heights and ordering within a block is committed to inside of the integrated x/pep module
  • When target height or decryption condition is reached, the app chain receives decryption key from the Fairyring chain
  • Encrypted txs are decrypted and executed inside the BeginBlock method of the x/pep module
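A rough sketch of this loop; the x/pep module and Fairyring APIs are not modeled here, and the decrypt call is a placeholder for the actual threshold IBE scheme:

def begin_block(height, encrypted_mempool, get_decryption_key):
    """Stand-in for the per-block step: decrypt and execute txs whose condition is met."""
    ready = sorted(
        (tx for tx in encrypted_mempool if tx["target_height"] <= height),
        key=lambda tx: tx["target_height"],
    )
    executed = []
    for tx in ready:
        key = get_decryption_key(tx["target_height"])  # provided by the MPC network
        executed.append(decrypt(tx["ciphertext"], key))
        encrypted_mempool.remove(tx)
    return executed

def decrypt(ciphertext, key):
    # Placeholder: real decryption would use the threshold IBE key from Fairyring.
    return f"plaintext of {ciphertext} with {key}"

mempool = [
    {"ciphertext": "enc(tx_a)", "target_height": 10},
    {"ciphertext": "enc(tx_b)", "target_height": 12},
]
print(begin_block(10, mempool, get_decryption_key=lambda h: f"key@{h}"))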

See the architecture diagram below for a detailed description of how Fairyring integrates with a Monomer appchain.


1 post - 1 participant

Read full topic

Security Outdated encryption stored on blockchain

Published: Aug 28, 2024

View in forum →Remove

Please pardon my ignorance. I’ve read several publications related to blockchain being used in healthcare, construction and the like. Many of these publications state that blockchain allows the storage of secured data.

My question is this: If data is “securely” stored on blockchain (I assume encrypted) and the encryption algorithm LATER (after long-term usage) is proven to be “cryptographically broken” (e.g., SHA-1) …

  • does this not mean all “secured” data on the blockchain using that algorithm is suddenly public?
  • are there steps that can be taken to re-encrypt the data to avoid the massive leak of data?

Kind regards.

7 posts - 4 participants

Read full topic

Economics Does multi-block MEV exist? Analysis of 2 years of MEV Data

Published: Aug 28, 2024

View in forum →Remove

Does multi-block MEV exist? Analysis of 2 years of MEV Data

by Pascal Stichler (ephema labs)

Many thanks to Toni, Julian, Danning, Chris and Marc for feedback and especially to BarnabĂŠ for nudging the research in the first place and continuous feedback.

TL;DR

  • We looked at proposer-builder data and MEV-Boost payment data since the merge (September 2022) to identify patterns of multi-block MEV.
  • We observe fewer multi-slot sequences of builders than a random Monte Carlo simulation would predict. The longest observed multi-slot sequence is 25 slots.
  • Average MEV-Boost payments increase for longer consecutive sequences by the same builder from ~0.05 ETH for single slots to ~0.08 ETH for nine consecutive slots.
  • In longer sequences, the payment per slot increases slightly with later slots. This indicates that builders bid higher to get longer sequences or the first slot after a longer sequence.
  • There is a weak positive autocorrelation between subsequent MEV-Boost payments. This contradicts the hypothesis that there are generally periods of low and high MEV.
  • Comparing builders with periods of low and high base fee volatility shows a low correlation. This indicates that no builder specialization based on base fee volatility has developed yet.

The detailed results can be found in the Jupyter notebook on Github or Google Colab.

Background

Multi-block Maximal Extractable Value (MMEV) occurs when one party controls more than one consecutive block. It was first introduced in 2021 by [1] as k-MEV and further elaborated by [2]. It is commonly assumed that controlling multiple slots in a sequence allows to capture significantly more MEV than controlling them individually. This derives from MEV accruing superlinearly over time. The most discussed multi-block MEV strategies include TWAP oracle manipulation attacks on DEXes and producing forced liquidations by price manipulation.

After the merge, [3] have looked into the first four months of data on multi-block MEV and summarized it as “preliminary and non-conclusive results, indicating [that] builders employ super-linear bidding strategies to secure consecutive block space".

With the recent Attester-Proposer-Separation (APS) and pre-confirmation discussions, multi-block MEV has become more of a pressing issue again as it might be prohibitive for some of the proposed designs (For a more in-depth overview, we’ve created a diagram of recently proposed mechanism designs and also Mike Neuder lately gave a comprehensive overview).

Methodology

In order to get a better understanding of the historical prevalence of multi-block MEV, we decided to look at all slots from the Merge in September ‘22 until May ‘24 (totalling roughly 4.3 million slots) and analyze the corresponding data on validators and builders and on MEV-boost payments (if applicable). The scope was to identify patterns of unusual consecutive slot sequences and accompanying MEV values. The data has been kindly provided by Toni Wahrstätter and contains information per slot on relay, builder pubkey, proposer pubkey and MEV-Boost value as well as a builder pubkey and validator pubkey mapping. In the labeling of validators for our purposes staking pool providers such as Lido or Rocket Pool are treated as one entity.

MEV-Boost payments are used as a proxy for the MEV per block. We acknowledge that this is only a non-perfect approximation. The ascending MEV-Boost first-price auction by its nature of being public essentially functions like a second price + 1 wei auction (thanks to Julian for pointing this out!). Hence, we strictly speaking only get an estimate of the intrinsic value of the second highest bidder. However, as [4] have observed more than 88% of MEV-Boost auctions were competitive and [5] concluded that the average profit margin per top three builder is between 1% and 5.4%, further indicating a competitive market between the top builders. Based on this, despite the limitations we deem it feasible to use the MEV-Boost payments as an approximation for the generated MEV per block.

To establish a baseline of expected multi-slot sequences, a Monte Carlo simulation was conducted. In this simulation, builders were randomly assigned to each slot within the specified time period, based on their observed daily market share during that period. The frequency of consecutive slots, ranging in length from 1 to 25 (the longest observed sequence in the empirical data), was recorded. This procedure was repeated 100 times, and the average was taken. We decided to use daily market shares for the main analysis as in the investigated time period market shares have strongly shifted [4]. For comparison we also ran the analysis on monthly and overall market shares.
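A simplified sketch of this Monte Carlo baseline (the daily market shares and slot counts are illustrative; the full version is in the linked notebook):

import random
from collections import Counter

def simulate_sequences(daily_market_shares, slots_per_day=7200, runs=100, seed=0):
    """Randomly assign builders to slots per their daily shares and count run lengths."""
    rng = random.Random(seed)
    totals = Counter()
    for _ in range(runs):
        for shares in daily_market_shares:          # one dict of builder -> share per day
            builders, weights = zip(*shares.items())
            assignment = rng.choices(builders, weights=weights, k=slots_per_day)
            run_length = 1
            for prev, cur in zip(assignment, assignment[1:]):
                if cur == prev:
                    run_length += 1
                else:
                    totals[run_length] += 1
                    run_length = 1
            totals[run_length] += 1
    # Average count of sequences of each length across runs.
    return {length: count / runs for length, count in sorted(totals.items())}

shares = [{"beaverbuild": 0.45, "titan": 0.30, "rsync": 0.15, "others": 0.10}] * 2
print(simulate_sequences(shares, slots_per_day=1000, runs=10))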

Further, base fee volatility data has been included to cross-check effects of low and high-volatility periods. Previous research (e.g. [6] & [7]) has focused on token price volatility effects based on CEX-prices. As we are interested in low- and high-MEV environments, we deem base fee volatility for our use case more fitting, as it is driven by empty or full blocks which are at least partially a result of the prevalence of MEV opportunities.

Empirical Findings

Finding 1: Fewer multi-slot sequences exist than assumed by random distribution


Figure 1: Comparison of statistically expected vs. observed multi-slot sequences (note that slots > 25 have been summarized in slot 25 for brevity)

Firstly, the prevalence of multi-slot sequences with the same builder proposing the block was investigated to determine if they are more common than would be expected by chance.

Comparing the Monte Carlo simulation as a baseline for the expected distribution (blue) with the observed distribution (orange), it can be seen that significantly fewer multi-slot sequences occur than expected (Figure 1). The longest observed sequence was 25 slots, and the longest sequence with the same validator (Lido) and builder (BeaverBuild) was 11 consecutive slots on March 4th, 2024 (more details with descriptive statistics in the notebook). Running the same simulation on monthly or overall market shares for the period, the observation shifts to having more long sequences than expected; however, we attribute this to the statistical effect of changing market shares. A detailed analysis can be run in the notebook or provided upon request.

In the next step, to understand this in a more fine-grained manner, the values are compared for each of the top 10 builders based on market share. For each builder, the difference between expected and observed occurrences of multi-slot sequences is plotted in Figure 2, with the size of the bubble indicating the delta. The expected occurrences are based on the results of the Monte Carlo simulation. Red bubbles indicate a positive deviation (more observed slots than expected), while blue indicates a negative deviation. Green dots indicate values in line with the expectation. Figure 2 shows absolute numbers; in the notebook the same data can also be viewed on a relative scale.


Figure 2: Deviations between expected (Monte Carlo simulation) and observed multi-slot frequencies per builder

It can be observed in both the relative and the absolute deviations that the top builders have more single-slot sequences than expected, with the exception of ETH-Builder, f1b and Blocknative. For multi-slot sequences of two or more slots, almost all top 10 builders have fewer than expected. This shows that the trend is not limited to individual entities but derives from the general market structure.

Finding 2: Payments for multi-slot sequences are higher on average than for single slots

To understand if multi-slot sequences are valuable, we looked into MEV-Boost payments and compared single-slot to multi-slot sequences (Figure 3).


Figure 3: Average MEV-Boost payments per Sequence Length

In accordance with the previous work of [3], we observe higher average MEV payouts for longer consecutive sequences (from about 0.05 ETH for single-slot sequences to around 0.08 ETH for sequences of nine consecutive slots). Note that the gray numbers in Figure 3 provide the sample size for each sequence length. The average MEV-Boost payment per slot in a sequence thus rises almost linearly with sequence length. At this stage of the research we can only speculate why this is the case. It could be driven by a higher value in longer consecutive sequences, but also by alternative effects. For example, Julian rightfully pointed out that it could also be driven by an increasing intrinsic value for the second-highest bidder due to MEV accumulating in private order flow while the intrinsic valuation of the winning bidder remains constant. Or, as Danning suggested, it might be driven by certain types of proprietary order flow (e.g. CEX-DEX arbitrage) being more valuable in certain time periods (e.g. volatile periods), leading to more consecutive sequences as well as higher MEV-Boost payments on average. For a more comprehensive answer and a more in-depth understanding, an analysis of the true block value (builder profits plus proposer payments), potentially at the individual transaction level, is necessary. We leave this open for future research.

This trend also holds when plotting the average payments for each individual builder. The results on this are shown in the notebook.

Finding 3: Per Slot Payments also increase with longer sequences

In addition to the absolute average payment, we also looked into the payment per slot position in longer sequences (Figure 4), e.g. how much was paid on average for the third position in a longer sequence.
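A minimal sketch of the aggregations underlying Figures 3 and 4, assuming a DataFrame df with illustrative columns payment_eth, seq_len, and seq_pos (not the names used in the notebook):

# Sketch of the aggregations behind Figures 3 and 4, assuming a DataFrame `df`
# with one row per slot and (illustrative) columns:
#   payment_eth : MEV-Boost payment of the slot
#   seq_len     : length of the same-builder sequence the slot belongs to
#   seq_pos     : 1-based position of the slot within that sequence
import pandas as pd

# Figure 3 style: average payment per sequence length, with sample sizes
by_length = df.groupby("seq_len")["payment_eth"].agg(avg_payment="mean", n_slots="size")

# Figure 4 style: average payment per position within a sequence
by_position = df.groupby("seq_pos")["payment_eth"].agg(avg_payment="mean", n_slots="size")

print(by_length.head(10))
print(by_position.head(10))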


Figure 4: Average MEV-Boost payments per Sequence Position

A similar, though less pronounced, trend can be observed in the payment-per-slot analysis. This suggests that there is some value in longer sequences, but builders are not willing to bid significantly more for longer consecutive sequences or for the first slot after a longer sequence.

This indicates to us that, at least so far, multi-slot strategies are not applied systematically. If they were, we would expect builders to pay significantly higher values for later slots to ensure they capture the MEV opportunity prepared earlier.

Finding 4: Low auto-correlation between consecutive MEV-Boost payments


Figure 5: Auto-correlation of MEV-Boost Payments

We examined auto-correlation in the MEV-Boost payments to understand whether historical MEV data allows us to forecast future MEV and to see whether there are low- and high-MEV periods (Figure 5).

Overall, it can be observed that the correlation decreases strongly within the first few slots, up to an offset of 2 to 3 slots (we tested the Pearson correlation coefficient, Spearman's rank correlation coefficient, and Kendall's rank correlation coefficient). Based on this, we conclude that the MEV value can only be moderately predicted from historical data no more than one to three slots in advance.
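A minimal sketch of this lagged-correlation computation, assuming payments is a numpy array of per-slot MEV-Boost payments ordered by slot (illustrative, not the notebook code):

# Sketch of the lagged-correlation analysis, assuming `payments` is a 1-D numpy
# array of MEV-Boost payments ordered by slot number (names are illustrative).
import numpy as np
from scipy import stats

def lagged_correlations(payments: np.ndarray, max_lag: int = 10) -> dict:
    """Pearson, Spearman, and Kendall correlation between payments and their lagged copy."""
    out = {"pearson": [], "spearman": [], "kendall": []}
    for lag in range(1, max_lag + 1):
        x, y = payments[:-lag], payments[lag:]
        out["pearson"].append(stats.pearsonr(x, y)[0])
        out["spearman"].append(stats.spearmanr(x, y)[0])
        out["kendall"].append(stats.kendalltau(x, y)[0])
    return out

# The same computation can be restricted to the top-50% quantile of payments to
# compare the full data set against high-MEV slots, as done for Figure 5.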

Further interesting observations can be made. As expected, the Spearman and Kendall correlation coefficients are significantly higher than the Pearson correlation coefficient, underlining that the data does not follow a normal distribution but is skewed with large outliers. Additionally, it is interesting to note that for the Pearson correlation coefficient, the complete data set and the top 50% quantile data set behave similarly, which is not the case for the Spearman and Kendall coefficients. This might be an indicator that the rank ordering of the lower 50% quantile can be more reliably predicted, further underlining that high MEV values are volatile and spiky, and hence difficult to predict.

Finding 5: No indication of builder specialization in low- or high-base-fee-volatility environments

Previous research (e.g. [6] & [7]) has found that certain builders specialize in low- or high-token-price-volatility environments, with volatility measured via CEX price changes. Further, [5] observe that builders pursue different strategies, with some focusing on high-value blocks while others focus on gaining market share in low-MEV blocks.

As a complementary analysis, to determine whether low or high base fee volatility impacts (multi-block) MEV, we analyzed changes in base fee data to identify periods of high volatility. Base fee fluctuations are driven by whether the gas usage in the previous block was below or above the gas target, as defined by EIP-1559. To identify high-volatility environments, we employed two methods: (i) a more naive approach that calculated price changes per slot, classifying the highest and lowest (negative) 10% of these changes as high-volatility periods, with the remaining 80% of slots categorized as low volatility. Consequently, high-volatility blocks occur following a block with either minimal or significant MEV and/or priority tips. (ii) The Garman-Klass volatility [8] was calculated on an epoch basis, with slots in the top 20% of GK values designated as high volatility. This approach allows us to examine longer periods characterized by minimal or significant MEV and/or priority tips.
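A minimal sketch of the two classifications, assuming a DataFrame blocks with illustrative columns slot and base_fee and an epoch length of 32 slots:

# Sketch of the two volatility classifications, assuming a DataFrame `blocks`
# with columns `slot` and `base_fee` (names illustrative). An epoch is 32 slots.
import numpy as np
import pandas as pd

# (i) naive approach: per-slot base fee change; top/bottom 10% = high volatility
change = blocks["base_fee"].pct_change()
lo, hi = change.quantile(0.10), change.quantile(0.90)
blocks["high_vol_naive"] = (change <= lo) | (change >= hi)

# (ii) Garman-Klass estimator per epoch:
#   sigma^2 = 0.5 * ln(H/L)^2 - (2*ln(2) - 1) * ln(C/O)^2
epoch = blocks["slot"] // 32
def garman_klass(fees: pd.Series) -> float:
    o, c, h, l = fees.iloc[0], fees.iloc[-1], fees.max(), fees.min()
    return 0.5 * np.log(h / l) ** 2 - (2 * np.log(2) - 1) * np.log(c / o) ** 2

gk = blocks.groupby(epoch)["base_fee"].apply(garman_klass)
high_vol_epochs = gk[gk >= gk.quantile(0.80)].index   # top 20% of GK values
blocks["high_vol_gk"] = epoch.isin(high_vol_epochs)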

An initial correlation analysis shows only a low association between low-/high-volatility periods and the respective builders (Cramér's V of 0.0664 for the naive approach and 0.0772 for the Garman-Klass approach). This indicates that there seems to be no builder specialization based on the volatility environment of the base fee. So, in contrast to token price volatility, no builder specialization seems to have developed (yet) for base fee volatility. Further research is needed to elaborate on this first finding.
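A minimal sketch of the Cramér's V computation, assuming illustrative columns builder, high_vol_naive, and high_vol_gk on the same DataFrame blocks as above:

# Sketch of the Cramér's V computation between builder identity and the
# volatility class (column names are illustrative).
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(a: pd.Series, b: pd.Series) -> float:
    table = pd.crosstab(a, b)
    chi2, _, _, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    k = min(table.shape) - 1
    return float(np.sqrt(chi2 / (n * k)))

v_naive = cramers_v(blocks["builder"], blocks["high_vol_naive"])
v_gk = cramers_v(blocks["builder"], blocks["high_vol_gk"])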

Limitations

The research presented here is intended as an initial exploratory analysis of the data rather than a comprehensive study. It is important to note several limitations that affect the scope and conclusions of this analysis. Firstly, the considered data set is limited to publicly available MEV-Boost payment data. This leaves out the roughly 10% of blocks not facilitated via MEV-Boost, and it does not reflect potential private off-chain agreements. Additionally, the data was partially incomplete and in other parts contained duplicate information (see the notebook for details). Further, missed slots have been excluded so far; a more detailed future analysis might focus on the particular effects missed slots have on subsequent MEV. Lastly, as outlined in the methodology section, MEV-Boost payments are only a proxy for captured MEV, and the competitiveness metric used in [4] is only partially applicable to our use case.

As outlined in Finding 2, we can currently only speculate about the cause of the increasing average MEV-Boost payouts. Furthermore, running the analysis on the true block value (proposer payment plus builder profits) might generate further insights and solidify the research findings.

The frequency analysis contains somewhat of a chicken-and-egg problem: the Monte Carlo simulation is run on market shares, while the market shares potentially derive from multi-slot sequences. We see a daily time window as an appropriate balance between precision and the need to filter out isolated effects, although this can be critically challenged.

Conclusions

Analyzing block meta-data since the Merge, we observe that multi-slot sequences occur less frequently than statistically expected. Further, we observe that the average payments for longer multi-slot sequences increase with the sequence length. Similarly, the payments per slot position in longer sequences also rise slightly. This might indicate that there is generally value in longer consecutive sequences. However, considering the only slight increase in value and the fewer-than-expected observed multi-slot sequences, we so far see no indication of deliberate multi-slot MEV strategies being deployed. Also at the individual builder level, we currently do not observe strong deviations from the expected distributions. This may also stem from the fact that in the current PBS mechanism, with MEV-Boost operating as a just-in-time (JIT) block auction, creating multi-block MEV opportunities carries inherent risk. This risk arises because creating these opportunities typically requires an upfront investment, and the opportunity might be captured by a competing builder in the next slot, assuming no off-chain collusion between the proposer and builder. This element of risk is a critical factor that could be eliminated by some of the proposed changes to the mechanism (e.g. some APS designs), making it an essential consideration when defining future mechanisms.

References

[1] Babel K, Daian P, Kelkar M, Juels A. Clockwork finance: Automated analysis of economic security in smart contracts. In 2023 IEEE Symposium on Security and Privacy (SP) 2023 May 21 (pp. 2499-2516). IEEE.

[2] Mackinga T, Nadahalli T, Wattenhofer R. Twap oracle attacks: Easier done than said?. In 2022 IEEE International Conference on Blockchain and Cryptocurrency (ICBC) 2022 May 2 (pp. 1-8). IEEE.

[3] Jensen JR, von Wachter V, Ross O. Multi-block MEV. arXiv preprint arXiv:2303.04430. 2023 Mar 8.

[4] Yang S, Nayak K, Zhang F. Decentralization of Ethereum’s Builder Market. arXiv preprint arXiv:2405.01329. 2024 May 2.

[5] Öz B, Sui D, Thiery T, Matthes F. Who Wins Ethereum Block Building Auctions and Why?. arXiv preprint arXiv:2407.13931. 2024 Jul 18.

[6] Gupta T, Pai MM, Resnick M. The centralizing effects of private order flow on proposer-builder separation. arXiv preprint arXiv:2305.19150. 2023 May 30.

[7] Heimbach L, Pahari V, Schertenleib E. Non-atomic arbitrage in decentralized finance. arXiv preprint arXiv:2401.01622. 2024 Jan 3.

[8] Meilijson I. The Garman-Klass volatility estimator revisited. arXiv preprint arXiv:0807.3492. 2008 Jul 22.

4 posts - 2 participants

Read full topic

Proof-of-Stake A Note on Equivocation in Slot Auction ePBS

Published: Aug 23, 2024

View in forum →Remove

Thanks to Francesco D’Amato, Barnabé Monnot, Mike Neuder, and Thomas Thiery for feedback and review. Thanks again to Francesco for coming up with the second proposal.

Whether to implement slot auctions in ePBS is an active area of discussion, and support for slot auctions was signaled in the seventh ePBS breakout call. Currently, the ecosystem lacks knowledge about the fork-choice safety of slot auctions in the current ePBS proposal. This note presents two strawman proposals to start discussing the fork-choice safety of slot auction ePBS.

This note presupposes the reader is familiar with the ePBS proposal (EIP-7732). An essential part of this EIP is that a payload boost is applied to a beacon block if the Payload-timeliness committee (PTC) reaches a quorum. If an execution payload is seen on time by a majority of the PTC, the beacon block that corresponds to the execution payload receives additional fork-choice weight (Reveal Boost). If the PTC observes a timely message from the builder stating that it withholds its payload, the additional fork-choice weight is given to the parent block of the beacon block corresponding with the withhold message (Withholding Boost).

In slot auction ePBS, the beacon proposer does not commit to an execution payload hash, unlike in block auction ePBS. Instead, it commits to a specific builder that can submit an execution payload when it is time to reveal. The first problem is that a builder could submit multiple execution payloads. In this note, we will refer to this as a builder equivocation.

In block auction ePBS, something similar to equivocation is possible. The builder could wait for at least one PTC member to vote PAYLOAD_ABSENT and then release a withhold message and an execution payload to split the PTC’s view such that none of the three vote options (PAYLOAD_ABSENT, PAYLOAD_WITHHELD, PAYLOAD_PRESENT) reaches the quorum of 50% of the votes.

In block auction ePBS, this equivocation does not benefit the builder much. If the PTC does not reach a quorum, no payload boost is applied, and the honest next-slot validator will take the payload as head. If the builder equivocates, the protocol does not need to guarantee Builder Reveal Safety since the builder does not act as the protocol expects. Still, the builder does not have the flexibility to submit a different execution payload since the beacon block commits to the execution payload hash.

It could be that the builder is incentivized to play a timing game and eventually decides that it would be best if the block were withheld. The builder could submit a withhold message and see if the PTC will reach a quorum on PAYLOAD_WITHHELD. If the PTC does not seem to do so, and the PTC also has not yet reached a quorum on PAYLOAD_ABSENT, the builder reveals its payload after all. This attack seems difficult to pull off, but it allows the builder to check whether it can renege on its promised payment to the proposer while still landing its payload on-chain if it has to pay (assuming an honest next-slot proposer).

In slot auction ePBS, a builder may be more incentivized to equivocate because it can change the contents of its execution payload. For example, the builder could broadcast a particular execution payload, but a short time later, a significant MEV opportunity appears, and the builder now wants to broadcast a new execution payload.

Preventing equivocations in slot auction ePBS would be desirable because equivocations would undermine fork-choice security. Specifically, we want to obtain the following properties with minimal changes.

:bulb: Desiderata

  1. If the builder reveals precisely one timely execution payload, it should retain the same Builder Reveal Safety guarantees as in block auction ePBS
  2. If the builder reveals multiple timely and equivocating execution payloads,
    a. no execution payload should go on-chain,
    b. but the Unconditional Payment should be as strong as in block auction ePBS

Should slashing or a penalty be applied to equivocating execution payload messages? This question is relevant to both block and slot auction ePBS, although the potential benefits of equivocation are likely to be higher in slot auction ePBS. Since ePBS still allows local block construction, it seems unwise to apply harsh slashing or penalties for equivocation, because this may disincentivize local block construction. Moreover, it is not clear that there are significant gains to be made from equivocating execution payloads, and even if there are, slashing or penalties do not qualitatively change this; hence, slashing or penalties are not immediately necessary.

Proposal 1: Vote for Execution Payload Hash

The first strawman proposal to obtain these properties involves changing the block auction ePBS fork-choice specification as follows.

:bulb: Proposal 1: Vote for Execution Payload Hash

  1. Replace PAYLOAD_PRESENT with execution_payload_hash
  2. If no PTC quorum is reached, let the honest next-slot validator use an empty block as its head instead of a full block.

A PTC member would now vote for the execution_payload_hash it has observed instead of simply voting whether a payload is present. Reveal boost is applied if a quorum is reached on execution_payload_hash. Intuitively, this is necessary for slot auctions since the PTC now indicates which execution payload should be used if the block is full and not just that the block is full.
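A hypothetical sketch of the changed PTC aggregation, with names and the committee size chosen for illustration rather than taken from the EIP-7732 specification:

# Hypothetical sketch of the PTC aggregation under Proposal 1: members vote for
# the execution_payload_hash they saw (or PAYLOAD_ABSENT / PAYLOAD_WITHHELD),
# and reveal boost applies only if one hash gathers a majority of the committee.
# Names and thresholds are illustrative, not taken from the EIP-7732 spec.
from collections import Counter
from typing import Optional

PTC_SIZE = 512  # illustrative committee size

def reveal_boost_target(votes: list[str]) -> Optional[str]:
    """Return the payload hash that receives reveal boost, or None if no quorum."""
    tally = Counter(v for v in votes if v not in ("PAYLOAD_ABSENT", "PAYLOAD_WITHHELD"))
    if not tally:
        return None
    best_hash, best_count = tally.most_common(1)[0]
    if best_count * 2 > PTC_SIZE:  # quorum: more than half of the full committee
        return best_hash
    return None  # no quorum: the honest next-slot proposer treats the empty block as head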

It seems like desideratum 1—the same Builder Reveal Safety as in block auction ePBS—is immediately satisfied since an honest builder does not release equivocating execution payloads. A PTC member’s execution_payload_hash vote functions the same as a PAYLOAD_PRESENT vote.

If the builder equivocates but the PTC still reaches a quorum on execution_payload_hash, then the execution payload will make it on-chain in the same way a payload would have if the builder had not equivocated. I believe this is fine because the builder released an equivocating payload that did not split the view of the PTC (sufficiently). This indicates that such an equivocating payload is a minor threat to fork-choice security. Although this outcome contradicts desideratum 2a, the "timely" requirement in desideratum 2 should be read as requiring that the execution payloads are intended to split the view of the PTC sufficiently.

If the builder equivocates and the PTC does not reach a quorum, then the next-slot honest proposer should see an empty block as its head. The builder loses some of its Builder Reveal Safety because it could be that the builder reveals only one payload (does not equivocate), yet the PTC does not reach a quorum. However, Builder Reveal Safety is not very strong in block auction ePBS either, because a rational next-slot proposer would prefer to build on an empty block rather than a full block, since empty blocks are more valuable to build on (the ex-post reorg safety is low if reveal boost is not applied). Changing the default next-slot honest proposer behavior from seeing a full block to seeing an empty block as its head does not change much in Builder Reveal Safety, and the system then satisfies desideratum 2.

What if the next-slot proposer is dishonest? The builder could collude with the next-slot proposer and broadcast messages such that the PTC does not reach a quorum, and include an execution payload late. This is similar to the attack in block auction ePBS, where a builder tries to get Withholding Boost to apply but releases an execution payload if it does not succeed. Collusion between the builder and the next-slot proposer allows the builder to play aggressive timing games while ensuring Builder Reveal Safety. These timing games come at the expense of the execution validation time of the attesting committee. It is not immediately apparent what the colluding builder and next-slot proposer would gain from this attack, since the builder's timing-game gain comes almost entirely from the next-slot proposer's revenues.

The downside of this proposal is the problem of free data availability. The PTC could now reach a quorum on an execution_payload_hash. These PTC votes would end up on-chain, and an adversary could use them to show that a piece of data was available to the PTC. Yet the adversary would not have to pay the base fee needed to provide the data on-chain; it only has to pay the proposer to commit to the adversary as the builder.

Proposal 2: Pretend Payload Absent

The second strawman proposal does not suffer from the free data availability problem and achieves the desiderata as follows.

:bulb: Proposal 2: Pretend Payload Absent
If the next-slot proposer or the attesters observe at least two equivocating payloads, they assign no additional fork-choice weight to any empty or full block

The behavior of a PTC member does not change from the block auction ePBS specification. However, suppose a proposer sees that the block producer in the previous slot released equivocating execution payloads. In that case, it ignores the fork-choice weight the PTC may have given to any fork.
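A hypothetical sketch of this rule; the function name and types are illustrative and not part of any specification:

# Hypothetical sketch of Proposal 2 ("Pretend Payload Absent"): the PTC boost is
# computed exactly as in block auction ePBS, but a next-slot proposer or attester
# that has observed >= 2 equivocating payloads from the committed builder ignores
# the boost when weighing the empty vs. full block.

def effective_ptc_boost(ptc_boost: int, observed_payload_hashes: set[str]) -> int:
    """Drop any PTC-derived fork-choice weight when equivocation was observed."""
    builder_equivocated = len(observed_payload_hashes) >= 2
    return 0 if builder_equivocated else ptc_boost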

If the builder is honest, this does not change its Builder Reveal Safety since the system works exactly as it does in block auction ePBS. Desideratum 1 is thus immediately satisfied.

If the builder equivocates, an honest-but-rational proposer will choose to build on an empty block since it allows the proposer to extract the MEV from two slots of time instead of one. The attesters will not object to this since they observed the equivocating payloads and assigned no additional fork-choice weight to any forks. Therefore, if the next-slot proposer and attesters are honest, desideratum 2 is also satisfied.

The next-slot proposer could collude with the builder. The builder could equivocate, and the next-slot proposer could choose to build on a full block. Similarly to the collusion situation described in the first proposal, though, the gain that a builder gets from this equivocation seems to primarily come from the profits the next-slot proposer could make. It is not clear that the joint utility of the collusion increases by enough to justify the collusion.

A builder and a next-slot proposer could collude to ensure an execution payload does not become canonical. Consider a builder that submits an execution payload, and the PTC reaches a quorum that this payload is timely. Later, the builder regrets the contents of its execution payload and aims to remove it from the canonical chain. It could then release an equivocating payload so the next-slot proposer will not build on the undesirable execution payload. This is similar to a builder not revealing its block in block auction ePBS.

In conclusion, these strawman proposals seem to achieve the same fork-choice safety under slot auctions as under block auctions with minimal changes. While the first proposal has a problem with free data availability, the second proposal may be more susceptible to builder games, such as reorging its execution payload. The lack of free data availability and being less susceptible to builder games are advantages of slot auctions in ePBS. Further research on a design that simultaneously solves both problems would be very valuable. If you are interested in working on (slot auctions in) ePBS, please see this page!

6 posts - 3 participants

Read full topic

Economics The Role of the P2P Market in ePBS

Published: Aug 23, 2024

View in forum →Remove


A two-tier auction market: the right resembles the less sophisticated publicly observable P2P market, and the left resembles the more sophisticated private RPC market.

Thanks to Potuz, Barnabé Monnot, Terence Tsao, and Thomas Thiery for comments and discussion.

The current ePBS proposal, EIP-7732, suggests operating a two-tier market where builders can bid to obtain the execution payload construction rights. Large block builders are expected to use the pull-based direct connection market. This market allows for lower latency and more flexibility for the builder, as the builder only needs to commit to its bid once the proposer asks for it. This market, however, requires the proposer to connect to the builder’s RPC and actively pull bid(s) from it. Smaller builders who lack this connectivity with the validator set can use the push-based P2P market. This market has stricter rules for what bidders can do but does not need the proposer to pull bid(s) from it since bids are pushed to the proposer.

This note explores the role of the P2P market in ePBS. Although there has been some initial exploration of the topic, this note presents a clear counterfactual of a world where the P2P market is not included in EIP-7732. This note also emphasizes multiplexing—the ability of proposers to discover builders—as the most important aspect of the P2P market.

The three arguments in favor of the P2P market that the author has seen in previous work are: 1) it allows anyone to set a floor price for the auction, 2) it can be used for MEV-Burn in future protocol upgrades, and 3) it lowers entry barriers for new entrants or long-tail builders.

The first argument is that allowing anyone to bid via the publicly observable P2P market gives all validators the ability to set a floor price for the auction. Validators can bid based on the block that they could locally build. Builders must then bid at least above the bid of these validators to obtain the execution payload construction rights. It has been argued that this is valuable if a cartel of builders intends to keep bids low. The floor price, however, would not break up a cartel. Although proposers would make slightly more revenue in this case, it is unclear what the value of such a floor price is to the protocol.

The beacon proposer selling the rights may be the ideal party to set a reserve price. As I argue in this post, a proposer may want to put a higher reserve price than its valuation for the execution payload construction rights to attract higher bids from builders. The P2P market allows the proposer to signal its reserve price to the market. In this sense, the P2P market allows the validator and other participants to express their preferences.

The second argument states that the P2P market may facilitate MEV-Burn in future protocol upgrades. MEV-Burn aims to decouple the rewards from selling execution payload construction rights from being a validator. This has numerous benefits; for example, it decreases the value of using a staking service provider (SSP) since MEV-Burn decreases the variance of validator payoffs. MEV-Burn requires that builder bids be legible to the protocol. Most designs achieve this by having a committee that observes the best available bids. If ePBS had only the direct connection market, the MEV-Burn designs would need to be revisited, since a proposer selling the execution rights is incentivized to understate the amount that will be burnt. Still, the P2P market is expected to reflect only a small portion of the value of the execution payload construction rights, hence even ePBS with the P2P market may not be satisfactory for an effective MEV-Burn solution.

The last reason for the P2P market is that it would allow builders from which proposers are unlikely to pull bids to still compete in the market. Proposers may be unlikely to pull bids from builders that participate infrequently in the auction because they are very specialized, or from new builders unknown to the proposer. This could be because proposers have an outdated whitelist of builders from which to pull bids. Allowing these builders to participate in the push-based P2P market will result in more builder diversity in block construction, which may benefit the protocol.

This last reason is what we will explore in this post. Specifically, what does the Ethereum ecosystem gain by enshrining the push-based P2P market aside from an out-of-protocol solution that facilitates small builders’ participation in the market?

Shea Ketsdever recently released a post on TEE-Boost, an adaptation of MEV-Boost that uses Trusted Execution Environments. In this post, she highlights the different roles a relay plays. One of the roles is multiplexing, allowing proposers to discover builders who may want to participate in the auction.

The ePBS P2P market aims to achieve multiplexing. In the context of ePBS, multiplexing has at least two facets: trustlessness and value reflection. Trustlessness is important because ePBS removes the trust that proposers and builders must place in a relay to facilitate the fair exchange. Value reflection is essential because a multiplexing tool that poorly reflects the value bidders assign to the auctioned item will not efficiently match an auctioneer with the correct bidder.

The ePBS P2P market scores very well on the trustlessness front. Neither a proposer nor a builder must trust anyone, since bids are broadcast via the P2P network and the winning bid is committed to on-chain. The P2P market, however, scores poorly on the value reflection front. Since the P2P network must be DoS resistant, it cannot handle too many bids, so bidders will likely not be allowed to bid as often as they could in MEV-Boost, meaning they have to be strategic about when they bid. Moreover, early bids cannot be canceled, which could lead to strategic builders only winning via the P2P market when the valuation of other builders that operate via the direct connection market has decreased (adverse selection). Finally, the value reflection of the P2P market relative to the RPC market will worsen as the RPC market becomes more sophisticated while the P2P market becomes stale.

How would an out-of-protocol actor facilitate multiplexing if ePBS were deployed? In MEV-Boost, relays facilitate multiplexing because submitting blocks to relays is (largely) permissionless, and relays are well-connected to validators. In ePBS, a relay - from now on referred to as a bid curation relay - would look different. A bid curation relay could open an RPC endpoint that proposers connect to and host an auction where builders submit bids, like in MEV-Boost. Bids, however, do not need to contain transaction data since the bid curation relay would not be responsible for the fair exchange problem that is solved via ePBS. Bids in ePBS are a bid value and the hash of the execution payload. A proposer then pulls the highest bid from the bid curation relay and, if it so desires, commits to the highest bid via the in-protocol ePBS system. A winning builder then sees this in-protocol commitment and publishes the block via ePBS.
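A hypothetical sketch of such a bid curation relay, illustrating how small the trusted surface is (all names are made up; payment and data availability remain enforced by the protocol):

# Hypothetical sketch of a bid curation relay for ePBS: builders submit
# (bid value, execution payload hash) pairs without any transaction data, and
# the proposer only asks the relay for the current best bid.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bid:
    builder_pubkey: str
    value_wei: int
    execution_payload_hash: str  # no block contents: builder privacy is preserved

class BidCurationRelay:
    def __init__(self) -> None:
        self._bids: list[Bid] = []

    def submit_bid(self, bid: Bid) -> None:
        # Permissionless submission, analogous to block submission in MEV-Boost,
        # but without block contents or unconditional-payment responsibilities.
        self._bids.append(bid)

    def best_bid(self) -> Optional[Bid]:
        # The only trust assumption: the relay returns the highest-paying bid on request.
        return max(self._bids, key=lambda b: b.value_wei, default=None)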

It becomes clear that the trust assumptions that proposers and builders must place in a bid curation relay are vastly lower than in MEV-Boost. Essentially, the proposer and builders must trust the bid curation relay to forward the highest-paying bid when the proposer asks for it. The bid curation relay is not trusted with the block contents (builder privacy is preserved) and is not responsible for unconditional payment (data availability and validation are enforced via the protocol).

The ePBS relay scores worse on the trustlessness front than the P2P market since the proposer and builders must trust the relay not to censor its bids. On the other hand, the value reflection of such a bid curation relay could be far better. The relay could offer bid cancellations and high-frequency bidding to builders. Moreover, relays could invest in latency reductions and charge for this, as some do in MEV-boost. If a relay successfully reduces latency, more prominent builders may connect to it. This means the value reflection of relays relative to directly connected builders may remain stable or improve over time.

Shea also highlights another option that has been discussed widely before: next to the P2P market, there could be an on-chain registry of builders. There could be a smart contract to which any builder could write its RPC endpoint. Any validator could then see the available RPC endpoints and pull bids from them during its slot. This alternative scores well on the trustlessness front since no trust is required, and it scores well on the value reflection front since it allows all builders to compete on a similar level. The proposer could pull from this registry every time it is supposed to propose a block.

Why do we care about multiplexing? Multiplexing contributes to the credible neutrality of the network. In the context of ePBS, credible neutrality may mean something like this: the builder with the highest valuation for the execution payload construction rights is allocated these rights. If proposers were to rely solely on directly connected builders, some long-tail builders who happened to have an exceptionally high value for a specific block might be excluded. If proposers rely on bid curation relays, the relays may not forward the highest-paying bid because they prefer to forward another bid for whatever reason. If proposers rely on an on-chain registry of builders, it may not connect them to the newer or smaller builders.

Allowing multiplexing to contribute to credible neutrality involves a trade-off between trustlessness and value reflection. If a completely trustless market is so poor at value reflection that it never surfaces a winning bid, it does not contribute much to credible neutrality. If a perfectly value-reflecting market puts a lot of trust in one party, the benefit to credible neutrality is also nonexistent.

To conclude, the P2P market is easy to implement, and its maintenance does not require a hard fork, so clients can iterate freely. Although the P2P market contributes only a little to the core functionality of ePBS, there are virtually no downsides to implementing it. It is a nice feature that may benefit some users, could be beneficial for proposers as it increases their revenues, and may be helpful for MEV-Burn in the future. Further work could specify the P2P market rules and how an on-chain registry of builder RPC endpoints could work.

1 post - 1 participant

Read full topic

Security An Automatic Technique to Detect Storage Collisions and Vulnerabilities within Solidity Smart Contract

Published: Aug 23, 2024

View in forum →Remove

Storage collisions and vulnerabilities within Ethereum smart contracts can lead to unexpected issues like frozen funds, privilege escalation, and theft of financial assets. A storage collision occurs when two different storage structs unintentionally use the same storage slot(s), or when the slot layout is changed during an upgrade of the implementation contract. A recent study detected such collision vulnerabilities in large numbers (in contracts worth millions of dollars) among smart contracts deployed on the Ethereum network.

In this topic, we propose a more accurate and complete technique to detect storage vulnerabilities and collisions in Solidity smart contracts, and we encourage the Ethereum community to provide feedback on the proposed technique.

Introduction

We are working on a solution based on advanced static analysis techniques that can identify vulnerabilities within the deep storage of Ethereum Solidity smart contracts. We aim to detect storage collisions in proxy contracts deployed on the Ethereum network, such as ERC-2535 (Diamond/Multi-Facet Proxy), ERC-1822, and other upgradeable proxy patterns, as complex proxy contracts are more likely to experience a storage collision, for example during the upgrade of implementation or facet contracts.

N. Ruaro et al. analyzed Ethereum contracts using contract bytecode to detect storage collisions and reported 14,891 vulnerable contracts. Their technique was able to identify storage slot types correctly with an accuracy of 87.3%. In contrast, we aim to build a solution that uses source code to accurately analyze the storage layout and slot types of a contract. Furthermore, we will also analyze dynamic arrays, mapping variables, and complex nested structs in our analysis.

If a collision occurs on the state variables' base slots, our approach will allow us to identify the impact of the collision on dynamic arrays and mapping variables that are declared consecutively and share the same element or key types, which is a common pattern in large contracts such as gaming contracts.

As shown in the example code below, the slot layout was changed during the contract upgrade, and since token_uris and token_versions have the same key and value types, each variable will return the other's data after the upgrade due to the collision.

library ImplementationStorage1 {
    struct AddressSlot {
        address owner; // slot n+0
        mapping(uint256 => string) token_uris; // slot n+1
        mapping(uint256 => string) token_versions; // slot n+2
    }

    function getAddressSlot(bytes32 slot) internal pure returns (AddressSlot storage r) {
        assembly {
            r.slot := slot
        }
    }
}

// updated code
library ImplementationStorage2 {
    struct AddressSlot {
        address owner; //slot n+0
        mapping(uint256 => string) token_versions; // slot n+1 (should be token_uris)
        mapping(uint256 => string) token_uris; // slot n+2 (should be token_versions)
    }

    function getAddressSlot(bytes32 slot) internal pure returns (AddressSlot storage r) {
        assembly {
            r.slot := slot
        }
    }
}

token_uris accessing token_versions and vice-versa after the upgrade.

       (before upgrade)                        (after upgrade)   
      _________________                      _________________
     |     Proxy       |                     |     Proxy       |
     |_________________|                     |_________________|
     | * IMPLEMENT_SLOT| --> NFTManager1     | * IMPLEMENT_SLOT| --> NFTManager2
     | * ADMIN_SLOT    |                     | * ADMIN_SLOT    |
     |_________________|                     |_________________|
     | + upgradeTo()   |                     | + upgradeTo()   |
     | + changeAdmin() |                     | + changeAdmin() |
     |_________________|                     |_________________|
              |                                       |
              v                                       v
      _________________                       _________________
     |   NFTManager1   |                     |   NFTManager2   |
     |_________________|                     |_________________|
     | - owner         |                     | - owner         |
     | - token_uris    | **** collision **** | - token_versions|
     | - token_versions| **** collision **** | - token_uris    |
     |_________________|                     |_________________|

We plan to build a technology that will automatically detect all storage collisions within a Solidity smart contract.
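To see why the two mappings swap data, recall Solidity's documented storage layout rule: the entry for key k of a mapping at base slot p is stored at keccak256(abi.encode(k, p)), so stored entries follow the base slot rather than the variable name. A minimal sketch (using the eth-utils keccak helper; the anchor slot 0 is chosen only for illustration):

# Sketch of Solidity's documented storage layout rule for mappings: the entry
# for key `k` of a mapping occupying base slot `p` is stored at
# keccak256(abi.encode(k, p)). Swapping the declaration order of token_uris and
# token_versions swaps their base slots (n+1 vs. n+2), so after the upgrade each
# mapping resolves to the other's previously written entries.
from eth_utils import keccak  # assumes eth-utils is installed

def mapping_entry_slot(key: int, base_slot: int) -> bytes:
    """Storage slot of mapping[key] for a value-type key at the given base slot."""
    return keccak(key.to_bytes(32, "big") + base_slot.to_bytes(32, "big"))

base = 0       # struct anchored at slot n = 0 purely for illustration
token_id = 1

# Before the upgrade, token_uris occupies slot n+1.
uri_slot_before = mapping_entry_slot(token_id, base + 1)
# After the upgrade, token_versions occupies slot n+1, so it now reads the data
# that token_uris wrote before the upgrade.
versions_slot_after = mapping_entry_slot(token_id, base + 1)
assert uri_slot_before == versions_slot_after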

Methodology

We have structured our development plan into three distinct phases, outlined as follows:

  • Automatic State Variable Detector and Slot Layout Calculator

In this phase, we focus on developing an automatic state variable detector and slot layout calculator. This component will facilitate the identification of state variables within smart contracts and determine their corresponding slot layout. By automating this process, we aim to streamline the initial analysis procedures.

Sample output of Slot Calculator

slot 0 - mapping ds.selectorToFacetAndPosition[bytes4] = FacetAddressAndPosition;
slot 1 - mapping ds.facetFunctionSelectors[address] = FacetFunctionSelectors;
slot 2 - address [] ds.facetAddresses;
slot 3 - mapping ds.supportedInterfaces[bytes4] = bool;
slot 4 - address ds.contractOwner;
slot 5 - mapping ds.tempSelectorsNested[uint256] = FacetAddressAndPosition;
slot 6 - FacetAddressAndPosition [] ds.FacetAddressAndPositionArray;
slot 7 - mapping ds.tempMapping[uint256] = uint256;
slot 8 - mapping ds.tempMapping2[address] = uint256;
  • Mapping Keys Analyzer and Slot Calculator of Complex Variables

Building upon the foundation established in phase 1, in this phase we will first extend the slot calculator's capability to compute the slots of complex variables and their entries (for all data types), i.e. the slots of mapping keys, dynamic arrays, complex structs, and mappings with complex structs as values.

This component will also approximate, using advanced static analysis techniques, all keys used in mapping variables to store data. By accurately approximating keys and calculating entry slots, we seek to enhance the precision and breadth of the storage slot calculation methodology, which will help detect storage collisions within the deep storage data of a smart contract.

  • Collision Detector for State Variables and Complex Variables All Entries Slots

The final phase of our methodology focuses on implementing a collision detector for both state variable and complex variable slots. This critical component will identify any potential collisions or conflicts within any type of state variable and its associated value slot(s). By detecting and addressing collisions, we aim to ensure the integrity and reliability of smart contracts.

We aim to develop a robust and comprehensive methodology for smart contract storage collision detection by systematically progressing through the three development phases discussed above.

Conclusion

The development of our solution will allow developers to ensure that their contract has no potential storage collisions before deployment. It will also be able to detect storage collisions within the deep storage of deployed smart contracts and can help secure contracts worth millions of dollars.

1 post - 1 participant

Read full topic

Block proposer Mechan-stein (alt. Franken-ism)

Published: Aug 21, 2024

View in forum →Remove

Mechan-stein (alt. Franken-ism)

^ choose your own adventure – either way, just trying to portmanteau ‘Frankenstein’ and ‘Mechanism.’


^“don’t worry bro, just one more auction, i swear. check it out.” h/t Mallesh for the relevant tweet.

\cdot
by mike – wednesday; august 21, 2024.
^hbd Bo. if you, dear reader, haven’t seen “Inside” or “Inside Outtakes,” watching them is your homework assignment.
\cdot
Many thanks to Barnabé, Julian, Thomas, Jacob, mteam, Toni, Justin, Vitalik, Max, and Mallesh for discussions around these topics and comments on the draft!
\cdot
The idea for the combined mechanism explored in Part 2 of this post came from a Barnabé-led whiteboarding session and accompanying tweet thread. These ideas are also explored in this doc, which inspired this talk.
\cdot
tl;dr; We sketch a high-level framing for Ethereum block construction centered around the design goals of encouraging builder competition, limiting the value of validator sophistication, and preserving the neutrality of block space. We then highlight three proposed mechanisms and how they interface with the established desiderata. We conclude by exploring the potential synergies of combining these designs into a single flow, called Mechan-stein.
\cdot
Contents
(1) The building blocks of block-space market design
   Enshrined PBS & MEV-burn via PTC
   Execution Auctions (an Attester-Proposer Separation instantiation)
   FOCIL
(2) Mechan-stein
   Potential Issues with Mechan-stein
\cdot

Related work


[1] The building blocks (pun intended) of block-space market design

Since before the Merge, much has been (and continues to be) written about Ethereum’s transaction supply chain and block-space market design. I still think Vitalik’s Endgame summarizes the best-case outcome most succinctly with,

“Block production is centralized, block validation is trustless and highly decentralized, and censorship is still prevented.”

We can operationalize each of these statements into a design goal for our system:

  1. “Block production is centralized.” \rightarrow MEV is a fact of life in financial systems, and some actors will inevitably specialize in its extraction. We can’t expect solo-stakers to run profitable builders, but we can encourage competition and transparency in the MEV markets. When discussing MEV-boost, we usually describe it as aiming to democratize access to MEV for all proposers (which it does extremely well), but one under-discussed element of the existing system is that it encourages builder competition by creating a transparent market for buying block space. There are (and always will be) advantages and economies of scale for being a big builder (e.g., colocation with relays, acquiring exclusive order flow deals, and holding large inventory on various trading venues – for more, see this recent paper from Burak, Danning, Thomas, and Florian), but anyone can send blocks and compete in the auction. Another important element of MEV-boost is that the auction happens Just-In-Time (JIT) for the block proposal, making timing games around the block proposal deadline valuable to the proposer who serves as the auctioneer. Still, the real-time nature of the auction ensures that the builder with the highest value for this specific slot wins the auction (rather than, e.g., the builder with the highest average value for any slot – see Max & Mallesh’s argument for why ahead-of-time auctions are more centralized). This leads to design goal #1: encourage builder competition.^{[1]}
  2. “Block validation is trustless and highly decentralized”^{[2]} \rightarrow Ethereum’s primary focus has been preserving the validator set’s decentralization (why this is important in item #3 below). This fundamental tenet instantiates itself in both the engineering/technical design and the economic/incentive design. On the engineering front, the spec is written with the minimum hardware requirements in mind. This constraint ensures that participation in Ethereum’s consensus is feasible given (relatively) modest resources. On the economic level, the goal is to minimize the disparity in financial outcomes between at-home stakers and professional operators. Beyond feasibility, this aims to make at-home staking not too irrational. This double negative is tongue-in-cheek but hopefully conveys the message of trying to ensure there is some economic viability to at-home staking rather than staking through a centralized provider. Another lens for interpreting this is keeping the marginal value of sophistication low. We can’t expect at-home operators to have the exact same rewards as Coinbase and Lido (e.g., because they may have higher network latency), but the centralized staking providers shouldn’t benefit greatly from sophistication. This leads to design goal #2: limit the value of validator sophistication.
  3. “Censorship is prevented.” \rightarrow Credible neutrality is what differentiates crypto-economic systems from FinTech. If centralized entities determine which transactions land on chain and which do not, it’s over. To ensure the anti-fragility and neutrality of Ethereum, we must rely on a geographically distributed validator set; the validator set is the most decentralized part of the block production pipeline. In my opinion, (i) the main point of having a decentralized validator set is to allow those validators to express different preferences over the transactions that land on chain (“high preference entropy” – h/t Dr. Monnot), and (ii) relying on this decentralization is the only way to preserve neutrality of the chain (c.f., Uncrowdable Inclusion Lists for more discussion on chain neutrality). This leads to design goal #3: preserve the neutrality of Ethereum block space.

Right. To summarize:

  1. “Block production is centralized.” \rightarrow design goal #1: encourage builder competition.
  2. “Block validation is trustless and highly decentralized” \rightarrow design goal #2: limit the value of validator sophistication.
  3. “Censorship is prevented.” \rightarrow design goal #3: preserve the neutrality of Ethereum block space.

Ok. This is all great, but let’s talk specifics. Many proposals aim to accomplish some of the design goals above. I am going to focus on three:

  1. Enshrined Proposer-Builder Separation & MEV-burn via Payload-Timeliness Committee (abbr. PTC onwards).
  2. Execution Auctions/Attester-Proposer Separation.
  3. Fork-Choice Enforced Inclusion Lists (abbr. FOCIL onwards).

This may seem jargon-laden, and I apologize; please check out the links for the canonical article on each topic; for even more legibility, I will present a high-level view of each proposal below.

Enshrined PBS & MEV-burn via PTC

This design enshrines a JIT block auction into the Ethereum consensus layer. The diagram below summarizes the block production pipeline during the slot.

  1. The builder bids in the auction by sending (block header, bid value) pairs to the proposer and the committee members.
  2. The proposer commits to the highest bid value by signing and publishing the winning bid.
  3. The committee enforces that the proposer selected a sufficiently high bid according to their view.
  4. The builder publishes the block.
  5. The committee enforces the timeliness of the builder’s publication.

Analysis

  • PTC allows the protocol (through the enforcement of the committee) to serve as the trusted third-party in the fair-exchange of the sale of the block building rights. MEV-burn (maybe more aptly denoted as “block maximization” because burning isn’t strictly necessary for the bids) asks the attesters to enforce a threshold for the bid selected as the winner by the proposer.
  • PTC primarily implements design goal #1: encourage builder competition. PTC enshrines MEV-boost, fully leaning into creating a competitive marketplace for block building. As in MEV-boost, the real-time block auction allows any builder to submit bids and encourages competition during each slot. Additionally, the JIT auction and bid-threshold enforcement of MEV-burn reduces the risk of multi-slot MEV by forcing each auction to take place during the slot. Lastly, PTC and other ePBS designs historically were aimed at removing relays. With bid thresholds from MEV-burn, the bypassability of the protocol becomes less feasible (even if the best builder bypasses, the second best can go through the protocol and ensure their bid wins).
  • PTC marginally addresses design goal #2: limit the value of validator sophistication. By creating an explicit market for MEV-aware blocks, PTC ensures that all validators can access a large portion of the MEV available in their slot. MEV-burn also smooths out the variance in the validator rewards. However, one of the major limitations of this auction design is the “value-in-flight” (h/t Barnabé for coining the term) problem of the auction taking place during the slot. Because the value of the sold item changes dramatically throughout a slot, the auctioneer’s role benefits from sophistication. Beyond simple timing games, more exotic strategies around the fork-choice rule (e.g., using extra fork-choice weight to further delay block publication, h/t Toni) are possible, and we are just starting to see these play out.
  • PTC does not address design goal #3: Preserve the neutrality of Ethereum block space. Neither PTC nor PBS generally are designed to encourage censorship resistance. The fact that a few builders account for most of Ethereum’s blocks is not surprising, and we should not count on those builders to uphold the credible neutrality of the chain (even if they are right now). While it is true that PTC aims to maintain a decentralized validator set, the fact that the full block is sold counter-acts that effect by still giving discretionary power of the excluded transactions to the builder (e.g., consider the hypothetical where 100% of validators are at-home stakers (maximally decentralized), but they all outsource to the same builder \implies the builder fully determines the transactions that land onchain).

Execution Auctions (an Attester-Proposer Separation Instantiation)

In contrast to the JIT block auction enabled by PTC, this design enshrines an ahead-of-time slot auction into the Ethereum consensus layer. A slot auction still allocates the entire block to the winner of the auction, but they no longer need to commit to the specific contents of the block when bidding (e.g., they are buying future block space) – this allows the auction to take place well in advance of the slot itself. The diagram below summarizes the block production pipeline 32 slots ahead of time (the 32 is just an arbitrary number; you could run the auction any time in advance or even during the slot itself; the key distinction is the fact that the bids don’t contain commitments to the contents of the block).

N.B., the first three steps are nearly identical to the PTC process. The only differences are (a) the auction for the Slot N+32 block production rights takes place during Slot N and (b) the bid object is a single bid value rather than the (block header, bid value) tuples. The actual building and publication of the block happen during Slot N+32, and Execution Auctions are agnostic to that process.

  1. The builder bids in the auction by sending bid value to the proposer and the committee members.
  2. The proposer commits to the highest bid value by signing and publishing the winning bid.
  3. The committee enforces that the proposer selected a sufficiently high bid according to their view.

Analysis

  • Execution Auctions allow the protocol (through the enforcement of the committee) to serve as the trusted third party in the fair-exchange of the sale of the block building rights for a future slot.
  • Execution Auctions primarily support design goal #2: limit the value of validator sophistication. With the real-time auction of PTC, we described how the value-in-flight problem results in value from the sophistication of the validators who conduct the auction. In Execution Auctions, the auction occurs a priori, making the value of the object sold less volatile. The validator conducting the auction has a much simpler role that doesn’t benefit from timing games in the way they do in the JIT auction, thereby reducing their value from sophistication.
  • Execution Auctions do not address design goal #1: encourage builder competition. By running the auction ahead of time, the highest value bidder will always be the builder who is best at producing blocks (h/t Max and Mallesh for formalizing this). The builder may still choose to sell the block production rights on the secondary market, but only at a premium over the amount they can extract.^{[3]}
  • Execution Auctions do not address design goal #3: Preserve the neutrality of Ethereum block space. Execution Auctions are not designed to encourage censorship resistance. We fully expect the future block space and builder markets to remain centralized. Another major concern with Execution Auctions is the risk of multi-slot MEV. Because the auction is not real-time, it is possible to acquire multiple consecutive future slots and launch multi-slot MEV strategies without competing in any auction during the slot itself. (We could try to mitigate this by making the look-ahead only a single slot – e.g., Slot N+1 auction during Slot N, but this may open up the same value-in-flight issues around JIT block auctions. More research is needed (and actively being done h/t Julian) here.)

FOCIL

This design allows multiple consensus participants to construct lists of transactions that must be included in a given slot. In contrast to the previous designs, this is not an auction and does not aim to enshrine a MEV marketplace into the protocol. Instead, the focus here is improving the system’s neutrality by allowing multiple parties to co-create a template (in the form of a set of constraints) for the produced block. The diagram below describes the block production process during the slot itself.

  1. The IL committee publishes their inclusion lists to the builder (clumping this together with the proposer for this diagram because the builder must follow the block template) and the attesters.
  2. The builder publishes a block that includes an aggregate view of the ILs they received and conforms to the constraints therein.
  3. The attesters enforce the block validity conditions, which now check that the builder included a sufficient threshold of observed inclusion lists.

Analysis

  • FOCIL increases the protocol’s neutrality by allowing multiple validators to express their preferences in the block co-creation.
  • FOCIL primarily contributes to design goal #3: preserve the neutrality of Ethereum blockspace. This is the direct goal; more inputs to the block construction seem like a no-brainer (very much in line with the latest thread of concurrent proposer research). Critically, FOCIL intentionally does not give any MEV power to the inclusion list constructors (see Uncrowdability for more) to avoid the economic capture of that role. In particular, FOCIL does not aim to constrain the builder’s ability to extract MEV generally; the builder can still reorder and insert transactions at will in their block production process. Instead, it is the builder’s ability to arbitrarily exclude transactions that FOCIL reduces.
  • FOCIL does not address design goal #1: encourage builder competition. FOCIL is agnostic to the exact block production process beyond enforcing a block template for transactions that cannot be excluded arbitrarily.
  • FOCIL does not address design goal #2: limit the value of validator sophistication. FOCIL is agnostic to the exact block production process beyond enforcing a block template for transactions that cannot be excluded arbitrarily.

Right. That was the “vegetable eating” portion of this article. The critical takeaway is that each of the above proposals primarily addresses one of the cited design goals, but none addresses all three simultaneously. This makes it easy to point out flaws in any specific design.
…
You probably see where we are going with this. Let’s not bury the lede. What if we combine them? Each serves a specific role and operates on a different portion of the slot duration; why not play it out?

[2] Mechan-stein

With the groundwork laid, we can ~nearly~ combine the three mechanisms directly. There is one issue, however, which arises from both auctions selling the same object – the proposing rights for Slot N+32. The resulting bids in the first auction (the slot auction sale of Slot N+32 during Slot N) would thus not carry any economic meaning because bidders would be competing for the slot but would then be forced sellers by the time the slot arrived. To resolve this, the second auction (which happens JIT during the slot) could instead be a Top-of-Block auction (e.g., the first 5mm gas consumed in the block). There are many articles exploring the Top-of-Block/Rest-of-Block split (sometimes called block prefix/suffixes) (see, e.g., here, here, here), so we won’t go into the details of the consensus changes required to facilitate this exchange. Taking its feasibility for granted, the double-auction design of Mechan-stein makes more sense.
- Auction 1 during Slot N sells the block proposing rights for Slot N+32 and is conducted by the proposer of Slot N.
- Auction 2 during Slot N+32 sells the Top-of-Block to a (potentially different) builder who specifies the specific set of transactions to be executed first in the block. This auction is conducted just in time by the builder/winner of Auction 1.

With this framing, the winner of Auction 1 effectively bought the option to build (or sell) the Rest-of-Block for Slot N+32 – thus the expected value of the bids in that auction would be the average amount of MEV extractable in the block suffix (aside: this might play nicely with preconfs). The diagram below shows the flow at a high level (leaving off many back-and-forths for legibility).

  1. The Slot N proposer auctions off the Slot N+32 proposing rights.
  2. The Slot N attesters enforce the bid threshold of the slot auction.
  3. [32 slots later] The Slot N+32 IL committee publishes their ILs.
  4. The Slot N+32 builder auctions off the Top-of-Block for Slot N+32.
  5. The Slot N+32 PTC enforces the bid threshold of the Top-of-Block auction.
  6. The Slot N+32 PTC enforces the timeliness of the block publication from the winning builder.
  7. The Slot N+32 attesters enforce the IL threshold of the final block.
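To summarize who enforces what, here is a toy checklist of the enforcement duties above in Python. It is purely a restatement of steps 1–7; the dataclass and field names are illustrative, not a spec.

    # Toy restatement of the Mechan-stein enforcement duties (steps 1-7 above).
    # The structure and names are illustrative only.

    from dataclasses import dataclass

    @dataclass
    class SlotNPlus32Block:
        slot_auction_bid_met: bool   # step 2: Slot N attesters saw the slot auction clear its bid threshold
        il_threshold_met: bool       # steps 3 & 7: enough of the published ILs are satisfied by the block
        tob_auction_bid_met: bool    # steps 4 & 5: the PTC saw the Top-of-Block auction clear its bid threshold
        published_on_time: bool      # step 6: the PTC saw the winning builder publish in time

    def block_accepted(b: SlotNPlus32Block) -> bool:
        # The block only becomes canonical if every enforcement role signs off
        # on its piece (the checks and balances described below).
        return (b.slot_auction_bid_met
                and b.il_threshold_met
                and b.tob_auction_bid_met
                and b.published_on_time)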

Yeah, yeah – it’s a lot of steps, but the pitch is pretty compelling.

  • Mechan-stein addresses design goal #1: encourage builder competition. The permissionless, JIT Top-of-Block auction helps mitigate the risk of multi-slot MEV in Execution Auctions by forcing the slot auction winner to sell a portion of the block or at least pay a threshold to build the full block themselves.
  • Mechan-stein addresses design goal #2: limit the value of validator sophistication. The role of an average validator in block production is now the simple combination of (1) conducting the ahead-of-time slot auction and (2) publishing their inclusion list when part of an IL committee. This greatly reduces the power bestowed on the validator because (1) they are now conducting an auction a priori (thus, latency and timing games play a smaller role) and (2) the inclusion list intentionally does not generate much value for MEV-carrying transactions (because it only guarantees inclusion rather than ordering).
  • Mechan-stein addresses design goal #3: preserve the neutrality of Ethereum block space. By allowing many participants to co-create the set of constraints enforced on the builder of each block, high preference entropy is achieved without unduly benefiting the transactions that land in an inclusion list, as block builders can still reorder and insert at their leisure. However, the builder’s ability to exclude is limited, removing some of their monopolist power over the transactions in the block.

The combined mechanism creates a set of checks and balances where the weaknesses of one design in isolation are the strengths of another. Everything is perfect, right?

Potential issues with Mechan-stein

It might not be only rainbows and butterflies. Without being comprehensive (neither in the list of potential issues nor the responses to said issues), let’s run down a few of the most obvious questions with Mechan-stein and some initial counter-points.

  • Point #1 – complexity, complexity, complexity. This could (and maybe should) count for multiple points (h/t Mallesh for the relevant tweet). Each of these proposals involves massive changes to the consensus layer of Ethereum with wide-ranging impact (particularly on the fork-choice rule). The devil is truly in the details, and getting something like this spec’ed out and implemented would be an immense research and engineering lift – let’s just say William of Ockham would not be impressed.
    • Counter-point #1 – building the future of finance in a permissionless and hyper-financialized world wasn’t going to be simple (“Rome wasn’t built in a day”). It shouldn’t be shocking that there doesn’t seem to be a silver bullet for building an MEV-aware, decentralized, credibly neutral blockchain. Maybe eating the complexity now can leave the chain in a more stable equilibrium. Also, there may be significant synergies in combining designs (e.g., using the same committee for FOCIL and PTC). You could probably do a subset of Mechan-stein and still get some benefits (e.g., FOCIL + PTC).
  • Point #2 – how might the ahead-of-time slot auction distort the MEV market? Mostly just reciting Max and Mallesh’s argument (3rd time referencing that paper in this article lol). By removing the real-time nature of the initial auction, you bias it towards a winner-take-all for the best builder (or the “Best Block Space Future Value Estimator™”). I’d say this is similar in spirit to the Phil Daian view of making the competition as deterministic as possible (e.g., “deterministic vs statistical opportunities”).
    • Counter-point #2 – that is the point of still having the PTC conduct a JIT Top-of-Block auction. I think this feels reasonable. However, the auctioneer (who may be a builder themselves) still has a slight edge in the JIT auction: they can benefit from sophistication and latency investments because they are both the auctioneer and a participant. As mentioned above, you could consider skipping the Execution Auctions part of Mechan-stein and just going with FOCIL + PTC (or even leave MEV-boost alone as the primary PBS market and just do FOCIL). (h/t Justin for pointing out that you could try to do Execution Auctions where multiple proposers (more than one auction winner) are selected – another combined mechanism that tries to mitigate the multi-slot MEV risk.)
  • Point #3 – there is still power in being the block producer. As pointed out in this comment and its response on the FOCIL post, there is still some discretionary power in being the block builder. Namely, they can choose which ILs they exclude from their aggregate up to some protocol-enforced tolerance. This notion of having an IL “aggregator” is the main difference between FOCIL and a leaderless approach like Braid.
    • Counter-point #3 – this seems like a fundamental feature. Again, I find myself leaning on Phil’s comment and mental model for “how economic power expresses itself in the protocol.” In a distributed system with network latency and geographic decentralization, some parties will have advantages over others. Suppose the protocol doesn’t explicitly imbue some participants with power during some period (e.g., by electing a leader). In that case, that power will still manifest somewhere else, likely in a more implicit (thus more sophisticated) way. This is more of a meta point, and I am happy to be convinced otherwise.

All right, going to cut it here; hope you found it interesting. Lots to think on still.

thanks for reading :heart: -mike


footnotes

^{[1]}: It is worth noting that, conditioned on having strong censorship resistance properties, the difference between a monopolist builder and a competitive marketplace of builders isn’t so vital. As discussed with Barnabé and Julian, perhaps a more important property is the “replace-ability” of a monopolist builder if they begin abusing their power. All else being equal, I still prefer the outcome where we have multiple builders, even if just for the memetic reality that a single block builder looks highly centralized, even when the other consensus participants heavily constrain them. Hence, builder competition still feels like a fair desideratum.:leftwards_arrow_with_hook:︎

^{[2]}: Vitalik pointed out that when he originally wrote this, he was referring more to the act of validating the blocks (e.g., by verifying a ZK proof) rather than explicitly participating in consensus. The name “validator” denotes someone who engages in consensus, which has been a nomenclatural pain point since the launch of the beacon chain. Despite this, I still like the framing of keeping some form of consensus participation decentralized (mainly as a means to better chain neutrality), so I will slightly abuse the naming confusion. xD :leftwards_arrow_with_hook:︎

^{[3]}: It is worth noting that validators could also choose to only sell their block at a premium in the more general case through the use of the min-bid feature of MEV-boost. See more on min-bid from Julian and Data Always. :leftwards_arrow_with_hook:︎

7 posts - 5 participants

Read full topic

Ecosystem (5)
Curve
Proposals Reduce TUSD Pegkeeper Debt Ceiling to 0 and pyUSD to 5m

Published: Sep 25, 2024

View in forum →Remove

Summary:

Reduce TUSD PegKeeper debt ceiling to 0.
Reduce pyUSD PegKeeper debt ceiling to 5m.

Abstract:

PegKeeper debt ceiling is the maximum amount of crvUSD that a PegKeeper can mint to its target crvUSD pool. Historically, the USDC/USDT pools have been the primary liquidity centers and the most useful PegKeeper pools for keeping crvUSD on peg. This proposal rebalances debt ceilings in proportion to PegKeeper pool TVL so that reliance on each PegKeeper matches the significance of the respective pool. It also proposes to fully remove exposure to TUSD due to regulatory risk and solvency concerns.

Motivation:
The current PegKeeper debt ceilings vs pool TVLs along with proposed ratio are as follows:

PegKeeper | Debt Ceiling | Pool TVL | Debt Ceiling/TVL | Proposed Ratio
USDC      | 25m          | 12.62m   | 1.98             | 1.98
USDT      | 25m          | 9.69m    | 2.58             | 2.58
pyUSD     | 15m          | 963k     | 15.58            | 5.19
TUSD      | 10m          | 556k     | 17.99            | 0
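For reference, the two ratio columns are simply debt ceiling divided by pool TVL. A quick Python sketch reproducing them (values copied from the table above; the proposed ceilings for USDC/USDT are unchanged):

    # Reproduce the Debt Ceiling / TVL ratios from the table above.
    # Amounts are taken directly from the proposal.

    pools = {
        #         current ceiling, pool TVL, proposed ceiling
        "USDC":  (25_000_000, 12_620_000, 25_000_000),
        "USDT":  (25_000_000,  9_690_000, 25_000_000),
        "pyUSD": (15_000_000,    963_000,  5_000_000),
        "TUSD":  (10_000_000,    556_000,          0),
    }

    for name, (ceiling, tvl, proposed) in pools.items():
        print(f"{name:>5}: current ratio {ceiling / tvl:5.2f}, proposed ratio {proposed / tvl:5.2f}")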

Notice the PegKeeper ceiling to TVL ratio is much higher for pyUSD and TUSD. These pools also have not shown a strong trajectory in their TVL:

[Charts: pyUSD and TUSD PegKeeper pool TVL over time]

This can cause potential issues with the PegKeepers because of ratio limits. A recent proposal voted to increase the caller_share of these PegKeepers to incentivize updating these less efficient PegKeepers. Ratio limits require multiple PegKeepers to deploy debt before increasing the total share of debt that can be deployed. One can play with this Desmos graph to experiment with how changes in debt ceiling affect the ratio limit of the PegKeepers. Essentially, the high debt ceiling relative to pool size may limit the deployment of debt in the USDT/USDC PegKeepers (the important crvUSD PegKeepers) and artificially require more debt to deploy through the pyUSD/TUSD PegKeepers (the “supplementary” PegKeepers).

Ultimately Curve requires a strong diversity of PegKeepers, but that is another discussion. crvUSD is overexposed to minor stablecoins, especially TUSD, which has a dubious track record and has recently been charged by the SEC with defrauding investors. LlamaRisk has always taken a skeptical stance toward TUSD, and stakeholders are advised to read the public attestations provided for TUSD. Attestations disclose that nearly 100% of reserves are with the Hong Kong depository institution:

The Hong Kong depository institution also invests in other instruments to generate yield, which are made up of investments that may not be readily convertible to cash, subject to market conditions or fund performance.

Furthermore, attestations disclose that accounting is done at cost. It is impossible to tell, based on publicly available information, whether the reserves backing TUSD are liquid or even whether the stablecoin is solvent. What we do know is that TUSD experienced a prolonged depeg following its offboarding from Binance in February and a supply drawdown from 3.3B in Nov '23 to 500M in April '24. The supply has since remained conspicuously static, indicating that there is little public interest in the stablecoin and that it may be largely held by insiders. In fact, 78% of the supply on Ethereum (~50% of total supply) is held by an EOA associated with Justin Sun.

PegKeeper V2 was designed to mitigate exposure to stablecoin depegs. However, it may be prudent to consider completely removing crvUSD exposure to TUSD, which has a notably poor track record on peg stability and transparency standards compared to all other PegKeeper stablecoins. Other PegKeeper contenders may include USDM. A follow-up proposal will address onboarding a USDM PegKeeper.

Specification:

call crvUSD ControllerFactory: 0xC9332fdCB1C491Dcc683bAe86Fe3cb70360738BC

TUSD_PK: "0x0a05FF644878B908eF8EB29542aa88C07D9797D3"
PYUSD_PK: "0x3fA20eAa107DE08B38a8734063D605d5842fe09C"

set_debt_ceiling(TUSD_PK, 0)
set_debt_ceiling(PYUSD_PK, 5000000000000000000000000)
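For multisig signers who want to sanity-check the calldata before signing, below is a minimal web3.py sketch that ABI-encodes the two calls against the ControllerFactory address above. The ABI fragment and parameter names are assumed from the signature shown in this spec (set_debt_ceiling(address, uint256)); verify them against the verified contract source on Etherscan before use.

    # Minimal sketch for encoding the two set_debt_ceiling calls from the spec.
    # The ABI fragment below is assumed from the signature in this proposal;
    # verify against the verified ControllerFactory source before signing.

    from web3 import Web3

    w3 = Web3()  # no RPC connection is needed just to ABI-encode calldata

    CONTROLLER_FACTORY = "0xC9332fdCB1C491Dcc683bAe86Fe3cb70360738BC"
    TUSD_PK  = "0x0a05FF644878B908eF8EB29542aa88C07D9797D3"
    PYUSD_PK = "0x3fA20eAa107DE08B38a8734063D605d5842fe09C"

    abi = [{
        "name": "set_debt_ceiling",
        "type": "function",
        "stateMutability": "nonpayable",
        "inputs": [
            {"name": "_to", "type": "address"},
            {"name": "debt_ceiling", "type": "uint256"},
        ],
        "outputs": [],
    }]

    factory = w3.eth.contract(address=Web3.to_checksum_address(CONTROLLER_FACTORY), abi=abi)

    calls = [
        (TUSD_PK, 0),                    # TUSD PegKeeper -> 0
        (PYUSD_PK, 5_000_000 * 10**18),  # pyUSD PegKeeper -> 5m crvUSD (matches the spec value)
    ]

    for target, ceiling in calls:
        # encodeABI is the web3.py v6 method name; newer versions may rename it to encode_abi.
        data = factory.encodeABI(fn_name="set_debt_ceiling",
                                 args=[Web3.to_checksum_address(target), ceiling])
        print(target, ceiling, data)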

1 post - 1 participant

Read full topic

Gauge Proposals Proposal to add tBTC/cbBTC [Ethereum] to the Gauge Controller

Published: Sep 22, 2024

View in forum →Remove

Summary:

Proposal to add gauge support for the tBTC/cbBTC pool on Ethereum.

References/Useful links:

Link to:

Protocol Description:

tBTC is a permissionless wrapped Bitcoin that is 1:1 backed by mainnet BTC. tBTC is trust-minimized and redeemable for mainnet BTC without a centralized custodian.

cbBTC is a new but already well-known centralized wrapped Bitcoin.

Motivation:

This proposal aims to add a veCRV gauge for the Curve tBTC/cbBTC pool on Ethereum.

With the recent launch of cbBTC, Threshold considers this pair essential for effective liquidity routing on mainnet and expects it to attract significant liquidity.

The DAO has budgeted T token bribes to incentivise this pool, with the goal to attract liquidity to the pool and increase the overall liquidity of tBTC on mainnet.

Specifications:

1 Governance:

tBTC operates on Threshold DAO’s decentralized threshold encryption protocol. Threshold DAO is governed by the network’s work token, T. T token holders govern the DAO via proposals raised to the Threshold forum, which can be raised to a vote via Snapshot, as well as the on-chain Governor Bravo module via Boardroom.

tBTC contract deployments are currently managed via the Council multi-sig, a 6-of-9 multi-sig managed by trusted community members: 0x9F6e831c8F8939DC0C830C6e492e7cEf4f9C2F5f

In the future, all contract authorities will be passed to the Governor Bravo contract.

2 Oracles:

tBTC does not rely on an oracle price feed.

3 Audits:

Links to:

Immunefi Bug Bounty: Threshold Network Bug Bounties | Immunefi

4 Centralization vectors:

Threshold Network governance is decentralized, and updates are ratified by the DAO.

tBTC contract updates are currently managed by the Council multi-sig.

5 Market History:

tBTC is a wrapped BTC token, and is pegged to the price of BTC. Since launch, tBTC’s price has not significantly diverged from that of BTC. tBTC launched in January of 2023, and currently has ~3400 BTC to collateralize the ~3400 tBTC.

The tBTC/cbBTC pool has recently been created.

Link to pool: https://etherscan.io/address/0xae6ee608b297305abf3eb609b81febbb8f6a0bb3

Link to gauge:
https://etherscan.io/address/0xc11b5bad6ef7b1bdc90c85d5498a91d7f19b5806

Value:

The new tBTC/cbBTC pool on Curve is intended to provide effective liquidity routing on mainnet, as well as boost tBTC TVL on Curve by leveraging the launch of cbBTC.

Threshold has allocated T incentives to bootstrap liquidity via incentivised bribes on Warden. Similar programs have successfully bootstrapped tBTC liquidity in the past.

Our goal is to create deep, sticky liquidity on Curve and benefit from existing orderflows. This will serve to increase TVL and volume on Curve.

1 post - 1 participant

Read full topic

Gauge Proposals Proposal to add tBTC/thUSD [Ethereum] to the Gauge Controller

Published: Sep 22, 2024

View in forum →Remove

Summary:

Proposal to add gauge support for the tBTC/thUSD pool on Ethereum.

References/Useful links:

Link to:

Protocol Description:

tBTC is a permissionless wrapped Bitcoin that is 1:1 backed by mainnet BTC. tBTC is trust-minimized and redeemable for mainnet BTC without a centralized custodian.

thUSD is a collateralized stablecoin, backed by tBTC and ETH, based on a fork of the Liquity protocol. Interest-free thUSD loans are created when a user deposits tBTC and ETH as collateral with a minimum collateral ratio of 110%. Undercollateralized positions are liquidated by the protocol, and collateral is sold to maintain a minimum system-wide collateralisation of 150%.

Users are able to deposit thUSD in the stability pool, which closes at-risk troves and distributes the collateral pro-rata to stability pool participants. Fees are collected for loan origination and redemption.
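As a toy illustration of the 110% minimum collateral ratio described above (prices and position sizes are invented; this is not protocol code):

    # Toy Liquity-style collateral ratio check (illustrative numbers only).

    MIN_COLLATERAL_RATIO = 1.10  # per-trove minimum, per the description above
    SYSTEM_MIN_RATIO = 1.50      # system-wide level mentioned above

    def collateral_ratio(collateral_value_usd: float, debt_thusd: float) -> float:
        return collateral_value_usd / debt_thusd

    # A trove with 0.2 tBTC at a hypothetical $60,000/BTC, borrowing 10,000 thUSD:
    cr = collateral_ratio(0.2 * 60_000, 10_000)  # 1.2, i.e. 120%
    print(f"CR = {cr:.0%}, liquidatable: {cr < MIN_COLLATERAL_RATIO}")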

Motivation:

This proposal aims to add a veCRV gauge for the Curve tBTC/thUSD pool on Ethereum.

thUSD has successfully launched and is growing. The key directives of the growth phase are to add functionality and to continue to increase liquidity on Curve.

The DAO has budgeted T token bribes to incentivise this pool, with the goal to attract liquidity to the pool and continue to position Curve as the centre of thUSD liquidity on Ethereum.

Specifications:

1 Governance:

tBTC operates on Threshold DAO’s decentralized threshold encryption protocol. Threshold DAO is governed by the network’s work token, T. T token holders govern the DAO via proposals raised to the Threshold forum, which can be raised to a vote via Snapshot, as well as the on-chain Governor Bravo module via Boardroom.

tBTC contract deployments are currently managed via the Council multi-sig, a 6-of-9 multi sig managed by trusted community members: 0x9F6e831c8F8939DC0C830C6e492e7cEf4f9C2F5f

In the future, all contract authorities will be passed to the Governor Bravo contract.

thUSD operates as a collection of immutable smart contracts. As a Liquity fork, these contracts have been rigorously stress tested in a real-world scenario. Threshold has also sought independent audits.

The parameters that can be adjusted by the DAO are: the amplification factor of the BAMM contract, thUSD price feeds (Chainlink and Tellor), and thUSD collateral options.

These parameters are managed by the Threshold DAO via the Governor Bravo on-chain governance system.

2 Oracles:

tBTC does not rely on an oracle price feed.

thUSD uses two price oracles to determine system collateralization, which are supplied by Chainlink and Tellor.

3 Audits:

tBTC Audits

tBTC Immunefi Bug Bounty: Threshold Network Bug Bounties | Immunefi

thUSD Audits

thUSD Immunefi Bug Bounty: Threshold USD Bug Bounties | Immunefi

4 Centralization vectors:

Threshold Network governance is decentralized, and updates are ratified by the DAO.

tBTC contract updates are currently managed by the Council multi-sig.

thUSD parameters are governed by Threshold DAO, which operates via a decentralized on-chain governance system.

Updateable parameters are: the amplification factor of the BAMM contract, thUSD price feeds (Chainlink and Tellor), and thUSD collateral options.

Parameter changes require an on-chain governance cycle.

5 Market History:

tBTC is a wrapped BTC token, and is pegged to the price of BTC. Since launch, tBTC’s price has not significantly diverged from that of BTC. tBTC launched in January of 2023, and currently has ~3400 BTC to collateralize the ~3400 tBTC.

thUSD is a collateralized stablecoin backed by ETH and tBTC. It targets a 1:1 peg with USD, and is built on a sophisticated automated system that achieves price stability by liquidating undercollateralized troves. Liquid thUSD can be used to close at-risk troves, or pay back debt.

Since launch, thUSD’s price has not significantly diverged from that of USD. thUSD launched in April of 2024, and currently has ~117 tBTC ($7.32M) & ~54.2 ETH (~$131k) to collateralize the ~3.1M thUSD.

The tBTC/thUSD pool has recently been created.

Link to pool: https://etherscan.io/address/0xb79bbc5c96ba248e10c1bdac1a2b83790ff22b7f

Link to gauge:
https://etherscan.io/address/0x4e057701a76de5b7ade2fcca2b6761325646f8fa

Value:

The new tBTC/thUSD pool on Curve is intended to provide new functionality for thUSD holders (ability to long BTC) as well as enhance thUSD price stability.

Threshold has allocated T incentives to bootstrap liquidity via incentivised bribes on Warden. Similar programs have successfully bootstrapped tBTC liquidity in the past.

Our goal is to create deep, sticky liquidity on Curve and benefit from existing orderflows. This will serve to increase TVL and volume on Curve.

1 post - 1 participant

Read full topic

Gauge Proposals Add a gauge for the following pool: USR/RLP

Published: Sep 16, 2024

View in forum →Remove

Summary:

Resolv is a protocol issuing and maintaining USR, a stablecoin backed by a delta-neutral Ether collateral pool.

USR is overcollateralized; the excess collateral is tokenized into the Resolv Liquidity Pool (RLP).

To make secondary liquidity in USR and RLP more accessible in larger volumes, a USR/RLP pool was created. Resolv Labs, core contributors to the protocol, propose adding the USR/RLP pool on Ethereum to the Gauge Controller.

References/Useful links:

Protocol Description:

Resolv is a protocol that maintains USR, a stablecoin fully backed by ETH and pegged to the US Dollar. The stablecoin’s delta-neutral design ensures price stability, and it is backed by a tokenized insurance pool (RLP) that provides additional security and overcollateralization.

Key features:

  1. Stable and Transparent: USR is backed 100% by an ETH collateral pool. Its price fluctuations are hedged using perpetual futures to ensure price stability. This approach creates a delta-neutral portfolio, maintaining a stable value pegged to the US Dollar (a toy sketch of the delta-neutral hedge follows this list).
  2. Insurance Layer: RLP acts as a protection mechanism, absorbing risks related to counterparty defaults and negative interest rates. This additional layer of security guarantees that USR remains overcollateralized, which enhances the stablecoin’s stability and reliability.
  3. Capital Efficiency: USR and RLP can be minted and redeemed on a 1:1 basis with liquid collateral, avoiding the inefficiencies associated with overcollateralization. This straightforward minting process allows users to efficiently utilize their assets without tying up excessive capital.
  4. Profitable and Secure: The protocol’s collateral pool generates profits through staking ETH and earning funding fees from futures positions. Additionally, Resolv employs robust risk management strategies, including diversifying exposure to trading venues and using off-exchange custody solutions, to ensure the security and profitability of its assets.
  5. Optimized for Investment: USR and RLP have different levels of expected yield and yield volatility. The protocol’s separation of risks and returns allows investors to understand their investment profiles better, making USR a reliable choice for stable and predictable returns.
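As referenced in point 1, here is a toy sketch of what “delta-neutral” means in practice: a spot ETH position hedged with an equal-sized short in perpetual futures has a USD value that is (to first order) insensitive to the ETH price. All numbers are invented, and funding payments and fees (which the protocol actually earns) are ignored.

    # Toy illustration of a delta-neutral ETH position (numbers invented).
    # Long 100 ETH spot, short 100 ETH of perpetual futures: the USD value of
    # the combined position is insensitive to the ETH price, ignoring funding/fees.

    def portfolio_value(eth_price: float, spot_eth: float, perp_short_eth: float,
                        perp_entry_price: float) -> float:
        spot_value = spot_eth * eth_price
        perp_pnl = perp_short_eth * (perp_entry_price - eth_price)  # short gains when price falls
        return spot_value + perp_pnl

    entry = 2_500.0
    for price in (2_000.0, 2_500.0, 3_000.0):
        print(price, portfolio_value(price, spot_eth=100, perp_short_eth=100, perp_entry_price=entry))
    # Prints 250,000 at every price: the hedge offsets the spot moves.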

Motivation:
Resolv is looking to develop deep secondary liquidity for RLP, with the Curve USR/RLP pool as its main venue. With more liquidity, Resolv will proceed with integrations with leveraged DeFi protocols, enabling more use cases for RLP, with its purchase flow routed through Curve.

Specifications:

  1. Governance: The governance token RSLV is expected to launch soon. Currently the protocol utilises a multi-signature wallet to deploy smart contracts and operate the collateral pool. As the protocol moves into its public phase, the contributors will shift protocol governance to public discussions and voting.
  2. Oracles: There is currently no on-chain oracle for USR or RLP. An oracle will be set up as soon as there are more reliable sources of on-chain liquidity.
  3. Audits: Resolv is audited by MixBytes, Pessimistic, and Pashov. Links to audit reports can be found on the protocol documents page (Security | Resolv Docs)
  4. Centralization vectors: The collateral pool that backs Resolv tokens is operated by a centralized backend/ops team. The signatories for operations, both off-chain and on-chain, consist of the executive team.
  5. Market History: Resolv has been live since April 2024 and has a current TVL of approximately $16mn. Average yield metrics since the start of the product: native USR staking - 6.49% APR, RLP - 12.96% APR.

Voting Live:
https://dao.curve.fi/vote/ownership/842
https://crvhub.com/governance/ownership/842

1 post - 1 participant

Read full topic

Gauge Proposals Proposal to Add shezBTC/WBTC Pool to Gauge Controller

Published: Sep 10, 2024

View in forum →Remove

Curve Proposal - Enable shezBTC/WBTC gauge [Ethereum]

Summary:

This is a proposal to enable $CRV rewards for the shezBTC/WBTC gauge on Curve at: Curve.fi

$CRV incentives will increase liquidity, boost trading efficiency, and grow user engagement.

By incentivizing the pool, we attract committed LPs, which would result in a thriving ecosystem on Curve’s AMM, positioning Curve as a leading venue for high-yield LPing on correlated pairs.

References/Useful links:

Website: https://www.shezmu.io/

Docs: Abstract | Shezmu

dApp: Shezmu | Leveraging Yield

Twitter: x.com

Github: ShezmuTeam (Shezmu) · GitHub

Protocol Description:
Shezmu introduces a groundbreaking hybrid Collateralized Debt Position (CDP) platform that innovatively combines the capabilities of both NFTs and Yield-Bearing Tokens. Our platform allows users to borrow against both NFTs and Yield-Bearing Tokens, providing unparalleled flexibility and liquidity in the digital asset space. In addition to the core CDP functionality, our project offers a suite of utilities designed to enhance user experience and asset value.

Core Features

  1. Hybrid CDP Platform:
  • With BTC-, ETH-, and stablecoin-derived assets available to borrow, users can minimize liquidation risk by borrowing an asset whose price moves in tandem with their collateral, making this one of the most flexible CDPs with the lowest liquidation risk (see the toy example after this feature list).
  • Borrowing Against NFTs and Yield-Bearing Tokens: Users can use their NFTs and Yield-Bearing Tokens as collateral to secure loans, unlocking liquidity without relinquishing ownership of their valuable digital assets.
  • Dynamic Collateral Management: Oasis supports a wide array of both NFTs and Yield-Bearing Tokens, ensuring users can maximize their assets’ potential.
  2. Agora Bonds:
  • Enhanced Yield Opportunities: Users can participate in our bonding mechanism, providing liquidity to the platform in exchange for attractive returns. This feature ensures a steady supply of capital and rewards participants with competitive yields.
  3. Cross-Chain Swaps:
  • Seamless Interoperability: Shezmu supports cross-chain swaps, enabling users to bridge and swap assets across different blockchain networks effortlessly into Shezmu. This functionality enhances asset mobility and reduces the barriers between various blockchain ecosystems.
  4. Shezmu-Pegged NFTs:
  • Innovative Incentives: Shezmu introduces a unique pegged model for NFTs, where users can burn Shezmu tokens to earn guardian NFTs that emit rewards. This mechanism not only adds value to NFT holdings but also creates a deflationary effect, increasing the scarcity and value of remaining tokens. The NFTs may also be borrowed against in Oasis, allowing for a complete loop within our ecosystem.
    By integrating these advanced features, our hybrid CDP platform not only provides a robust solution for borrowing against NFTs and Tokens but also enriches the digital asset ecosystem with liquidity bonds, cross-chain swaps, and innovative NFT utilities. This comprehensive approach ensures users can fully leverage their assets, participate in diverse financial opportunities, and benefit from the evolving crypto landscape.
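As referenced in the first core feature, here is a toy comparison showing why borrowing an asset correlated with the collateral keeps the collateral ratio stable, whereas USD-denominated debt does not. Prices and position sizes are invented; this is not Shezmu code.

    # Toy comparison (invented prices): collateral ratio drift after a -30% BTC move
    # when debt is denominated in a BTC-correlated asset vs. a USD stablecoin.

    def collateral_ratio(collateral_usd: float, debt_usd: float) -> float:
        return collateral_usd / debt_usd

    collateral_btc = 1.0
    btc_price_before, btc_price_after = 60_000.0, 42_000.0  # -30%

    # Case 1: borrow a BTC-pegged asset worth 0.5 BTC -> debt falls with the collateral.
    debt_btc = 0.5
    cr_before = collateral_ratio(collateral_btc * btc_price_before, debt_btc * btc_price_before)  # 2.0
    cr_after = collateral_ratio(collateral_btc * btc_price_after, debt_btc * btc_price_after)     # still 2.0

    # Case 2: borrow 30,000 of a USD stablecoin -> debt stays fixed while collateral drops.
    debt_usd = 30_000.0
    cr_after_usd = collateral_ratio(collateral_btc * btc_price_after, debt_usd)  # 1.4

    print(cr_before, cr_after, cr_after_usd)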

Motivation:
The Curve shezBTC/WBTC gauge will play an important early role in incentivising liquidity providers to help bootstrap initial liquidity. It will then serve as an important source of liquidity for shezBTC on an ongoing basis, supporting its utility and viability for integration into a range of DeFi protocols.

Specifications:

Governance: Here is a link to the multisig wallet for the owner safe:

https://etherscan.io/address/0xa004e4cedea8497d6f028463e6756a5e6296bad3

Cronjob Pricing Oracle: Here is a link to the Cronjob Oracle we use for fetching prices for assets on our dApp:

https://etherscan.io/address/0x9a559e936395e4e10ed7435d6a43fe69fb1112f7

Audits: First audit can be viewed here:

Centralization Vectors: We have a relatively small engineering team, with one lead engineer, two front-end developers, two back-end developers, and a subgraph developer, but plans and funds are in place to scale as needed. The treasury is currently controlled by the multi-sig wallet.

Market History: Shezmu was launched on the 8th of September 2023 and shezBTC was deployed on the 18th of March 2024. Since then, Shezmu has gradually accumulated over $4m in TVL. There have been no major volatility events attached to shezBTC since launch.

Value: We believe the majority of yield should migrate to wherever the majority of user-provided LP is. Therefore, if the Curve pool proves very popular with our community, it is very likely that we will support it as the primary source of liquidity.

1 post - 1 participant

Read full topic

Gauge Proposals Proposal to Add shezETH/WETH Pool to Gauge Controller

Published: Sep 10, 2024

View in forum →Remove

Curve Proposal - Enable shezETH/WETH gauge [Ethereum]

Summary:

This is a proposal to enable $CRV rewards for the shezETH/WETH gauge on Curve at: Curve.fi

$CRV incentives will increase liquidity, boost trading efficiency, and grow user engagement.

By incentivizing the pool, we attract committed LPs, which would result in a thriving ecosystem on Curve’s AMM, positioning Curve as a leading venue for high-yield LPing on correlated pairs.

References/Useful links:

Website: https://www.shezmu.io/

Docs: Abstract | Shezmu

dApp: Shezmu | Leveraging Yield

Twitter: x.com

Github: ShezmuTeam (Shezmu) · GitHub

Protocol Description:
Shezmu introduces a groundbreaking hybrid Collateralized Debt Position (CDP) platform that innovatively combines the capabilities of both NFTs and Yield-Bearing Tokens. Our platform allows users to borrow against both NFTs and Yield-Bearing Tokens, providing unparalleled flexibility and liquidity in the digital asset space. In addition to the core CDP functionality, our project offers a suite of utilities designed to enhance user experience and asset value.

Core Features

  1. Hybrid CDP Platform:
  • With BTC-, ETH-, and stablecoin-derived assets available to borrow, users can minimize liquidation risk by borrowing an asset whose price moves in tandem with their collateral, making this one of the most flexible CDPs with the lowest liquidation risk.
  • Borrowing Against NFTs and Yield-Bearing Tokens: Users can use their NFTs and Yield-Bearing Tokens as collateral to secure loans, unlocking liquidity without relinquishing ownership of their valuable digital assets.
  • Dynamic Collateral Management: Oasis supports a wide array of both NFTs and Yield-Bearing Tokens, ensuring users can maximize their assets’ potential.
  2. Agora Bonds:
  • Enhanced Yield Opportunities: Users can participate in our bonding mechanism, providing liquidity to the platform in exchange for attractive returns. This feature ensures a steady supply of capital and rewards participants with competitive yields.
  3. Cross-Chain Swaps:
  • Seamless Interoperability: Shezmu supports cross-chain swaps, enabling users to bridge and swap assets across different blockchain networks effortlessly into Shezmu. This functionality enhances asset mobility and reduces the barriers between various blockchain ecosystems.
  4. Shezmu-Pegged NFTs:
  • Innovative Incentives: Shezmu introduces a unique pegged model for NFTs, where users can burn Shezmu tokens to earn guardian NFTs that emit rewards. This mechanism not only adds value to NFT holdings but also creates a deflationary effect, increasing the scarcity and value of remaining tokens. The NFTs may also be borrowed against in Oasis, allowing for a complete loop within our ecosystem.
    By integrating these advanced features, our hybrid CDP platform not only provides a robust solution for borrowing against NFTs and Tokens but also enriches the digital asset ecosystem with liquidity bonds, cross-chain swaps, and innovative NFT utilities. This comprehensive approach ensures users can fully leverage their assets, participate in diverse financial opportunities, and benefit from the evolving crypto landscape.

Motivation:
The Curve shezETH/WETH gauge will play an important early role in incentivising liquidity providers to help bootstrap initial liquidity. It will then serve as an important source of liquidity for shezETH on an ongoing basis, supporting its utility and viability for integration into a range of DeFi protocols.

Specifications:

Governance: Here is a link to the multisig wallet for the owner safe:

https://etherscan.io/address/0xa004e4cedea8497d6f028463e6756a5e6296bad3

Cronjob Pricing Oracle: Here is a link to the Cronjob Oracle we use for fetching prices for assets on our dApp:

https://etherscan.io/address/0x9a559e936395e4e10ed7435d6a43fe69fb1112f7

Audits: First audit can be viewed here: