🐰Hop Ecosystem Discontinue MAGIC Arbitrum Nova LP mining HOP rewards

Published: Jun 21, 2024


Unnecessary selling pressure (214.33 HOP / day), useless incentive (who tf bridges to Nova in MAGIC…?).

1 post - 1 participant

Read full topic

🗳Meta-Governance Grants Committee Election Nomination Thread

Published: Jun 14, 2024


Fortunately, the snapshot vote to renew the Hop grants committee passed with 99.88% voting in favor, so the next step is to commence the nomination thread.

The base responsibilities for each committee member will be to:

  • Promote the grant program to attract grant applicants (e.g., hosting Discord calls or X Spaces).
  • Share quarterly participation and voting metrics regarding grant applicants.

If you would like to nominate yourself to join the three-person grants committee, please share:

  • Background on yourself.

  • Why are you a great candidate to join the Hop grants committee?

  • Describe a successful grants program.

  • Please share a short list of RFPs you would like for the grants program to target.

  • Past experience with grants programs

  • Reach within crypto community

3 posts - 2 participants

Read full topic

💬General Discussions Introduction - Spike - Avantgarde finance

Published: Jun 03, 2024


Hello, Hop! :dizzy:

Just wanted to introduce myself: I’m Spike, and I work for Avantgarde Finance. Before becoming a blockchain maxi I used to be an investment banker, believe it or not! :laughing:

I’ve been around since 2016, saw the first DAO’s formation, and witnessed all the fun with Ethereum Classic (how is it still alive?).

At Avantgarde I cover everything related to governance (voting, decision-making), and we are a large delegate in a number of protocols, including Compound and Uniswap.

Big big big thanks to @francom for having a chat with me recently. It was very helpful for getting a better understanding of how the community functions and what the key pain points are.

I’ll be joining the community calls every now and then to better understand where the community is moving.

And of course great to meet everyone!

Cheers!

1 post - 1 participant

Read full topic

Automated 🤖 AUTOMATED: New Merkle Rewards Root Tue, 28 May 2024 17:00:00 +0000

Published: May 29, 2024


This is an automated post by the merkle rewards worker bot :robot:

A new merkle root has been published to GitHub:

Merkle root hash: 0x7aed64b8e489d8e85bc11b1d503068774969d12d1a30e4d9eac2b27def0dedb0
Merkle root total amount: 284306.499537088822848506 (284306499537088822848506)
Start timestamp: 1663898400 (2022-09-23T02:00:00.000+00:00)
End timestamp: 1716915600 (2024-05-28T17:00:00.000+00:00)
Rewards contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Rewards contract network: optimism

Instructions to verify merkle root:

docker pull hopprotocol/merkle-drop-framework
docker run --env-file docker.env -v ~/Downloads/op_merkle_rewards_data:/tmp/feesdb hopprotocol/merkle-drop-framework start:dist generate -- --network=mainnet --rewards-contract=0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73 --rewards-contract-network=optimism --start-timestamp=1663898400 --end-timestamp=1716915600

Supply RPC urls in docker.env:

ETHEREUM_RPC_URL=https://example.com
POLYGON_RPC_URL=https://example.com
OPTIMISM_RPC_URL=https://example.com
ARBITRUM_RPC_URL=https://example.com
BASE_RPC_URL=https://example.com
GNOSIS_RPC_URL=https://example.com
NOVA_RPC_URL=https://example.com
LINEA_RPC_URL=https://example.com
POLYGONZK_RPC_URL=https://example.com
USE_API_FOR_ON_CHAIN_DATA=true

Web app to publish root:

Contract information for multisig signers:

Contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Method: setMerkleRoot
Parameters:
_merkleRoot: 0x7aed64b8e489d8e85bc11b1d503068774969d12d1a30e4d9eac2b27def0dedb0
totalRewards: 284306499537088822848506
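For multisig signers who want to double-check what they are approving, here is a minimal sketch (not part of the automated post) of how the calldata could be reconstructed with ethers v6, assuming the rewards contract exposes setMerkleRoot(bytes32 _merkleRoot, uint256 totalRewards) as the listed parameters imply:

// Sketch only: rebuild the setMerkleRoot calldata for manual verification.
// The function signature is inferred from the parameters listed above.
import { Interface } from "ethers";

const rewardsContract = "0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73"; // on Optimism
const merkleRoot =
  "0x7aed64b8e489d8e85bc11b1d503068774969d12d1a30e4d9eac2b27def0dedb0";
const totalRewards = 284306499537088822848506n; // raw 18-decimal amount

const iface = new Interface([
  "function setMerkleRoot(bytes32 _merkleRoot, uint256 totalRewards)",
]);
const calldata = iface.encodeFunctionData("setMerkleRoot", [merkleRoot, totalRewards]);

console.log({ to: rewardsContract, data: calldata });

Signers can compare the printed calldata against the transaction data shown in the multisig UI before signing.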

1 post - 1 participant

Read full topic

Automated 🤖 AUTOMATED: New Merkle Rewards Root Tue, 30 Apr 2024 16:00:00 +0000

Published: May 01, 2024


This is an automated post by the merkle rewards worker bot :robot:

A new merkle root has been published to GitHub:

Merkle root hash: 0xf6e9ddfc2e29427f49ddedb817044cee70a4652da1429eca1e84db25cdce7ad1
Merkle root total amount: 282326.709537088822848506 (282326709537088822848506)
Start timestamp: 1663898400 (2022-09-23T02:00:00.000+00:00)
End timestamp: 1714492800 (2024-04-30T16:00:00.000+00:00)
Rewards contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Rewards contract network: optimism

Instructions to verify merkle root:

docker pull hopprotocol/merkle-drop-framework
docker run --env-file docker.env -v ~/Downloads/op_merkle_rewards_data:/tmp/feesdb hopprotocol/merkle-drop-framework start:dist generate -- --network=mainnet --rewards-contract=0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73 --rewards-contract-network=optimism --start-timestamp=1663898400 --end-timestamp=1714492800

Supply RPC urls in docker.env:

ETHEREUM_RPC_URL=https://example.com
POLYGON_RPC_URL=https://example.com
OPTIMISM_RPC_URL=https://example.com
ARBITRUM_RPC_URL=https://example.com
BASE_RPC_URL=https://example.com
GNOSIS_RPC_URL=https://example.com
NOVA_RPC_URL=https://example.com
LINEA_RPC_URL=https://example.com
POLYGONZK_RPC_URL=https://example.com
USE_API_FOR_ON_CHAIN_DATA=true

Web app to publish root:

Contract information for multisig signers:

Contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Method: setMerkleRoot
Parameters:
_merkleRoot: 0xf6e9ddfc2e29427f49ddedb817044cee70a4652da1429eca1e84db25cdce7ad1
totalRewards: 282326709537088822848506

1 post - 1 participant

Read full topic

🪳Bugs & Feedback I found a bug. do you have bug bounty program?

Published: Apr 30, 2024


Hello
I’m curious to know if Hop Protocol offers a bug bounty program.

Additionally, I’d like to inquire about the rewards for bug reports, as I’ve discovered a critical bug.

Thank you!

2 posts - 2 participants

Read full topic

🗳Meta-Governance Hop Protocol & DAO Report

Published: Apr 10, 2024


Hop Protocol & DAO Q1 ‘24

Welcome to our Quarterly report on the progress of Hop Protocol, the leading crypto bridge focused on enhancing blockchain modularity. In this update, we’ll highlight key advancements achieved over the past quarter, from technical upgrades to protocol and DAO growth.

Hop Protocol

In Q1 2024, Hop Protocol’s cumulative volume surpassed the $5 billion mark. Hop Protocol currently supports transfers between the following networks: Ethereum mainnet, Polygon, Gnosis, Optimism, Arbitrum One, Arbitrum Nova, Base, Linea, and Polygon zkEVM.

Total volume this quarter was $478 million, a 63.62% increase over the $292 million of Q4 2023 and higher than any single quarter in 2023.

While Hop V1 has been around since 2021 and has successfully bridged over $5 billion, Hop V2 is around the corner as development inches closer to mainnet. Significant progress has been made on the off-chain infrastructure: the V2 front-end, explorer, and related tooling are complete and undergoing testing. On-chain contract development is also progressing well, with a focus on transaction simulations on testnet.

The bonder network continues to perform and is preparing for changes coming up with V2 and its push to decentralize the bonder role.

Hop supports liquidity pools on Ethereum mainnet, Polygon, Gnosis, Optimism, Arbitrum One, Arbitrum Nova, Base, Linea, and Polygon zkEVM. The TVL of Hop’s liquidity program is roughly $28.4 million, with 76.2% of TVL in ETH. USDC.e comes in second place with 6.3% of the TVL and DAI third with 5.5%. The networks with the greatest liquidity mining TVL are Arbitrum One with 33.6%, Optimism with 26.2%, Base with 19.9%, and Polygon with 12.5%. The upgrade of the USDC bridge to CCTP has launched; this supports cheaper, more efficient native USDC transfers and the upgrade to Hop V2.

The list below shows each source chain’s most frequented destination chains during Q1 ‘24:

Ethereum > Polygon

Polygon > Ethereum and Base

Optimism > Base

Arbitrum One > Ethereum and Base

Base > Ethereum

Linea > Optimism, Arbitrum, and Base

The lifetime transfer count is roughly 3.7 million, and 74.10% of transfers have been in ETH. USDC is second with around 13.64% of transfers. The average weekly transfer count this quarter was about 29k.

The current circulating HOP token supply is around 75 million, which represents roughly 7.5% of the total token supply.

Across various exchanges and chains, the Hop token’s current liquidity totals around $682,711.

Hop DAO

The DAO has approximately 1.47k governance participants and 78 delegates who have been actively participating recently, according to karmahq.xyz/dao/delegates/hop.

During this quarter the DAO had five passing Snapshot votes, while Tally had three passing votes and one failed vote: [HIP-39] Community Multisig Refill (4), which failed due to lack of quorum.

This quarter the DAO passed the following proposals: Treasury Diversification for Ongoing DAO Expenses (HIP44), Treasury Diversification and Protocol Owned Liquidity, Delegate Incentivization Trial (Third Cycle), Community Moderator Role and Team Compensation, and finally, Head of DAO Ops Role and Election.

There are three main proposals that are currently live in the forum: Grants Committee Renewal and Redesign, Protocol Financial Stability Part 1, and Hop Single Sided Liquidity.

Topics for future discussion: Migrating to L2 for Voting and Token Redelegation.

This report solely represents the views and research of the current Head of DAO Operations which could be subject to errors. Nothing in this report includes financial advice.

1 post - 1 participant

Read full topic

🐰Hop Ecosystem [RFC] HOP Single Sided Liquidity

Published: Apr 08, 2024


Summary

Hop DAO should LP 25,000,000 HOP as single sided liquidity in the HOP/ETH 0.30% Univ3 pool on Ethereum. This will increase HOP liquidity and market depth while providing a natural source of diversification for the DAO.

Motivation

As Hop prepares for the launch of v2 and hopefully the ensuing growth of the protocol, now is the time to improve HOP liquidity and position the DAO for increased demand. Hop has not engaged any market makers or incentivized liquidity for HOP to date. I believe that this is the correct approach, however HOP liquidity is extremely low. This makes it difficult to enter positions and leads to significant price volatility. As of this past week, ~$500k of total buy orders would basically exhaust the entire Univ3 pool. This is a relatively small amount of liquidity and could prove problematic if Hop v2 significantly increases demand. I understand that some might say, “wow token shortage good, price go up big”, but that approach is not sustainable. The current TVL of the HOP pool is approximately $280k. This proposal would increase the TVL by $1.2m which would put HOP in line with similar assets. Much of the TVL will be well above the current spot price of HOP which should limit potential adverse effects. As a community, I believe that having well aligned HOP holders is in our long-term best interest. Increasing HOP liquidity will create avenues for more participants to get on board in a reasonable manner.

Recent efforts to increase HOP liquidity are positive but still leave a ways to go. Single sided liquidity lets us LP meaningful amounts of HOP solely to the “upside” of the price range. As the price of HOP increases, the DAO is gently selling HOP for ETH based on market demand. If the price of HOP decreases once this position is in range, there will be significantly more market depth to absorb selling. From my perspective, the biggest downside of this proposal is that it effectively creates resting sell orders for HOP that will need to be filled for the price to increase in the pool. I have tried to size the proposal appropriately to increase liquidity while not overburdening the market for HOP. The impact of additional HOP appears to be very reasonable. The next section explains the mechanics and practical implications for the position.

Mechanics of Execution

I will caveat this by saying that it is difficult to model Univ3 positions and that this should be generally accurate but may be slightly off – please keep in mind that everything is priced in ETH terms so that is an additional variable that makes precision challenging. If there is a great tool for modeling Univ3 positions, please let me know because this was all done manually.

This proposal would take 25,000,000 HOP held by the DAO and LP in a range from just above the current spot price to tick #6400 which equates to roughly a $6 HOP price. To provide context for the additional liquidity, this will add ~$250k of depth between the current spot price and $0.10 HOP. As the price of HOP increases, the dollar value of the depth increases as well (e.g. there is about 2x as much additional depth between $0.10 and $0.20, etc.). The current liquidity is fairly concentrated near the spot price; this proposal would greatly increase the longer tail of liquidity throughout this range. For the purpose of illustration, if the entire range were to be filled it would yield about $31m in ETH for the DAO. This proposal will not provide any liquidity for people to “dump” on at the current prices and only comes into range if the price of HOP increases. If we determine that the liquidity is having negative impacts on HOP or unintended consequences, we can pull the liquidity at any time (I assume this would require a subsequent vote).
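Since Univ3 positions are hard to model by hand, here is a minimal sketch (my own illustration, not part of the proposal) of the tick/price relationship involved; the pool’s token ordering, the 18-decimal assumption, and the ETH/USD rate are all assumptions, so the specific tick index for a given USD price will differ depending on those choices:

// Uniswap v3 defines price(tick) = 1.0001^tick, quoted as token1 per token0.
// Assumptions for illustration: HOP is token0, WETH is token1, both 18 decimals,
// and ETH trades at roughly $3,200.
const ETH_USD = 3200; // assumed, illustrative only

const tickToPrice = (tick: number): number => Math.pow(1.0001, tick); // WETH per HOP

const hopUsdAtTick = (tick: number): number => tickToPrice(tick) * ETH_USD;

// Scan a few ticks to see roughly where a ~$6 HOP price would sit under these assumptions.
for (const tick of [-70000, -65000, -60000]) {
  console.log(tick, hopUsdAtTick(tick).toFixed(2)); // ≈ $2.92, $4.81, $7.93
}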

I believe that this proposal is a positive step towards creating a more robust environment for HOP ahead of v2 while also providing a gentle means of diversification for the DAO. Please let me know if you have any comments, suggestions or concerns.

Voting Options

  • LP 25,000,000 HOP in specified range
  • No action
  • Abstain

12 posts - 6 participants

Read full topic

Automated 🤖 AUTOMATED: New Merkle Rewards Root Tue, 02 Apr 2024 16:00:00 +0000

Published: Apr 03, 2024


This is an automated post by the merkle rewards worker bot :robot:

A new merkle root has been published to GitHub:

Merkle root hash: 0xed1cb21099c17c1fd6e0b72240dd652b5811b74df3ca568aa8f8a98a7fb9daea
Merkle root total amount: 280084.479537088822848506 (280084479537088822848506)
Start timestamp: 1663898400 (2022-09-23T02:00:00.000+00:00)
End timestamp: 1712073600 (2024-04-02T16:00:00.000+00:00)
Rewards contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Rewards contract network: optimism

Instructions to verify merkle root:

docker pull hopprotocol/merkle-drop-framework
docker run --env-file docker.env -v ~/Downloads/op_merkle_rewards_data:/tmp/feesdb hopprotocol/merkle-drop-framework start:dist generate -- --network=mainnet --rewards-contract=0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73 --rewards-contract-network=optimism --start-timestamp=1663898400 --end-timestamp=1712073600

Supply RPC urls in docker.env:

ETHEREUM_RPC_URL=https://example.com
GNOSIS_RPC_URL=https://example.com
POLYGON_RPC_URL=https://example.com
OPTIMISM_RPC_URL=https://example.com
ARBITRUM_RPC_URL=https://example.com
BASE_RPC_URL=https://example.com
NOVA_RPC_URL=https://example.com
LINEA_RPC_URL=https://example.com
POLYGONZK_RPC_URL=https://example.com
USE_API_FOR_ON_CHAIN_DATA=true

Web app to publish root:

Contract information for multisig signers:

Contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Method: setMerkleRoot
Parameters:
_merkleRoot: 0xed1cb21099c17c1fd6e0b72240dd652b5811b74df3ca568aa8f8a98a7fb9daea
totalRewards: 280084479537088822848506

1 post - 1 participant

Read full topic

🗳Meta-Governance [RFC] Protocol Financial Stability - Part 1

Published: Mar 27, 2024


Useful Links

Summary

  • This RFC outlines the importance of prioritizing financial stability for HopDAO and introduces the first part of a three-part proposal aimed at effective treasury management.
  • The core of the Framework emphasizes the importance of a structured approach to managing the treasury to guarantee resilience and promote sustainable growth. This includes transparent operations, a clear financial plan, and strategies to uphold token stability.
  • Prior to the snapshot, active engagement with the community is essential to gather feedback and refine the proposed framework.

Intro

After discussing with delegates and reviewing poll results, along with understanding the DAO’s potential from a developer’s perspective, it’s clear that prioritizing Hop Financial Stability is crucial. Managing the treasury involves many aspects, from basic budgeting to dealing with complex issues like protocol-owned liquidity and spreading out investments.

Our plan is straightforward:

First, we’ll identify the main problems and decide what’s most important. We’ll use feedback from the community and look at the DAO’s current financial situation, especially with the upcoming V2 launch. This will help us figure out where to focus our efforts.

Next, we’ll create a simple plan for managing the treasury. We’ll outline basic rules and goals for how to handle our money. This will give us a clear path forward and make sure everyone knows what to do. This is the initial part and it will be covered in this post.

Then, we’ll look at how to manage the liquidity of our token. We’ll talk about why it’s important and come up with ideas to keep our token stable and attractive to investors. This will be the second part of our RFC.

Lastly, we’ll set up a plan for how to actually do all this. We’ll decide who’s in charge, what goals we want to meet, and how we’ll measure our success. This will be like a roadmap for the DAO to follow and it will be the third and the last part of our RFC.

With this plan, we’ll be ready to manage our treasury effectively and make sure Hop stays strong and successful.

Let’s work together to make it happen!

Problem Statement

Improving our financial stability is essential for the DAO’s success, just like it is for any other business. We need to consider all aspects of our finances, but we also need to prioritize what’s most important.

The Hop Protocol makes a lot of money, but not all of it goes into our HopDAO Treasury. We have a great community, and it’s important to listen to them to understand how they see the financial situation and what’s important to them when making investment decisions.

Recently, we did a survey to find out what the community thinks we should focus on. From the feedback we received from delegates, the survey, and my own experience, here are the things that HopDAO needs:

a) Hop requires a systematic and well-thought-out approach to managing its treasury to ensure the DAO’s financial resilience and create a platform for sustainable growth.

b) To achieve this, we need a Treasury Management Framework. This framework should include:

  • a transparent operational model with actions, accountable persons, and governance structure,
  • a well-defined financial plan that establishes a sustainable runway and liquidity targets, supplemented by proactive budgeting and regular reporting,
  • a HOP token liquidity management plan to ensure the protocol token’s stability and attractiveness for current and future investors.

These priorities form the foundation for the Treasury Management Framework that I’m about to introduce.

Updated Hop Treasury Management Framework

We have laid out the basic structure needed for a successful treasury management plan here.

Now, let’s refine and tailor it to fit the priorities of our DAO. As mentioned earlier, the key is to focus on setting clear principles and objectives from the start. Once we have this solid foundation, our governance process will become transparent and flexible enough to adjust to changes in the market and our financial needs over time.

TMF Initial Components

The structure of the content within the Treasury Management Framework is designed to provide a comprehensive approach to treasury management for Hop. It encompasses the following key components:

1. Treasury Management Principles

Following these principles helps us manage our DAO’s finances well. We focus on being open, inclusive, and responsible. These principles help us handle our funds in a decentralized way, aiming for transparency, reducing risks, and aligning with our goals.

  • Transparency: We believe in open communication and sharing relevant information about the DAO’s finances. Everyone should be able to see how funds are allocated and how investments are made. Transparency builds trust and ensures that everyone is on the same page. We commit to monthly financial reporting, accessible via the Hop Community Forum.
  • Simplicity: “Simplicity is the ultimate sophistication”. We believe in simplicity over complexity. Treasury management doesn’t have to be convoluted and confusing. We aim to streamline our processes, making them accessible and easy to understand for everyone involved. By keeping it simple, we reduce risks and avoid unnecessary complications.
  • Diversification: We encourage spreading risk by diversifying our investments. Instead of relying solely on HOP, we explore various opportunities across stablecoins, ETH, and WBTC, deployed across multiple DeFi protocols and strategies. Diversification keeps us balanced and safeguards against unexpected exploits.
  • Accountability and Decentralisation: We recognize the importance of having a dedicated individual or team responsible for driving decisions forward. This ensures that inertia is overcome and progress is made. While we value efficiency, we also advocate for a democratic process where the DAO collectively votes on fundamental guidelines. This strikes a balance between swift action and maintaining oversight to prevent reckless trading practices.
  • Risk Management: Our foremost priority is safeguarding our financial resources. We meticulously assess various risk metrics, from market-related risks to strategy-specific vulnerabilities, ensuring our treasury remains robust and sustainable.
  • Focus: Our treasury management decisions should align with our DAO’s objectives, focusing on ensuring the long-term sustainability of the DAO and its token. By keeping our objectives in sight, we make purposeful and impactful decisions.

2. Treasury Management Objectives

As framed in the problem statement section, the goal of the Treasury Management is to build:

…a systematic and well-thought-out approach […] to ensure the DAO’s financial resilience and create a platform for sustainable growth.

Translating this into five key objectives that inform our work on the framework:

  1. Meeting Operational Needs: One of the primary objectives of the DAO’s treasury management is to ensure that the DAO has sufficient cash and liquidity to meet its financial obligations and fund its day-to-day operations. This involves optimizing cash flows, managing working capital effectively, and maintaining appropriate levels of liquidity to mitigate the risk of cash shortages. The rest of the longer-term capital, i.e. funds not needed within the next 24-36 months for operational needs, should be used to take advantage of investment opportunities with different risk profiles (a simple runway sketch follows this list).
  2. Provide sustainable liquidity for the HOP token: Liquidity is a foundational element for the success and stability of any token-based project. It ensures that new investors can seamlessly enter the market while providing an exit path for those looking to divest. The HopDAO Treasury should be able to identify mid-term liquidity improvement strategies and propose long-term solutions to enhance protocol liquidity.
  3. Data-Driven Decision Making: By leveraging data, we aim to optimize investment choices, risk management strategies, and operational efficiency. This principle ensures that our DeFi strategies are grounded in solid analysis of on-chain and off-chain data, as well as state-of-the-art statistical concepts, enhancing the effectiveness of our treasury management.
  4. Overseeing Treasury-Related Risks: Treasury management aims to identify, assess, and manage the various financial risks faced by a DAO. These include protocol risk, liquidity risk, credit risk, market risk, and operational risk. The objective is to implement strategies and/or hedging techniques to minimize the impact of these risks on the DAO’s financial performance.
  5. Regulatory Considerations: We stay updated on DeFi regulations to adapt our strategies and partnerships accordingly. This helps us avoid legal risks and adjust our investments as needed. We stay resilient and flexible amid regulatory changes.
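To make Objective 1 concrete, here is a minimal sketch of the runway arithmetic it implies; the monthly expense and treasury figures are placeholders, not actual HopDAO numbers:

// Illustrative runway / liquidity-buffer arithmetic for Objective 1.
// All figures are placeholders, not HopDAO data.
const monthlyOpex = 50_000;        // assumed monthly operating expenses, in USD
const bufferMonths = 36;           // keep 24-36 months liquid; upper bound used here
const liquidTreasury = 2_500_000;  // assumed liquid (stablecoin/ETH) treasury value, in USD

const operationalReserve = monthlyOpex * bufferMonths;                       // capital kept liquid
const investableCapital = Math.max(0, liquidTreasury - operationalReserve);  // longer-term capital
const runwayMonths = liquidTreasury / monthlyOpex;                           // months of runway at current burn

console.log({ operationalReserve, investableCapital, runwayMonths }); // 1,800,000 / 700,000 / 50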

Once the DAO approves these initial components, we can move forward with implementing and executing the framework.

TMF Top Priority Components Overview and Next Steps

Now, we’re at a critical point where we need to discuss the practical execution of our objectives. The proposal will be divided into two main areas: Protocol Liquidity Management and Treasury Management execution. Here’s what you can expect in Part 2 and Part 3:

Part 2 - Protocol Owned Liquidity Management Overview:

  1. POL Research: Every decision made by the DAO should be rational and data-driven. I aim to provide the Hop community with an overview of common practices used to manage protocol liquidity. The goal is to assess various protocols and strategies, ranking them based on our specific needs.
  2. Protocol Liquidity Management Framework: In this section, I’ll introduce the principles I plan to use to achieve deep liquidity and price stability.

Part 3 - Treasury Management Mandate:

This final part of the RFC will be structured as a mandate, which will be submitted to the snapshot after gathering all necessary feedback from the community and delegates. Here are the topics we will discuss:

  1. Treasury Management Operational Model & Governance: This section outlines the central components of the framework, aligning all stakeholders and ensuring clarity in the treasury management process. It details procedures, policies, and tools for efficient cash flow management, risk mitigation, and decision-making within the DAO.
  2. Financial Management and Budgeting: Emphasizing cash flow forecasting and liquidity management, this section explores strategies to ensure adequate liquidity for operational needs. It aligns budgeting practices with the DAO’s financial goals.
  3. Treasury Asset Allocation: Efficient allocation of treasury funds is vital for maximizing returns while managing risk. This section covers the selection and allocation process for deploying the DAO’s financial resources, including diversification, rebalancing, and investment strategies.
  4. Reporting: Transparency is essential for the success of a DAO. This section focuses on reporting tools and mechanisms to ensure full transparency in treasury management, emphasizing timely and accurate reporting for stakeholders.
  5. KPIs, Deliverables, and Terms: This section outlines the specific terms, expectations, timeline, and compensation associated with the mandate.

Conclusion

By implementing the Treasury Management Framework, HopDAO can establish a robust and sustainable approach to managing its financial resources. This framework guides every aspect of treasury management, from principles to governance, empowering DAOs to make informed decisions and achieve their financial goals while maintaining transparency and mitigating risks.

I’m thrilled to be part of this journey and excited to contribute to HopDAO’s Treasury success. Your feedback on this proposal is invaluable, and I welcome any thoughts or suggestions you may have. Feel free to reach out to me directly through DMs for a more detailed conversation. Let’s work together to shape the future of HopDAO!

2 posts - 2 participants

Read full topic

🗳Meta-Governance October 2023 - March 2024 HIP 4 Delegate Compensation Reporting

Published: Mar 27, 2024


Hey Hop DAO,

Since [HIP-46] Renewal of Hop Delegate Incentivization Trial (Third Cycle) passed, it’s time to create a new Delegate Compensation Reporting Thread for this period (October 11, 2023 – March 27, 2024). The last delegate compensation reporting period ended October 10, 2023, so anything from then until March 27, 2024 falls within this reporting period.

To make matters easier for delegates who are eligible for incentives, the Head of DAO Operations will verify the voting and communication requirements for each delegate. Each delegate is still expected to share in this thread their voting and communication ratios, their lowest HOP delegated during the period, and their incentive rewards amount for the period based on the calculation. Please use the calculation from the recent snapshot vote to renew the delegate incentivization program. Please share your communication publicly in each proposal’s governance forum thread, or create your own voting and communication thread for the Head of DAO Operations to verify. Please include your Ethereum address as well.

Delegates can use this Dune query to find their lowest level of HOP within the time period.

Delegates can also use this graph to determine their compensation based on their lowest HOP for the specified period.

Delegates can go ahead and report below in this thread.

Below are the snapshot votes and Tally votes since the last reporting period.

Snapshot Votes Since Last Delegate Reporting Thread

  • [Temperature Check] Treasury Diversification & Protocol Owned Liquidity (multichain HOP/ETH LPs)
  • [HIP-41] Incentivize Hop AMMs on Supported and Upcoming Chains
  • Hop Community Moderator Compensation
  • [HIP-43] Proposal to create Head of DAO Operations
  • [HIP-44] Treasury Diversification for Ongoing DAO Expenses
  • [HIP-45] Head of DAO Operations Election
  • [HIP-46] Renewal of Hop Delegate Incentivization Trial (Third Cycle)

Tally Votes Since Last Delegate Reporting Thread

  • [HIP-39] Community Multisig Refill (4) on Feb 5th, 2024 was defeated
  • [HIP-40] Treasury Diversification & Protocol Owned Liquidity on Feb 5th 2024 passed
  • [HIP-39] Community Multisig Refill (5) on Feb 15th, 2024 passed
  • [HIP-44] Treasury Diversification for Ongoing DAO Expenses on Mar 4th 2024 passed

For example:
francom.eth
Voting: 11/11
Communication: 9/9 (HIP 40 & HIP 44 had to be voted on in snapshot and tally but you only have to communicate rationale once for each of these proposals).

lowest Hop during period: x
incentive rewards during this period: x

20 posts - 7 participants

Read full topic

🐰Hop Ecosystem [Request for Comment] Launch Hop on NEO X (NEO EVM) Mainnet

Published: Mar 18, 2024


Launch Hop on NEO X (NEO EVM) Mainnet

Point of Contact: Tony Sun

Proposal summary

We propose that the Hop community deploy the Hop Bridge protocol to the NEO Ethereum Virtual Machine rollup known as “NEO X” on behalf of the community.

We believe this is the right moment for Hop to deploy on NEO X, for several major reasons:

· NEO X is a new zk-rollup that provides Ethereum Virtual Machine (EVM) equivalence (opcode-level compatibility) for a transparent user experience and compatibility with the existing NEO ecosystem and tooling. Additionally, the speed of its proofs allows for near-instant native bridging of funds (rather than waiting seven days).

· An Ethereum L2 scalability solution utilizing cryptographic zero-knowledge technology to provide validation and fast finality of off-chain transaction computations.

· A new set of tools and technologies was created and engineered to recreate all EVM opcodes, enabling transparent deployment of, and transactions with, existing Ethereum smart contracts.

· NEO X is aligned with NEO and its values.

· Hop is already deployed on Polygon PoS with good success

· Hop can gain market share through an early-mover advantage

NEO X mainnet will launch between May and June. Our aim is to have Hop as one of our early-stage bridge partners, as we view Hop as a crucial product for users bridging their assets across various chains.

About NEO X

Neo was founded in 2014 and has grown into a first-class smart contract platform. NEO is one of the most feature-complete L1 blockchain platforms for building decentralized applications.

Neo X is an EVM-compatible sidechain incorporating Neo’s distinctive dBFT consensus mechanism. Serving as a bridge between Neo N3 and the widely adopted EVM network, we expect Neo X to significantly expand the Neo ecosystem and provide developers with broader pathways for innovation.

In this pre-alpha version of the TestNet, we have aligned the Engine and dBFT interfaces. The main features are as follows:

dBFT consensus engine support has been added to Ethereum nodes. Geth Ethereum node implementation is taken as a basis.

A set of pre-configured standby validators act as dBFT consensus nodes. All the advantages, features, and mechanics of dBFT consensus are precisely preserved.

The Ethereum P2P protocol is extended with dBFT-specific consensus messages.

Invasive modifications to the existing Ethereum block structure are avoided as much as possible to stay compatible with existing Ethereum ecosystem tools. The MixHash block field is reused to store NextConsensus information, and the Nonce field is reused to store the dBFT Primary index.

The multisignature scheme used in Neo N3 is adapted to the existing Ethereum block structure, so that Neo X consensus nodes can add an M-of-N signature to the block and properly verify signature correctness. The Extra block field is reused for this purpose.

Secp256k1 signatures are used for block signing.

The multi-node dBFT consensus mechanism, enveloped transactions, and a seamless native bridge connecting Neo X with Neo N3 will be introduced in subsequent versions.

*Please be aware that this pre-alpha version of NeoX is in the active development phase, meaning that ALL data will be cleared in future updates.

Motivation

There’s significant value in Hop being available on EVM chains. Deploying early on NEO X helps solidify Hop’s place as a leading bridge and a thought leader.

Additionally, given the community and user uptake Hop has seen on Polygon PoS, it’s only natural to make its deployment on NEO X a priority.

Partner Details

Neo Global Development

This proposal is being made by Tony Sun, an employee of Neo Global Development. Neo Global Development is a legal entity focused on ecosystem growth and maintenance of the NEO suite.

Partner Legal

The legal entity that is supporting this proposal is Neo Global Development Ltd, a British Virgin Islands corporation known as “NGD”.

Delegate Sponsor

There is no delegate co-authoring or sponsoring this proposal. Instead, this is a proposal submitted by Tony Sun of NGD to support the growth of NEO as part of the overall NEO community.

Conflict of Interest Declaration

There are no existing financial or contractual relationships between NGD and any of Hop’s legal entities, the HOP token, or investments in Hop.

What potential risks are there for this project’s success? How could they be mitigated?

Deploying on NEO X should pose minimal risks relative to deploying on alternate blockchains. As an Ethereum Layer Two, it uses zero-knowledge proofs to inherit NEO’s core safety while allowing developers to easily deploy existing EVM codebases. The bridge has been disintermediated, and Hop can expect reputable oracle providers to be available as data providers from day one. NEO X’s EVM testnet has been running for the past two months. Additionally, the deployment has been audited multiple times, by auditors including Red4Sec. Welcome to NEO!

1 post - 1 participant

Read full topic

Automated 🤖 AUTOMATED: New Merkle Rewards Root Tue, 05 Mar 2024 16:00:00 +0000

Published: Mar 06, 2024


This is an automated post by the merkle rewards worker bot :robot:

A new merkle root has been published to GitHub:

Merkle root hash: 0x04ee319bdff2f925dacd5b7b5e7e565de0ca4319c31ee30eb77f1cc0b8b7a1d8
Merkle root total amount: 275744.119537088822848506 (275744119537088822848506)
Start timestamp: 1663898400 (2022-09-23T02:00:00.000+00:00)
End timestamp: 1709654400 (2024-03-05T16:00:00.000+00:00)
Rewards contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Rewards contract network: optimism

Instructions to verify merkle root:

docker pull hopprotocol/merkle-drop-framework
docker run --env-file docker.env -v ~/Downloads/op_merkle_rewards_data:/tmp/feesdb hopprotocol/merkle-drop-framework start:dist generate -- --network=mainnet --rewards-contract=0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73 --rewards-contract-network=optimism --start-timestamp=1663898400 --end-timestamp=1709654400

Supply RPC urls in docker.env:

ETHEREUM_RPC_URL=https://example.com
GNOSIS_RPC_URL=https://example.com
POLYGON_RPC_URL=https://example.com
OPTIMISM_RPC_URL=https://example.com
ARBITRUM_RPC_URL=https://example.com
BASE_RPC_URL=https://example.com
NOVA_RPC_URL=https://example.com
LINEA_RPC_URL=https://example.com
POLYGONZK_RPC_URL=https://example.com
USE_API_FOR_ON_CHAIN_DATA=true

Web app to publish root:

Contract information for multisig signers:

Contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Method: setMerkleRoot
Parameters:
_merkleRoot: 0x04ee319bdff2f925dacd5b7b5e7e565de0ca4319c31ee30eb77f1cc0b8b7a1d8
totalRewards: 275744119537088822848506

1 post - 1 participant

Read full topic

🗳Meta-Governance [RFC] Treasury Diversification for Ongoing Expenses

Published: Feb 15, 2024


This RFC is made with the same goals as the original [RFC] Treasury Diversification. Updated parameters are below.

Summary

Sell 25% of Hop DAO’s ARB holdings (209,251 ARB) for USDC. This should raise approximately $440k for Hop DAO at current prices (~$2.10).

Motivation

Hop DAO will need stablecoins to cover ongoing expenses. The DAO’s current and anticipated ongoing expenses are:

Execution

The onchain execution of the proposal will send a message through the Arbitrum messenger to trigger a transfer of the ARB currently in the Hop Treasury alias address to the Community Multisig. The Community Multisig can then complete the sale in a series of transactions.
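As a rough illustration of the final leg of that flow, the sketch below encodes the ERC-20 transfer that the cross-chain message would ultimately execute on Arbitrum; the token and multisig addresses are placeholders, and the surrounding governance payload (the Arbitrum messenger call and alias handling) is not reproduced here:

// Sketch only: the ARB transfer from the treasury alias to the Community Multisig
// that the cross-chain message would trigger on Arbitrum. Addresses are placeholders.
// Uses ethers v6.
import { Interface, parseUnits } from "ethers";

const ARB_TOKEN = "0x0000000000000000000000000000000000000000";          // placeholder
const COMMUNITY_MULTISIG = "0x0000000000000000000000000000000000000000"; // placeholder

const erc20 = new Interface(["function transfer(address to, uint256 amount)"]);
const calldata = erc20.encodeFunctionData("transfer", [
  COMMUNITY_MULTISIG,
  parseUnits("209251", 18), // 25% of the DAO's ARB holdings, per the summary
]);

console.log({ to: ARB_TOKEN, data: calldata });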

Voting Options

  • Sell 25% of Hop DAO’s ARB holdings for USDC
  • No action
  • Abstain

11 posts - 9 participants

Read full topic

🗳Meta-Governance Nominations for Head of DAO Ops Election

Published: Feb 13, 2024


The DAO has voted to create a Head of DAO Ops role, with 2.5 million HOP tokens voting in favor. The Head of DAO Operations will connect different aspects of the DAO and make sure there is cross-collaboration and communication to propagate personal responsibilities for the DAO’s subgroups. To qualify for this role, one must have been materially active in the DAO for at least six months (having attended community calls, posted on the forum, used the Hop bridge, and held HOP).

If you would like to take on this role, please share your nomination below with a short excerpt on who you are and why you would make a great Head of DAO Ops.

For the Head of DAO Ops discussion thread

For the Snapshot Vote

6 posts - 4 participants

Read full topic

Automated 🤖 AUTOMATED: New Merkle Rewards Root Tue, 06 Feb 2024 16:00:00 +0000

Published: Feb 07, 2024


This is an automated post by the merkle rewards worker bot :robot:

A new merkle root has been published to GitHub:

Merkle root hash: 0x66029da6eccca2631353cb14a4bc878d774c0ca304104df2720f41bd304e6de8
Merkle root total amount: 270774.119537088822848506 (270774119537088822848506)
Start timestamp: 1663898400 (2022-09-23T02:00:00.000+00:00)
End timestamp: 1707235200 (2024-02-06T16:00:00.000+00:00)
Rewards contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Rewards contract network: optimism

Instructions to verify merkle root:

docker pull hopprotocol/merkle-drop-framework
docker run --env-file docker.env -v ~/Downloads/op_merkle_rewards_data:/tmp/feesdb hopprotocol/merkle-drop-framework start:dist generate -- --network=mainnet --rewards-contract=0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73 --rewards-contract-network=optimism --start-timestamp=1663898400 --end-timestamp=1707235200

Supply RPC urls in docker.env:

ETHEREUM_RPC_URL=https://example.com
GNOSIS_RPC_URL=https://example.com
POLYGON_RPC_URL=https://example.com
OPTIMISM_RPC_URL=https://example.com
ARBITRUM_RPC_URL=https://example.com
BASE_RPC_URL=https://example.com
NOVA_RPC_URL=https://example.com
LINEA_RPC_URL=https://example.com
POLYGONZK_RPC_URL=https://example.com
USE_API_FOR_ON_CHAIN_DATA=true

Web app to publish root:

Contract information for multisig signers:

Contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Method: setMerkleRoot
Parameters:
_merkleRoot: 0x66029da6eccca2631353cb14a4bc878d774c0ca304104df2720f41bd304e6de8
totalRewards: 270774119537088822848506

1 post - 1 participant

Read full topic

🗳Meta-Governance RFC: Renewal of the Hop Delegate Incentivization Trial (3rd Period)

Published: Jan 27, 2024


References

Original Delegate Incentivisation Trial

Delegate Amendment

Renewal of hop delegate incentivization trial

Simple Summary

Over the past year, Hop DAO has trialed compensating delegates for actively participating in Hop DAO governance. This is a proposal to extend the Hop Delegate Incentivization Program for a period of twelve months.

Motivation

Through the delegate incentivisation trial over the past year, delegates have been incentivised to actively participate in Hop DAO Governance by taking part in discussions on the forum, voting on proposals and ensuring to share their rationale for voting a particular way. This program has created a healthy culture of governance-related engagement in the Hop DAO.

Renewing this Hop Delegate Incentivization trial will ensure that the quality of Hop DAO does not decline.

This Incentivisation Program will retain the delegate talent that the Hop DAO has attracted; its continued existence will allow future delegates to join the Hop DAO and allocate resources to improving the Hop Protocol.

How did the program work?

Delegates are required to vote on proposals and communicate their rationale for voting in their delegate thread on the Hop DAO Forum.

Under the current Delegate Compensation Program, delegates are compensated using a formula where delegate incentives increase with the amount of HOP delegated, but at a decreasing rate. This means that a small delegate will receive a greater incentive per unit of voting weight than a large delegate, but a larger delegate will still receive more in total.

Delegates use this Dune query to ascertain their lowest level of HOP for that month and this visualization graph to ascertain the incentives they are due according to their lowest amount of HOP for that month.

Finally, delegates would self-report under a thread dedicated to reporting delegate eligibility each month. The following information would be required of a delegate.

Vote participation percent = (Number of proposals voted upon ÷ all proposals) * 100

Communication participation percentage = (Number of proposals referenced with voting position and reasoning for that position ÷ all proposals) * 100

Lowest Amount of Hop for the period

Incentives to be paid out.
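A minimal sketch of the two participation figures each delegate self-reports, using the formulas above (the numbers are illustrative, not any delegate’s actual record):

// Illustrative self-report calculation; values are made up.
const proposalsVotedOn = 11;
const totalProposals = 11;
const proposalsWithRationale = 9;
const proposalsRequiringRationale = 9;

const votePct = (proposalsVotedOn / totalProposals) * 100;                              // 100
const communicationPct = (proposalsWithRationale / proposalsRequiringRationale) * 100;  // 100

console.log({ votePct, communicationPct });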

Previous Changes to the Program

Following the feedback provided on the original trial, the following changes were implemented in the previous renewal proposal.
A new formula is used to calculate incentives, where:
I = Incentives to be received
h = lowest level of HOP delegated that month
M = Multiplier based on consecutive participation periods

To calculate the multiplier (M), we can use the following formula:

M = 1 + (0.1 * P)

Where:
P = Number of consecutive completed 6-month participation periods (capped at a certain value, e.g., 5)

This multiplier starts at 1 for new delegates and increases by 0.1 for each consecutive completed 6-month participation period, capped at a certain value (e.g., 1.5 after five periods).
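A minimal sketch of the multiplier calculation described above; the post does not reproduce the base incentive formula itself, so only M is computed here, with the 1.5 cap (five periods) taken from the example given:

// Consecutive-participation multiplier: M = 1 + 0.1 * P, capped (e.g. at 5 periods → 1.5).
const multiplier = (consecutivePeriods: number, capPeriods = 5): number =>
  1 + 0.1 * Math.min(consecutivePeriods, capPeriods);

console.log(multiplier(0)); // 1.0 - new delegate
console.log(multiplier(3)); // 1.3
console.log(multiplier(7)); // 1.5 - capped after five consecutive periods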

The participation rate requirement was removed; therefore, new delegates who reach the 90,000 threshold can join as delegates at any time.

The incentive formula would be amended to include a Multiplier based on consecutive participation periods.

The primary import of these changes is that new delegates can join Hop Governance at any time, while old delegates are incentivized to continue participating.

There is no opt-in or opt-out mechanism. Delegates will have to self-report to receive compensation.

Specification

This proposal requests that the Hop Delegate Incentivization Program be renewed for another twelve months; the program will retain all the guidelines and procedures laid down by the original proposal, its subsequent amendment, and the amendment in the previous renewal.

Next Steps.

If this proposal passes, the Hop DAO Delegate Incentivization Program will continue for another twelve months.

25 posts - 10 participants

Read full topic

🗳Meta-Governance [RFC] V2 Open questions (financial perspective)

Published: Jan 24, 2024


Related discussion(s)

Rationale

We have a V2 launch coming up soon, and we need to ensure that we have all financial aspects covered. The market could be in our favor, but as we discussed before, we should be proactive. Here is my list of open questions for the upcoming V2 launch from financial, ops, and growth perspectives. Feel free to add your questions; the aim is to keep this as a record of things we should consider.

Open questions:

  • Protocol-Owned-Liquidity
    • Single-sided vs. bonding vs. PALM vs. Tokemak V2 etc.
  • HOP Valuation
    • What would be the parameters of the floor price, etc.? ( For valuation I would suggest a competitors analysis combined with a secondary market sentiment analysis)
  • Treasury
    • Circulation, payments, principles, diversification (Will create another Temp Check following this post to gather feedback)
  • Risk management
    • Liquidity risk, eg. stablecoin reserves, HOP volatility etc.
    • Market risk
    • Governance risks, eg. Governance attacks
  • Growth
    • Do we plan to have grant programs or any other incentives?
  • Token Utility
    • How do we ensure we have enough intrinsic value to hold and use HOP?
    • What would be the tokenomics?

Next Steps

  1. Define a rough timeline for the V2 launch so we can prepare accordingly
  2. POL research: @fourpoops is rocking with his proposal. I would love to double down on it and create a comparison between different options as part of a larger Risk Management initiative (plan to post it this week)
  3. I strongly encourage the community to start discussions on Growth and Token Utility
  4. Treasury and risk management setup
  5. Fast-track the Head of DAO Ops nomination. I think this volume of work shouldn’t be on the devs’ shoulders but rather on one person who can handle these aspects

I have created a working file where I am already preparing the research on all topics covered here. Feel free to comment and participate!

1 post - 1 participant

Read full topic

🗳Meta-Governance [RFC] Head of DAO Governance and Operations Role

Published: Jan 11, 2024


DAOs can be chaotic due to their unstructured nature, but there are DAO service providers who create governance frameworks and push along day-to-day operations for the benefit of the respective DAOs. Hop DAO has had help from several groups since its inception. GFX Labs was the first to help with DAO operations given their extensive experience in governance and ops. When they exited the DAO, StableLab took on the role of helping the DAO with operations and governance initiatives. Unfortunately, StableLab has decided to refocus their resources elsewhere during this prolonged bear market and has recently exited the DAO.

While it is hard to retain talent during this extended bear market… the show must go on. With that in mind, I propose creating a new role titled Head of DAO Operations, where an individual is responsible for the day-to-day operations, acting as a liaison between the different participants and subgroups of the DAO, such as the core developer team, grants committee, ambassadors, delegates, multisig signers, and more.

The Head of DAO Operations is not to be construed as a central figure of authority but more like the glue that connects different aspects of the DAO, makes sure there is cross-collaboration and communication, and helps propagate personal responsibilities for each of the DAO’s subgroups. It is imperative for the Head of DAO Ops to be fully aligned. Therefore, to qualify for the role one must have been materially active in the DAO for at least six months (having attended community calls, posted on the forum, used the Hop bridge, and held HOP).

The Head of DAO Ops will be in charge of community calls, the governance forum (pushing proposals from start to finish with the respective author), and pushing along the grants committee, ambassador program and multisig signers.

Additional responsibilities:
⁃ evaluating and defining compensation for existing and new committees and their members
⁃ Assigning budgets to committees
⁃ Verifying the data posted each month for the delegate compensation thread
⁃ Providing an overview of DAO ops at a regular cadence
⁃ Oversee and manage the transition from old committee members to new committee members when appropriate
⁃ More rapidly iterate on HIP amendments when they are needed
⁃ Help reform the grants committee and handle some of the ops side
⁃ 30 hours a month (1.5xday)
⁃ If a good faith effort to accomplish the tasks set forth as the Head of DAO ops is not made, the DAO will not pay the compensation.

This role should go through a 6-month initial term to make sure DAO operations continue to run smoothly in the short term while preparing for a long-term solution regarding ongoing operations. Since this role requires constant participation and time commitment, I believe compensation for this role should be $3k/month with a 1-year vesting period.

Compensation to be made in HOP token. Vesting starts the day after the election when the role officially begins and the work is to commence. Payment to be made retroactively every 3-months.

22 posts - 11 participants

Read full topic

🗳Meta-Governance [RFC] Hop Community Moderator Compensation

Published: Jan 10, 2024

View in forum →Remove

Authors: Chris Whinfrey, Rxpwnz

This proposal will provide retroactive compensation to Rxpwnz and the rest of the moderator team, who have all provided essential support for the past year and a half. It will also establish a Lead Community Moderator role and set up an ongoing moderator compensation program to continue rewarding these contributors.

Background

Since Hop’s inception over a year ago, the role of Community Moderator has been entirely voluntary. This arrangement served the DAO well and attracted moderators driven by genuine enthusiasm for the protocol rather than financial incentives. However, it is now crucial to establish a structured framework for rewarding, managing, and training these contributors.

The Hop Community Moderators watch the Hop Discord and forum 24/7 to mitigate spam, scammers, technical difficulties, and much more. This includes manually banning approximately 5,000 Discord scammer or spam profiles, fielding an endless stream of questions from the community, and keeping the appropriate stakeholders informed to ensure technical issues are resolved quickly. They’ve played an essential role in building Hop into the trusted platform it is today.

Rxpwnz has taken on a leadership role on the moderator team and regularly goes above and beyond basic moderator responsibilities. He regularly delves into technical and operational security topics including assisting with stuck transactions and reporting phishing websites to Metamask, Google and DNS providers leading to their rapid deactivation. He continually contributes ideas to enhance various aspects of Discord operations. This includes integrating Discord bots and streamlining channels to ensure they remain clean and topic-related. During Arbitrum Bridge Week, he even provided users with his own personal funds to help cover gas fees when needed.

Retrospective Rewards for Dedicated Moderators

Our community moderators have been actively contributing for over 18 months, and it’s time to recognize and reward their invaluable efforts in driving Hop’s growth.

Rxpwnz has been consistently engaged in a wide range of responsibilities, which include Discord and Forum moderation, managing Hop Guild, enhancing operational security (OpSec) by identifying and taking down over 150 phishing sites impersonating Hop, contributing to business development discussions regarding potential partnerships, engaging with developers interested in collaborating with Hop, and offering troubleshooting and improvement assistance. We propose a compensation rate of $3,000/month, both for his active contributions and in retrospective recognition starting June 9th, 2022. This compensation will be divided in a 50:50 ratio between $USDC and $HOP, with the latter being linearly vested over a 1-year period.

Our other moderators, while relatively less active, have been instrumental in moderation tasks and effectively conveying information about issues to key stakeholders. To acknowledge their important contributions, we propose one-time bonuses as follows:

Nauzystan: $4,000
Cai: $2,000
Hossein: $1,000
Abruzy: $1,000

Compensation for these bonuses will also be distributed in a 50:50 ratio between $USDC and $HOP, with the latter subject to linear vesting over 1 year.

The amount of $HOP distributed for retroactive payments should be calculated using the time-weighted average price from the launch of Hop DAO to the date of this proposal passing. Partial months should be prorated appropriately.

Establishing a Lead Community Moderator Role

This proposal establishes a Lead Community Moderator role initially filled by Rxpwnz. A new Lead Community Moderator can be appointed via subsequent governance proposal or by election should there be multiple interested individuals.

Compensation system

In addition to the ongoing compensation for the Lead Community Moderator mentioned above, this proposal establishes a discretionary budget to be used by the lead moderator to reward volunteer moderators from the community. Each month, approximately $2,000 worth of $HOP tokens will be made available to the Lead Community Moderator to distribute to the moderator team when appropriate. The lead moderator will conduct monthly assessments, guided by considerations of individual involvement and performance (activity and delivered Key Performance Indicators) to determine compensation. Any amounts distributed and their justification will be reported by the Lead Community Moderator to the Hop community in either written form or on the regular community calls. In months where the lead chooses to distribute only part or none of the budget, the $HOP will remain in the Hop Community Multisig and does not accrue toward future months.

8 posts - 8 participants

Read full topic

Automated 🤖 AUTOMATED: New Merkle Rewards Root Tue, 09 Jan 2024 16:00:00 +0000

Published: Jan 10, 2024


This is an automated post by the merkle rewards worker bot :robot:

A new merkle root has been published to GitHub:

Merkle root hash: 0x92fc91c612c407ea903737b27d0288b46c87b1b64b4c455bd129eea87f9230f5
Merkle root total amount: 268277.949537088822848506 (268277949537088822848506)
Start timestamp: 1663898400 (2022-09-23T02:00:00.000+00:00)
End timestamp: 1704816000 (2024-01-09T16:00:00.000+00:00)
Rewards contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Rewards contract network: optimism

Instructions to verify merkle root:

docker pull hopprotocol/merkle-drop-framework
docker run --env-file docker.env -v ~/Downloads/op_merkle_rewards_data:/tmp/feesdb hopprotocol/merkle-drop-framework start:dist generate -- --network=mainnet --rewards-contract=0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73 --rewards-contract-network=optimism --start-timestamp=1663898400 --end-timestamp=1704816000

Supply RPC urls in docker.env:

ETHEREUM_RPC_URL=https://example.com
GNOSIS_RPC_URL=https://example.com
POLYGON_RPC_URL=https://example.com
OPTIMISM_RPC_URL=https://example.com
ARBITRUM_RPC_URL=https://example.com
BASE_RPC_URL=https://example.com
NOVA_RPC_URL=https://example.com
LINEA_RPC_URL=https://example.com
USE_API_FOR_ON_CHAIN_DATA=true

Web app to publish root:

Contract information for multisig signers:

Contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Method: setMerkleRoot
Parameters:
_merkleRoot: 0x92fc91c612c407ea903737b27d0288b46c87b1b64b4c455bd129eea87f9230f5
totalRewards: 268277949537088822848506
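For multisig signers who want to sanity-check the transaction before signing, a minimal sketch of reconstructing the expected calldata is shown below. It assumes the Solidity signature is setMerkleRoot(bytes32,uint256), inferred from the parameters above; verify against the contract ABI before relying on it.

from web3 import Web3
from eth_abi import encode

# Rebuild the calldata the multisig transaction should contain and compare it
# with what the signing UI displays.
root = bytes.fromhex("92fc91c612c407ea903737b27d0288b46c87b1b64b4c455bd129eea87f9230f5")
total_rewards = 268277949537088822848506

selector = Web3.keccak(text="setMerkleRoot(bytes32,uint256)")[:4]
calldata = selector + encode(["bytes32", "uint256"], [root, total_rewards])
print("0x" + calldata.hex())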

1 post - 1 participant

Read full topic

💬General Discussions Request: Feedback on which forum features matter most

Published: Dec 27, 2023

View in forum →Remove

I am a UX researcher looking into the tooling that DAOs use for operations and decision making. Specifically, looking into what DAOs love about Discourse (the forum of choice) and what could be improved. :thinking:

I have created a small discord group here to help me collect information from any volunteers who would be happy to help. Questions will usually be quick and short, and if we do any long-form interviews, we will incentivize them in appreciation of the feedback.

The end goal is to provide better tooling to DAOs, but this can only be achieved by talking to them. If the Hop community is willing to help, please feel free to jump in and say hello.

There is only one channel there #general, since we want to keep comms simple.

Thank you and Much Appreciated!

2 posts - 1 participant

Read full topic

🗳Meta-Governance Is an anon mode helpful for Hop?

Published: Dec 20, 2023

View in forum →Remove

I have been researching how DAOs make decisions and I am curious about anonymity. Since this feature is available on Discourse and is easy to enable:

  1. Have we considered enabling it here?
  2. If yes, why? If not, why not?

My thesis is that optional anonymity would allow users to communicate candidly without needing to create a throwaway account, and would love your feedback on this.

1 post - 1 participant

Read full topic

Automated 🤖 AUTOMATED: New Merkle Rewards Root Tue, 12 Dec 2023 16:00:00 +0000

Published: Dec 13, 2023

View in forum →Remove

This is an automated post by the merkle rewards worker bot :robot:

A new merkle root has been published to GitHub:

Merkle root hash: 0x9d0e96490a215cf5b44c2d19b1057574e3c04d9f39f31eedab41e9c2166cc974
Merkle root total amount: 264296.449537088822848506 (264296449537088822848506)
Start timestamp: 1663898400 (2022-09-23T02:00:00.000+00:00)
End timestamp: 1702396800 (2023-12-12T16:00:00.000+00:00)
Rewards contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Rewards contract network: optimism

Instructions to verify merkle root:

docker pull hopprotocol/merkle-drop-framework
docker run --env-file docker.env -v ~/Downloads/op_merkle_rewards_data:/tmp/feesdb hopprotocol/merkle-drop-framework start:dist generate -- --network=mainnet --rewards-contract=0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73 --rewards-contract-network=optimism --start-timestamp=1663898400 --end-timestamp=1702396800

Supply RPC urls in docker.env:

ETHEREUM_RPC_URL=https://example.com
GNOSIS_RPC_URL=https://example.com
POLYGON_RPC_URL=https://example.com
OPTIMISM_RPC_URL=https://example.com
ARBITRUM_RPC_URL=https://example.com
BASE_RPC_URL=https://example.com
NOVA_RPC_URL=https://example.com
LINEA_RPC_URL=https://example.com
USE_API_FOR_ON_CHAIN_DATA=true

Web app to publish root:

Contract information for multisig signers:

Contract address: 0x45269F59aA76bB491D0Fc4c26F468D8E1EE26b73
Method: setMerkleRoot
Parameters:
_merkleRoot: 0x9d0e96490a215cf5b44c2d19b1057574e3c04d9f39f31eedab41e9c2166cc974
totalRewards: 264296449537088822848506

1 post - 1 participant

Read full topic

🗳Meta-Governance L2BEAT Delegate Communication Thread

Published: Dec 13, 2023

View in forum →Remove

Intro

Firstly, allow us to introduce ourselves for anyone who isn’t already familiar with L2BEAT.

L2BEAT is an independent, public goods company that acts as an impartial watchdog for the Ethereum Layer 2 ecosystem. Our mission is to provide comprehensive and unbiased analysis and comparative evaluations of Layer 2 solutions. We are committed to the verification and fact-checking of the claims made by each project, with a special focus on the security aspects. What sets L2BEAT apart is our unwavering commitment to delivering accurate and reliable information.

In addition, L2BEAT has a governance team (@Kaereste and @Sinkas) which actively participates in constructive discussions of specific protocol challenges and issues, fostering the discourse toward increasingly permissionless, open source, and trustless systems. Our participation in various DAOs and public debates reflects this commitment.

For more information on L2BEAT and our participation in Hop’s Governance, please refer to our delegate profile on Tally.

Delegate Communication Thread

To promote transparency and communication as delegates, we’ll be regularly updating the below thread with our actions in the governance of Hop. Our updates will include how we voted for different proposals and our rationale.

Update #1

Voting

[Snapshot] (HIP-37) Authereum Labs Engagement Adjustment - Voted FOR

We voted in favour of adjusting the compensation to be received by Authereum Labs since they were able to provide their services with a smaller team than expected and, by extension, at a lower cost.

[Snapshot] (HIP-38) Treasury Diversification - Voted FOR

Even though we believe there should be a more holistic approach to treasury diversification, we understand the risks associated with being overly exposed to a single asset and as such we voted in favour of the proposal.

[Tally] (HIP-38) Treasury Diversification - Voted FOR

We also voted in favour of the proposal during the subsequent on-chain vote.

[Snapshot] (HIP-39) Community Multisig Refill - Voted FOR

The community multisig should always have at least a couple of months’ worth of expenses and as such we voted in favour of the proposal to refill it.

[Tally] (HIP-39) Community Multisig Refill - Voted FOR

We also voted in favour of the proposal during the subsequent on-chain vote.

Discussions

Hop Grants Committee Renewal and Redesign

We participated in the discussion around the renewal and redesign of the Hop Grants Committee and invited the community to our office hours (which happen every Friday at 4pm UTC/11am EST) to discuss it further.

L2BEAT’s Hop Office Hours

To further our communication with our constituents and any interested party in the community, we’ll be hosting recurring Office Hours on Google Meets.

The office hours will be held every Friday at 4pm UTC/ 11am EST

During the Office Hours, you will be able to reach L2BEAT’s governance team, which consists of Kaereste (Krzysztof Urbanski) and Sinkas (Anastassis Oikonomopoulos) and discuss our activity as delegates.

The purpose of the office hours is to gather feedback from the people who have delegated to us, answer any questions regarding our voting activities and rationale, and collect input on topics you’d like us to engage with in discussions.

You can add the L2BEAT Governance Calendar to your Google Calendar to find the respective Google Meets links for every call and to easily keep track of the Office Hours, as well as other important calls and events (e.g. voting deadlines) relevant to Hop that the L2BEAT governance team will be attending or hosting.

2 posts - 1 participant

Read full topic

🐰Hop Ecosystem Copra Loans - Protocol Managed Liquidity

Published: Dec 07, 2023

View in forum →Remove

Hi everyone,

Summary
Copra Finance wishes to partner with Hop Protocol in issuing a protocol loan. The loan would not be used for token liquidity, but to finance protocol activities, offering an additional source of revenue, as proof of the reliability of Hop’s smart contracts, and to prove that Hop can consistently generate revenue to repay the loan interest.

Introduction
Copra Finance helps defi protocols set up loans to help finance their growth and remain competitive. Our high-level goal is to create a secondary market for ‘internet corporate bonds’, comprised of defi protocol loans. We aim to help defi protocols shore up liquidity without distributing native token incentives, although we can help facilitate OTC-style token deals after loans reach maturity. To reiterate, our liquidity deals are not to support your native token, but rather to offer lenders a safe way to finance your growth, with the option to buy some of your native tokens at maturity should they want to.

For our lenders, who are typically tradfi institutions or whales, our bonds are a fixed-income product denominated in ETH, BTC, or stablecoins. Many defi protocols rely on verbal deals with whales, whose reliability cannot be assured. Copra loans can provide a safer, fairer and more flexible arrangement for both parties.

We are currently developing an Arbitrum Opportunity Vault for a collection of small/medium-sized protocols that wish to finance their growth; we aim to support them by providing reliable liquidity. We have also applied for an Arbitrum grant, and if successful will be able to support our borrowers with additional offerings from the grant.

All aspects of our loans are facilitated on-chain through our smart contracts, including a credit account that manages funds, a revenue escrow account that receives future income, and a token warrant account for managing native token deals. Funds can only be deployed from our smart contract credit account to whitelisted pools.

How a Copra Loan Could Benefit Hop

  1. For borrowers such as Hop, a loan would help to boost TVL and overcome any liquidity bottlenecks. In the case of Hop, loan liquidity would be used to support your bridge pools.
  2. Seeing as Hop already has considerable TVL, a further benefit is the ability to generate additional income. Any yield generated by the loan funds above the preset revenue escrow ceiling will be returned to Hop at maturity.
  3. Facilitating deals with whales or other large lenders such as protocols, to provide Hop with rearrangeable, fixed-duration liquidity. In turn, these lenders get a share of protocol revenue to satisfy the loan interest and are paid in the same currency as the loan principal.
  4. Seeing one of our bonds through to maturity can help give confidence to future liquidity providers. We intend to generate the equivalent of a credit score to be used as proof of your reliability. Being able to repay the loan with interest will also prove Hop’s ability to consistently generate revenue. This may also allow you to assess the validity of your v2 fee structure in supporting Dao activities.
  5. Finally, you will have the opportunity to sell some of your HOP tokens to lenders at maturity at a predetermined price. This can support your protocol token’s current valuation, and the lenders, having been repaid the loan principal with interest, will not be under pressure to sell.

From looking at your liquidity pools it is clear that the majority of Hop’s liquidity is in the form of USDC and ETH (over $2m and $8m respectively), so these would likely be the currency of choice for such a loan, and therefore the currency of the loan interest.

Your ETH pool for example contains many whales and also protocols. Copra’s smart contracts could help to facilitate fixed duration and predictable liquidity deals with any of these lenders. The lenders would receive fixed income from your protocol revenue, and you would receive their liquidity at a fixed interest rate that could be repurposed or rearranged as most beneficial for the protocol. Deals could be made with protocols such as Beefy to enhance their pool yield offerings in exchange for predictable liquidity.

Loan Terms
The ‘risk-free’ yield of defi comes from staked ETH; naturally, defi protocol bonds must offer a higher yield due to the risk premium. We currently offer 14% APR to our lenders but are open to negotiation.
However, we are also open to discussing loan size and duration, as well as servicing any other liquidity needs.

Bottom Line
The bottom line is that Copra’s trustless loans can be used to support Hop in whatever ways are needed. The deployed liquidity is rearrangeable and reliable due to its fixed duration. The entire process is automated by smart contracts, and the funds remain under the custody of our credit account.

We are currently developing our product and forming bonds with our first few clients. We have detailed protocol docs, which can be shared on request. Any questions and feedback are welcomed, and we hope to be able to form a long-lasting partnership with Hop.

3 posts - 1 participant

Read full topic

🐰Hop Ecosystem [Request for Comment] Incentivize Hop AMMs on Supported and Upcoming Chains

Published: Dec 05, 2023

View in forum →Remove

Summary

Hop V1 is on track to be deployed on a number of new chains in the Ethereum ecosystem that have already gone through Hop governance. This includes Linea, Polygon zkEVM, Scroll, zkSync, and PGN.

It is likely that the ETH and HOP bridges will be deployed on each of these chains. Given that, the ETH bridges would benefit from having AMM incentives applied to them to help provide a seamless Hop experience. The HOP bridges do not use AMMs and will not be considered for these incentives.

This proposal suggests allocating HOP to the ETH AMMs of the new chains to be supported by Hop. If accepted, there will not be a new post/vote for the addition of incentives on the new chains since this is meant to account for all of them.

This post is meant to open up the discussion about the addition of incentives on these chains and their respective amounts.

Proposed Amounts

This proposal seeks to set up an incentive program for the ETH AMM on Linea, Polygon zkEVM, Scroll, zkSync, and PGN. The incentive values below were calculated by taking each chain’s TVL as a ratio of Optimism’s TVL and multiplying that ratio by the HOP incentives on the Optimism ETH bridge. A multiplier of 3x was added to account for the potential growth of these chains, given that they have been live for far shorter than the reference chain, Optimism. The exact formula used is:

HOP Incentive Amount = (Chain TVL / Optimism TVL) * Optimism ETH Bridge HOP Incentives * 3
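For illustration, the snippet below reproduces that formula in Python using the TVL figures quoted in the table that follows (small rounding differences from the table are expected):

# Scale Optimism's ETH-bridge incentives by each chain's TVL share, times a 3x
# growth multiplier, per the formula above.
OPTIMISM_TVL_M = 4_130.00
OPTIMISM_INCENTIVES = 346_800  # HOP / month on the Optimism ETH bridge

def monthly_hop_incentive(chain_tvl_m: float, multiplier: float = 3.0) -> float:
    return (chain_tvl_m / OPTIMISM_TVL_M) * OPTIMISM_INCENTIVES * multiplier

for chain, tvl_m in [("Linea", 175.00), ("Polygon zkEVM", 114.00),
                     ("Scroll", 42.90), ("zkSync", 536.00), ("PGN", 1.53)]:
    print(f"{chain}: {monthly_hop_incentive(tvl_m):,.0f} HOP / month")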

Using the TVL data from L2Beat on December 5th, 2023 and applying that calculation to each chain, we get:

Chain | TVL (Million) | HOP Incentives / Month
Optimism | $4,130.00 | 346,800
Linea | $175.00 | 44,085
Polygon zkEVM* | $114.00 | 28,718
Scroll | $42.90 | 10,807
zkSync | $536.00 | 135,025
PGN** | $1.53 | 385

*The original Polygon zkEVM proposal asked to reallocate 15% of the current Polygon PoS chain’s incentives to the Polygon zkEVM chain. This will be applied automatically, as it has already been voted on, but is not counted in the numbers from this proposal. With this additional 15%, the number of HOP incentives distributed on Polygon zkEVM becomes 60,968 per month.

**It is expected that the TVL of PGN will drastically increase prior to Hop support so that the effort/time/cost of supporting the chain makes sense for the entire Hop ecosystem. The support of the chain must also pass a Snapshot vote.

These values should give the AMM on each network the ability to build up liquidity for an awesome Hop experience. Summing all the new incentives results in 219,021 additional HOP distributed per month, or 2,628,252 per year (0.263% of the total supply). However, it is expected that (1) some of these chains will not be live for a few weeks or months, and (2) when they are live, these incentives will only run for a few short distribution cycles until the release of V2.

It is important to note that Hop V2 is coming soon. The system is set up in such a way that direct incentives will be far less important to the health of the system and thus this current incentive system will not apply for Hop V2. These proposed Hop V1 incentives will only be in place until V2 is fully live, meaning that these incentives will only be distributed a small handful of times.

Timeline/Next Steps

  • This RFC will last a few days in order to gather any appropriate discussion or feedback.
  • After discussion and feedback, a delegate will post this proposal to Snapshot.
  • If the Snapshot vote passes, AMM incentives for the ETH bridge on these chains will be automatically applied when the chain is supported.

13 posts - 12 participants

Read full topic

💬General Discussions Ending engagement with Hop

Published: Nov 23, 2023

View in forum →Remove

I am resigning from all positions I hold and will no longer be working with Hop. Thank you to the many friends I made over the last year.

1 post - 1 participant

Read full topic

🐰Hop Ecosystem [RFC] Treasury Diversification & Protocol Owned Liquidity (multichain HOP/ETH LPs)

Published: Nov 20, 2023

View in forum →Remove

Goals and Motivation
In the interest of bolstering the financial health and autonomy of the Hop Protocol, this proposal presents an initiative to further diversify the Hop DAO treasury holdings while establishing the beginnings of significant protocol-owned liquidity (POL). This move aims to mitigate the current skew between HOP’s circulating market cap and its fully diluted value, which stands as an impediment to stable price discovery and large-scale, value-aligned investment in HOP from non-private parties.

HOP’s circulating supply is far too skewed relative to its fully diluted supply (roughly $4M circulating market cap against a $40M fully diluted valuation, FDV). This skew makes it difficult for proper price discovery to occur and even more difficult for anyone to make a large, opinionated allocation to HOP at a stable valuation in the open market. Moreover, as the HOP token plays an increasingly important role in the protocol, POL increases accessibility for the token while generating fees for the DAO. This proposal brings idle HOP into the market, makes it productive, increases market liquidity, and retains DAO ownership.

Generally, the amounts are small and ensure a continued reserve of HOP in the treasury for future developmental use cases such as grants, incentives, and other value-creating opportunities.

Proposal Details
This proposal would seek to:

  1. Allocate 1,500,000 HOP ($57,000 @ $.038) and 57,000 USDC from the treasury for distribution to Mainnet, Optimism, and Arbitrum. The total value is ~$114,000 ($57,000 + $57,000).

  2. Sell 57,000 USDC for ETH

  3. Bridge HOP and ETH to each chain in the predetermined proportions (see calculations further down in proposal):

    • Mainnet: 49.33%
    • Optimism: 30.04%
    • Arbitrum: 20.75%
  4. Deposit HOP/ETH liquidity into DEXes

    • Mainnet: Uniswap v3 0.3% tier (full range?)
    • Optimism: Velodrome v2
    • Arbitrum: Camelot v2 (potentially v3?)
  5. Lastly, deposit veNFT into compounding Relay strategy created for us by Velodrome (including this in this proposal, because it’s DEX related and fairly low stakes)

Each chain/dex’s proportion of the 1,500,000 HOP is decided as a blended percentage of each DEX’s HOP liquidity & HOP 24 hr volume, and each chain’s percentage of Hop protocol’s bridging volumes (from volume.hop.exchange). Current HOP/ETH liquidity across chains is ~$350,000, so this proposal would increase the outstanding liquidity by ~33%. You can see the breakdown of current liquidity, volume, and bridge volume here in order to calculate the blended percentages:


link to sheet
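For a quick sense of what those percentages imply per chain, here is a minimal sketch using only the figures stated in this proposal (illustrative arithmetic, not the execution plan):

# Split the proposed 1,500,000 HOP and 57,000 USDC (to be sold for ETH) across
# chains using the blended percentages above.
TOTAL_HOP = 1_500_000
TOTAL_USDC_FOR_ETH = 57_000
SPLITS = {"Mainnet": 0.4933, "Optimism": 0.3004, "Arbitrum": 0.2075}

for chain, share in SPLITS.items():
    print(f"{chain}: {TOTAL_HOP * share:,.0f} HOP paired with "
          f"~${TOTAL_USDC_FOR_ETH * share:,.0f} of ETH")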

These 3 chains are lower hanging fruit because we already have multisig capabilities and they’re the top 3 by liquidity and bridging profiles by quite a large margin. In the future we can and should look into Base, Polygon, Gnosis and other chains that Hop Protocol supports.

Notably, this proposal requires zero expenditures or ongoing incentives from the DAO. It simply pairs idle HOP in the treasury with a small amount of idle USDC (to sell for ETH).

25 posts - 10 participants

Read full topic

🐰Hop Ecosystem Optimism Retrospective Quarterly Report Thread

Published: Nov 16, 2023

View in forum →Remove

Hop Protocol’s Governance Power (Changes)

Q1
Hop began with approximately 200K OP provided to us by the Optimism Foundation as part of an experimental Protocol Delegation Program that has since been discontinued. Because of the time the DAO needed to launch elections, the ambassadors were unable to reach the required vote threshold to qualify for the next season. I appealed this decision, with more information below.

200K OP > 0 OP

Q2
0 OP

Number of votes done by the ambassador (including any missed vote)

Q1
3 (100% participation during the period of having any voting power)

Q2
N/A

Proposals written by the ambassador

Q1 → Q2
After learning that Hop, through no fault of the ambassadors, would be unable to participate in the following season of governance, I immediately appealed to the Foundation admin to explain our situation. I followed up with her in direct messages as well to continue to understand what the best next steps would be.

The path forward was to investigate other means of achieving voting power. One of the ways explored was using the OP granted to Hop as onboarding rewards. At this time, this OP was understood to be held in a community multisig which, by definition, the Hop DAO should hold complete control over.

https://snapshot.org/#/hop.eth/proposal/0xc10a71e1254cd883e31b35cc59819201cc9ae66bec8d479cabbb2327aa4ad99a

If successful, this program would have not only given Hop 5x more voting power, it would soon have been our only voting power. Unfortunately, after the comment period and during the live vote, a member of the community highlighted a new rule, slightly predating the ambassador program, disallowing OP received through a grant from being used in governance. This situation was quickly rectified in a subsequent proposal I drafted, which clarified that the address containing the onboarding funds is not to be considered the community multisig, as earmarked funds should be segregated from others.

https://snapshot.org/#/hop.eth/proposal/0x11cc2f01417b75e257b4029f19f06102b70e126a2ce7cfda272ad24f485e73da

Even without this, I continued to use the platforms I had to advocate for Hop, particularly through Delegation Week which I played a role in organizing and presenting in.

https://x.com/m0xandrew/status/1660799933467832325?s=20

Progress of business development opportunities

Q1 → Q2
These problems were communicated to the Hop DAO through community calls and on the Discord, as was the decision to focus more on information gathering and building connections and relationships. While permitted, I also participated in grant proposal votes. The connections, relationships, and broader participation have proven to be valuable to the Hop Labs team for staying up-to-date with some of the latest research around important technical aspects of bridging like shared sequencing and intra-Superchain bridging.

Following the conversation with an Optimism developer, and a number of conversations and outreach steps with various Optimism builders’ Telegram channels, I was able to find an active community focused on these problems and quickly relayed any relevant developments to the appropriate people at Hop Labs. This included, but was not limited to, research related to the unique properties and capabilities rollups gain upon sharing sequencers, and grants being successfully awarded for this work.

Thinking beyond the interests of tokenholders exclusively and interpreting the ambassador mandate as serving Hop and Optimism users and individuals in the community more broadly, I also raised concerns around the structure of Optimism governance and suggested some solutions to ensure ongoing alignment.

Furthermore, Franco and I attended a number of community calls, sharing our input and advocating for Hop as we familiarized ourselves with the Optimism culture, practices, and operating procedures. This was instrumental for forming direct relationships with members of the community. This has enabled me to create a direct path to many of the leaders in Optimism through mediums like Twitter and Telegram and leverage that as a tool for furthering Hop’s goals and values. On top of this, I not only attended, but also frequently presented on nearly all of the Hop community calls during the past 6 months and have made myself available through Discord and the forum during this time as well.

Compensation

Full Compensation: 155239.327296 HOP

$500 * 6 months / .0386 HOP (price as of 11:45 ET Oct 29, 6 months after program launch) = 77619.6636481 // Optimism Compensation
77619.6636481 * 2 (including work done on behalf of Hop DAO at Arbitrum) = 155239.327296
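For readers checking the arithmetic, a small sketch is below. Note the post rounds the HOP price to $.0386, while the quoted 77,619.66 figure implies a slightly more precise price (roughly $0.03865), so recomputing from the rounded price differs by about 0.1%.

# Reproduce the stated compensation calculation.
MONTHLY_USD = 500
MONTHS = 6
HOP_PRICE_USD = 0.0386   # rounded price quoted in the post

optimism_comp = MONTHLY_USD * MONTHS / HOP_PRICE_USD
total_comp = optimism_comp * 2   # doubled for parallel work on behalf of Arbitrum
print(f"Optimism portion: {optimism_comp:,.2f} HOP, total: {total_comp:,.2f} HOP")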

2 posts - 2 participants

Read full topic

Core (2)
Ethereum Magicians
ERCs ERC-7726: Common quote oracle

Published: Jun 20, 2024

View in forum →Remove

For a while now we have been toiling on what is now a minimal standard for oracle value feeds.

Draft EIP

Some oracle implementations from the community: GitHub - alcueca/awesome-oracles: Common Oracle Specification and Adaptors
Some oracle implementations from Euler: GitHub - euler-xyz/euler-price-oracle: Euler Price Oracles, a library of immutable oracle adapters and components

This all probably started when I wrote this article about using value conversions instead of prices in smart contracts:

In very short, this is a minimal standard with a single function, from the spec:

getQuote

Returns the value of baseAmount of base in quote terms.

MUST round down towards 0.

MUST revert with OracleUnsupportedPair if not capable to provide data for the specified base and quote pair.

MUST revert with OracleUntrustedData if not capable to provide data within a degree of confidence publicly specified.

- name: getQuote
  type: function
  stateMutability: view

  inputs:
    - name: baseAmount
      type: uint256
    - name: base
      type: address
    - name: quote
      type: address

  outputs:
    - name: quoteAmount
      type: uint256

There is a bit more info on the spec, including the lack of a priceOf function.
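As a consumer-side illustration, here is a minimal sketch of calling getQuote from Python with web3.py. The oracle and token addresses are placeholders, not real deployments, and the ABI below is hand-written from the spec excerpt above.

from web3 import Web3

GET_QUOTE_ABI = [{
    "name": "getQuote", "type": "function", "stateMutability": "view",
    "inputs": [{"name": "baseAmount", "type": "uint256"},
               {"name": "base", "type": "address"},
               {"name": "quote", "type": "address"}],
    "outputs": [{"name": "quoteAmount", "type": "uint256"}],
}]

w3 = Web3(Web3.HTTPProvider("https://example.com"))  # any JSON-RPC endpoint
oracle = w3.eth.contract(address="0x0000000000000000000000000000000000000001",
                         abi=GET_QUOTE_ABI)  # placeholder oracle address

base = "0x0000000000000000000000000000000000000002"   # placeholder base token
quote = "0x0000000000000000000000000000000000000003"  # placeholder quote token

# "What is 1e18 units of base worth in quote terms?" Per the spec this reverts
# with OracleUnsupportedPair or OracleUntrustedData if the oracle cannot answer.
quote_amount = oracle.functions.getQuote(10**18, base, quote).call()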

Although due to process the discussion should take place here, where it is public and searchable, we also have a telegram group that you are welcome to join: Telegram: Join Group Chat

1 post - 1 participant

Read full topic

EIPs Template for `discussion-to` threads

Published: Jun 19, 2024

View in forum →Remove

As announced here, I’d like to propose a default template we could suggest to EIP authors for their discussion-to threads on EthMagicians.

The template should be auto-populated when people create a new post in the EIPs category. We could potentially store this template in the ethereum/eips repo, too.

Here’s a first draft of what it could look like:


discussion-to Template

It is recommended to use the following template when creating a discussion-to thread for your EIP on ethereum-magicians.org.

Update Log

This section should list significant updates to the EIP as the specification evolves. The first entry should be the PR to create the EIP. The recommended format for log entries is:

For example, using EIP-1:

  • 2024-06-05: Enable external links to Chain Agnostic Improvement Proposals (CAIPs), commit 32dc740.

External Reviews

This section should list notable reviews the EIP has received from the Ethereum community. These can include specific comments on this forum, timestamped audio/video exchanges, formal audits, or other external resources. This section should be the go-to for readers to understand the community’s current assessment of the EIP. Aim for neutrality, quality & thoroughness over “cherry-picking” the most favorable reviews.

The recommended format for entries is:

For example, using EIP-1559, one entry could be:

  • 2020-12-01: “An Economic Analysis of EIP-1559”, by Tim Roughgarden, full report

Outstanding Issues

This section should highlight outstanding issues about the EIP, and, if possible, link to forums where these are being addressed. This section should allow readers to quickly understand what the most important TODOs for the EIP are, and how to best contribute. Once issues are resolved, they should be checked off with a note giving context on the resolution.

The recommended format for new entries is:

Once issues are addressed, these become:

For example, using EIP-3675, one entry could be:

  • 2021-07-08: Repurpose the DIFFICULTY opcode, tracking issue
    • 2021-10-30: Introduce EIP-4399, EIP PR

2 posts - 2 participants

Read full topic

EIPs Custom data access model

Published: Jun 19, 2024

View in forum →Remove

The custom data access model uses Solidity’s delegatecall mechanism to obtain a contract’s data read permissions. Corresponding reading logic can be developed in any third-party contract to obtain the desired data form. This model can save gas when multiple reads of a contract’s storage are required to derive the final data form. It can even embed the required data-processing logic directly into the agent contract, which is equivalent to native execution: data access and computation without making external calls.

1 post - 1 participant

Read full topic

EIPs EIP for EVM Native Bundles

Published: Jun 17, 2024

View in forum →Remove

Discussion thread for: [will link EIP once cleaned up & merged]

Today, all sequencing logic for a mainnet block is controlled by the single winner of the JIT PBS block auction. This is problematic as sequencing, the choice of who gets to alter what piece of state in what order, influences value flow. The goal of this EIP is to give transactions and smart contracts more control over how they are sequenced through explicit delegation of local sequencing rights.

Technical Summary

This EIP aims to enable more fine-grained and multi-party block building by introducing two new EIP-2718 transaction types and one new opcode. These new additions would provide:

  • The ability for transactions to delegate their local sequencing to a specified external party.
  • The ability for an external party to build ‘bundles’ of transactions that are run in order in a block.
  • The ability for smart contracts to see who placed a transaction in a bundle, if it was included in one.

One of the EIP-2718 transactions would extend normal transactions to include two new fields: bundle_signer and an optional block_number. The bundle_signer would be the entity delegated local sequencing rights for the transaction, and the block_number, if non-zero, would be the block number in which the transaction is valid.

The other EIP-2718 transaction would be a meta-transaction whose only function is to order transactions that delegated sequencing rights to the signer of the transaction. This meta-transaction could only sequence transactions that delegated to it and could also delegate itself to another external party and specify a block number. This transaction would not start an execution context for itself.

The opcode, potentially named BUNDLE_SIGNER, would expose the most immediate external party who put the transaction into a bundle if present.
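To make the delegation relationship concrete, here is a conceptual sketch of the two proposed transaction types as plain data structures. This is not the EIP's actual encoding; field names follow the summary above.

from dataclasses import dataclass, field

@dataclass
class DelegatedTx:
    # A normal transaction extended with the two new fields described above.
    bundle_signer: str        # party delegated local sequencing rights
    block_number: int = 0     # if non-zero, the only block this tx is valid in
    payload: bytes = b""      # the usual transaction fields, elided here

@dataclass
class BundleTx:
    # Meta-transaction that only orders transactions delegated to its signer;
    # it starts no execution context of its own.
    signer: str
    ordered_txs: list = field(default_factory=list)
    bundle_signer: str = ""   # a bundle may itself delegate onward
    block_number: int = 0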

Other relevant pieces of technical information:

  • Unlike searcher PBS bundles, there is no revert protection provided to the sequenced transactions. This is to enable this EVM change to work with all types of EVM block builders, including ones that do not do simulations.
  • If a transaction in the meta-transaction’s bundle is invalid, the bundle signer is charged for the invalid transaction as if it were just CALLDATA bytes. This is for DOS protection and due to the inability for a bundle creator to control all state that it is building on.
  • If a transaction specifies a bundle_signer, it must be included in a bundle signed by the signer to be valid. This is to prevent competition between the total block builder and the delegated bundle creators.

Example Use Cases

Together, these new forms of expression would enable:

  • Smart contracts to auction off the right to be the first entity to operate on a piece of state, such as:
    • AMMs auctioning off the first swap to lessen LVR. (Example using PBS searcher bundles).
    • Oracles auctioning off the rights to be the first transaction post update to cover posting costs.
    • Lending protocols auctioning off the right of liquidation of hard-to-price collateral.
  • Smart contracts to order transaction operations for user benefit, such as:
    • Lending protocols placing user liquidity adds before liquidations to reduce bad debt creation.
    • AMMs preventing sandwiching.
  • Front-ends and wallets to explicitly sell their order flow to mini-builders who do not have to win the entire block.

Unanswered Questions:

  • Difficulty of verifying the new opcode for zk-evms.
  • How this composes with account abstraction efforts.

Feedback Wanted!

  • Is this plan technically infeasible for any reason?
  • Are you interested in this?
  • Is there a different design which could enable a similar result?

This EIP is an attempt at a technically more coherent version of an idea expressed in this Eth Research post.

3 posts - 2 participants

Read full topic

Working Groups ePBS breakout room #3

Published: Jun 15, 2024

View in forum →Remove

Agenda

ePBS Breakout room #3 · Issue #1067 · ethereum/pm · GitHub

Moderator:

Notes

Additional info

1 post - 1 participant

Read full topic

Working Groups All Core Devs - Execution (ACDE) call #190

Published: Jun 15, 2024

View in forum →Remove

Agenda

Execution Layer Meeting 190 · Issue #1066 · ethereum/pm · GitHub

Moderator: @timbeiko

Summary

Recap by @timbeiko:

(from Eth R&D Discord with minor edits for user names and links)

Recording

Additional info

Notes by @timbeiko: Tweet thread
Notes by @Christine_dkim: Ethereum All Core Developers Execution Call #190 Writeup | Galaxy

1 post - 1 participant

Read full topic

Working Groups Verkle implementers call #19
Primordial Soup F-star name for Consensus Layer upgrade after Electra

Published: Jun 13, 2024

View in forum →Remove

An F-star name is needed for the Consensus Layer upgrade after Electra.
This assumes a cross-layer upgrade combined with :elephant: Osaka, with Verkle as the main feature.

The consensus layer uses star names for upgrades whilst the execution layer uses Devcon cities.

See: Post-Merge Network Upgrade Naming Schemes

Recent upgrade names:

  • :owl: Shapella (Shanghai + Capella)
  • :blowfish: Dencun (Cancun + Deneb)
  • :european_castle: Pectra (Prague + Electra)

@protolambda has already suggested Fosa (Felis :cat2: + Osaka)

@hwwang advised that Fulu & Felis received support in the ACDC zoom chat (yet another reason why ACD chats need to be exported) and offered ChatGPT-generated proposals

Assuming a cross layer upgrade, any F-star name will be combined with the execution layer name Osaka in a portmanteau (Portmanteau generator), so any name choice should also consider this.

https://portmanteaur.com/?words=Fulu+Osaka
Fulu Osaka
fusaka, fosaka, fuka, faka, fulusaka, fulosaka, fuluka, fulaka, fula, fulka, fsaka, fka, fulsaka, fuaka, fua, fuluaka, fulua, fuosaka
Osaka Fulu
olu, osalu, osulu, osu, osakalu, osakulu, osaku, oslu, osaklu, osfulu, osakfulu, osau, osakau, oulu, osaulu, ofulu, osafulu

https://portmanteaur.com/?words=Felis+Osaka
Felis Osaka
fesaka, fosaka, feka, faka, felisaka, felosaka, felika, felaka, fela, felka, fsaka, fka, feliska, felsaka, felissaka, feaka, fea, feliaka, felia, feosaka, felisa
Osaka Felis
olis, osalis, oselis, osas, osis, osakalis, osakelis, osakas, osakis, oslis, oss, osaklis, osfelis, osaks, osakfelis, ois, osais, osakais, oelis, osaelis, ofelis, osafelis

(Apologies if I am stepping on the process for selecting star upgrade names as I’m not a core dev)


F Star names

From: List of proper names of stars - Wikipedia


Poll

Poll is for signaling purposes only.

Click to view the poll.

2 posts - 2 participants

Read full topic

Working Groups All Core Devs - Consensus (ACDC) call #135

Published: Jun 13, 2024

View in forum →Remove

Agenda

Consensus-layer Call 135 · Issue #1069 · ethereum/pm · GitHub

Moderator: @ralexstokes

Summary

ACDC 135 summary by @ralexstokes:

We began with some announcements:

Then discussed Electra devnet-1 from the CL side:

  • Agreed to merge this PR into the devnet-1 specs, which refactors the attestation layout following EIP-7549 and has implications for the SSZ merkelization of this type
  • Client teams agreed to target v1.5.0-alpha.3 of the consensus specs for devnet-1. Be on the lookout for the release soon™!
  • Wrapped this segment with some rough timelines on devnet-1 readiness; most clients seemed to think a couple of weeks after the specs release is all that would be needed to get an implementation ready, which means we can keep Pectra moving along :tada:

And then moved to PeerDAS work:

  • Started by calling out this PR to move PeerDAS to formal inclusion in Pectra to reflect client intent, even if PeerDAS is developed separately from the “core” Pectra set.
  • Next discussed how to proceed on PeerDAS development given that we want to work in parallel to the other Pectra work; PeerDAS implementers agreed on PeerDAS breakout call #1 to implement PeerDAS on top of Deneb for the time being, to minimize thrash with Pectra changes as the core EIP set is still in the process of stabilizing. The intent is to rebase PeerDAS on top of the Pectra changes once it is clearer that the other Pectra EIPs have stabilized, ideally over the next few Pectra devnets.

We then turned to raising the blob count in Pectra:

  • The intent is to raise the blob count to provide an increase in Ethereum’s data throughput in the upcoming hard fork.
  • However, there are a few complications:
    • Preliminary analysis shows turbulence at the current blob count.
    • PeerDAS handles blob data differently than today’s EIP-4844 mechanism, which unlocks further scale but makes it hard to compare the blob count today to a blob count under Pectra.
  • In light of these facts, there are a variety of opinions on how to raise the blob count in Pectra, including increasing the blob count even without PeerDAS, increasing the blob count with PeerDAS, or simply deploying PeerDAS and leaving the blob count alone. Analysis of the future Pectra devnets should give us more confidence in the right approach; check the call for the full nuance here.
  • We also covered a proposal to uncouple the blob count, which currently is set independently on the EL and CL (yet has to match); there is an initial PR here that has the CL drive the blob count but discussion on the call raised a few more questions to do this safely. We also discussed how to handle the change in the blob base fee when the blob count changes. In short, expect some possible changes in how blob accounting is carried out in Pectra.

We concluded with a great update on the SSZ-ification of the protocol (expect to see a devnet by the next CL call), and a call-out to determine the name of the “F-star” for the next CL fork to accompany Osaka on the EL.

From Eth R&D Discord

Recording

Transcript

[To be added]

Additional info

Notes by @Christine_dkim: Ethereum All Core Developers Consensus Call #135 Writeup | Galaxy
F-star name discussion: F-star name for Consensus Layer upgrade after Electra

1 post - 1 participant

Read full topic

EIPs EIP-7723: Network Upgrade Inclusion Stages
Uncategorized Why is ERC-1400 not listed on eips.ethereum.org?

Published: Jun 12, 2024

View in forum →Remove

I was wondering why the ERC-1400 family (ERC-1410, ERC-1594, ERC-1644, ERC-1643) is not listed on eips.ethereum.org. Has it gone stagnant and been withdrawn?

2 posts - 2 participants

Read full topic

Working Groups PeerDAS breakout #1
Uncategorized Yellowpaper correction

Published: Jun 10, 2024

View in forum →Remove

Hey everyone, I made this PR in the yellowpaper repository with some small corrections. If anyone has thoughts on it, I would like to know.

1 post - 1 participant

Read full topic

ERCs ERC-7721: Lockable Extension for ERC1155

Published: Jun 09, 2024

View in forum →Remove

This feature enables a multiverse of NFT liquidity options like peer-to-peer escrow-less rentals, loans, buy now pay later, staking, etc. Inspired by ERC7066 locking on ERC721, this proposal enables locking on ERC1155.

Addressing the need for enhanced security and control over tokenized assets, this extension enables token owners to lock individual NFTs by tokenId, ensuring that only approved users can withdraw predetermined amounts of locked tokens. This offers a safer approach by allowing token owners to approve specific token IDs and withdrawal amounts (setApprovalForId), alongside the default setApprovalForAll function.
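A conceptual model of the locking and per-id approval flow is sketched below; the function names follow the post, but the exact ERC-7721 semantics are an assumption here, not the specification.

from collections import defaultdict

locked = defaultdict(int)     # (owner, token_id) -> locked amount
approved = defaultdict(int)   # (owner, operator, token_id) -> withdrawable amount

def lock(owner: str, token_id: int, amount: int) -> None:
    locked[(owner, token_id)] += amount

def set_approval_for_id(owner: str, operator: str, token_id: int, amount: int) -> None:
    approved[(owner, operator, token_id)] = amount

def withdraw_locked(operator: str, owner: str, token_id: int, amount: int) -> None:
    assert approved[(owner, operator, token_id)] >= amount, "not approved for amount"
    assert locked[(owner, token_id)] >= amount, "insufficient locked balance"
    approved[(owner, operator, token_id)] -= amount
    locked[(owner, token_id)] -= amount
    # the real extension would transfer the ERC-1155 tokens to the operator here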

EIP
ERC Pull Request

Please let us know your thoughts on this proposal and if our motivation seems useful! :crossed_fingers:

1 post - 1 participant

Read full topic

ERCs ERC-7722: Opaque Token

Published: Jun 09, 2024

View in forum →Remove

Dear Ethereum Magicians, I would like to discuss this ERC draft with you. We have successfully applied this mechanism in our own solutions and now aim to formalize it into an ERC and share it with the community. I am looking forward to your feedback!

Pull Request
ERCs/ERCS/erc-7722.md


eip: 7722
title: Opaque Token
description: A token specification designed to enhance privacy by concealing balance information.
author: Ivica Aračić (@ivica7), SWIAT
status: Draft
type: Standards Track
category: ERC
created: 2024-06-09

Abstract

This ERC proposes a specification for an opaque token that enhances privacy by concealing balance information. Privacy is achieved by representing balances as off-chain data encapsulated in hashes, referred to as “baskets”. These baskets can be reorganized, transferred, and managed through token functions on-chain.
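For intuition, here is a minimal sketch of the commitment idea: the chain stores only a hash while the owner keeps the preimage off-chain. The preimage layout (amount plus a random salt) is an assumption for illustration, not the ERC-7722 encoding.

import os
from eth_abi import encode
from eth_utils import keccak

def make_basket(amount_wei: int) -> tuple[bytes, bytes]:
    """Create an off-chain (amount, salt) record and its on-chain commitment."""
    salt = os.urandom(32)
    commitment = keccak(encode(["uint256", "bytes32"], [amount_wei, salt]))
    return commitment, salt

def verify_basket(commitment: bytes, amount_wei: int, salt: bytes) -> bool:
    """Check a revealed (amount, salt) pair against a stored commitment."""
    return keccak(encode(["uint256", "bytes32"], [amount_wei, salt])) == commitment

commitment, salt = make_basket(1_000 * 10**18)
assert verify_basket(commitment, 1_000 * 10**18, salt)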

14 posts - 6 participants

Read full topic

ERCs ERC-7720: Deferred Token Transfer

Published: Jun 08, 2024

View in forum →Remove

Abstract

The standard enables users to deposit ERC-20 tokens that can be withdrawn by a specified beneficiary at a future timestamp. Each deposit is assigned a unique ID and includes details such as the beneficiary, token type, amount, timestamp, and withdrawal status.

Motivation

Deferred payments are needed in various scenarios, such as vesting schedules, escrow services, or timed rewards. By providing a secure and reliable mechanism for time-locked token transfers, this contract ensures that tokens are transferred only after a specified timestamp is reached. This facilitates structured and delayed payments, adding an extra layer of security and predictability to token transfers. This mechanism is particularly useful for situations where payments need to be conditional on the passage of time.
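A conceptual model of the deposit lifecycle described above is sketched below (plain Python, not the ERC's Solidity interface; names are illustrative):

import time
from dataclasses import dataclass

@dataclass
class Deposit:
    beneficiary: str
    token: str
    amount: int
    unlock_timestamp: int
    withdrawn: bool = False

deposits: dict = {}
next_id = 0

def deposit(beneficiary: str, token: str, amount: int, unlock_timestamp: int) -> int:
    """Record a deposit and return its unique id."""
    global next_id
    deposits[next_id] = Deposit(beneficiary, token, amount, unlock_timestamp)
    next_id += 1
    return next_id - 1

def withdraw(deposit_id: int, caller: str) -> int:
    """Release the tokens once the timestamp has passed, exactly once."""
    d = deposits[deposit_id]
    assert caller == d.beneficiary, "only the beneficiary may withdraw"
    assert time.time() >= d.unlock_timestamp, "still locked"
    assert not d.withdrawn, "already withdrawn"
    d.withdrawn = True
    return d.amount  # the real contract would transfer the ERC-20 here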

1 post - 1 participant

Read full topic

Working Groups ePBS breakout room #2
EIPs EIP-7719: P2P History Network

Published: Jun 07, 2024

View in forum →Remove

Draft*

This EIP formalizes the usage of Portal’s History network for inclusion in Ethereum. The network lookups are based on block hash.

Link to EIP PR

Link to formal spec:

1 post - 1 participant

Read full topic

EIPs EIP-7718: Portal Wire Protocol a framework for discv5

Published: Jun 06, 2024

View in forum →Remove

This post is a draft, but the core subject matter is likely to remain unchanged.

Discv5 (Node Discovery Protocol v5) is a protocol used by the consensus layer, and soon to be used by the execution layer, to find nodes on the network. Discv5 is an extensible protocol that allows building new protocols on top of it via TalkRequest, a Discv5 message type.

This EIP proposes a framework on top of Discv5 called the Portal Wire Protocol: a generic framework for building new DHT networks, referred to as Overlay Networks. These Overlay Networks inherit the performance optimizations of the base Portal Wire implementation while also accelerating the development of new Overlay Networks. Each Overlay Network maintains its own Kademlia DHT routing table.

EIP Draft can be found here: Add EIP: Portal Wire Protocol a framework for discv5 by KolbyML · Pull Request #8629 · ethereum/EIPs · GitHub

More information can be found in this specification

1 post - 1 participant

Read full topic

Wallets Test accounts need clear warnings

Published: Jun 06, 2024

View in forum →Remove

Lots of us use accounts with well-known keys for testing, such as the test test test ... junk mnemonic, which generates the 0xf39Fd…, 0x70997…, 0x3C44C… etc. series of addresses.

TL;DR these test accounts should have warnings in account lists & when signing using them

However, these are rarely if ever clearly demarcated as being test wallets in Wallet UIs and other tools. See example from MetaMask:

image

As a developer who routinely spins up fresh local/ephemeral nodes as part of the testing process, it’s a big frustration having to reconfigure my Web3 wallet(s) every time, so having the de-facto standard handful of accounts, which are assumed to always be present and funded with gas, is time-saving and convenient.

However, sometimes this slips into the real world by accident. The following story is that of a co-worker (non-developer). At some point he imported one of the test wallets into MetaMask while demoing an app to somebody at a conference, and it remained in his account list for a few months.

Then, needing a new wallet to set up Gnosis Pay, he looked in his MetaMask and there it was, sitting at the end of the account list after his Ledger and other routinely used accounts. He proceeded to go through KYC, entered that address, and tried to fund the card, but the $20 test transaction didn’t seem to go through… strange, he thought. He tried sending 0.1 ETH… but that disappeared!

Initially he thought he’d been hacked: maybe it was malware, a keylogger, or a brute-forced seed phrase? Fortunately, the wallet address was one of these test accounts and nothing more sinister, but up until that point there was no indication that basically every Ethereum-adjacent developer has used these accounts at one point or another.

This is yet another story to add to the giant burning fire of user frustration; this is not the first time something like this has happened - far from it - and it certainly won’t be the last.

But, if we can do one simple thing in our apps, in our wallets, in our services, in our deterministic icon generators:

  • Clearly demarcate test accounts with known keys and warn the users, as they may not realize until it’s too late

My suggestions:

  • Deterministic icon generators: overlaid with a warning sign
  • Account name auto-fill, instead of Account N, it could be Test Account!!! N
  • When signing, include a big warning that it is a test account
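All three suggestions boil down to detecting these accounts. A minimal sketch of that detection, deriving the well-known addresses from the standard test mnemonic with eth-account's (unaudited) HD wallet support:

from eth_account import Account

Account.enable_unaudited_hdwallet_features()
TEST_MNEMONIC = "test test test test test test test test test test test junk"

# Derive the first 20 addresses of the de-facto standard test mnemonic.
KNOWN_TEST_ADDRESSES = {
    Account.from_mnemonic(TEST_MNEMONIC, account_path=f"m/44'/60'/0'/0/{i}").address.lower()
    for i in range(20)
}

def is_test_account(address: str) -> bool:
    """True if the address comes from the well-known test mnemonic; the UI
    should then show a prominent 'TEST ACCOUNT' warning."""
    return address.lower() in KNOWN_TEST_ADDRESSES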

1 post - 1 participant

Read full topic

Working Groups Verkle implementers call #18
Wallets Open source alternatives to privy, web3auth and dynamic

Published: Jun 05, 2024

View in forum →Remove

gm,

as some of you may know, I’m running a social media website built on the Ethereum stack. For those who don’t, it’s https://kiwinews.xyz

In any case, a core UX concept is that we use temporary keys that we create in the browser, store in local storage, and use for confirmation-less signing when a user upvotes, submits a link, or leaves a comment.

Now, we’ve obviously done that to provide our users with a better UX so that they don’t have to cumbersomely sign each interaction with their custody wallet.

The system works by delegating posting rights from your custody wallet (for example, one managed by Rainbow or MetaMask) to this temporary key in your local storage. We use a simple delegation protocol on Optimism: the user has to send a transaction that connects the keys onchain so that the Kiwi News nodes can witness the connection.
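A rough sketch of that delegation shape (not Kiwi News' actual protocol or message format) looks like this: generate a throwaway key, have the custody wallet sign a message authorizing it, and let anyone verify the link by recovering the signer.

from eth_account import Account
from eth_account.messages import encode_defunct

temp = Account.create()       # ephemeral key kept in the browser's local storage
custody = Account.create()    # stand-in for the user's real custody wallet

message = encode_defunct(text=f"delegate posting rights to {temp.address}")
signature = custody.sign_message(message).signature

# An indexer or node can recover the custody address from the signed delegation:
recovered = Account.recover_message(message, signature=signature)
assert recovered == custody.address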

All this said, a problem that has significantly lowered user engagement is that users naturally use multiple browsers and devices. For example, a user may use a mobile and a desktop device. Hence, creating multiple local-storage-specific keys is necessary, and leading them to delegate each of these keys on Optimism is necessary, too. This is problematic because it is far from a seamless user experience. If we have to lead the user to send multiple transactions on Optimism, even if they’re virtually free in terms of cost, they cost us engagement and churn. Asking users to send transactions is scary to them and a drop-off point, so we try to avoid it.

Hence, a while ago, I actually started to look into solutions that would make the local storage key less perishable, and I also became interested in finding a solution that would somehow easily synchronize keys between a user’s mobile and desktop device.

In fact, with Apple and Google’s campaign to establish Passkeys more, I became really interested in them. I found out that there is the “largeBlob” extension, which allows the developer to store a small payload in iCloud. This payload is only accessible to the user when they authorize themselves with FaceID to Apple using Passkeys. This is useful as it allows me to store a full Ethereum private key in the largeBlob and retrieve it from any Apple device the user has later on.

So I ended up implementing Passkeys into my app, and it is actually fairly usable on Apple devices. The user can back up the temporary key using the largeBlob extension, and then upon “Connecting their Wallet,” they can consider “Connecting with Passkeys,” which essentially prompts them to authenticate themselves and then downloads the Ethereum private key from iCloud using largeBlob.

Honestly, this was great because I actually don’t have many concerns about storing this temporary key on iCloud:

  • The temporary keys in Kiwi News are technically revocable onchain, and so if they ever leak or there are safety concerns, we could ask users to revoke their delegation.
  • These keys aren’t meant to hold any funds. Their purpose is strictly to post content on behalf of the custody wallet and so, for a user who’s willing to opt in, I think it’s totally fair to post them to iCloud. That said, Kiwi News can be used entirely with your Ethereum wallet and you never actually have to delegate to a temporary wallet, so using Passkeys is optional as of now.

So, with that out of the way, let me tell you the caveats to using Passkeys:

  • Apple and Google are fighting about integrating the “largeBlob” extension. My reading is that Google wants to go forward with PRF instead of “largeBlob,” and so my understanding is that PRF won’t allow developers to store arbitrary data.
  • While there seems to be momentum for RIP-7212 for secp256r1, using this curve for Kiwi News (which would be reasonable) would mean that the user still has to authorize themselves pretty frequently using FaceID when signing stuff (which isn’t really a good trade-off for a social media site).
  • As of now, the Passkeys integration that we’ve done has terrible browser and OS support. It basically exclusively works for Safari on Mac and iOS devices. Chromium-based browsers don’t seem to work because of Google’s unwillingness to implement them. iOS 16 devices don’t work. And there is a mysterious bug that if users use 1Password on iOS to manage their Passkeys, it breaks our entire flow. That said, it is my assessment that this entire situation will take years and not months to be fixed, too, which may be time that we don’t have as a startup.
  • Finally, I think for an Android and Mac user, Passkeys will never seamlessly work as Google and Apple have decided that their respective solutions will only ever work well “in their ecosystem.”

Having found out all of this through integrating Passkeys, I feel rather pessimistic about their future, so I started looking for alternatives.

Recently, to solve this problem for users, I started considering privy again, and I found that they solve our use case perfectly. For the sake of simplicity, I’m going to refer to the specific solution as privy, although there are also alternatives such as web3auth or dynamic, which, to my knowledge, all provide roughly the same service. My layman’s understanding of its inner workings is that:

  • Users can connect their wallets as usual, and privy takes care of sending the signature request, etc.
  • But privy also allows developers to provision so-called “embedded wallets,” which are (on a UX level) essentially equivalent to the temporary keys that Kiwi News currently stores in the browser’s local storage, except that privy encrypts these keys and stores them on their servers (or in an iframe, I’m not sure how it works exactly); in any case, this allows a privy-using developer to generate an embedded wallet that can be synced across a user’s multiple devices and browsers.

While privy is often touted as a tool to onboard new users to crypto, I’m actually not interested in allowing a user to, for example, start trying my site with a Google-login or whatever, but, instead, I find it really useful that embedded wallets can be synced across devices!

So, having already spent months trying to come up with a reasonable solution for my users, and considering that this issue really kills engagement on the site, I now find it quite tempting to integrate with privy. This is because privy, as opposed to other solutions, seems to work independently of whether Apple and Google find a solution to the largeBlob conflict. privy’s embedded wallet synchronization doesn’t rely on browsers finishing the implementation of new features; it just relies on a user being able to sign through the SIWE process.

This integration, however, comes with a caveat: it will allow privy to hold my users’ keys hostage. What do I mean by that?

If I lead all my users to create an embedded wallet with privy, and so their temporary key is now stored with privy servers, then for my site to function, I will have to continue integrating with privy - and if I ever want to migrate away, I’d have to ask all my users to send a transaction to Optimism to delegate a new key. So logically, this will also allow privy to charge us quite a bit in the future. And it makes us reliant on them to provide a safe and properly functioning service.

In fact, I think, as of now, this entire situation doesn’t even yet warrant writing an Ethereum Magicians post, but I feel like there is a greater pattern at play here where companies try to intentionally capture a user’s keys because they know that this will increase their app’s moat.

Without trying to sound too accusatory, I think you can also see this with Warpcast’s strategy of generating their own Ethereum key in the app: this locks down the user’s key and makes it rather unlikely for the user to switch to other clients/apps with that key, since importing and exporting seed phrases isn’t recommended and is a scary act. All of this increases the defensibility of building their app. It makes it harder for others to compete as users are being locked into the ecosystem/app.

So, looking at how privy works, and given that integrating it makes it very unlikely that my users would retain the ability to “exit” with their embedded wallets, I couldn’t help but wonder what open-source alternatives exist that also address all the concerns above.

It seems to me that, at least for a site like mine, which doesn’t need the keys to hold actual money, there don’t seem to be that many requirements that would complicate an integration.

Additionally, I feel like this use case is integral for anyone building crypto consumer apps, as temporary wallet keys must inevitably be synced across devices and browsers somehow, yet the more financially minded self-custody wallets (and Google and Apple) don’t seem too interested in providing solutions here. Their interest is in locking down the keys and keeping them safe instead.

Hence I would love to connect and hear others’ thoughts on this!

I feel like I’m pretty much in the trenches here as I’m one of the few who have attempted to build an actual social consumer use case with the Ethereum wallet stack that doesn’t primarily deal with sending funds around.

So I’d be super happy if this post actually had an impact where it’d change the strategy of some wallet providers in the future, where they start to pay attention to these use cases to help users keep custody of their keys.

Or, in case I haven’t done my research, it’d be helpful if there was something like a privy that I could somehow self-administer so that I’m not giving up control of my users’ keys to give them a better user experience.

5 posts - 3 participants

Read full topic

EIPs EIP Fun Newsletter #50: PeerDAS

Published: Jun 03, 2024

View in forum →Remove

Hi, everyone! I am Zoe from EIP Fun. EIP Fun strives to be the developer relations platform for Ethereum core developers and an adoption accelerator for ERC standards and projects. Our mission is to Serve Ethereum Builders and Scale the Community.

We’d like to share our newsletter with all Magicians this week to kick off our journey! Your thoughts and discussions are welcome. If there are specific topics you’d like us to cover, please reach out to @EIPFun .

Click to read our EIP Fun Weekly #50 :point_right: https://eipfun.substack.com/p/eip-fun-weekly-50-peerdas

:studio_microphone: ACDC #134 Insights: Pectra Devnet 0 launch and scope expansion to include PeerDAS and SSZ code changes.
:sparkles: Hot EIPs: ERC-7208 (On-chain Data Container) and ERC-7496 (NFT Dynamic Traits).
:love_letter: PeerDAS: Enhancing Ethereum’s security and robustness.

Find more information about EIP Fun :point_right: Introducing EIP Fun - EIP Fun

:sparkles: Thank you for reading!

1 post - 1 participant

Read full topic

Working Groups All Core Devs - Execution (ACDE) call #189

Published: Jun 02, 2024

View in forum →Remove

Agenda

Execution Layer Meeting 189 · Issue #1052 · ethereum/pm · GitHub

Moderator: @timbeiko

Summary

Recap by @timbeiko

And, lastly, we flagged the upcoming EPBS (tomorrow) and PeerDAS (June 11) breakouts:

From Eth R&D Discord

Recording

Additional info

Notes by @timbeiko: Tweet thread
Notes by @Christine_dkim: Ethereum All Core Developers Execution Call #189 Writeup | Galaxy

1 post - 1 participant

Read full topic

Working Groups All Core Devs - Consensus (ACDC) call #134

Published: May 31, 2024

View in forum →Remove

Agenda

Consensus-layer Call 134 · Issue #1050 · ethereum/pm · GitHub

Moderator: @ralexstokes

Summary

Recap by @ralexstokes:

This call was packed with various discussions around scoping the upcoming Pectra hard fork.

  • Began with a recap of devnet-0, which generally went really well!
  • Next, covered a variety of updates, extensions or modifications to the existing EIP set
  • Then turned to a new EIP (EIP-7688), considered for inclusion to reflect learnings from devnet-0
  • Touched on early PeerDAS devnet alongside devnet-0, which also went very well given how early the implementations are
  • Spent the rest of the call discussing fork scoping for Pectra
    • Check out the call for the full color
    • Many teams/contributors expressed a variety of options
    • Rough consensus formed around the importance of PeerDAS
    • To reflect an intent to include PeerDAS in Pectra, while derisking feature sets and timelines, client teams agreed to build PeerDAS as part of the Electra fork with a separate activation epoch
      • If PeerDAS R&D goes well, this activation epoch can simply be the Pectra fork epoch
      • If client teams discover difficulties with PeerDAS after implementation over the coming months, we have the option to set the PeerDAS activation epoch after the Pectra epoch (including possibly setting it so far in the future we would be able to schedule for a future hard fork)
    • Otherwise, we agreed to keep the existing Pectra scope as is
  • We didn’t quite have time to finish the discussion around EIP-7688 (stable container SSZ upgrade), which we will cover on the next call!

From Eth R&D Discord

Recording

Transcript

pm/AllCoreDevs-CL-Meetings/Call_134.md at master · ethereum/pm · GitHub

Additional info

Notes by @Christine_dkim: Ethereum All Core Developers Consensus Call #134 Writeup | Galaxy

1 post - 1 participant

Read full topic

Primordial Soup Compiler Fingerprinting and Detection in EVM Bytecode

Published: May 30, 2024

View in forum →Remove

This post serves as discussion for my article/research: Compiler Fingerprinting in EVM Bytecode | Jonathan Becker

Note: I’m working on running this across all contracts, as well as figuring out why the remaining 1.9% of contracts aren’t being classified.

1 post - 1 participant

Read full topic

Interfaces eth_simulateV1: simulate chain
ERCs ERC-7725: Exponential Curves

Published: May 30, 2024

View in forum →Remove

Hello :wave:
I’ve been looking into on-chain exponential curves for a while to assist in a reputation project that aims to decay soulbound governance power as time passes. Similar projects such as ENS have implemented this to decay the premium on expired names.

But every single project that strives to use exponential curves ends up using different formulas. Thus I’ve found myself developing a standard that everyone can use to easily manage exponential curves on-chain.

EXPCurves on GitHub

This smart contract implements an advanced exponential curve formula designed to handle various time-based events such as token vesting, game mechanics, unlock schedules, and other timestamp-dependent actions. The core functionality is driven by an exponential curve formula that allows for smooth, nonlinear transitions over time, providing a more sophisticated and flexible approach compared to linear models.

function expcurve(
    uint32 currentTimeframe,
    uint32 initialTimeframe,
    uint32 finalTimeframe,
    int16 curvature,
    bool ascending
  ) public pure virtual returns (int256);

The smart contract provides a function called expcurve that calculates the curve’s decay value at a given timestamp based on the initial timestamp, final timestamp, curvature, and curve direction (ascending or descending). The function returns the curve value as a percentage (0-100) in the form of a fixed-point number with 18 decimal places.

We can create up to 4 types of curves or even keep a straight line. You can play around with the curvature (k) to determine the steepness of the curve.
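For intuition, here is a minimal Python sketch of the kind of exponential interpolation described above. It is purely illustrative: the exact EXPCurves formula and its 18-decimal fixed-point arithmetic live in the linked repository, and the easing function below is only an assumption.

import math

def expcurve(current_t: int, initial_t: int, final_t: int, k: float, ascending: bool) -> float:
    # Illustrative only: returns a percentage in [0, 100], not the contract's fixed-point value.
    if current_t <= initial_t:
        progress = 0.0
    elif current_t >= final_t:
        progress = 1.0
    else:
        progress = (current_t - initial_t) / (final_t - initial_t)

    if k == 0:
        shaped = progress  # zero curvature degenerates to a straight line
    else:
        # Exponential easing: k > 0 starts slow and accelerates, k < 0 starts fast and flattens.
        shaped = (math.exp(k * progress) - 1) / (math.exp(k) - 1)

    return 100.0 * (shaped if ascending else 1.0 - shaped)

# Example: halfway through a descending schedule with curvature 5 -> ~92.4
print(expcurve(50, 0, 100, 5.0, ascending=False))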

You can also fork this spreadsheet and play with the formula and the resulting charts.

I’m currently writing the EIP but I would like to know if this contribution was previously developed to avoid duplicated work, otherwise, let me hear what you think and what I should’ve done differently!

ascending_with_positive_curvature
descending_with_negative_curvature

1 post - 1 participant

Read full topic

Working Groups Future of EOA/AA Breakout Room #4
Working Groups Future of EOA/AA Breakout Room #3

Published: May 30, 2024

View in forum →Remove

Agenda

Future of EOA/AA Breakout Room #3 · Issue #1053 · ethereum/pm · GitHub

Moderator: @timbeiko

Summary

Call recap by @timbeiko

From Eth R&D Discord

Recording

https://www.youtube.com/watch?v=0vHHhZgrJ58

Additional info

Notes by @poojaranjan: EOA/AA Breakout Room - HackMD
Berlin workshop notes by @adietrichs: Summary AA Event May 24 Berlin - HackMD
Proxy pattern idea by @matt: 7702proxy.md · GitHub
Next breakout: Future of EOA/AA Breakout Room #4

1 post - 1 participant

Read full topic

Ethereum Research
Economics Pre-confirmation Liveness Slashing Penalties from the Proposer's Perspective

Published: Jun 21, 2024

View in forum →Remove

Current designs around pre-confirmations involve a liveness slashing penalty: if a proposer who committed to pre-confirmations misses its proposal, part of its collateral is burned or redistributed to the user that sent the pre-confirmation as a payback.

This post explores the liveness penalty from the proposer’s economic perspective.

Sources of Liveness Issues

Liveness issues are complex and can come from different actors or sources; some are the result of the proposer’s actions or choices, and some don’t depend on the proposer at all. For example:

  • proposing a block in time but being reorged by the next proposer,
  • failure from the relayer to send the header in time,
  • failure from the relayer to propagate the signed header in time and reveal the block to the proposer.

As a result, the decision to opt in or not from a proposer’s perspective has to take into account an inherent risk outside of its own actions. Using a statistical approach on network history sounds like an easy starting point.

Minimal Economic Viability

In the last 7 days on the network, about 0.54% of slots were missed. To break even economically (that is, for an operator to neither lose nor win anything in the long run), assuming the liveness penalty is 1 ETH, the minimal extra tip of a pre-confirmation would be 0.0054 ETH.

To put it in perspective, the median execution reward over the last 7 days is ~0.048 ETH, so with 1 ETH of collateral the pre-confirmation tips would need to be about 10% of the block’s value under current network conditions. Using P(miss) as the probability of missing a block, the break-even formula is:

(1 - P(miss)) \cdot tip = P(miss) \cdot penalty

And so the minimal tip:

tip = \frac{P(miss) \cdot penalty}{1 - P(miss)}

With 1 ETH as collateral, here is the model for low probabilities of a missed block with P(miss) < 0.025:

[Figure: minimal break-even tip as a function of P(miss), for P(miss) < 0.025]

Zooming out to P(miss) < 0.5:

[Figure: minimal break-even tip as a function of P(miss), zoomed out to P(miss) < 0.5]
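As a quick sanity check of the break-even formula, here is a minimal Python sketch using the example numbers from this post (0.54% missed slots, 1 ETH penalty, ~0.048 ETH median execution reward):

def min_viable_tip(p_miss: float, penalty_eth: float) -> float:
    # Break-even tip: expected tip income equals expected liveness penalty.
    return p_miss * penalty_eth / (1.0 - p_miss)

tip = min_viable_tip(0.0054, 1.0)
print(f"minimal tip: {tip:.4f} ETH")                        # ~0.0054 ETH
print(f"share of median block reward: {tip / 0.048:.1%}")   # ~11%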

Opt-in if Economically Viable

One idea to make this viable at scale with little effort from proposers would be for the pre-confirmation sidecar on the proposer side to opt in to pre-confirmations only if the tip is above what’s economically sound given the current rate of misses on the network. For example, if over the last 24 hours the average missed block proposal rate is 0.5%, only commit to pre-confirmations whose tip is above 0.005 ETH.

This approach requires either the relayer to pass the pre-confirmation tip information to the proposer so it can decide whether or not to commit to pre-confirmations, or the proposer to send the minimal tip to the builder so it can provide a block that matches it.

The advantage of this approach is that if the network is struggling at scale, the risk for a proposer of missing a slot increases, and so it makes sense for proposers to opt out of pre-confirmations until the situation resolves. Increasing the pre-confirmation bid under such conditions makes sense as more risk is taken.

A disadvantage is that the missed block proposal rate is an approximation: it doesn’t account for validators that are completely offline, nor for the extra cost of validating the pre-confirmation on the proposer side, which takes time and increases the risk of missing the slot.
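A minimal sketch of such a sidecar decision rule, assuming the sidecar can observe a rolling miss rate and the offered tip (the function and the safety margin are hypothetical, not part of any existing sidecar):

def should_commit(tip_eth: float, rolling_miss_rate: float, penalty_eth: float,
                  safety_margin: float = 1.2) -> bool:
    # Hypothetical rule: commit only if the tip clears the break-even threshold,
    # padded by a margin because the statistical miss rate ignores the proposer's
    # own validation overhead.
    threshold = rolling_miss_rate * penalty_eth / (1.0 - rolling_miss_rate)
    return tip_eth >= safety_margin * threshold

# With a 0.5% rolling miss rate and 1 ETH penalty, commit only above ~0.006 ETH.
print(should_commit(tip_eth=0.007, rolling_miss_rate=0.005, penalty_eth=1.0))  # True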

Alternatives

Adjusted Liveness Penalty

Instead of using a minimal tip as the viability criterion, the liveness penalty could be dynamically adjusted to whatever makes the commitment minimally viable. The tip could then be a fixed value.

User-Defined Liveness Penalty

The user sending the pre-confirmation could also decide both the liveness penalty and the tip, as suggested in User-Defined Penalties: Ensuring Honest Preconf Behavior, and adjust them to the current state of the network and to what validators accept. The assumption here is that for some pre-confirmations the goal is to land on the L1 as soon as possible, so reducing the liveness penalty would increase their probability of being pre-confirmed. On the other hand, an arbitrage pre-confirmation could prefer a larger liveness penalty, as its opportunity would be lost if the block is missed.

Caveats

This simple break-even model on the proposer side offers no upside beyond breaking even, so it is unclear whether it will motivate proposers to opt in.

1 post - 1 participant

Read full topic

Data Science Blob Usage Strategies by Rollups and Non-rollup Applications

Published: Jun 20, 2024

View in forum →Remove

Full Report

TL;DR

  1. The main applications using blobs are rollups, accounting for approximately 87%. Non-rollup applications mainly include Blobscriptions and customized type 3 transactions.
  2. Rollup applications choose different blob usage strategies according to their own situations. The strategies will consider the number of blobs carried by type 3 transactions, blob utilization, and blob submission frequency to balance the costs of availability data fees and delay costs.
  3. Non-rollup applications can be characterized and distinguished from rollup applications by the number of blobs carried by type 3 transactions, blob utilization, and blob submission frequency. These features help identify scenarios of blob abuse, allowing for the design of corresponding anti-abuse mechanisms.
  4. In most cases, using blobs as a data availability solution is more cost-effective than calldata. However, there are a few scenarios where calldata is cheaper: when blob gas prices spike and blob utilization is extremely low.
  5. Short-term fluctuations in blob gas prices are mainly influenced by the demand from non-rollup applications. Rollup applications have a relatively inelastic demand for blobs, so they do not significantly impact short-term fluctuations in blob gas prices.
  6. Currently, rollup applications do not seem to consider blob gas price as a reference factor in their blob usage strategies.
  7. The probability of blocks containing type 3 transactions being reorganized is extremely low. Additionally, carrying more blobs does not increase the probability of block reorganization. However, there is a clustering phenomenon in block height for blocks containing type 3 transactions.

Introduction

This report provides an in-depth analysis of type 3 transactions used for carrying blobs from the time of the Ethereum Dencun upgrade until May 22, 2024. It focuses on the blob usage strategies of rollup and non-rollup applications. The dataset, data processing programs, and visualization code for this report are open source, detailed in the following “Dataset” section.

Type 3 Transactions & Blobs Share by Applications

Rollup Applications

Observations from Figure 1 on the proportion of type 3 transactions:

  • Base, Scroll, Linea, and Starknet are in the same tier, having the highest transaction proportions.
  • Arbitrum, Optimism, and Zksync are in the next tier, having the second-highest transaction proportions.

This phenomenon seems counterintuitive, as Arbitrum and Optimism have higher TPS than Scroll, Linea, and Starknet and should therefore have a higher proportion of type 3 transactions.

Figure 2 shows that this counterintuitive phenomenon is caused by rollups adopting different strategies for the number of blobs carried by their type 3 transactions.

Observations from Figure 2 on the proportion of blobs:

  • Base stands alone, having the highest proportion of blobs.
  • Arbitrum and Optimism are in the same tier, having the second-highest proportion of blobs.
  • Scroll, Linea, Starknet, and Zksync are in the same tier, having a medium proportion of blobs.

This phenomenon aligns better with intuition: blob proportions are directly related to the scale of a rollup’s availability data, and thus show a positive correlation with rollup TPS.

The difference between the proportion of type 3 transactions (31%) and blobs (14%) for non-rollup applications indicates that non-rollup applications and rollup applications have different needs.

Non-Rollup Applications

  • Rollup applications are B2B businesses aiming to fill fine-grained Layer 2 transaction availability data, so their type 3 transactions are not limited to carrying only 1 blob.
  • Non-rollup applications are B2C businesses aiming to upload complete text, images, etc., so their type 3 transactions usually carry only 1 blob to meet their needs.

Rollup Blob Usage Strategies

Rollup Strategy Model

This section models the rollup blob usage strategies with

  1. blobNumber, i.e. the number of blobs carried by type 3 transactions
  2. blobUtilization, i.e. blob space utilization
  3. blobInterval, i.e. the blob submission interval

Fee Cost

The fee cost per transaction for rollups is expressed as:

\begin{equation} feeCost = \frac{1}{k}(\frac{blobCost}{blobUtilization}+\frac{fixedCost}{blobNumber*blobUtilization}) \end{equation}
  • fixedCost: the fixed cost of a type 3 transaction
  • blobCost: the cost of a single blob
  • The larger the blobUtilization, the lower the amortized cost of the blob fee \frac{blobCost}{blobUtilization} and the fixed cost \frac{fixedCost}{blobNumber*blobUtilization}, resulting in a lower fee cost feeCost.
  • The larger the blobNumber, the lower the amortized cost of the fixed cost \frac{fixedCost}{blobNumber*blobUtilization}, resulting in a lower fee cost feeCost.

Delay Cost

The delay cost per transaction for rollups is expressed as:

\begin{equation} delayCost = F(\frac{blobNumber*blobUtilization*k}{tps}) \end{equation}
  • The larger the blobUtilization, the larger the delay cost delayCost.
  • The larger the blobNumber, the larger the delay cost delayCost.
  • The larger the tps, the smaller the delay cost delayCost.

The derivation of the formula can be found in the full version.
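To make the fee side of this trade-off concrete, here is a small Python sketch of the fee-cost formula above. It assumes, as the formula suggests, that k is the number of L2 transactions that fit in one fully-utilized blob; the numeric inputs are placeholders, not figures from the report.

def fee_cost_per_tx(blob_cost: float, fixed_cost: float,
                    blob_number: int, blob_utilization: float,
                    txs_per_full_blob: int) -> float:
    # Amortized L1 fee per L2 transaction: the blob fee is spread over the space
    # actually used, and the fixed type 3 transaction cost is further amortized
    # over all blobs it carries.
    k = txs_per_full_blob  # assumed meaning of k in the formulas above
    return (blob_cost / blob_utilization
            + fixed_cost / (blob_number * blob_utilization)) / k

# Placeholder numbers: higher utilization and more blobs per transaction both lower the per-tx cost.
print(fee_cost_per_tx(blob_cost=0.002, fixed_cost=0.001, blob_number=1,
                      blob_utilization=0.5, txs_per_full_blob=300))
print(fee_cost_per_tx(blob_cost=0.002, fixed_cost=0.001, blob_number=6,
                      blob_utilization=0.9, txs_per_full_blob=300))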

Rollup Strategy Analysis

Non-Rollup Blob Strategies

Rollup applications are B2B, while non-rollup applications are B2C. Therefore, non-rollup applications differ from the rollup strategy model. For non-rollup applications:

  • The number of blobs carried by type 3 transactions depends on the size of the content (texts/images) stored in the blobs.
  • Blob utilization depends on the size of the content (texts/images) stored in the blobs.
  • Blob submission intervals depend on the immediate needs of C-end users, with no delay costs involved.

  • According to Figure 5 (Others), 1 blob can meet the needs of most non-rollup applications.

  • According to Figure 6 (Others), blob utilization is concentrated between 20% and 40%, indicating that non-rollup applications generally cannot fill a blob, with data sizes mainly between 25.6 kB and 51.2 kB.

  • According to Figure 7 (Others), about 83% of blobs have a submission interval of less than 1 minute, indicating a relatively high frequency of user demand for non-rollup applications.

In summary, the type 3 transactions for non-rollup applications can be characterized as: high-frequency transactions carrying 1 low-utilization blob.

The essence of this characterization is that non-rollup applications are driven by immediate needs and are less concerned about the fee cost per data byte compared to rollup applications.

This characterization allows for the identification of non-rollup applications, which in turn helps design mechanisms to limit blob abuse by non-rollup applications.

Is Using Blobs Always More Cost-effective than Calldata?

Introducing feeRatio to measure the relative advantages of the two solutions:

\begin{equation} feeRatio = \frac{calldataFeeCost }{blobFeeCost} \end{equation}
  • When feeRatio ≥ 1, it indicates that using blobs as a data availability solution is not worse than calldata.
  • When feeRatio < 1, it indicates that using blobs as a data availability solution is worse than calldata.

Figure 8 also shows a few cases where feeRatio < 1 (red), indicating that calldata is more cost-effective than blobs:

  • Mostly in non-Rollup applications (Others):
    • Non-rollup applications generally do not care about the cost differences between blobs and calldata; they care about using blobs per se, such as in Blobscriptions.
  • A few in Metal rollup:
    • Rollup application Metal seems not to have considered switching between blobs and calldata in its strategy, leading to suboptimal choices in some extreme cases.
    • Extreme cases are mainly due to Metal’s low blob utilization (see Figure 6) coinciding with a spike in blob gas prices.
    • However, given that extreme scenarios are rare and maintaining two data availability solutions is costly, Metal’s suboptimal strategy in extreme cases seems acceptable.

The analysis of blob and calldata solutions in this section only considers fee costs, not delay costs. Taking delay costs into account would give calldata an additional advantage.

Blob Gas Price and Blob Usage Strategies

Analysis of Blob Gas Price Fluctuations


Figures 9 and 10 show that in scenarios of high blob gas prices (> 10), the proportion of non-rollup applications (Others) is significantly higher than in scenarios of low blob gas prices (< 10).

Therefore, it can be concluded that the surge in blob gas prices is mainly driven by the demand from non-rollup applications, rather than rollup applications. Otherwise, the proportion of rollup and non-rollup applications should remain stable.

How Rollups Respond to Blob Gas Price Fluctuations

Hypothesis 1: The higher the blob gas price, the more blobs applications should carry per type 3 transaction to reduce fee costs; i.e., the number of blobs should be positively correlated with the blob gas price.

Figure 14 shows that the hypothesis does not hold.

Hypothesis 2: The higher the blob gas price, the more applications should increase blob utilization to reduce fee costs; i.e., blob utilization should be positively correlated with the blob gas price.

Figure 15 shows that the hypothesis does not hold.

Hypothesis 3: The higher the blob gas price, the longer applications should delay blob submissions to reduce fee costs; i.e., blob submission intervals should be positively correlated with the blob gas price.

Figure 16 shows that the hypothesis does not hold.

In Figures 9 and 10, readers might notice that some rollup applications seem to respond to high blob gas prices; for example, Scroll seems to suspend blob submissions when blob gas prices are high. However, this conclusion is incorrect: the reason is simply that not all rollups started using blobs immediately after the EIP-4844 upgrade.

Blobs and Block Reorg

From the Dencun upgrade to May 22, there were 171 type 3 transactions included in forked blocks and 348,121 included in canonical blocks, so the proportion of type 3 transactions being forked is approximately 0.049%. This section explores the relationship between block reorgs and blobs.

Blob Number Distribution in the Canonical and Forked Blocks with Blobs

Hypothesis: More blobs increase the probability of block reorganizations.

If the hypothesis holds, the following inequality should be satisfied:

\begin{equation} P(reorg|blob=n) > P(reorg|blob=n-1) \end{equation}

According to Bayes’ theorem, the inequality above is equivalent to:

\begin{equation} \frac{P(blob=n|reorg)}{P(blob=n)} > \frac{P(blob=n-1|reorg)}{P(blob=n-1)} \end{equation}

We check whether the actual data satisfies this inequality and obtain the following table:

The table above shows that the inequality does not hold for all n. Therefore, the hypothesis does not hold, indicating that carrying more blobs is not significantly related to block reorganizations.
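A minimal Python sketch of this check, using hypothetical counts (the actual per-n counts are in the report’s table, which is not reproduced here):

from collections import Counter

# Hypothetical counts of blobs-per-block {n_blobs: number_of_blocks}; NOT the report's data.
canonical = Counter({1: 200_000, 2: 80_000, 3: 40_000, 4: 20_000, 5: 6_000, 6: 2_121})
forked    = Counter({1: 90,      2: 40,     3: 20,     4: 12,     5: 6,     6: 3})

total_canonical = sum(canonical.values())
total_forked = sum(forked.values())

# Compare P(blob=n | reorg) / P(blob=n) across n; the hypothesis requires this ratio to increase with n.
for n in sorted(canonical):
    p_blob_n = (canonical[n] + forked[n]) / (total_canonical + total_forked)
    p_blob_n_given_reorg = forked[n] / total_forked
    print(n, round(p_blob_n_given_reorg / p_blob_n, 3))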

Distribution of Type 3 Transactions and Blobs by Applications in the Canonical and Forked Blocks with Blobs


Figures 18 and 19 show that the proportion of type 3 transactions/blobs for Zksync and Scroll in forked blocks is significantly higher than in the canonical blocks.

Applications seem to have some connection with block reorganizations, possibly related to differences in blob usage strategies by applications:

  • Zksync and Scroll are less strategic in selecting the timing of submitting type 3 transactions and thus end up targeting block heights prone to reorganization.
  • The unique characteristics of Zksync and Scroll’s type 3 transactions make the blocks containing them more likely to be reorganized.

Clustering Phenomenon of Forked Blocks with Blobs


If each block has the same probability of being reorganized, the forked blocks should be evenly distributed across the block height range. However, Figure 20 shows a clustering phenomenon in block heights for forked blocks, possibly related to network conditions.

In addition, the clustering phenomenon observed in block reorganizations seems to be somewhat related to the applications that submit blobs. For example, type 3 transactions from non-rollup applications are only included in forked blocks between heights 19500000 and 19600000.

3 posts - 2 participants

Read full topic

Block proposer Block Building is not just knapsack!

Published: Jun 19, 2024

View in forum →Remove

Authors: @Mikerah Afonso @sarisht

Shoutout to Gabearro Ventalitan Nerla Yun Qi and Surya for all the vibes and discussions!

This project was done as a Hackathon Project at IC3 camp last week.

TL;DR

We present a formal model of block building in blockchains. We show that block building is at least a combination of the Knapsack problem and the Maximum Independent Set problem, thus showing that block building is an NP-hard problem. Next, we provide various greedy algorithms with different tradeoffs. Then, we show simulation results to justify the algorithms and benchmarks. Our results show that tweaking the greedy solution with the results of the known knapsack constraint outperforms the currently used greedy algorithm by ~15% in terms of fees earned. Finally, we discuss how this is relevant for block builders in Ethereum in practice and directions for future research.

Introduction

Block building in Ethereum has evolved into a multimillion-dollar industry, particularly with the introduction of MEV-Boost. This has significantly increased the revenue earned by the builders. However, the builders’ algorithm for selecting transactions and transaction bundles needs more study. In collaboration with Flashbots, Mikerah (group lead for the project) has recently worked on a project that formalizes the model for block building as a knapsack problem. This model considers each transaction’s utility (the fee offered by the transaction) and cost (the gas used by the transaction), with a budget for the maximum price that can be paid (the gas limit for the block). The practical relevance of this research is evident, as it addresses a significant limitation of the current model, where not all transactions are independent of each other.

The Problem

Let’s delve into the heart of the matter by examining why transactions are not independent, a key challenge in block building.

Bitcoin Blockchain

The most critical problem described in Satoshi Nakamoto’s blockchain paper was catching double-spending. If two transactions try to spend the same UTXO, only one of them should make it on-chain. Thus, we can see that some transactions are dependent on each other. However, that is not all; some transactions that interact with Bitcoin’s OP-code design can also depend on each other. A classic example would be an HTLC, where either the payment (released by revealing a pre-image of a hash) or a refund (released when the timelock on the transaction expires) can go through. If both transactions are simultaneously in the mempool, then the transactions conflict with each other.

Ethereum Blockchain

Ethereum inherits the double-spending problem but, owing to its smart contract and gas fee design, only partially suffers from the other type of conflict, since the fee is paid based on the gas used. This shifts the model slightly: the fee paid and the gas used by a transaction depend on other transactions on the chain. Further, in the presence of searchers, some transactions are bundled such that multiple bundles contain the same transaction and thus cannot be included in the block simultaneously.

Model

We first introduce the assumptions we make before describing the mathematical formulations.

Assumptions

  • Dependent fees and gas are hard to model since we cannot have a boolean representation. Thus, we only consider “Conflicts” and touch upon “Dependency.” Conflicts are situations in which the transactions cannot occur together, and dependency is when one transaction requires another transaction to be executed before it is valid.
  • We further ignore the optimal ordering of transactions inside a block. Ordering transactions in a particular order can lead to higher profits due to MEV, which we ignore for the same reason as above.
  • For Ethereum, under the conditions of EIP 1559, the fee considered is the part above the base fee. Any transaction with a negative fee is ignored.

Given these assumptions, we now model the binary allocation problem with constraints and dependencies as follows:
Let T be the set of transactions. A transaction in T is denoted by tx_i.
Let f_i denote the fee associated with a transaction tx_i.
Let g_i denote the gas associated with a transaction tx_i.
Let BL be the maximum block gas limit.

Then, we have the following optimization problem
Maximise

\sum_{i\in n} f_ix_i

Subject to

\begin{align*} &\sum_{i\in n} x_ig_i \leq BL \\ & x_i+x_j \leq C_{ij}, \forall i\neq j \in n\\ & x_j - x_i \leq M_{ij}, \forall i\neq j \in n\\ & x_i \in \{0,1\} \end{align*}

where,

  • C_{ij} = 1 if tx_i and tx_j are conflicting transactions, 2 otherwise
  • M_{ij} = 0 if tx_j depends on tx_i and can only be allocated after tx_i, 1 otherwise

Since, in practice, it is hard for a block builder to infer the third constraint (without executing all of the transactions) from a limited snapshot of the transactions in their order flow pools, we can omit it to simplify the problem. If the builder comes across such a transaction, it would simply be considered invalid.

As such, we can obtain the following simplified optimization problem
Maximise

\sum_{i\in n} f_ix_i

Subject to

\begin{align*} &\sum_{i\in n} x_ig_i \leq BL \\ & x_i+x_j \leq C_{ij}, \forall i\neq j \in n\\ & x_i \in \{0,1\} \end{align*}

where,

  • C_{ij} = 1 if tx_i and tx_j are conflicting transactions, 2 otherwise
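For intuition, this simplified formulation can be handed directly to an off-the-shelf binary program solver. Here is a minimal sketch using PuLP (a library choice made for this example, not the authors’ setup), with made-up fee, gas, and conflict data:

import pulp

# Placeholder data: fees, gas, a block gas limit, and one conflicting pair of transactions.
fees = [5.0, 3.0, 4.0, 2.5]
gas = [100, 80, 120, 60]
BL = 250
conflicts = [(0, 2)]  # tx 0 and tx 2 cannot both be included

prob = pulp.LpProblem("block_building", pulp.LpMaximize)
x = [pulp.LpVariable(f"x_{i}", cat="Binary") for i in range(len(fees))]

prob += pulp.lpSum(fees[i] * x[i] for i in range(len(fees)))          # maximize total fees
prob += pulp.lpSum(gas[i] * x[i] for i in range(len(fees))) <= BL     # block gas limit
for i, j in conflicts:
    prob += x[i] + x[j] <= 1                                          # conflicting transactions

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(v.value()) for v in x], pulp.value(prob.objective))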

Reductions

Now, we present formal arguments as to why block building is an instance of the knapsack problem and the maximum independent set problem.

Reduction to knapsack

The reduction from knapsack to the above problem is easy to see. Assume no conflicts arise amongst any transactions. In that case, the problem is exactly the knapsack problem: the utility is the fee paid by a transaction, the space occupied is the gas used by a transaction, and the block’s gas limit determines the knapsack size. Thus, the block-building problem is at least as hard as the knapsack problem.

Reduction to Maximum Independent Set

Suppose we could solve the above block-building problem in polynomial time without the constraint that limits the size of the block. Then consider the instance where the block gas limit is set to the sum of the gas of all transactions in the mempool, so that there is enough space for every transaction in the mempool to fit in the block. This problem is now equivalent to finding the maximum weighted independent set: we can consider all transactions as vertices, with an edge between two vertices if the corresponding transactions conflict. This reduction creates an instance of the maximum weighted independent set problem, which is known to be NP-hard.

Algorithms for approximate result

As we mentioned above, block building is an NP-hard problem with reductions from both the knapsack problem and the maximum weighted independent set problem. Since the maximum weighted independent set problem doesn’t admit a constant-factor (C-)approximation, the block-building problem doesn’t admit one either.

As such, we devise several greedy algorithms in order to solve the block-building problem in practice.

Greedy Classic (GC)

We expect today’s builders to use the first algorithm we present. It follows the most widely used greedy knapsack heuristic: all objects are sorted by their utility-to-cost ratio, and space is greedily allocated to each object in order until no more fits. Due to the added conflict constraint, the builder must also check each candidate for conflicts with the transactions already added to the block. Thus, the algorithm works as follows:

Algorithm input: T = \{t_i\}, F = \{f_i\}, G = \{g_i\}
Algorithm output: An ordered block with gas used less than BL
Algorithm description:

Sort T by corresponding F/G
Let B  := {}
Let BS := 0
For each t in T, f in F, g in G do:
    if t has any conflict with tx in B: continue;
    if g + BS < BL: B.append(t); BS += g
return B

In practice, the conflict between transactions is only known if simulated sequentially. We propose two constraints on how this conflict can be modeled.

  • Two transactions t_1 and t_2 conflict if the transactions cannot be executed together. This can happen if some address is trying to double-spend some money it has or if two searcher bundles try to extract MEV from the transaction. We call this conflict a “Real” conflict.
  • Two transactions t_1 and t_2 conflict if they interact with the same address. We call this conflict an “All” condition. These transactions do not necessarily invalidate each other, but we keep this as a potential conflict condition since it is more straightforward to determine (a constant-time operation) than the other constraint (whose cost scales with the gas used), and thus can be helpful for builders optimizing for compute time.

Note: In the solution simulation, we assume that p=0.95 of transactions in the “All” conflict are not in the “Real” conflict.

Based on the definition of conflicts, we present the two baseline greedy solutions, which we label CG All and CG Real.
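A minimal Python sketch of this baseline, parameterized by a conflict predicate so that either the “Real” or the “All” definition can be plugged in (the transaction fields and the address-based conflict check are illustrative, not the authors’ implementation):

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Tx:
    fee: float   # fee above the base fee
    gas: int
    sender: str  # used by the illustrative "All" conflict check below

def greedy_classic(txs: List[Tx], gas_limit: int,
                   conflicts: Callable[[Tx, Tx], bool]) -> List[Tx]:
    # Sort by fee/gas and add transactions greedily, skipping any that conflict.
    block, used = [], 0
    for tx in sorted(txs, key=lambda t: t.fee / t.gas, reverse=True):
        if any(conflicts(tx, other) for other in block):
            continue
        if used + tx.gas <= gas_limit:
            block.append(tx)
            used += tx.gas
    return block

# Illustrative "All" conflict: two transactions conflict if they touch the same address.
all_conflict = lambda a, b: a.sender == b.sender

mempool = [Tx(0.02, 100_000, "0xA"), Tx(0.01, 50_000, "0xA"), Tx(0.015, 80_000, "0xB")]
print(greedy_classic(mempool, gas_limit=200_000, conflicts=all_conflict))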

Knapsack Greedy

The greedy solution described above is not a good approximation. Looking back at the knapsack problem, we get a 1/2 approximation of the optimal solution by taking the better of the classic greedy solution and the single highest-utility object that was not allocated.

The algorithm begins by running an instance of the classic greedy. It then finds the highest-paying (highest f/g) transaction not yet considered and forces it into the block. Adding this transaction may require modifying the block, since some transactions in block B conflict with it or there is insufficient space. Thus, we remove transactions that conflict with the new addition and then remove the lowest f/g transactions until there is enough space for it. After inserting the transaction, we repeat the greedy insertion until the block is full again. We repeat this until every transaction has been forced in at least once on top of the greedy solution.

The pseudocode for the solution is as follows:

Sort T by corresponding F/G
Let B  := {}
Let B_f:= {}
Let S  := {}
Let BS := 0
while S != T: 
    let t := t in T, not in S, with maximum f/g:
    remove any transaction from B that conflicts with t.
    remove smallest f/g txs until there is space to insert t.
    B.append(t)
    S.append(t)
    For each t in T, f in F, g in G do:
        if t has any conflict with tx in B: continue;
        if g + BS < BL: B.append(t); BS += g; S.append(t)
    if sum(B.f) > sum(B_f.f): B_f = B

return B_f

# B.f is the fee corresponding to each transaction in B

In this greedy protocol, we attempt to force the inclusion of each transaction in turn. It is still distinct from the 1/2-approximation greedy for knapsack, but it tries to replicate what that algorithm accomplishes, applied to every item not picked by the classic greedy.

This solution will outperform its classic greedy counterpart since it takes the maximum over a set of solutions, one of which is the classic greedy solution. Like the classic greedy solution, we analyze it when conflicts are “Real” and “All”.

Classic Greedy Informed Solutions

The knapsack problem is relatively easy to solve compared to other NP-hard problems, especially compared to the maximum independent set constraint we have been imposing. Thus, we allow the builder to solve the knapsack part reasonably accurately and quickly via a BLP (binary linear programming) solver. The knapsack solution gives the builder some idea of how to build the block, and when the chosen block contains conflicting transactions, the “later” transactions are discarded. In this solution, we first run the knapsack LP. We then sort its output based on i) the f/g ratio, ii) f, and finally iii) g. Greedy selection picks transactions in the order of that metric, and whenever there is a conflict, the LP solver is called again with updated constraints: x_i is set to 1 for every transaction already chosen and x_i is set to 0 for the conflicting transaction. This is repeated until the block is full.

Let B  := {}
Let B_c:= {nil}
Let BS := 0
Let C  := {}
while B_c != B:
    B_c = LP.solve(sum(x.f), x.g <= BL, C)
    Sort B_c by "heuristic"
    for t in B_c:
        if t has any conflict with tx in B: 
            C.add(x_t = 0)
            break;
        B.append(t)
        C.add(x_t = 1)

return B


# Replace "heurestic" by f/g for standard, 
                       f for high-value 
# Sorting is in descending order 

We label these solutions CGI-f/g and CGI-f. We only analyze the “All” conflicts for these, since the time to run the algorithm is potentially higher than for the other greedy algorithms.

Simulation

Due to our limited time to work on the project, we generated synthetic transaction data instead of working with real transactions. To simulate Ethereum mempool transactions, we chose the following dataset:

Dataset

We choose 2000 transactions under the following distribution.

  • 80%: SMALL: g ~ N(24k, 3k) f/g ~ N(16,4) - These low gas-consuming transactions have minimal smart contract interactions and thus use less gas. In almost all cases, the gas fees for these transactions are small since they are usually never a priority transaction.
  • 18% : LARGE1: g ~ N(200k, 20K) f/g ~ N(16,4) - These represent transactions that have a significant contract execution; however, in this case, these are still not priority transactions, since the user is okay to wait for some time for the contract execution.
  • 2% : LARGE2: g ~ N(200k, 20K) f/g ~ N(40,10) - These are the priority transactions. Usually, these have high gas usage since they mostly interact with, for example, DeFi contracts and want to be executed as soon as possible.

We simulate the conflicts among these transactions by randomly choosing pairs such that each transaction has \sigma conflicts (a sketch of this synthetic setup is given below). While our preliminary results use the same \sigma across all types of transactions, in practice the larger transactions, especially the high-paying ones, would have significantly more conflicts, since MEV-extracting bundles are usually constructed around them.
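A minimal Python sketch of generating such a synthetic mempool, based on a plain reading of the distribution above (the authors’ exact sampling code is not published here):

import random

def sample_mempool(n: int = 2000, sigma: int = 2, seed: int = 0):
    # Sample synthetic transactions (gas, total fee) and random conflict pairs.
    rng = random.Random(seed)
    txs = []
    for _ in range(n):
        r = rng.random()
        if r < 0.80:                                   # SMALL
            gas, fee_per_gas = rng.gauss(24_000, 3_000), rng.gauss(16, 4)
        elif r < 0.98:                                 # LARGE1
            gas, fee_per_gas = rng.gauss(200_000, 20_000), rng.gauss(16, 4)
        else:                                          # LARGE2
            gas, fee_per_gas = rng.gauss(200_000, 20_000), rng.gauss(40, 10)
        gas = max(21_000, int(gas))
        txs.append({"gas": gas, "fee": max(0.0, fee_per_gas) * gas})

    # Roughly sigma conflicts per transaction, chosen uniformly at random.
    conflicts = set()
    while len(conflicts) < n * sigma // 2:
        i, j = rng.sample(range(n), 2)
        conflicts.add((min(i, j), max(i, j)))
    return txs, conflicts

txs, conflicts = sample_mempool()
print(len(txs), len(conflicts))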

Results

We ran our simulation over 100 blocks with the mempool created as above.

When we consider \sigma=2 number of conflicts per transaction, we see the following results:


Increasing the number of conflicts each transaction has increases the problem’s difficulty, and therefore the various greedy algorithms show a larger separation in performance:

For \sigma = 10,

For \sigma = 20,

For \sigma = 40,

Future Research Direction

As shown above, the block-building problem is NP-hard, and as long as conflicts exist amongst the transactions, it remains a complex problem.

However, this does not mean that all hope is lost. The block-building problem may have more structure than the general Maximum Independent Set problem: combining the constraints of Knapsack and Maximum Independent Set gives us a smaller search space in which to find a satisfactory approximate solution.

Further, for Ethereum bundles from searchers, if tx_i and tx_j conflict and tx_j and tx_k conflict, then there is a high likelihood that tx_i and tx_k also conflict. This eases the constraints on the solution since, within an all-to-all graph of transactions, the MIS part only requires picking the transaction with the highest utility (while still satisfying the knapsack constraint).

Another thing to note is that our algorithms can inform how block builders construct blocks in practice. Notably, the Classical Greedy Informed algorithm, in which we sort the transactions by highest fee, is closest to the optimal solution.

That being said, the most exciting extension to this research would be modeling the block-building problem as a job sequencing problem instead and somehow estimating how utility (fee+MEV) from one transaction affects the utility of other transactions sequenced after the first transaction.

On that note, we invite potential collaborators to explore new ideas for building blocks that maximize the builders’ utility.

3 posts - 3 participants

Read full topic

Block proposer Fork-Choice enforced Inclusion Lists (FOCIL): A simple committee-based inclusion list proposal

Published: Jun 19, 2024

View in forum →Remove


^focil => fossil => protocol ossification

by Thomas, Barnabé, Francesco and Julian - June 19th, 2024

This design came together during a small, week-long, in-person gathering in Berlin with RIG and friends to discuss censorship resistance, issuance, and Attester-Proposer-Builder-Consensus-Execution-[insert here] Separation.

Thanks to Luca, Terence, Toni, Ansgar, Alex, Caspar and Anders for discussions, feedback and comments on this proposal.

tldr

In this post, we introduce Fork-Choice enforced Inclusion Lists (FOCIL), a simple committee-based IL design.

FOCIL is built in three simple steps:

  1. Each slot, a set of validators is selected to become IL committee members. Each member gossips one local inclusion list according to their subjective view of the mempool.
  2. The block proposer collects and aggregates available local inclusion lists into a concise aggregate, which is included in its block.
  3. The attesters evaluate the quality of the aggregate given their own view of the gossiped local lists to ensure the block proposer accurately reports the available local lists.

This design ensures a robust and reliable mechanism to uphold Ethereum’s censorship resistance and chain neutrality properties, by guaranteeing timely transaction inclusion.

Introduction

In an effort to shield the Ethereum validator set from centralizing forces, the right to build blocks has been auctioned off to specialized entities known as builders. Over the past year, this has resulted in a few sophisticated builders dominating the network’s block production. Economies of scale have further entrenched their position, making it increasingly difficult for new entrants to gain significant market share. A direct consequence of oligopolistic block production is a deterioration of the network’s (weak) censorship resistance properties. Today, two of the top three builders are actively filtering out transactions interacting with sanctioned addresses from their blocks. In contrast, 90% of the more decentralized and heterogeneous validator set is not engaging in censorship.

This has driven research toward ways that allow validators to impose constraints on builders by force-including transactions in their blocks. These efforts recently culminated in the first practical implementation of forward \text{ILs} (\text{fILs}) being considered for inclusion in the upcoming Pectra fork (see design, EIP, and specs here). However, some concerns were raised about the specific mechanism proposed in EIP-7547, leading to its rejection.

Here, we introduce FOCIL, a simple committee-based design improving upon previous IL mechanisms (Forward ILs, COMIS) or co-created blocks (CBP) and addressing issues related to bribing/extortion attacks, IL equivocation, account abstraction (AA) and incentive incompatibilities. Note also Vitalik’s recent proposal “One-bit-per-attester inclusion lists”, where the committee chosen to build the list is essentially the whole set of attesters.

Design

In this section, we introduce the core properties of the FOCIL mechanism (see Figure 1.).

High-level overview

Each slot, a set of validators is randomly selected to become part of an inclusion list (\text{IL}) committee. \text{IL} committee members are responsible for creating local inclusion lists (\text{IL}_\text{local}) of transactions pending in the public mempool. Local \text{ILs} are then broadcast over the global topic, and the block producer must include a canonical aggregate (\text{IL}_\text{agg}) of transactions from the collected local \text{ILs} in its block B. The quality of \text{IL}_\text{agg} is checked by attesters, and conditions the validity of block B.

Figure 1. Diagram illustrating the FOCIL mechanism.

Mechanism

  • Validator Selection and Local Inclusion Lists
    • A set of validators is selected from the beacon committee to become \text{IL} committee members for slot n. This set is denoted as \text{IL}_\text{committee}(n) = \{ 1, \dots, m \}, where m is the number of \text{IL} committee members.
    • Each \text{IL} committee member i \in \text{IL}_\text{committee}(n) releases a local \text{IL}, resulting in a set of local \text{ILs} for slot n, defined as \text{IL}_\text{local}(n) = \{ \text{IL}_1, \dots, \text{IL}_m \}.
    • Each local \text{IL}_i contains transactions: \text{IL}_i = \{ \text{tx}^1_i, \dots, \text{tx}^{j_i}_i \}, where each \text{tx} is represented as \text{tx} = (\text{tx}[\text{From}], \text{tx}[\text{Gas Limit}]), and j_i indicates the number of transactions in \text{IL}_i. The From field represents the sender’s address, and the Gas Limit field represents the maximum gas consumed by a transaction. This is used to check whether a transaction can be included in a block given the conditional IL property.
  • Block Producer’s Role
    • The block producer of slot n, denoted \text{BP}(n), must include an \text{IL} aggregate denoted \text{IL}_\text{agg} and a payload in their block B = (B[\text{IL}_\text{agg}], B[\text{payload}]).
    • \text{IL}_\text{agg} consists of transactions: \text{IL}_\text{agg} = \{ \text{tx}^1_\text{agg}, \dots, \text{tx}^{t_\text{agg}}_\text{agg} \} where each transaction \text{tx}_\text{agg} is defined as (\text{tx}_\text{agg}[\text{tx}], \text{tx}_\text{agg}[\text{bitlist}]), and the \text{payload} must include transactions present in the \text{IL}_\text{agg}.
    • The bitlist \text{tx}_\text{agg}[\text{bitlist}] \in \{0, 1\}^m indicates which local $\text{IL}$s included a given transaction.
    • The function \text{Agg} takes the set of available local ILs \text{IL}_\text{local}(n) and outputs a “canonical” aggregate. The proposer aggregate \text{IL}_\text{agg}^\text{proposer} is included in block B, and each attester evaluates its quality by comparing it against its own \text{IL}_\text{agg}^\text{attester}, using the function \text{Eval}(\text{IL}_\text{agg}^\text{attester}, \text{IL}_\text{agg}^\text{proposer}, Δ) \in \{ \text{True}, \text{False} \}.
  • Attesters’ Role
    • Attesters for slot n receive the block B and apply a function \text{Valid}(B) to determine the block validity.
    • \text{Valid} encodes the block validity according to the result of \text{Eval}, as well as core IL properties such as conditional vs. unconditional.
    • Here are some scenarios to illustrate \text{IL}-dependent validity conditions:
      • If local \text{ILs} are made available before deadline d, but the proposer doesn’t include an \text{IL}_\text{agg}^\text{proposer}, block B is considered invalid.
      • If no local \text{ILs} are made available before deadline d, and the proposer doesn’t include an \text{IL}_\text{agg}^\text{proposer}, block B is considered valid.
      • If block B is full, local $\text{IL}$s were available before d, and the proposer doesn’t include an \text{IL}_\text{agg}^\text{proposer}, block B is still considered valid.
      • If \text{IL}_\text{agg}^\text{proposer} doesn’t overlap with most of attesters’ \text{IL}_\text{agg}^\text{attester} according to \text{Eval}, block B is considered invalid.

The core FOCIL mechanism could be defined as:

\mathcal{M}_\text{FOCIL}= (\text{Agg}, \text{Eval}, \text{Valid})

Timeline

The specific timing is given here as an example, but more research is required to figure out which numbers make sense.

  • Slot n-1, t = 6: The \text{IL} committee releases their local \text{ILs}, knowing the contents of block n-1.
  • Slot n-1, t=9: There is a local \text{IL} freeze deadline d after which everyone locks their view of the observed local \text{ILs}. The proposer broadcasts the \text{IL}_\text{agg} over the global topic.
  • Slot n, t=0: The block producer of slot n releases their block B which contains both the payload and aggregated \text{IL}_\text{agg}.
  • Slot n, t=4: The attesters of slot n vote on block B, deciding whether \text{IL}_\text{agg} is “good enough” by comparing the result of computing the \text{Agg} function over their local view of available local \text{ILs} (applying \text{Eval}) and checking if block B is \text{Valid}.

Aggregation, Evaluation and Validation Functions

As mentioned in the mechanism section, FOCIL relies on three core functions. Each of these needs to be specified to ensure the mechanism fulfils its purpose.

  • The \text{Agg} function is probably the most straightforward to define: Transactions from all collected local \text{ILs} should be deterministically aggregated and deduplicated to construct \text{IL}_\text{agg}. We let:

    • \text{IL}_\text{local} = \{\text{IL}_1, \text{IL}_2, \ldots, \text{IL}_m\} be the set of local inclusion lists collected from committee members m.
    • Each \text{IL}_i = \{\text{tx}_i^1, \text{tx}_i^2, \ldots, \text{tx}_i^{t_i}\}
      be the transactions in the local inclusion list of the i-th committee member.
    • Each transaction \text{tx} be defined by (\text{hash}, \text{sender}, \text{nonce})

    \text{Agg}(\text{IL}_\text{local}) can be thus defined as:

    \text{Agg}(\text{IL}_\text{local}) = \{ \text{tx} \mid \text{tx} \in \bigcup_{i=1}^{m} \text{IL}_i \}
  • The \text{Eval} function is used by each slot n attester to assess the quality of the \text{IL}_\text{agg} included in block B. Each attester computes the \text{Agg} function over all local \text{ILs} they have observed in their view and then compares their generated \text{IL}_\text{agg}^\text{attester} to the one included by the proposer, \text{IL}_\text{agg}^\text{proposer}. To allow for some leeway, the function includes an overlap parameter Δ (in %); a small sketch of \text{Agg} and \text{Eval} follows this list. The \text{Eval} function can then be defined as:

    \text{Eval}(\text{IL}_\text{agg}^\text{attester}, \text{IL}_\text{agg}^\text{proposer}, Δ) = \begin{cases} \text{True} & \text{if } \frac{|\text{IL}_\text{agg}^\text{attester} \cap \text{IL}_\text{agg}^\text{proposer}|}{|\text{IL}_\text{agg}^\text{attester} \cup \text{IL}_\text{agg}^\text{proposer}|} \geq Δ \\ \text{False} & \text{otherwise} \end{cases}

    Note that the \text{Eval} function, and especially its parameter Δ, will determine the trade-off between (1) the quality of the \text{IL}_\text{agg}^\text{proposer} and the agency we are willing to give to proposers, and (2) liveness, as we might see an increase in missed slots if the criteria are set too strictly.

  • The \text{Valid} function encodes whether the \text{IL}_\text{agg} conforms to pre-defined core \text{IL} properties, such as:

    • Conditional vs. Unconditional: Should the proposer include as many \text{IL} transactions in the block as possible as long as there is space left, or is there dedicated block space reserved for \text{IL} transactions?
    • Where-in-block: Where should \text{IL} transactions be included in the block? Should they be placed anywhere, at the top of the block, or at the end of the block?
    • Expiry: How long do transactions remain in the \text{IL} once they have been included? What happens if a slot is skipped?
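To make the aggregation and evaluation steps concrete, here is a minimal Python sketch of \text{Agg} (deduplicated union of the observed local lists) and \text{Eval} (overlap against the threshold Δ). It is an illustration of the definitions above, not protocol or client code:

from typing import FrozenSet, List, Tuple

Tx = Tuple[str, str, int]  # (hash, sender, nonce), as in the definitions above

def agg(local_ils: List[FrozenSet[Tx]]) -> FrozenSet[Tx]:
    # Deduplicated union of all observed local inclusion lists.
    out: set = set()
    for il in local_ils:
        out |= il
    return frozenset(out)

def eval_agg(attester_agg: FrozenSet[Tx], proposer_agg: FrozenSet[Tx], delta: float) -> bool:
    # True if the proposer's aggregate overlaps enough with the attester's own view.
    union = attester_agg | proposer_agg
    if not union:
        return True  # nothing was expected and nothing was included
    overlap = len(attester_agg & proposer_agg) / len(union)
    return overlap >= delta

# Example: two committee members, the proposer drops one transaction, delta = 0.6.
il_1 = frozenset({("0xaa", "0x1", 0), ("0xbb", "0x2", 3)})
il_2 = frozenset({("0xbb", "0x2", 3), ("0xcc", "0x3", 1)})
attester_view = agg([il_1, il_2])
proposer_agg = frozenset({("0xaa", "0x1", 0), ("0xbb", "0x2", 3)})
print(eval_agg(attester_view, proposer_agg, delta=0.6))  # 2/3 overlap -> True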

More rules

In the following section, we introduce other rules that could be added to the core mechanism to specify:

  • How users should pay for having their transactions included (\text{Payment})
  • How rewards can be distributed across FOCIL participants (\text{Reward})
  • How local \text{ILs} are constructed (\text{Inclusion})
  • Interactions between \text{IL} and payload transactions (\text{Priority}).

User Bidding, \text{Payment} and \text{Reward} rules

  • Users place bids based on the value they assign to having their transactions included in block B. They need to take into consideration the FOCIL mechanism \mathcal{M}_\text{FOCIL}, but also how the EIP-1559 mechanism works to set their base fees, denoted \mathcal{M}_\text{1559}. For instance, a user t makes a bid b^t(v^t, \mathcal{M}_\text{FOCIL},\mathcal{M}_\text{1559}) = (\delta^t, f^t), where \delta^t is the maximum priority fee and f^t is the maximum total fee (i.e., base fee r + priority fee \delta^t).
  • The vector of bids from all users is denoted as \mathbf{b} = (b^1, b^2, \dots, b^T), where each b^t represents the bid from user t.
  • The \text{Payment} rule p(\mathbf{b}) = (p_0(\mathbf{b}), p_1(\mathbf{b}), \dots, p_t(\mathbf{b}), \dots, p_m(\mathbf{b})) ensures that users pay no more than their priority fee \hat{\delta}^t = \min(\delta^t, f^t - r). Here, p_0(\mathbf{b}) represents the payment to the block producer, and p_t(\mathbf{b}) represents the payment made by user t to all other \text{IL} committee members, where the set of users has size m and the block producer is indexed by 0.

The \text{Payment} rule defined above is meant to give a general view of how the value paid by users’ transactions can be redistributed across FOCIL participants (e.g., \text{IL} committee members, block producer) to incentivize behavior that is considered good for the network, in this case preserving its censorship-resistant properties. Incentivizing \text{IL} committee members for including transactions strengthens the robustness of the mechanism by increasing the cost of censorship, or the amount a censoring party would have to pay for \text{IL} committee members to exclude transactions from their local \text{ILs}. Delving into the specifics of how the builder and \text{IL} committee members should be rewarded is beyond the scope of this post as distributing rewards in an incentive-compatible way, especially during congestion, gets quite complex.

However, here are three high-level options to consider:

  • Option 1: All transaction priority fees go to the builder, and \text{IL} committee members are simply not incentivized to include transactions in their local \text{ILs}. This simple option doesn’t require any changes to the existing fee market, but relies entirely on altruism from \text{IL} committee members. We could even consider an opt-in version of FOCIL, where validators can choose to be part of a list that may be elected to become \text{IL} committee members and participate in building \text{ILs} altruistically. However, it wouldn’t increase the cost of censorship, nor would it make it very appealing for validators to participate in the mechanism. This could also lead to out-of-band payments from users wanting to have their transactions included in local \text{ILs}.
  • Option 2: Priority fees from transactions included in the block are given to the \text{IL} committee members. To distribute rewards among members, we could implement a weighted incentive system by defining a \text{Reward} rule to calculate and distribute rewards for each member, considering the quantity (i.e., count) and uniqueness of transactions included in their local lists (see Appendix 1 of the COMIS post for more details). If transactions are not part of the \text{IL}_\text{agg}, priority fees go to the builder. However, this approach could be problematic during congestion periods with the conditional \text{IL} property, as builders might be incentivized to fill the block with transactions that are not in the \text{IL}_\text{agg}, even if \text{IL} transactions have higher priority fees. To address this, we might need to design a mechanism that redirects priority fees to the builder during congestion. However, the practical implementation and potential secondary effects need further investigation.
  • Option 3: A third option is to introduce a new, separate inclusion fee that always goes to IL committee members, while priority fees always go to the builder. This would likely address the concerns of Option 2 related to congestion but would introduce a whole other variable that users need to set. A useful distinction between Option 2 and Option 3 is whether the complexity is pushed onto the IL committee members or onto the end users.

Another interesting question to explore is the impact of fee distribution across \text{IL} committee members on mechanisms like MEV-burn. Options 2 and 3 would effectively “reduce the burn” and produce a similar effect as MEV-smoothing, but on a smaller scale limited to the size of the \text{IL} committee (h/t Anders).

\text{Inclusion} Rule

The \text{Inclusion} rule determines the criteria according to which \text{IL} committee members should build their local \text{ILs}. In FOCIL, we define it with the premise that IL committee members will try to maximize their rewards. Assuming Option 2 for the \text{Payment} rule, the \text{Inclusion} rule could be to include all transactions seen in the public mempool, ordered by priority fees.

\text{Priority} Rule

We assume the block will be made of two components: a payload and an \text{IL}_\text{agg} included by the proposer to impose constraints on transactions that need to be included in the builder’s payload. Imposing constraints to the block payload via the \text{IL}_\text{agg} thus requires a priority rule to determine what happens during congestion. Generally, the priority rule in FOCIL states that transactions in the \text{IL}_\text{agg} might be excluded if the block can be filled with the builder’s payload transactions. In other words, the block will still be valid even if some transactions in the \text{IL}_\text{agg} are not included, as long as the block is completely full (i.e., the 30 M gas limit is reached).

Note: Rules are not set in stone and should be interpreted as candidates for FOCIL. Rules also don’t necessarily have to be made explicit. For instance, we can define the \text{Reward} such that the dominant strategy of the \text{IL} committee is to adhere to the \text{Inclusion} rule without any kind of enforcement by the protocol.

Improvements and Mitigations

In this section, we discuss improvements over previous \text{IL} proposals, focusing on simplification and addressing specific implementation concerns.

Commitment attacks

One of the main differences between FOCIL and the forward IL (\text{fIL}) design proposed in EIP-7547 is that FOCIL relies on a committee of multiple validators, rather than a single proposer, to construct and broadcast the \text{IL}. This approach imposes stricter constraints on creating a "good" aggregate list and significantly reduces the surface for bribery attacks. Instead of targeting a single party to influence the exclusion of transactions from the \text{IL}, attackers would now need to bribe an entire \text{IL} committee (e.g., 256 members), substantially increasing the cost of such attacks. Previous designs (e.g., COMIS and anon-IL) also involved multiple parties in building inclusion lists but still relied on an aggregator to collect, aggregate, and deduplicate local \text{ILs}. In FOCIL, the entire set of attesters now participates in enforcing and ensuring the quality of the \text{IL} included in the proposer's block, thus removing any single-party dependency other than the proposer. Additionally, it is worth noting that a censoring proposer would have to forego all consensus and execution layer rewards and cause a missed slot to avoid including transactions in the \text{IL}.

Splitting attacks and IL equivocation

Another concern with \text{fILs} centered on possible "splitting" attacks using \text{ILs}. Splitting attacks like timed release or "equivocation" occur when malicious participants attempt to divide the honest view of the network to stall consensus. On Ethereum, a validator equivocating by contradicting something it previously advertised to the network commits a slashable offense. If evidence of the offense is included in a beacon chain block, the malicious validator gets ejected from the validator set. Quick reminder that in the EIP-7547 design, the proposer for slot n-1 is responsible for making the \text{IL} to constrain proposer n, and can broadcast multiple \text{ILs} (check out the No-free lunch post to see why, and how it relates to solving the free data availability problem). This means a malicious proposer could split the honest view of the network through \text{IL} equivocation without being slashed. However, this is not a concern with FOCIL, since the \text{IL}_\text{agg} has to be part of proposer $n$'s block. An \text{IL} equivocation would thus be equivalent to a block equivocation, which is a known, slashable offense from the protocol's perspective.

Incentives incompatibilities

Previous \text{fILs} proposals did not consider incentivizing the \text{IL} proposer(s) for including “good” transactions. Relying on altruistic behavior might be fine, but there is always the risk that only very few validators will choose to participate in the mechanism if there is no incentive to gain. There is a strong argument to be made that the adoption of any \text{IL} mechanism might be very low if validators risk being flagged as either non-censoring or censoring entities by revealing their preferences (see the Anonymous Inclusion Lists post), and if they are not rewarded for contributing to preserving the network’s censorship resistance properties. In FOCIL, we consider mechanisms to distribute rewards across \text{IL} committee members and mention two options (Option 2 and Option 3 in the \text{Payment} rule section) for sharing transaction fees based on the quantity (i.e., count) and uniqueness of transactions included in their local lists. We hope to continue working in this direction and to find incentive-compatible ways to increase the costs of censorship.

Same-slot censorship resistance

By having FOCIL run in parallel with block building during slot n-1, we can impose constraints on the block by including transactions submitted during the same slot in local \text{ILs}. This is a strict improvement over \text{fILs} designs, where the forward property imposes a 1-slot delay on \text{IL} transactions. This property is particularly useful for time-sensitive transactions that might be censored for MEV reasons (see Censorship resistance in onchain auctions paper). Admittedly, the mechanism is not exactly real-time because we still need to impose the “local \text{IL} freeze” deadline d so block producers have time to consider \text{IL}_\text{agg} transactions before proposing their block.

\text{IL} conditionality

A core property of \text{ILs} is their conditionality, which determines whether \text{ILs} should have dedicated block space for their transactions (unconditional) or share block space with the payload and only be included if the block isn't full (conditional). For FOCIL, we're leaning towards using conditional \text{ILs} for a couple of reasons. Firstly, it might generally be best to give sophisticated entities like builders the maximum amount of freedom in organizing block space as long as they include \text{IL} transactions. Allowing them to order transactions and fill blocks as they prefer, rather than imposing too many restrictions on their action space, reduces the risk of them using side channels to circumvent overly rigid mechanisms. Specifically, the unconditional property just couldn't really be enforced effectively with FOCIL, since builders wanting to use \text{IL}-dedicated block space could simply "buy up \text{IL} committee seats" from the elected validators to include their transactions via local \text{ILs}. Another reason to opt for conditional \text{ILs} is the flexibility in the size of the list. With unconditional \text{ILs}, a dedicated block space must be reserved with an arbitrary maximum \text{IL} gas limit (e.g., 3 M gas). In contrast, conditional \text{ILs} allow for a much more flexible \text{IL} size, depending on the remaining space in the block. The known tradeoff with conditional \text{ILs} is block stuffing: censoring builders might fill their blocks up to the gas limit to keep \text{IL} transactions out. More research is needed to determine the sustainability of block stuffing, as consecutive full blocks exponentially increase base fees and the overall cost of this strategy.

Account Abstraction accounting

In previous proposals, \text{IL} summaries were constructed as structures to constrain blocks without committing to specific raw transactions. Each entry in the \text{IL} summary (or \text{IL}_\text{agg} for FOCIL) represents a transaction by including the following fields: From and Gas Limit. Satisfying an entry in the \text{IL} summary requires that at least some transaction from the From address has been executed, unless the remaining gas in the block is less than Gas Limit. The idea is simple: if a transaction was previously valid and had a sufficiently high basefee, the only two things preventing its inclusion are the lack of sufficient gas in the block or its invalidation, which would require a transaction from the same sender to have been previously executed. Here we rely on a property of Ethereum EOAs: the nonce and balance of an EOA determine the validity of any transaction originating from that EOA, and can only be modified by such a transaction.

However, even limited forms of Account Abstraction that have been considered for inclusion in Electra (e.g., EIP-3074 or EIP-7702) allow a transaction to trigger a change in an EOA’s balance, without originating from that EOA. This raised concerns regarding previous \text{fIL} proposals, as proposer n is not aware of what is included in builder $n$’s payload when proposing its \text{IL}. This could lead to a scenario where proposer n includes a transaction txn_A from address A in the \text{IL}, while builder n includes an EIP-7702 transaction txn_B, originating from address B but sweeping out all the ETH from address A, and thus invalidating txn_A. Consequently, builder n+1 would no longer be able to include txn_A, though no other transaction from address A has been previously executed. In other words, the IL summary would be unsatisfiable.

In FOCIL, one simplification is that the constraints from the \text{IL}_\text{agg} apply to the block that is being built concurrently. This means a transaction in the \text{IL}_\text{agg} can’t be invalidated because of a transaction in the previous block, as it can in \text{fIL} designs. In other words, we do not need to worry about what happened in the previous block in order to check for satisfaction of the \text{IL}_\text{agg}. However, a builder could still insert EIP-7702 transactions in its payload that invalidate \text{IL}_\text{agg} transactions. To handle this case, we can do the following when validating a block:

  • Before executing the block’s transactions, we store nonce and balance of all From addresses that appear in the \text{IL}_\text{agg}.
  • After execution, we check the nonce and balance of all From addresses from the \text{IL}_\text{agg} again, and for each (From, Gas Limit) pair in the \text{IL}_\text{agg} we require that either the nonce or the balance has changed, or the Gas Limit is more than the remaining gas.

If the nonce has changed, some transaction from that address has been executed. If the balance has changed but the nonce has not, some AA transaction has touched that address. In either case, that address has transacted in the block, and the entry is satisfied.
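A minimal sketch of this two-step check, assuming pre- and post-execution state lookups are available (the names and data representation are illustrative):

def check_il_agg_entries(state_before, state_after, il_agg, remaining_gas):
    """Post-execution satisfaction check for (From, Gas Limit) entries in IL_agg.
    state_before / state_after map an address to its (nonce, balance) before and
    after executing the block's transactions."""
    for sender, gas_limit in il_agg:
        if gas_limit > remaining_gas:
            continue  # not enough gas left in the block for this entry
        nonce_0, balance_0 = state_before[sender]
        nonce_1, balance_1 = state_after[sender]
        # A changed nonce means a transaction from `sender` was executed; a changed
        # balance with an unchanged nonce means an AA transaction touched the account.
        if nonce_0 == nonce_1 and balance_0 == balance_1:
            return False
    return True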

Note: With "full" AA, transactions could have validity that depends on arbitrary state (e.g., the price changing in a Uniswap pool). In such cases, relying on a reduced form of transactions (i.e., entries with From and Gas limit fields) is insufficient, as the full validation logic of the transaction is needed. Due to the free data-availability problem, putting raw transactions on-chain is not an option. Instead, attesters could check this locally since they need to construct their own \text{IL}_\text{agg}^\text{attester} and could, therefore, evaluate the full validation logic. This allows them to verify if the transaction has been invalidated and if its inclusion should be enforced. However, attesters might have \text{IL}_\text{agg}^\text{attester}\text{s} that contain different transactions from the same From address, leading to a situation where one transaction might be invalidated while another is not. This would result in split views and potential attacks.

8 posts - 4 participants

Read full topic

Economics Burn incentives in MEV pricing auctions

Published: Jun 18, 2024

View in forum →Remove

Burn incentives in MEV pricing auctions

Thanks to Barnabé Monnot, Thomas Thiery and Caspar Schwarz-Schilling for feedback and comments.

Introduction

Overview

This post presents a rudimentary review of incentives for burning MEV under the “simple” MEV burn mechanism presented by Justin, as well as its slot auction counterpart, “execution auctions” presented by Barnabé. The analysis is also applicable to Francesco’s original MEV smoothing design. These auctions—involving builders bidding, attesters enforcing a base fee floor, and proposers selecting a winning bid—will be defined as MEV pricing auctions (in the author’s view, the “execution auction” moniker could also be extended to cover all MEV pricing auctions).

The post highlights how incentives to drive up the price floor (and thus burn more MEV) can emerge in these designs regardless of any direct profit motive among builders for doing so. Importantly, stakers and staking service providers wish to ensure that competitors do not attain more rewards for selling MEV capture rights than them. They may therefore integrate with builders to bid away competing stakers’ profits. Auctions that set a price floor on proposers’ MEV capture rights will thus be influenced by the overarching staking metagame. It is only at this layer that griefing attacks against proposers to burn their MEV capture rights can be understood. Adverse competition during the consensus formation process might hypothetically lead attesters to bias their MEV base fee floor during split views, rejecting or admitting blocks depending on how it impacts their bottom line (in their roles as both builders and stakers). This is something to be attentive to. Naturally, burning MEV might also be considered a public good, and such incentives are reviewed in the text as well.

MEV pricing auctions

In MEV burn–a simple design, Justin formulated an add-on to enshrined proposer–builder separation (ePBS), modifying the MEV smoothing design. Builders can specify a base fee and a tip in their block bids. At some specific time before the slot begins (e.g., 2 seconds), attesters observe the highest base fee among the bids (“observation deadline”) and impose it as a subjective base fee floor when attesting to the proposer’s block. Only bids with a base fee above the floor are accepted, and the base fee is burned.
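A minimal sketch of the attester-side check implied by this design; the bid representation and timestamps are assumptions made for illustration.

def attester_accepts(observed_bids, winning_bid, observation_deadline):
    """Simple MEV burn floor check: the floor is the highest base fee among bids
    seen before the observation deadline; the proposer's chosen bid is only
    attested to if its base fee meets the floor. Bids are (base_fee, tip,
    arrival_time) tuples; the base fee is burned, the tip goes to the proposer."""
    floor = max((base_fee for base_fee, _tip, t in observed_bids if t <= observation_deadline),
                default=0)
    base_fee, _tip, _t = winning_bid
    return base_fee >= floor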

If builders bid before the observation deadline with the same timing as today, then the mechanism will burn substantial MEV. Concerns have however been raised over the risk of collusion between proposers and builders and lack of proper incentivization. A recent write-up on the benefits of the design and MEV burn in general generated similar worries of a stable equilibrium of late bidding.

The design can be further modified to involve auctioning off the rights to the entire slot, 32 slots in advance (“execution auction”). A benefit of this design is the ability to offer long-lived preconfirmations and—hypothetically—the reduced value-in-flight during the auction. The same concerns raised for the block auction design can be applied to the slot auction design, because the beacon proposer might still benefit from colluding with builders to form late-bidding cartels when selecting the execution proposer.

A modified MEV pricing auction, MEV burn with builder kickbacks, attempts to compensate builders for bidding early. That design is not the focus of this post, but incentives and side effects in uncompensated MEV pricing auctions will affect its relevance.

Five burn incentives in MEV pricing auctions

The outlined concerns of late bidding are valid, but it turns out that it is not possible to analyze MEV burn without incorporating stakers as participating agents. In such an analysis, competition for attaining the most yield will—under equilibrium—drive participants to burn each other’s MEV. Other incentives for burning MEV also exist. The analysis starts from the most idealistic public good example in (A) and gradually builds toward a metagame of active collusion to discourage other stakers in (E) (see Figure 1).

Figure 1. Five types of builders potentially burning MEV in MEV pricing auctions: (A) Public good builder, (B) For-profit public good builder, (C) Extortion racket, (D) Staker-initiated griefing, (E) Staker-initiated griefing cartel. The incentives behind (D) are important to understand (indicated by an arrow).

(A) Public good builder

The first example is a builder that dedicates resources to burning MEV without a direct profit motive. If Ethereum’s users believe that burning MEV is a public good, and in particular if no other incentive is sufficient, they may come together to fund the development and operation of a public good builder. Initiatives to fund public goods are fairly prevalent within the Ethereum ecosystem. The public good builder can for example consistently bid according to guaranteed MEV at the observation deadline in the block auction design. This ensures that the MEV is burned while the builder will not suffer any direct losses from the bid. In the slot auction design, the builder would instead need to bid according to its expected MEV for the entire slot and might bid slightly below to stay safe.

The public good builder will likely not be the best and will often be outbid in terms of tips from other builders in the proposer auction (taking place after the observation deadline), in which the proposer selects a winning bid. But the operation can still be very impactful. After all, priority fees are a significant portion of all value (in this post these fees are also treated as MEV), and some further "low-hanging MEV fruit" is potentially available without dedicating particularly large resources to extraction. While the builder may use any public goods funding received diligently and not strive for any profit, pursuing an idealistic path can still raise the originators' public profile and provide significant economic benefits in the future (perhaps not even directly related to building blocks).

(B) For-profit public good builder

A builder that positions itself as providing a public good may also enjoy direct economic benefits from its operation if some validators sympathize with the mission. There may for example be a market fit for builders that do not censor, nor extract various types of toxic MEV. In the block auction design, the builder could keep the MEV base fee in line with the available (non-censorship/non-toxic) MEV during the attester auction, and then pivot to tipping afterward, retaining some small profit margin. The MEV in some blocks is not particularly geared towards specialized searchers, and stakers may not lose that much in tips for some blocks by selecting the public good builder. Therefore, the public good builder could have higher profit margins in the blocks it does eventually get to build than builders that have not positioned themselves as providing a public good. A builder bidding before the observation deadline might of course also hope that its bids are the only ones to reach the proposer in times of degraded network conditions.

(C) Extortion racket

Given the lower effort required for extracting some of the MEV, it seems like (A) and (B) could have a natural position and high impact within the Ethereum ecosystem. But it may very well be that no successful public good builder can be sustained over the long run. After all, many stakers will not be particularly enthusiastic over a builder that burns their MEV opportunities.

Still, consider the importance of a dedicated MEV-burning builder within the staking ecosystem. If the builder is operational, proposers will lose out on a lot of value relative to a world in which it does not operate. Is there a business opportunity here? Perhaps a builder could commit to burning the maximum possible MEV but abstain from doing so if it receives a bribe from the proposer? It seems natural that proposers would be willing to pay for this, since the proposer stands to capture most value from the available MEV if none is burned. But the prospect of competition makes the business model perilous. If a sole extortive builder is profitable, then a few more may try to enter the market as well. There is not much use in paying off two builders if it turns out that a third burned the MEV anyway through a bid. A mechanism for reconciling this ex-post would become rather complex. The validator may then be better off by simply not negotiating with any extortion racket.

While the extortion racket seems unsustainable, it helps to underscore the power that builders have over proposers. The ultimate incentive for burning MEV then emerges when the responsible actor changes from one unaffected by the staking equilibrium (the extorting builder) to one that is affected by it (other stakers). The auction will eventually become part of the metagame of the overarching staking equilibrium.

(D) Metagame—staker-initiated griefing

Staking service providers (SSPs) compete for delegated stake and derive income by taking a cut of the staking yield when they pass it back to the delegators. An SSP must ensure that the yield it offers delegating stakers is competitive relative to offers from other SSPs. The MEV pricing auction may therefore lead SSPs to burn competing proposers’ MEV by tightly integrating with builders or running them in-house. If a competitor burns an SSP’s MEV, then the SSP must respond in kind or will lose out on delegators and thus income. When considering the metalevel of SSPs, this equilibrium seems more stable than an equilibrium of late bidding leading to little or no MEV burn. All it takes to break the late-bidding cartel is one defecting SSP builder, forcing others to respond.

An SSP that through a builder griefs other stakers without taking any loss executes something comparable to a discouragement attack with an infinite griefing factor. This is a very advantageous attack, primarily because delegators will flow to the best performing SSP. In addition, a reduction in overall yield for other stakers pushes down the quantity of supplied stake, bringing up the equilibrium yield. Thus, even if some delegators do not flow to the SSP that burns its competitor’s MEV, the expected staking yield (that the SSP will share in the profit from) will still go up, if the competitor’s customers simply stop delegating. Of course, the cost of running the builder must be accounted for. But large SSPs can amortize that cost across a vast amount of yield-bearing validators.

Yet, directly profiting from the MEV is almost always better than burning it. When an SSP’s builder is able to extract more MEV in a competitor’s slot than any other builder, it will still be better off only bidding to a level that ensures it wins the auction. The SSP must thus make a probabilistic judgment as to the uniqueness of its MEV opportunity in the particular slot before deciding how to proceed (or more precisely, any edge in MEV value V_e relative to the second best builder). An SSP builder must in essence bid before the observation deadline up to the point where the expected payoff from burning the marginal MEV is equal to the expected payoff from waiting and hoping to extract it. There are some game-theoretic nuances to this that here will be set aside, with some aspects discussed in the next section. The point is to assert that there are stronger incentives for builders to bid before the observation deadline than what has been previously understood, because a builder might be run by an SSP that indirectly profits from burning other stakers’ potential MEV revenue.

What happens in the metagame to smaller SSPs and solo stakers? They may not be able to afford running a builder of their own to ensure that their competitors' MEV is burned. It is of course possible for solo stakers to try to come together to form a union around a builder, where each contributor is guaranteed to see their validators excluded from MEV base fee bids by the specific builder (and receive full tips during the proposer auction). There is then a question of whether they will be able to organize such a union, but also whether it would really be necessary. On the one hand, if there are several "griefing builders" running concurrently among the largest SSPs, parties holding less stake may not need to run their own griefing builder. Everyone will see their MEV burned anyway, since the big SSPs burn each other's and everyone else's MEV. On the other hand, a party not having a griefing builder readily available may be suboptimally positioned when considering the prospect of cartelization.

(E) Metagame—staker-initiated griefing cartel

Can builders operating at the metalevel collude to selectively burn or selectively not burn MEV, depending on the identity of the slot’s validator? The cartel would strive to ensure that all participating SSPs (or any union of solo stakers) receive the MEV in their validators’ proposed blocks, while minimizing MEV in all other validators’ blocks.

However, if attesters are honest, builders can only cartelize to selectively burn or not burn MEV that they uniquely are able to extract. As long as competing builders are operational, this substantially limits the power of any cartel. Therefore, the advantage of (E) over (D) is not substantial.

Proposer is part of the cartel

When the beacon proposer is part of the cartel, members will abstain from bidding before the observation deadline to ensure that as much value as possible flows to the proposer. This type of cartelization has been highlighted as a concern (1, 2) in the debate around MEV pricing auctions. The idea is that participants come to an explicit or implicit agreement to not bid before the observation deadline. Yet the incentive to burn MEV is stronger than previously understood, since stakers outside the cartel will wish to grief cartel members by bidding early (D), and so from this perspective, the risk of late-bidding-cartelization is lower than feared.

It might also be difficult to efficiently uphold cartelization, because it is not possible for members to know which, if any, defected in pursuit of (D). One avenue would be to try to share the profits from every slot to give all participants incentives to hold back bids before the observation deadline. Yet overall, the existence of (A), (B), and (D) means that some value will still reasonably be burned by public good builders or any competitors not part of the cartel.

Proposer is not part of the cartel

When the beacon proposer is outside the cartel, the goal is to deprive it of revenue while still capturing as much of the MEV as possible. It will still be more profitable for the cartel to extract any unique MEV opportunity rather than burn it. Define V_s as the value a builder can attain in the slot auction and V_b as its value for the block auction (from a block built at the observation deadline). When a builder can extract the most MEV, it has an edge V_e over the second-best builder (kept constant for simplicity). Just as in (D), the cartel can bid up to V_b-V_e or V_s-V_e, with the difference that V_e expands if the cartel collectively gains a larger edge against the best builder outside of the cartel. This expansion is what the cartel tries to capitalize on, both when the proposer is part of the cartel (expanding V_e to lower the burn) and when not (expanding V_e to increase builder profits). A challenge—just as in (D)—is that the cartel might not be able to properly estimate V_e. After the observation deadline, the cartel attempts to extract as much value as possible, leaving the MEV either burned or in their hands.

Collusion at other levels

The presentation so far has been somewhat simplistic. It bears mentioning that collusion need not happen at the level of the builders, but can for example happen at the level of searchers or any out-of-protocol relay that the cartel still finds beneficial to maintain before posting to the P2P layer. In all scenarios of successful cartelization, if some stakers (for example solo stakers) are unable to act collectively, they may end up at the short end of the discouragement dynamic.

Risks associated with attester–builder integration

The analysis so far indicates that (D) may have a significant effect on its own but that it does not necessarily lead to the riskier cartelization in (E). But what might happen when we give SSPs tools for depriving each other of revenue? While SSPs will always compete, competition in MEV pricing auctions is on the verge of seeping into the consensus formation process. At the consensus level, all participants are expected to behave honestly and are rewarded for good behaviour. Through staker–builder integration in (D)-(E), SSPs will come to actively influence each other’s rewards, cooperating or griefing each other. A risk is that SSPs might navigate down perilous paths in this landscape.

It has been noted that MEV pricing auctions suffer from attesters potentially having split views of the MEV base fee floor. Biasing the outcome in a split view one way or the other might benefit one builder over another, result in a block being forked out to deprive the beacon proposer of all rewards, or allow the proposer to reap higher rewards when selling MEV capture rights. One concern is that SSPs might eventually try to profit by tuning their attestations of the MEV base fee floor to produce favorable outcomes. This can also be done as part of a cartel. The honest majority assumption need not be broken to derive profits, due to split views. It is only necessary to put a thumb on the scale, and a competitive consensus formation might make such behavior more likely.

Of course, stakers who do not honestly attest to which bids they have observed at which specific time point subject themselves to risks of social slashing if malicious behavior can be uncovered. This is always a potential final resort under proof of stake. In essence, just as it is prudent to be cautious of MEV or excessive issuance as strata for cartelization, it also seems prudent to be cautious of MEV pricing auctions as a stratum for consensus adversity.

Block vs. slot auctions in terms of MEV pricing

Will block auctions or slot auctions burn more MEV? Is one more centralizing than the other? These questions are not easy to answer, because it depends on which burn incentive comes to dominate, the likelihood of cartelization under different designs, etc. This section will discuss some differences (previous writings on block vs. slot auctions provide a broader perspective).

Block vs. slot auctions concerning (D)

Assume that (D) becomes an important incentive for burning MEV. Further, assume a competitive market without cartelization and perfect information about how much MEV each participant can extract. In the block auction design, the builder can bid V_b-V_e for the block at the observation deadline to maximize burn while retaining opportunities to extract value. It then updates its block and bid through tips in the proposer auction up until the slot boundary. There is V_s-V_b worth of value that the proposer hopes to attain through tips, and V_e worth of value left for the builder (under these simplified conditions).

In the slot auction design, the builder can instead bid V_s-V_e already at the observation deadline. It is just buying the rights to build the block, not committing to its content, and that value is an entire slot’s worth of MEV. Naturally, V_s will here just be an estimate, and the risk that builders take on by bidding on an expected value instead of a tangible value might be worth some fraction of the total bid value. But incomplete information around competitors’ eventual final bids will likely serve to pull down the bid value at the observation deadline more. The staker–builder can ideally burn V_s-V_e of a competing beacon proposer’s auctionable MEV, and again retain V_e for itself. The difference in MEV burn between the two designs is then V_s-V_b.
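To make the comparison concrete with purely hypothetical numbers: if V_s = 0.5 ETH, V_b = 0.3 ETH and V_e = 0.05 ETH, the staker–builder can burn V_b-V_e = 0.25 ETH of a competitor's MEV in the block auction but V_s-V_e = 0.45 ETH in the slot auction, a gap of V_s-V_b = 0.2 ETH.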

If the staker–builder could estimate V_s also in the block auction design (which nominally is easier since it bids much closer to the deadline), it could bid V_s-V_e-V_g already at the observation deadline. Since the bid is attached to a block containing only V_b of MEV, V_g is reserved as a tip for the proposer auction. If there is no tip, the proposer might elect to pick the block from the observation deadline, depriving the builder of V_s-V_b. However, while the proposer might specifically wish to do so if the same builder bids with low tips also in the proposer auction, a staker can obfuscate its identity by running several builders (the kickback design disincentivizes obfuscation).

In either design, it seems most likely that the burn ends up being lower than these theoretical maxima due to incomplete information in combination with the fact that capturing the MEV is more valuable than burning it. The staker–builder will therefore operate with quite some margin to maximize expected profits.

Block vs. slot auctions concerning (A)-(B)

The analysis for (D) is to some extent also applicable for (A) and (B). The public good builder could theoretically bid higher in the slot auction than in the block auction. However, the risk associated with overbidding in the slot auction design might be more serious for these builders. In the block auction design, the available value will be much clearer, making it easier for an unsophisticated builder to make low-risk bids.

Value of preconfirmations

As previously mentioned, the slot auction design facilitates execution layer preconfirmations, which can provide a welfare gain to Ethereum. In addition, their value can be burnt (just as in execution tickets), since builders are bidding to attain that value. This increases the burn of the slot auction design.

Builder centralization under competition over expected MEV

If builders have different strengths and weaknesses, they will intermittently attain the highest V_b in the block auction design. While one builder might be able to extract the highest MEV in expectation, not all blocks will play to its strengths. However, in the slot auction, builders bid on expected MEV, and one specific builder might then always have the highest expected V_s. This could potentially be a centralizing force, depending on how secondary markets evolve.

Conclusion

There are strong incentives for burning MEV even in designs that do not directly compensate for it, for example to provide a public good service or to ensure that other participants in the staking metagame do not attain a higher yield. Uncompensated MEV pricing auctions accommodate these incentives. Of particular relevance is staker-initiated griefing (D). It seems clear that SSPs will seek to influence builders' bidding strategies, and this can lead to staker–builder integration. Still, this form of integration does not necessarily lead to censorship or higher MEV profits, and thus does not negate the sought-after benefits of proposer–builder separation. If it is desirable to give an outside party an independent incentive to burn MEV, then builder kickbacks are an option. They can also be applied to the slot auction design.

When implementing a MEV burn mechanism, it is important to ensure that the burn mechanism does not accidentally set fire to Ethereum's consensus mechanism. Giving SSPs tools for griefing each other could lead to adverse competition during the consensus formation process. A particular concern is then if emerging attester–builder integration leads attesters to bias their MEV base fee floor, rejecting or admitting blocks depending on how it impacts their bottom line (in their roles as both builders and stakers). Which of the different scenarios (A-E) would predominate is seemingly a more important parameter when evaluating the merits of MEV pricing auctions than the mechanism's ability to burn substantial MEV (which this post suggests it can).

1 post - 1 participant

Read full topic

Layer 2 Preconfirmations: On splitting the block, mev-boost compatibility and relays

Published: Jun 17, 2024

View in forum →Remove

Thanks to @FabrizioRomanoGenove, @meridian and Philipp Zahn for helpful comments and feedback on this post.

:question: What is a Preconfirmation?

There have been a lot of variations on the definition of preconfirmation going around recently in the Ethereum community. In this post we will keep the definition as simple and broad as possible in order to generate the least amount of confusion and avoid arguing on semantics as much as possible:

We call a preconfirmation mechanism any mechanism that ensures (non-positional) inclusion of a (bundle of) transaction(s), if execution is successful, in a finite and bounded amount of time from the emission of the preconfirmation.

:mag: XGA-Style Preconfirmations

We will analyze a specific kind of preconfirmation mechanism – as hinted to in this post on ethresearch – that we came up with some time ago and have been building since then:

An XGA-style preconfirmation mechanism is a preconfirmation mechanism that guarantees (non-positional) inclusion of a sized bundle of transactions in the bottom portion of a predetermined block to be minted 2 epochs after the preconfirmation was emitted. Maximum bundle size is determined at the time of emission of the preconfirmation.

:scissors: Splitting the Block

Looking at the previous definition, I assume the first couple of questions that come to mind are "what do you mean exactly by the bottom portion of a block?" and "how is the block that will include the bundle predetermined?". Our idea is pretty simple: Partition the block so as to keep a top-of-the-block (ToB)[1], high-priority section, in which traditional builders do their usual thing and which is allocated through a traditional mev-boost auction (or whatever the relay running it prefers), and a reserved bottom-of-the-block (BoB) section, which will serve as allocation space for preconfirmations. In this design, preconfirmation bundles will be allocated via a separate auction in the form of forward contracts.

:busts_in_silhouette: A Two-Auction Format

As briefly mentioned above, in the XGA-style split-block design, preconfirmations are allocated in a completely separate way from the traditional mev-boost auction, allowing them to coexist without excessively disrupting the ecosystem. Traditional builders will be able to do their own thing with minimal adjustments, while everyone else can still enjoy the benefits of preconfirmations.

In simple terms: An XGA-style BoB auction is a multi-unit auction selling gas tokens for a specific block B in fixed-size units (e.g. 100 K gas). These tokens can then be used to submit a bundle[2] that is guaranteed inclusion in B if execution is successful (a minimal sketch of the resulting admission check follows the example below).

As an example, picture this scenario:

  • :clock2: At the start of epoch N-2 we know that the validator V, serving XGA-style preconfirmations, will be the proposer for the K-th slot of epoch N.
  • :oil_drum: 5 M gas out of the standard 30 M will be auctioned off into 50 gas tokens, each representing a capacity of 100 K gas.
  • :shopping_cart: At some fixed time t before the start of slot K, a multi-unit auction allocating the tokens is run. Aki manages to win 5 tokens for K, for a combined capacity of 500 K.
  • :alarm_clock: Within the deadline fixed at some time d before the end of K, Aki uses the 5 tokens to submit a bundle of size just over 400 K gas.
  • :outbox_tray: In the meantime, other BoB auction winners submit their own bundles.
  • :dollar: At the start of K, a traditional mev-boost auction for 25 M gas is run as usual by all relays, and is won by Bogdan via relay R.
  • :brick: After deadline d is reached and the mev-boost auction is over, the BoB part is assembled and attached at the bottom of the max-25 M block submitted by Bogdan via relay R.
  • :tada: Since Aki’s bundle contained no reverting transactions, it is included without any problem – together with the non-reverting bundles submitted by the other BoB winners – somewhere after the portion built by Bogdan.
  • :satellite: The block for K gets broadcasted as usual.
  • :x: Excess tokens for K that didn’t get spent can no longer be used.
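Here is a minimal sketch of the bundle admission check implied by the scenario above; the function and constant names are hypothetical, and revert checks are left to the merge step.

TOKEN_GAS = 100_000  # each gas token covers 100 K gas of BoB space

def admit_bob_bundle(tokens_held: int, bundle_gas: int, submitted_at: float, deadline: float) -> bool:
    """A BoB bundle is accepted if it arrives before the deadline d and fits within
    the capacity of the tokens its sender won for that slot."""
    return submitted_at <= deadline and bundle_gas <= tokens_held * TOKEN_GAS

# Aki from the example: 5 tokens (500 K capacity), a bundle just over 400 K gas.
print(admit_bob_bundle(tokens_held=5, bundle_gas=410_000, submitted_at=10.0, deadline=12.0))  # True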

:brick: Who Builds the Blocks, then?

Block building, in the case of XGA-style preconfirmations, is handled by multiple parties:

  • :package: The ToB part is built by traditional mev-boost builders as usual.
  • :gift: The BoB part is assembled by the party running the BoB auction.
  • :brick: Merging the two parts and sending the block over is handled by the relay.

In this setup, the relay takes on more work and responsibilities than it currently does. We will explore a potentially beneficial approach to this change later.

:money_with_wings: What Are the Economic Advantages of Preconfirmations?

Well… In general, for the whole range of designs that are being discussed right now this is not clear yet! Conjecturally, some of the proposed preconfirmation mechanisms will allow more value to trickle down to validators, but since the preconfirmation design landscape is so broad and confused right now it’s hard to take into account all the possible market effects that could come out of such designs. For example, most of the preconf mechanisms currently being discussed are pretty unfriendly towards what has been one of the main APY-cows for validators since the dawn of mev-boost: competitive builder/searchers.

:game_die: Why Are We Betting on XGA-Style Preconfs?

It seems clear to us that reserving a spot for non-priority-sensitive transactions can offer several benefits:

  • Users and platforms (e.g. rollups) that are not involved in competitive building/searching and just don't care about running HFT operations on L1 can greatly benefit from separating their concerns from those of competitive builder/searchers.
  • On the other end, it eases some of the pressure on the competitive builder/searcher side by removing some of the burden of having to include “filler transactions” to keep their blocks competitive. E.g. freeing them from needing to include blob-bearing transactions that could negatively impact latency.
  • It makes pricing inclusion preconfirmations simpler, since they are still priced by the usual gas model, while the preconf inclusion market is kept separate from the traditional priority market for position-sensitive transactions.
  • Moreover, we believe in gradual change, allowing time for everyone to adapt to and observe the effects of new, potentially disruptive features in a controlled manner. A split-block design compatible with traditional mev-boost block building offers a less intrusive path to adoption.

:bulb: Rethinking Relays

At the moment, running a relay naively is mostly a non-remunerative gig. Under XGA-style preconfirmations, the relay does significantly more work and takes on more risk than before: e.g., if a block is missed and/or already-sold preconfirmation tokens end up not getting included because the relay malfunctions, whoever bought them incurs an active loss of assets. While this sounds scary, it is also a good opportunity to rethink the role of relays in the Ethereum ecosystem.

:shield: Insurance and Reward Mechanisms for Relays

What we are proposing is that a relay can subscribe to an XGA-style preconf platform by staking a collateral that could be used to offer the damaged parties a refund in case of the relay malfunctioning, while sharing a percentage of the platform revenue each time it submits a successful block that includes XGA-enabled preconfirmations[3].

:mega: Introducing XGA


XGA – eXtensible Gas Auctions – is the first L2 platform for XGA-style preconfirmations (lol), designed and built by the combined efforts of Manifold Finance and 20Squares. We’re very willing to make this an open and collaborative effort, so if you have any feedback and/or are interested in building this together with us, please reach out!

Right now we have released on mainnet our v1.0 (yes, this is not a beta, we’re ready to go and currently onboarding validators), with the caveat that in v1.0, the ToB mev-boost auction can only be run on a single relay. We’re currently working on shipping v2.0, which will allow a relay-agnostic auction to be run in the ToB part. You can find more about it at docs.xga.com.


  1. We have specific terms for ToB and BoB auctions, namely α and β-auctions respectively. ↩︎

  2. Note that this doesn’t exclude the possibility of overwriting an already submitted bundle, if re-submitted before the deadline. ↩︎

  3. We are already iterating on designs for captive insurance mechanisms for XGA-style platforms. We will upload a new post detailing some of the possible designs soon. ↩︎

4 posts - 2 participants

Read full topic

Networking IPv6 vs Ethereum?

Published: Jun 15, 2024

View in forum →Remove

I started writing this after a few days of unsuccessful attempts to run a solo node behind CGNAT, as just a brainbreeze on whether it could somehow be done differently to ease up solo node setup.
So far it does not seem to be the answer; however, I want to share some thoughts on analogies with IPv6 networking to see if anyone has ideas on how this could be useful.

ipv6 101

An IPv6 address consists of 128 bits, represented as eight groups of four hexadecimal digits separated by colons. Each group is called a hextet. For example:

2001:0db8:85a3:0000:0000:8a2e:0370:7334

where

  • Global Routing Prefix: 2001:0db8 (Assigned by the Regional Internet Registry)
  • Subnet ID: 85a3:0000 (Identifies a specific subnet within the network)
  • Interface ID: 0000:8a2e:0370:7334 (identifies the individual interface or device on the subnet)

This hierarchical structure allows for efficient routing of IPv6 packets. Routers can quickly determine the destination network based on the global routing prefix, then further refine the path based on the subnet ID.
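As a quick illustration of this breakdown, here is a minimal sketch; the field boundaries follow the simplified split above, not real-world prefix-length conventions.

import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")
hextets = addr.exploded.split(":")  # eight 16-bit groups

global_routing_prefix = ":".join(hextets[0:2])  # 2001:0db8
subnet_id = ":".join(hextets[2:4])              # 85a3:0000
interface_id = ":".join(hextets[4:8])           # 0000:8a2e:0370:7334
print(global_routing_prefix, subnet_id, interface_id)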

Multiple gateways may exist from an IPv6 subnet to the public IPv6 space. Addresses within an IPv6 subnetwork may access the global IPv6 address space. Routing protocols such as OSPFv3 or BGP may be used.

Subnet Gateway analogy

Just as an IPv6 router directs traffic to devices within its subnet, an RPC node facilitates communication with nodes and smart contracts within its respective blockchain network.

Consider the concept of Chain IDs: in blockchains, Chain IDs are unique identifiers for different networks (e.g., Ethereum Mainnet has Chain ID 1, while various testnets have different IDs). Similarly, in IPv6, a subnet is identified by its unique prefix, which is a portion of the IPv6 address.

Address analogy

Since interface IDs in IPv6 are only 64 bits long, they are too small to fit Ethereum's 160-bit addresses.

However, what could be useful is using interface IDs to identify the nodes in the P2P network, forming a VPC for Ethereum.

In IPv6, organizations or individuals can assign themselves a unique local subnet prefix, effectively creating their own independent addressing space.

Cryptography for IPv6 address generation

Secure Neighbor Discovery (SEND) is a security extension to the Neighbor Discovery Protocol (NDP) in IPv6, designed to address the vulnerabilities in the original NDP.

There are several papers and RFCs (Requests for Comments) relevant to cryptography for IPv6 address generation, particularly focusing on enhancing privacy and security:

RFC 3972 - Cryptographically Generated Addresses (CGA): This RFC introduces the concept of CGA, where the interface identifier of an IPv6 address is generated using a cryptographic hash function from a public key and other parameters. This approach aims to bind a public key to an address securely, deterring address theft and enhancing authentication.

RFC 7721 - Security and Privacy Considerations for IPv6 Address Generation Mechanisms: This RFC discusses the security and privacy implications of different IPv6 address generation mechanisms, including SLAAC, privacy extensions, and CGAs. It provides recommendations for mitigating potential risks and improving privacy protection.

IPv6 Cryptographically Generated Address: Analysis, Optimization and Protection: This paper delves into the details of CGAs, analyzing their security and performance characteristics. It proposes optimizations to improve the efficiency of CGA generation and suggests additional security measures to strengthen the protection they offer.

IPv6 Bitcoin-Certified Addresses (Mathieu Ducroux): proposes a mechanism for enhancing the security and privacy of IPv6 addresses by leveraging the Bitcoin blockchain.
In essence, BCAs are IPv6 addresses where the interface identifier is derived from a Bitcoin address.
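A toy sketch of the general idea behind cryptographically generated interface identifiers; this intentionally omits the modifier/Sec/collision-count machinery of RFC 3972 and the Bitcoin-specific parts of BCAs, and is only meant to show how a key can be bound to an interface ID.

import hashlib, os

def cga_like_interface_id(public_key: bytes, subnet_prefix: bytes) -> str:
    """Derive a 64-bit interface identifier from a hash over a random modifier,
    the subnet prefix and a public key, binding the key to the address."""
    modifier = os.urandom(16)
    digest = hashlib.sha1(modifier + subnet_prefix + public_key).digest()
    iid = digest[:8]  # 64 bits for the interface identifier
    return ":".join(iid[i:i + 2].hex() for i in range(0, 8, 2))

# os.urandom(33) stands in for a compressed public key here.
print(cga_like_interface_id(os.urandom(33), bytes(8)))  # prints a random-looking interface ID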

How could this be beneficial?

If we can think of the Ethereum ecosystem as one big VPN where chains are addressable as subnets, that potentially solves fragmentation issues, allowing the use of already established discovery protocols to route traffic between different nodes, features like multicast, etc.

2 posts - 2 participants

Read full topic

Data Science Slot Inclusion Rates and Blob Market Combinatorics

Published: Jun 14, 2024

View in forum →Remove

Introduction

This post offers a fresh perspective on the current design and constraints of the blob market, presenting additional data (from a blob tracking dashboard created at Primev) on slot inclusion concerning reorg risks, and a combinatorial analysis of the blob market design.

Recent research on the blob market [1], [2], [3] has focused on how larger blobs increase reorg risk due to higher latency. This could incentivize builder censorship to reduce latency by excluding blobs from blocks.

Despite the blob market being under capacity and the base fee remaining at 1 wei, research [4] shows that rollups like Optimism and Base often have high slot inclusion rates, taking more than five slots to be included. Given the underutilized market, this seems counterintuitive, suggesting possible latency censorship. However, the current blob submission strategies and blob market combinatorics suggest that higher slot inclusion rates may indicate increased competition between blob producers rather than builder censorship.

Blob Submission Strategies

The table below from the dashboard shows a 7-day snapshot of the largest blob market participants.

There are now 3 major strategies regarding the number of blobs submitted per transaction:

  • submit the max 5-6 blobs at a time (blast, base, linea, optimism)
  • submit 3-4 blobs at a time (arbitrum, zksync)
  • submit 1-2 blobs at a time (taiko, metal, paradex, scroll)

Aggregating blobs into fewer transactions reduces transaction expenses (base fee, blob fee, priority fee) but increases slot inclusion times. In contrast, smaller blob transactions improve slot inclusion times at the cost of higher transaction expenses.

Slot Inclusion Rates

The next chart displays a time series overlay of base block demand (total transaction fees and base fee in gwei) with the slot inclusion rate for each blob transaction. It shows high slot inclusion rates, up to 30 slots, even during periods of low blockspace demand.

The table mentioned earlier contains the average slot inclusion rate for each rollup. Base, which submits the largest blob transactions, has the highest, averaging 13 slots. Taiko has the lowest average at 1.7 slots and currently submits only single blobs in each transaction.

Base slot inclusion rate:

Taiko slot inclusion rate:

Builder Slot Inclusion Rates

This table examines slot inclusion rates from the builder’s perspective, including the number of blocks, blob transactions, average blob count, and priority fees collected.

A higher slot inclusion rate means a blob has waited longer to be included in a block. An efficiency metric would be to have the lowest possible slot inclusion rate, indicating that builders are including blobs sooner rather than later.

Builders like Titan and Beaverbuild have more efficient blob slot inclusion rates than vanilla builders. They also have the lowest average blobs per block. This could be because they accept strategies like Taiko's small blob transactions more readily than other block builders.

Combinatorics

This notebook uses dynamic programming to count the possible combinations of blob transactions in the current blob market. Given the current capacity of 6 blobs per block, there are 11 possible combinations (the integer partitions of 6); a short sketch reproducing these counts follows the list below.

Occurrences of each number:
1: 19
2: 8
3: 4
4: 2
5: 1
6: 1
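A minimal sketch that reproduces the 11 combinations and the occurrence counts above (a plain recursive partition enumeration rather than the notebook's dynamic-programming formulation):

from collections import Counter

def partitions(n, max_part=None):
    """Yield all ways to fill a block with capacity for n blobs,
    as non-increasing tuples of per-transaction blob counts (the integer partitions of n)."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

parts = list(partitions(6))
print(len(parts))  # 11 possible combinations for a 6-blob block
print(dict(sorted(Counter(x for p in parts for x in p).items())))
# {1: 19, 2: 8, 3: 4, 4: 2, 5: 1, 6: 1}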

A trivial observation is that there is only one combination that can fit a 5-blob transaction and only one that can fit a 6-blob transaction. Since 4 out of 10 rollups submit these 5- and 6-blob transactions, there will only be one winner. Additionally, a single 1-blob transaction can "censor" a 6-blob transaction for an entire slot by being accepted first.

The combinatorics of the current blob market size suggest that the small capacity itself is causing the high slot inclusion rates, rather than blob censorship latency. This indicates that the censorship comes not from builders but from competition among blob users.

This raises an important question: what is the optimal maximum number of blobs allowed in a block relative to the maximum number that can fit in a block? Would the combinatorics be more favorable if the maximum blob size were 3 instead of 6? Would it be better to allow 9 blobs per block instead of 8? There is an economic incentive to group blobs as large as possible to save on costs, which disproportionately favors larger rollups over smaller ones until blob sharing becomes feasible.

Bidding Strategies

Currently, rollups use static bidding strategies for their blobs, generally resubmitting them if their bids sit in the mempool for too long. This shows a certain level of insensitivity to slot inclusion for each rollup. If a blob is delayed for 100 slots, there seem to be no consequences or incentives to improve slot inclusion rates at this time.

The two charts below show sample bidding strategies used by Base and Taiko, just two examples of the rollup strategies available on the dashboard. Base averages a priority fee of 4.5 gwei, while Taiko averages 2.9 gwei. There is no correlation between priority bids and base fee fluctuations.

base:

taiko:

Resubmitting blobs through the mempool is expensive and generally not recommended as a good practice. This creates the problem of how blob producers can become more competitive in their bidding strategies if they need to make their slot inclusion rates more efficient.

One solution is to use preconfirmations. For example, using a protocol such as mev-commit to attach preconf bids to blob transactions would allow rollups to dynamically adjust their bids without having to resubmit blobs into the mempool. A stronger solution would be to receive preconfirmations from proposers to guarantee that builders wouldn’t be able to censor blobs.

Conclusion

Analysis of slot inclusion rates and blob market combinatorics reveals a complex interplay between efficient slot inclusion, competition, and potential censorship. While current data suggests that high slot inclusion rates are primarily driven by competition among blob users, there remain several unanswered questions:

  • What is the optimal maximum number of blobs per block to balance efficiency and fairness?
  • How can blob producers develop more competitive bidding strategies?
  • Could the implementation of dynamic bidding strategies or preconfirmations significantly reduce slot inclusion times?
  • What long-term effects might increased competition and potential latency censorship have on the blob market?

The combinatorics of the blob market are a fundamental factor affecting slot inclusion efficiency and cost. By understanding and optimizing these combinatorial constraints, it is possible to enhance market dynamics, reduce costs, and improve transaction efficiency for all participants. Further research and experimentation are needed to address these questions and optimize the blob market for all participants.

1 post - 1 participant

Read full topic

Layer 2 A simple, small, mev-boost compatible preconfirmation idea

Published: Jun 13, 2024

View in forum →Remove

Disclaimer: This post will not contain any nice images, because I am artistically inept.

The reasons why I’m writing this are the following:

  1. Preconfs are a very hot topic right now and many people are working on them;
  2. As usual, some of the proposed solutions advocate for punching changes all the way into the main Ethereum protocol. I'm personally not a fan of this, since life is already full of oh my God, what have I done?™ moments and more drama™ is the last thing everyone probably needs.
  3. MEV-boost is probably the only thing this community has really almost universally agreed upon since MEV has been a thing. So I'd much rather preserve backwards-compatibility with MEV-boost and generalize from it than come up with more innovative ways to balkanize our ecosystem even further.

A primer on MEV-boost

This section exists just so that everyone is on the same page. Feel free to skip it or to insult me if you think I summarised things stupidly.

In layman terms, MEV-boost works like this:

  1. Proposer polls the relayer(s) for their best blocks;
  2. Relayer(s) send their best block headers to proposer;
  3. Proposer picks the best block by comparing the block headers received and the block built in-house.
  4. For an in-house block, proposer just signs and broadcasts. For a mev-boost block, proposer signs the header. Relay will broadcast the complete block revealing the payload.

This mechanism is nice because the only party that builders have to trust is the relayer: the proposer cannot unbundle blocks and scam builders.

The actual idea

The idea I have in mind works towards extending mev-boost by allowing for preconfs (and most likely for a lot of other stuff if one wants to). Notably, it does not change points 2,3,4 in the previous section, but only point 1.

Suppose proposer has a stash of preconfed txs on the side. The only thing the idea assumes is the following:

By the time the proposer starts polling, it needs to have a finalized list of preconfed txs to include.

The reason for this will become clear shortly. With this list at hand, the proposer sends a signed JSON object to the relayer when it polls, containing the preconfed txs. This object could look, for instance, like this:

{
    proposer: address,
    slotNumber: int,
    gasUsed: int,
    blobsUsed: int,
    mergingPolicy: int,
    mustBeginWith: txBundle,
    mustContain: txBundle,
    mustOmit: txBundle,
    mustEndWith: txBundle,
    otherStuff: JSON,
    signature: signature
}

This design is just an idea. It is by no means fixed yet and most likely can be improved upon both in conceptual and performance terms, so take it with a grain of salt.
The fields proposer and slotNumber are self-explanatory. The fields mergingPolicy, mustBeginWith, mustContain, mustOmit, mustEndWith can all be empty: they contain bundles of transactions that must (or must not) be included in the block. These are, effectively, the fields the proposer can use to signal to the relayer: “hey, I need the block to respect these requirements, because of previous agreements I made with other parties.”

How the proposer comes to define this JSON object is not our concern and is outside the scope of this idea. Just for the sake of clarity, though, let’s consider an example: XGA, one of the projects 20[ ] is contributing to, provides preconfs as tokenized bottom-of-block space. As such, XGA-style preconfs will produce objects where only mustEndWith is not empty.

The fields gasUsed and blobsUsed tell the relay how much gas and blobs the ‘preconf space’ already claimed. otherStuff exists to be able to extend this standard in the future without more drama™.

Merging policies

The mergingPolicy field instructs the relay how to deal with all this information. This is fundamental because, in the end, the relay will still run a traditional mev-boost auction for the remaining blockspace. As soon as a block is built by more than one party, there’s a risk that the different parties step on each other’s toes. As such, mergingPolicy serves as a well-defined conflict-resolution policy. If you need a mental reference, think of git conflicts and automated ways of resolving them.

How to define merging policies is up for debate. The community could agree on a common repository where merging policies are defined, voted on, and agreed upon, and where the merging algorithms are explicitly provided. So, for instance, one merging policy could be:

If the payload coming from the builder contains a transaction that also appears in the preconf bundle, deal with it in the following way:

As said above, XGA sells bottom-of-block (BOB) space as preconfs and leaves the top of the block (TOB) open for traditional mev-boost auctions. As such, it has already defined and implemented a merging policy for its bottom-of-block case, which will hopefully be open sourced soon.
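For concreteness, here is a minimal sketch (in Python, with made-up field and transaction names, so purely illustrative) of what one such repository-defined merging policy could look like: it takes the constraints from the polling payload plus the builder’s transaction list, drops duplicates and forbidden transactions, and pins the preconfed bundles in place. A real policy would operate on full transactions and re-validate the merged block.

def merge(polling: dict, builder_txs: list[str]) -> list[str]:
    # Drop builder txs that are forbidden or already pinned by the preconf bundles,
    # then splice the pinned bundles around the remaining builder payload.
    forbidden = set(polling.get("mustOmit", []))
    pinned = set(polling.get("mustBeginWith", []) + polling.get("mustContain", []) + polling.get("mustEndWith", []))
    body = [tx for tx in builder_txs if tx not in forbidden and tx not in pinned]
    return (list(polling.get("mustBeginWith", []))
            + list(polling.get("mustContain", []))
            + body
            + list(polling.get("mustEndWith", [])))

# An XGA-style bottom-of-block policy: the preconfed bundle is pinned at the end of the block.
polling = {"mustEndWith": ["preconf_tx_1", "preconf_tx_2"], "mustOmit": ["censored_tx"]}
print(merge(polling, ["mev_tx", "censored_tx", "preconf_tx_1"]))
# -> ['mev_tx', 'preconf_tx_1', 'preconf_tx_2']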

What does the relay do?

This is probably already kinda clear at this point, but to make it explicit: The relay receives this signed JSON object when the proposer polls. What should it do with it? First of all, it should make some of these fields public to the builders, such as mergingPolicy, gasUsed, blobsUsed and mustOmit. This way builders will know what they can build.

When a block from a builder is received, the relayer will unbundle the block and apply the merging policy to merge it with the preconfed txs. The relay will sign the block header, and send it to the proposer.

From the POV of a builder, everything is kinda the same. They create their block using the info provided by the relay (in the simplest case this just means using slightly less gas than the limit) and submit it as their bid.

From this point on, everything works as in traditional MEV-boost.

Analysis

Ok, so let’s run a rapid analysis of this thing.

Pros

  1. Changes to MEV-boost proper are really minimal. We just need to define an API that MEV-boost must listen to in order to build the polling payload, and redefine the polling logic.

  2. Very little work from Proposer’s side. More work may be needed depending on the preconf system a given proposer wants to use, but then again this is out of the scope of this idea.

  3. Very little work from the builder’s side, unless people go overly crazy with merging policies. I do not think this is necessarily a problem though, as an overly deranged merging policy would result in builders not submitting anything, and most likely in relayers not taking bets in the first place. So I’d bet that this could pretty much evolve as a ‘let the markets decide’ thing.

  4. This idea is straightforwardly backwards-compatible with traditional MEV-boost: if the polling payload is empty, we collapse to a traditional MEV-boost auction with no other requisites.

  5. This idea allows for gradual phasing out of MEV-boost if the community so decides. For instance, proposers may agree to produce bundles where gasUsed is a very low parameter in the beginning (it won’t exceed 5M for XGA, for instance), meaning that the majority of blockspace would come from traditional building, with only a tiny part being preconfs or, more generally, ‘other stuff’. This parameter may then be increasingly cranked up or varied over time if the community so decides, effectively phasing out traditional block building in favor of ‘something else’. In this respect, yes, I know I’m being vague here, but when it comes to how this thing could be adopted I can only speculate.

  6. This system can be extended in many ways, and it is flexible. Merging policies could be defined democratically, and the polling info could be extended effectively implementing something akin to PEPSI, for instance. Another possible extension/evolution can be using otherStuff to define Jito-style auctions. I mean, there’s really a plethora of ways to go from here.

  7. The polling payload is signed by the proposer, and the block header is signed by the relayer. This keeps both parties in check as we accumulate evidence for slashing both. For instance:

    • Imagine I get some preconf guarantee from the proposer and that I have evidence of this. Again, how this happens is outside the scope of this post, as this mechanism is agnostic wrt how preconfs are negotiated.
    • Now suppose furthermore that my preconfed tx does not land in the block.
    • I can use the chain of signed objects to challenge both the relayer and the proposer. If my tx wasn’t in the polling info signed by the proposer, that’s the proposer’s fault. On the other hand, if it was, but it wasn’t in the block, then it’s the relayer’s fault. I think this is enough to build a slashing mechanism of sorts, which could for instance leverage some already available restaking solution (see the sketch after this list).

    Note: If there’s enough interest in this idea, we as 20[ ] can throw some open games at it and simulate the various scenarios. Let me know!

  8. The Ethereum protocol doesn’t see any of this. So if it fucks up, we just call it a day and retire in good order without having caused the apocalypse: relays will only accept empty payloads, proposers will only send empty payloads, and we’ll essentially revert to mev-boost without anyone having to downgrade their infra. I think this is the main selling point of this idea: the number of ways to make stuff explode in mev-related infraland is countless, so this whole idea was built with an ‘it has to be failsafe’ principle in mind.
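As a quick illustration of the evidence chain from point 7, here is a minimal Python sketch (hypothetical field names, signature verification omitted) of how a challenger could attribute fault given the signed polling payload and the final block contents:

def attribute_fault(preconf_tx: str, polling_payload: dict, block_txs: list[str]) -> str:
    # Was the tx ever part of the proposer's signed polling payload?
    promised = any(preconf_tx in polling_payload.get(field, [])
                   for field in ("mustBeginWith", "mustContain", "mustEndWith"))
    if not promised:
        return "proposer at fault: preconfed tx missing from the signed polling payload"
    if preconf_tx not in block_txs:
        return "relayer at fault: tx was in the signed polling payload but not in the final block"
    return "no fault: tx was included as promised"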

Cons

  1. The relayer must unbundle builder blocks to do the merging. I do not think this creates a huge trust issue, as the relayer can already do this today: in general, a relayer that scams builders is a relayer that won’t be used again and will go out of business quickly.

  2. The relayer must do computational work. This is probably the major pain point. This idea entails slightly more latency, as an incoming bid cannot be relayed instantly because the mergingPolicy has to be applied. The computational penalty furthermore depends heavily on how deranged the merging policy is. As a silver lining, this computational work is provable, since both the merging info and the resulting block are signed. The result is that we have provable evidence to remunerate a relay for its work if we want to, possibly solving a major pain point for relayers in traditional mev-boost.

  3. The relayer is slashable if it screws up. Again, how this should be implemented is outside the scope of this idea, as this mechanism only provides the needed trail of evidence to implement slashing but does not deal with the slashing per se. Still, it is worth reasoning about the possible consequences: if slashing policies are implemented, relayers will most likely need to provide some collateral or implement some form of captive insurance. Again, this may mean more complexity on one hand but also opportunity on the other, as relayers may for instance decide to tokenize said collateral and develop mechanisms to make money out of these newly created financial instruments. As relayers are private enterprises, I’ll leave these considerations to the interested parties.

  4. The polling info must stay fixed. This is related to point 3 above and to point 6 of the Pros subsection: if the polling info changes all the time, it means huge computational stress for the relayer, and it furthermore allows for malicious behavior from the proposer: for instance, a proposer could send two different polling payloads and include a given preconfed tx in only one of them. How to resolve these inconsistencies is an open question. In my opinion, the wisest and simplest thing to do would be to require the polling info to be fixed, meaning that if a proposer signs conflicting payloads for the same slot this should be considered akin to equivocation, and thus a slashable offence.

    By the way, the consequence of this is that the idea proposed here necessarily excludes some preconf use cases. This is related to my comment here, and I think it is unavoidable if we want to keep MEV-boost around. As the majority of revenue from MEV comes precisely from the bids of very refined, high-time-frame searchers, and as I am quite sure that validators don’t want to give this money up, at least for now, ‘leaving these players be’ by ruling out such preconf use-cases is in my opinion the most practical option, and exactly the rationale motivating this idea.

Closing remarks

That’s it. If the idea is interesting enough let me know, I’ll be happy to start a discussion around it. The 20[ ] team will also be around at EthCC if you want to discuss this in person.

8 posts - 5 participants

Read full topic

Block proposer One-bit-per-attester inclusion lists

Published: Jun 13, 2024

View in forum →Remove

Inclusion lists are a technology for distributing the authority for choosing which transactions to include in the next block. Currently, the best idea for them is to have an actor from a set that is likely to be highly decentralized (e.g. consensus block proposers) generate the list. This authority is decoupled from the right to order (or prepend) transactions, which inherently demands economies of scale and so is likely to be highly concentrated in practice.

But what if we could avoid putting the responsibility onto a single actor, and instead put it on a large set of actors? In fact, we can even do it in such a way that it’s semi-deniable: from each attester’s contribution, there is no clear evidence of which transaction they included, because one individual piece of provided data could come from multiple possible transactions.

This post proposes a possible way to do this.

Mechanism

When the block for slot N is published, let seed be the RANDAO_REVEAL of the block. Suppose for convenience that each transaction is under T bytes (eg. T = 500); we can say in this initial proposal that larger transactions are not supported. We put all attesters for that slot into groups of size 2 * T, with k = attesters_per_slot / (2 * T) groups.

Each attester is chosen to be the j’th attester of the i’th group. They identify the highest-priority-fee-paying valid transaction which was published before the slot N block, and where hash(seed + tx) is between 2**256 / k * i and 2**256 / k * (i+1). They erasure-code that transaction to 2T bits, and publish the j’th bit of the erasure encoding as part of their attestation.

When those attestations are included in the next block, an algorithm such as Berlekamp-Welch is used to try to extract the transaction from the provided attester bits.
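To make the per-attester work concrete, here is a rough Python sketch of the group assignment and bit selection described above. Everything in it is an assumption for illustration: the attester count, the use of SHA-256, and the toy erasure coder (a real implementation would Reed-Solomon-encode the transaction so Berlekamp-Welch can decode it).

import hashlib

def erasure_encode(tx: bytes, out_bits: int) -> list[int]:
    # Placeholder coder: expand the tx into bits and repeat/pad to out_bits.
    bits = [(byte >> k) & 1 for byte in tx for k in range(8)]
    return [bits[n % len(bits)] for n in range(out_bits)]

def attester_bit(seed: bytes, attester_index: int, mempool: list[bytes],
                 T: int = 500, attesters_per_slot: int = 32000) -> int | None:
    k = attesters_per_slot // (2 * T)          # number of groups
    i = attester_index // (2 * T)              # this attester's group
    j = attester_index % (2 * T)               # position j within group i
    lo, hi = 2**256 // k * i, 2**256 // k * (i + 1)
    # mempool is assumed pre-sorted by priority fee (highest first), so the first
    # tx whose hash(seed + tx) lands in the group's range is the one to encode.
    for tx in mempool:
        h = int.from_bytes(hashlib.sha256(seed + tx).digest(), "big")
        if lo <= h < hi:
            return erasure_encode(tx, out_bits=2 * T)[j]
    return None  # no matching transaction: nothing to contribute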

The Reed-Solomon decoding will fail in two cases:

  1. If too many attesters are dishonest
  2. If attesters have different views about whether a particular transaction was published before or after the block, and so they are split between providing bits for two or more different transactions.

Note that in case (2), if the transactions are sufficiently small, advanced list decoding algorithms may nevertheless be able to recover several or all of the transactions!

The next block proposer will be able to see which transactions the attestations imply, and so they will be able to block transactions from the list by selectively failing to include attestations. This is an unavoidable limitation of the scheme, though it can be mitigated by having a fork choice rule discount blocks that fail to include enough attestations.

Additionally, the mechanism can be modified so that if a transaction has not been included for 2+ slots, all attesters (or a large fraction thereof) attempt to include it, and so any block that fails to include the transaction would lose the fork choice. One simple way to do this is to score transactions not by priority_fee, but by priority_fee * time_seen, and at the same time have a rule that a transaction that has been seen for k slots is a candidate not just for attester group i, but also for attester group i...i+k-1 (wrapping around if needed).
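A small sketch of that modification, again purely illustrative (k is the number of attester groups; names are made up):

def score(priority_fee: int, time_seen: int) -> int:
    # Transactions that have waited longer become progressively more attractive.
    return priority_fee * time_seen

def candidate_groups(base_group: int, slots_seen: int, k: int) -> set[int]:
    # A tx seen for slots_seen slots is a candidate for groups i .. i+slots_seen-1, wrapping mod k.
    return {(base_group + d) % k for d in range(max(1, slots_seen))}

print(candidate_groups(base_group=30, slots_seen=5, k=32))  # the five groups {0, 1, 2, 30, 31}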

8 posts - 7 participants

Read full topic

Execution Layer Research Torrents and EIP-4444

Published: Jun 12, 2024

View in forum →Remove

Torrents and EIP-4444

Introduction

EIP-4444 aims to limit the historical data that Ethereum nodes need to store. This EIP has two main problems that require solutions: a format for history archival and a method to reliably retrieve history. The client teams have agreed on a common era file format, solving one half of the problem. The second half, i.e. the method to reliably retrieve history, will likely not rely on a single solution: some client teams may rely on the Portal network, some on torrents, and others on some form of snapshot storage.

Torrents for EIP-4444

Torrents offer us a unique way to distribute this history: torrents as a technology have existed since 2001 and have withstood the test of time. Some client teams, such as Erigon, already include a method to sync via torrents that has run in production systems.

In order to make some progress on the torrent approach to history retrieval, the files would first be required, so an era file export was made on a geth node running version v1.14.3. To explore the initial idea, pre-merge data was chosen as a target. The merge occurred at block height 15537393, meaning all pre-merge data could be archived by choosing a range from block 0 to block 15537393. The era files were then created using the command geth --datadir=/data export-history /data/erafiles 0 15537393.

Once the era files were created, they were verified using the command era verify roots.txt, with the source of the roots.txt file being this. The entire process is outlined in this PR comment. The verification produced the following log message: Verifying Era1 files verified=1896, elapsed=5h21m49.184s

The output era files were then uploaded onto a server and a torrent was created using the software mktorrent. An updated list of trackers was found using the github repo trackerslist. The trackers chosen were a mix of http/https/udp in order to allow for maximal compatibility. The chunk size of the torrent was chosen to be 64MB, which was the max allowed and recommended value for a torrent of this size.

The result of this process is a torrent of size 427GB. This torrent can be imported with this magnet link, and a torrent client will then be able to pull the entire pre-merge history as era files.

Tradeoffs

There are of course some tradeoffs with torrents, as with many of the other EIP-4444 approaches:

  • Torrents rely on a robust set of peers to share the data, there is however no way to incentivise or ensure that this data is served by peers
  • A torrent client would need to be included in the client releases and some client languages might not have a torrent library
  • Torrents would de-facto expect the nodes to also seed the content they leech, this would increase node network requirements if they choose to store history
  • The JSON-RPC response needs to take into account that it may not have the data to return a response in case the user decides to not download pre-merge data

Conclusion

A client could potentially include this torrent in its releases and avoid syncing pre-merge data by default; the data could then be fetched via torrent if a user requests it (perhaps with a flag similar to --preMergeData=True). The client could also hardcode the hash of the expected data, ensuring that the data retrieved matches what is expected.
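As a rough illustration of that design (not actual geth code; the flag, file names, and hashes below are placeholders), the client-side gate could look like this:

import hashlib, pathlib

# Placeholder era-file names and hashes; a real client would ship the verified values.
EXPECTED_SHA256 = {"mainnet-00000-xxxxxxxx.era1": "0" * 64}

def maybe_load_pre_merge(data_dir: str, pre_merge_data: bool) -> None:
    if not pre_merge_data:
        return  # default post-EIP-4444 behavior: skip pre-merge history entirely
    for name, expected in EXPECTED_SHA256.items():
        digest = hashlib.sha256()
        with open(pathlib.Path(data_dir) / name, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
                digest.update(chunk)
        assert digest.hexdigest() == expected, f"{name} does not match the hardcoded hash"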

Instructions for re-creating torrent:

  • Sync a geth node using the latest release
  • Stop the geth node and run geth --datadir=/data export-history /data/erafiles 0 15537393 to export the data into a folder called /data/erafiles (warning: this will use ~427GB of additional space)
  • Use the mktorrent tool or the rutorrent GUI to create a torrent. Choose the /data/erafiles/ folder as the source for the data. Next, obtain the latest open trackers from this github repository. Choose a healthy mix of udp/http/https trackers and choose the chunk size of the torrent to be 64MB.
  • The tool should output a .torrent file, the GUI will also allow you to copy a magnet link if that is required

Instructions for download and verification of torrent data:

  • Download the torrent data using this magnet link in a torrent client of your choice: link
  • Clone the latest release of geth and install the dependencies
  • Run make all in the geth repository to build the era binary
  • Fetch the roots.txt file with the command: wget https://gist.githubusercontent.com/lightclient/528b95ffe434ac7dcbca57bff6dd5bd1/raw/fd660cfedb65cd8f133b510c442287dc8a71660f/roots.txt
  • Run era verify roots.txt in the folder to verify the integrity of the data

12 posts - 4 participants

Read full topic

Sharding Blobs, Reorgs, and the Role of MEV-Boost

Published: Jun 11, 2024

View in forum →Remove

Blobs, Reorgs, and the Role of MEV-Boost

The TL;DR is:

  • Builders might have an incentive to not include blobs because of the higher latency they cause.
  • Non-MEV-Boost users include, on average, more blobs in blocks than MEV-Boost builders.
  • MEV-Boost users show a significantly lower probability of being reorged than Non-MEV-Boost users (see section MEV-Boost and Reorgs for details).
  • Rsync-Builder and Flashbots have a lower average number of blobs per block than other builders.

In a recent analysis on big blocks, blobs and reorgs, we could see the impact of blobs on the reorg probability.

In the following, I want to expand on this by taking the MEV-Boost ecosystem into account.

The fundamental question is…
-> Does MEV-Boost impact reorgs, and if so, by how much?

Blobs are “big” and big objects cause higher latency. Thus, one might expect builders to not include blobs into their blocks in scenarios in which:

  • The builder is submitting its block late in the slot to minimize latency (see timing games).
  • The builder wants to capture a high MEV opportunity and doesn’t want to risk unavailable blobs invalidating its block.
  • The proposer is less well connected (because the gossiping starts later in the slot).

Builders might demand to be compensated through priority fees for including transactions which might cause blocks to be propagated with higher latency. Until 4844, such transactions have been those with a lot of calldata. As of 4844, blobs are the main drivers of latency.

As visible in the above chart, blob transactions don’t tip as much as regular Type-2 transactions.
Based on that, blobs don’t give builders a significant edge over other builders competing for the same slot.
Another explanation could be private deals between builders and rollups to secure timely inclusion of blob transactions for a fee paid through side channels.

MEV-Boost and Reorgs

The MEV-Boost ecosystem consists of sophisticated parties, builders and relays, that are well connected and specialized in having low-latency connections to peers.
Thus, it is expected that proposers using MEV-Boost should be reorged less often than ‘Vanilla Builders’ (i.e., users not using MEV-Boost).

This expectation holds true when looking at the above chart.
We can see that the reorg probability increases with the number of blobs. However, the reorg probability for MEV-Boost users is much lower than the one for Non-MEV-Boost users (Vanilla Builders).

In this context it’s important to not confuse correlation and causation:
-> Non-MEV-Boost users are on average less sophisticated entities which also contributes to the effect we observe in the above chart.

In this context it is interesting to compare the average number of blobs per block of MEV-Boost users vs. Non-MEV-Boost users.

As visible in the above chart, proposers not using MEV-Boost included on average more blobs into their blocks than MEV-Boost users.
This might point towards MEV-Boost ecosystem participants (relays and builders) applying strategies that go beyond the “include it if there’s space” strategy.

First, let’s look at the builders more closely.

Vanilla Builders (Non-MEV-Boost proposers) are the ones that have the highest blob inclusion rate, followed by Beaverbuild and Titan Builder.

Rsync-Builder seems to include far fewer blobs in their blocks.
The same applies to the Flashbots builder that seems to have changed its behavior in early May, with the average number of blobs per block approaching zero.

“Is it fair to say ‘Builder XY censors blobs!’?”
> No

Different builders follow different strategies. For example, a builder such as Rsync-Builder that is generally competitive in slots where low latency and speed matter might end up winning those blocks where there are no blobs around (c.f. selection bias).


Next, let’s shift the focus to the relays:

As visible above, Vanilla Builders have on average the highest blob inclusion rate.
The Ultrasound and Agnostic Gnosis relays are second and third, followed by the relays of BloXroute.
The Flashbots relay seems to include the lowest number of blobs.

Importantly, relays are dependent on builders and ultimately it’s the builders that impact the above graph.

Next Steps

In the context of PeerDAS, the network will have to rely on nodes that are stronger than others and able to handle way more than 6 blobs per block. Therefore, it’d be super valuable to see more research on that topic happening.

  • Call for reproduction: It’d be great if someone could verify my results by reproducing this analysis.
  • Investigate the reasons why certain builders have a significantly lower blob inclusion rate than others.
  • Reduce reorg rate for Non-MEV-Boost users: Relays could offer Non-MEV-Boost users their block propagation services to ensure that fewer of their blocks get reorged.

The blob market is still under development and a stable blob price is yet to be discovered. With increasing demand for blob space, tips from blob transactions will likely catch up to those of regular transactions.

3 posts - 3 participants

Read full topic

Block proposer Block Proposing & Validating Timelines for 1.) MEV-Boost, 2.) ePBS, and 3.) ePBS with MEV-Boost

Published: Jun 11, 2024

View in forum →Remove

This writeup summarizes the timeline differences between ePBS and MEV-Boost using inequalities. We analyze three models: 1) MEV-Boost, 2) ePBS, and 3) MEV-Boost with relayers on ePBS. We show that MEV-Boost with relayers on ePBS is slower than ePBS alone, which could lead to reorgs.

Definitions

VT^{CL}: Consensus layer validation time. The time taken by a node to verify the consensus portion of a block.
VT^{EL}: Execution layer validation time. The time taken by a node to verify the execution portion of a block.
RT^{mevboost}: Mev-boost block release time. The time when a block is released from a node or relayer, assuming the MEV-boost setting.
RT^{epbs,cl}: ePBS consensus block release time. The time when a consensus block is released from a node or relayer, assuming the ePBS setting.
RT^{epbs,el}: ePBS execution block release time. The time when an execution block is released from a node or relayer, assuming the ePBS setting.
PT^{mevboost}: Mev-boost block propagation time. The time taken for a block to propagate across the network, assuming the mev-boost setting.
PT^{epbs,cl}: ePBS consensus block propagation time. The time taken for a consensus block to propagate across the network, assuming ePBS setting.
PT^{epbs,el}: ePBS execution block propagation time. The time taken for an execution block to propagate across the network, assuming ePBS setting.
Attestation\_RT^{beacon}: Beacon attestation release time. The time when a beacon attestation is released from a node.
Attestation\_RT^{ptc}: PTC attestation release time. The time when a payload attestation is released from a node, assuming the ePBS setting.
BBT: Proposer build block time. The time taken for a proposer to build consensus portion of a block.
GHT: Proposer get header time. The time taken for a proposer to obtain a header from a relayer (MEV-boost) or builder (ePBS).
GPT: Proposer get payload time. The time a proposer takes to obtain a payload from a relayer (MEV-boost).
SPT: Builder submit payload time. The time taken for a relayer to receive a payload from the builder (MEV-boost).
SBBT: Proposer submit blind block time. The time a proposer takes to submit blind block to the relayer (MEV-boost).

Proposing a mev-boost block

In Mev-Boost, proposing a block involves two parts. First, the builder sends the block to the relayer. Second, the proposer requests the header and returns the signed block to the relayer. We break down the time it takes in the following subsections, starting with the non-optimistic relayer and then the optimistic relayer. We also assume that everything starts at the 0-second mark of the slot, including the builder sending the execution block to the relayer.

Non optimistic relayer

BRT defines the builder-to-relayer time. This is how much time it takes for a builder to submit a block (i.e. a bid) to the relayer and for the relayer to verify that the block is valid.
BRT = SPT + VT^{EL}

PRT defines the proposer-to-relayer time. This is how much time it takes for a proposer to build the block, request the header, request the payload, and submit the blind block.
PRT = BBT + GHT + GPT + SBBT

RT^{mevboost} = BRT + PRT

This assumes everything happens after the slot start, because bids become more valuable as the slot progresses. Another model is to assume BRT happens before the slot. Then RT^{mevboost} = PRT.

Optimistic relayer

Relayer receives builder block time

BRT = SPT

PRT is the same as before

RT^{mevboost} = BRT + PRT

Using an optimistic relayer is faster than a non-optimistic relayer by: VT^{EL}

Validating a mev-boost block

In MEV-Boost, the block must be processed before Attestation\_RT^{beacon} to be considered canonical. The following equation shows the conditions that need to be met for the block to be considered canonical from the perspective of all nodes.

For a beacon block to be canonical, it should satisfy:
RT^{mevboost} + PT^{mevboost} + VT^{CL} + VT^{EL} < Attestation\_RT^{beacon}

Proposing an ePBS block

In ePBS, proposing the consensus block and the execution block are pipelined, where the consensus block commits to the execution block’s header. Block release time becomes two parts 1.) CL block release time and 2.) EL block release time.

Proposing the consensus block

We assume the proposer uses the builder’s RPC to get the header. The proposer could also self-build or use P2P to obtain the header, which is arguably faster. Either way, the get payload and submit blind block steps are no longer needed.

RT^{epbs,cl} = GHT + BBT

Using ePBS is faster than mev-boost by: SPT + VT^{EL} + GPT + SBBT

Proposing the execution block

RT^{epbs,el} is when fork choice accumulates sufficient weight (~40%) or 6 seconds into the slot. The builder could propose a “withhold” block to try to reorg consensus layer block so builder does not have to pay the proposer.

Validating an ePBS block

In ePBS, validating the consensus block and the execution block is pipelined in different stages. The beacon attestation cutoff has been moved from 4 seconds into the slot to 3 seconds into the slot. However, we can assume that the ePBS CL block propagation time is shorter than the mev-boost block propagation time. EL block validation can be delayed until the subsequent slot, as shown in the equations.

Validating the consensus block

PT^{epbs,cl} < PT^{mevboost}
Attestation\_RT^{beacon,epbs} < Attestation\_RT^{beacon,mevboost}

For a consensus block to be canonical, it should satisfy:
RT^{epbs,cl} + PT^{epbs,cl} + VT^{CL} < Attestation\_RT^{beacon}

Using ePBS is faster than mev-boost by: PT^{mevboost} - PT^{epbs,cl} + VT^{EL}

Validating the execution block

As a PTC voting for execution block’s presence

RT^{epbs,el} + PT^{epbs,el} < Attestation\_RT^{ptc}

As a proposer proposing the next slot’s consensus block

RT^{epbs,el} + PT^{epbs,el} + VT^{EL} < Next\_Slot\_Start\_Time

Everyone else

RT^{epbs,el} + PT^{epbs,el} + VT^{EL} < Next\_Slot\_Attestation\_RT^{beacon}

Proposing an ePBS block using mev-boost

BRT = SPT + VT^{EL}
PRT = BBT + GHT
RT^{epbs,cl} = BRT + PRT

Using MEV-Boost for ePBS is slower than ePBS by: SPT + VT^{EL}
The additional latency occurs because the trusted party must receive and verify the execution block before releasing it to the proposer.

Validating the consensus block

RT^{epbs,cl} + PT^{epbs,cl} + VT^{CL} < Attestation\_RT^{beacon}

Given that the beacon attestation deadline in ePBS is earlier (3 seconds into the slot), an extra SPT + VT^{EL} could lead to additional reorgs.
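To make the inequalities concrete, here is a small Python sketch that plugs hypothetical millisecond latencies (illustrative numbers only, not measurements) into the three release-time formulas above:

# Hypothetical per-step latencies in milliseconds.
SPT, VT_EL = 200, 300                      # builder submits payload, relayer runs EL validation
BBT, GHT, GPT, SBBT = 100, 100, 300, 100   # build block, get header, get payload, submit blind block

rt_mevboost = (SPT + VT_EL) + (BBT + GHT + GPT + SBBT)  # non-optimistic relayer: BRT + PRT
rt_epbs_cl = GHT + BBT                                   # ePBS consensus block release
rt_epbs_with_relay = (SPT + VT_EL) + (BBT + GHT)         # relayer on top of ePBS

print("MEV-Boost release time:      ", rt_mevboost)        # 1100 ms
print("ePBS CL release time:        ", rt_epbs_cl)          # 200 ms
print("ePBS + relayer release time: ", rt_epbs_with_relay)  # 700 ms, slower than ePBS by SPT + VT_EL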

1 post - 1 participant

Read full topic

Multiparty Computation SGX as 2FA for FHE/MPC

Published: Jun 11, 2024

View in forum →Remove

About me: I am Wenfeng Wang, a builder and researcher at Phala Network. I’m putting this topic here and hope to have a comprehensive discussion with the community.

TLDR: Involving SGX introduces a safeguard against the collusion risk inherent in current MPC and FHE systems.

Continuing from Justin Drake’s well-articulated post about SGX as a 2FA for zk-rollups, I aim to expand on the potential of SGX as 2FA in FHE projects, specifically in their MPC encryption management. Despite their distinct applications, both leverage some fundamental features of SGX.

MPC is the bottleneck of FHE

Lately, interest in FHE (Fully Homomorphic Encryption) technologies has been rejuvenated, especially in the context of Ethereum Virtual Machines (EVMs). What was once merely a concept is now a tangible tool developers can use to write privacy-preserving smart contracts. Interested readers can refer to Vitalik’s early 2020 post about FHE. Now, let’s look at the general architecture of most current FHE projects.

I will not dive too deep into FHE itself here, but a notable challenge most FHE designs encounter today lies in the MPC network’s key management. In practice, when writing an FHE application, a single global key is used by all users to encrypt the data they send to the FHE server, which executes over the encrypted state. Thus, the whole security of the system relies on the security of the MPC network, and as we all know, the truths about MPC networks are:

  • The more nodes you have, the more latency you get
  • The fewer nodes you have, the more trust assumptions you need

TEE as a 2FA to MPC

We don’t want to give full trust to MPC nodes because of the possibility of collusion when they are run by humans. Instead, we can add SGX as 2FA to hedge the risk by moving the key management into a TEE (Trusted Execution Environment, a technology for running a program in an isolated zone inside the CPU, proving that the program is immutable and restricting access to it).

As illustrated above, the MPC nodes of the FHE system now run inside a TEE. Instead of producing TEE proofs, as when acting as 2FA for zk-rollups, here SGX is used to protect the key-generation process in the MPC network: the whole lifecycle of the key is kept inside the TEE and never revealed to the outside world. More importantly, the key cannot be touched by a human, not even a single piece of it. The TEE itself guarantees that the program it runs is verifiable, so it is impossible for someone to manipulate its state. The data passing between the TEE and the client is also secured by TLS communication.
With TEE as a 2FA, the risk is reduced in an economic way:

  • If SGX is not compromised, collusion cannot happen;
  • If SGX is compromised, the system is only broken if the nodes also collude.

Advantages/Disadvantages of SGX as 2FA for FHE

  • Advantages

    • Security: Removes the possibility of collusion; trust is built on top of machinehood + cryptography instead of humanity.
    • Safety: By running MPC inside SGX, even a small MPC network can be reasonably secure. Even if the TEE is broken, e.g. there are bugs in SGX or Intel is malicious, we still fall back to ordinary MPC.
    • Latency: Using SGX, we can get higher security without introducing more workers. This gives users more confidence to run latency-sensitive operations on MPC.
    • Liveness: SGX does not naturally provide extra liveness, but projects like Phala have built a decentralized TEE network that makes it easy to build an unstoppable network.
    • Scalability: Scaling the MPC network is hard, but there are a bunch of existing TEE networks ready to deploy MPC nodes, lowering the cost of building a larger MPC network.
    • Throughput: There is also no throughput loss; considering the latency optimization, throughput can theoretically even be improved.
    • More advantages that can be brought by SGX were well addressed by Justin’s post.
  • Disadvantage

    • It’s worth mentioning that SGX also has its own problems; quoting from Justin’s post:
    • SGX has a bad reputation, especially within the blockchain space. Association with the technology may be memetically suboptimal.
    • false sense of security: Easily-broken SGX 2FA (e.g. if the privkey is easily extractable) may provide a false sense of security.
    • novelty: No Ethereum application that verifies SGX remote attestations on-chain could be found.
    • As for the last point, that no on-chain SGX remote attestation exists: the latest state is that a couple of projects are working on it, including Puffer, Automata, and Phala’s zk-dcap-verifier. But considering it hasn’t been deployed on mainnet, I kept it on the list.

Special thanks to Justin Drake for his research on 2FA zk-rollups using SGX and to Andrew Miller for his research on TEEs in multi-proof systems; check his presentation.

1 post - 1 participant

Read full topic

Layer 2 Solutions to the Preconf Fair Exchange Problem

Published: Jun 11, 2024

View in forum →Remove

tldr

Solutions for dealing with the fair exchange problem in leader-based preconfirmation setups.

Reputation can incentivize preconfers to act honestly.

Alternatively, use order to dictate who gets the PER tip. One can invalidate a PER by sending it to a preconfer with higher priority.

Fair Exchange?

The fair exchange problem can be summarized as two untrusted players blindly giving up something in hopes that the other party will do the same. The goal is to try to find a method to ensure that both will cooperate. In the context of preconfirmations, the requesting party (gateway) has no guarantee that their preconfirmation enforcement request (PER) will receive a signed commitment. The preconfer has every right to not return a commitment, hold onto the PER until the last second, and include it if profitable (pocketing the tip for free).

Solution 1: Reputation

One solution is to track reputation. More specifically, the promise of future PERs is leveraged to incentivize preconfers to respond promptly with either commitments or non-commitments (slashable promises to NOT include). The gateway can throttle or simply ignore preconfers if they misbehave.

Reputation is a tried method and exists today in mev-boost relays (see Switchboard’s Sauna Appendix). While this might work, it still requires certain economic conditions for security. If for whatever reason it becomes really profitable to behave dishonestly, the guarantees fall apart.

Can we do better?

In an ideal scenario, without any limitations of technology, one would simply invalidate the PER if the preconfer takes too long to respond. With blockchains, this is complicated, and time-based approaches require some sort of additional consensus, breaking the based paradigm. However, we can indirectly access “time” by using order. Blocks are ordered, so preconfers can be as well. If we take advantage of this, we arrive at a new solution that avoids the Fair Exchange problem altogether.

Solution 2: Last Right

Determine an order for preconfers. This can be done per block (or even intra-block). Send the PER optimistically to the first preconfer. If they commit, then great. If they return a non-commitment, or do not respond, then send the PER to the next preconfer.

But wait, they can still include my PER and pocket my tip! Yes, they can, but they won’t be able to keep the tip. This is due to the central idea of this solution: the last preconfer to include the PER has the right to the tip. If two preconfers attempt to include the PER, the second preconfer has the right to the preconf tip. For example, the last preconfer submits a proof and transfers the PER tip to their balance. Other mechanisms are also possible and should be explored.

One consideration here is the cost. If claiming the tip is more expensive than the tip itself, then the model falls apart. The good news is this cost is directly tied to the technology and should decrease exponentially (e.g. zk proof). Preconfirmation tips on the other hand are tied to the value of the transaction itself, which is not as dependent on the tech. So perhaps this mechanism will become more and more economically favorable.

One great side effect of this method is that it preserves the possibility of execution promises. If the first preconfer acts honestly, then it can guarantee the execution state for the PER. Execution guarantees fall apart if there’s any dishonesty (same as Solution 1).
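A tiny sketch of the “last right” rule, with hypothetical preconfer names and block numbers, just to pin down who ends up with the tip when several preconfers include the same PER:

def settle_tip(inclusion_events: list[tuple[int, str]]) -> str | None:
    # inclusion_events: (block_number, preconfer) pairs for every inclusion of the same PER.
    # The preconfer whose inclusion lands last in block order has the right to the tip.
    if not inclusion_events:
        return None
    return max(inclusion_events, key=lambda event: event[0])[1]

# preconfer_A front-runs and pockets the PER, preconfer_B includes it later and claims the tip back.
print(settle_tip([(100, "preconfer_A"), (101, "preconfer_B")]))  # -> preconfer_B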

Solution 3: First Right

If we are willing to forgo execution promises, then the gateway can instead request commitments from preconfers in reverse order. Forward the PER to a preconfer down the list, and then move up until one commits. The first preconfer to include the PER gets the tip. In the case where L1 proposers are preconfers, this is enforced by the L1 replay protection. This is a much simpler version of Solution 2.

One downside is the “real” latency before the transaction is actually included since the default preconfer is not the current one. But one could argue that for important transactions where L1 settlement is important (e.g. buying a house), preconfirmations in general are probably not a priority.

Note that execution promises are technically still possible if all the state transitions up to the point of inclusion have already been determined (e.g. all block space has already been filled by PERs or similar).

Final Thoughts

We could even use these solutions in tandem. For smaller preconf tips, we can rely on Solution 1: let the first preconfer pocket it and “slash” their reputation. For larger preconf tips, we can fall back to Solution 2 and let the next preconfer steal it back. Or just use them at the same time.

Thanks to @mteam for getting me up to speed and providing feedback. We at Spire Labs are actively researching preconfirmations and related topics, and building towards a better, unified Ethereum.

4 posts - 3 participants

Read full topic

Proof-of-Stake Inactivity Leak unveiled

Published: Jun 10, 2024

View in forum →Remove

We summarize here the article that presents the first theoretical analysis of the inactivity leak, the mechanism designed to restore finalization during catastrophic network failures. This work has been accepted at DSN 2024.

TL;DR

  • The inactivity leak is intrinsically problematic for the safety of the protocol: it favors the constant finalization of blocks (liveness) at the expense of possibly finalizing conflicting blocks (safety).
  • The presence of Byzantine validators -validators that deviate from the protocol- can accelerate the loss of safety.

The Ethereum PoS blockchain strives for the continuous growth of the finalized chain. Consequently, the protocol incentivizes validators to actively finalize blocks. The inactivity leak is the mechanism used to regain finality. Specifically, the inactivity leak is initiated if a chain has not undergone finalization for four consecutive epochs. The inactivity leak happened for the first time on mainnet in May 2023.

A good introduction to the inactivity leak is available thanks to the excellent work of Ben Edgington here (which motivated this work). We formalize the inactivity leak starting with the inactivity score.

Inactivity Score

During an inactivity leak, at epoch t, the inactivity score, I_i(t), of validator i is:

\begin{cases} I_i(t) = I_i(t-1)+4, \text{if $i$ is inactive at epoch $t$} \\ I_i(t) = \max(I_i(t-1)-1, 0), \text{ otherwise.} \end{cases}

Thus, a validator’s inactivity score increases by 4 if it is inactive and decreases by 1 if it is active. The inactivity score is always positive and will be used to penalize validators during the inactivity leak.

Inactivity Penalties

Let s_i(t) represent the stake of validator i at epoch t, and let I_i(t) denote its inactivity score. The penalty at each epoch t is I_i(t-1)\cdot s_i(t-1)/2^{26}. Therefore, the evolution of the stake is expressed by:

s_i(t)=s_i(t-1)-\frac{I_i(t-1)\cdot s_i(t-1)}{2^{26}}.

Stake during the Inactivity Leak

In this work, we model the stake function s as a continuous and differentiable function, yielding the following differential equation:

s'(t)=-I(t)\cdot s(t)/2^{26}.

With this equation, we can determine a validator’s stake according to the time by fixing the evolution of its inactivity score. And that is exactly what we do. We define two types of behavior: Active and Inactive.

  • Active validators: they are always active.
  • Inactive validators: they are always inactive.

Validators with these behaviors experience different evolutions in their inactivity scores: (a) Active validators have a constant inactivity score I(t)=0; (b) Inactive validators’ inactivity score increases by 4 every epoch, I(t)=4t. The stake of each type of validator during an inactivity leak (a quick numerical check follows the list):

  • Active validator’s stake: s(t) = s_0 = 32.
  • Inactive validator’s stake: s(t) = s_0e^{-t^2/2^{25}}.
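As a quick numerical sanity check (a Python sketch, assuming an always-inactive validator starting from s_0 = 32 ETH), the discrete per-epoch update defined earlier closely tracks the continuous approximation:

import math

s, inactivity_score = 32.0, 0
for epoch in range(1, 1001):
    s -= inactivity_score * s / 2**26   # penalty uses I(t-1) and s(t-1)
    inactivity_score += 4               # always inactive: the score grows by 4 per epoch

print(s)                                   # discrete update after 1000 epochs, ~31.06
print(32.0 * math.exp(-1000**2 / 2**25))   # continuous approximation, ~31.06 as well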

The graph shows the evolution of the stake of validators depending on their activity during the inactivity leak. The expulsion limit is set by the protocol to eject validators that have accumulated too many penalties.


That was the formalization of the protocol. We now analyze the protocol’s safety property. To do so, we use the following model.

Model

  • Network: We assume a partially synchronous system, which transitions from an asynchronous state to a synchronous state after an a priori unknown Global Stabilization Time (GST).
  • Fault: Validators are either honest or Byzantine (deviating from the protocol). A Byzantine validator can deviate arbitrarily from the protocol.
  • Stake: Each validator starts with 32 ETH.

There is no bound on message transfer delay during the asynchronous state.

Bound for safety

With only honest validators

By construction, the inactivity leak will breach safety if a partition lasts long enough. The question is: how quickly?

Any network partition lasting longer than 4686 epochs (about 3 weeks) will result in a loss of safety because of conflicting finalizations. This is an upper bound on the duration of the inactivity leak before safety is lost, assuming only honest validators.

Detailed Analysis

Let us analyze the scenario in which the validators (all honest) are partitioned in two. (We are in the asynchronous state according to our model.)
The partition will necessarily create a fork, with each partition building on the only chain it sees. Each chain will finalize once the proportion of active validators on it returns to two-thirds.

In this case, by understanding the distribution of the validators across the partitions, we can compute the time it takes for the proportion of active validators’ stake to return to 2/3 of the stake on each branch, thus finalizing and breaking safety.

For the analysis, we make the following notations. At the beginning of the inactivity leak:

  • n is the total number of validators
  • n_B is the total number of Byzantine validators
  • n_H is the total number of honest validators
  • n_{H_1} is the number of honest validators on branch 1
  • n_{H_2} is the number of honest validators on branch 2

There are no Byzantine validators for the first part of our analysis, which implies that n=n_H. Honest validators are only partitioned in two, thus n_H=n_{H_1}+n_{H_2}.

Our goal is to determine when the proportion of stake held by the honest validators on branch 1 exceeds 2/3 of the total stake. That is to say, we look at when the ratio:

\frac{\text{stake of validator in branch 1}}{\text{stake of validator in branch 1 + stake of validator in branch 2}},

is superior to 2/3. With our notation, the ratio can be rewritten as:

\frac{n_{\text H_1}s_{\text H_1}(t)}{n_{\text H_1}s_{\text H_1}(t)+n_{\text H_2}s_{\text H_2}(t)} ,

s_{\text H_1} and s_{\text H_2} are the stakes of honest active and inactive validators, respectively. Since the n_{\text H_1} validators on branch 1 are always active on branch 1, and the n_{\text H_2} validators are always inactive on branch 1 (they are active on branch 2); we know that s_{\text H_1}(t)=s_0 and s_{\text H_2}(t)=s_0e^{-t^2/2^{25}}.
Using the notation p_0=n_{\text H_1}/n_H, the ratio of active validators over time is:

\frac{p_0}{p_0+(1-p_0)e^{-t^2/2^{25}}}.

This graph shows the ratio of active validators on branch 1 over time. If finalization hasn’t occurred by epoch t=4685, inactive validators are ejected, causing a jump to 100% active validators.

Byzantine validators

We now add Byzantine validators.

These Byzantine validators can send messages to each partition without restriction.

The situation we analyze is now as such:

  • Less than one-third of the stake is held by Byzantine validators (\beta_0=n_{\rm B}/n<1/3).
  • Honest validators are divided into branches 1 and 2; a proportion p_0=n_{\rm H_1}/n_{\rm H} on branch 1 and 1-p_0=n_{\rm H_2}/n_{\rm H} on branch 2.
  • Byzantine validators can communicate with both branches.

Byzantine validators can be active on both branches simultaneously, breaching safety faster. The ratio of active validators on branch 1 is:

\frac{p_0(1-\beta_0)+\beta_0}{p_0(1-\beta_0)+\beta_0+(1-p_0)(1-\beta_0)e^{-t^2/2^{25}}}.
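Plugging the two ratio formulas above into a short Python sketch gives a feel for how long a 50/50 partition must last before one branch’s active stake exceeds 2/3 (this ignores the ejection of inactive validators around epoch 4685, which caps the honest-only case at roughly 4686 epochs):

import math

def epochs_to_conflicting_finality(p0: float, beta0: float = 0.0) -> float:
    # The active-stake share on branch 1 exceeds 2/3 once exp(-t^2 / 2**25) <= A / (2B),
    # with A = p0*(1 - beta0) + beta0 (always active there) and B = (1 - p0)*(1 - beta0) (inactive there).
    A = p0 * (1 - beta0) + beta0
    B = (1 - p0) * (1 - beta0)
    threshold = A / (2 * B)
    if threshold >= 1:
        return 0.0  # branch 1 already controls two-thirds of the active stake
    return math.sqrt(2**25 * math.log(1 / threshold))

print(epochs_to_conflicting_finality(0.5))        # ~4800 epochs with only honest validators
print(epochs_to_conflicting_finality(0.5, 0.33))  # ~500 epochs: roughly ten times faster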

This table shows the time it takes to break safety depending on the initial proportion of Byzantine validators (\beta_0):

Byzantine validators can expedite the loss of safety. If their initial proportion is 0.33, they can make conflicting finalization occur approximately ten times faster than in scenarios involving only honest participants.


The original paper provides more details on the assumptions, scenarios, protocol, and other aspects such as:

  • Ways for Byzantine validators to breach safety without committing slashable behavior.
  • Methods for Byzantine validators to exceed the 1/3 threshold on both branches of the fork.
  • An analysis of the probabilistic bouncing attack while considering the inactivity leak. Spoiler alert: this aggravates the attack slightly, but the conditions for the attack to start and persist in time make it highly improbable to be a real threat.

For an additional quick peek at the paper’s findings, here is a graphic that presents how quickly Byzantine validators can break safety depending on their initial proportion and whether their behavior is slashable or not. As you can see, they can have a strong impact even without slashable behavior.

Conclusion

Our findings highlight the importance of penalty mechanisms in Byzantine Fault Tolerance (BFT) analysis. By identifying potential issues in protocol design, we aim to provide insights for future improvements and tools for further investigation.

1 post - 1 participant

Read full topic

Architecture The contention between preconfs and ePBS

Published: Jun 09, 2024

View in forum →Remove

This quick note is motivated by a question from @Hasu.research regarding the compatibility of ePBS with the different mechanisms for preconfirmations being proposed by independent groups 1 2 3 4 5. The only purpose of this note is to leave a quick written record of the fundamental contention between the enshrinement of preconfirmations and the current proposal for ePBS.

Overloading inclusion lists.

Even in the very first post on based preconfirmations, the idea of using forced inclusion lists was put forward as a way for proposers to signal their intent to honor preconfirmations, forcing builders to include these transactions. An extrapolation of this idea led, in one of the original designs for ILs, to the proposal that inclusion lists may essentially contain the complete list of transactions the proposer has in its current mempool. One of the problems with these ideas is that the full list of transactions would need to be broadcast over the P2P network twice: once when the inclusion list is broadcast, and a second time within the payload itself. In all known designs for inclusion lists, validators attest to the existence of the full executable transaction list. This implies in particular that

  1. The list must be available at the beacon block validation time.
  2. The list must be executed at the beacon block validation time.

This section is not meant to be read as “inclusion lists aren’t compatible with ePBS”. Rather, any preconfirmation system (and next-block forced inclusion lists are by definition such a system) that relies on the execution and distribution of the transactions at consensus block validation time necessarily clashes with the main optimization from ePBS.

ePBS validation optimization

The above two points are in direct opposition to the main optimization that ePBS brings to block processing, namely that the only hot path to validation is that of the consensus block, which has to be fully verified before the attestation deadline. All other validations, like transaction execution, data availability, etc., are deferred to the remainder of the slot and into the next slot.

While ePBS is compatible with inclusion lists, their addition inherently stresses this optimization. Broadcasting a small list of 16 transactions that can be immediately executed in microseconds is not the same as broadcasting a full block, and presumably, even blob transactions as some based rollups would require.

The centralized nature of preconfs

There is no current design of preconfirmations (that I am aware of) that does not rely on a centralized entity. This is natural to expect: in the absence of an encrypted public mempool, users can’t send their transactions in the open to the next proposer (although they could encrypt the transactions to the public BLS address of the next proposer), and we can’t enshrine an RPC provider. All systems thus make use of existing centralized entities (for example relays) to act as the preconfer. Decentralization comes in the fact that it is ultimately the proposer who enforces these preconfirmations, by forcing the builder to fulfill them.

Thus, in all proposed systems for preconfirmations, whether for L1 transactions or for based rollups, there exists a centralized entity that, at the very least, is responsible for gathering the transactions and giving out the preconfirmations. Systems differ in how these preconfirmations are enforced; they range from new L1 slashing proposals to restaking proposals (moving the slashing to a separate layer), etc. The point is that preconfirmations can be enforced by the protocol itself, or by a somewhat decentralized party like the subset of validators participating in the preconfirmation scheme. In summary, there is a plethora of options for enforcing (or penalizing the lack of) inclusion of preconfirmations, in decreasing level of trustlessness:

  • The L1 protocol itself enforces inclusion. For example, forced ILs, with proposer level slashings on missed slots, preconf equivocations, etc.
  • Some separate committee enforces them. For example a subset of the L1 validators also participate in a sidechain by restaking, and the enforcement/punishment is carried in that sidechain.
  • A centralized entity enforces them. For example the relay itself only sends bids from builders that have satisfied the required preconfs.

A viable way compatible with ePBS: staked builders as preconfirmers.

Any approach in which a full payload is broadcast with the consensus block for preconfirmation enforcement clashes directly with the main scaling optimization of ePBS with regard to block validation. As such, it seems difficult to expect a working design in which the proposers are in charge of sending and enforcing preconfirmations. The second and third approaches above are fully compatible with ePBS.

One of the features that preconfirmation systems can leverage when ePBS is in place is that builders themselves are staked validators; thus, they can be subject to the same rules that these systems currently require from proposers. For example, systems that rely on slashings in a restaking scheme could simply add conditions on participating builders. That is, the proposer set participating in the scheme only takes bids from builders that are participants of the scheme. The builders and proposers are required to be restaked. There are new penalty conditions for

  • A proposer that does not include a block.
  • A proposer that includes a block with a commitment to a non-participating builder.
  • A builder that does not include the payload
  • A builder that includes a payload that does not satisfy the preconf list.

A separate note on restaking

ePBS also presents a challenge for any restaking scheme: builders can transfer funds in the same payload in which they commit a slashable offense. The L1 protocol can deal with this by immediately deducting the bid from the builder’s balance at the time of CL block processing, but delaying the credit to the proposer. In case the builder commits a slashable offense, the buffer allows the L1 protocol to implement penalization procedures that can impact those delayed funds accordingly. If the builder is restaked, however, the restaking chain does not have access to these funds.

7 posts - 4 participants

Read full topic

Block proposer On block-space distribution mechanisms

Published: Jun 08, 2024

View in forum →Remove

On block-space distribution mechanisms


^p.s. yes, we anthropomorphize the protocol as a ghost because Casper.
^^p.p.s. not sure why the auctioneer ghost looks like he is conducting an orchestra, but here we are ¯\_(ツ)_/¯.
^^^ p.p.p.s. by the way, if you haven’t seen Maestro, it’s great.

\cdot
by Mike, Pranav, & Dr. Tim Roughgarden – June 8, 2024.
\cdot
Acknowledgements
Special thanks to Barnabé, Julian, Jonah, Davide, Thomas, Terence, Potuz, & Nate for comments and discussions.
\cdot
tl;dr; Block space, the capacity for transaction inclusion, is the principal resource exported by blockchains. As the crypto ecosystem scales up and professionalizes, the value produced by efficient usage of block space (MEV) has come to play a significant role in the economics of permissionless consensus mechanisms. An immense amount of ink has been spilled by the research community considering what, if anything, protocols should enshrine in response to MEV (see Related Work). Indeed, the past few years resemble a Blind Men and the Elephant narrative arc, where many different perspectives, solutions, and theories have been propounded, but each angle can feel disjoint and difficult to compare. The first half of this article aims to present a broad-strokes painting of the “MEV-ephant” by distilling the design space into a core set of questions and exploring how existing proposals answer them. The second half hones in specifically on allocation mechanisms enabled by execution tickets, demonstrating an important new insight – there is a trade-off between the quality of the in-protocol MEV oracle and the fairness of the mechanism.

Organization: Section 1 motivates the need for an in-protocol mechanism to handle block-space distribution as part of the “endgame” for Proof-of-Stake. Section 2 enumerates five axes along which block-space distribution mechanisms may be measured, using a familiar set of questions: who, what, when, where, how (abbr. the W^4H questions). Section 3 interrogates how the block builder is selected, focusing on the execution tickets model. Section 4 extrapolates by concluding and raising open questions that follow from the framework established.

Structural note: This article is rather long for this format and has some technical elements. We encourage the reader to focus on the portion of the article they are most interested in:

  • Sections 1, 2, & 4 provide a broader perspective on the existing proposals and our proposed methodology for analyzing them.
  • Section 3 (which is \approx 44\% of the content, but 100\% of the math) provides a detailed analysis of allocation mechanisms enabled by the execution tickets design. This section can be read in sequence, in isolation, or skipped altogether – up to you!

\cdot
Contents

  1. Motivation
    1) What
    Block-space distribution today through mev-boost
  2. Enumeration
    The elements of block-space distribution
    Execution tickets and other animals
    Applying W^4H: a comparative analysis
    Motivational interlude
  3. Interrogation
    Preliminaries
    Model
    Familiar allocation mechanisms
    Comparing the outcomes
    Aside #1: Calculating equilibrium bids
    Aside #2: Tullock Contests
  4. Extrapolation

\cdot

Related work

  1. mev-boost & relays
  2. mev-burn / mev-smoothing
  3. enshrined Proposer-Builder Separation (ePBS)
  4. block-space futures
  5. execution tickets

(1) – Motivation

Before descending into this murky rabbit hole, let’s start by simply motivating the necessity of a block-space distribution mechanism. Validators in Proof-of-Stake protocols are tasked with producing and voting on blocks. The figure below, from Barnabé’s excellent “More pictures about proposers and builders,” describes these as “proposing” and “attesting” rights, respectively.

1) What

(\uparrow important cultural ref.)

A block-space distribution mechanism is the process by which the protocol determines the owner of the “proposing” or “block construction” rights. Proof-of-Stake protocols typically use some version of the following rules:

  • block-space (proposing) rights – A random validator is elected as the leader and permitted to create the next block.
  • voting (attesting) rights – All validators vote during some time window for the block they see as the canonical head.

Validators perform these tasks because they receive rewards for doing so. We categorize the rewards according to their origin in either the consensus layer (the issuance from the protocol – e.g., newly minted ETH) or the execution layer (transaction fees and MEV):

  1. Consensus layer
    a. Attestation rewards – see attestation deltas.
    b. Block rewards – see get_proposer_reward.
  2. Execution layer
    a. Transaction fees – see gas tracker.
    b. MEV (transaction ordering) – see mevboost.pics.

Rewards 1a, 1b, & 2a are well understood and “in the view” of the protocol. MEV rewards present a more serious challenge because fully capturing the value realized by transaction ordering is difficult. Unlike the other rewards, even the amount of MEV in a block is unknowable for all intents and purposes (as a permissionless and pseudonymous system, it’s impossible to trace who controls each account and any corresponding offchain activity that may be profitable in tandem). MEV also changes dramatically over time (e.g., as a function of price volatility), resulting in execution layer rewards having a much higher variance than the consensus layer rewards. Further, the Ethereum protocol, as implemented, has no insight into the MEV being produced and extracted by its transactions. To improve protocol visibility into MEV, many mechanisms try to approximate the MEV in a given block; we refer to these as MEV oracles. Block-space distribution mechanisms generally have the potential to produce such an oracle, making the protocol “MEV-aware.”

This suggests the question, why does the protocol care about being MEV-aware? One answer: MEV awareness may increase the protocol’s ability to preserve the homogeneity of validator rewards, even if validators have varying degrees of sophistication. For example, if the protocol could accurately burn all MEV, then the validator incentives would be fully in the protocol’s view (just like 1a, 1b, & 2a above). Alternatively, a mechanism that shares all MEV among validators regardless of their sophistication (e.g., mev-smoothing) would seem to promote a larger, more diverse and decentralized validator set, while keeping the MEV rewards as an extra incentivization to stake. Without MEV awareness, the validators best equipped to extract or smooth MEV (e.g., due to relationships with block builders, proprietary algorithms/software, access to exclusive order flow, & economies of scale) may earn disproportionately high rewards and exert significant centralization pressures on the protocol.

Ethereum protocol design strives to keep a decentralized validator set at all costs. It probably goes without saying, but for completeness: the protocol’s credible neutrality, censorship resistance, and permissionlessness are directly downstream of a decentralized validator set.

Block-space distribution today

In Ethereum today, mev-boost accounts for \approx 90\% of all blocks. Using mev-boost, proposers (the validator randomly selected as the leader) sell their block-building rights to the highest paying bidder through an auction. The figure below demonstrates this flow (we exclude the relays because they functionally serve as an extension of the builders).


Proposers are incentivized to outsource their block building because builders (the canonical name for MEV-extracting agents specializing in sequencing transactions) pay them more than they would have earned had they built the block themselves. Circling back to our goal of “preserving the homogeneity of validator rewards in the presence of MEV,” we see that mev-boost allows access to the builder market for all validators, effectively preserving near-equivalent MEV rewards among solo stakers and professional staking service providers – great! But…

Of course, there is a but… mev-boost has issues that continue to rankle some of the Ethereum community. Without being exhaustive, a few of the negative side-effects of taking the mev-boost medicine are:

  • Relays – These trusted third parties broker the sale of blocks between proposers and builders. The immense reliance on relays increases the fragility of the protocol as a whole, as demonstrated through repeated incidents involving relays. Further, since relays have no inherent revenue stream, more exotic (and closed-source) methods of capturing margins (e.g., timing games as a service and bid adjustments) are being implemented.
  • Out-of-protocol software is brittle – Beyond the relays, participation in the mev-boost market requires validators to run additional software. The standard suite for solo staking now involves running four binaries: (i) the consensus beacon node, (ii) the consensus validator client, (iii) the execution client, and (iv) mev-boost. Beyond the significant overhead for solo stakers, reliance on this software also provides another potential point of failure during hard forks. See the Shapella incident and the Dencun upgrade for an example of the complexity induced by having more out-of-protocol software.
  • Builder centralization and censorship – While this is likely inevitable, builder centralization was accelerated by the mass adoption of mev-boost. Three builders account for \approx 95\% of mev-boost blocks (85\% of all Ethereum blocks). mev-boost implements an open-outcry, first-price, winner-takes-all auction, leading to high levels of builder concentration and strategic bidding. Without inclusion lists or another censorship-resistance gadget, builders have extreme influence over transaction inclusion and exclusion – see censorship.pics.
  • Timing games – While timing games are known to be a fundamental issue in Proof-of-Stake protocols, mev-boost pushes staking service providers to compete on thin margins. Additionally, relays (who conduct mev-boost auctions on the proposer’s behalf) serve as sophisticated middlemen facilitating timing games. Thus, we have seen marketing endorsing playing timing games to boost the yield from staking with a specific provider.

“OK, OK … blah blah … we have heard this story before … tell me something I don’t know.” (\leftarrow h/t Barnabé for the aptly-named, 14k-views on youtube, musical reference.)

(2) – Enumeration

Obligatory ‘stage-setting’ out of the way, let’s look a little more carefully at the ~essence~ of a block-space distribution mechanism.


^ “Is that what I think it is?”

The elements of block-space distribution

Consider the game of acquiring block space; MEV incentivizes agents to participate, while the combination of in-protocol and out-of-protocol software defines the rules. When designing this game, what elements should be considered? To answer this question, we use a familiar rhetorical pattern of “who, what, when, where, & how” (hopefully Section 1 sufficiently answered “why”), which we refer to as the W^4H questions. (\leftarrow h/t Barnabé pt. 2 for the connection to “Who Gets What – and Why”).

  • Who controls the outcome of the game?
  • What is the good that players are competing for?
  • When does the game take place?
  • Where does the MEV oracle come from?
  • How is the block builder chosen?

These questions might seem overly simplistic, but when considered in isolation, each can be viewed as an axis in the design space to measure mechanisms. To demonstrate this, we highlight a few different species from the block-space distribution mechanism genus that have been explored in the past. While they may feel disjointed and unrelated, their relationship is clarified by understanding how they answer the W^4H questions.

Execution tickets and other animals


^ fantastic book.

We present a compendium of many different proposed mechanisms. Note that this is only a subset of the rather substantial literature around these designs – cf. infinite buffet. For each of the following, we summarize only the key ideas (see related work for more).

  • Execution tickets
    • Key ideas – Block building and proposing rights are sold directly through “tickets” issued by the protocol. Ticket holders are randomly sampled to become block builders with a fixed lookahead. The ticket holder has the authority to produce a block at the assigned slot.
  • Block-auction PBS
    • Key ideas – The protocol bestows block production rights through a random leader-election process. The selected validator can sell their block outright to the builder market or build it locally. The builder must ~commit to a specific block~ when bidding in the auction. mev-boost is an out-of-protocol instantiation of block-auction PBS; enshrined PBS (ePBS), as originally presented, is the in-protocol equivalent.
  • MEV-burn/mev-smoothing
    • Key ideas – A committee is tasked with enforcing a minimum value over the bid the proposer selects in an auction. By requiring the proposer to choose a “large enough” bid, an MEV oracle is created. The MEV is either smoothed between committee members or burned (smoothed over all ETH holders).
  • Slot-auction PBS
    • Key ideas – Similar to block-auction PBS but instead sells the slot to the builder market ~without~ requiring a commitment to a specific block – sometimes referred to as block space futures. By not requiring the builders to commit to a particular block, future slots may be auctioned off ahead of time rather than waiting until the slot itself.
  • Partial-block auction
    • Key ideas – Allows a more flexible unit for selling block-space. Instead of selling the full block or slot, allow proposers to sell some of their block, e.g., the top-of-block (which is the most valuable for arbitrageurs), while retaining the rest-of-block construction. Live in other Proof-of-Stake networks, e.g., Jito’s block engine and Skip MEV lane.
  • APS-burn a.k.a. Execution Auction (nomenclature in flux & the EA acronym has a bit of … baggage)
    • Key ideas – A brand new proposal from Barnabé which compels a proposer to auction off the block building and proposing rights ahead of time. The slot is sold ex-ante (a fixed amount of time in advance) without requiring a commitment to a specific block; a committee (à la mev-burn/smoothing) enforces the winning bid is sufficiently large.

We know, we know – it’s a lot to keep track of; it’s nearly a full-time job just to stay abreast of all these acronyms. But by comparing these proposals along the axes laid out by the W^4H questions, we can see how they all fit together as different parts of the same design space.

Applying W^4H: a comparative analysis

For each of the five W^4H questions, we describe different trade-offs made by the aforementioned proposals. For brevity, we don’t analyze each question for each proposal; we instead focus on highlighting key differences arising from each line of questioning.

  • Who controls the outcome of the game?
    • With execution tickets, the protocol dictates the winner of the game by randomly choosing from the set of ticket holders.
    • With block-auction PBS, the proposer (protocol-elected leader) unilaterally chooses the winner of the game.
    • With mev-burn, the proposer still chooses the winner, but the winning bid is constrained by the committee, reducing the proposer’s agency.
  • What is the good that players are competing for?
    • With block-auction PBS, the entire block is sold, but bids must commit to the block contents.
    • With slot-auction PBS, the entire block is sold, but without any specific block commitment.
    • With partial-block PBS, a portion of the block is sold.
  • When does the game take place?
    • With block-auction PBS, the auction takes place during the slot.
    • With slot-auction PBS, the auction may take place many slots (e.g., 32) ahead of time because there is no block-content commitment.
    • With execution tickets, the tickets are assigned to slots at a fixed lookahead after being sold ex-ante by the protocol (more on the ticket-selling model we use below).
  • Where does the MEV oracle come from?
    • With mev-burn/smoothing, a committee enforces that a sufficiently large bid is selected as the winner; this bid size is the oracle.
    • With execution tickets, the total money spent on tickets serves as the oracle.
  • How is the block builder chosen?
    • In block-auction PBS, any outsourced block production has a winner-take-all allocation, with the highest bidder granted the block-building rights.
    • Within execution tickets, many different allocation mechanisms can be implemented. In the original proposal, for example, where a random ticket is selected, the mechanism is ‘proportional-to-ticket-count’; in this case, the highest paying bidder (whoever holds the most tickets) merely has the highest probability of being selected, meaning they are not guaranteed the block building rights.
    • If that (^) seems opaque, don’t worry. The entire following section is a deep dive into these different allocations.

Motivational interlude

Before continuing, let’s review our original motivation for block-space distribution mechanisms:

Block-space distribution mechanisms aim to preserve the homogeneity of validator rewards in the presence of MEV.

This is a great grounding, but if that is our only goal, why not just continue using mev-boost? Well, remember that mev-boost has some negative side effects that we probably want the endgame protocol to be resilient against. We highlight four other potential design goals of a block-space distribution mechanism:

  1. Encouraging a wider set of builders to be competitive.
  2. Allowing validators and builders to interact trustlessly.
  3. Incorporating MEV-awareness into the base layer protocol.
  4. Removing MEV from validator rewards altogether.

Note that while (1, 2, & 3) appear relatively uncontroversial (*knock on wood*), (4) is more opinionated (and requires (3) as a pre-condition). The protocol may hope to eliminate MEV rewards from validator rewards as a means to ensure that the consensus layer rewards (what the protocol controls) more accurately reflect the full incentives of the system. This also ties into questions around staking macro-economics and the idea of protocol issuance – a much more politically-charged discussion. On the other hand, MEV rewards are a byproduct of network usage; MEV could instead be seen as a value capture mechanism for the native token. We aren’t trying to address these questions here but rather explore how different answers to them would shape the design of the mechanism.

What can we do at the protocol-design level to align with these desiderata? As laid out above, there are many trade-offs to consider, but in the following section, we examine “How is the block builder chosen?” to improve on some of these dimensions.

(3) – Interrogation

Editorial note: As mentioned earlier, this section is longer and more technical than the others – feel free to skip to Section 4 if you are time (or interest) constrained!

Section goal: To demonstrate the quantitative trade-off between MEV-oracle quality and the “fairness” of the two most familiar approaches to allocating block proposer rights, which we call Proportional-all-pay and Winner-take-all.

We aim to accomplish this with the following subsections: Preliminaries, Model, Familiar allocation mechanisms, Comparing the outcomes, and two asides (calculating equilibrium bids and Tullock contests).

Let’s dig in.

Preliminaries

Before diving into the space of allocation mechanisms made possible with execution tickets, we must first set up the model. Consider a protocol that sells execution tickets with the following rules:

  1. the price is fixed at 1 WEI, and
  2. unlimited tickets can be bought and sold from the protocol.

Note: this version of execution tickets is effectively equivalent to creating two disjoint staking mechanisms – one each for attesting and proposing. Small changes in the design, e.g., not allowing tickets to be resold to the protocol, may have massive implications for how the market plays out, but that isn’t the focus of this article. Instead, we narrowly explore the question of block-space allocation, given an existing ticket holder set.

Notably, the set of block producers is disjoint (from the protocol’s perspective) from the set of attesters – individuals must select which part of the protocol they participate in by deciding whether to stake or buy tickets. The secondary ticket market may evolve as a venue for selling the building rights just in time to the builder market (as is done in mev-boost today).


\cdot
Separately, builders may choose to interact directly with the protocol by buying execution tickets themselves, but their capital may be better utilized as active liquidity, capturing arbitrage across trading venues. Thus, they may prefer buying block space on the secondary market during the just-in-time auction instead.

Why restrict ourselves to this posted-price-unlimited-supply mechanism? Two reasons:

  1. It’s not clear that a sophisticated market could even be implemented in the consensus layer. The clients are optimized to allow any validator with consumer-grade hardware to participate in the network. This desideratum may be incompatible with fast auctions, bonding curves, or other possible ticket-selling mechanisms. Questions around how many tickets are sold, the MEV around onchain ticket-sale inclusion (meta-MEV?!), and the timing (and timing games) of ticket sales seem closer to execution layer concerns than something that could reasonably be implemented by Ethereum consensus while keeping hardware requirements limited.

“One may imagine the inclusion of ET market-related transactions to possibly induce MEV, whether these transactions are included in the beacon block or the execution payload.” – Barnabé in “More pictures about proposers and builders.”

  2. Even if (a big if) the protocol ~could~ implement a more rigid ticket-selling market, the design space for such a mechanism is immense. Many potential pricing mechanisms have been discussed, e.g., bonding curves, 1559-style dynamic pricing, auctions, etc.; making general claims about these remains outside the scope of this post.

Therefore, we focus on the “unlimited, 1 WEI posted-price” version of execution tickets, where the protocol internalizes minimal complexity. With this framing, we can ask the question that is probably burning you up inside, “given a set of execution ticket holders, how should the winner be selected?” … sounds easy enough, right? Turns out there is a good deal we can say, even with such a seemingly simple question; let’s explore a few different options.

Model

Consider the repeated game of buying execution tickets to earn MEV rewards for your investment.

  • During each period, each player effectively submits a bid, which is the number of tickets they buy. Denote the vector of bids by \mathbf{b}, where b_i is the bid of the i^{th} player.
  • Each player has a valuation for winning the block production rights. Denote the vector of valuations by \mathbf{v}, where v_i is the value of the i^{th} player.
  • At each time step, an allocation mechanism determines each player’s allocation based on the vector of bids. Assuming bidders are risk-neutral (i.e., indifferent between winning 2 ETH with probability 0.5 vs. 1 ETH with probability 1), we can equivalently say that they are each allocated “some portion” of the block, which can alternatively be interpreted as “the probability that they win a given block.” In an n player game, let x: \mathbf{b} \rightarrow [0,1]^n denote the map implementing an allocation mechanism, where x_i(\mathbf{b}) is the allocation of the i^{th} player, under the constraint that \sum_i x_i(\mathbf{b}) =1 (i.e., the mechanism fully allocates).
  • Each player’s payment is collected at each round. Let p: \mathbf{b} \rightarrow \mathbb{R}_{\geq 0}^n denote the payment rule determined by the set of bids, where p_i(\mathbf{b}) is the payment of the i^{th} player.
  • The utility function of each player in the game is, U_i(\mathbf{b}) = v_i x_i(\mathbf{b}) - p_i(\mathbf{b}). The intuition is that “a player’s utility is their value for winning multiplied by the amount they won, less their payment.”
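
To make the notation concrete, here is a minimal sketch of the model (the helper names are ours; it simply transcribes the definitions above): an allocation mechanism is a pair of maps from the bid vector to allocations and payments, and utilities follow directly.

```python
from typing import Callable, List

Alloc = Callable[[List[float]], List[float]]    # x: b -> [0,1]^n with sum_i x_i(b) = 1
Payment = Callable[[List[float]], List[float]]  # p: b -> R_{>=0}^n

def utilities(values: List[float], bids: List[float], x: Alloc, p: Payment) -> List[float]:
    """U_i(b) = v_i * x_i(b) - p_i(b) for every player i."""
    allocations, payments = x(bids), p(bids)
    return [v * a - q for v, a, q in zip(values, allocations, payments)]
```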

Familiar allocation mechanisms

Consider two (quite different) possible mechanisms.

Proportional-all-pay (a slight modification to the original execution tickets proposal)

  • During each round, all players submit a bid. Denote the vector of bids by \mathbf{b}.
  • The probability that a bid wins the game is the value of the bid divided by the sum of all the values of the bids,
x_i(\mathbf{b}) = \frac{b_i}{\sum_j b_j}.
  • Each player pays their bid, no matter the outcome of the game (hence “all-pay”), p_i(\mathbf{b}) = b_i.^{[1]}

Winner-take-all (the current implementation of PBS)

  • During each round, all players submit a bid. Denote the vector of bids by \mathbf{b}.
  • The highest bidder wins the game, so x_i(\mathbf{b}) = 1 if \max(\mathbf{b}) = b_i and x_i(\mathbf{b}) = 0 otherwise (where ties are broken in favor of the lower index bidder, say).
  • Only the winning player pays the value of their bid, so p_i(\mathbf{b}) = b_i if \max(\mathbf{b}) = b_i and p_i(\mathbf{b}) = 0 otherwise (same tie-breaking as above).^{[2]}
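
A short sketch transcribing the two mechanisms above into code (tie-breaking toward the lower index, as stated; we assume at least one positive bid):

```python
def proportional_all_pay(bids):
    """x_i = b_i / sum_j b_j ; every player pays their bid regardless of the outcome."""
    total = sum(bids)
    allocations = [b / total for b in bids]
    payments = list(bids)
    return allocations, payments

def winner_take_all(bids):
    """The highest bidder receives the whole allocation and is the only one who pays."""
    winner = bids.index(max(bids))  # list.index breaks ties toward the lower index
    allocations = [1.0 if i == winner else 0.0 for i in range(len(bids))]
    payments = [bids[i] if i == winner else 0.0 for i in range(len(bids))]
    return allocations, payments
```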

Comparing the outcomes

To demonstrate the different outcomes from these two mechanisms, consider the two-player game where Player 1 has a valuation of v_1 = 4 and Player 2 has a valuation of v_2 = 2. (We consider a complete information setting in which the individual values are common knowledge. To see how the equilibria bid is calculated and for extended discussion, see Aside 1.)

  • Proportional-all-pay outcome:
    • Equilibrium Bids: \qquad\,\,\,\;\;\; b_1 = 8/9, \,b_2 = 4/9
    • Equilibrium Allocations: \;\;\; x_1 = 2/3, x_2 = 1/3
    • Equilibrium Payments: \;\;\;\; p_1 = 8/9, \,p_2 = 4/9

This all should feel intuitively correct; with v_1 = 2 \cdot v_2 (Player 1 has 2x the value for the block), Player 1 bids, receives and pays twice as much as Player 2.

  • Winner-take-all outcome:
    • Equilibrium Bids: \qquad\,\,\,\;\;\; b_1 = 2+\epsilon, b_2 = 2
    • Equilibrium Allocations: \;\;\; x_1 = 1, \quad\;\; x_2 = 0
    • Equilibrium Payments: \;\;\;\,\, p_1 = 2+\epsilon, p_2 = 0

This is pretty different. Player 1 bids and pays just over Player 2’s value (we use \epsilon to denote a small amount), receiving the entire allocation. Player 2 receives nothing and pays nothing.^{[3]}

Now consider the “revenue” (or the sum of the bids collected by the mechanism) generated from each case:

  • Proportional-all-pay revenue: b_1 + b_2 = 4/3
  • Winner-take-all revenue: \qquad\quad\,\,\,\;\;\;\; b_1 = 2+\epsilon

Winner-take-all has better revenue, corresponding to a more accurate MEV oracle (and thus more MEV burned or smoothed by the protocol) than Proportional-all-pay. Intuitively, by allocating block-production rights to players with lower values (as Proportional-all-pay does), we forgo revenue we would have received had we simply allocated the entire rights to the player with the highest value. We point the interested reader to Aside 1 for a more complete treatment.

Another factor to consider is the “fairness” or “distribution” of the allocation mechanism. For example, suppose we agree on the metric: \text{fairness} = \sqrt{x_1 \cdot x_2} (we use the geometric mean because, for a fixed sum x_1 + x_2, it is maximized at x_1 = x_2 and is zero if either x_1 or x_2 is zero). Now, let’s look at the fairness outcomes of the two candidate mechanisms:

  • Proportional-all-pay fairness: \sqrt{1/3 \cdot 2/3} \approx 0.471
  • Winner-take-all fairness: \qquad\qquad\;\,\;\sqrt{1 \cdot 0} = 0

Here, the “performance” of the two mechanisms flips – the Winner-take-all is less fair because Player 2 has no chance of winning the game with a lower value. In the Proportional-all-pay, Player 2 can hope to win some blocks despite bidding a lower value. As another example, consider the case where v_1=v_2+\epsilon. The Winner-take-all mechanism allocates all the rights to Player 1, while the Proportional-all-pay splits the rights approximately in half.
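
Plugging the two-player example (v_1 = 4, v_2 = 2) and the equilibrium bids quoted above into these mechanisms reproduces the revenue and fairness numbers (a sketch reusing proportional_all_pay and winner_take_all from the snippet earlier in this section):

```python
import math

eps = 1e-9

# Equilibrium bids taken from the text (derived in Aside #1).
bids_prop = [8 / 9, 4 / 9]    # Proportional-all-pay
bids_wta = [2.0 + eps, 2.0]   # Winner-take-all

for name, bids, mechanism in [("Proportional-all-pay", bids_prop, proportional_all_pay),
                              ("Winner-take-all", bids_wta, winner_take_all)]:
    allocations, payments = mechanism(bids)
    revenue = sum(payments)
    fairness = math.sqrt(allocations[0] * allocations[1])  # geometric mean of allocations
    print(f"{name}: revenue = {revenue:.3f}, fairness = {fairness:.3f}")

# Expected: revenue 1.333 vs ~2.000, fairness ~0.471 vs 0.000
```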

Brief note: why might the protocol care about fairness? In a decentralized protocol, a single actor having too much power undermines the credible neutrality of the system. As such, the protocol may be willing to “pay” (in the form of reduced revenue) to ensure that a resource is more evenly distributed among players. Alternatively, we could consider this a measure of “entropy” or even simply randomness being injected into the outcome of the game to try to reduce the influence the most dominant player can have.

This leads to the punchline from this small example: a fundamental trade-off exists between MEV-oracle quality and fairness. The Proportional-all-pay mechanism (and hence the original execution tickets proposal) is fairer because both players win the game with some probability, incentivizing them each (but more importantly, the higher value player) to shade their bid accordingly, lowering the revenue, and thus the MEV-oracle accuracy, of the mechanism. The first price mechanism elicits higher bids since bidders only pay if they win the entire block production rights, increasing the revenue, but this Winner-take-all dynamic makes the allocation less fair.

Open question: is Proportional-all-pay an “optimal” Sybil-proof mechanism? In the permissionless setting, we only consider Sybil-proof mechanisms, where a player doesn’t benefit from splitting their bid across multiple identities. We posit that the Proportional-all-pay mechanism sits in the Goldilocks zone of Sybil-proof mechanisms, achieving both good revenue/MEV-oracle accuracy and fairness. We leave it as an interesting open problem to determine the extent to which the Proportional-all-pay mechanism is “optimal” (e.g., we were unable to find another Sybil-proof mechanism that dominates it in both revenue and fairness).

Aside #1 – Calculating equilibrium bids

Convenience link to skip to the conclusion for the less-keen reader :wink:

In the numerical example above, we provide the equilibrium bids for the Winner-take-all and Proportional-all-pay mechanisms without proof. How can these be determined generally (e.g., continuing to assume that bidders’ values are common knowledge)?^{[4]}

The Winner-take-all is the familiar First Price Auction setting. In such auctions, the complete information Pure-Nash equilibrium has the two highest-value bidders, each bidding the second-highest bidder’s value, with every other agent bidding below this. In effect, we expect that the highest-value bidder always wins while paying the second highest bidder’s value (we represent this simply as b_1=b_2+\epsilon, though you could equivalently tie-break in favor of the higher-value player).

In the Proportional-all-pay setting, each player has the utility,

\begin{align} U_i (\mathbf{b}) &= v_i \cdot x_i(\mathbf{b}) - b_i \\ &= v_i \cdot \frac{b_i}{\sum_j b_j} - b_i. \end{align}

To determine the existence of a Pure Nash Equilibrium, we consider each player’s first- and second-order conditions. Let \mathbf{b}^* denote the candidate equilibrium set of bids.

  1. First-order condition: \partial U_i / \partial b_i (\mathbf{b^*}) = 0 (or \partial U_i / \partial b_i (\mathbf{b^*}) \leq 0, \;\forall i \text{ s.t. } b^*_i=0.)
    • Intuitively, this condition checks that a non-zero-bidding player is (to first order) locally indifferent to small changes in its bid.
  2. Second-order condition: \partial^2 U_i / \partial b_i^2 < 0
    • Intuitively, this condition ensures that the utility function is concave, implying that locally best responses are globally best for all players.

In our simple two-player example in the Proportional-all-pay setting, we have the following.

\begin{align} \frac{\partial U_1}{\partial b_1}(\mathbf{b}) = \frac{v_1 b_2}{(b_1 + b_2)^2} - 1 = 0 \; , \quad \frac{\partial U_2}{\partial b_2}(\mathbf{b}) = \frac{v_2 b_1}{(b_1 + b_2)^2} - 1 = 0 \end{align}

This system can be solved to find the equilibrium bids, \mathbf{b}^*,

\begin{align} b^*_1 = \frac{v_1^2 v_2}{(v_1 + v_2)^2}\; , \quad b^*_2 = \frac{v_2^2 v_1}{(v_1 + v_2)^2}. \end{align}

For our toy example, we have v_1=4, \; v_2=2 \implies b_1^* = 32/36, \; b_2^* = 16/36. We can verify our first-order conditions

\begin{align} \frac{4 \cdot 16/36}{16/9} - 1 = 0 \; , \quad \frac{2 \cdot 32/36}{16/9} - 1 = 0 \quad \checkmark \end{align}

The second-order conditions can also be verified – this is left as an exercise for the reader :wink:
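
For the less trusting reader, here is a small numerical check of the closed-form bids and the first-order conditions (a sketch; symbols match the derivation above):

```python
def equilibrium_bids(v1, v2):
    """Closed-form Proportional-all-pay equilibrium bids for the two-player game."""
    denom = (v1 + v2) ** 2
    return v1 ** 2 * v2 / denom, v2 ** 2 * v1 / denom

def first_order_condition(v_i, b_i, b_j):
    """dU_i/db_i = v_i * b_j / (b_i + b_j)^2 - 1, which should vanish at equilibrium."""
    return v_i * b_j / (b_i + b_j) ** 2 - 1

b1, b2 = equilibrium_bids(4, 2)
assert abs(b1 - 8 / 9) < 1e-12 and abs(b2 - 4 / 9) < 1e-12
assert abs(first_order_condition(4, b1, b2)) < 1e-12
assert abs(first_order_condition(2, b2, b1)) < 1e-12
```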

Aside #2 – Tullock Contests

Last chance to skip to the conclusion. (If you continue, by definition, you are the “interested reader” – congrats.)

The model described above is established in the algorithmic game theory literature as a Tullock Contest – named for Gordon Tullock, who explored the idea in his seminal work, “Efficient Rent Seeking.” He motivates this study by considering situations where investment is made before the outcome is known and where the investments might not transfer easily between participants, e.g., political spending.

Suppose, for example, that we organize a lobby in Washington for the purpose of raising the price of milk and are unsuccessful. We cannot simply transfer our collection of contacts, influences, past bribes, and so forth to the steel manufacturers’ lobby. In general, our investments are too specialized, and, in many cases, they are matters of very particular and detailed goodwill to a specific organization. It is true that we could sell the steel lobby our lobbyists with their connections and perhaps our mailing list. But presumably, all these things have been bought by us at their proper cost. Our investment has not paid, but there is nothing left to transfer.” – Gordon Tullock (1980)

This allocation mechanism has been applied in the previous crypto literature as well. Back in 2018 (ancient history in crypto-terms), Arnosti and Weinberg wrote “Bitcoin: A natural oligopoly,” which demonstrates that even small operating cost advantages among miners in a Proof-of-Work system lead to surprisingly concentrated equilibria. Similarly, Bahrani, Garimidi, and Roughgarden (these names sound familiar :D) explored the centralization effects of heterogeneity in block building skill in “Centralization in Block Building and Proposer-Builder Separation.” There appears to be a deep relationship between permissionless crypto-economic systems, where anti-Sybil mechanisms typically require financial investment for participation, and Tullock Contests – more on this Soon™ (maybe).

(4) – Extrapolation

Phew, thanks for hanging in there; let’s take stock of what we learned. Section 3 demonstrates the fundamental trade-off between MEV-oracle accuracy and fairness of an instantiation of an execution ticket mechanism. A protocol may be willing to *pay* (in the form of reduced revenue) for more distribution and entropy with the goal of improving and maintaining the protocol’s credible neutrality. Further, using the model to derive equilibrium bids helps inform how we may expect agents to respond to various allocation and payment rules. Neat – our framework led to some interesting and hopefully helpful insights! Maybe we can extend it to other problems in the space as well?

Further questions that this specific model may help answer (returning to three of our W^4H questions):

  • What is the good that players are competing for?
    • Can we extend the model dimensionality, allowing different players to have different values for portions of the block (e.g., an arbitrageur may disproportionately value the top of a block but have zero value for the remainder)?
  • When does the game take place?
    • How does the MEV-oracle accuracy change if the game takes place far ahead of time versus during the slot itself (e.g., pricing future expected MEV versus present realizable MEV)?
  • How is the block builder chosen?
    • Are there other Sybil-proof mechanisms that dominate Proportional-all-pay in both revenue and fairness?
    • Can we more formally characterize the fundamental trade-offs between revenue and fairness?
    • Given the Sybil-proofness constraint, what alternative allocation and payment rules should be explored (e.g., Tullock contests where the allocation rule is parameterized by \alpha>1, with x_i = b_i^\alpha / \sum_j b_j^\alpha – sketched below), and can we identify the optimal choice?
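
For reference, the \alpha-parameterized allocation rule from the last question is a one-liner (a sketch; \alpha = 1 recovers Proportional-all-pay, and larger \alpha pushes the allocation toward Winner-take-all):

```python
def tullock_allocation(bids, alpha=1.0):
    """Generalized Tullock allocation: x_i = b_i^alpha / sum_j b_j^alpha."""
    powered = [b ** alpha for b in bids]
    total = sum(powered)
    return [p / total for p in powered]
```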

Zooming back out, other versions of the W^4H questions may require different models to reason about.

  • Who controls the outcome of the game?
    • In the committee-enforced version of these mechanisms, how could collusive behavior emerge?
    • If the just-in-time block auction continues to take place out-of-protocol, should we explicitly describe the secondary market?
  • When does the game take place?
    • How critical is network latency when considering lookahead block-space sales versus same-slot? Is it worth modeling the partially-synchronous setting?
    • How do block builder valuations change if multi-slot MEV is feasible?
  • Where does the MEV oracle come from?
    • If it comes from the committee, are there incentives for committee members to behave dishonestly?
    • Do such incentives depend on whether protocol-captured MEV is burned or smoothed?

As per usual, open questions abound, but we hope (a) the W^4H questions help expand the understanding of block-space distribution mechanisms and (b) the deep dive into allocation mechanisms helps inform the potential design space of execution tickets.


^ The world once we figure out MEV.

Excited to be here with y’all.

— made with :heart: by mike, pranav, & dr. tim roughgarden.


footnotes

^{[1]}: The “all-pay” feature is made possible by burning the price paid for each ticket. :leftwards_arrow_with_hook:

^{[2]}: The “winner-pay” version could be done by refunding all non-winning ticket holders their payment at the end of each round. :leftwards_arrow_with_hook:

^{[3]}: As mentioned earlier, simply refunding the non-winning tickets instantiates the “winner-pays” property. :leftwards_arrow_with_hook:

^{[4]}: This is primarily for tractability in calculating equilibria analytically. Although a strong assumption, it’s not unreasonable in the context of lookahead auctions where bidders might have established prior distributions on their competitor’s valuations. We also view the insights from studying the complete-information equilibria as valuable heuristics for how we may expect these mechanisms to behave in practice. :leftwards_arrow_with_hook:

3 posts - 2 participants

Read full topic

Architecture The Preconfirmation Sauna

Published: Jun 07, 2024

View in forum →Remove

The Preconfirmation Sauna

We are Switchboard, a Nethermind-backed team dedicated to answering the most pressing preconfirmation questions. Thanks to @swapnilraj, @tkstanczak, @Julian, @linoscope, and Michal Zajac for your helpful comments.

TL;DR: Decentralized preconfirmers come with significant tradeoffs that will likely make them impractical for the foreseeable future. The preconfirmation sauna creates a repeated game amongst permissionless preconfirmers that incentivizes trust, specialization within these tradeoffs, innovation, and user-focused preconfirmation policies. More than this, the sauna provides a common interface for all entities in the preconfirmation supply chain to interact, simplifying the preconfirmation process. We believe this set of sauna-related health benefits can unlock preconfirmations.

In the short time that we as an industry have been thinking about preconfirmations, more or less since November 2023, several teams have emerged to compete for the title of “the preconfirmation protocol”. This is despite almost no shared understanding or appreciation for how a preconfirmer should behave, or how this behaviour should be incentivized.

This article outlines Switchboard’s vision for how this preconfirmation competition should play out: specifically, how it can be harnessed to encourage innovation without siloing off parts of the transaction supply chain, and how it can address technological barriers to preconfirmations including fair exchange (see Appendix for more details).

Goal: Leverage this emergent preconfirmation protocol competition to enable preconfirmations in what we call the preconfirmation sauna.

Non-goals:

  • Engineer the preconfirmation sauna immediately. The sauna needs preconfirmation protocols just like the preconfirmation protocols need the sauna. Until these protocols emerge, the sauna remains a vision. The engineering specifications will develop as the needs of everyone in the preconfirmation supply chain become clearer. In this sense, we are waiting for the rocks to heat up before adding water.
  • Confuse the preconfirmation sauna with commit-boost. Commit-boost is a mev-boost rewrite to allow L1 proposers to commit to certain restrictions on their blocks while still outsourcing partial block-building rights. We share their vision for a unified interface for proposer commitments, preconfirmations being a perfect example of a proposer commitment. The sauna isn’t just focused on a unified interface though; it brings competition, pricing policies, preconfirmer registration & slashing conditions, trust, etc. In that sense, commit-boost will depend on the sauna to make preconfirmations a reality.

The Multi-headed Hydra of Preconfirmations

The problem of offering preconfirmations is a multi-headed hydra. What makes a preconfirmer “good” or “bad” will depend on many factors including, but not limited to:

  • Latency
  • Throughput
  • (Trusted) execution guarantees
  • Privacy
  • Cost
  • Interoperability with other protocols
  • Decentralization
  • General trust
  • Liveness/censorship resistance
  • Ability to express intents

Most of these factors have trade-offs, so deciding on a single preconfirmation protocol which somewhat specializes in a subset of these criteria would be a mistake at this early stage of preconfirmation development.

Encouraging Competition

Beyond the warm fuzzy feelings we get from many permissionless preconfirmers, we need competing preconfirmers to keep preconfirmation strategies honest and transparent. Although some preconfirmation guarantees require tradeoffs, e.g. cost vs latency, the benefit of repeated honest behaviour (long-term access to the preconfirmation market and its revenue) involves almost no such trade-offs. Even in the fringe cases where malicious behaviour may yield significant immediate profits for a preconfirmer, proposers can always opt to choose entities whose reputational value dominates any such profits, e.g. Flashbots, Vitalik, etc. (even within this set of “trusted” entities, there will be other tradeoffs that must be considered).

This out-of-the-sauna trust will be necessary as the demand for high-throughput, low-latency preconfirmations likely means we need to delegate preconfirmations temporarily to high-resource and/or centralized entities/committees. In the sauna, misbehaving preconfirmers can be reported and deprioritized by users and proposers, creating a significant disincentive for preconfirmers to act maliciously. As such, users and proposers can immediately begin to depend on the subjective parts of the preconfirmation supply chain, such as pricing policies and expected response times, that are necessary for preconfirmations to become a reality now.

The flaw in existing approaches

The community has been semi-publicly focused on the infrastructure that enables a proposer to offer preconfirmations. There are 2 broad solutions in this regard:

  • The based proposer commits to a specific preconfirmation protocol ahead of time. Most projects are looking at this. On its current trajectory, this is probably a “winner-takes-most” approach with each project trying to establish itself as the only game in town.
  • The based rollup commits to a specific preconfirmation protocol ahead of time and this preconfirmation protocol is trusted to enable preconfirmations. If the preconfirmation protocol fails, liveness breaks.

The endgame for both of these approaches requires the user and the proposer to trust a single preconfirmation protocol indefinitely. We see it as unacceptable to depend on the honest behaviour of a single preconfirmation protocol preconfirming all transactions for Ethereum.

If all of the preconfirmation flow comes through one specific preconfirmation protocol, this removes competition and inserts an entity that can extract rents between Ethereum and its users. More than this, it is unlikely that an emergent preconfirmation protocol could or should be trusted without strong technical guarantees or viable alternatives for users and validators to switch to. In this sense, trust is a powerful property for emerging technologies: it can bridge technological barriers to key problems.

With respect to preconfirmations, one such problem that trust can address is the fair exchange of preconfirmation requests and responses. In the Appendix, we discuss how the trust generated through the preconfirmation sauna offers a viable solution to the fair exchange problem.


The current path to preconfirmations involves many protocols building by themselves, competing for a critical mass of validator adoption. Until that point, preconfirmation infrastructure will fragment until one emerges as dominant. This game ends with the best-backed/-connected protocol winning. This leads to an indefinite monopoly on the preconfirmation supply chain, which harms both competition and trust.

Enter the Sauna.

From Wikipedia: “Saunas are strictly egalitarian places: no titles or hierarchies are used in the sauna.” Business is often done in saunas. If someone is ever caught misbehaving, people can choose not to sauna with them again.

The preconfirmation sauna is completely agnostic to the preconfirmation protocols that exist within the sauna framework. To enter the preconfirmation sauna, preconfirmers and their respective protocols register as prospective preconfirmers in the system. Proposers then choose whichever preconfirmer(s) they like to delegate preconfirmation rights to.

Although the intuition is that proposers would choose the preconfirmer paying the most for the opportunity, there are other factors at play. Proposers must trust the preconfirmer to broadcast all of the preconfirmations to the network ahead of/at block building time, depending on whether or not the preconfirmer is also the one building the final block. This is because preconfirmations almost certainly need proposer slashing conditions to prevent safety or liveness faults. As such, a proposer risks being slashed by choosing an untrusted preconfirmer.


To enable preconfirmations we need healthy competition among protocols. Importantly, this competition should not be over fragmenting users and validators, but over the quality of preconfirmations being offered. Competition over improved preconfirmation guarantees not only creates an incentive to act honestly, but also an incentive to innovate. The vision of the sauna is to ensure permissionless access to the preconfirmation game, and to leverage the competition and trust that this permissionlessness brings. For this to happen, switching between protocols needs to be seamless for everyone in the supply chain. Although this probably requires some standards, standards aren’t a necessity.

Everyone to the Sauna.

If we want proposers and users to engage in specific preconfirmation protocols, we need trust. More than trust, there need to be clear incentives for everyone to engage with one another.

  1. Proposers want to use the preconfirmer. Proposers need to know that committing to a preconfirmation protocol will generate higher revenue than simply outsourcing building through something like mev-boost. This requires the preconfirmation protocols to be paying competitively for the right to preconfirm. Almost by definition, this can only be achieved through competition among independent protocols. Proposers also need to know with a high degree of confidence that a preconfirmer will maintain liveness and forward all preconfirmations to the network, and avoid any proposer slashing conditions. With a sauna full of preconfirmers, proposers can choose more robust preconfirmers with less failure and slashing risk at the expense of lost revenue. Proposers can also opt for riskier preconfirmers offering higher revenue. The choice is theirs!
  2. Users want to use the preconfirmer. Fundamentally, preconfirmation revenue can only be generated through user submission of preconfirmation requests. This requires users to trust the preconfirmation protocol. With only a single preconfirmation protocol, this likely leads to preconfirmation policies that favour the relayer and not the user or proposer. User adoption of minimally trusted preconfirmations (no/limited guarantees of inclusion, selection biases on inclusion, etc.) will likely be limited. In that sense, we need competing preconfirmation protocols to incentivize trust in the preconfirmer and policies that favour users.

Trust breeds trust. With a sauna full of competing preconfirmation protocols, there are clear incentives to optimize preconfirmation policies and guarantees, including trust (recall the Repeated Preconfirmation Sauna Game), to secure selection from the proposer and the users.

Life after Sauna

Importantly, the use of the sauna isn’t necessary. Proposers can communicate directly with, and commit to, their preferred preconfirmer outside of the sauna. We still imagine an endgame where there is some preconfirmer that trustlessly provides “perfect” preconfirmations.

Conclusion

Until the perfect preconfirmation protocol emerges, protocols need to compete, experiment and innovate while still enabling preconfirmations. More than this, the ability to delegate, send, and receive preconfirmations should be as simple and standardized as possible. To do this, a credibly neutral platform is essential. This is what we envision for the sauna. We look forward to seeing you in there.

Call to Action

At Switchboard, we aren’t just building therapeutic wooden structures. We’re also working on improving the preconfirmation protocols themselves. Although we believe preconfirmations could probably be offered along Justin’s timeline of this year with help from the sauna, the multi-headed hydra of preconfirmations must still be tackled. Preconfirmation RnD, to which Switchboard is committed, will be vital in this regard.

Appendix. The Fair Exchange Problem: A Sauna Case Study

There are many issues that a preconfirmation protocol must address to be considered viable, some of which are outlined here through a strawman framing of a preconfirmation protocol. One of the key components needed for preconfirmations is the fair exchange of preconfirmation requests from users and responses (the preconfirmations themselves) from the preconfirmers.

Fair exchange is a hard problem. Users want to get the strongest possible guarantees of inclusion/execution when they submit a request, while preconfirmers want to give the weakest guarantees of inclusion/execution, retaining as much optionality as they can until the block must be built. The most well-known solution we have for fair exchange, albeit for block building, is mev-boost. This solution hinges on trust in the relayers that execute the fair exchange of blocks between block builders and block proposers. This game started out with a single trusted Flashbots relayer, but now many trusted relayers exist. Some builders even run their own relayer, most notably Titan. Doesn’t this break the fair exchange guarantees of a trust intermediary? No!

The Titan relay has a significant incentive to never betray the trust of L1 proposers and, indirectly, of builders using the Titan relay. With the Titan relay, the Titan builder has a clear advantage compared to submitting blocks to any other relay, as Titan builder bids can be updated much more quickly at the Titan relay than if the bids first needed to be relayed to an external relay. Furthermore, if any Titan misbehaviour is ever detected (or even reasonably suspected), proposers will stop trusting, listening to, and selecting the Titan relay. Consequently, Titan would probably lose the edge they get from relaying their own blocks forever if a single misbehaviour is ever detected. This is incredibly powerful. This is the Repeated Sauna Game.

The Repeated Sauna Game can also be played between preconfirmers. Some protocols may pride themselves on trustless approaches to the fair exchange problem: something like a censorship-resistant input tape with strong data-availability guarantees. Unfortunately, this will come with latency, throughput, and cost tradeoffs. In that sense, users and proposers are probably happy to accept guarantees from the Titan of preconfirmations, even if that currently means a centralized entity with no formal guarantees of fair exchange. Thanks to the Repeated Preconfirmation Sauna Game, this Titan of preconfirmations has just as much to lose, if not more, from misbehaving when offering preconfirmations. With preconfirmations, the preconfirmer will be able to capture revenue from preconfirming, albeit within the bounds of the ordering and execution policies that they commit to offering users and proposers. Thus, the Repeated Preconfirmation Sauna Game provides tangible incentives to keep preconfirmers behaving honestly, enabling fair exchange, among many other services where some amount of trust can bridge current technological barriers.

To summarize, the Repeated Preconfirmation Sauna Game establishes dependable economic trust among rational preconfirmers through:

  • The existence of alternative rational and competitive preconfirmers.
  • The monitoring of preconfirmer behaviour by many independent observers; users, wallets, proposers, other preconfirmers, etc.
  • Reputation and eligibility for future, likely increasing, revenue outweighing short-term incentives to deviate.

1 post - 1 participant

Read full topic

Block proposer Censorship Resistance Through Game Theory

Published: Jun 07, 2024

View in forum →Remove

tldr; relays should divulge more metadata about the block so that validators can make more informed decisions that align with their preferences

Thanks to @simbro @mmp for early feedback

Censorship resistance is a fundamental property that underpins the decentralised nature of blockchain networks and ensures the integrity and accessibility of transactions within the system. While the threat of “hard censorship” is well understood, the challenges of preventing “soft censorship” are more nuanced.

“Soft censorship” involves delaying or slowing down the propagation and inclusion of valid transactions. It could be accidental or deliberate; either way, it disrupts the normal flow of transactions, negatively impacts execution quality and harms user experience.

“Hard censorship” means the complete and permanent prevention or blocking of valid transactions from landing on-chain (e.g., OFAC compliance). The majority of research on censorship resistance has focused on this topic.

Previous approaches to mitigating censorship have focused on inclusion lists and tend towards some form of unconditionality. A proposer must include a transaction, or it is a protocol violation.

By applying game theory principles to PBS, incentives can be leveraged to foster Ethereum-aligned outcomes. Our goal is to create a scenario where including all valid transactions and resisting censorship attempts becomes the dominant strategy for block builders. This approach not only addresses the challenges of soft censorship but also mitigates the risks of hard censorship, and it has the potential to significantly improve the integrity of Ethereum.

This proposal is not meant to replace inclusion lists but should be considered a progressive move towards more robust forms of censorship resistance.

What motivates validators

Under mev-boost, the most profitable block always wins; this assumes that all actors in the PBS auction are rational. However, it fails to account for the fact that some validators are motivated by factors other than profit. PBS has impacted censorship resistance and pushed the network towards centralisation. Many validators (over 8% at last count) refrain from using mev-boost, perhaps because of concerns around censorship and centralisation. Validators may perceive themselves as being more Ethereum-aligned if they are not running mev-boost because they accept transactions that block builders censor.

Block Builders

The current implementation of PBS is designed so that block builders will pay the maximum possible to validators, to the detriment of all other factors. One reason PBS was introduced was the idea that block builders would be better at building blocks than validators, especially solo validators. They would be better connected and have access to private order flow. The problem is that better was assumed to mean more profitable. The current optimum strategy is to be a vertically integrated searcher builder that can pay the most for a block. Anything that gets included in that block is secondary and only there to reduce the cost burden on the purchase of the block.

We need to change the game, so the optimum strategy is to fill the block with as many transactions and blobs as possible within the protocol’s constraints and pay the validator for the privilege. Put another way, block builders should be better at building blocks than validators. To achieve this new optimum strategy, they must be well connected to the far reaches of the network, accept transactions from any source, and include all valid transactions.

In short, we need to force the transaction supply chain to be more representative of validators’ wishes.

[Chart: 10% of validators censor vs. 39% of builders censor]

Source: https://censorship.pics/

How to introduce a dilemma

Currently, block proposers are missing out on priority fee rewards. Let’s assume an OFAC-censored transaction t_c is in the mempool, and a block builder creates a ‘censored’ block worth b_c. The total reward available is b_c + t_c, yet the block proposer only gets b_c. The block proposer chooses to accept b_c because it is greater than what they could earn by building the block themselves (see builder_boost_factor). The block builder knows it can build a better block than the block proposer because it has more order flow. The only thing that keeps the block builder from bidding just above the fees of a block containing only mempool transactions is healthy competition between block builders.

Previous works have explored the possibility of a multi-proposer system or partial block auctions requiring significant protocol and infrastructure changes.

Proposer inclusion lists

To introduce some form of dilemma for the block builder, the block proposer should have better visibility of what they are delegating. Currently, the relay only exposes information about fee rewards and gas used. We could extend builder_boost_factor to include an opinion about gas used, which would only require changes to the consensus layer client. But let's go much further: the relay should publish the transaction hashes of the winning bid. The consensus layer client can then check those hashes against its view of the mempool and decide whether it wants to delegate block building to that relay.
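
As a rough illustration of that proposer-side check, the Python sketch below assumes the relay exposes the winning bid's transaction hashes and compares them with the proposer's mempool view before delegating; the function name, parameters, and threshold are hypothetical and not part of any client or relay spec.

def should_accept_bid(bid_value, bid_tx_hashes, mempool_hashes, local_block_value,
                      boost_factor=1.0, max_missing=0):
    # Hypothetical proposer-side check: reject bids that omit too many transactions
    # the proposer can see in its own mempool, otherwise fall back to the usual
    # value comparison (in the spirit of builder_boost_factor).
    bid_set = set(bid_tx_hashes)
    missing = [tx for tx in mempool_hashes if tx not in bid_set]
    if len(missing) > max_missing:
        return False  # bid looks censored: build the block locally instead
    return bid_value >= boost_factor * local_block_value

mempool = {"0xaa", "0xbb", "0xcc"}
print(should_accept_bid(3.0, ["0xaa", "0xbb", "0xcc", "0xdd"], mempool, 1.0))  # True
print(should_accept_bid(3.0, ["0xdd"], mempool, 1.0))                          # False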

To frame this in more formal game theoretical terms and notation, we can consider the interaction between the block proposer and the block builder as a strategic game. This game involves decisions on whether to include certain transactions (t1 and t2) in the block, based on the incentives (tips and MEV) associated with each transaction. The players in this game are the block proposer and the block builder, and their strategies involve deciding which transactions to include or censor.

Game Setup

Players: Block Proposer (P) and Block Builder (B).

Strategies:

For Block Proposer (P): Accept (A) or Censor (C) transactions.

For Block Builder (B): Accept (A) or Censor (C) transactions.

Payoffs: Defined in terms of gwei received from tips and MEV.

Transactions

t1: A censored public mempool transaction paying 1 gwei in tips.

t2: A private transaction paying 2 gwei in tips and 2 gwei in MEV.

Payoff Matrix

The outcomes of the game can be represented in a payoff matrix where the rows represent the Block Builder’s strategies, and the columns represent the Block Proposer’s strategies. The payoffs are represented as tuples, with the first element being the payoff for the Block Proposer and the second element being the payoff for the Block Builder.

| B \ P | Accept (A) | Censor (C) |
|---|---|---|
| Accept (A) | (3, 2) | (3, 2) |
| Censor (C) | (1, 0) | (2, 2) |

Analysis

Builder censors and proposer censors (C, C): The block contains t2. The proposer receives 2 gwei in tips, and the builder receives 2 gwei in MEV. Informally this is a Win-Lose scenario; in game-theoretical terms, it is a Nash equilibrium if the builder's payoff for censoring is at least as high as for accepting when the proposer does not enforce inclusion.

Builder censors and proposer accepts (C, A): The proposer chooses to build a block with t1, receiving 1 gwei in tips. The validator is “eth aligned” and wishes to forgo the additional rewards. It is a Lose-Lose scenario, indicating a misalignment of strategies leading to suboptimal payoffs for both players.

Builder accepts and proposer accepts (A, A): The block contains both t1 and t2. The proposer receives 3 gwei in tips, and the builder receives 2 gwei in MEV. This is a Win-Win scenario, representing a Pareto optimal outcome where no player can be made better off without making the other player worse off.

Builder accepts and proposer censors/doesn’t enforce (A, C): The block contains both t1 and t2 because it is more profitable. The proposer receives 3 gwei in tips, and the builder receives 2 gwei in MEV. This is also a Win-Win scenario, similar to (A, A), indicating that the proposer’s decision to censor does not change the outcome due to the builder’s acceptance.

In this strategic game, the optimal outcomes for both players are when both accept the transactions, leading to a Win-Win situation. The game illustrates the importance of alignment in strategies between the block proposer and the block builder to maximize their respective payoffs.
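
As a sanity check on the matrix, the short Python sketch below (illustrative only) enumerates the four strategy profiles and flags the (weak) Nash equilibria; with these exact payoffs, Accept is weakly dominant for the builder.

# Illustrative check of the payoff matrix above; payoffs are (proposer, builder) in gwei.
payoffs = {
    ("A", "A"): (3, 2),  # builder accepts, proposer accepts
    ("A", "C"): (3, 2),  # builder accepts, proposer censors / does not enforce
    ("C", "A"): (1, 0),  # builder censors, proposer accepts and builds locally
    ("C", "C"): (2, 2),  # builder censors, proposer censors
}

def is_nash(builder, proposer):
    p, b = payoffs[(builder, proposer)]
    builder_best = all(b >= payoffs[(alt, proposer)][1] for alt in "AC")
    proposer_best = all(p >= payoffs[(builder, alt)][0] for alt in "AC")
    return builder_best and proposer_best

for builder, proposer in payoffs:
    print(builder, proposer, payoffs[(builder, proposer)],
          "Nash equilibrium" if is_nash(builder, proposer) else "")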

Summary

Some validators need to be willing to act irrationally (roughly 10% already do) whilst using mev-boost: solo validators can stay irrational longer than block builders can stay solvent.

The threat of not accepting a block builder’s block should be enough incentive for them to choose not to censor transactions.

Which validators censor and which do not will become apparent over time. This may mean that block builders will change their tactics depending on who is proposing the next block.

Smaller transactions and participants are less likely to be sidelined by profit-maximising behaviour. This will lead to priority fees trending downwards.

Block builders will help strengthen the network rather than centralise it.

No hard fork required. The only changes needed are in the relay spec, the consensus layer client and some minor changes in the execution client.

The only downside I can foresee is that relays may be leaking some information to other block builders.

  • Minor changes to clients
  • Minor changes to relay infrastructure

Conclusion

In summary, by leveraging game theory principles, this proposal aims to significantly reduce transaction censorship by block builders, fostering a more inclusive and decentralized network. In the best case, it will lead to the cessation of censorship practices among block builders, enhancing the network’s integrity. In the worst case, given the relatively minor changes required, it might represent a learning opportunity with minimal loss. At the very least, it promises to stimulate more competition and differentiation within the block builder market, contributing to the overall health and diversity of the ecosystem.

1 post - 1 participant

Read full topic

zk-s[nt]arks 1st completely trustless Blockchain, using pure math and zk-snarks for validation. = High censorship resistance

Published: Jun 06, 2024

View in forum →Remove

Private Scalable Fully Decentralized Payment Network (Without Compromise)

Brandon “Cryptskii” Ramsay

Abstract

This research paper presents a novel decentralized payment network model that leverages zero-knowledge proofs (ZKPs) to ensure transaction validity and balance consistency without relying on validators or consensus mechanisms. The network features a fixed token supply, airdropped to participants at inception, eliminating the need for mining and associated costs. The core design focuses on direct mutual verification between sender and receiver, with an extensive exploration of the underlying mathematical foundations, formal proofs, algorithms, and data structures underpinning this system.

The proposed payment network aims to address key challenges faced by existing decentralized payment systems, such as high transaction costs, scalability limitations, and privacy concerns. By employing ZKPs and a unilateral payment channel architecture, the network enables efficient, secure, and privacy-preserving transactions without the need for intermediaries or complex consensus protocols. The paper provides a comprehensive analysis of the system’s security, privacy, and scalability properties, along with detailed comparisons to alternative approaches. The underlying mathematical framework and formal proofs are rigorously defined, ensuring the robustness and correctness of the proposed model.

Introduction

Decentralized payment systems have garnered significant attention due to their potential to provide secure, transparent, and efficient financial transactions without intermediaries. However, existing solutions often face challenges related to high transaction costs, scalability limitations, and privacy concerns. This research introduces a novel decentralized payment network that leverages zero-knowledge proofs (ZKPs) and unilateral payment channels to address these issues.

The proposed network architecture is designed to address specific challenges faced by existing decentralized payment systems:

  1. The absence of mining and associated costs solves the issue of high transaction fees and energy consumption in traditional proof-of-work-based systems.
  2. The elimination of validators and consensus mechanisms tackles the scalability limitations and potential centralization risks in proof-of-stake and delegated systems.
  3. The use of storage partitioning and off-chain payment channels addresses the scalability and privacy concerns associated with storing all transactions on-chain.

By distributing a fixed token supply to participants at the network’s inception, the system eliminates the need for mining and its associated costs. The network focuses on enabling direct mutual verification of transactions between senders and receivers, ensuring the validity of transactions and the consistency of account balances without relying on validators or consensus mechanisms. By leveraging zk-SNARKs, the network allows for direct proof of validity between sender and receiver, as the succinct zero-knowledge proofs inherently prove the correctness of transactions.

To enhance efficiency and scalability, the network uses a multi-tier Merkle tree system with Merkle proofs, ensuring that only a constant succinct size (O(1)) of data is submitted to the blockchain. This design minimizes on-chain storage requirements and ensures data availability.

At the core of this novel payment network lies a comprehensive mathematical framework that leverages zero-knowledge proofs, particularly zk-SNARKs, to validate transactions and generate wallet state proofs. These proofs enable efficient verification of transaction validity and balance updates while preserving user privacy.

The network’s architecture is composed of several key components, including unilateral payment channels, hierarchical smart contracts, and partitioned storage nodes. These components work together to enable scalable, secure, and privacy-preserving transactions, while minimizing on-chain storage requirements and ensuring data availability.

To ensure the robustness and correctness of the proposed model, the paper presents formal algorithms, theorems, and proofs for crucial aspects of the system, such as the Balance Consistency Theorem and the dispute resolution mechanism. These mathematical formalisms provide a solid foundation for the security and reliability of the payment network.

Furthermore, the paper includes an in-depth analysis of the network’s security, privacy, and scalability properties, highlighting its advantages over alternative approaches, such as traditional blockchain-based payment systems and centralized payment networks. The analysis also acknowledges potential limitations and challenges, such as the complexity of zk-SNARK implementations and the need for ongoing optimizations.

The main contributions of this research can be summarized as follows:

  1. A comprehensive mathematical framework for ensuring transaction validity and balance consistency using zero-knowledge proofs, particularly zk-SNARKs.
  2. A detailed description of the network’s architecture, including unilateral payment channels, hierarchical smart contracts, and partitioned storage nodes.
  3. Formal algorithms, theorems, and proofs for key components of the system, such as the Balance Consistency Theorem, zk-SNARK proof generation, smart contract verification, and dispute resolution.
  4. An in-depth analysis of the network’s security, privacy, and scalability properties, along with detailed comparisons to alternative approaches.
  5. An exploration of promising use cases that leverage the enhanced privacy features of the proposed system.

The proposed decentralized payment network presents a promising approach to enabling secure, private, and scalable transactions in a decentralized setting, paving the way for more efficient and accessible financial services on the blockchain. The extensive mathematical formalism and rigorous analysis provided in this paper contribute to the growing body of research on decentralized payment systems and demonstrate the potential of zero-knowledge proofs in enhancing the security, privacy, and scalability of blockchain-based financial applications.

Background

This section introduces the key concepts and technologies used in the proposed decentralized payment network, providing a solid foundation for understanding the system’s design and functionality.

Zero-Knowledge Proofs (ZKPs)

Zero-knowledge proofs (ZKPs) are cryptographic protocols that enable one party (the prover) to prove to another party (the verifier) that a statement is true without revealing any information beyond the validity of the statement itself. ZKPs have numerous applications in blockchain technology, particularly in privacy-preserving transactions and scalable off-chain solutions.

One prominent type of ZKP is zk-SNARKs (Zero-Knowledge Succinct Non-Interactive Arguments of Knowledge). zk-SNARKs enable the generation of succinct proofs that can be verified quickly, making them well-suited for blockchain applications where proofs need to be stored on-chain and verified by multiple parties.

Definition: A zero-knowledge proof for a statement S is a protocol between a prover P and a verifier V such that:

  • Completeness: If S is true, V will be convinced by P with high probability.
  • Soundness: If S is false, V will not be convinced by P except with negligible probability.
  • Zero-Knowledge: If S is true, V learns nothing beyond the fact that S is true.

In the proposed decentralized payment network, zk-SNARKs are employed to prove the validity of transactions and generate wallet state proofs, ensuring the privacy and security of user balances while enabling efficient verification.

System Architecture

The proposed decentralized payment network consists of several key components: off-chain payment channels, hierarchical smart contracts, and partitioned storage nodes. This section provides a detailed description of each component and their interactions within the overall system.
Certainly! Here is the document reformatted in Markdown with low-level LaTeX math included:

Unilateral Payment Channels

Payment channels are a key component of scalable blockchain solutions, enabling off-chain transactions between parties without the need to record every transaction on the main blockchain. In the proposed network, each user has a unilateral payment channel associated with their wallet contract, which holds their tokens off-chain. This design choice simplifies channel management and enables cross-partition transactions.

Definition: A unilateral payment channel between a user U and their wallet contract W is a tuple (B, T_1, T_2, \ldots, T_n), where:

  • B is the initial balance of U in the payment channel.
  • T_1, T_2, \ldots, T_n are the transactions executed within the payment channel.

The final state of the payment channel is determined by the cumulative effect of all transactions T_1, T_2, \ldots, T_n on the initial balance B.

To set up a unilateral payment channel, a user creates a wallet contract on the blockchain and transfers the desired amount of tokens to the contract. The wallet contract manages the user’s off-chain balance and state updates through zk-SNARK proofs. When a user wants to transfer tokens to another user, they generate a zk-SNARK proof that verifies the validity of the transaction and includes the necessary metadata for the recipient to generate the next transaction proof. This design enables instant transaction finality and eliminates the need for on-chain confirmation.

Example 1: Unilateral Payment Channel Setup and Transactions

Suppose Alice wants to set up a unilateral payment channel with an initial balance of 100 tokens. She creates a wallet contract W_A on the blockchain and transfers 100 tokens to it. The wallet contract initializes Alice’s off-chain balance to 100 tokens.

Later, Alice decides to send 30 tokens to Bob. She generates a zk-SNARK proof \pi_1 that verifies the validity of the transaction, including the availability of sufficient funds and the correctness of the updated balances. Upon generating the proof \pi_1, the wallet contract immediately locks 30 tokens, reducing Alice’s available balance to 70 tokens. Alice sends the transaction details and the proof \pi_1 to Bob.

Bob verifies the proof \pi_1 to ensure the transaction’s validity. If the proof is valid, Alice and Bob update their local off-chain balances accordingly. Alice’s balance remains 70 tokens, while Bob’s balance increases by 30 tokens. Both parties sign the proof \pi_1 to authorize the future rebalancing of their respective payment channels.

If Bob does not accept the proof within a specified timeout period, the smart contract automatically releases the locked funds back to Alice’s available balance, ensuring no funds are indefinitely locked.

This example demonstrates how unilateral payment channels enable secure, off-chain transactions between users while preserving privacy and scalability.

Off-Chain Payment Channel Operations

As introduced in Section 2, off-chain payment channels form the foundation of the proposed network’s scalability. Each user has a unilateral payment channel associated with their wallet contract, which holds their tokens off-chain. The channel setup and transaction process can be formally described as follows:

Channel Setup

\begin{algorithm}[H]
\caption{Unilateral Payment Channel Setup}
\begin{algorithmic}[1]
\Procedure{SetupChannel}{$U, W, B$}
\State $U$ creates a wallet contract $W$ on the blockchain
\State $U$ transfers $B$ tokens to $W$
\State $W$ initializes the off-chain balance of $U$ to $B$
\State \textbf{return} $W$
\EndProcedure
\end{algorithmic}
\end{algorithm}

Here, U represents the user, W is the wallet contract, and B is the initial balance in the payment channel.

Off-Chain Transactions

\begin{algorithm}[H]
\caption{Off-Chain Transaction}
\begin{algorithmic}[1]
\Procedure{OffchainTransaction}{$S, R, T$}
\State $S$ generates a zk-SNARK proof $\pi$ for the transaction $T$
\State $S$'s wallet contract locks the transaction amount
\State $S$ sends $(T, \pi)$ to $R$
\State $R$ verifies $\pi$ to ensure the validity of $T$
\If{$\pi$ is valid}
\State $S$ updates their local off-chain balance
\State $R$ updates their local off-chain balance
\State $S$ and $R$ sign $\pi$ to authorize rebalancing
\Else
\State $S$'s wallet contract releases the locked amount after timeout
\EndIf
\State \textbf{return} $(T, \pi)$
\EndProcedure
\end{algorithmic}
\end{algorithm}

Here, S represents the sender, R is the receiver, and T is the transaction. The zk-SNARK proof \pi verifies the validity of the transaction, including the availability of sufficient funds and the correctness of the updated balances. If the proof is valid, both parties update their local off-chain balances and sign the proof to authorize the future rebalancing of their payment channels. If the proof is not accepted within a specified timeout period, the smart contract automatically releases the locked funds back to the sender’s available balance.
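
A minimal Python sketch of the two procedures follows, with the zk-SNARK prover replaced by a trivial stub; the Channel type, generate_proof, and the timeout handling are illustrative assumptions rather than the actual implementation.

from dataclasses import dataclass

@dataclass
class Channel:
    balance: float       # available off-chain balance
    locked: float = 0.0  # amount locked while a transfer awaits acceptance

def setup_channel(deposit):
    # SetupChannel: the user funds a wallet contract with `deposit` tokens
    return Channel(balance=deposit)

def generate_proof(sender, amount):
    # Stub for the zk-SNARK prover: a real proof attests B_S >= Delta and the balance updates
    return {"valid": sender.balance >= amount, "amount": amount}

def offchain_transaction(sender, receiver, amount, accepted=True):
    # OffchainTransaction: lock funds, hand (T, pi) to the receiver, settle or release on timeout
    proof = generate_proof(sender, amount)
    if not proof["valid"]:
        raise ValueError("insufficient balance")
    sender.balance -= amount
    sender.locked += amount
    if accepted:                 # receiver verified pi and signed it
        sender.locked -= amount
        receiver.balance += amount
    else:                        # timeout: the wallet contract releases the locked amount
        sender.locked -= amount
        sender.balance += amount

alice, bob = setup_channel(100), setup_channel(0)
offchain_transaction(alice, bob, 30)
print(alice.balance, bob.balance)  # 70 30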

Hierarchical Smart Contracts

The hierarchical smart contract structure is a key component of the proposed network, enabling efficient rebalancing of payment channels and management of cross-partition transactions. The structure consists of three layers: root contracts, intermediate contracts, and wallet contracts.

  • Root Contracts:

    Responsibilities:

    • Serve as the entry point for users and maintain a mapping of intermediate contracts.
    • Aggregate Merkle roots from intermediate contracts and submit a single, final aggregated Merkle root.
    • Submit the final Merkle root to the blockchain at regular intervals, ensuring the global state is verifiable on-chain with minimal frequency and cost, since the submitted data is a constant size O(1).
  • Intermediate Contracts:

    Responsibilities:

    • Manage liquidity pools for specific transaction types or user groups.
    • Maintain a mapping of wallet contracts and are responsible for rebalancing payment channels based on the transaction proofs submitted by users.
    • Collect Merkle roots from wallet contracts and aggregate them into a single Merkle tree within their partition.
    • Periodically submit the aggregated Merkle root to the root contract.
    • Ensure the state within their partition is verifiable on-chain with minimal frequency and cost.
  • Wallet Contracts:

    Responsibilities:

    • Represent individual user payment channels and hold the users’ off-chain balances.
    • Generate zk-SNARK proofs for their state and submit these proofs to the storage nodes.
    • Store proofs to the storage nodes for data availability.

The hierarchical structure allows for efficient liquidity management and reduced on-chain transaction costs, as rebalancing operations are performed at the intermediate level, and the root contracts only need to process periodic updates.
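
The sketch below illustrates this three-tier aggregation (wallet contracts → intermediate contract → root contract) with plain SHA-256 hashes; the hashing scheme and leaf layout are assumptions for illustration, not the network's actual Merkle construction.

import hashlib

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Binary Merkle tree over the given leaves (last node duplicated when a level is odd)
    nodes = [sha(leaf) for leaf in leaves] or [sha(b"")]
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])
        nodes = [sha(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

# Wallet contracts: each wallet commits to its own state proofs
wallet_roots = [merkle_root([b"proof_alice"]), merkle_root([b"proof_bob"])]

# Intermediate contract: aggregates the wallet roots of its partition
intermediate_root = merkle_root(wallet_roots)

# Root contract: aggregates all intermediate roots into the single O(1)-sized
# commitment that is submitted on-chain
final_root = merkle_root([intermediate_root])
print(final_root.hex())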

Example 3: Hierarchical Smart Contract Interaction

Continuing with the previous examples, suppose Alice, Bob, Carol, and David belong to the same user group managed by an intermediate contract IC_1. The intermediate contract IC_1 is mapped to a root contract RC.

  • When Alice sends 30 tokens to Bob (transaction T_1), she generates a zk-SNARK proof \pi_1. Upon generating the proof \pi_1, Alice’s wallet contract immediately locks 30 tokens, reducing her available balance accordingly. The transaction proof \pi_1 is then submitted to the intermediate contract IC_1. The intermediate contract verifies the proof and updates the balances of Alice’s and Bob’s wallet contracts accordingly.

  • Similarly, when Alice receives 50 tokens from Carol (transaction T_2), Carol generates a zk-SNARK proof \pi_2. Upon generating the proof \pi_2, Carol’s wallet contract immediately locks 50 tokens, reducing her available balance. The transaction proof \pi_2 is then submitted to IC_1, which verifies the proof and updates the balances of Alice’s and Carol’s wallet contracts.

  • Periodically, the intermediate contract IC_1 submits a summary of the balance updates to the root contract RC, which maintains a global view of the network’s state by submitting a single aggregated Merkle root to the blockchain.

This hierarchical structure, with the immediate balance locking mechanism, ensures that all transactions are secure and funds are not double spent, even if there are delays in transaction acceptance or verification.

Storage Nodes and Blockchain Interaction

To ensure data availability and scalability, the proposed network employs storage nodes that store the off-chain transaction history and wallet state proofs. Each storage node maintains a copy of the entire off-chain data, ensuring redundancy and decentralization.

Storage Node Operations:

  • Storing Proofs: Storage nodes store zk-SNARK proofs for individual wallet states. Each wallet maintains its own Merkle tree that includes these proofs.
  • Aggregating Data: At regular intervals, storage nodes aggregate the off-chain data into a single Merkle root, representing the state of all payment channels they manage. This Merkle root is then submitted to the intermediate contracts.
import hashlib

proofs_by_wallet = {}  # (user, wallet) -> list of stored proof hashes

def store_proof(proof, user, wallet):
    # Store the proof for the user's wallet contract and record a leaf for the local Merkle tree
    proofs_by_wallet.setdefault((user, wallet), []).append(hashlib.sha256(proof).hexdigest())

def submit_merkle_root():
    # Generate a root over all stored proofs and submit it to the intermediate contract
    # (simplified sketch: a flat hash over the sorted leaves stands in for a full Merkle tree)
    leaves = sorted(h for hashes in proofs_by_wallet.values() for h in hashes)
    return hashlib.sha256("".join(leaves).encode()).hexdigest()

The blockchain acts as a secure and immutable ledger, storing the Merkle roots submitted by the root contract. This allows for efficient verification of the network’s global state, as any discrepancies between the off-chain data and the on-chain Merkle roots can be easily detected and resolved.

This hierarchical structure enables efficient verification of individual payment channels and the entire network state without storing the full transaction history on-chain. By leveraging the security and immutability of the blockchain while keeping the majority of the data off-chain, the proposed network achieves a balance between scalability, data availability, and security.

Example 4: Storage Node Operation and Blockchain Interaction

Following the previous examples, suppose storage node SN_1 is responsible for storing the transaction proofs and wallet state proofs for Alice, Bob, Carol, and David.

  • When Alice generates a wallet state proof \pi_s after transactions T_1, T_2, and T_3, she submits the proof to the storage node SN_1. The storage node stores the proof and updates its local Merkle tree with the new proof.
  • Similarly, when Bob, Carol, and David generate their wallet state proofs, they submit them to SN_1, which stores the proofs and updates its local Merkle tree accordingly.
  • At the end of each epoch, SN_1 generates a Merkle root R that represents the state of all payment channels it manages. The storage node then submits the Merkle root R to the intermediate contract, providing a compact and tamper-evident snapshot of the network’s state.
  • The intermediate contract aggregates the Merkle roots from all storage nodes within its partition and submits a single final Merkle root to the root contract.
  • The root contract aggregates the Merkle roots from all intermediate contracts and submits a single final Merkle root to the blockchain.
  • The blockchain stores the submitted Merkle root, allowing for efficient verification of the network’s global state. If any discrepancies arise between the off-chain data and the on-chain Merkle roots, they can be easily detected and resolved using the dispute resolution mechanism described in the following section.

This hierarchical structure, combined with immediate balance locking and zk-SNARK proofs, ensures secure, efficient, and scalable off-chain transactions, maintaining the integrity and security of the overall network.

Transaction Validity and Balance Consistency

To ensure the validity of transactions and the consistency of account balances, the proposed payment network employs a combination of zero-knowledge proofs and formal mathematical proofs. This section presents the core theorems and algorithms that underpin the system’s security and correctness. (As stated in the abstract, the token supply is fixed and released in full at genesis.)

Transaction Validity

Each transaction in the proposed network is accompanied by a zk-SNARK proof that verifies the following conditions:

  • The sender has sufficient balance to cover the transaction amount.
  • The sender’s updated balance is correctly computed.
  • The receiver’s updated balance is correctly computed.

Let T_i be a transaction in which sender S transfers \Delta_i tokens to receiver R. The accompanying zk-SNARK proof \pi_i ensures the following conditions:

\begin{align*} B_S &\geq \Delta_i \\ B'_S &= B_S - \Delta_i \\ B'_R &= B_R + \Delta_i \end{align*}

where B_S and B_R are the initial balances of S and R, respectively, and B'_S and B'_R are the updated balances after the transaction.

Balance Consistency

To prove the consistency of account balances in the presence of valid transactions, we present the following theorem:

Theorem (Balance Consistency): Given a series of valid transactions T_1, T_2, \ldots, T_n between two parties S and R, the final balances B'_S and B'_R satisfy:

B'_S + B'_R = B_S + B_R

where B_S and B_R are the initial balances of S and R, respectively.

Proof: We prove the theorem by induction on the number of transactions n.

Base case: For n = 1, we have a single transaction T_1 with amount \Delta_1. The updated balances after the transaction are:

\begin{align*} B'_S &= B_S - \Delta_1 \\ B'_R &= B_R + \Delta_1 \end{align*}

Adding the above equations yields:

B'_S + B'_R = B_S + B_R

Inductive step: Assume the theorem holds for n = k transactions. We prove that it also holds for n = k + 1 transactions.

Let B^{(k)}_S and B^{(k)}_R be the balances after the first k transactions. By the induction hypothesis, we have:

B^{(k)}_S + B^{(k)}_R = B_S + B_R

Now, consider the (k+1)-th transaction T_{k+1} with amount \Delta_{k+1}. The updated balances after this transaction are:

\begin{align*} B^{(k+1)}_S &= B^{(k)}_S - \Delta_{k+1} \\ B^{(k+1)}_R &= B^{(k)}_R + \Delta_{k+1} \end{align*}

Adding the above equations and substituting the induction hypothesis yields:

\begin{align*} B^{(k+1)}_S + B^{(k+1)}_R &= (B^{(k)}_S - \Delta_{k+1}) + (B^{(k)}_R + \Delta_{k+1}) \\ &= B^{(k)}_S + B^{(k)}_R \\ &= B_S + B_R \end{align*}

Therefore, the theorem holds for n = k + 1 transactions.

By the principle of mathematical induction, the theorem holds for any number of valid transactions
n \geq 1. \blacksquare

The Balance Consistency theorem ensures that the total balance of the system remains constant throughout a series of valid transactions, providing a fundamental property for the security and correctness of the proposed payment network.
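
A quick numeric check of the theorem, as an illustrative Python snippet: apply a sequence of valid transfers and assert that the sum of the two balances never changes.

def apply_transfers(b_s, b_r, deltas):
    # Apply each Delta_i, enforcing the per-transaction validity condition B_S >= Delta_i
    total = b_s + b_r
    for delta in deltas:
        assert b_s >= delta, "insufficient balance"
        b_s, b_r = b_s - delta, b_r + delta
        assert b_s + b_r == total  # Balance Consistency invariant
    return b_s, b_r

print(apply_transfers(100, 20, [30, 10, 5]))  # (55, 65): 55 + 65 == 100 + 20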

Fraud Prevention Mechanisms

The proposed decentralized payment network integrates multiple layers of fraud prevention mechanisms through its hierarchical smart contract system and the use of zk-SNARKs. These measures ensure the integrity and consistency of transaction states, inherently preventing the submission of outdated or fraudulent states. This section outlines how these mechanisms work in detail.

ZK-SNARK Proofs and State Updates

The network leverages zk-SNARKs to validate each transaction. The key elements include:

  • Proof of Validity: Each transaction within the network must be accompanied by a zk-SNARK proof. This proof verifies several critical aspects:

    • The sender has sufficient balance to cover the transaction.
    • The sender’s updated balance after the transaction is correctly computed.
    • The receiver’s updated balance after the transaction is correctly computed.
  • Consistent State Management: Each user’s wallet contract maintains a Merkle tree of state proofs. Each state update (i.e., each transaction) is validated through zk-SNARKs, ensuring it is consistent with the previously recorded state. This cryptographic validation prevents unauthorized or incorrect state changes.

Prevention of Old State Submission

The design of the proposed network inherently prevents the submission of outdated or fraudulent states through the following mechanisms:

  • Proof Consistency: Each zk-SNARK proof submitted for a state update must be consistent with the latest recorded state. Intermediate contracts aggregate data from wallet contracts, and root contracts submit these aggregated roots to the blockchain. Any attempt to submit an old state would be detected as it would not match the current aggregated Merkle root.

  • On-Chain Verification: The final aggregated Merkle root submitted by the root contract is stored on the blockchain, providing a tamper-evident record of the global state. During dispute resolution, the submitted state proofs are verified against this on-chain Merkle root to ensure only the most recent valid state is considered.

Mitigated Need for Watchtowers

Due to the robust fraud prevention mechanisms built into the proposed system, the traditional need for watchtowers (entities that monitor the blockchain for malicious activities and act on behalf of users) is significantly reduced. The hierarchical structure and the use of zk-SNARKs ensure that:

  • Each state update is cryptographically verified, preventing unauthorized changes.
  • The aggregated Merkle roots provide a consistent and tamper-evident record of the network’s state.
  • Dispute resolution is handled efficiently and fairly based on the most recent valid state proofs.

The comprehensive fraud prevention mechanisms of the proposed decentralized payment network ensure high levels of security and integrity without the need for external monitoring entities like watchtowers. The hierarchical smart contract system and zk-SNARKs work together to maintain consistent and verifiable transaction states, providing a secure and efficient framework for decentralized financial transactions.

Role of the DAO

While the built-in mechanisms provide robust security and minimize the need for watchtowers, there are scenarios where manual involvement might be necessary. To address these situations, a Decentralized Autonomous Organization (DAO) can be implemented to manage and oversee the network’s operations. The DAO would play a crucial role in:

  • Handling Exceptional Cases: Situations that require manual intervention beyond the automated fraud prevention and dispute resolution mechanisms.
  • Balancing Automation and Trust: Ensuring the right mix of automated processes, cryptographic proofs, and trust mechanisms to maintain network integrity.
  • Democratic Decision-Making: Leveraging community governance to make decisions on critical issues, such as protocol upgrades, handling disputes that the automated system cannot resolve, and other governance matters.

DAO Functions

  1. Manual Dispute Resolution: For disputes that cannot be resolved through automated proofs, the DAO can step in to review and make a final decision based on community consensus.
  2. Protocol Upgrades: The DAO can propose and vote on protocol upgrades to enhance the system’s functionality and security.
  3. Network Oversight: Providing ongoing oversight and making strategic decisions to ensure the network remains secure and efficient.

The combination of zk-SNARKs, hierarchical smart contracts, and the DAO creates a robust framework for fraud prevention and network governance. The minimized need for watchtowers is achieved through advanced cryptographic verification and efficient dispute resolution mechanisms. However, the DAO ensures that any issues requiring manual involvement are handled with a balance of automation, trust, rigorous mathematical verification, and democratic decision-making. This comprehensive approach provides a secure, scalable, and trustworthy decentralized payment network.

Dispute Resolution

In the event of a dispute between parties, the proposed network employs a dispute resolution mechanism based on the submitted zk-SNARK proofs and Merkle roots. The dispute resolution process can be formally described as follows:

\begin{algorithm}[H]
\caption{Dispute Resolution}
\begin{algorithmic}[1]
\Procedure{ResolveDispute}{$S, R, \pi_S, \pi_R$}
\State $S$ submits their final state proof $\pi_S$
\State $R$ submits their final state proof $\pi_R$
\State Verify $\pi_S$ and $\pi_R$ against the submitted Merkle roots
\If{$\pi_S$ is valid and $\pi_R$ is invalid}
\State Resolve the dispute in favor of $S$
\ElsIf{$\pi_R$ is valid and $\pi_S$ is invalid}
\State Resolve the dispute in favor of $R$
\Else
\State Resolve the dispute based on the most recent valid state proof
\EndIf
\EndProcedure
\end{algorithmic}
\end{algorithm}

Here, S and R represent the disputing parties, and \pi_S and \pi_R are their respective final state proofs. The dispute resolution mechanism verifies the submitted proofs against the Merkle roots stored on-chain and resolves the dispute based on the validity of the proofs. This ensures that the resolution is based on the most recent valid state of the payment channel, preventing fraud and maintaining the integrity of the system.

The dispute resolution process follows these steps:

  1. Dispute Initiation: Either party can initiate a dispute by submitting a dispute request to the relevant smart contract (e.g., the intermediate contract managing their user group).
  2. Evidence Submission: Both parties are required to submit their final state proofs (\pi_S \text{ and } \pi_R) within a predefined timeframe (e.g., 24 hours). These proofs represent the latest state of their respective payment channels and include the relevant transaction history.
  3. Proof Verification: The dispute resolution mechanism verifies the submitted proofs against the Merkle roots stored on-chain. This verification process ensures that the proofs are valid and consistent with the global state of the network.
  4. Resolution: The dispute is resolved based on the validity of the submitted proofs:
    • If \pi_S is valid and \pi_R is invalid, the dispute is resolved in favor of party S.
    • If \pi_R is valid and \pi_S is invalid, the dispute is resolved in favor of party R.
    • If both proofs are valid, the dispute is resolved based on the most recent valid state proof, determined by the timestamp or sequence number associated with the proofs.
    • If neither proof is valid or if one party fails to submit their proof within the required timeframe, the dispute can be escalated to a higher-level contract (e.g., the root contract) or a trusted third party for manual review and resolution.
  5. Outcome Enforcement: Once the dispute is resolved, the smart contracts automatically enforce the outcome by updating the balances of the involved parties according to the resolution decision. This may involve redistributing tokens between the parties’ payment channels or applying penalties for fraudulent behavior.

To incentivize honest behavior and discourage frivolous disputes, the network can implement additional mechanisms:

  • Dispute Bond: Parties initiating a dispute may be required to post a bond (in the form of tokens) that is forfeited if their submitted proof is found to be invalid or if they fail to submit their proof within the required timeframe. This bond serves as a deterrent against malicious actors and ensures that disputing parties have a stake in the resolution process.
  • Reputation System: The network can maintain a reputation score for each user based on their history of successful transactions and dispute resolutions. Users with a high reputation score may be given preference in case of ambiguous disputes or may enjoy reduced dispute bond requirements. Conversely, users with a history of fraudulent behavior or frivolous disputes may face higher bond requirements or even temporary suspension from the network.

By combining cryptographic proofs, smart contract automation, and economic incentives, the proposed dispute resolution mechanism ensures that conflicts are resolved fairly and efficiently while maintaining the integrity of the payment network.
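
The branches of the ResolveDispute procedure can be sketched as follows; verify_against_root, the seq field, and the string outcomes are illustrative stand-ins for the real proof verification and on-chain enforcement.

def verify_against_root(proof, onchain_root):
    # Stub: a real implementation verifies the zk-SNARK proof and its Merkle path
    # against the aggregated root stored on-chain
    return proof.get("root") == onchain_root

def resolve_dispute(proof_s, proof_r, onchain_root):
    s_ok = verify_against_root(proof_s, onchain_root)
    r_ok = verify_against_root(proof_r, onchain_root)
    if s_ok and not r_ok:
        return "S"
    if r_ok and not s_ok:
        return "R"
    if s_ok and r_ok:
        # Both valid: prefer the most recent state (higher sequence number)
        return "S" if proof_s["seq"] >= proof_r["seq"] else "R"
    return "escalate"  # neither proof valid or submitted: escalate to the root contract or DAO

onchain_root = "0xabc"
print(resolve_dispute({"root": "0xabc", "seq": 7}, {"root": "0xdef", "seq": 9}, onchain_root))  # "S"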

Example 5: Dispute Resolution

Suppose a dispute arises between Alice and Bob regarding the final state of their payment channel. Alice claims that her final balance is 100 tokens, while Bob claims that Alice’s final balance is 80 tokens.

  1. Dispute Initiation: Alice initiates a dispute by submitting a dispute request to the intermediate contract IC_1 that manages their user group. She deposits the required dispute bond of 10 tokens.
  2. Evidence Submission: Alice and Bob submit their respective final state proofs, \pi_A and \pi_B, to the dispute resolution mechanism within the 24-hour timeframe. Alice’s proof \pi_A shows her balance as 100 tokens, while Bob’s proof \pi_B shows Alice’s balance as 80 tokens.
  3. Proof Verification: The dispute resolution mechanism verifies the submitted proofs against the Merkle roots stored on-chain. It finds that Alice’s proof \pi_A is consistent with the on-chain state, while Bob’s proof \pi_B is invalid.
  4. Resolution: As Alice’s proof \pi_A is valid and Bob’s proof \pi_B is invalid, the dispute is resolved in favor of Alice. The resolution confirms that Alice’s final balance is indeed 100 tokens.
  5. Outcome Enforcement: The intermediate contract IC_1 automatically updates the balances of Alice and Bob’s payment channels according to the resolution decision. Alice’s balance remains at 100 tokens, while Bob’s balance is adjusted based on the discrepancy. Additionally, Alice’s dispute bond of 10 tokens is returned to her since her proof was valid, while any bond Bob had posted would be forfeited and distributed to Alice as a reward for submitting a valid proof.

This example demonstrates how the dispute resolution mechanism ensures the integrity of the payment network by resolving conflicts based on the validity of the submitted zk-SNARK proofs and the Merkle roots stored on-chain, while also incentivizing honest behavior through the use of dispute bonds.

Comparison to Alternative Approaches

The proposed decentralized payment network offers several advantages over alternative approaches, such as traditional blockchain-based payment systems and centralized payment networks.

Compared to traditional blockchain-based payment systems, the proposed network provides higher scalability and privacy. The use of off-chain payment channels and zk-SNARKs enables faster and more private transactions, while the hierarchical smart contract structure and partitioned storage nodes enable more efficient processing and storage of transaction data.

Compared to centralized payment networks, the proposed system offers greater security, transparency, and censorship resistance. By leveraging the security and immutability of blockchain technology and the privacy-preserving properties of zk-SNARKs, the network can provide a more secure and transparent payment infrastructure that is resistant to censorship and control by central authorities.

Example 6: Comparison to Centralized Payment Networks

Suppose a centralized payment network relies on a single trusted entity to process transactions and manage user balances. While this approach may offer high transaction throughput, it also presents several risks and limitations:

  1. Single point of failure: If the central entity experiences technical issues or becomes compromised, the entire payment network may become unavailable or vulnerable to fraud.
  2. Lack of transparency: Users must trust the central entity to manage their funds honestly and securely, without the ability to independently verify the state of their balances or the validity of transactions.
  3. Censorship risk: The central entity may choose to block or reverse transactions based on their own criteria, censoring users or restricting access to the payment network.

In contrast, the proposed decentralized payment network addresses these issues through its use of blockchain technology, zk-SNARKs, and a decentralized architecture:

  1. Decentralization: The network is maintained by a distributed network of storage nodes and smart contracts, eliminating the single point of failure and ensuring the availability and resilience of the system.
  2. Transparency and verifiability: Users can independently verify the state of their balances and the validity of transactions using the zk-SNARK proofs and the Merkle roots stored on-chain, providing a high level of transparency and trust in the system.
  3. Censorship resistance: The decentralized nature of the network and the use of zk-SNARKs ensure that transactions cannot be easily censored or reversed by any single entity, preserving the freedom and autonomy of users.

This example highlights the significant advantages of the proposed decentralized payment network over centralized alternatives, providing a more secure, transparent, and censorship-resistant payment infrastructure for users.

Analysis

This section provides a comprehensive analysis of the security, privacy, and scalability properties of the proposed decentralized payment network, and compares it to alternative approaches.

We delve into the technical details of the zk-SNARK implementation, discuss potential challenges and trade-offs, explore additional privacy-enhancing techniques, and consider the governance aspects of the system.

Security Analysis

The security of the proposed network relies on the soundness and completeness of the zk-SNARK proofs, as well as the integrity of the hierarchical smart contract structure. We employ the state-of-the-art zk-SNARK construction proposed by Groth, which offers succinct proofs and efficient verification. The zk-SNARK scheme is built upon the q-Power Knowledge of Exponent (q-PKE) assumption and the q-Decisional Diffie-Hellman (q-DDH) assumption in bilinear groups.

Let \mathcal{G}_1, \mathcal{G}_2, and \mathcal{G}_T be cyclic groups of prime order p, and let e : \mathcal{G}_1 \times \mathcal{G}_2 \rightarrow \mathcal{G}_T be a bilinear map. The q-PKE assumption states that for any polynomial-size adversary \mathcal{A}, the following probability is negligible in the security parameter \lambda:

\Pr\left[ \begin{array}{c} g \xleftarrow{\$} \mathcal{G}_1, \quad \alpha, s \xleftarrow{\$} \mathbb{Z}_p, \quad g_2 \leftarrow g^{\alpha},\\ (c_1, \ldots, c_q) \xleftarrow{\$} \mathbb{Z}_p^q, \quad t_i \leftarrow g^{c_i} \cdot g_2^{s \cdot c_i},\\ (h, \hat{h}) \leftarrow \mathcal{A}(g, g_2, \{t_i\}_{i=1}^q) : \\ h = g^s \wedge \hat{h} \neq \prod_{i=1}^q t_i^{c_i} \end{array} \right]

The q-DDH assumption states that for any polynomial-size adversary \mathcal{A}, the following probability is negligible in the security parameter \lambda:

\Pr\left[ \begin{array}{c} g \xleftarrow{\$} \mathcal{G}_1, \quad \alpha, s, r \xleftarrow{\$} \mathbb{Z}_p, \quad g_2 \leftarrow g^{\alpha},\\ (c_1, \ldots, c_q) \xleftarrow{\$} \mathbb{Z}_p^q, \quad t_i \leftarrow \begin{cases} g_2^{c_i}, & \text{if } b=0\\ g_2^{c_i + s}, & \text{if } b=1 \end{cases},\\ b \xleftarrow{\$} \{0, 1\}, \quad b' \leftarrow \mathcal{A}(g, g_2, \{t_i\}_{i=1}^q) : \\ b = b' \end{array} \right]

Under these assumptions, the zk-SNARK construction ensures that the proofs are sound and complete, meaning that a prover cannot create a valid proof for a false statement (soundness) and that a valid proof always verifies successfully (completeness). Consequently, transactions are guaranteed to be valid, and balances are correctly updated, preventing double-spending and other fraudulent activities.

Attack Vector Prevention

The hierarchical smart contract structure, combined with the storage nodes, ensures that the network’s global state remains consistent and verifiable, even in the presence of malicious actors. The smart contracts are implemented using the Solidity language and are formally verified using the Oyente and Zeus tools to ensure their correctness and security.

1. Collusion during the trusted setup ceremony:

  • Mitigated by the use of secure multi-party computation (MPC) protocols like ZEXE, ensuring a distributed setup process.
  • Involvement of diverse participants reduces the likelihood of successful collusion.

2. Collusion among users:

  • Prevented by the use of unforgeable and computationally binding zk-SNARK proofs (PLONK), making it infeasible for users to create valid proofs for fraudulent transactions.
  • Smart contracts verify proofs before executing transactions, ensuring only legitimate transactions are processed.

3. Collusion among storage nodes:

  • Mitigated by the distributed storage architecture with multiple nodes maintaining data copies, making it difficult for nodes to collude and provide false data without detection.
  • The use of Merkle trees and hash-based commitments allows smart contracts to verify data authenticity.

4. Smart contract vulnerabilities:

  • Addressed by formal verification tools, independent security audits, secure coding practices, access controls, and error handling mechanisms.
  • Upgradability and emergency stop mechanisms allow for deploying security patches and freezing contracts in case of severe vulnerabilities.

5. Privacy leaks:

  • Mitigated by the use of zk-SNARKs, ensuring transaction privacy.
  • Mixing techniques, anonymity networks, metadata obfuscation, and regular security assessments further enhance privacy protection.

6. Sybil attacks:

  • Inherently resistant due to the use of zk-SNARK proofs, smart contract verification, and the underlying blockchain’s consensus mechanism.
  • The system’s design, including proof validity and economic disincentives, makes it infeasible for attackers to create and manipulate multiple identities or payment channels.
  • The requirement of fees to set up payment channels and execute transactions further discourages Sybil attacks by making them financially costly for attackers.

7. Denial-of-Service (DoS) attacks:

  • Inherently mitigated by the computational cost of generating zk-SNARK proofs for each transaction, making it impractical for attackers to flood the network with a large number of transactions.
  • The decentralized architecture and the resilience of the underlying Ethereum blockchain provide additional protection against DoS attacks.

Scalability Analysis

The proposed decentralized payment network exhibits significant scalability potential due to its innovative use of zero-knowledge proofs (ZKPs), particularly zk-SNARKs, and the absence of traditional consensus mechanisms, which together enable instant finality. In this section, we will provide a detailed mathematical assessment and real-world benchmarks to validate the network’s scalability potential.

Mathematical Formalism of TPS Scalability

Let us define the total time per transaction T_{tx} as the sum of the time for proof generation T_{pg}, network latency T_{nl}, and the overhead for contract execution and state updates T_{oh}. Given that we aim for high scalability, we will leverage parallel processing capabilities of nodes to handle multiple channels efficiently.

T_{tx} = T_{pg} + T_{nl} + T_{oh}

Assuming average values for these times, such as:

\begin{align*} T_{pg} & \approx 50 \text{ ms} \\ T_{nl} & \approx 50 \text{ ms} \\ T_{oh} & \approx 50 \text{ ms} \\ \end{align*}

The total time per transaction can be approximated as:

T_{tx} = 50 \text{ ms} + 50 \text{ ms} + 50 \text{ ms} = 150 \text{ ms}

Thus, the transactions per second (TPS) per node can be calculated as:

TPS_{\text{per node}} = \frac{1}{T_{tx}} = \frac{1}{150 \text{ ms} / 1000 \text{ ms/s}} \approx 6.67 \text{ TPS}

If we consider the network scaling linearly with the number of nodes, the total TPS for n nodes can be expressed as:

TPS_{\text{total}} = TPS_{\text{per node}} \times n = 6.67 \times n

For example, with 100 nodes, the network could achieve:

TPS_{\text{total}} = 6.67 \times 100 = 667 \text{ TPS}

TPS Within Channels

To further detail the TPS within individual payment channels, consider that each node can manage multiple channels. Let c denote the number of channels a node can handle, and let T_{channel} represent the time to process a transaction within a channel.

T_{channel} = T_{pg} + T_{nl} + T_{oh} = 70 \text{ ms} \quad (\text{considering optimized conditions})

Thus, the TPS per channel:

TPS_{\text{per channel}} = \frac{1}{T_{channel}} = \frac{1}{70 \text{ ms} / 1000 \text{ ms/s}} \approx 14.29 \text{ TPS}

If each node can handle c channels, the total TPS per node considering channels would be:

TPS_{\text{node, channels}} = TPS_{\text{per channel}} \times c

Assuming a node can handle 10,000 channels:

TPS_{\text{node, channels}} = 14.29 \times 10,000 = 142,900 \text{ TPS}

For a network with n nodes, the total TPS could be:

TPS_{\text{total, channels}} = 142,900 \times n
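
The arithmetic above can be reproduced with a few lines of Python, using the same assumed timing figures (50 ms components, a 70 ms optimized channel path, 10,000 channels per node, 100 nodes).

def tps(total_ms):
    # TPS = 1 / T_tx, with T_tx expressed in milliseconds
    return 1000 / total_ms

per_node = tps(50 + 50 + 50)              # 150 ms per transaction -> ~6.67 TPS
network_100_nodes = per_node * 100        # linear scaling over 100 nodes -> ~667 TPS

per_channel = tps(70)                     # optimized 70 ms path -> ~14.29 TPS
per_node_channels = per_channel * 10_000  # 10,000 channels per node -> ~142,857 TPS
                                          # (~142,900 using the rounded 14.29 figure)

print(round(per_node, 2), round(network_100_nodes), round(per_channel, 2), round(per_node_channels))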

Real-World Micro Benchmarks

To validate these theoretical calculations, we consider benchmarks from existing state channel implementations:

  1. Celer Network: Claims to handle up to 15,000 TPS per node.
  2. Raiden Network: Aims for several thousand TPS per node.
  3. Lightning Network: Estimates around 1,000 TPS per node in practical scenarios.

Given these benchmarks, our assumption of handling 10,000 channels per node with approximately 14.29 TPS per channel, resulting in 142,900 TPS per node, is ambitious but within a reasonable range for a highly optimized implementation leveraging zk-SNARKs and efficient contract management.

Potential Bottlenecks

Despite the promising scalability, several bottlenecks could impact performance:

  1. Proof Generation and Verification: While zk-SNARKs are efficient, the complexity of proofs can increase with advanced use cases.
  2. Network Latency: Global transactions can introduce delays that affect overall throughput.
  3. Smart Contract Efficiency: Inefficiencies in smart contracts can create processing delays.
  4. Storage and Data Management: Managing large numbers of channels and associated data could become challenging.
  5. Node Reliability and Security: Ensuring the reliability and security of each node is critical.

Addressing these bottlenecks through ongoing optimization and robust infrastructure will be crucial to achieving the theoretical TPS and ensuring the network’s scalability and robustness.

Summary

The proposed decentralized payment network, leveraging zk-SNARKs and instant finality mechanisms, exhibits significant scalability potential. The mathematical formalism and real-world benchmarks indicate that the network can achieve high TPS by efficiently managing multiple channels per node. Continuous optimization and addressing potential bottlenecks will be essential to realizing this potential in practice.

Scalability Comparison with Existing Layer 2 Solutions

Key Features of the Proposed Network

  1. Unilateral Payment Channels: Enables high transaction throughput by facilitating off-chain transactions.
  2. Zero-Knowledge Proofs (zk-SNARKs): Ensures privacy and efficient transaction validity.
  3. Instant Finality: Transactions achieve instant finality without on-chain confirmations.
  4. Partitioned Storage Nodes: Manages off-chain data efficiently, reducing on-chain storage requirements.

Existing Layer 2 Solutions

State Channels (e.g., Lightning Network, Raiden Network):

  • Scalability: High throughput off-chain.
  • Finality: Near-instant off-chain finality.
  • Challenges: Requires channel monitoring and on-chain closures.

Plasma:

  • Scalability: High throughput with off-chain child chains.
  • Finality: Periodic on-chain commitments.
  • Challenges: Complex exit management and data availability.

Optimistic Rollups:

  • Scalability: Batches transactions off-chain.
  • Finality: Delayed due to fraud proof periods.
  • Challenges: Requires fraud proof monitoring.

ZK-Rollups:

  • Scalability: High throughput with off-chain transaction bundling.
  • Finality: Near-instant with zk-SNARKs.
  • Challenges: Complex proof generation.

Comparative Analysis

Throughput and Finality:

  • The proposed network achieves high throughput and instant finality, comparable to state channels and ZK-Rollups and superior to Optimistic Rollups.

Efficiency and Cost:

  • More cost-efficient by reducing on-chain transactions and eliminating mining, outperforming state channels and Plasma.

Data Management:

  • Efficient off-chain data management through partitioned storage nodes, similar to Plasma and rollups.

Security and Privacy:

  • Robust security and privacy with zk-SNARKs, comparable to ZK-Rollups and superior to solutions relying on fraud proofs.

Implementation Details

The proposed decentralized payment network is implemented using a combination of Rust, TypeScript, and Solidity. The core components, such as the zk-SNARK proof generation and verification, are written in Rust for performance and security reasons. The smart contracts are developed using Solidity, while the frontend and client-side interactions are built with TypeScript.

Specific zk-SNARK Construction

The system employs the PLONK zk-SNARK construction, which offers universality, updatability, and efficient proof generation and verification. PLONK allows for the creation of a universal and updateable structured reference string (SRS) that can be reused across multiple circuits or applications, reducing the complexity and coordination overhead associated with repeated trusted setups.

The PLONK circuits are designed using the arkworks library in Rust, which provides a set of tools and primitives for building zk-SNARK circuits compatible with the PLONK proving system. The library supports efficient constraint generation, witness computation, and proof generation, making it well-suited for the development of the decentralized payment network.

Challenges and Optimizations

One of the main challenges in implementing PLONK is the complexity of designing and optimizing the circuits to take advantage of the universal SRS. This requires a deep understanding of the PLONK framework and the techniques for constructing efficient and secure circuits.

To address this challenge, the implementation leverages various optimization techniques, such as:

  1. Constraint system optimization: Minimizing the number of constraints in the circuit by using efficient gate design and layout techniques, such as gate aggregation and constant folding.
  2. Witness compression: Reducing the size of the witness by using compact data representations and eliminating redundant information.
  3. Proof aggregation: Batching multiple proofs together to reduce the overall proof size and verification cost.

These optimizations help to improve the performance and scalability of the PLONK-based zk-SNARK circuits, ensuring that the decentralized payment network can handle a high volume of transactions efficiently.

Integration with Ethereum

The smart contracts for the payment network are implemented using Solidity and deployed on the Ethereum blockchain. The contracts interact with the PLONK proofs generated by the Rust components through a verification contract that is optimized for the PLONK proving system.

The verification contract is designed to be gas-efficient and supports batch verification of PLONK proofs, allowing multiple proofs to be verified in a single transaction. This helps to reduce the overall cost and improve the throughput of the system.

Trusted Setup Ceremony

As PLONK requires a trusted setup for the universal SRS, a multi-party computation (MPC) ceremony is conducted to generate the SRS. The ceremony involves multiple participants from different organizations and backgrounds, ensuring that no single party has control over the entire setup process.

The MPC ceremony is organized and facilitated using secure computation frameworks, such as the ZEXE library, which provides a set of tools and protocols for conducting distributed key generation and parameter setup.

Concise Example: Private Asset Transfer

In a private asset transfer, Alice can transfer assets to Bob without revealing the transaction details to the public. Using PLONK, Alice generates a proof π that verifies the validity of the transfer and her sufficient balance without disclosing the transaction amount Δ.

Transfer T = (A, B, \pi) where \pi is the PLONK proof

  1. \pi \leftarrow \text{generateProof}(A, B, \Delta)
  2. Submit (A, B, \pi) to the transfer contract
  3. Contract verifies \pi
    • If \pi is valid, execute transfer from A to B
    • Else, reject transfer

The proof \pi ensures the following conditions:

  • B_A \geq \Delta (Alice’s balance is sufficient)
  • B_A' = B_A - \Delta (Alice’s updated balance)
  • B_B' = B_B + \Delta (Bob’s updated balance)

The smart contract executes the transfer only if the proof is valid, ensuring the transfer's legitimacy without revealing the transaction details.
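
For illustration, the conditions attested by \pi can be restated as a plain balance-update check. This is a hypothetical sketch of the statement being proven, not circuit or contract code:

# Plain-Python restatement of the three conditions the PLONK proof attests to
# (illustrative only; variable names are assumptions, not contract code)
def transfer_statement_holds(b_a: int, b_b: int, b_a_new: int, b_b_new: int, delta: int) -> bool:
    return (
        delta >= 0
        and b_a >= delta            # B_A >= Δ: Alice's balance is sufficient
        and b_a_new == b_a - delta  # B_A' = B_A - Δ: Alice's updated balance
        and b_b_new == b_b + delta  # B_B' = B_B + Δ: Bob's updated balance
    )

assert transfer_statement_holds(100, 20, 70, 50, 30)        # valid transfer of Δ = 30
assert not transfer_statement_holds(100, 20, 70, 50, 200)   # rejected: insufficient balance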

By leveraging PLONK, the proposed decentralized payment network achieves a balance between privacy, scalability, and ease of implementation. The universal and updateable nature of PLONK, combined with the optimization techniques and secure trusted setup ceremony, provides a solid foundation for building a privacy-focused and efficient payment system.

Use Cases for Privacy

Confidential Voting Systems

Confidential voting systems are a critical use case for enhanced privacy in decentralized networks. Voting systems must ensure that each vote is anonymous and secure while maintaining the integrity and transparency of the election process. By leveraging zk-SNARKs, our network can provide a solution that guarantees the confidentiality of votes while allowing for public verification of election results.

In a confidential voting system built on the proposed network, voters would cast their votes through private transactions, with zk-SNARKs proving that each vote is valid and belongs to an eligible voter without revealing the voter’s identity. The votes would be tallied through a series of confidential transactions, with the final result verifiable through the aggregated Merkle roots stored on-chain. This approach ensures that the voting process is transparent and auditable while preserving the privacy of individual voters.

Private Asset Transfers

Private asset transfers are another significant use case for enhanced privacy in a decentralized network. These transfers require confidentiality to protect the financial privacy of users, ensuring that transaction details remain private while the integrity of the transfer is verifiable.

With the proposed network, users can transfer assets through confidential payment channels, with zk-SNARKs proving the validity of the transactions without revealing the amounts or the identities of the parties involved. This feature is particularly valuable for businesses and individuals who wish to keep their financial transactions private, such as in the case of sensitive business deals, wealth management, or personal remittances.

Secure Health Records Management

Secure health records management is an essential use case for enhanced privacy, where sensitive health information must be kept confidential while ensuring that authorized parties can verify the records. Using zk-SNARKs, the proposed network can enable the secure storage and sharing of health records while maintaining patient privacy.

In this use case, health records would be stored off-chain, with zk-SNARKs proving the authenticity and integrity of the records without revealing their contents. Patients can grant access to their records to authorized parties, such as healthcare providers or insurance companies, through private transactions. The authorized parties can then verify the records’ authenticity using the zk-SNARK proofs, ensuring that the records have not been tampered with while preserving patient confidentiality.

Global Payment System

A global payment system is perhaps the most scalable and impactful use case for a decentralized network with enhanced privacy. Such a system must provide sufficient privacy to protect user transactions while ensuring transparency and scalability to facilitate mass adoption. By leveraging zk-SNARKs, the proposed network can achieve a balanced privacy level that ensures transaction confidentiality without hindering scalability or regulatory compliance.

In a global payment system built on the proposed network, users can transact through confidential payment channels, with zk-SNARKs proving the validity of transactions without revealing the amounts or the identities of the parties involved. This privacy level can be customized based on the specific requirements of different jurisdictions, ensuring compliance with local regulations while still preserving user privacy.

To facilitate cross-border transactions and enable seamless interoperability with existing payment systems, the network can integrate with traditional financial institutions and payment processors through secure off-chain communication channels. These channels can leverage zk-SNARKs to prove the authenticity of transactions and balances without revealing sensitive information, enabling a hybrid approach that combines the benefits of decentralized privacy with the reach and stability of established financial networks.

By leveraging zk-SNARKs in these use cases, the proposed decentralized payment network can provide enhanced privacy and scalability, making it suitable for a wide range of applications. These examples illustrate how the network can achieve a balance between privacy and transparency, facilitating mass adoption while maintaining the necessary confidentiality.

Conclusion

The proposed decentralized payment network offers:

  1. Higher Throughput: Comparable to or exceeding state channels and rollups.
  2. Instant Finality: Superior to Optimistic Rollups.
  3. Cost Efficiency: Reduces on-chain interactions and eliminates mining.
  4. Enhanced Privacy: Matches or surpasses ZK-Rollups.

The unique combination of features in the proposed network makes it a potentially more scalable and private solution compared to existing Layer 2 systems.

References

  1. Poon, J., & Dryja, T. (2016). The Bitcoin Lightning Network: Scalable Off-Chain Instant Payments. Lightning Network Whitepaper. https://lightning.network/lightning-network-paper.pdf
  2. Buterin, V., & Poon, J. (2017). Plasma: Scalable Autonomous Smart Contracts. Plasma Whitepaper. https://plasma.io/plasma.pdf
  3. Raiden Network Team. (2017). Raiden Network: Fast, Cheap, Scalable Token Transfers for Ethereum. Raiden Network
  4. Celer Network. (2019). Celer Network: Bring Internet Scale to Every Blockchain. Celer Network Whitepaper. https://www.celer.network/doc/CelerNetwork-Whitepaper.pdf
  5. PLONK Documentation. (n.d.). ZK-SNARKs: PLONK. Retrieved from https://docs.plonk.cafe/
  6. Ben-Sasson, E., Chiesa, A., Tromer, E., & Virza, M. (2014). Scalable Zero-Knowledge via Cycles of Elliptic Curves. In International Cryptology Conference (pp. 276-294). Springer, Berlin, Heidelberg.
  7. Groth, J. (2016). On the Size of Pairing-based Non-interactive Arguments. In Annual International Conference on the Theory and Applications of Cryptographic Techniques (pp. 305-326). Springer, Berlin, Heidelberg.
  8. Zhang, F., Cecchetti, E., Croman, K., Juels, A., & Shi, E. (2016). Town Crier: An Authenticated Data Feed for Smart Contracts. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (pp. 270-282).
  9. Ben-Sasson, E., Chiesa, A., Genkin, D., Tromer, E., & Virza, M. (2015). SNARKs for C: Verifying Program Executions Succinctly and in Zero Knowledge. In Annual Cryptology Conference (pp. 90-108). Springer, Berlin, Heidelberg.
  10. Hioki, L., Dompeldorius, A., & Hashimoto, Y. (2024). Plasma Next: Plasma without Online Requirements. Ethereum Research.

2 posts - 1 participant

Read full topic

Economics Is it worth using MEV-Boost?

Published: Jun 06, 2024

View in forum →Remove

Is it worth using MEV-Boost?

To answer that question from an economic perspective, we will look into the APYs.
> For simplicity, we assume a total of 1 million active validators and ignore sync-committee rewards.
> The underlying data ranges from November 2023 - 6 June 2024 and includes all slots.

First, let’s check the difference between local block building and using MEV-Boost.
We can see that the block reward is higher for MEV-Boost users:

The median block reward increases from 0.0076 to 0.0380 ETH (400% more).

What does that mean on an annual basis?

The ~2.6 blocks that a validator statistically gets to propose per year yield a total of 0.0199 ETH in block rewards.
For MEV-Boost blocks, those 2.6 blocks yield a total of 0.0998 ETH per year.

When shown in a pie chart, we can see that the block reward's share (green) of the total expected rewards per year grows from 2.96% to 13.4%.

What does that mean for the APY?

For validators not using MEV-Boost, the expected annual revenue is 0.929 ETH.
For validators using MEV-Boost, the expected annual revenue is 1.009 ETH.
That is an additional ~8.6% of revenue.

Using MEV-Boost increases the APR from 2.93% to 3.24%.

For the APY (compounding every epoch):

\text{APY}_{local\ builder} = \left(1 + \frac{\text{APR}}{n} \right)^n - 1 = \left(1 + \frac{0.0293}{365 \times 225} \right)^{365 \times 225} - 1 = 2.97\%

\text{APY}_{mevboost} = \left(1 + \frac{\text{APR}}{n} \right)^n - 1 = \left(1 + \frac{0.0324}{365 \times 225} \right)^{365 \times 225} - 1 = 3.29\%

Finally, using MEV-Boost increases the APY from 2.97% to 3.29%.
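
As a sanity check, the epoch-compounded APY above can be reproduced with a few lines of Python (assuming 225 epochs per day, as in the formulas; the APR inputs are the figures quoted above):

def apy(apr: float, periods: int = 365 * 225) -> float:
    # compound the per-epoch rate over a year's worth of epochs
    return (1 + apr / periods) ** periods - 1

print(f"local builder: {apy(0.0293):.2%}")  # ~2.97%
print(f"MEV-Boost:     {apy(0.0324):.2%}")  # ~3.29%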


Find the code used for this analysis here.

10 posts - 7 participants

Read full topic

Networking Gossipsub Network Dynamicity through GRAFTs and PRUNEs

Published: Jun 06, 2024

View in forum →Remove

Summary & TL;DR

The ProbeLab team (https://probelab.io) is carrying out a study on the performance of Gossipsub in Ethereum’s P2P network. Following on from our previous post on the “Effectiveness of Gossipsub’s gossip mechanism”, in this post we investigate the frequency of GRAFT and PRUNE messages and the dynamicity, in terms of session duration and network stability, that results from these protocol primitives. For the purposes of this study, we have built a tool called Hermes (https://github.com/probe-lab/hermes), which acts as a GossipSub listener and tracer. Hermes subscribes to all relevant pubsub topics and traces all protocol interactions. The results reported here are from a 3.5hr trace.

Study Description: The purpose of this study is to determine the frequency of GRAFT and PRUNE messages, as a distribution and standard deviation per topic. We measure the session duration between our peer and other peers, split per client, and attempt to identify any anomalies (e.g., too-short connections) and potential patterns.

TL;DR: Overall, we conclude that Gossipsub is keeping a stable mesh as far as the MeshDegree goes, rarely moving outside the DLow and DHigh bounds of 6 and 12, respectively, despite increased dynamicity at times (i.e., increased numbers of GRAFTs and PRUNEs). Teku nodes always tear down connections to our node within a few seconds (or less) and fail to keep any connection running for longer periods of time (even minutes). This is likely to lead to some instability, which, however, doesn’t seem to be impacting the correct operation of the rest of the network.

For more details and results on Ethereum’s network head over to https://probelab.io for discv5 Weekly Network Health Reports.

Background

GossipSub is the most widely used libp2p PubSub routing mechanism. Gossipsub includes enhanced PubSub routing that aims to reduce the bandwidth by maintaining fewer connections per subscribed topic (through the mesh) and sending some sporadic message metadata for resilience purposes (through the gossip mechanism). This ensures that despite the message being broadcasted using the shortest latency path (mesh), the peers still have the backup of sharing msg_ids (gossip) to prevent missing messages that have already propagated through the network.

Because GossipSub reduces the overall number of connected peers per topic (the topic meshes), having enough peers in these meshes is crucial for efficient message broadcasting in any network. These meshes work as a connectivity sub-layer under the libp2p connections, as there is no direct one-to-one mapping between a libp2p peer connection and a mesh peer connection.
This separation makes the protocol efficient, as a peer doesn’t need to “waste” bandwidth sending full messages to more peers than necessary, while avoiding spamming peers that have already shared the same message.

However, this means that the routing mechanism has to control how many peers it is connected to, or how many it needs to GRAFT (add) or PRUNE (remove) for each mesh, complicating things a little more.

This report provides insights into the GRAFT and PRUNE events (Network Dynamicity), and the RPC calls (Session Duration), that our Hermes-based node could track over 3.5 hours while participating in the Holesky testnet.

Results

Add and Remove peer events

The stability of the network’s mesh relies heavily on the connections that the libp2p host keeps open. The following graph shows the number of ADD_PEER and REMOVE_PEER events that the Hermes node tracked during the 3.5 hour run.

From the plot, we don’t see any particularly inconsistent behaviour or pattern that stands out. The number of connections and disconnections at the libp2p host remains relatively stable around ~40 events per minute.

GRAFT and PRUNE events

GRAFT and PRUNE messages define when a peer is added or removed from a mesh we subscribe to. Thus, they directly show the dynamics of the peer connections within a topic mesh.

The following graph shows the number of GRAFTs and PRUNEs registered by the Hermes node.

We have also split this down by topic, and produced related plots, which however, we don’t include here, as the split is roughly equal among topics.

We observe that the number of recorded events spikes after the ~2.5 hours of Hermes operation, jumping from peaks of ~100 events per minute to peaks of up to 700-800 events per minute.

Correlation with GRAFT and PRUNE RPC events

After 2 hours of the node being online, we can see that the GossipSub host tracks mesh connectivity at a higher frequency than during the first hours. Since a single RPC call can include GRAFTs or PRUNEs for multiple topics, these events correspond to the sum over the sent and received RPC calls.

Taking a closer look at the origin of those messages with the following plot, we can see that the largest number of events belongs to RECV_PRUNE and SENT_GRAFT.

As before, we have also split this down by topic, and produced related plots, which however, we don’t include here, as the split is roughly equal among topics.

We can see that there is some perturbation to the stability of the meshes due to increased number of GRAFT and PRUNE events at the end of the trace period. There are two possible explanations for this:

  • Remote nodes are dropping our connection for whatever reason (low peer score, or just sporadic PRUNE messages because their mesh is full), and our Hermes node counters this drop of peers by sending more GRAFT messages to keep up with the MeshDegree = 8.
  • We are sending too many GRAFT messages to rotate our peers and test the connection to other nodes in the network, which is countered by the remote peers by sending PRUNE messages as their mesh might be full already. Note that Hermes has an upper limit of 500 simultaneous connections, so we expect it to have a higher range of connected peers than other nodes in the network.

In any case, we do not consider this to be an alarming event, and the increased number of events might just be due to increased network activity. We will verify this through further experiments in the near future, where we’ll collect traces for a longer period of time.

Mesh connection times

It is important to have a mix of steady connections and some degree of rotation within each of the topic meshes. In fact, this level of rotation is what guards a node against ending up eclipsed by malicious actors (although GossipSub-v1.1 also has the gossip mechanism to overcome eclipse attempts).
The following graphs show the average connection times per peer and per mesh.

Measuring the average connection stability at the node level can surface unusual behaviour in the network. In this case, our Hermes run measured widely dispersed results:

  • 80% of the peers drop the connection after a few seconds of establishing it, although it has to be noted that the high percentage here owes to the spikes we’ve seen towards the end of the trace period in this particular dataset. We do not expect this to be the normal behaviour under “steady state”.

  • 10% of the peers remain connected for a total of ~4 minutes.

  • the remaining 10% of the connections last between ~5 minutes and 1.6 hours.

However, these plots do not give the full picture. As we saw with the number of GRAFT and PRUNE messages, these distributions have a time component: the fact that 80% of the connections last under a second does not mean they were evenly distributed over the trace period. To provide more clarity, we plot the connection duration (in seconds) split into 30-minute windows over the 3.5hr trace period.
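
For reference, the 30-minute windowing can be reproduced with a pandas sketch along the following lines (the file and column names, such as connected_at and duration_s, are assumptions for illustration, not the actual Hermes trace schema):

import pandas as pd

# Load the per-connection records exported from the trace (hypothetical file/columns)
df = pd.read_csv("hermes_connections.csv", parse_dates=["connected_at"])

# Bin each connection into the 30-minute window in which it was established
df["window"] = df["connected_at"].dt.floor("30min")

# Summarize connection durations per window (p25/p50/p80/p90/p99, as in the table below)
summary = df.groupby("window")["duration_s"].describe(percentiles=[0.25, 0.5, 0.8, 0.9, 0.99])
print(summary)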

Correlating this with the sudden spikes of GRAFTs and PRUNEs towards the end of the trace period, we find that the connections established over the first two and a half hours were indeed longer in duration.

We have created graphs breaking down the connections by topic, by agent, and by both topic and agent for 30min windows. We present only one representative plot here, as we didn’t see any behaviour that stands out, other than the following:

  • Lodestar clearly maintains connections for longer periods of time, followed by Nimbus
  • Teku nodes consistently disconnect almost immediately.

Both of our observations are also evident from the following table, which includes the percentiles of the total connected time per client (in seconds).

Percentiles p25 p50 p80 p90 p99
Grandine 6.53 17.22 31.52 43.67 117.36
Lighthouse 9.94 19.82 36.25 74.89 570.02
Lodestar 2.06 7.12 1768.40 5855.87 7165.20
Nimbus 200.44 523.96 599.88 599.97 4157.40
Prysm 5.00 5.00 175.65 594.67 4322.25
Teku 0.10 0.13 0.64 1.72 5.85
Unknown 0.07 0.14 5.00 5.22 5.90

Resulting number of peers per mesh

It is important to highlight that, despite the spike in network connectivity and the related RPC interactions, our node kept a stable range of connections within each of the topic meshes.

The following graph shows the total number of mesh connections for each topic, binned at 5 minute intervals.

The range of connections per topic barely goes below 6, even at the last hour of data, where we’ve seen the spikes in GRAFT and PRUNE events. We can observe that:

  • GossipSub is doing a good job at keeping the range of connections around the gossipSubD=8 (between the gossipSubDlo=6 and the gossipSubDhi=12).
  • This ratio of connections per topic does see a small drop on the last hour of the run, matching the sudden spike of RECV_PRUNEs seen in earlier plots.
  • The drop in the number of connections during the last hour does seem related to the fact that the connections opened during this period were actually way shorter than the previous ones.

Conclusions and takeaways

  • Overall, we conclude that Gossipsub is keeping a stable mesh as far as the MeshDegree goes, rarely moving outside the DLow and DHigh bounds of 6 and 12, respectively, despite increased dynamicity at times (i.e., increased numbers of GRAFTs and PRUNEs).
  • Teku nodes always tear down connections to our node within a few seconds (or less) and fail to keep any connection running for longer periods of time (even minutes). This fact is likely to lead to some instability, which, however, doesn’t seem to be impacting the correct operation of the rest of the network (at least from what we can see so far).
  • Our data shows a sudden spike in GRAFT and PRUNE events during the last hour of the study, despite the ADD_PEER and REMOVE_PEER events staying steady over the entire run. This could suggest that Hermes or the network is struggling to maintain mesh connections with other participants.
    • The spike in GRAFTs and PRUNEs generally comes from incoming PRUNE events followed by subsequent outgoing GRAFT events. It is still not clear what triggers this behaviour, i.e., whether it is just an anomaly coming from the strict connectivity limits of remote peers that refuse connections quite often, or whether it is Hermes that spams remote peers with more GRAFTs than it should, with remote peers in turn responding with PRUNE events.
    • We conclude that this behaviour is worth keeping an eye on (i.e., nodes ending up with stable connections at the host level while struggling to keep a healthy number of connections in each mesh) and verifying through a longer experiment. However, we do not consider it a critical incident, as it does not cause any other metric to suffer.

2 posts - 2 participants

Read full topic

Layer 2 Based Sequencer Selection

Published: Jun 05, 2024

View in forum →Remove

This is an exploratory proposal for a method of deterministically identifying sequencers on L2s to route transactions to, in order to facilitate universal synchronous composability between L2s. This approach uses the Ethereum base layer as a universal and credibly neutral sequencer selection mechanism, intended as a fallback to based sequencing, in cases where based sequencing may not be suitable for an L2 to adopt fully. This is very much an initial proposal intended for general feedback, discussion and debate.

Proposal

LimeChain’s proposal on Vanilla Based Sequencing describes two methods for selecting the sequencer on L2: primary selection, when the current L1 proposer has opted-in to be an L2 sequencer for the rollup, and fallback selection, when the current L1 proposer has not opted-in to be an L2 sequencer for the rollup, and hence some other method is required to select the L2 sequencer.

In the fallback selection mechanism, the L2 selects an L1 proposer at random from the other opted-in L1 proposers to be the L2 sequencer. This method will work perfectly well, but limits the possibility for synchronous composability between rollups. A single L1 proposer that is also the designated sequencer on multiple rollups is able to offer guarantees that bundles of transactions will be executed atomically on the respective rollups. In the fallback selection mechanism, there is no longer a single L1 proposer sequencing for two or more rollups at the same time, and instead we have separate and distinct L1 proposers, and guarantees of atomic execution of transactions are no longer possible.

To address this challenge, I propose to explore the idea of a deterministic sequencer selection function on L1 that can be used by any rollup. This would allow wallets or pre-confirmation gateways to identify which sequencers will be selected when, on which rollups, allowing them to route pre-confirmation requests accordingly.

The high level idea is that a pre-confirmation gateway can accept a bundle of transactions, typically a pair of transactions to be executed on two distinct rollups, and can route the requests to the relevant sequencers on the respective rollups, obtaining the preconf promises for the user.

Requirements

  • Sequencers on L2s will need to be recorded via some central registry on L1 (perhaps an extension of the L1 pre-confirmation registry), and will need to be able to issue pre-confirmations.

  • Pre-confirmations will probably need to be confirmed in 2 rounds instead of one. The first request is to place a lock on the preconfs (on a very short timeframe, in the order of 100s of milliseconds), and the second is to confirm. This will allow a pre-confirmation gateway (or wallet) to obtain a commitment that both sequencers on their respective rollups will issue a preconf promise, and when the gateway has commitments from both, it can confirm the preconf requests and receive the preconf promises from both. The nomenclature gets a bit confusing here and will need to be thought about. Alternatively, another potentially less complicated but less secure approach is to allow the pre-confirmation gateway to cancel a pre-confirmation upon receiving a preconf promise from only one of the two L2 sequencers.

  • There is a deterministic function in a smart contract on L1 which can be used to identify which L2 sequencer is going to be selected when, and on which rollup. This could leverage prevrandao via block.difficulty to provide a lookahead which L2s use to select sequencers on L2. This could be as simple as a function that acts as a PRF, accepting a lookahead size n measured in slots and a bounded range r corresponding to the number of L2 sequencers, and referencing block.difficulty to return a list of random sequencer ids of size n within the range r (a minimal sketch is given after this list).

  • May require some collateral to incentivize L2 sequencers to honor preconf requests, and to keep the L1 registry up-to-date, though this last point could prove difficult to coordinate among L2s and will require some more thought.
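
To make the lookahead idea concrete, here is a minimal sketch, written in Python rather than Solidity for brevity. The function name, hashing scheme, and parameters are illustrative assumptions, with prevrandao standing in for the randomness exposed on L1 via block.difficulty:

import hashlib

def sequencer_lookahead(prevrandao: bytes, base_slot: int, n: int, r: int) -> list[int]:
    """Return n pseudo-random sequencer ids in [0, r) for slots base_slot .. base_slot + n - 1."""
    ids = []
    for slot in range(base_slot, base_slot + n):
        # PRF-style derivation: hash the randomness beacon together with the slot number
        digest = hashlib.sha256(prevrandao + slot.to_bytes(8, "big")).digest()
        ids.append(int.from_bytes(digest, "big") % r)
    return ids

# Example: a 32-slot lookahead over a registry of 500 opted-in L2 sequencers
print(sequencer_lookahead(b"\x42" * 32, base_slot=9_000_000, n=32, r=500))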

Rationale

The fallback sequencer selection will be employed by L2s more frequently in the beginning, until enough L1 proposers opt-in to based sequencing.

We need at least 20% of Ethereum’s validator set to opt-in to based sequencing in order to have at least 1 proposer per epoch. Moreover, these proposers will need to opt in to being sequencers for a number of rollups (assuming enough rollups will provision for this in their sequencer designs - tbd). All of this is against a background of numerous AVSs that could offer more economically attractive alternatives for proposers to opt into.

Assuming we do reach the required threshold of 20% of the Ethereum validator set having opted-in to based sequencing (roughly 200K validators), depending on the design of decentralized L2s, they may not be able to support this many sequencers. For L2s that might seek to implement some form of consensus protocol using a validator set with a proposer selection mechanism, it may be impractical to support 200K validators. In this instance, having a credibly neutral sequencer selection mechanism would be helpful.

Furthermore, the fallback selection mechanism will likely continue to be employed by newer or smaller rollups in the future, before they can build up a large enough share of L1 proposers that opt in to being sequencers for their rollups. Again, in this instance, credibly neutral sequencer selection could be appealing.

The fallback selection by itself doesn’t support universal synchronous composability, but synchronizing sequencer selection across rollups in a credibly neutral way (by using the L1), can facilitate USC.

Risks and Trade-offs

Users will need to trust the crypto-economic guarantees of the respective L2s that they are seeking preconfs of atomic execution of transactions from. Even if the L2 sequencers are also L1 validators (note that they don’t need to be), users will still need to rely on the L2’s security.

Under the primary selection method of the vanilla based sequencing approach, these security guarantees are improved somewhat, as we are dealing with a single L1 proposer for both rollups, and so their ability to offer guarantees of atomic transaction execution across L2s is improved.

Assuming that the L1 proposer posts the data from each rollup to the L1, then we also benefit from the subjective finality of those transactions, and so we know within one slot if the respective transactions have been included or not. This is also a strong assumption, however, as it depends on the volume of transactions on L2, with most rollups waiting until they have enough transactions to completely fill a blob with compressed data, which may not happen within one L1 slot. So again, it falls back to the user relying on the crypto-economic security of the L2s themselves.

This approach will of course require some buy-in from L2s that will need to implement the fallback sequencer selection, but this is also the case with based sequencing in general.

A major trade-off is the increased complexity from managing atomic execution guarantees, as described in the requirements section above. This will likely involve a lock-and-confirm mechanism between L2 sequencers and the preconf gateways (or a user’s wallet), whereby the preconf gateway asks the sequencer to put a hold on a preconf for about a second, and then confirms that they want to go ahead with the preconf.

Open Questions

It’s not clear to me exactly how to incentivize L2s or L2 sequencers to keep the registry up-to-date. If an L2 sequencer ceases to participate as an L2 sequencer, the registry will need to reflect this. This challenge is addressed in mteam’s proposal for Credibly Neutral Preconfirmation Collateral, and potentially the same registry could be used for L2 sequencers, but this needs to be explored. Using a separate registry could bloat the ecosystem with the redundant overhead of managing multiple registrations for based-sequencing proposers.

There may be understandable apprehension from the ecosystem if the preconf gateways on L1 are used to route transactions to sequencers on L2s. This will cause even more centralization across the ecosystem, with potentially only 2 or 3 preconf gateways routing transactions across L1 and also a number of L2s. This could be improved through routing to individual sequencers via independent gateways on each L2, but this needs to be explored further.

This proposal as it stands will not withstand L1 re-orgs. It is an open question how the sequencer selection mechanism can be impervious to L1 re-orgs.


All questions, feedback, criticisms are very welcome.

5 posts - 3 participants

Read full topic

Cryptography An RSA Deterministic Key Generation Scheme: Algorithm and Security Analysis

Published: Jun 05, 2024

View in forum →Remove

TL;DR: We propose a secure and practical deterministic key generation scheme and pseudorandom number generator, from which RSA keys can be generated to simplify key backup and retrieval.
This research is a joint effort from Ethereum Fellows: @Mason-Mind @georgesheth @dennis @AshelyYan.

1. Introduction

Traditionally, cryptographic keys are generated using randomness to ensure unpredictability.
In contrast, deterministic key generation refers to the process of generating cryptographic keys in a deterministic manner. In deterministic key generation, a single starting point (seed) is used to derive the keys using some Key Derivation Function (KDF). This brings convenience to key management, but also security risks and privacy concerns. The advantages can be summarized as follows:

  • Streamlined key management: Users can generate a sequence of keys from the initial seed, eliminating the necessity to store multiple keys individually. Instead, they only need to safeguard the initial seed in order to secure all the derived keys.
  • Simplified key backup and retrieval: Users can regenerate all the cryptographic keys from the initial seed, which simplifies the key retrieval.
  • Meets industry demand: industry developers are always looking for this kind of solution, and it has significant value for product delivery.

However, in such a scheme, if the seed is compromised, every derived key becomes vulnerable to potential threats. Therefore, it is important to keep the initial seed secure. Meanwhile, it is essential that the initial seed remains unpredictable to adversaries, for reasons that are self-evident.

The main difficulty in designing a KDF is the initial keying material. When the source keying material is not uniformly random or pseudorandom, additional preprocessing is needed. In blockchain, there are some applications and solutions. Hierarchical Deterministic Bitcoin Wallets provide an interesting way of managing cryptographic keys, which form a tree structure. At the root, there is a randomly generated master seed. Using the deterministic key generation technique outlined in the BIP32 standard, this master seed can produce child keys. Since all the keys are generated deterministically, the same set of keys can be generated from the master seed. Argon2 is another popular approach for key derivation. It is designed to be resistant against brute-force attacks. Argon2 improves security by using a salt and a password as inputs. The salt is a unique random value. The purpose of using salt is to prevent attackers from using precomputed tables (rainbow tables) to apply brute-force attacks on passwords. Since unique salts are used, even if two users have the same password, their hashed values are still different.

In this proposal, we consider a particular problem of deterministic key generation, which generates RSA keys from ECDSA signatures. We assume that each user has an ECDSA private key sk, and wants to generate an RSA key pair. Instead of generating the RSA key pair separately, we make the RSA key pair a deterministic function of sk and a fixed message m. Our goal is to make the RSA keys secure. In particular, we want the probability of RSA key collisions from different users to be negligible. Due to the nature of RSA key generation, it is not enough to generate a single random number and use it directly as the private key. Instead, we need a pseudorandom number generator. Due to this requirement, we cannot use the approaches that we discussed previously.

To solve this problem, we propose a deterministic key generation scheme and pseudorandom number generator, from which RSA keys can be generated. At a high level, we pick the initial seed to be a signature sig(m, sk). The hash of the signature is then used as the key of the AES cipher. The algorithm and security analysis follow.

2. Algorithm

Our goal is to generate RSA keys securely and deterministically from ECDSA signatures. To achieve this goal, we use the standard procedure to generate RSA keys and a pseudorandom number generator prng, which provides all sources of randomness. The sequence that prng generates is a deterministic function of the ECDSA private key sk and the message m. Therefore, the same user always generates the same RSA key pair. But from the adversary’s point of view, the sequence that prng generates looks random. Hence the RSA keys are random.

At a high level, we start with an ECDSA signature. We do not assume that the signature is pseudorandom. Instead, we assume it to be unforgeable. This implies that the signatures form a sufficiently large space. (Otherwise, a brute-force attack would succeed with non-negligible probability.) We also assume that SHA256 is a random oracle, which roughly means that the output of the hash function looks random to an adversary unless the adversary knows the preimage of the function. But since we already assumed that ECDSA signatures are unforgeable, the adversary knows the preimage of the hash function only with negligible probability. We can therefore safely conclude that the output of the hash function is pseudorandom, and we use it as the key to the AES block cipher. We then get a good pseudorandom number generator, from which random keys can be generated.

Algorithm 1 describes the details of the algorithm. Given an ECDSA private key sk and a message m, first sign m using sk to get an ECDSA signature sig. We then use SHA256 to hash sig to get a key, and set seed to be the hash of key. A pseudorandom number generator prng_{sig} can thus be obtained using AES encryption. We define AES(seed, key) to be the sequence of AES encryption in counter mode: AES_{ENC}(seed, key), AES_{ENC}(seed+1, key), AES_{ENC}(seed+2, key), \ldots.

Algorithm 1: Deterministic Key Generation
Input: a fixed message m, secret key sk from ECDSA
Output: RSA key pair
Function:  detGenKeyPair(m, sk)
sig = ECDSA_Sign(m, sk)
key = SHA256(sig)
seed = SHA256(key)
prng_sig = AES(seed, key)
RSAKeyPair(prng_sig)

To generate the RSA key pairs, we use the standard RSA key generation procedure (Algorithm 2). The pseudorandom number generator is responsible for providing all the randomness used in Algorithm 2.

Algorithm 2: RSA Key Generation using prng
Function:  RSAKeyPair(prng_sig)
p <- prng_sig
while p is not a prime do p <- prng_sig
q <- prng_sig
while q is not a prime do q <- prng_sig
Compute n = p*q
Compute l(n) = (p-1)*(q-1)
Pick some e (e and l(n) are relatively prime)
Compute d: e * d = 1 mod l(n)
Return: e, d, n
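
For illustration, a minimal Python sketch of Algorithms 1-2 is given below. It assumes the ECDSA signature sig is supplied as bytes (the signing step itself is omitted), uses AES-CTR from the cryptography package as AES(seed, key), fixes e = 65537, and borrows sympy.isprime in place of a production primality test. It is a sketch of the scheme under these assumptions, not the authors' implementation:

import hashlib
from math import gcd
from sympy import isprime
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

class AesCtrPrng:
    """Deterministic stream AES_ENC(seed, key), AES_ENC(seed+1, key), ... (counter mode)."""
    def __init__(self, seed: bytes, key: bytes):
        # The first 16 bytes of seed serve as the initial counter block
        self._enc = Cipher(algorithms.AES(key), modes.CTR(seed[:16])).encryptor()

    def randbits(self, k: int) -> int:
        value = int.from_bytes(self._enc.update(b"\x00" * (k // 8)), "big")
        return value | (1 << (k - 1)) | 1  # force top bit and oddness for prime candidates

def det_gen_keypair(sig: bytes, bits: int = 2048):
    key = hashlib.sha256(sig).digest()      # key  = SHA256(sig)
    seed = hashlib.sha256(key).digest()     # seed = SHA256(key)
    prng = AesCtrPrng(seed, key)            # prng_sig = AES(seed, key)

    def next_prime() -> int:
        candidate = prng.randbits(bits // 2)
        while not isprime(candidate):       # draw from prng_sig until a prime is found
            candidate = prng.randbits(bits // 2)
        return candidate

    p, q = next_prime(), next_prime()
    n, l = p * q, (p - 1) * (q - 1)
    e = 65537                               # fixed public exponent (assumption)
    assert gcd(e, l) == 1                   # in practice, re-draw p, q if this fails
    d = pow(e, -1, l)                       # e * d = 1 mod l(n)
    return e, d, n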

In order to show that detGenKeyPair returns “good” RSA key pairs, we compare our algorithm with a popular reference implementation, Forge, which is a native implementation of TLS and various other cryptographic tools in JavaScript. Forge implements a function forge.pki.rsa.generateKeyPair, which generates RSA key pairs. From now on, we will refer to forge.pki.rsa.generateKeyPair as generateKeyPair. Our goal is to show that the key pairs that detGenKeyPair generates are as good as those that generateKeyPair generates.

More precisely, we want to show that, for any probabilistic polynomial-time adversary \mathcal{A}, given oracle access to the deterministic key generation function detGenKeyPair(\cdot, \cdot) and to generateKeyPair from Forge, \mathcal{A} cannot distinguish between the keys generated by the two oracles.

3. Security Analysis

3.1 Indistinguishability

First, we prove that the adversary \mathcal{A} cannot distinguish between the keys generated by detGenKeyPair and generateKeyPair.

Theorem 1: For any probabilistic polynomial-time adversary \mathcal{A}, |Pr[\mathcal{A}^{detGenKeyPair(\cdot,\cdot)}=1]-Pr[\mathcal{A}^{generateKeyPair(\cdot)}=1]| is negligible, assuming:

(1) SHA256 is a random oracle.

(2) ECDSA signatures are unforgeable.

(3) AES is a pseudorandom function family.

Proof 1: We use proof by contradiction. We show that if the adversary \mathcal{A} can distinguish between the keys generated by the two procedures, it must be the case that at least one of the assumptions does not hold.

Suppose that |Pr[\mathcal{A}^{detGenKeyPair(\cdot,\cdot)}=1]-Pr[\mathcal{A}^{generateKeyPair(\cdot)}=1]| is non-negligible. Since all the randomness comes from prng_{sig} and prng_{rand} (the randomness source that generateKeyPair uses), it must be the case that \mathcal{A} can distinguish between prng_{sig} and prng_{rand} with non-negligible probability. Recall that prng_{sig} is a sequence obtained by running AES encryption in counter mode, so prng_{sig} is AES_{Enc}(seed, key), AES_{Enc}(seed+1, key), AES_{Enc}(seed+2, key), etc. Let \mathcal{K} be the set of all possible keys. There are two possibilities: (i) key is uniformly at random from \mathcal{K}; in this case, distinguishing prng_{sig} from true randomness violates our assumption that AES is a pseudorandom function family. (ii) key is not uniformly at random from \mathcal{K}.

Suppose that key is not uniformly at random from \mathcal{K}. According to Algorithm 1, key = SHA256(sig), where sig=ECDSA\_Sign(m, sk). There are two cases to consider. (i) \mathcal{A} knows sig, which violates our assumption that ECDSA signatures are unforgeable. (ii) \mathcal{A} does not know sig. This violates our assumption that SHA256 is a random oracle.

3.2 Collision-resistance

Theorem 2: If the probability of collision of keys generated by generateKeyPair is negligible, the probability of collision of keys generated by detGenKeyPair is also negligible.

Proof 2: We use proof by contradiction. Suppose that the probability of collision of keys generated by generateKeyPair is negligible, while the probability of collision of keys generated by detGenKeyPair is non-negligible. The adversary can generate a set of keys \mathcal{K}, and count the number of collisions in \mathcal{K}. If the number of collisions is negligible, then the keys were generated by generateKeyPair; otherwise, they were generated by detGenKeyPair. So the adversary can distinguish which oracle it is interacting with. But according to Theorem 1, such an adversary does not exist.

3.3 Correctness

Let (e,d,n) be the keys that detGenKeyPair returns. Let m be a message; the encryption and decryption functions are defined as follows: RSA_{enc}(m) \stackrel{def}{=} m^e mod n, RSA_{dec}(m) \stackrel{def}{=} m^d mod n.

Theorem 3: Let (e, d, n) be any RSA key that detGenKeyPair returns. For any message m, RSA_{dec}(RSA_{enc}(m)) \equiv m mod n.

Proof 3: Since l(n) = (p-1)\times(q-1) and e \times d = 1 mod l(n), there must exist some integer k s.t. e \times d = k \times (p-1) \times (q-1) + 1. So RSA_{dec}(RSA_{enc}(m)) \equiv m^{e\times d} mod n \equiv m^{ k \times (p-1) \times (q-1) + 1} mod n.

RSA_{dec}(RSA_{enc}(m)) \equiv (m^{ k \times (q-1)})^{p-1} \times m mod p \equiv m mod p, by Fermat’s Little Theorem.

Similarly, RSA_{dec}(RSA_{enc}(m)) \equiv (m^{ k \times (p-1)})^{q-1} \times m mod q \equiv m mod q, by Fermat’s Little Theorem.

According to the Chinese Remainder Theorem, RSA_{dec}(RSA_{enc}(m)) \equiv m mod n.

4. Conclusion

Our approach begins by enabling users to sign a fixed message using their ECDSA secret keys, ensuring the unforgeability of these signatures against potential adversaries. Subsequently, we employ SHA256 to hash the generated signatures, producing pseudorandom output, under the assumption that SHA256 behaves as a dependable random oracle. This resulting hash serves as the key input for the AES block cipher, facilitating the creation of our PRNG. We provide rigorous proofs of the security of this construction in terms of indistinguishability, collision-resistance, and correctness.

1 post - 1 participant

Read full topic

Meta-innovation BlockTech500 Index: Mitigating the greed and centralized mentalities stifling innovation and killing the true ethos

Published: Jun 02, 2024

View in forum →Remove

TL;DR

Background

In the early days of banking, the pitch or narrative presented to customers focused on a few key benefits:

  1. Safety and Security: Banks and early banking institutions like temples or palaces offered a secure place to store valuable commodities, such as grain, precious metals, and other goods. This was particularly important in ancient times when theft and insecurity were common.

  2. Convenience: Banking institutions provided a convenient way to manage financial transactions. Instead of carrying large amounts of money or goods, individuals could deposit their wealth and make transactions through the bank, simplifying trade and commerce.

  3. Facilitation of Trade: By acting as intermediaries, banks facilitated trade by providing loans and credit. This allowed merchants to finance their operations, expand their businesses, and engage in long-distance trade more effectively.

  4. Record Keeping: Banks maintained detailed records of deposits, loans, and other transactions. This provided individuals and businesses with a reliable way to keep track of their financial activities and plan for the future.

  5. Financial Services: Banks offered various financial services, such as currency exchange, loans, and credit. This enabled individuals and businesses to access funds when needed and manage their finances more effectively.

  6. Interest and Returns: Early banks often offered interest on deposits, providing an incentive for people to deposit their money rather than keeping it idle. This also allowed banks to use deposited funds for lending and investment, generating returns for both the bank and the depositor.

  7. Stability and Trust: Over time, reputable banking institutions established a sense of stability and trust. Customers were reassured by the bank’s reliability and the formalized processes they offered, which were often backed by influential entities like governments or wealthy families.

The narrative was centered around the idea that banks provided a safe, convenient (see the “Athens Greece” news post below), and efficient way to manage and grow wealth (whose wealth?), thereby supporting both individual and commercial financial needs.

That was the Value prop…

While blockchain technologies advocate for decentralization in terms of architecture and governance, the mechanisms for funding and development can sometimes exhibit centralized characteristics. Here are a few reasons for this, and some of the implications.

Banks aren’t the problem. A bank is just a building. Greed is the problem. It creates a barrier for entry, and an even bigger barrier for innovation to thrive how it should, without sacrificing ideologies just to get funding and then being stuck between getting the funds by joining the “Old Boys Club” or sticking to your moral grounds and ideals but not being able to do anything with it.

This stifles innovation and replaces it with what have now become nothing but mere catchphrases. “Decentralization” doesn’t matter on its own; it’s just a potential pathway to what we really need from all this:

  • Equality
  • Freedom of choice and speech
  • Self Sovereignty
  • Trust inherently

Why Care About Decentralization?:

Decentralization isn’t just about rejecting centralized control for its own sake. Instead, it addresses several critical societal and operational concerns:

  1. Empowerment and Equity: Decentralization shifts the power from a centralized authority to multiple stakeholders, enabling a fairer distribution of power. This shift promotes equity, allowing individuals and smaller entities more influence over decisions that affect their lives and operations.

  2. Community and Stakeholder Engagement: By involving more participants in the decision-making process, decentralization enhances community engagement. This involvement can lead to decisions that better reflect the diverse interests and needs of the broader community.

  3. Transparency and Accountability: Decentralized systems often require mechanisms that make operations more transparent and participants more accountable. This openness can reduce corruption and increase trust among stakeholders.

  4. Innovation and Diversity of Thought: Decentralization encourages a wider range of ideas and solutions, fostering innovation. In a decentralized system, individuals and teams have the freedom to experiment and implement diverse solutions, which can lead to more creative and effective outcomes.

  5. Resilience Against Failures and Attacks: A decentralized structure is less prone to systemic failures and cyber attacks. Since there’s no central point of failure, challenges or breaches in one area do not cripple the entire system, making it more robust and reliable.

In summary, when we discuss why we care about decentralization, we’re focusing on building systems that are equitable, inclusive, and designed to serve the broader interests of all stakeholders & users involved, rather than just enhancing the profitability of a few. This approach not only strengthens the system’s ethical foundation but also its operational efficacy and societal impact.

Cause for Concern at the Root (Pain Points for Startups):

Centralization in Grant Distribution

  1. Resource Allocation: Even in decentralized networks, there’s often a need to manage resources, including capital, in a way that ensures the ecosystem grows healthily. Foundations often steward these resources because they have the infrastructure to manage and distribute funds effectively.

  2. Quality Control: By funneling grants through foundations or similar bodies, the ecosystem can maintain a certain level of quality and coherence in development. This helps prevent fragmentation and ensures that projects align with the overall strategic goals of the blockchain.

  3. Initial Stages of Ecosystem Development: In the early phases of any blockchain ecosystem, more centralized control can help steer the project towards stability and maturity. As the ecosystem matures, mechanisms can be implemented to decentralize decision-making processes, including funding.

Implications

  1. Gatekeeping: Centralized funding can lead to gatekeeping, where only projects that align with the specific visions of the grant-giving bodies are funded. This can stifle innovation outside of those parameters.

  2. Influence Over Development: Foundations or core teams can significantly influence the direction of the blockchain’s development through funding decisions. This can potentially conflict with the decentralized ethos of blockchain technology.

  3. Dependency: Relying on central bodies for funding might create dependency, which could be problematic if the goals of the central body change or if it faces financial difficulties.

VC Funding

Venture capitalists often look for proof of partnerships with established entities and early user adoption as indicators of a startup’s viability and potential for success. These requirements can sometimes push startups into traditional frameworks and infrastructures, even within innovative fields like blockchain.

Why VCs Value Partnerships and User Adoption

  1. Risk Mitigation: Partnerships with established companies can signal that the startup has passed certain due diligence checks, reducing the perceived risk for investors.
  2. Market Access: Established partners can provide market access, credibility, and resources that might be difficult for a startup to achieve on its own.
  3. Validation: Early user adoption, even on a small scale, serves as a proof of concept that there is market demand for the startup’s offerings.

Challenges for Startups

  1. Compromise on Innovation: To secure partnerships, startups might need to align their products or services more closely with existing systems, which can sometimes water down their innovative aspects.
  2. Dependency: Relying on established networks can create dependencies that may inhibit the startup’s ability to pivot or adapt in the future.
  3. Dilution of Vision: In trying to meet investor criteria for partnerships and early adoption, startups might stray from their original vision, potentially compromising on the disruptive potential of their technology.

Even as blockchain aims to decentralize many aspects of technology and finance, the evolution of its ecosystems often requires a blend of centralized and decentralized methods, especially in nascent stages.

Execution & Delivery

When a business or organization chooses between a “for-profit model” or a “for decentralization” model, it’s not just selecting a financial framework but also defining its core values and operational principles. The choice reflects the entity’s commitment either to maximizing profits for shareholders or to spreading power and benefits more evenly across its network of participants. We are still a long way from achieving this.

Proposed Mitigation Strategy

In the rapidly evolving blockchain and cryptocurrency ecosystem, promising projects with strong technical fundamentals often struggle to gain visibility and attract resources. The space is increasingly dominated by hype-driven speculation and over-commercialization, leading to the proliferation of “meme coins” and other assets with little intrinsic value. This trend disadvantages serious developers and innovators, hindering the growth of truly transformative technologies.

To address this issue, we propose the creation of a “BlockTech500” decentralized index fund that selects and ranks blockchain assets based on rigorous technical reviews and quantitative metrics. By shifting focus to fundamental technical strengths, this index aims to surface undervalued projects, provide a more efficient allocation of capital, and align incentives between investors and the developers building the future foundations of Web3.

Key Benefits

  • Amplifying developer mindshare and attracting talent to technically meritorious projects.
  • Providing an alternative to hype-based speculation and promoting fundamental value investing.
  • Leveraging the wisdom of technical experts in a scalable, decentralized manner.
  • Enabling a community-led governance model for index curation and evolution.

Index Methodology

Quantitative Review Criteria

Definition: Let P = \{p_1, p_2, \ldots, p_n\} be the set of candidate blockchain projects. Each project p_i is evaluated across m criteria C = \{c_1, c_2, \ldots, c_m\}, which include:

Primary Criteria:

  1. Decentralization: Measured by node distribution, consensus mechanism robustness, Nakamoto coefficient. Denote as c_1(p_i).
  2. Scalability: Quantified by transaction throughput, block latency, state storage efficiency. Denote as c_2(p_i).
  3. Privacy: Assessed by the strength of privacy primitives, data obfuscation techniques, zero-knowledge proof schemes. Denote as c_3(p_i).
  4. Security: Score incorporating cryptographic primitives, attack resistance, formal verification, bug bounty programs. Denote as c_4(p_i).
  5. Innovation: Evaluates novel consensus algorithms, virtual machine architectures, smart contract languages. Denote as c_5(p_i).
  6. Censorship Resistance: Measures the cost and difficulty of censoring transactions, smart contracts, and user activity. Denote as c_6(p_i).
  7. Productivity: Quantifies developer productivity based on gas efficiency, contract deployment cost, tooling quality. Denote as c_7(p_i).

Tiebreaker Criteria:

  1. Community Engagement: Aggregated from platform usage, transaction volume, social sentiment. Denote as c_8(p_i).
  2. Adoption: Quantified by dApp ecosystem growth, institutional partnerships. Denote as c_9(p_i).
  3. Exposure: Measures brand awareness, media coverage, search interest. Denote as c_{10}(p_i).

Data Aggregation

On-chain metrics for the above criteria are computed using SQL queries on data indexed by Dune Analytics:

-- Calculate Nakamoto coefficient (stake concentration among the top 1% of addresses)
SELECT 
  1.0 - sum(power(balance / total_balance, 2)) AS nakamoto_coeff
FROM (
  SELECT 
    address,
    sum(value) AS balance 
  FROM addresses
  GROUP BY address
  ORDER BY balance DESC
  LIMIT (SELECT greatest(1, ceil(0.01 * count(distinct address))) FROM addresses)
) t, (SELECT sum(value) AS total_balance FROM addresses) t2;

Off-chain data from Github, social media, and VC deal flow are integrated to derive a holistic project view.

Scoring & Weighting

A weighted sum of criteria scores ranks each project:

S(p_i) = \sum_{j=1}^{m} w_j \cdot c_j(p_i)

with weights w_j satisfying \sum_j w_j = 1 and higher weight on primary criteria.
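
As a toy example of the weighted scoring (the criterion values and weights below are placeholders, not proposed numbers):

# Hypothetical weights for the seven primary criteria; they sum to 1.0
weights = {"decentralization": 0.20, "scalability": 0.20, "privacy": 0.15,
           "security": 0.20, "innovation": 0.10, "censorship_resistance": 0.10,
           "productivity": 0.05}

# Hypothetical normalized scores c_j(p_i) in [0, 1] for a single project p_i
scores = {"decentralization": 0.80, "scalability": 0.60, "privacy": 0.70,
          "security": 0.90, "innovation": 0.50, "censorship_resistance": 0.75,
          "productivity": 0.60}

S = sum(w * scores[c] for c, w in weights.items())
print(f"S(p_i) = {S:.3f}")  # weighted sum S(p_i) used for ranking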

Index Rebalancing via Versus Battles

Reputation and Rewards

Voters:

  • Users who vote correctly (i.e., their vote aligns with the final outcome) see an increase in their reputation score, while users who vote incorrectly lose reputation.

  • The magnitude of the reputation change for voters is constant, regardless of the number of ranks skipped in the challenge.

  • Voters’ rewards are determined strictly by their voting reputation weights, with higher reputation scores leading to higher rewards for correct votes.

Proposers:

  • If a proposer initiates a challenge and the challenger token wins, the proposer’s reputation weight increases by a fixed amount, regardless of the number of ranks skipped. The proposer’s reward is calculated as follows:

  • for each rank skipped, the proposer receives an additional 1x their initial bet (e.g., if the proposer challenged a token one rank above and won, their reward would be 2x their initial bet; if they challenged a token two ranks above and won, their reward would be 3x their initial bet).

  • If the proposer initiates a challenge and the higher-ranked token maintains its position, the proposer loses their bet and also loses reputation weight. The reputation weight loss for proposers is 1x for each rank skipped in the challenge.

Workflow Example

Consider the following example of a proposer-initiated versus match:

Voters:

  • Users who voted correctly (i.e., in alignment with the final outcome) see an increase in their reputation score.

  • The magnitude of this increase is constant, regardless of the number of ranks skipped in the challenge. Users with higher reputation scores receive higher rewards for their correct votes. Users who voted incorrectly lose reputation.

Proposer:

  • If the challenger token t_C wins, the proposer’s reputation weight increases by a fixed amount, and their reward is calculated as follows:

  • they receive their initial bet of 1000 tokens plus an additional 1000 tokens for skipping one rank, resulting in a total reward of 2000 tokens (2x their initial bet).

  • If the higher-ranked token t_A maintains its position, the proposer loses their bet of 1000 tokens and 1x reputation weight, as the challenged token was one rank above the challenger.
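
A minimal Python sketch of the proposer payout and reputation rules described above, reproducing the 1000-token example. The fixed reputation increment is a placeholder, since the proposal does not specify its value.

# Sketch of the proposer payout/reputation rules from the versus-battle design.
# FIXED_REP_GAIN is a hypothetical placeholder; the proposal does not fix its value.

FIXED_REP_GAIN = 1.0

def settle_proposer(bet, ranks_skipped, challenger_won):
    """Return (payout, reputation_delta) for the proposer.

    On a win: payout = bet plus 1x the bet per rank skipped (2x total for one
    rank, 3x for two, ...); reputation rises by a fixed amount regardless of
    ranks skipped. On a loss: the bet is forfeited and reputation drops by 1x
    per rank skipped.
    """
    if challenger_won:
        return bet * (1 + ranks_skipped), FIXED_REP_GAIN
    return -bet, -1.0 * ranks_skipped

# Worked example from the text: 1000-token bet, challenging one rank above.
print(settle_proposer(1000, 1, challenger_won=True))   # (2000, 1.0)
print(settle_proposer(1000, 1, challenger_won=False))  # (-1000, -1.0)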

Governance & Incentives

BlockTech500 is governed by a DAO that votes on disputed versus results, proposals, and criteria adjustments.

Initial Index Composition and Asset Addition Process

The initial composition of the BlockTech500 index will be determined through a rigorous, multi-stage process designed to identify the most technically meritorious blockchain projects at the time of launch. Let \mathcal{U} denote the universe of all eligible blockchain assets, and let \mathcal{I}_0 \subseteq \mathcal{U} represent the initial index composition.

As the blockchain ecosystem evolves over time, it will be necessary to periodically update the index composition to reflect the emergence of new technically meritorious projects. To this end, we propose the following asset addition process.

This two-part methodology, consisting of a rigorous initial selection process and an ongoing asset addition/removal mechanism, is designed to maintain the BlockTech500 index’s compositional legitimacy over time. By subjecting all assets, both initial and subsequent, to the same comprehensive technical evaluation process, we ensure that the index remains a true reflection of the most meritorious projects in the evolving blockchain landscape.

Moreover, the use of a diverse panel of independent experts (DAO Governance Delegates), together with a transparent scoring system, helps to mitigate potential biases and conflicts of interest in the index construction process. Delegates must be nominated, win enough user votes to place among the top nominees, and then stake X tokens to take their seat. The inclusion of a deliberative weight-setting process allows for the incorporation of qualitative expert judgment while still maintaining a high degree of objectivity and repeatability. Once seated, delegates vote among themselves to produce dispute resolutions. Delegates also carry reputation scores, although these do not weight their voting power; delegates who consistently oppose the majority vote risk being replaced for underperformance at each quarterly review, nomination, and election cycle, with the lowest performers replaced by new nominees if the voters lean that way.

Ultimately, while no index construction methodology is perfect, we believe that the proposed approach strikes an appropriate balance between rigor, objectivity, and adaptability. By carefully managing the initial index composition and instituting a robust asset addition process, the BlockTech500 index can serve as a legitimate and enduring benchmark for technical merit in the dynamic world of blockchain technology.

Comparisons


Messari vs. BlockTech 500 Index

  1. Decentralization:

    • Messari: Provides comprehensive research and data analysis on various blockchain projects, including decentralization metrics. Messari evaluates node distribution, consensus mechanisms, and other decentralization factors to assess the overall robustness and security of blockchain networks.

    • BlockTech 500 Index: Evaluates projects on decentralization by examining node distribution, consensus mechanism robustness, and the Nakamoto coefficient. This ensures that projects are judged on their true decentralization and security merits, similar to Messari’s approach.

  2. Community Engagement:

    • Messari: Engages the community through detailed research reports, market analysis, and data dashboards. While it provides extensive insights and allows user interactions, it does not incorporate direct community voting for project evaluations.

    • BlockTech 500 Index: Incorporates versus voting challenges where the community actively participates in ranking decisions, increasing engagement and ensuring the community’s voice plays a crucial role in project evaluations. This gamified approach can drive higher community involvement compared to Messari.

  3. Indexing Mechanisms:

    • Messari: Focuses on providing in-depth research, market data, and analysis for blockchain projects. Messari offers tools and resources for tracking project metrics, market performance, and industry trends, but it does not maintain a dedicated index ranking system based purely on technical evaluations.

    • BlockTech 500 Index: Provides a structured, transparent ranking system that evaluates projects based on a wide range of technical criteria, helping highlight technically strong projects and providing them with the necessary visibility. The index aims to shift focus to fundamental technical strengths, offering an alternative to hype-driven market rankings.

By focusing on core technical attributes such as decentralization, scalability, privacy, and security, the BlockTech 500 Index can highlight projects that are technically robust and innovative, similar to how Messari provides detailed insights but with an added layer of community-driven evaluation and gamified engagement.

The Graph Network

1. Decentralization:

  • The Graph: Uses a network of nodes to index data from multiple blockchains, relying on economic incentives through the Graph Token (GRT) to maintain decentralization. Indexers stake GRT to participate, and curators signal high-quality subgraphs by staking GRT.

  • BlockTech 500 Index: Emphasizes decentralization through comprehensive technical evaluations, including scalability, privacy, security, innovation, censorship resistance, and productivity.

2. Community Engagement:

  • The Graph: Community engagement is driven through the roles of curators, indexers, and delegators. Each role is incentivized to participate and ensure the quality of the indexed data. Curators earn rewards for identifying valuable subgraphs.

  • BlockTech 500 Index: Adds an interactive element to community participation with versus voting challenges. This direct involvement can drive more engagement and a sense of ownership among community members.

3. Indexing Mechanisms:

  • The Graph: Provides a decentralized indexing and querying protocol for blockchain data, using GraphQL to make the data easily accessible. Indexers stake tokens to ensure data reliability, and the system uses economic incentives to maintain integrity.

  • BlockTech 500 Index: While not a data indexing protocol, it offers a comprehensive ranking system based on technical evaluations, aiming to provide visibility and credibility to technically strong projects, which might otherwise be overlooked due to a lack of community engagement or exposure.


Currently there are no cryptocurrency ranking and review dapps or tools that offer this level of gamification and precision, where high-quality, decentralized, community-level insights are incentivized through an approach as exact as head-to-head comparisons broken down further into individual categories.


Advantages & Benefits

Increasing Visibility and Credibility

  1. Highlighting Technical Merit: By focusing on core technical attributes such as decentralization, scalability, privacy, and security, the index brings attention to projects that are technically robust and innovative, even if they have not yet built a large community or gained significant exposure.

    • Case Study: Projects like Algorand and Polkadot, which are technically advanced, have benefitted from increased visibility through rankings and technical evaluations despite initially having smaller communities compared to giants like Ethereum.
  2. Attracting Funding and Partnerships: Being featured in a prestigious, technically-focused index like the BlockTech 500 can attract investors and partnerships. Investors often look for technically sound projects that have the potential for growth. This can provide the necessary capital for these projects to further develop their community and market presence.

    • Example: Technical excellence often attracts venture capital interest. For example, projects that focus on innovative consensus algorithms or novel scalability solutions can secure funding from tech-savvy investors looking for the next breakthrough.

Facilitating Community Growth

  1. Building Trust: Inclusion in the BlockTech 500 Index can serve as a stamp of approval for technical quality, helping to build trust among potential users and developers. This can drive community engagement as more individuals and organizations feel confident in participating.

    • Impact: Trust can lead to increased participation from developers and users, which is crucial for community growth. Projects with high technical ratings can see a surge in developer interest, leading to more contributions and improvements.
  2. Leveraging Exposure: The exposure from being listed in the BlockTech 500 can help technically strong projects that lack marketing prowess or community engagement to gain visibility. This can lead to organic growth as more people become aware of the project’s potential and start contributing to or using the technology.

    • Example: Innovative blockchain projects that receive media coverage and analyst attention due to their technical merits can experience a significant increase in community interest and engagement.

Addressing the Innovation Gap

  1. Encouraging Innovation: By recognizing and rewarding technical innovation, the index encourages projects to focus on developing new technologies and solutions. This can lead to a healthier blockchain ecosystem with a variety of robust, innovative projects.

    • Outcome: Projects that might have otherwise remained unnoticed due to their lack of marketing can gain the spotlight, promoting a diverse range of solutions and technologies in the blockchain space.
  2. Support for Undervalued Projects: Projects that are technically sound but undervalued in the market due to lack of exposure can benefit from the structured and transparent evaluation provided by the BlockTech 500 Index. This can help level the playing field, ensuring that technical merit is recognized and rewarded.

    • Benefit: This helps ensure that good technology does not go unnoticed and that resources are allocated more efficiently within the blockchain ecosystem.

In conclusion, the BlockTech500 Index can significantly help projects that are strong in innovation and technical aspects but lack in community and other tiebreaker attributes. By providing these projects with increased visibility and credibility, the index can attract funding and partnerships, build community trust, and ultimately ensure that technical excellence is recognized and rewarded.

3 posts - 1 participant

Read full topic

Economics Integrating Governance Tokens and Memecoins: A Dual-Token System for Enhanced Blockchain Efficiency and Stability

Published: Jun 01, 2024

View in forum →Remove

TL;DR

Background

In the evolving landscape of blockchain technology, different types of tokens serve distinct purposes, creating unique economic dynamics and challenges. Governance tokens and memecoins are two prominent types of tokens with contrasting roles:

  1. Governance Tokens: These tokens are typically used for voting, decision-making, and protocol governance within a blockchain network. They hold significant importance in controlling and influencing the network’s future direction. However, their critical role often results in low transaction frequency, as holders prefer to retain them for strategic governance purposes rather than regular transactions. This low velocity can lead to decreased liquidity and underutilization of their potential as a medium of exchange.

  2. Memecoins: Designed for high-frequency transactions, memecoins are intended to be used as everyday transactional currency. They offer fast transaction times and are widely adopted for various payment activities. Despite their high velocity and liquidity, memecoins usually do not contribute to the network’s security or governance, leading to a separation between transactional utility and network governance.

This dichotomy creates a fragmented token economy where neither type of token fully exploits its potential, affecting the overall efficiency and stability of the blockchain ecosystem. Governance tokens remain largely inactive in everyday commerce, while memecoins, although frequently used, do not enhance the security or governance of the network.

Problem

The key challenges within blockchain systems employing governance tokens and memecoins (e.g., DOGE, or Dogecoin) revolve around the distinct usage intentions and transaction demands for each type of token. Governance tokens, typically used for voting and protocol governance, often lack incentives for regular transactions due to their importance in network control. This can lead to decreased liquidity and underutilization of the tokens’ potential as a medium of exchange. Conversely, memecoins are designed for high-frequency transactions but might not contribute significantly to network security or governance. This dichotomy results in a fragmented token economy where neither type fully exploits its potential, thus affecting the overall efficiency and stability of the blockchain ecosystem.

Proposal

To address these issues, an innovative system design is proposed, dubbed “Turtle Time Tokens” (TTTs), which have intentionally delayed transaction confirmation times to harmonize the economic roles of governance tokens and memecoins. By intentionally extending the confirmation time for governance token transactions, their velocity can be decreased, promoting value stability and encouraging use in strategic, high-value transactions or staking for governance purposes. Conversely, memecoins can serve as the primary medium for everyday transactions due to their faster confirmations, enhancing the blockchain’s usability and liquidity.

This dual-token system aims to create a balanced distribution of token utility across different network activities, fostering a robust economic environment that supports both governance and rapid transaction needs. The decoupling of governance and transactional roles enhances the efficiency and stability of the blockchain ecosystem, ensuring that each token type serves its intended purpose effectively.

Breakdown

The proposed solution aims to integrate the economic roles of governance tokens and memecoins more cohesively. By altering transaction confirmation times and restricting the staking capabilities of the memecoin chosen as the native payment token, it enhances the distinct utility of each token type, promoting a balanced and efficient blockchain ecosystem.

Governance Tokens:

By intentionally extending the confirmation time for governance token transactions, we can decrease their velocity, thus stabilizing their value and encouraging their use in more deliberate, high-value transactions or staking for governance purposes. This design leverages longer confirmation times to disincentivize frivolous use and preserve the tokens for strategic decisions and network support. Governance tokens, therefore, become more valuable for long-term network governance and security.

Wrapped Memecoins:

Wrapped Dogecoin (wDOGE) and similar memecoins can serve as the primary medium for everyday transactions due to their faster confirmations, enhancing the blockchain’s usability and liquidity. To maintain this liquidity and usability, wrapped memecoins would be blocked from staking capabilities. This restriction ensures that memecoins remain liquid and readily available for transactions, preventing them from being locked up in staking contracts. By keeping memecoins out of staking, the focus for staking and network security remains on governance tokens.

Benefits of the Dual-Token System

  1. Balanced Token Utility:

    • Governance Tokens: Used for high-value transactions, governance, and staking. Extended confirmation times encourage strategic use and value stability.
    • Wrapped Memecoins: Used for everyday transactions with fast confirmations, ensuring high liquidity and usability. Blocking staking capabilities maintains their transactional focus.
  2. Economic Stability:

    • Governance Tokens: Reduced velocity and increased stability due to longer confirmation times.
    • Wrapped Memecoins: Enhanced value stability as their valuation is decoupled from the fluctuating underlying value of the protocol’s governance aspects.
  3. Security and Compensation:

    • Transaction Fees: The application of gas fees on memecoin transactions ensures that network validators are adequately compensated, maintaining network security and operational integrity without the need for memecoins to contribute directly to governance.
    • Staking Focus: Governance tokens are the primary focus for staking, enhancing network security and governance efficiency.

Implementation

  1. Governance Token Design:

    • Implement extended confirmation times for governance token transactions.
    • Design smart contracts for staking governance tokens to secure the network and participate in protocol governance.
  2. Wrapped Memecoin Design:

    • Ensure fast confirmation times for memecoin transactions.
    • Implement smart contracts that block staking capabilities for wrapped memecoins, ensuring their liquidity and transactional focus.
  3. Economic and Security Models:

    • Develop models to calculate optimal confirmation times and transaction fees to balance liquidity, stability, and network security.
    • Create incentives for validators to process both governance token and memecoin transactions, ensuring robust network security.

By integrating these design choices, the proposed solution promotes a more balanced distribution of token utility across different network activities, fostering a robust economic environment that supports both governance and rapid transaction needs. This dual-token system enhances the overall efficiency and stability of the blockchain ecosystem, ensuring that each token type serves its intended purpose effectively.

Utility Expansion and Demand

U = \sum_{i=1}^{n} u_i

where u_i represents a specific use case the token enables. All else equal, a higher U should correspond to increased demand D, which we can model simply as:

D = \alpha U - \beta P + \epsilon

where P is the token price, \alpha, \beta are coefficients representing the sensitivity of demand to utility and price respectively, and \epsilon captures all other demand factors. Therefore, the increase in utility from permitting T_g transactions should boost demand for governance tokens by \alpha \Delta U.
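
For concreteness, a small Python sketch of the linear demand model D = \alpha U - \beta P + \epsilon; the coefficients and inputs are purely illustrative assumptions.

# Illustrative parameters for the linear demand model; values are assumptions.
alpha, beta = 0.8, 0.3   # sensitivity of demand to utility and price
epsilon = 0.0            # all other demand factors, ignored here

def demand(utility, price):
    return alpha * utility - beta * price + epsilon

# Adding governance-token transactability raises aggregate utility U by delta_u,
# boosting demand by alpha * delta_u as in the text.
delta_u = 2.0
print(demand(10.0 + delta_u, 5.0) - demand(10.0, 5.0))  # approximately alpha * delta_u = 1.6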

Value Stability and Velocity Impacts

Intentionally long confirmation times \tau_g for T_g transactions create a natural disincentive to use governance tokens in everyday payments and commerce compared to the memecoin with \tau_m confirmations. This should reduce the velocity V_g of governance tokens. From the equation of exchange:

MV = PQ

where M is money supply, V is velocity, P is price level, and Q is real economic output transacted in the token. Reduced V_g puts upward pressure on P_g. Compared to memecoins used for payments, this dynamic could make governance token prices P_g more stable.

We can quantify the velocity difference as follows. Let \lambda_g be the fraction of M_g governance tokens transacted per unit time and \lambda_m be the equivalent for M_m memecoins. Assuming the confirmation time drives usage, the expected velocities are:

V_g = \frac{\lambda_g}{\tau_g}, \quad V_m = \frac{\lambda_m}{\tau_m}

So the velocity ratio simplifies to:

\frac{V_g}{V_m} = \frac{\lambda_g \tau_m}{\lambda_m \tau_g}

If \lambda_g \approx \lambda_m (similar transaction demand) but \tau_g \gg \tau_m, then V_g \ll V_m. Memecoins should have much higher velocity and be the dominant medium of exchange.
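
A short numerical sketch of the velocity ratio V_g / V_m = \lambda_g \tau_m / (\lambda_m \tau_g), with assumed confirmation times and transaction demand.

# Assumed values: similar per-token transaction demand, very different confirmation times.
lambda_g = lambda_m = 0.05        # fraction of each supply transacted per unit time
tau_g, tau_m = 600.0, 2.0         # governance vs memecoin confirmation time (seconds)

v_g = lambda_g / tau_g
v_m = lambda_m / tau_m
print(v_g / v_m)                  # == (lambda_g * tau_m) / (lambda_m * tau_g) = 1/300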

Fee Revenue and Security Budget

However, the governance token transactions T_g, though slower, still generate fee revenue for validators. If T_g transactions are a fraction \phi of all transactions and the average fee is \bar{f}, then the expected security budget from governance token usage is:

B_g = \phi \bar{f} \lambda_g M_g

Compared to a system with only memecoins, this design increases the total security budget by B_g, without inflating V_m and putting downward pressure on P_m. The security ratio between the two tokens is:

\frac{B_g}{B_m} = \frac{\phi \lambda_g M_g}{(1 - \phi) \lambda_m M_m}

Even if \phi and M_g are small compared to memecoins, this can still be a significant security contribution, especially if governance tokens have a higher average transaction size and therefore higher \bar{f}.
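
A minimal sketch of the security-budget contribution B_g = \phi \bar{f} \lambda_g M_g and the ratio B_g / B_m. All of the numbers below are illustrative assumptions, including the use of separate average fees per token type.

# Illustrative numbers only; none of these values come from the proposal.
phi = 0.05                       # fraction of all transactions that are governance-token transfers
f_bar_g, f_bar_m = 0.50, 0.01    # assumed average fee per transaction (governance vs memecoin)
lambda_g, lambda_m = 0.05, 0.5   # fraction of each supply transacted per unit time
m_g, m_m = 1e7, 1e9              # circulating supplies

b_g = phi * f_bar_g * lambda_g * m_g
b_m = (1 - phi) * f_bar_m * lambda_m * m_m
print(b_g, b_m, b_g / b_m)       # governance-token contribution and security ratio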

User Experience and Economic Incentives

The contrasting confirmation times \tau_g and \tau_m create a natural UX and economic incentive for users to treat the two tokens differently in their activities:

Algorithm 1 User Token Selection

v ← value of transaction
ut ← user’s time preference
if v is low and ut prefers fast settlement then
    Use memecoin
else
    if staking or network utility dominates then
        Use governance token
    else
        Use either based on other factors
    end if
end if
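
The same selection logic, rendered as a minimal Python sketch; the threshold and flags are illustrative assumptions, since the proposal does not fix them.

def select_token(value, prefers_fast, staking_or_utility_dominates,
                 low_value_threshold=50.0):
    """Pick which token to use for a transaction, following Algorithm 1.

    All thresholds and flags here are illustrative; the proposal does not fix them.
    """
    if value < low_value_threshold and prefers_fast:
        return "memecoin"
    if staking_or_utility_dominates:
        return "governance_token"
    return "either"

print(select_token(5.0, prefers_fast=True, staking_or_utility_dominates=False))   # memecoin
print(select_token(500.0, prefers_fast=False, staking_or_utility_dominates=True)) # governance_token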

Users are incentivized to use memecoins for everyday small transactions given the faster \tau_m confirmations. Conversely, governance staking and protocol utility create a strong incentive to hold governance tokens despite their slower transactability.

In essence, the system economically “prices in” the opportunity cost of using scarce governance tokens in transactions through the longer confirmation times \tau_g. This encourages efficient allocation between staking and transacting. At the margin, a user should only transact a governance token if the economic gain exceeds the time value and staking opportunity cost.

Memecoin Stability and Single-Purpose Design

As mentioned, a key factor in a memecoin’s potential stability is its focused, single-purpose design optimized for payments. Unlike governance tokens or more complex multi-purpose tokens, a memecoin like Dogecoin aims to excel at one core function: facilitating fast, cheap, and reliable transactions.

This specialization has several potential benefits for stability:

  1. Reduced Exposure to Protocol-Level Risks: By being used across multiple networks, a memecoin can diversify its risk exposure. Even if a particular network experiences issues or a decline in the perceived value of its governance token, the memecoin’s value may remain more stable given its usage and acceptance on other chains.
  2. Network Effect and Lindy Effect: As a memecoin gains adoption as a payment method across multiple platforms, it can benefit from a strong network effect. The more users and merchants accept it, the more valuable and stable it becomes. Over time, this can create a self-reinforcing cycle of stability (the Lindy effect).
  3. Decoupling from Platform Innovation: A payment-focused memecoin’s value is less strongly coupled to the technological progress and innovations of any single platform. As long as it continues to meet its core payment functionality, its value can remain relatively stable even if some networks advance faster than others.

We can formalize this concept of a memecoin’s “stability advantage” SA over a platform-specific governance token as follows:

SA = \frac{\sigma_g}{\sigma_m}

where \sigma_g is the price volatility of the governance token and \sigma_m is the price volatility of the memecoin. All else equal, we would expect SA > 1 for a widely adopted memecoin used across many platforms.

Quantifying Cross-Platform Adoption

The stability advantage of a cross-platform memecoin depends heavily on the degree and distribution of its adoption across networks. We can quantify this “adoption spread” A_s as:

A_s = 1 - \sum_{i=1}^{N} \left(\frac{T_i}{T}\right)^2

where N is the total number of networks the memecoin is used on, T_i is the transaction volume on the i-th network, and T is the total transaction volume across all networks.

A_s ranges from 0 to 1 - \frac{1}{N}, with higher values indicating more evenly spread adoption. An A_s close to 0 means the memecoin is heavily dominated by usage on a single network, while a value close to 1 - \frac{1}{N} indicates relatively equal adoption across all N networks.

Putting it together, we can propose a simple model for the stability S of a cross-platform memecoin:

S = k \cdot A_s \cdot SA

where k is a constant reflecting other factors like overall cryptocurrency market conditions. This suggests that a memecoin’s stability is driven by both its inherent stability advantage over platform-specific tokens and the breadth of its adoption across platforms.
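
A compact Python sketch of the adoption-spread and stability calculations above, using made-up per-network volumes and volatilities rather than real data.

# Hypothetical per-network transaction volumes and volatilities; not real data.
volumes = [400.0, 300.0, 200.0, 100.0]   # T_i for N = 4 networks
sigma_g, sigma_m = 0.8, 0.3              # governance vs memecoin price volatility
k = 1.0                                  # catch-all constant for market conditions

total = sum(volumes)
a_s = 1.0 - sum((t / total) ** 2 for t in volumes)   # adoption spread, in [0, 1 - 1/N]
sa = sigma_g / sigma_m                               # stability advantage
s = k * a_s * sa                                     # stability score S = k * A_s * SA

print(round(a_s, 3), round(sa, 3), round(s, 3))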

Factors Contributing to Stability

  1. Increased Liquidity:

    • Widespread Use: As wDOGE is adopted across multiple blockchains, its liquidity would increase. High liquidity typically reduces price volatility, as large trades can be absorbed without significantly affecting the price.
    • Liquidity Pools: Enhanced liquidity pools and decentralized exchanges on multiple blockchains would facilitate seamless conversion and trading, further stabilizing the price.
  2. Broad Network Effect:

    • Adoption and Usage: The more widely wDOGE is used, the stronger its network effect becomes. As more merchants, users, and platforms accept and use wDOGE for payments, its utility and trust increase, contributing to price stability.
    • Lindy Effect: The longer wDOGE is used successfully, the more likely it is to continue being used, enhancing its reputation and perceived stability.
  3. Diversified Risk:

    • Cross-Platform Stability: By being used on multiple blockchains, wDOGE’s value is less tied to the performance or issues of any single blockchain. This diversification reduces the risk of value fluctuations due to problems on one specific network.
    • Decoupling from Governance: Since wDOGE is not used for governance or staking, its value is decoupled from the complexities and risks associated with network governance decisions and staking economics.

Economic Model for Stability

  1. Supply and Demand Dynamics:

    • Stable Demand: As wDOGE becomes a preferred payment token across multiple blockchains, the consistent demand for transactions helps stabilize its value.
    • Predictable Supply: Assuming the supply of wrapped Dogecoin is managed effectively (e.g., through minting and burning mechanisms that maintain a 1:1 peg with the original Dogecoin), the predictability in supply further contributes to price stability.
  2. Velocity of Money:

    • High Velocity: wDOGE’s use in frequent, everyday transactions ensures high velocity. According to the equation of exchange MV = PQ (where M is money supply, V is velocity, P is price level, and Q is real output), high velocity helps maintain a stable P if Q (transaction volume) is also high and stable.
    • Transactional Stability: As wDOGE is used primarily for payments, the consistent transaction volume across multiple blockchains contributes to its price stability.

Benefits of Stabilized wDOGE for Payments

  1. Reliable Medium of Exchange:

    • Price Stability: With reduced volatility, wDOGE becomes a more reliable medium of exchange, encouraging its use in everyday transactions without concerns over significant value changes.
    • Merchant Adoption: Merchants are more likely to accept a stable token for payments, reducing the risk of loss due to price fluctuations.
  2. User Confidence:

    • Trust and Acceptance: Stability fosters trust among users, making them more comfortable holding and transacting with wDOGE.
    • Wider Adoption: As user confidence grows, wider adoption follows, creating a positive feedback loop that further stabilizes the token’s value.
  3. Integration with Financial Systems:

    • Easier Integration: A stable payment token can be more easily integrated with traditional financial systems, such as point-of-sale systems, online payment gateways, and financial services.
    • Regulatory Compliance: Stability can also make it easier to comply with regulatory requirements, as regulators often favor less volatile assets for payment purposes.

Widespread adoption of e.g. wDOGE across multiple blockchains has the potential to significantly stabilize its value, making it an even more effective and reliable payment token. The increased liquidity, broad network effect, diversified risk, and stable supply and demand dynamics contribute to this stability. A stable Memecoin enhances user confidence, encourages merchant adoption, and facilitates integration with traditional financial systems, promoting its use as a preferred medium of exchange in the cryptocurrency ecosystem.

Advantages

Allowing fungible governance tokens to be spent but with slower confirmations than memecoins is a powerful tokenomic design pattern. It creates a natural separation in token utility and incentivizes staking while still harnessing transactional usage for security. More broadly, it illustrates how subtle technical choices like confirmation times can be deeply interlinked with a cryptocurrency’s economic structure and incentives.

Applications

Improving our understanding of these tokenomic principles can help inform the rational engineering of future blockchain networks. Despite these challenges, the potential of a widely adopted, cross-platform memecoin to achieve relative stability compared to platform-specific tokens remains compelling. As decentralized networks continue to evolve, observing the dynamics of payment-focused memecoins like Dogecoin can provide valuable insights into the future of cryptocurrency stability and adoption.

Comparisons

The proposed dual-token system with differentiated confirmation times, using TTTs for governance tokens and memecoins like DOGE for payments, is a novel approach to harmonizing the economic roles of different token types in blockchain networks. To better understand its potential impact and significance, it’s essential to compare it with other leading concepts and approaches in this area.

  1. Dual-Token Economies: The idea of using multiple token types within a single blockchain ecosystem is not new. Several projects, such as Decred and Cosmos, have implemented dual-token economies to separate the roles of staking, governance, and transactions. However, the proposed system takes this concept further by introducing differentiated confirmation times to optimize the utility and stability of each token type.

  2. Stablecoin Designs: Stablecoins, such as Tether (USDT) and USD Coin (USDC), have gained popularity as a means to mitigate the volatility of cryptocurrencies. These tokens are typically pegged to a stable asset, like the US dollar, to maintain a consistent value. While the proposed system’s memecoin (e.g., wDOGE) is not explicitly designed as a stablecoin, its focus on payments and stability shares some similarities with stablecoin concepts. The key difference lies in the approach to achieving stability - through widespread adoption and decoupling from governance, rather than an explicit peg.

  3. Velocity Reduction Mechanisms: Some projects have explored mechanisms to reduce token velocity and encourage holding, such as transaction fees, demurrage, or time-locked staking. The proposed system’s extended confirmation times for governance tokens can be seen as a novel velocity reduction mechanism that aligns with the token’s intended use case. This approach is more organically tied to the network’s economic incentives compared to external mechanisms.

  4. Interoperability and Cross-Chain Adoption: The proposed system emphasizes the importance of widespread adoption of memecoins like wDOGE across multiple blockchains. This aligns with the growing trend of interoperability and cross-chain solutions in the blockchain space. Projects like Polkadot, Cosmos, and Chainlink are working on enabling seamless communication and value transfer between different blockchains. The proposed system can leverage these advancements to facilitate the widespread adoption and stability of memecoins.

  5. Economic Incentive Design: The field of tokenomics focuses on designing incentive structures that align the behavior of network participants with the overall goals of the system. The proposed dual-token system is a prime example of economic incentive design, as it creates a clear separation of roles and incentives for governance token holders and memecoin users. This approach is in line with the ongoing research and experimentation in the field of tokenomics.

While the proposed system introduces novel concepts, such as differentiated confirmation times and the specific focus on memecoins like wDOGE, it also builds upon and complements existing approaches in the blockchain space. The emphasis on stability, interoperability, and incentive alignment is consistent with the general direction of the industry.

Conclusion

The design choices for blockchain governance and payment tokens have far-reaching implications for network dynamics, user behavior, and overall economic stability. By leveraging different confirmation times for governance and payment tokens, we can create a balanced ecosystem that encourages efficient token usage, enhances security, and promotes long-term stability. Further research and comparative analysis will continue to refine these models and inform the development of robust and scalable blockchain systems.

3 posts - 1 participant

Read full topic

Layer 2 Strawmanning Based Preconfirmations

Published: May 31, 2024

View in forum →Remove

By Lin Oshitani (Nethermind Research). Thanks to Conor and Aikaterini for the detailed discussions and review. Thanks also to Ahmad and Brecht for their review and comments. This work was partly funded by Taiko. The views expressed are my own and do not necessarily reflect those of the reviewers or Taiko.

Introduction

Based sequencing provides a credibly neutral shared sequencer layer that enables composability among rollups and between rollups and L1. Additionally, based preconfirmations provide fast preconfirmation services on top of based sequencing, significantly enhancing the user experience as a result. However, compared to non-preconfirming based sequencing, naive implementations of based preconfirmations introduce negative externalities that require thoughtful consideration. Although issues have been highlighted in works such as this Bell Curve episode (mainly in the context of non-based sequencers) and this write-up from Chainbound, we believe the topic remains largely underexplored.

In this post, we will analyze a simple “strawman” preconfirmation setup, identify its shortcomings, and shed light on the challenges that future solutions must address.

The Strawman

The strawman based preconfirmation setup is as follows:

  • The L1 proposer may or may not delegate the preconf right to some external entity.
    • We use the term preconfer to describe the entity providing preconfirmations, which can either be the L1 proposer itself or an entity delegated from the L1 proposer.
  • The preconfer handles preconfirmations by providing two endpoints:
    • Request endpoint: For users and searchers to request preconfirmations.
    • Promise endpoint: For streaming the preconf results to the public. It enables the preconfirmation requester to promptly receive the result while allowing other users to stay updated on the latest preconfirmed state before initiating their own preconfirmations.
  • Users will include a “preconf tip” to the requests to incentivize preconfers to provide preconfirmations.
  • The preconfer preconfirms transactions primarily on a first-come-first-serve basis.

Based Preconf (2) (1)

Furthermore, we focus solely on execution promises, because they are significantly more complex to design and implement than inclusion promises. Execution promises guarantee the exact sequence and state of a transaction. In contrast, inclusion promises only ensure that a transaction will be included without specifying the conditions of its inclusion.

The Problems

We will cover six problems with the strawman based preconfirmation design:

  • Problem 1: Latency races
  • Problem 2: Congestion
  • Problem 3: Tip pricing
  • Problem 4: Fair exchange
  • Problem 5: Liveness
  • Problem 6: Early auctions

Problem 1: Latency races

Whoever has the lowest latency to the preconfer gains all the MEV back-running profit. This is because they can:

  1. Be the first to obtain the latest state of the chain through the promise endpoint and
  2. be the first to insert their back-run transaction via the request endpoint.

This structure has historically incentivized latency races, where network participants strive to minimize latency to the limit. Eventually, this would lead to searchers choosing to colocate or vertically integrate with preconfers, which significantly risks the network’s geographical decentralization.

Such latency races have been a long-lasting concern for existing centralized sequencers. For example, the Arbitrum team has explored the idea of implementing Proof of Work (PoW) where they grant fast connections to participants who succeed in PoW while imposing artificial delays on participants who do not. However, this proposal encountered backlash from the community due to the substantial economic waste introduced.

Problem 2: Congestion

Given that L2 transaction fees are typically low, searchers may choose to avoid latency races altogether and instead flood the rollup with probabilistic arbitrage attempts. This can be done by spamming an arbitrage contract that attempts an arbitrage and rolls back if it fails. In Solana, where fees are extremely low, it has been reported that validators waste ~58% of their time processing such failed arbitrage transactions.

This would result in a situation resembling pre-Flashbots priority gas auctions, where the competition among searchers congests the block space with failed arbitrage transactions, ultimately driving up gas fees for regular users.

Problem 3: Tip pricing

The preconfer must solve an online MEV problem, where they decide whether to preconf a transaction with no/limited visibility to other transactions that compete for the same position. For example, suppose the preconfer receives a preconf request with 1 ETH tip. How would the preconfer know that the tip is appropriately priced? Should they accept the tip and preconf immediately or wait for a while in case there is another request with a higher tip?

Problem 4: Fair exchange

The preconfer can withhold preconf promises and not return them to the user in a timely manner. Note that preconfers are incentivized to withhold preconf promises as much as possible to maximize their opportunity to reorder and insert transactions, thereby increasing their MEV.

As an extreme example, the preconfer could withhold all promises during its window (12 sec or more), reorder and inject txs as it wishes, and only publish the promises when the final tx batch is submitted to L1.

Problem 5: Liveness

For the case when the proposer delegates the preconfirming rights to an external preconfer, the liveness and censorship resistance of the preconfirmations will rely solely on this single external entity for the duration of the preconfer’s slot(s).

Problem 6: Early auctions

Any system with L1 composable preconfirmations (i.e., preconfirmation of L1 transactions) will likely result in preconfer-builder integration, where preconfer and builder become the same entity. This is for two reasons:

  • With L1 preconfirmations, most cross-domain MEV, including CEX-DEX arbitrage, will be captured through preconfirmed transactions.
    • Considering the bulk of MEV revenue comes from CEX-DEX arbitrage, this means that most MEV revenue will be secured through preconfirmations. Consequently, the revenue from building the non-preconfirmed portion of the block will be greatly reduced.
  • Preconfed L1 transactions must be included at the top of the current block.
    • Inserting any transactions before a preconfirmed transaction could alter the state anticipated by the preconfirmation, potentially invalidating the preconfirmation guarantee. This means builders must constantly incorporate the latest preconfed transactions into their blocks, which would be extremely difficult, if not unfeasible.

Combined with preconf delegation happening ahead of the proposer’s slot, preconfer-builder integration leads us to a world where L1 proposers delegate their preconfirmation rights and block-building rights to the same external entity ahead of time for their slot.

Selecting the block builder in advance, known as early auctions, contrasts sharply with the current MEV-Boost PBS pipeline, where block builders are dynamically chosen just-in-time (JIT) within the slot through block auctions. More details comparing JIT auctions and early auctions can be found here.

The goal of based sequencing is to inherit the security of L1. However, with based preconfirmations, we risk altering the security landscape of the underlying L1 itself. Although early auctions might not be entirely detrimental (further research and experimentation are needed), they represent a fundamental shift from the current MEV-Boost builder market. Therefore, they should be introduced with great care, especially when introduced off-protocol, where control over centralization tendencies is limited.

Conclusion

We observed that naive implementations of preconfirmations can lead to various negative externalities. As with all things in blockchains, trade-offs are inevitable. However, such negative effects should be mitigated as much as possible and, when needed, introduced as a deliberate choice, not an accident.

At Nethermind, along with our collaborators, we are actively researching solutions that address the issues outlined in this document. Stay tuned for more updates!

2 posts - 2 participants

Read full topic

Networking Gossip IWANT/IHAVE Effectiveness in Ethereum's Gossipsub network

Published: May 30, 2024

View in forum →Remove

Summary & TL;DR

The ProbeLab team (https://probelab.io) is carrying out a study on the performance of Gossipsub in Ethereum’s P2P network. This post is reporting the first of a list of metrics that the team will be diving into, namely, how efficient is Gossipsub’s gossip mechanism. For the purposes of this study, we have built a tool called Hermes (GitHub - probe-lab/hermes: A Gossipsub listener and tracer.), which acts as a GossipSub listener and tracer. Hermes subscribes to all relevant pubsub topics and traces all protocol interactions. The results reported here are from a 3.5hr trace.

Study Description: The purpose of this study is to identify the ratio between the number of IHAVE messages sent and the number of IWANT messages received from our node. This should be done both in terms of overall messages, but also in terms of msgIDs. This metric will give us an overview of the effectiveness of Gossipsub’s gossip mechanism, i.e., how useful the bandwidth consumed by gossip messages really is.
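
As a simple illustration of the metric, the Python sketch below computes the two ratios (per control message and per msgID) from hypothetical trace counters; the numbers are made up, and the actual values in this report come from Hermes traces.

# Hypothetical counters for one topic over a trace window; not actual measurements.
ihave_msgs_sent, iwant_msgs_received = 12_000, 7_500          # control messages
ihave_msgids_sent, iwant_msgids_received = 900_000, 9_000     # msgIDs carried inside them

msg_ratio = ihave_msgs_sent / iwant_msgs_received
msgid_ratio = ihave_msgids_sent / iwant_msgids_received

print(f"message ratio (IHAVE sent : IWANT received) = {msg_ratio:.1f}:1")
print(f"msgID ratio   (IHAVE sent : IWANT received) = {msgid_ratio:.1f}:1")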

TL;DR: Gossipsub’s gossip mechanism, i.e., the IHAVE and IWANT message exchange, is not efficient in the Ethereum network. Message ratios between sent IHAVEs and received IWANTs can exceed 1:50 for some topics. Suggested optimisations and things to investigate to improve effectiveness are given at the end of this report.

Overall Results - Sent IHAVEs vs Received IWANT

The plots below do not differentiate between different topics; they present aggregates over all topics. The ratio of sent IHAVEs vs received IWANTs does not seem extreme (top plot), with a ratio of less than 1:2, but digging deeper into the number of msgIDs carried by those IHAVE and IWANT messages shows a different picture (middle plot). The ratio itself for all three topics is given in the third (bottom) plot, where we see that especially for the beacon_block topic the ratio is close to 1:100 and goes a lot higher at times.

Per Topic Results - Sent IHAVEs vs Received IWANT

Next, we’re diving into the ratio per topic to get a better understanding of the gossip effectiveness for each topic. We’re presenting the overall number as well as the ratio per topic. The ratio of sent IHAVEs vs received IWANTs is more extreme and reaches an average of close to 1:100 for the beacon_block topic, 1:10 for the beacon_aggregate_and_proof topic and 1:6 for the sync_committee_contribution_and_proof topic.

It is clear that there is an excess of IHAVE messages sent compared to the usefulness that these provide in terms of received IWANT messages. There’s at least a 10x bandwidth consumption that we could optimise for if we reduced the ratios especially for the beacon_block and beacon_aggregate_and_proof topics.

The beacon_aggregate_and_proofs topic sends hundreds of thousands of message_ids over the wire in a minute, with very few IWANT messages in return. The ratio of sent IHAVE msgIDs to received IWANT msgIDs stays at roughly 10:1 or higher.

Overall Results - Received IHAVE vs Sent IWANT

The situation is even more extreme for the case of received IHAVE vs sent IWANT messages in terms of overhead. We include below the overall results only, as well as the ratios per topic. We consider that the ratios are even higher here because our node is rather well-connected (it keeps connections to 250 peers) and is therefore more likely to be included in the GossipFactor fraction of peers chosen to receive gossip (i.e., IHAVEs). This in turn means that we must be receiving lots of duplicate msgIDs in those IHAVE messages. Digging into the number of duplicate messages is the subject of a different metric further down in this report.

Anomalies

Gossipsub messages should always be assigned to a particular topic, as not all peers are subscribed to all topics. Having a topic helps with correctly identifying invalid messages and avoiding overloading of peers with messages they’re not interested in.

We have consistently seen throughout the duration of the experiment both IHAVE and IWANT messages sent to our node with an empty topic. Both of these are considered anomalies, especially given that the IWANT messages we received were for msgIDs that we didn’t advertise through an IHAVE message earlier.

Digging deeper into the results, we have seen that 49 of the 55 peers from which we received messages with an empty topic were Teku nodes. We have started the following GitHub issue to surface the anomaly: Possible Bug on GossipSub implementation that makes sharing `IHAVE` control messages with empty topics · Issue #361 · libp2p/jvm-libp2p · GitHub, which has been fixed: Set topicID on outbound IHAVE and ignore inbound IHAVE for unknown topic by StefanBratanov · Pull Request #365 · libp2p/jvm-libp2p · GitHub.

Takeaways

  • The average effectiveness ratio of the gossip functionality is higher than 1:10 across topics, which is not ideal.
  • Messages that are generated less frequently (such as beacon_block topic messages) are primarily propagated through the mesh and less through gossip (IHAVE/IWANT messages), hence the higher ratios, which reach up to 1:100 for this particular topic.
  • GossipSub control messages are relevant, but we identify two different use-cases for GossipSub that don’t benefit in the same way from all these control messages:
    • Big but less frequent messages → more prone to DUPLICATED messages, but with less overhead on the IHAVE control side. The gossiping effectiveness is rather small here.
    • Small but very frequent messages → add significant overhead on the bandwidth usage as many more msg_ids are added in each IHAVE message.

Optimisation Potential

Clearly, having an effectiveness ratio of 1:10 or even less, i.e., consuming >10x more bandwidth for IHAVE/IWANT messages than actually needed, is not ideal. Three directions for improvement have been identified, although none of them has been implemented, tested, or simulated.

  1. Bloom filters: instead of sending msgIDs in IHAVE/IWANT messages, peers can send a bloom filter of the messages that they have received within the “message window history”.
  2. Adjust GossipsubHistoryGossip factor from 3 to 2: This requires some more testing, but it’s a straightforward item to consider. This parameter, set to 3 by default [link], defines how many past heartbeats we send IHAVE messages for. Advertising messages from as far back as 3 heartbeats obviously increases the number of messages with questionable return (i.e., how many IWANT messages we receive in return).
  3. Adaptive GossipFactor per topic: As per the original go implementation of Gossipsub [link], the GossipFactor affects how many peers we will emit gossip to at each heartbeat. The protocol sends gossip to GossipFactor * (total number of non-mesh peers). Making this a parameter that is adaptive to the ratio of sent IHAVE vs received IWANT messages per topic can greatly reduce the overhead seen; a sketch of this idea follows the list below.
    1. Nodes sharing lots of IHAVE messages with very few IWANT messages in return could reduce the factor (saving bandwidth).
    2. Nodes receiving a significant amount of IWANT messages through gossip could actually increase the GossipFactor accordingly to help out the rest of the network.
    3. There are further adjustments that can be made if a node detects that a big part of its messages come from IWANT messages that it sends. These could revolve around increasing the mesh size D, or rotating the peers it has in its mesh.
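
As an illustration of option 3, here is a minimal Python sketch of an adaptive per-topic GossipFactor. The logic, parameter names, and bounds are hypothetical and not part of any existing gossipsub implementation or API.

# Illustrative, self-contained sketch of an adaptive per-topic GossipFactor.
# This is NOT part of any existing gossipsub implementation or API.

BASE_GOSSIP_FACTOR = 0.25
MIN_FACTOR, MAX_FACTOR = 0.05, 0.5

def adjusted_gossip_factor(ihave_msgids_sent, iwant_msgids_received,
                           target_ratio=10.0):
    """Scale the gossip factor down when sent IHAVEs vastly outnumber
    returned IWANTs for a topic, and up when gossip is actually being used."""
    if iwant_msgids_received == 0:
        return MIN_FACTOR
    observed_ratio = ihave_msgids_sent / iwant_msgids_received
    factor = BASE_GOSSIP_FACTOR * (target_ratio / observed_ratio)
    return max(MIN_FACTOR, min(MAX_FACTOR, factor))

# Topic where gossip is rarely useful (ratio ~100:1) -> factor shrinks.
print(adjusted_gossip_factor(1_000_000, 10_000))
# Topic where gossip is heavily used (ratio ~2:1) -> factor grows (capped).
print(adjusted_gossip_factor(20_000, 10_000))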

For more details and results on Ethereum’s network head over to https://probelab.io.

4 posts - 3 participants

Read full topic

Applications DamFi: Introducing On-Chain Autonomous Funds (OAFs), Blockchain's Answer to ETFs

Published: May 29, 2024

View in forum →Remove

Abstract:
DamFi introduces a decentralized protocol facilitating On-Chain Autonomous Funds (OAFs), akin to ETFs but tailored for the DeFi ecosystem. By leveraging Zero-Knowledge Proofs (ZKPs) for enhanced computational capabilities, DamFi ensures efficient and trustless fund operations. Moreover, DamFi can integrate with external DeFi protocols like EigenLayer, and theoretically any other DeFi protocol, to maximize investment flexibility and diversification.

Motivation:
The motivation behind creating DamFi stemmed from a desire to simplify the investment process for the average person. In traditional finance, ETFs provide a straightforward way to invest in a diversified portfolio with minimal effort. However, the DeFi space lacked a comparable analogue due to the limitations of smart contract technology. With the advent of Zero-Knowledge Proofs (ZKPs), we recognized an opportunity to overcome these limitations. ZKPs enable efficient off-chain computations with proofs uploaded on-chain, allowing us to create On-Chain Autonomous Funds (OAFs) that function similarly to ETFs, making decentralized investing accessible and straightforward for everyone.

Introduction:
Current crypto investment management often requires significant user involvement and active management. DamFi addresses this by offering automated, on-chain investment funds that adapt to market conditions without requiring constant oversight, making them ideal for both novice and experienced investors.

Core Features:

  1. On-Chain Autonomous Funds (OAFs):

    • Automated rebalancing and management of crypto assets.
    • Designed to function similarly to traditional ETFs in a decentralized context.
    • Diversified investing solution for both retail and institutional investors.
  2. Zero-Knowledge Proofs (ZKPs):

    • ZKPs enhance computational capabilities by performing calculations off-chain.
    • Proofs are uploaded on-chain to ensure efficient and trustless operations.
  3. Integration with External DeFi Protocols:

    • Supports integration with protocols like EigenLayer, where tokens will be staked.
    • Potential to leverage major DeFi protocols such as Aave, offering extensive investment
      opportunities.
  4. Chainlink Oracles:

    • Utilizing Chainlink’s decentralized oracle network to ensure accurate and reliable off-chain data
      feeds.
    • Enables secure and verifiable price feeds and other external data, enhancing the robustness of
      fund operations.

Fund Offerings:

  • Heritage Dam: Focuses on Bitcoin (BTC) and Ethereum (ETH) investments.
  • Castor Credit Dam: Invests in a range of DeFi protocols.
  • Grateful Dam: Specializes in Liquid Staked Derivatives.

Fund Management:

  • Funds are set to rebalance every x days or when weights deviate by x%, depending on the specific fund’s rules.
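
As an illustration of such a rule, here is a minimal Python sketch of a threshold-based rebalance check; the interval and deviation threshold are placeholders, since the actual values are fund-specific and not stated above.

# Placeholder parameters; the real interval/threshold are fund-specific.
REBALANCE_INTERVAL_DAYS = 30
MAX_WEIGHT_DEVIATION = 0.05   # 5%

def needs_rebalance(days_since_last, current_weights, target_weights):
    """Rebalance when the interval has elapsed or any weight drifts too far."""
    if days_since_last >= REBALANCE_INTERVAL_DAYS:
        return True
    return any(abs(c - t) > MAX_WEIGHT_DEVIATION
               for c, t in zip(current_weights, target_weights))

print(needs_rebalance(10, [0.57, 0.43], [0.50, 0.50]))  # True: a weight drifted >5%
print(needs_rebalance(10, [0.52, 0.48], [0.50, 0.50]))  # False: within bounds, interval not elapsed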

Future Developments:

  • Customizable investment strategies tailored to user preferences.
  • “Funds of Funds” structure, where users can invest in multiple funds, increasing their exposure and diversification.
  • Users will have the ability to choose whether to utilize integrated external protocols, how often their funds rebalance, what coins are included, and the associated fees.
  • Theoretical possibility to create a “fund of funds” to mimic a crypto S&P 500.

Community and Governance:

  • Currently, no community governance, but plans to establish a DAO and introduce a governance token in the future.

Technical Overview:
DamFi’s architecture employs smart contracts for autonomous fund management. The integration of ZKPs ensures efficient off-chain computations, while the proofs are uploaded on-chain for verification. This modular approach enhances security, scalability, and adaptability in the evolving DeFi landscape.

Partnerships and Collaborations:

  • We are partnered with Ora and utilizing their ZKPs to ensure the computational efficiency and security of our protocol.
    - Currently in talks with a few others but not ready to announce them yet.

Testnet Launch and Airdrop:
We are excited to announce the launch of the DamFi testnet. To celebrate this milestone, we will conduct an airdrop to reward early adopters and contributors in the future. Please take a look at https://damfi.io and launch the app!

Our rebalancing dams and staking in the “earn” tab are currently live.

Call to Action:
We invite the community to provide feedback, engage in discussions, and explore the potential of DamFi. Your insights will be crucial as we refine our protocol and expand its capabilities. Also, please take a look at our Galxe campaign here Join DamFi on Galxe

Contact Information:
For more details, visit our website at Damfi.io or join our community here: damfi | Twitter | Linktree
Let’s collaborate to shape the future of decentralized finance.

Conclusion:
DamFi represents a significant advancement in the DeFi space, offering secure, automated, and flexible investment options. By integrating with external DeFi protocols, we provide users with unparalleled opportunities to diversify and optimize their portfolios.

Thank you for your interest in DamFi. We look forward to your feedback and support.

Thanks,
DamFi Team

1 post - 1 participant

Read full topic

Ecosystem (5)
Curve
Gauge Proposals Proposal to add eBTC/tBTC to the Gauge Controller

Published: Jun 20, 2024

View in forum →Remove

Summary: This proposal, submitted on behalf of BadgerDAO and Threshold, seeks to add the eBTC/tBTC pool’s gauge to the Gauge Controller on the Ethereum network.

References/Useful links:

Protocol Description:

  • eBTC: eBTC is a collateralized crypto asset, soft pegged to the price of Bitcoin and built on the Ethereum network. It is backed exclusively by Lido’s stETH and powered by immutable smart contracts with minimized counterparty risk. Designed to be the most decentralized synthetic Bitcoin in DeFi, eBTC allows anyone in the world to borrow BTC at no cost.
  • tBTC: tBTC is a permissionless wrapped Bitcoin that is 1:1 backed by mainnet BTC. tBTC is trust minimized and redeemable for mainnet BTC without a centralized custodian.

Motivation:
Incentivizing the eBTC/tBTC pool is essential to provide deep liquidity for both eBTC and tBTC, promoting their usage within the DeFi ecosystem. This will enhance the trading experience for users, ensuring low slippage and high availability for both tokens. Additionally, it will support the broader adoption of decentralized Bitcoin solutions in DeFi, fostering a more robust and diverse ecosystem.

BadgerDAO and Threshold have committed to co-incentivizing the eBTC/tBTC pool to ensure its success and sustainability. This strategic collaboration aims to attract a significant amount of liquidity, thereby enhancing the pool’s depth and efficiency.

Specifications:

  1. Governance:

    • eBTC: eBTC’s Minimized Governance framework is detailed in this forum post. The protocol’s contracts are immutable, with minimal parameters that can be modified via two Timelocks with 2 or 7-day delays. Only parameters that do not violate users’ trust assumptions can be changed.
    • tBTC: tBTC operates on Threshold DAO’s decentralized threshold encryption protocol. Threshold DAO is governed by the network’s work token, T. T token holders govern the DAO via proposals raised to the Threshold forum, which can be raised to a vote via Snapshot, as well as the on-chain Governor Bravo module via Boardroom. In the future, all contract authorities will be passed to the Governor Bravo contract.
  2. Oracles:

    • eBTC: The eBTC Protocol primarily relies on Chainlink for price data and is adding a secondary (fallback) oracle via governance. Details on the Chainlink Oracle setup can be found here.
    • tBTC: tBTC does not rely on an oracle price feed.
  3. Audits:
    eBTC:

  4. Centralization vectors:
  • eBTC: The protocol has no major centralization vectors. Minimal governance is conducted transparently and in a distributed manner, with robust timelocks and monitoring. Contracts are immutable, and collateral types cannot be changed.
  • tBTC: Threshold Network governance is decentralized, and updates are ratified by the DAO. tBTC contract updates are currently managed by the Council multi-sig.
  5. Market History: Both eBTC and tBTC have maintained a consistent and tight peg to BTC for most of their history. In the case of tBTC, even though some depegs were observed early in its history, its price has not deviated by more than 1% for almost a year, making it one of the most stable BTC wrappers in the market:
  • eBTC
  • tBTC

1 post - 1 participant

Read full topic

Proposals Distribute 304k OP from DAO vault on Optimism to tricrypto pools over 90 days

Published: Jun 19, 2024

View in forum →Remove

Summary:

Distribute 304k OP from DAO vault on Optimism to tricrypto pools over 90 days. Also claim 60k OP to DAO vault on Optimism for a recent grant approved by Optimism (this is 40% of an overall 150k OP grant).

Abstract:

See the previous proposal to distribute OP to these pools. The incentives have expired and this vote is to continue incentives.

Motivation:

Bootstrap Curve Lending on Optimism

Specification:

Streamers are already set up to distribute to the target gauges. The only modification is to increase the reward duration to 90 days. The proposal additionally includes a call to claim 60k OP to the DAO vault on Optimism. The following actions are broadcast over the x-gov relayer:

# Note: `encode` below is assumed to ABI-encode the given function signature and arguments (helper not shown in this post).
OP_TOKEN = "0x4200000000000000000000000000000000000042"
STREAMER = "0x1C8f3D9Fc486e07e3c06e91a18825a344CeeFc54"
AGENT_PROXY = "0x28c4A1Fa47EEE9226F8dE7D6AF0a41C62Ca98267"
AMOUNT_TO_STREAM = 304828805439120005699539  # ~304,829 OP (18 decimals)

GAUGES = ["0x3050a62335948e008c6241b3ef9a81a8c0613b76", "0xb280fab4817c54796f9e6147aa1ad0198cfefb41"]  # up to 8 gauges

ACTIONS = [
    ("0x8e1e5001C7B8920196c7E3EdF2BCf47B2B6153ff", "broadcast", [ # claim OP
        (OP_TOKEN, encode("transferFrom(address,address,uint256)", "0x19793c7824Be70ec58BB673CA42D2779d12581BE", "0xd166eedf272b860e991d331b71041799379185d5", 60000000000000000000000)),  # 60,000 OP (18 decimals)
    ]),
    ("0x8e1e5001C7B8920196c7E3EdF2BCf47B2B6153ff", "broadcast", [ # setup streamer
        ("0xD166EEdf272B860E991d331B71041799379185D5", encode("transfer(address,address,uint256)", OP_TOKEN, AGENT_PROXY, AMOUNT_TO_STREAM)), # transfer from vault to proxy
        (OP_TOKEN, encode("approve(address,uint256)", STREAMER, AMOUNT_TO_STREAM)),  # allow the gauge streamer to transferFrom proxy
        (STREAMER, encode("set_reward_duration(uint256)", 7776000)),  # 90 days in seconds
        (STREAMER, encode("notify_reward_amount(uint256)", AMOUNT_TO_STREAM)),
    ]),
]

Vote:

DAO vote 761 here

1 post - 1 participant

Read full topic

Gauge Proposals Proposal to add ETHFI/weETH on Ethereum to Gauge Controller

Published: Jun 04, 2024

View in forum →Remove

References/Useful links:

• Website - https://www.ether.fi/
• Documentation - ether.fi Whitepaper | ether.fi
• Github Page - etherfi-protocol · GitHub
• Twitter - x.com

Protocol Description:

Ether.Fi is a decentralized, non-custodial liquid staking protocol built on Ethereum, allowing users to stake their ETH and participate in the DeFi ecosystem without losing liquidity. The protocol’s eETH is a liquid restaking token (weETH is the non-rebasing equivalent), serving as a representation of ETH staked on the Beacon Chain, which rebases daily to reflect the associated staking rewards. Users can deposit ETH into the liquidity pool on Ethereum Mainnet to mint eETH, hold eETH to accrue rewards, and use eETH within DeFi or swap it back to ETH at any time via the liquidity pool.

ETH staked through the ether.fi liquidity pool accrues normal Ethereum staking rewards, and will also be natively restaked with EigenLayer. Staking with eETH on ether.fi automatically natively restakes that ETH to EigenLayer and accrues normal staking rewards while allowing users to keep composability on their eETH in other DeFi protocols.

ETHFI is the governance token that gives community members a direct mechanism to contribute to the ether.fi protocol and influence the growth of the ether.fi ecosystem. ETHFI was launched on March 18, 2024.

Motivation:

Ether.Fi is looking to seed an ETHFI/weETH twocrypto pool on Curve to serve as a primary source of on-chain liquidity for ETHFI. Incentivising a Curve pool will continue to boost the liquidity of the governance token. Higher liquidity ensures that traders and investors can easily enter or exit positions, which is essential for the overall usability and attractiveness of the ETHFI token within the DeFi ecosystem. To ensure success, ether.fi is committed to growing pool liquidity through bribes and incentives.

Specifications:

  1. Governance: Currently, the ether.fi protocol utilises a time-locked multi-signature wallet, with the signatories being doxxed ether.fi executives and investors. Governance of ETHFI is comprised of a Discourse channel for discussions, a Snapshot page used for voting, and Agora used for voting delegation.
  2. Oracles: For weETH, the protocol relies on an oracle for withdrawals and beacon state. The Oracle is based on a hash consensus mechanism and run by the committee members. Initially, the ether.fi team will be the only party operating the Oracle nodes; however, as the protocol grows, more external parties will be added to the committee.
  3. Audits: Audit reports for the http://ether.fi/ protocol are found on the GitBook page - Audits | ether.fi. The audits have been carried out by reputable firms such as Certik, Zellic, Nethermind, Omniscia and Solidified to ensure the security of the protocol. An audit competition was also recently completed through Hats Finance.
  4. Centralization vectors: The centralization vectors primarily relate to the Oracle until it becomes decentralized, in line with the protocol roadmap. The price (staking rewards for rebasing) and the validator management (spinning up new validators and exiting them for liquidation) are also currently centralized for the early stages of the protocol to ensure mobility. As mentioned above, the signatories currently consist of the doxxed executive team and investors. ETHFI is a non-upgradable governance token.
  5. Market History: eETH has accumulated approx. 1.58M of ETH staked within the protocol since launching in the middle of November 2023. ETHFI has a market cap of approx. 542M, with a FDV of 4.7B. 24 hour trading volume since launch has ranged from

Links:

  • Pool / Token: 0xcccd572b22dee28479b11dd99589a1e4c0682a7e
  • Gauge: 0x6579758e9e85434450d638cfbea0f2fe79856dda

3 posts - 3 participants

Read full topic

Proposals sUSDe LlamaLend market for up to 35x leverage with a special monetary policy

Published: Jun 01, 2024

View in forum →Remove

Summary:

Ethena’s sUSDe vault provides a very interesting currency for farming and speculation. Currently giving around 25% APR, it is also fairly volatile, with fluctuations up to 0.5%-1% (or even more). Liquidity for sUSDe is also very good.

It would be very advantageous for traders to leverage in the dips (say, at -1% deviation). With x35 leverage, one could win 35% of the initial deposit while earning a few hundred % APR.

Below I present a description of how this can be achieved.

Market parameters:

The simulator required a few modifications to support two different Thalf times: one for the sUSDe/sDAI pool (Thalf = 30 min) and another for sDAI/FRAX (Thalf = 10 min). The data used the /frax pool because it historically had better liquidity and has existed for longer; in reality, however, we would use sUSDe/sDAI + sDAI/crvUSD, or even better, create sUSDe/crvUSD.

Results show that maximum leverage is achievable at the following parameters:

fee = 0.2%
A = 200
liq_discount = 1.4% (conservative)
loan_discount = 1.9%

Max leverage = 1 / (1.9% + 0.96%) ~= x35
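
The quoted maximum leverage follows directly from those discounts; a quick check of the arithmetic (the 0.96% term appears in the formula above, presumably the band-related loss term for A = 200, though the post does not spell this out):

loan_discount = 0.019    # 1.9%, from the parameters above
band_term = 0.0096       # 0.96%, per the formula in the post (A = 200)

max_leverage = 1 / (loan_discount + band_term)
print(round(max_leverage, 1))  # ~35.0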

The graph for maximum losses in an hour:

Average losses are much smaller, on the order of 0.025% per day at max leverage.

Dynamic monetary policy:

Monetary policy is pegged to an EMA of the sUSDe rate: the borrow rate reaches that value at utilization = 85% and tapers down to 35% of that rate at utilization = 0:
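
A hedged sketch of that rate rule: the borrow rate equals the sUSDe EMA at 85% utilization and 35% of it at zero utilization. The exponential interpolation between those two points is an assumed shape chosen for illustration; the exact curve is defined by the linked smart contract.

def borrow_rate(susde_rate_ema, utilization,
                rate_at_zero_frac=0.35, target_utilization=0.85):
    # r(u) = r_ema * 0.35 ** (1 - u / 0.85): equals 35% of the EMA at u = 0
    # and the full EMA at u = 0.85 (interpolation shape is an assumption).
    exponent = 1 - utilization / target_utilization
    return susde_rate_ema * rate_at_zero_frac ** exponent

print(borrow_rate(0.25, 0.0))    # 0.0875 -> 35% of a 25% sUSDe EMA
print(borrow_rate(0.25, 0.85))   # 0.25   -> full EMA at 85% utilization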

Simulation for this rate curve is presented here, smart contract can be found here.

Comparison of EMA and raw sUSDe rates is on the picture below (we peg rate to EMA):

Addresses:

1 post - 1 participant

Read full topic

Proposals PegKeeper V2 Upgrade: Add PKv2 USDC(25m), USDT(25m), pyUSD(15m), TUSD(10m)

Published: May 30, 2024

View in forum →Remove

Summary:

PegKeeperV2 upgrade! A Regulator is introduced for smarter control of PK ceilings. Adding the following PegKeepers with their corresponding crvUSD debt ceilings:

  • USDC(25M),
  • USDT(25M),
  • pyUSD(15M),
  • TUSD(10M).

Older PegKeepers’ ceilings are set to 0.

Abstract:

There is a DAO vote in progress to upgrade the crvUSD PegKeepers to refine the system architecture and be more resilient to spam attacks and depeg of any constituent pegkeeper asset. Curve has recently released PKv2 documentation where you can read further on the new design.

Motivation:

The old pegkeeper was vulnerable to several adverse scenarios, most critically to the risk that a pegkeeper asset depegs temporarily or permanently, causing unbacked crvUSD to enter circulation and potentially resulting in depeg. The new design mitigates this risk by preventing the pegkeeper from depositing crvUSD into circulation if the pool oracle price deviates from other pegkeeper pools. There is also a rate limit implemented that requires multiple pegkeepers to become active before any single pegkeeper can deposit up to its debt ceiling. There is also an emergency admin set to the Curve emergency DAO multisig that can pause pegkeepers if necessary. Together these additional precautions reduce crvUSD’s dependency on the stability of its pegkeeper assets.
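
Conceptually, the deposit gate described above can be sketched as a couple of checks. This is an illustrative sketch only, not the PegKeeperV2 or Regulator contract code; the function name, thresholds, and rate-limit schedule are assumptions, and the linked PKv2 documentation is authoritative.

def allowed_deposit_fraction(pool_price, peer_pool_prices, peers_active_fraction,
                             max_deviation=0.001):
    # 1) Block deposits while this pool's crvUSD price deviates from the other
    #    pegkeeper pools, so a depegging asset cannot mint unbacked crvUSD.
    if any(abs(pool_price - p) > max_deviation for p in peer_pool_prices):
        return 0.0
    # 2) Rate limit: the share of its own debt ceiling a single pegkeeper may use
    #    grows only as more of its peers are already active (schedule assumed).
    return min(1.0, 0.2 + 0.8 * peers_active_fraction)

print(allowed_deposit_fraction(1.0005, [1.0003, 0.9950, 1.0004], 0.3))           # 0.0: a peer pool deviates
print(round(allowed_deposit_fraction(1.0003, [1.0004, 1.0002, 1.0005], 0.5), 2))  # 0.6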

Because of the reduced risk to crvUSD, the proposal here reintroduces TUSD as a pegkeeper asset. It also introduces pyUSD as a new pegkeeper asset, replacing USDP (also issued by Paxos). Both of these assets have been previously reviewed by Llama Risk:

pyUSD Pegkeeper Onboarding Review
TUSD Asset Risk Assessment

Specification:

There is a DAO vote active for this upgrade. The vote first removes the 4 v1 pegkeepers from the monetary policy based on crvUSD aggregated prices (used to determine crvUSD interest rates) and sets their debt ceilings to 0.

Next, it adds the 4 v2 pegkeepers to the new PegKeeper Regulator:

Call via agent (0x40907540d8a6C65c637785e8f8B742ae6b0b9968):
├─ To: 0x36a04CAffc681fa179558B2Aaba30395CDdd855f
├─ Function: add_peg_keepers
└─ Inputs: [(‘address[]’, ‘_peg_keepers’, (‘0x5b49b9add1ecfe53e19cc2cfc8a33127cd6ba4c6’, ‘0xff78468340ee322ed63c432bf74d817742b392bf’, ‘0x68e31e1edd641b13caeab1ac1be661b19cc021ca’, ‘0x0b502e48e950095d93e8b739ad146c72b4f6c820’))]

It then adds the 4 new pegkeepers to each of the monetary policy contracts and sets their respective debt ceilings to:

  • USDC(25M),
  • USDT(25M),
  • pyUSD(15M),
  • TUSD(10M).

Finally, it sets the emergency admin to the Curve emergency DAO multisig, the same address used for certain emergency actions in Curve pools, including killing gauges and some parameter controls. In this case, the multisig can globally pause or unpause pegkeeper deposits, withdrawals, or both (for all pegkeepers included in the PegKeeper Regulator).

Vote:

1 post - 1 participant

Read full topic

Proposals Pay $250k Bug Bounty to f(x) Protocol for Discovery of Curve Swap Router Bug

Published: May 30, 2024

View in forum →Remove

Summary

On April 30, 2024, an f(x) user lost ~$725,000 from slippage while swapping stETH to fxUSD because of incorrect price quoting on the f(x) swap UI. The UI uses the Curve swap router as part of its determination of the optimal swap path. The Curve router API delivered a price quote that differed from the execution price. The bug has been corrected and the user compensated in full from the f(x) treasury.

Curve Grants intends to pay f(x) a bug bounty of $250,000 worth of CRV, the maximum bounty size offered by Curve, for the discovery of a bug associated with this incident. Payment will be made to the f(x) treasury.

The Incident

On April 30, 2024, a user placed a swap order of 314.89 stETH for fxUSD on the f(x) swap UI. The UI compared rates between the protocol’s direct minting, DEX aggregators, and Curve to offer the best rate. The open source Curve SDK was used to route swaps on Curve. The user was quoted a favorable rate when routing through Curve (as quoted by the f(x) swap interface):

However, the actual rate at the time had very high slippage through Curve, which was correctly displaying high slippage on the Curve UI:

The affected user executed this tx, which resulted in a loss of $726,039 compared to the price quoted by the f(x) swap interface.

The trade routed through

  1. Curve stETH/ETH
  2. Curve LLAMMA crvUSD/WETH
  3. Curve crvUSD/fxUSD


Source: BlockSec Explorer

The user received 221,448.52 fxUSD.

f(x) Response

The f(x) team responded to the incident by compensating the affected user in full from their treasury. The user was compensated 726,039 fxUSD in this tx.

The f(x) team coordinated with the Curve team to identify the cause of the issue. An announcement about the incident was posted to the AladdinDAO Discord channel.

Source of the Problem

f(x) had been using Curve-js version 2.46.4, whereas the Curve UI uses the latest version (v2.57.3 at the time of the incident). Substantial updates in v2.47.0 and v2.48.0 changed the behavior of the router. This versioning mismatch was responsible for the incorrect router behavior on the f(x) swap interface and explains why the Curve UI was giving a different quote. Curve recommends that anyone integrating Curve-js always use the latest version available.

Note that f(x) had previously contacted Curve while attempting to upgrade the router version to the latest version just two weeks before the incident. Their team had been experiencing issues with upgrading successfully, and unfortunately were not able to successfully upgrade before this incident occurred.

Additional Bug Discovery

Investigation into the incident revealed a bug existing in the most recent version of the Curve router. It had not caused any user losses but was, nonetheless, considered a risk that required patching.

In the most recent router implementation, it still might have been possible that the route could change between the quote and the swap. (Note that this is a possible, albeit very unlikely scenario.) As seen below, getBestRouteAndOutput uses a 5 minute cache for route and a 15 second cache for output. If the cache were to expire, it could happen that the route or output would update during the swap.


Source: Curve-js router.ts

The caching has been modified so that the route and output are taken from a permanent cache, which is now populated in the getBestRouteAndOutput method.


Source: Curve-js router.ts
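
The difference between the old and new caching behavior can be sketched as follows (Python used here for brevity; the actual implementation is TypeScript in Curve-js, and the names below are illustrative rather than the library’s API). With time-expiring caches, the route or output used at swap time can differ from what was quoted; pinning the quote in a permanent cache that the swap then reads back removes that window.

import time

# Old behavior (sketch): time-expiring caches (~5 min for route, ~15 s for output),
# so the data can silently refresh between the quote and the swap.
_route_cache, _output_cache = {}, {}

def quote_ttl(key, find_route, get_output, route_ttl=300, output_ttl=15):
    now = time.time()
    if key not in _route_cache or _route_cache[key][1] < now:
        _route_cache[key] = (find_route(), now + route_ttl)
    route = _route_cache[key][0]
    if key not in _output_cache or _output_cache[key][1] < now:
        _output_cache[key] = (get_output(route), now + output_ttl)
    return route, _output_cache[key][0]

# New behavior (sketch): the quote pins route and output in a permanent cache,
# and the swap reuses exactly what was quoted.
_pinned = {}

def quote_pinned(key, find_route, get_output):
    route = find_route()
    _pinned[key] = (route, get_output(route))
    return _pinned[key]

def swap_pinned(key):
    return _pinned[key]  # same route/output that was shown to the user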

This update in the most recent version of Curve-js (v2.57.5), patched as of commit 32e9905, resolved the unlikely possibility that the router quote could update mid-execution.

Takeaway

Curve makes all possible efforts to maintain its open source services; however, these repositories are available “as is”, and Curve cannot guarantee the performance of this software for use by third-party integrators. Wherever possible, integrators should use the latest version of Curve software and consult with the Curve team on integrations.

However, this incident revealed a potential issue that could affect users by executing a swap at a different rate from the quote given by the router. This was considered a low probability but high impact risk. Therefore, Curve Grants intends to award f(x) with a bug bounty worth $250,000 in CRV for discovering the issue. Curve Grants intends to send the amount to the f(x) treasury.

f(x) Treasury Multisig: 0x26B2ec4E02ebe2F54583af25b647b1D619e67BbF

Address Confirmation: deployments/deployments.mainnet.md at main · AladdinDAO/deployments · GitHub

1 post - 1 participant

Read full topic

Gauge Proposals INV and T whitelisting for direct liquidity mining incentives

Published: May 29, 2024

View in forum →Remove

Summary:

Whitelist INV and T for direct liquidity mining incentives.

Context:

Inverse Finance and Threshold are incentivizing liquidity provision across various pools, including DOLA/crvUSD and TricryptoINV for Inverse and ETH/T, arbi-WBTC/tBTC, op-WBTC/tBTC, crvUSD + tBTC + wstETH, thUSD + 3CRV and thUSD + crvUSD for Threshold through veCRV voting incentives. Recently, the effectiveness of these voting incentives has turned net negative in free markets due to the correlation between the volume of incentives and the dilutive effect of available votes.

Rationale:

In order to maintain a neutral-to-positive efficiency of vote incentives, Inverse Finance and Threshold would like to whitelist the INV and T tokens for direct liquidity mining towards these pools. The INV and T distribution will be managed by the Quest board address from Paladin, which must be whitelisted to facilitate the distribution.

Parameters:

Token ticker: INV
Token contract address: 0x41d5d79431a913c4ae7d69a668ecdfe5ff9dfb68

Token ticker: T
Token contract address: 0xcdf7028ceab81fa0c6971208e83fa7872994bee5

Quest board address : 0xF13e938d7a1214ae438761941BC0C651405e68A4

Target pools:

  • DOLA/crvUSD : 0x8272e1a3dbef607c04aa6e5bd3a1a134c8ac063b
  • TricryptoINV : 0x5426178799ee0a0181a89b4f57efddfab49941ec
  • crvUSD + tBTC + wstETH : 0x2889302a794da87fbf1d6db415c1492194663d13

Target gauges:

  • DOLA/crvUSD : 0xecad6745058377744c09747b2715c0170b5699e5
  • TricryptoINV : 0x4fc86cd0f9b650280fa783e3116258e0e0496a2c
  • crvUSD + tBTC + wstETH : 0x60d3d7ebbc44dc810a743703184f062d00e6db7e

1 post - 1 participant

Read full topic

Gauge Proposals Proposal to add PAL/WETH to the Gauge Controller [Ethereum]

Published: May 23, 2024

View in forum →Remove

Summary:

Proposal to add gauge support for the PAL/WETH pool on Mainnet.

Abstract:

Our proposal aims to deepen the PAL/ETH liquidity using the new crypto ng type of pool.

References/Useful links:

Link to:

Protocol Description:

Paladin is a DeFi ecosystem focused on governance protocols and markets, designed to unlock the value of governance. It aims to enable every DeFi stakeholder to participate in governance as they wish, building solutions around governance power. These solutions are designed to strengthen the underlying DeFi protocol by leveraging the token’s voting power or utility while creating value for token holders.
Paladin’s flagship product, Quest v2, introduces a pioneering method for boosting voting incentives across various DEXs including Curve and F(x) protocol.
The Autovoter by Paladin optimizes voting incentives for holders of vlCVX, vlAURA, and vlLIQ, offering some of the most competitive yields available for governance tokens.
Warlord, a novel governance index, allows users to leverage vote incentives within the Convex and Aura ecosystems while automatically managing CVX and AURA positions.
Paladin Lending enhances decentralized finance by facilitating voting pools for prominent protocols like Curve.

Motivation:

With PAL being the cornerstone of Paladin’s product suite, and with the upcoming tokenomics upgrade, Paladin has decided to move its protocol-owned liquidity to a crypto ng pool to use the latest and most efficient Curve pool for volatile assets.
To provide additional immediate PAL liquidity, Paladin has been incentivizing DEX liquidity for PAL ever since its inception and plans to keep using governance power and PAL bribes with this new pool.

Specifications:

  1. Governance: The current governance of Paladin is composed of a forum to discuss proposals and a Snapshot space to vote. Execution of a vote is currently handled by a Community Multisig whose members are elected through the DAO voting process, but the DAO is currently voting to switch governance to an Optimistic Council (PIP-23).

  2. Oracles: The protocol does not use Oracles

  3. Audits: You can find all the audits of the current products here.

  4. Centralization vectors: Treasury is managed by multisigs

  5. Market History: Curve has been the homeland for PAL’s liquidity since March 21.
    Link to the latest curve pool here

Pool details:

Pool : 0x85847ef522d78efdbec8afb0045dc7d6982837c3

Gauge : 0x8682FA9C4b6495Ddf38643807Ca088eBE0d22b8B

For / Against / Abstain

1 post - 1 participant

Read full topic

Gauge Proposals Proposal to add eBTC/wstETH to the Gauge Controller

Published: May 17, 2024

View in forum →Remove

Summary: This proposal, submitted on behalf of BadgerDAO and Lido, seeks to add the eBTC/wstETH pool’s gauge to the Gauge Controller on the Ethereum network.

References/Useful Links:

Protocol Description: eBTC is a collateralized crypto asset, soft pegged to the price of Bitcoin and built on the Ethereum network. It is backed exclusively by Lido’s stETH and powered by immutable smart contracts with minimized counterparty risk. Designed to be the most decentralized synthetic Bitcoin in DeFi, eBTC allows anyone in the world to borrow BTC at no cost.

Motivation: The request to enable this gauge is motivated by several factors. Firstly, establishing Curve as a primary liquidity venue for eBTC will strengthen the protocol and enhance its composability and use cases. Secondly, Lido aims to expand the liquidity profile of stETH and views eBTC as a strategic pairing due to the synergies between the assets.

Additionally, the CDP nature of eBTC, backed solely by stETH with a 110% MCR requirement and no fees or interest rates, makes it the most capital-efficient way to long stETH against BTC in DeFi. Having a pool with this pair will enable a smooth path for most leverage operations, and we believe Curve is the perfect venue to handle this with the highest efficiency possible. The liquidity in this pool is expected to drive most of the leverage volume, resulting in high fees for the protocol.
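
As a back-of-the-envelope illustration of that capital-efficiency claim: with a 110% minimum collateral ratio and no fees, looping stETH collateral into borrowed eBTC at exactly the MCR converges to roughly 11x exposure. The sketch below is idealized arithmetic (no slippage, no safety buffer), not protocol guidance.

mcr = 1.10                        # 110% minimum collateral ratio
max_ltv = 1 / mcr                 # ~0.909: value borrowable per unit of collateral
max_leverage = 1 / (1 - max_ltv)  # limit of the re-deposit loop (geometric series)
print(round(max_ltv, 3), round(max_leverage, 1))  # 0.909 11.0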

Finally, Lido plans to incentivize LPs to adopt the pool. The communities of Lido, BadgerDAO, Curve, and Convex have historically collaborated closely, and this pool will further strengthen these ties, unlocking a variety of integrations and synergies.

Specifications:

  1. Governance: eBTC’s Minimized Governance framework is detailed in this forum post. The protocol’s contracts are immutable, with minimal parameters that can be modified via two Timelocks with 2 or 7-day delays. Only parameters that do not violate users’ trust assumptions can be changed.
  2. Oracles: The eBTC Protocol primarily relies on Chainlink for price data and is adding a secondary (fallback) oracle via governance. Details on the Chainlink Oracle setup can be found here.
  3. Audits:
  4. Centralization Vectors: The protocol has no major centralization vectors. Minimal governance is conducted transparently and in a distributed manner, with robust timelocks and monitoring. Contracts are immutable, and collateral types cannot be changed.

  5. Market History:

  • eBTC has maintained a strong peg to BTC since its launch (~2 months ago) ranging from 0.99 to 1.009.
  • All its liquidity (~$4.1M) is currently in a Uniswap V3 pool paired with wBTC, where it has consistently maintained a strong peg:

2 posts - 2 participants

Read full topic

Proposals Create LSD pools on Llamalend e.g. stETH/ETH (non crvUSD)

Published: May 13, 2024

View in forum →Remove

Summary:

Create LSD pools on Llamalend e.g. stETH/ETH

Abstract:

Llamalend and crvUSD are a huge revenue source for Curve. DeFi users mainly borrow dollars; however, some DeFi users run leverage strategies built around the yield difference between a liquid staking derivative (LSD) and borrowing the native chain token (e.g. ETH).
Sometimes there is a depeg between an LSD such as stETH and ETH on the secondary market; this could be an opportunity for Llamalend to solve the depeg problem with its soft-liquidation algorithm.
There is a market already available on DeFi Saver.
DeFi Saver

Motivation:

This proposal would increase revenues and TVL to the DAO.

Specification:

To be discussed: stETH, rETH, …

For:

New revenue streams, new pools, listing of small-market-size LSD tokens.
Possible application to new markets with small LSD borrowing/lending: stBNB/BNB, stAVAX/AVAX, stMATIC/MATIC, stFTM/FTM …

Against:

Time required for implementation, market too small for Curve

4 posts - 2 participants

Read full topic

Proposals Distribute 1,421,611 ARB from the DAO Vault on Arbitrum to Curve Pools

Published: May 07, 2024

View in forum →Remove

Summary:

This proposal is to continue incentives to the set of Arbitrum pools that were already receiving ARB. Proposing to distribute the remaining ARB (1,421,611.166666666666472534 ARB) over 80 days, approx. 17,770 ARB per day.

Abstract:

Some background, because I’m not sure there are any previous proposals on the forum related to ARB distributions. Curve received 3,476,795 ARB to the DAO vault on Arbitrum on June 6, 2023 as part of an airdrop to DAOs/ecosystem partners. There were a couple of votes that passed to begin distribution of the ARB tokens to Curve pools on Arbitrum:

Vote ID 563 | Executed 1/13/2024:
Distribute ~1,150,000 ARB from Arbitrum grant to arbitrum LPs over ~8 weeks (60 days) with the initial set of receivers as follows:

  • 2pool (0x7f90122bf0700f9e7e1f688fe926940e8839f353),
  • crvUSD/USDC.e (0x3adf984c937fa6846e5a24e0a68521bdaf767ce1),
  • tricrypto-crvUSD (0x82670f35306253222f8a165869b28c64739ac62e),
  • crvUSD/USDT (0x73af1150f265419ef8a5db41908b700c32d49135),
  • crvUSD/USDC (0xec090cf6dd891d2d014bea6edada6e05e025d93d),
  • crvUSD/FRAX (0x2fe7ae43591e534c256a1594d326e5779e302ff4), and
  • crvUSD/MIM (0x4070C044ABc8d9d22447DFE4B032405970878d06).

The set of receivers can be increased with future votes by the Parameter DAO.

Vote ID 631 | Executed 3/4/2024:
Enable ARB rewards for crvUSD/CRV/ARB tricrypto gauge and disable ARB rewards for the MIM/crvUSD gauge. Disperse 85K ARB to co-incentivize the MIM/crvUSD gauge over 30 days with the MIM team.

Vote ID 649 | Executed 3/19/2024:
Extend ARB rewards by adding an additional 811,252 ARB to the reward streamer, to be distributed over 42 days (approx. 19,315 per day).

The list of current ARB incentive receivers (Curve pools), which receive an equal portion of the stream, are:

  • Tricrypto-crvUSD (3c-crvUSD): 0x82670f35306253222F8a165869B28c64739ac62e
  • crvUSD/USDC (crvUSDC): 0xec090cf6DD891D2d014beA6edAda6e05E025D93d
  • crvUSD/USDT (crvUSDT): 0x73aF1150F265419Ef8a5DB41908B700C32D49135
  • crvUSD/USDC.e (crvUSDC.e): 0x3aDf984c937FA6846E5a24E0A68521Bdaf767cE1
  • Curve.fi USDC/USDT (2CRV): 0x7f90122BF0700F9E7e1F688fe926940E8839F353
  • crvUSD/Frax (crvUSDFRAX): 0x2FE7AE43591E534C256A1594D326e5779E302Ff4
  • TriCRV-ARBITRUM (crvUSDARB…): 0x845C8bc94610807fCbaB5dd2bc7aC9DAbaFf3c55

The RewardStream contract linearly vests the tokens in the contract evenly to all designated rewards_receivers over a specified reward_duration. In the case of this proposal, the 7 receivers will each receive approx. 17,770 ARB each week for an ~11 week distribution period. This will spend the remainder of the ARB in the DAO vault.
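
The per-day and per-receiver figures above follow directly from the remaining balance, the 80-day duration, and the 7 receivers; a quick check:

remaining_arb = 1_421_611.17   # remaining ARB in the DAO vault (rounded)
duration_days = 80
receivers = 7

per_day_total = remaining_arb / duration_days          # ~17,770 ARB/day across all gauges
per_receiver_weekly = per_day_total / receivers * 7    # ~17,770 ARB/week per receiver
weeks = duration_days / 7                              # ~11.4 weeks
print(round(per_day_total), round(per_receiver_weekly), round(weeks, 1))  # 17770 17770 11.4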

Motivation:

Proposing to spend the remainder because it’s a hassle to continue creating multiple votes for an inherently finite reward program. The incentives are overall a useful way to bootstrap crvUSD on Arbitrum and Curve Lending on Arbitrum, which requires sufficient crvUSD liquidity to process liquidations and pool liquidity where pool oracles are employed in lending markets. Not anticipating any major departure in strategy that would require adjustments to the reward distribution, so this proposal prefers to distribute all over a reasonabl