zkServerless Framework & x86 Verifiable Compute Engine

Hi everyone! This is Butian from the Blockless team. I would like to share with you an open-source middleware that we’ve been exploring with the EigenLayer team, to be secured by the Restaking Collective. I’m eager to hear your thoughts and feedback, so please don’t hesitate to comment below!

TL;DR

  • We have developed an edge-native WebAssembly-based Turing-complete compute engine that operates on lightweight nodes, suitable for everyday devices like personal computers.
  • Blockless offers zkServerless executions and pioneers the industry’s first x86 emulator within a WebAssembly (WASM) runtime. Blockless as an Actively Validated Service (AVS) enables verifiable computing on a trustless network, collectively secured and powered by EigenLayer’s restakers and operators.
  • The implementation allows L1 blockchains like Ethereum to achieve uncapped throughput, leveraging high-performance verifiable off-chain executions and automatic horizontal scaling with an advanced load distribution algorithm.

Introducing Blockless

For developers, Blockless is a WebAssembly-based verifiable execution platform that enables the creation and execution of serverless functions and x86 programs at near-native machine speed. It guarantees off-chain execution security through pluggable zk and dynamic consensus verification methods. As a community-powered network, the decentralized stack is non-custodial and censorship-resilient.

Blockless provides fully-managed, hands-off deployment, allowing builders to focus solely on writing business logic. It provides uncapped throughput and ready-to-use function templates that cater to the common needs of decentralized applications (dApps), such as programmable oracles, indexers, automation triggers, and hosting solutions.

In simple terms, Blockless handles the heavy computation and external-resource-dependent tasks that blockchains and smart contracts cannot execute themselves, without compromising security or decentralization.

How does restaking with Blockless work?

As a permissionless network, Blockless invites EigenLayer restakers to secure and power the network via two options:

  1. Native staking: ETH can be restaked on EL to operate or delegate a node.
  2. ETH LP staking: ETH-BLS LP tokens can be restaked on EL to operate or delegate a node.

There is no marginal cost for restakers, and the minimum hardware requirement for operators is low. In principle, Blockless allows participation from IoT, personal, and server-grade devices. However, for initial network bootstrapping, the minimum hardware spec is set to that of a Raspberry Pi (4-core 1.5 GHz 64-bit CPU, 8 GB RAM).

As a node operator, you will be involved in running serverless functions and x86 emulators, generating zk proofs (which may require higher-grade hardware), or participating in application-specific local consensus. However, once the node is set up, tasks are automatically assigned and executed seamlessly in the background without requiring your attention. Blockless incorporates an automated orchestration mechanism that distributes work to suitable nodes across the network. Nodes are selected for execution based on their hardware capacity and the resource consumption requirements of the specific task.

Furthermore, Blockless features built-in resource consumption metering. This ensures that restakers and operators can anticipate receiving compensation proportional to their stake and hardware capacity.

What are the benefits of the collaboration?
1. Security guarantee for restakers, operators, and computing tasks

  • There are built-in failover and fault-tolerance mechanisms in Blockless’ P2P protocol that protect restakers’ assets: even if their node operators fail for unintended reasons, their staked ETH or LP will not be slashed; the job will simply be picked up by the next available and most qualified node (see the sketch after this list).
  • Any execution failure that exceeds the timeout limit will be caught without triggering slashing, even if the underlying bug was missed in the security audit, so restakers and operators can rest assured with this extra layer of protection.
  • A strictly isolated sandbox environment on the host machine prevents operators from tampering with or accessing the running process. Furthermore, with end-to-end encryption and MPC extensions, even private keys can be securely handled and passed as variables, safeguarding sensitive information from unauthorized access or exposure.
  • Operators have the ability to specify the maximum hardware resource consumption, eliminating the risk of slashing due to service overload.
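As a rough illustration of the failover flow described in the first two bullets, here is a minimal Python sketch under stated assumptions: Node.execute, the exception names, and the ordering of qualified_nodes are hypothetical stand-ins rather than the actual protocol API. The point it demonstrates is simply that a timeout or failure leads to reassignment to the next qualified node, never to slashing.

```python
class ExecutionTimeout(Exception):
    """Raised when a job exceeds its timeout limit on a node (hypothetical)."""

class ExecutionFailure(Exception):
    """Raised when a node fails the assigned job for any other reason (hypothetical)."""

def run_with_failover(job, qualified_nodes):
    """Try nodes in order of suitability. A timeout or failure on one node never
    slashes its operator; the job is simply picked up by the next qualified node."""
    for node in qualified_nodes:
        try:
            return node.execute(job)              # hypothetical node interface
        except (ExecutionTimeout, ExecutionFailure):
            continue                              # no slashing: reassign to the next node
    raise ExecutionFailure("no qualified node completed the job")
```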

2. Portability and high-performance efficiency for dApps

  • A 100 MB Docker image can shrink to 1 MB or less in a comparable WASM format. This optimization enables faster deployment and execution of applications in resource-constrained environments.

  • WASM runs at near-native speed across devices such as IoT devices, smartphones, and laptops, as well as professional bare-metal CPUs, GPUs, and cloud servers. Theoretically, any device that can open a web browser can participate as a network operator.

A bonus for Web2 devs

  • Developers have the flexibility to code in popular programming languages such as Go, Rust, AssemblyScript, and Python, and achieve immediate interoperability with smart contracts and other JavaScript programs in the open web ecosystem.

By combining the restaker network and the trustless verifiable compute engine, the collaboration between EigenLayer and Blockless creates a community-driven execution network that is more secure, efficient, and scalable. This collaboration brings benefits to developers, validators, and users, as it enhances the overall reliability, resilience, and performance of the system.

What’s in it for me?

  • For restakers: you can expect to earn more profit at zero marginal cost, with an additional guarantee of the security of your restaked assets. You also have full transparency into operators for delegation, giving you more control and confidence in your restaking activities.
  • For developers: ship feature-rich, high-performance decentralized applications (dApps) without worrying about the underlying infrastructure. This significantly reduces your go-to-market time, engineering headcount, and execution risks, allowing you to focus on creating innovative dApps.
  • For those who believe in an open web, together we can create a community-driven, community-controlled, and community-powered decentralized stack. One that shapes a collective future and open ecosystem.

What’s next?

  • The EigenLayer and Blockless teams will collaborate on the staking requirements and guidelines. We will integrate the restaking and execution modules, and conduct security audits and end-to-end testing. In the spirit of transparency, which both teams uphold, this integration will be developed in public as the EigenLayer client and contract interfaces emerge.
  • The technical documentation can be found here. Need more? Additional resources about Blockless can be found here.
  • Interested in staking your claim, running a node, or developing with this technology? Register here: EigenLayer x Blockless.
  • Let us know what you think by replying below! Comments and questions are welcomed and appreciated. As we keep building, community feedback becomes a crucial part of what we’re trying to achieve.
42 Likes

Looking forward to this partnership

7 Likes

What’s the proof generation time for a simple computation program at lightweight nodes (laptop)?

2 Likes

Hi Butian,

I have questions that would help me, and perhaps others, understand the benefits and long-term implications of this partnership more clearly:

  1. Staking Rewards: Could you provide more details about the potential rewards for restakers? What factors influence these rewards, and how will they be calculated?

  2. Node Operations: Could you elaborate on the technical abilities required to run nodes?

  3. Security Measures: Can you provide a more detailed breakdown of the features you mentioned? Particularly, how do these measures enhance the security of restakers’ assets and the integrity of node operations?

  4. Scalability: How does the partnership plan to address the issue of scalability as the number of restakers and operators increases (throughput, economics, etc)?

  5. Long-term Vision: Could you expand on the long-term benefits of this collaboration? Specifically, how do you envision this partnership contributing to the overall growth and stability of the EigenLayer ecosystem?

Thank you

5 Likes

The platform is nice and descriptive in general. The only problem is that transactions are approved a little late.

1 Like

If the Blockless runtime is implemented, would it be right to say that the main benefit operators get is security?

So if I’m running a node that hosts multiple AVSs, the Blockless runtime would make sure my computer doesn’t die from overload, thus protecting me from getting slashed?

2 Likes

The key parameter that impacts performance is proof verification, which is carried out with each invocation and is remarkably swift, usually completed within a few milliseconds. The generation of zk proofs (which may require higher-grade hardware) is conducted only once, during the build phase of the development process.

2 Likes

Great questions. Thank you, seb3point0. I will try to keep the answers brief.
1/
The rewards are calculated based on the computation performed on the device. As a general guideline, devices with better internet connections, better hardware resources (CPU, GPU, RAM), and in-demand geolocations receive more tasks and rewards over time. The actual rewards are affected by the supply (of validators/operators) and demand (for such computing resources).

We do realize that restakers need a reward projection to compare against the opportunity cost. Therefore, we provide projected revenue based on an ML model for a given device’s hardware resources. This model informs the suitability score, S(i), of a device i, considering the device’s hardware resource capacities.

The device selection algorithm operates on the principles of simulated annealing, using two key formulas:

  1. ΔE = S(i,candidate) − S(i,current), where:
    • S(i,candidate) is the suitability score of a candidate device randomly selected from the pool.
    • S(i,current) is the suitability score of the currently selected device.
    • ΔE is the difference in suitability scores between the candidate and the current device.
  2. P_accept = min(1, exp(ΔE/T)), where:
    • T is the temperature parameter, initialized to 10 and reduced by a factor of 0.99 in each iteration.
    • P_accept is the acceptance probability for the candidate device to replace the current device: a candidate with a higher suitability score is always accepted, while a worse one is accepted with a probability that shrinks as T cools.

Let’s define a feature matrix X of size n x m, where n is the number of suitable devices within the task acceptance threshold, and m is the number of measured hardware component dimensions (CPU, RAM, Storage, GPU, Network capabilities, and geolocation scarcity). Simultaneously, we define a weight matrix W of size m x 1, each entry of which corresponds to the weight or coefficient of a hardware component.

The product Z = X * W represents the demand likelihood that a node is selected. Here, Z is the expected suitability score of the devices, combining the hardware features with their respective weights.

The overall network revenue is denoted by N. Consequently, the reward R for a node is defined as the sum of the product of Z and N over the service-time commitment, i.e. R = Sum(Z * N), representing that node’s expected reward.
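To make the formulas above concrete, here is a minimal Python sketch of the selection and reward calculation. It is illustrative only, not the production scheduler: the feature values, weights, and per-period network revenue are invented numbers, and the device pool is reduced to three rows.

```python
import math
import random

# Hypothetical feature matrix X (n devices x m hardware dimensions), normalized to [0, 1].
# Columns: CPU, RAM, Storage, GPU, Network, geolocation scarcity.
X = [
    [0.9, 0.8, 0.7, 0.0, 0.6, 0.3],   # device 0: strong CPU/RAM, no GPU
    [0.4, 0.5, 0.9, 0.0, 0.9, 0.8],   # device 1: good network, scarce region
    [0.7, 0.9, 0.6, 1.0, 0.7, 0.2],   # device 2: has a GPU
]

# Hypothetical weight vector W (m x 1): relative demand for each hardware dimension.
W = [0.25, 0.2, 0.1, 0.2, 0.15, 0.1]

def suitability(features, weights):
    """Z = X * W for a single device: weighted sum of its hardware features."""
    return sum(f * w for f, w in zip(features, weights))

Z = [suitability(row, W) for row in X]

def select_device(scores, temperature=10.0, cooling=0.99, iterations=1000, rng=random):
    """Simulated-annealing selection: a better candidate is always accepted,
    a worse one with probability exp(delta_e / T), and T cools each iteration."""
    current = rng.randrange(len(scores))
    for _ in range(iterations):
        candidate = rng.randrange(len(scores))
        delta_e = scores[candidate] - scores[current]   # ΔE = S(candidate) - S(current)
        if delta_e >= 0:
            p_accept = 1.0                              # better candidate: always accept
        else:
            p_accept = math.exp(delta_e / temperature)  # worse: decaying acceptance probability
        if rng.random() < p_accept:
            current = candidate
        temperature *= cooling                          # cool down: T <- 0.99 * T
    return current

chosen = select_device(Z)

# Expected reward over the service-time commitment: R = Sum(Z * N),
# with hypothetical per-period network revenue figures N.
N_per_period = [100.0, 120.0, 110.0]
R = sum(Z[chosen] * n for n in N_per_period)
print(f"chosen device: {chosen}, suitability: {Z[chosen]:.3f}, expected reward: {R:.2f}")
```

With these illustrative numbers the GPU-equipped device has the highest suitability, so once the temperature has cooled it is the one almost always selected, matching the intuition that better-resourced devices receive more tasks and rewards.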

2 Likes

2/
Node operators will be able to spin up a node via a user-friendly GUI, removing any technical skill requirement.

3/

  • Resource consumption: we make sure the AVS doesn’t take over the compute required by running ETH nodes. This is enforced at our networking layer.
  • Malicious AVS: we want to make sure the AVS software doesn’t attack the host machine.
  • Privacy: we want to make sure that it’s hard to peek inside the AVS, to maintain a degree of privacy.

4/
Throughput is improved as both the node operators and compute consumption scale proportionally. The economics follow the free-market supply-and-demand equilibrium. However, a waitlist mechanism may be adopted to put a hard cap on the number of nodes based on current demand, or alternatively the restaking requirements may be dynamically adjusted to ensure the number of qualified validators makes economic sense.

5/
The partnership provides additional security to restakers by avoiding unintended slashing, and full transparency into operators for delegation, enhancing control and trust in restaking activities.

2 Likes

What scenario are you thinking of in particular?

1 Like

Operators get security, along with the profit that comes with the transparency and control of trustless delegation.

It is still possible that other AVSs cause the overload and slashing. However, if you were to run another AVS within the x86 emulator, overload can be avoided by setting a maximum hardware capacity isolated for running that AVS.

2 Likes

Why is proof generation only conducted during the build phase? How does this ensure all computation is verifiable? Also interested to know the approximate proof generation time for such a proof during the build phase.

1 Like

Would be interesting to see how this pans out.

Just some random thoughts. Curious, have you guys thought about making it a general service/security framework for folks to build on EigenLayer, instead of being an AVS itself?

Because I assume by being an AVS, you gotta incentivize operators with financial means (stablecoin, your BLS token, etc.), which complicates the economics.

But if teams building AVS on EigenLayer can use your p2p and runtime frameworks as a part of their building processes, you could directly charge them for the service?

2 Likes


The time it takes to generate a proof is tied to the circuit size. The circuit is created only once during build time, which is the most resource-intensive part. Proof generation, which happens during runtime, typically scales linearly with circuit size and can be as quick as 30 seconds.

1 Like

Absolutely. Blockless is a general computing framework supporting verifiable, serverless execution that is already up and running (though without the restaking mechanism yet). We have a simple economic model where validators/operators are paid in line with their contributions. As an open-source, permissionless framework, any revenue directly benefits the community powering the network, fostering a healthy, self-sustaining ecosystem.

3 Likes