The EigenLayer ecosystem includes an AVS (Actively Validated Service) called Brevis. Its highlight is providing a decentralized ZK (zero-knowledge) coprocessor. But what is a ZK coprocessor? Why is it said to genuinely improve blockchain efficiency? Aren't Rollups enough? This article provides a brief introduction.
Contents
What problem does the ZK coprocessor aim to solve?
Use case: Product Data Collection and Computation
Problem 1: Blockchain virtual machines are not suitable for extensive computations
Problem 2: Applying external data has trust assumptions
Web3 data dilemma
Introduction to the ZK coprocessor
Separating computational work from the EVM
How does the ZK coprocessor avoid the aforementioned problems?
Difference between the efficiency enhancement of the ZK coprocessor and Rollups
Introduction to ZK coprocessor implementation projects
Brevis
Axiom
ZK coprocessor is not the only solution
Existing Web3 products often feel as though they lack features that Web2 products take for granted, such as automated recommendations, loyalty programs, and precision marketing. These have been standard in traditional software for years, so why are they missing from Web3 products?
To provide a more concrete example, when a user watches a TV series on Netflix, the backend collects the user’s viewing history and behavioral data. By analyzing this data through algorithms, the system can find content that the user may like and personalize recommendations and advertisements.
But in the Web3 ecosystem, when a user frequently trades stablecoins on Uniswap, exchanging USDT for USDC, why can’t the Uniswap interface automatically display this commonly used trading pair for the user?
The issue lies in “data.” Web2 products can systematically collect and compute data, such as user viewing history. However, Web3 products cannot do the same, even for simple transaction records.
Some readers may ask, “Aren’t transaction records already on the blockchain? Why can’t they simply be read?” It is not that simple. A smart contract cannot directly look up arbitrary historical records; it only sees the current state. To make history available on-chain, the records would have to be re-stored in contract storage or re-derived on every call, and either approach consumes a substantial amount of gas, making it unsuitable for a decentralized virtual machine.
The second issue is data processing. Even relatively simple aggregation demands significant computational resources when run on top of the EVM, and once the task becomes more complex (such as analyzing viewing history and weighting recommendations by watch time), the cost becomes prohibitive.
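To make the contrast concrete, here is a minimal sketch, assuming only a standard Ethereum JSON-RPC endpoint, of how cheaply historical data can be read off-chain. The endpoint URL, pool address, and event topic below are placeholders, not real values; a smart contract has no equivalent of this lookup, and re-implementing the aggregation inside the EVM would pay gas for every step.

```typescript
// Minimal off-chain sketch: fetch a user's historical swap events
// via the standard eth_getLogs JSON-RPC method.
// RPC_URL, POOL_ADDRESS, and SWAP_TOPIC are placeholders for illustration.

const RPC_URL = "https://example-rpc.invalid";                      // hypothetical endpoint
const POOL_ADDRESS = "0x0000000000000000000000000000000000000000";  // placeholder pool
const SWAP_TOPIC = "0x" + "00".repeat(32);                          // placeholder event signature hash

async function fetchSwapLogs(fromBlock: string, toBlock: string): Promise<unknown[]> {
  // One cheap HTTP call off-chain; the equivalent lookup is simply not
  // available to a smart contract, which only sees current state.
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_getLogs",
      params: [{ fromBlock, toBlock, address: POOL_ADDRESS, topics: [SWAP_TOPIC] }],
    }),
  });
  const { result } = await res.json();
  return result ?? [];
}

// Even the "simple" processing step (e.g. counting a user's USDT/USDC swaps)
// is trivial off-chain but would be cost-prohibitive if done in the EVM.
fetchSwapLogs("0x0", "latest").then((logs) => {
  console.log(`fetched ${logs.length} swap logs off-chain`);
});
```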
Neither Layer 1 nor Layer 2 chains were originally designed for tasks involving large-scale data retrieval or computation, and even a high TPS does not help solve this problem.
“If the blockchain cannot perform computations, then let’s do it off-chain!” This is a reasonable line of thinking, and services like Etherscan, Dune, and DeBank have adopted this approach to solve many problems.
But when off-chain computation results affect a significant TVL or user rights, the trust risk grows. Take airdrops as an example: although the eligibility rules are public, the actual filtering is still performed by the team, which then simply publishes the final list. This introduces a trust assumption (how can users be sure the team has not cheated?), which is why community airdrops so often end in controversy and grievances, not to mention the consequences of inaccurate data.
As a result, plain off-chain computation is infeasible in many application scenarios.
This creates a dilemma: fetching and computing data directly on-chain is prohibitively expensive, while consuming off-chain computation results introduces trust assumptions. It is not surprising that native services built on Web3 data are still scarce.
Almost all mainstream Web2 applications and products leverage data to create experiences and maintain dominance. If Web3 truly aims to create “killer applications,” data, while not everything, is definitely essential.
Hence the market has introduced the concept of the ZK coprocessor, which resolves this dilemma by performing computation off-chain without adding trust assumptions.
The ZK coprocessor, sometimes called a ZK auxiliary processor, uses off-chain computation and zero-knowledge proofs to speed up the front-end steps of data collection and data processing, improving blockchain efficiency without introducing trust assumptions.
The name “coprocessor” comes from the relationship between CPUs and GPUs: the GPU takes over the heavy graphics workloads the CPU is poorly suited for, maximizing the computer’s overall efficiency. A blockchain is a decentralized computer and can adopt the same division of labor.
From an abstract perspective, a coprocessor primarily focuses on two tasks:
Data retrieval: reading transaction records from the blockchain ledger and proving their authenticity with a ZK proof.
Data computation: performing the requested calculations over that data and proving the result’s correctness with a ZK proof.
Finally, the result is returned to the smart contract, which only needs to verify the zero-knowledge proof to use the data, keeping on-chain execution cheap. A conceptual sketch of this flow follows.
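The sketch below illustrates the pattern only. All names here (CoprocessorProver, ZkProof, submitProofToContract) are hypothetical interfaces invented for illustration; actual projects such as Brevis and Axiom expose their own SDKs, circuits, and verifier contracts.

```typescript
// Conceptual sketch of the ZK coprocessor flow: heavy retrieval and
// computation happen off-chain, and the chain only verifies a succinct proof.

interface ZkProof {
  proofBytes: string;   // the ZK proof attesting to data authenticity and correct computation
  publicOutput: string; // e.g. "user swapped USDT/USDC 42 times in the last 30 days"
}

interface CoprocessorProver {
  // 1. Data retrieval: read historical records and prove they belong to the chain.
  // 2. Data computation: run the requested aggregation and prove it was done correctly.
  proveQuery(query: { user: string; fromBlock: number; toBlock: number }): Promise<ZkProof>;
}

async function runQuery(prover: CoprocessorProver, user: string): Promise<void> {
  // Heavy work happens off-chain...
  const proof = await prover.proveQuery({ user, fromBlock: 0, toBlock: 19_000_000 });

  // ...and the smart contract only verifies the succinct proof, which is far
  // cheaper than redoing the retrieval and computation on-chain.
  await submitProofToContract(proof);
}

// Placeholder for an on-chain verifier call (hypothetical).
async function submitProofToContract(proof: ZkProof): Promise<void> {
  console.log("submitting proof for verification:", proof.publicOutput);
}
```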
The ZK coprocessor primarily improves the efficiency of data collection and data processing, whereas Rollups improve the efficiency of data processing and execution. They are not competitors but complements.
Mainstream ZK coprocessor projects include Brevis, Herodotus, and Axiom. While the core concepts align with the aforementioned content, the implementation methods may differ slightly.
Brevis, built by the same team behind Celer Network, is divided into three parts (a rough sketch of how they fit together follows the list):
zkFabric: Responsible for collecting data from the blockchain and computing zero-knowledge proofs for the blockchain header information.
zkAggregatorRollup: Stores and transmits data to smart contracts on the blockchain, including the data collected by zkFabric and zkQueryNet.
zkQueryNet: Handles the data and computations required by Web3 smart contracts.
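Based purely on the description above, the division of labor might be sketched as follows. These interfaces are illustrative only and are not the actual Brevis SDK.

```typescript
// Rough, assumed sketch of how the three Brevis components relate,
// derived only from the high-level description in this article.

interface BlockHeaderProof { chainId: number; blockNumber: number; proof: string; }
interface QueryResultProof { queryId: string; result: string; proof: string; }

interface ZkFabric {
  // Collects source-chain data and proves block header information.
  proveHeader(chainId: number, blockNumber: number): Promise<BlockHeaderProof>;
}

interface ZkQueryNet {
  // Handles the data queries and computations requested by smart contracts.
  proveQuery(queryId: string, headerProof: BlockHeaderProof): Promise<QueryResultProof>;
}

interface ZkAggregatorRollup {
  // Stores proofs from zkFabric and zkQueryNet and relays them to on-chain contracts.
  submit(proofs: Array<BlockHeaderProof | QueryResultProof>): Promise<void>;
}
```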
Axiom provides a trustless blockchain query service. Unlike the centralized off-chain query architectures mentioned earlier, the correctness of Axiom’s query results can be verified through zero-knowledge proofs.
The ZK coprocessor can solve the problems of Web3 data collection and computation, which are crucial for improving product user experience and precision marketing.
However, there are other ways to solve “data computation on the blockchain.” For example, Smart Layer utilizes external data to achieve similar results, but the trust assumption risk depends on the security of the Smart Layer network itself.
Recommended Reading:
What is Smart Layer? How to integrate Web3 with real-life scenarios?
Reason for recommendation: this article provides a detailed introduction to Smart Layer’s design architecture and operating principles. Reading it alongside this piece gives a more complete picture of the problems the ZK coprocessor aims to solve.
If the data at stake is relatively unimportant, relying on off-chain computation without full trust guarantees is also a viable option. What matters is solving the right problem.