A Gentle Introduction to zkML
What is “Zero-Knowledge Machine Learning” and why is it important?
Thank you to Jason Morton from EZKL for the feedback and review.
We’ve all heard of machine learning, but what is zero-knowledge machine learning (zkML)? In this article, we break down what it is at a high level and explain its significance across various use-cases.
Introduction
Zero-knowledge machine learning (zkML) is a cryptographic protocol in which the party that computes the output of an AI model on a given input also generates a cryptographic proof about that computation. (Note: computing an output from an input on an AI or ML model is also known as “inference”.)
With zkML, the party that computes the inference can prove things like:
“I ran X input on Y model to generate Z output”
Okay, it’s quite apparent what the “ML” in zkML is about. What about zero-knowledge (ZK)? The magic of zkML comes from the fact that the prover can prove these statements while keeping certain elements (either X, Y or Z in the example above) hidden from verifiers. In other words, the prover can prove something is true while still ensuring others retain no knowledge of certain elements of the computation. More practically, an example of the zero-knowledge property may look like:
“I ran X input on Y model to generate Z output, and I can prove this without revealing the specific weights of Y model.”
It’s almost like creating a cryptographic “receipt” for the inference, but instead of proof of purchase, it’s a proof that a specific computation has occurred.
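To make the “receipt” intuition concrete, here is a toy sketch in Python. Note loudly: this is only a hash-based commitment, not a zero-knowledge proof — checking it requires the weights themselves, whereas a real zkML proof lets a verifier check the statement without ever seeing the weights. It illustrates just the first half of the story: binding a model provider to a specific model Y.

```python
import hashlib

def commit(weights: bytes) -> str:
    """Publish a binding commitment (hash) to the model weights."""
    return hashlib.sha256(weights).hexdigest()

def check_commitment(weights: bytes, commitment: str) -> bool:
    """An auditor holding the weights can check they match the commitment.
    (A real zkML proof would make this check possible WITHOUT the weights.)"""
    return commit(weights) == commitment

# The model provider publishes this once, before serving any inference.
model_weights = b"serialized weights of Y model (illustrative placeholder)"
published = commit(model_weights)

# Later, an auditor with access to the weights confirms the model is unchanged,
# while a swapped-in cheaper model fails the check.
assert check_commitment(model_weights, published)
assert not check_commitment(b"cheaper substitute model", published)
```

The commitment is what ties the prover’s statement to one specific model; zkML’s contribution is proving the inference against that commitment while keeping the weights hidden.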
But why is zkML important? And how can it affect you in your everyday life? Keep reading and we promise we will clear things up!
The Significance of zkML
Machine intelligence is increasingly pervasive, infiltrating nearly every facet of modern life and rapidly expanding its footprint across industries. From the recommendation systems that drive your online shopping behavior to the quantitative trading systems that power financial markets, AI has become an integral part of our daily experiences.
But usually, regular folks don’t think about that, at all.
When you type a query into ChatGPT, you implicitly trust that OpenAI runs the right query on the right model. When you ask Siri a question about Trump’s political stance, you trust that Apple’s servers are conducting the inference correctly.
At first glance, there seems to be nothing wrong with this, but when you put AI companies’ ballooning compute costs in perspective, there’s actually a strong financial incentive for these companies to run inference on cheaper models or manipulate queries to significantly reduce their computational costs.
This seems relatively harmless when ChatGPT responses suddenly drop in quality, or when your generated-art NFT turns out badly. But what if a fintech company uses cheaper models that produce inaccurate risk assessments? Or a healthcare AI company uses a cheaper model that produces a false-negative diagnosis?
We’re not saying AI companies do this. We’re saying that if they decided to, nobody would know, and there would be no recourse for people to verify it or do anything about it.
Use-Cases
So what are the core use-cases of zkML? Here we break them down into a few core categories with a variety of examples. Note that zkML is highly applicable to blockchains because most blockchain virtual machines (VMs) don’t support computation-heavy operations like inference; the inference must therefore happen outside the VM, and a zkML proof verified inside the VM is what validates that off-chain computation. This is why zkML is often seen as the key piece of infrastructure that enables blockchains to harness AI and ML.
- DeFi
– Dynamic spread calculation in AMMs for liquidity protection
– Sophisticated risk engines for DeFi protocols
– On-chain derivatives pricing models
- NFTs/GameFi
– NFT art generation with diffusion models
– On-chain GameFi dialogue generation with LLMs
- Web3.0 Security
– On-chain smart contract vulnerability detection
- AI-driven on-chain reputation systems
– On-chain reputation systems to measure flow toxicity in AMM trading
– On-chain reputation systems to flag sybil accounts in airdrops
- Validity ML
– Ensuring traditional AI/ML companies are running inference on the correct model with the correct inputs. This could be particularly important in fields like healthcare AI, or default risk assessment models in finance.
For more on use-cases specific to Web3.0, feel free to check out our previous article: Applications of AI/ML on the Blockchain.
The Vanna Network
The problems described above are precisely the ones we are tackling at Vanna Labs: the Vanna Network is a P2P blockchain network featuring native AI and ML inference secured by zkML technology from EZKL.
In other words, we’re building open-source, decentralized infrastructure that allows anyone to permissionlessly upload models, run inference on others’ models, and validate zkML proofs for the network, all on-chain. Anyone can plug in any model with the guarantee that the inference is cryptographically secured, and consumers of the inference can validate and trust the model output. The Vanna Network will also feature cross-chain communication and traditional Web2.0 API endpoints, so anyone can harness the power of verifiable inference from any context. We aim to bring zkML-secured inference to everyone, made so easy that it’s a simple function call. Feel free to give the Vanna Network docs a spin for more context.
Goal: Censorship-resistant, verifiable, and permissionless AI everywhere.
The Magic of zkML
So… how does this magic work? We’d highly recommend looking through EZKL’s blog as well as their docs to get a stronger technical understanding of zkML, but here’s a high-level overview.
- Compilation: The AI model is converted into an arithmetic circuit. In the context of ZK proofs, tools like Circom provide a high-level domain-specific language for describing computations as circuits.
- Setup Phase: The setup has two steps. First, a trusted party generates the public parameters used by the proof system, e.g. via a multi-party computation (MPC) ceremony like the Perpetual Powers of Tau. Second, the prover (the party who set up the compiled circuit) generates the circuit-specific proving and verifying keys used for proof generation and validation.
- Witness Generation: The actual data (witness) related to the statement is mapped into the circuit.
- Proof Generation: Using the proving key and the mapped witness, a prover generates a succinct proof.
- Proof Validation: The generated proof is made available along with the statement to verifiers. The verifiers can use the public verification key to validate a proof cheaply in constant time.
A key point to keep in mind is that proof generation is computationally expensive, while proof validation is very cheap. Because of this, it makes sense for projects like Vanna to adopt an architecture where a proof is generated once but validated many times by nodes across the network.
Conclusion
In conclusion, zkML has tremendous implications for AI and its applications. It prevents model providers from engaging in fraudulent inference to cut costs, it provides guarantees of the integrity of your inference, and it can do all this without revealing information that needs to remain private.
We hope you’re as bullish on zkML as we are. Feel free to follow us on our socials.
Website: https://www.vannalabs.ai/
Twitter: https://twitter.com/0xVannaLabs
Discord: https://discord.com/invite/68CNHHcMK8
Finally, a special thanks to the EZKL team — check them out below!
Website: https://ezkl.xyz/
Github: https://github.com/zkonduit/ezkl
Twitter: https://twitter.com/ezklxyz