The State of Blockchain x AI Inference

OpenGradient
Aug 9, 2023


Introduction

Blockchain technology and artificial intelligence (AI) are two of the hottest fields in technology right now, and for good reason. AI is rapidly automating tasks and making decisions that were once thought to be the exclusive domain of humans, while blockchain technology offers a new paradigm of decentralized compute that could revolutionize the modern financial system.

In this article, we dive into the current state of the integration between these two fields. We’ll cover the use-cases of AI on the blockchain, why AI is significantly more prevalent in Web2.0 than in Web3.0, the major challenges the integration faces, and the existing solutions to those challenges.

While AI has been around for decades, the recent surge in its popularity, driven by the commercialization of large language models, has shed much more light on the impact AI can have on our lives. The use-cases for AI, however, extend far beyond ChatGPT: AI is being used in healthcare for diagnostics and treatment, and in consumer applications it powers the feed-ranking algorithms that determine what users see on the main page. The use-cases are endless, and research estimates the compound annual growth rate (CAGR) of the AI market at roughly 20% over the next decade.

Blockchain technology has also skyrocketed in popularity over the past half-decade, with tens of billions of dollars of capital now locked on-chain. Blockchain technology brings value in the form of increased efficiency, greater transparency, and censorship-resistance to the financial system, and has incited a Cambrian Explosion of dApps (decentralized applications) being developed on the blockchain: from financial applications like decentralized exchanges or lending markets, to newer use-cases like NFT (non-fungible token) collecting or tokenization of virtual assets in games.

The contrast between the state of AI in Web2.0 and Web3.0, however, remains quite stark. AI seems to have penetrated every corner of our lives in Web2.0, from Siri to your Instagram feed to the front page of recommendations when you shop on Amazon. Unfortunately, the same cannot be said about most modern blockchain use-cases: AMMs (automated market makers) still quote static, arbitrarily determined spreads, and systematic trading algorithms, simple model-driven analysis, and most compute-heavy workloads cannot be performed on-chain. These are impactful use-cases, too, as DeFi applications lock up billions of dollars in value. So…why is that? Although the convergence of AI and blockchain has many implications, as AI could augment existing Web3.0 use-cases and applications just as it did in Web2.0, integrating the two fields continues to be a challenge.

Difficulties in the AI x Blockchain Convergence

One of the difficulties of AI and blockchain integration comes from the ostensible clash in design ethos between the two technologies when it comes to decentralization. On one hand, AI training and inference are both extremely expensive, in dollar terms and in computational requirements: as an example, it costs approximately $12 million USD just to train GPT-3. Moreover, companies develop extremely specialized hardware to train models and serve inferences: AI inference, for example, can be up to 60–70% more efficient when run on GPUs. These high barriers of computation and cost result in high levels of centralization for both training and inference: models are typically trained and served on powerfully equipped HPC clusters or servers, where massive amounts of user inference requests are computed.

The Blockchain Trilemma

On the other hand, one of the core pillars of blockchain technology, often documented in the classic blockchain trilemma, is decentralization. In case one isn’t familiar: modern blockchains have to design their architecture to strike a balance between security, decentralization, and scalability. For instance, raising the compute requirements for running a full node could improve performance, and thus scalability, but it would hurt decentralization, since fewer participants could afford to run a full node. This is a major obstacle to incorporating computationally heavy processes, like training or performing inference on AI models, into the blockchain, as blockchains want to remain as decentralized as possible.

In the following sections, we take a look at limitations and challenges of both the native implementation of AI on the blockchain as well as the off-chain approach in the context of model inference.

Challenges of Native Inferences

On-chain native implementation of AI is difficult primarily for three reasons. Firstly, as discussed above, there are compute limitations introduced by the hardware requirements for validator nodes. Secondly, there is a scaling bottleneck due to per-block gas limits in modern proof-of-stake networks. Last but certainly not least, it’s difficult to scale when every full node must run every single inference in order to verify state.

Before moving on, let’s clarify what we mean by the “native” implementation: direct on-chain inference or training of a model by the validator nodes of a blockchain network.

As we’ve discussed in the previous section, there are immediate hardware limitations that make CPU validators unsuitable for servicing AI inferences. GPUs have many more cores than CPUs, which allows them to perform far more calculations at the same time: a typical GPU might have thousands of cores, while a typical consumer CPU might have only a handful. GPUs are also better optimized for floating-point operations, the most common type of operation in AI workloads. Without running a network of highly performant GPU-based full nodes, which would come at the cost of decentralization, it is difficult to achieve scalable on-chain inference.
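
To make the gap concrete, here is a minimal benchmark sketch (assuming PyTorch is installed) that times the dense matrix multiplications at the heart of neural-network inference on a CPU and, if available, on a GPU. Exact numbers depend entirely on your hardware, but the gap is typically one to two orders of magnitude:

```python
import time
import torch

def bench_matmul(device: str, n: int = 2048, repeats: int = 5) -> float:
    """Time an n x n matrix multiply, the core operation of neural-net inference."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    # Warm-up so one-time setup (kernel launch, caching) isn't measured.
    torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / repeats

if __name__ == "__main__":
    cpu_t = bench_matmul("cpu")
    print(f"CPU: {cpu_t * 1e3:.1f} ms per matmul")
    if torch.cuda.is_available():
        gpu_t = bench_matmul("cuda")
        print(f"GPU: {gpu_t * 1e3:.1f} ms per matmul ({cpu_t / gpu_t:.0f}x faster)")
    else:
        print("No GPU available -- the same situation a CPU-only validator is in.")
```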

Inference or training on CPU validators is already inefficient as it is, and on top of that, the amount of gas per block is highly limited. For those who are unfamiliar, “gas” is essentially a measure of computation on the blockchain; more computationally heavy processes cost more gas to run. Blockchains have a gas limit per block: a cap on how much cumulative computation a block’s worth of transactions can contain. This is important because it bounds the size of the block and ensures that full nodes can keep up with network validation. In other words, if too much computation were packed into a block, validator nodes would gradually stop being able to keep up with the network due to space and speed requirements. When you consider that network congestion is already a major concern for blockchains, incorporating expensive AI processes could seriously exacerbate the problem.
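
For a rough sense of scale, here is a back-of-the-envelope sketch of what naively pricing a model’s multiply-accumulate operations in EVM gas would look like. The figures are approximations used purely for illustration: the EVM charges roughly 5 gas per MUL and 3 gas per ADD opcode, and Ethereum mainnet’s block gas limit has been around 30 million:

```python
# Rough, illustrative numbers only: real on-chain execution would also pay for
# memory, storage, and calldata, which makes the picture strictly worse.
BLOCK_GAS_LIMIT = 30_000_000
GAS_PER_MUL = 5
GAS_PER_ADD = 3

def inference_gas(num_mac_ops: int) -> int:
    """Gas for just the multiply-accumulate arithmetic of one forward pass."""
    return num_mac_ops * (GAS_PER_MUL + GAS_PER_ADD)

# Assume ~1 multiply-accumulate per parameter for a dense forward pass.
for params in (10_000, 1_000_000, 100_000_000):
    gas = inference_gas(params)
    blocks = gas / BLOCK_GAS_LIMIT
    print(f"{params:>11,} params -> {gas:>15,} gas (~{blocks:,.2f} full blocks)")
```

Even a modest million-parameter model eats a meaningful fraction of a block for a single inference, and a 100-million-parameter model would not fit in dozens of blocks.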

Lastly, security is also a crucial aspect of a blockchain network, so in modern proof-of-stake blockchains, full nodes re-execute the transactions in a proposed block and attest to the post-block state as a way to reach consensus. This means that if on-chain AI existed, for every inference requested by a user, every full node would have to run that inference once. Now, imagine thousands of users around the world running thousands of inferences on the blockchain, and your home CPU validator executing ALL of them. That redundancy is nothing to bat an eye at for regular transactions, but for inferences from AI models that are already computationally expensive to begin with, and moreover run on inefficient hardware, it is yet another obstacle to native AI on the blockchain.

Challenges of Off-Chain Inferences

The challenges delineated above with the native implementation have prompted projects to consider the off-chain approach. There are a couple of ways this can be done, but it typically resembles the following workflow: a request is made on-chain, which prompts a listening node to make an off-chain API request to a model provider that computes the result, and the responsible party then writes the result into smart contract storage on-chain, where it can be retrieved.
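
Below is a minimal, purely illustrative sketch of that workflow. Every name in it (InferenceRequest, poll_requests, run_model, submit_result) is a hypothetical stand-in rather than any real protocol’s or library’s API; the heavy compute happens off-chain, and only the result is written back:

```python
import time
from dataclasses import dataclass

@dataclass
class InferenceRequest:
    request_id: int    # identifier emitted by the on-chain request event
    model_id: str      # which model the contract is asking for
    input_data: bytes  # encoded model input

def poll_requests() -> list[InferenceRequest]:
    """Stand-in for watching the chain for new request events (e.g. via an RPC node)."""
    return []  # placeholder: a real listener would return newly emitted requests

def run_model(model_id: str, input_data: bytes) -> bytes:
    """Stand-in for calling the off-chain model provider's API."""
    return b"\x00"  # placeholder result

def submit_result(request_id: int, result: bytes) -> None:
    """Stand-in for the transaction that writes the result into contract storage."""
    print(f"submitting result for request {request_id}: {result.hex()}")

while True:
    for req in poll_requests():
        result = run_model(req.model_id, req.input_data)  # heavy compute stays off-chain
        submit_result(req.request_id, result)             # only the result goes on-chain
    time.sleep(1)
```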

While this is more scalable and efficient than the native approach discussed above, this introduces an entirely new problem: integrity.

AI models are often called a “black box”, and for good reason: people typically have very little transparency or insight into why and how a model produces a given inference result. As a result, one can imagine that running inferences off-chain exposes blockchain networks to completely novel attack vectors. Here’s a practical example to paint the picture: imagine a protocol that relies on off-chain ML to predict the forward returns of tokens before entering long/short positions that reflect those forecasts. If I were the model provider servicing these inferences, I could deliberately skip running the model, return a bogus result, and simply front-run or sandwich the protocol’s transactions for an easy profit. I could return a result that signals the protocol to buy $DOGE, buy $DOGE before it does, and sell right after it buys, profiting from its market impact. That’s bad, and extremely dangerous, because unlike many security vulnerabilities it can go completely undetected.
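
To see how profitable this can be, here is a toy simulation of a sandwich on a constant-product (x * y = k) AMM pool. The pool and trade sizes are made up, and fees and slippage protection are ignored for simplicity:

```python
def buy_token(reserves: tuple[float, float], usd_in: float) -> tuple[float, tuple[float, float]]:
    """Spend `usd_in` USD to buy the token; returns (tokens received, new reserves). No fees."""
    token_res, usd_res = reserves
    k = token_res * usd_res
    new_usd = usd_res + usd_in
    new_token = k / new_usd
    return token_res - new_token, (new_token, new_usd)

def sell_token(reserves: tuple[float, float], tokens_in: float) -> tuple[float, tuple[float, float]]:
    """Sell `tokens_in` tokens; returns (USD received, new reserves). No fees."""
    token_res, usd_res = reserves
    k = token_res * usd_res
    new_token = token_res + tokens_in
    new_usd = k / new_token
    return usd_res - new_usd, (new_token, new_usd)

pool = (1_000_000.0, 100_000.0)  # 1M tokens, 100k USD -> starting price of 0.10 USD
attacker_spend, victim_spend = 10_000.0, 50_000.0

# 1. The dishonest provider returns a bogus "buy" signal and buys first.
tokens_bought, pool = buy_token(pool, attacker_spend)
# 2. The victim protocol acts on the signal, pushing the price up further.
_, pool = buy_token(pool, victim_spend)
# 3. The provider sells into the victim's market impact.
usd_out, pool = sell_token(pool, tokens_bought)

print(f"attacker profit: {usd_out - attacker_spend:,.0f} USD")  # roughly +10,000 USD here
```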

In order for blockchains to truly develop use-cases that depend on AI, security protocols that ensure the integrity of the inferences must be put in place.

Off-chain AI exposes new vulnerabilities

The Solution

The solutions to the AI x Blockchain integration are mainly two-pronged, involving zero-knowledge machine learning (ZKML) and optimistic machine learning (OPML). As we will detail in future blog posts, both are solutions the Vanna Blockchain will integrate natively on-chain.

Zero-Knowledge Machine Learning

ZKML is a bleeding-edge field of research that involves generating cryptographic proofs that some computation, in this case the inference performed by a model, was carried out correctly and with integrity. As described succinctly by EZKL, we can create ZK proofs that cryptographically prove statements like:

I ran this publicly available neural network on some private data and it produced this output

or

I ran my private neural network on some public data and it produced this output

The term “zero-knowledge”, as one might infer, derives from the fact that certain things, like the model input or the model weights, can be kept private from the verifier without any compromise to the cryptographic security of the proof.

The advent of ZK is massive for decentralized AI/ML, because the execution of the inference as well as the proof generation can now be done on one beefy machine, and the proof can be cheaply verified across a network of light nodes to ensure that the inference was done correctly instead of having to replicate the computation across the entire network!
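
Here is a conceptual sketch of that prove-once, verify-everywhere pattern. The functions prove_inference and verify_proof are hypothetical placeholders for a real ZKML toolkit (such as EZKL), not working cryptography; the point is the asymmetry between one expensive prover and many cheap verifiers:

```python
from dataclasses import dataclass

@dataclass
class ZKInference:
    output: list[float]  # the claimed model output
    proof: bytes         # succinct proof that the output came from the committed model

def prove_inference(model_commitment: str, model_input: list[float]) -> ZKInference:
    """Runs on ONE powerful prover node: expensive inference plus expensive proof generation."""
    output = [sum(model_input)]   # placeholder for the real model's forward pass
    proof = b"zk-proof-bytes"     # placeholder for the real (costly) ZK proof
    return ZKInference(output=output, proof=proof)

def verify_proof(model_commitment: str, model_input: list[float], claim: ZKInference) -> bool:
    """Runs on EVERY light node: a cheap, roughly constant-time check of the proof."""
    return claim.proof == b"zk-proof-bytes"  # placeholder for real cryptographic verification

claim = prove_inference("model-hash-abc", [1.0, 2.0, 3.0])  # done once, by the prover
print(all(verify_proof("model-hash-abc", [1.0, 2.0, 3.0], claim)
          for _ in range(1_000)))  # done by many nodes, each very cheaply
```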

However, whereas verification of a ZK proof can be done cheaply in roughly constant time, the biggest challenges ZKML currently faces are the sky-high computational and time costs of generating the proof. As a result, companies like Cysic are building hardware tailored to proof generation, and developers at EZKL are constantly improving their software to speed it up.

Traditional Blockchain VS Blockchain with ZKML

Optimistic Machine Learning

Optimistic machine learning is another solution to decentralizing AI inference, with “optimistic” referring to the idea of optimistically trusting the results of an inference unless (or until) someone challenges the result.

In a challenge, the challenger first puts up a financial stake and a verification game is played, in which a bisection protocol is used to locate the exact disputed step that caused the divergence in the computation. An arbitrator smart contract then resolves the dispute, heavily slashing either the inference provider (if the challenge succeeds) or the challenger (if it fails).
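
Below is a minimal sketch of the bisection search at the heart of such a verification game. It assumes each party has committed to a hash of the state after every execution step, so that once the traces diverge they stay diverged, which is what makes a binary search valid:

```python
def first_divergent_step(asserter_trace: list[bytes], challenger_trace: list[bytes]) -> int:
    """Binary search for the first index where the two committed traces disagree."""
    lo, hi = 0, len(asserter_trace) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if asserter_trace[mid] == challenger_trace[mid]:
            lo = mid + 1   # agreement up to mid: the dispute lies later
        else:
            hi = mid       # disagreement at mid: the dispute is here or earlier
    return lo              # the single step the on-chain arbitrator must re-execute

# Toy example: the asserter computes step 6 incorrectly, and every state hash
# after that point diverges as well.
honest = [f"state-{i}".encode() for i in range(10)]
cheating = honest[:6] + [f"bogus-{i}".encode() for i in range(6, 10)]
print(first_divergent_step(cheating, honest))  # -> 6: the step the arbitrator re-runs
```

The key property is that only this one step ever needs to be re-executed on-chain, regardless of how long the full computation was.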

This “innocent until proven guilty” optimistic model comes with both pros and cons compared to ZKML. The biggest advantage is the lower time and computational cost of not generating a proof for every inference, which matters especially for larger models, where both the inference and the proof generation can be extremely expensive. The biggest disadvantage is the lack of the immediate cryptographic guarantees that ZKML offers: in the event of a successful challenge, the chain may have to halt and replay transactions, among other recovery measures.

Conclusion

While the intersection of AI and blockchain is an exciting space, as AI could revolutionize Web3.0 use-cases just as it did Web2.0’s, inherent limitations in software architecture and hardware requirements make this a difficult challenge.

And that’s one problem Vanna Labs is tackling head-on: through bleeding-edge solutions like OPML and ZKML, the Vanna network will be able to execute on-chain inference in a scalable and secure fashion. Stay tuned to our Medium page, our website, and our Twitter for further updates coming very soon.

Twitter: https://twitter.com/0xVannaLabs

Website: https://www.vannalabs.ai/
