When Your Smart Contract Calls an AI Model, Where Does It Actually Go?
RaySonSio · 6 min read · Just now
ok this is not weird bad, more like weird interesting. "oh shit this is actually novel" weird.
everyone thinks of ritual as "AI on blockchain" but few explain what happens when your contract goes "hey i need to run llama 3" and sends that request into the void
so let me break down what's actually happening because it's kinda wild
the request leaves your chain and goes... where exactly?
okay so you deployed a contract on base. needs AI inference for whatever - risk scoring, content moderation, agent decision making, doesn't matter.
you call the AI precompile. transaction gets signed. and then what?
this is where it gets interesting, and where ritual's architecture is genuinely different from what you'd expect
the request doesn't go to "ritual's servers" or some centralized endpoint. it hits ritual's network as a compute job broadcast.
think of it like a mempool but for computation instead of transactions. your job sits there waiting to be picked up.
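to make the mempool-for-compute analogy concrete, here's a toy python sketch - every name here (ComputeJob, JobPool, the fields) is made up for illustration, not ritual's actual API:

```python
# sketch of the "mempool for compute" idea - hypothetical names, not ritual's API
from dataclasses import dataclass

@dataclass
class ComputeJob:
    job_id: int
    model: str        # e.g. "llama-3-8b"
    payload: bytes    # serialized inference input
    max_fee: int      # highest fee the caller will pay
    callback: str     # contract function to call with the result

class JobPool:
    """pending compute jobs waiting to be picked up, like a tx mempool."""
    def __init__(self):
        self.pending: dict[int, ComputeJob] = {}

    def broadcast(self, job: ComputeJob):
        self.pending[job.job_id] = job

    def visible_to(self, supported_models: set[str]) -> list[ComputeJob]:
        # a node only sees jobs it can actually run
        return [j for j in self.pending.values() if j.model in supported_models]

pool = JobPool()
pool.broadcast(ComputeJob(1, "llama-3-8b", b"score this loan", max_fee=100, callback="onResult"))
```

the key difference from a tx mempool: jobs are filtered by capability, so a node without the right model weights never even sees your request.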
nodes compete for your job
ritual nodes aren't validators in the traditional sense. they're compute providers with different hardware setups running different workloads.
when your AI inference request hits the network, nodes that CAN run that specific model see it and compete to fulfill it.
competition happens on:
- hardware capability - do they have the GPU/memory to run this model?
- pricing - what's their bid under the resonance mechanism?
- reputation - have they been reliable for similar jobs?
- current load - are they already maxed out or have capacity?
so it’s not random assignment or round-robin. it’s an actual market where nodes bid to run your compute.
this is why resonance (their fee mechanism) matters so much. you need price discovery for heterogeneous compute or the whole thing falls apart.
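here's a rough python sketch of how a selection market could weigh those four factors. the scoring formula and all the numbers are invented for illustration - this is not ritual's actual resonance pricing:

```python
# toy node-selection scoring - formula and weights are made up for illustration
def score_bid(price: float, reputation: float, load: float, can_run: bool) -> float:
    """lower is better: cheap, reliable, not overloaded."""
    if not can_run:
        return float("inf")   # hardware can't run the model -> never selected
    # penalize price, penalize current load (0..1), reward reputation (0..1)
    return price * (1.0 + load) / max(reputation, 0.01)

bids = {
    "node_a": score_bid(price=10.0, reputation=0.95, load=0.2, can_run=True),
    "node_b": score_bid(price=8.0,  reputation=0.50, load=0.9, can_run=True),
    "node_c": score_bid(price=5.0,  reputation=0.99, load=0.1, can_run=False),
}
winner = min(bids, key=bids.get)
```

note how the cheapest node (node_c) loses outright on hardware, and the second-cheapest (node_b) loses on reputation and load. that's the point of a market over round-robin: price alone doesn't win.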
the compute happens in a sidecar
once a node picks up your job, execution doesn't happen in the main EVM runtime
that would be insane. you can't run llama 3 inference inside ethereum's execution environment. completely different resource requirements.
instead it routes to the AI sidecar - a dedicated execution environment running parallel to main chain execution.
the sidecar is basically a specialized compute container. GPU access, model weights cached, optimized for inference workloads. completely separate from regular smart contract execution.
your main chain keeps humming along doing normal EVM stuff. the AI sidecar handles the heavy lifting.
this is the whole point of ritual's architecture - different compute for different jobs instead of one-size-fits-all validators.
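a minimal sketch of that per-workload routing idea, with invented names - the point is just that different job kinds hit different executors instead of one runtime doing everything:

```python
# sketch of per-workload routing: EVM txs stay in the main runtime,
# inference jobs go to a dedicated sidecar. all names are illustrative.
def run_evm(tx: dict) -> str:
    return f"evm executed {tx['to']}"

def run_ai_sidecar(job: dict) -> str:
    # stand-in for a GPU container with cached model weights
    return f"sidecar ran {job['model']}"

EXECUTORS = {"evm_tx": run_evm, "ai_inference": run_ai_sidecar}

def dispatch(work: dict) -> str:
    # different compute for different jobs, not one-size-fits-all
    return EXECUTORS[work["kind"]](work)
```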
results come back with proof
okay so the node ran your inference in the sidecar. got a result. now what?
can't just trust them right? "yeah bro i totally ran llama 3 correctly trust me"
this is where computational integrity comes in and ritual has a few options:
- optimistic verification - assume it’s correct unless challenged, if challenged node gets slashed
- zk proofs - node generates succinct proof that computation was executed correctly, can verify on-chain
- TEE attestations - if running in trusted execution environment, hardware provides cryptographic proof
- multi-party verification - multiple nodes run the same job, compare results, slash on mismatch
which method gets used depends on the job requirements and what level of security you need vs cost you're willing to pay.
for most stuff optimistic is fine. for high-value decisions you want zk or TEE. for critical stuff you want multi-party.
point is the result comes back with SOME cryptographic guarantee it was computed correctly. not just "node said so"
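the multi-party option is the easiest one to sketch: majority result wins, dissenters get slashed. toy python below, purely illustrative - not ritual's actual slashing logic:

```python
# toy multi-party verification: several nodes run the same job,
# any node whose result disagrees with the majority gets slashed
from collections import Counter

def multi_party_verify(results: dict[str, str]) -> tuple[str, list[str]]:
    """return (accepted result, nodes to slash)."""
    majority, _ = Counter(results.values()).most_common(1)[0]
    slashed = [node for node, r in results.items() if r != majority]
    return majority, slashed

accepted, slashed = multi_party_verify({
    "node_a": "risk_score=0.72",
    "node_b": "risk_score=0.72",
    "node_c": "risk_score=0.99",   # disagrees -> slashed
})
```

one wrinkle this glosses over: inference isn't always deterministic, so real systems have to pin down seeds and hardware behavior before byte-for-byte comparison makes sense.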
the response hits your callback
result gets packaged up with proof and sent back to your contract's callback function
you specified this when you made the request - "when you're done, call this function with the result"
classic oracle pattern but with verifiable compute instead of just data feeds
your contract receives the output, verifies the proof (or the network already verified it depending on method), continues execution.
from your contract's perspective it's almost like a normal function call. just async with verification built in.
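here's the request/callback flow sketched in python. all the names (request_inference, on_result, network_deliver) are hypothetical stand-ins for the contract-side pattern, not ritual's interface:

```python
# sketch of the async oracle pattern from the contract's point of view
class Contract:
    def __init__(self):
        self.callbacks = {}   # job_id -> function to run when the result arrives
        self.results = {}

    def request_inference(self, job_id: int, prompt: str):
        # "when you're done, call this function with the result"
        self.callbacks[job_id] = self.on_result

    def on_result(self, job_id: int, output: str, proof_ok: bool):
        if not proof_ok:
            raise ValueError("proof verification failed")
        self.results[job_id] = output   # execution continues from here

def network_deliver(contract: Contract, job_id: int, output: str):
    # the network invokes the registered callback with result + proof status
    contract.callbacks[job_id](job_id, output, proof_ok=True)

c = Contract()
c.request_inference(42, "moderate this post")
network_deliver(c, 42, "allowed")
```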
why this is actually interesting
most "AI on blockchain" projects do one of two things:
1. run inference off-chain, post result on-chain, hope nobody notices you're centralized
2. try to run AI inside the EVM somehow which is completely impractical
ritual's approach is different - specialized sidecars for specialized compute, connected to main execution through verifiable network calls
it's not trying to make ethereum run AI. it's building parallel execution environments that ethereum can delegate to.
and the network call layer handles all the coordination - job broadcast, node selection, compute execution, proof generation, result delivery
the part few people talk about: latency
real talk tho there's a tradeoff here
network calls take time. your contract makes request, waits for node to pick it up, node runs inference, generates proof, sends result back, your callback executes.
that's not instant. you're talking seconds minimum, potentially longer for complex jobs or high-security proofs.
for some use cases that's fine. risk scoring a loan? couple seconds is whatever.
for other stuff it's a problem. real-time game logic? trading decisions in volatile markets? might be too slow.
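a back-of-envelope latency budget just to show how the stages stack up. every number below is a made-up placeholder, not a measured figure:

```python
# hypothetical per-stage latency budget for one network call (placeholder numbers)
stages_ms = {
    "broadcast + node pickup": 500,
    "inference": 2000,
    "proof generation": 1500,
    "result delivery + callback": 500,
}
total_s = sum(stages_ms.values()) / 1000
```

even with generous assumptions you land in whole-seconds territory, and heavier proofs push the proof-generation line way up.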
ritual's not trying to pretend this is zero latency. they're building for use cases where verifiable compute matters more than speed.
and honestly that's the right tradeoff. you can't have instant, cheap, verifiable, and decentralized all at once. pick three.
what happens when nodes go offline
been wondering about this - what if the node that picked up your job just... dies?
there's timeout logic built in. if a node doesn't respond within expected timeframe, job gets re-broadcast and another node can pick it up.
you’re not stuck waiting forever for a node that crashed. but this adds more latency. and if multiple nodes fail in sequence you’re looking at significant delays.
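a toy sketch of that timeout-and-rebroadcast logic - the failover loop and timeout value are invented for illustration, but it shows why sequential node failures stack latency:

```python
# sketch of timeout + re-broadcast: if the assigned node misses its deadline,
# the job goes back out and another node can pick it up
def run_with_failover(nodes, job, timeout_s=5.0):
    """try nodes in order until one responds within the timeout."""
    elapsed = 0.0
    for node in nodes:
        took, result = node(job)          # (seconds taken, result or None)
        if took <= timeout_s and result is not None:
            return result, elapsed + took
        elapsed += timeout_s              # waited out the timeout, re-broadcast
    raise TimeoutError("all nodes failed")

dead_node = lambda job: (999.0, None)         # crashed, never answers
live_node = lambda job: (2.0, f"ran {job}")

result, latency = run_with_failover([dead_node, live_node], "job-7")
```

notice the latency math: one dead node costs you the full timeout on top of the real execution time. two dead nodes in a row and you've doubled that.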
symphony consensus handles the parallel execution
this is where symphony (ritual's consensus mechanism) becomes important
you've got main chain execution happening. you've got sidecars running parallel compute. you've got network calls coordinating between them.
how does consensus work across all this?
symphony uses execute-once, verify-many-times (EOVMT) - selected nodes execute the workload and generate proofs, and other nodes verify those proofs instead of re-executing
so not every validator needs a GPU cluster to participate in consensus. they just need to verify the proofs that compute nodes generated.
this is how you can have specialized hardware for different jobs while maintaining decentralization.
main chain validators might be running on basic hardware. AI sidecar nodes need GPUs. ZK sidecar nodes need proving hardware. TEE nodes need SGX. all participating in same network.
network calls are what ties it together - they’re the communication layer between heterogeneous compute environments.
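a toy version of execute-once-verify-many in python. big caveat: a real system uses zk proofs or TEE attestations here; the hash "proof" below only shows the shape of the flow (one node computes, many cheaply check), it provides no actual computational integrity:

```python
# toy execute-once-verify-many: one node does the heavy work and emits a
# commitment; validators check the cheap commitment, not the workload.
# a sha256 hash is NOT a real proof of correct execution - illustration only.
import hashlib

def execute(job: str) -> tuple[str, str]:
    result = job.upper()                  # stand-in for expensive GPU work
    proof = hashlib.sha256((job + result).encode()).hexdigest()
    return result, proof

def verify(job: str, result: str, proof: str) -> bool:
    # validators recompute only the cheap commitment
    return hashlib.sha256((job + result).encode()).hexdigest() == proof

result, proof = execute("classify this text")
votes = [verify("classify this text", result, proof) for _ in range(4)]
```

the asymmetry is the point: execution is expensive and happens once, verification is cheap and happens everywhere, so basic-hardware validators can still participate.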
where this gets really interesting: cross-chain calls
ritual's building general message passing between ritual chain and other chains. so eventually a contract on arbitrum could make a network call that gets fulfilled by ritual's compute network, with the result coming back to arbitrum.
your L2 contract gets access to ritual’s AI/ZK/TEE infrastructure without leaving the L2.
this is the "ritual as AI coprocessor for all of web3" vision people keep mentioning.
network calls become the interface layer - any chain can delegate compute to ritual, results flow back through verified messaging
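sketching that round trip in python. everything here is invented - the source doesn't specify ritual's messaging API, this just shows the shape of the delegation:

```python
# sketch of cross-chain delegation: an L2 contract emits a message, ritual
# fulfills it, a verified message carries the result back. names are made up.
def relay(source_chain: str, dest_chain: str, payload: str) -> dict:
    # stand-in for a verified cross-chain message
    return {"from": source_chain, "to": dest_chain, "payload": payload}

def fulfill_on_ritual(msg: dict) -> dict:
    output = f"inference({msg['payload']})"        # compute happens on ritual
    return relay("ritual", msg["from"], output)    # result goes back verified

outbound = relay("arbitrum", "ritual", "score wallet 0xabc")
inbound = fulfill_on_ritual(outbound)
```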
why i think this matters more than the "AI on blockchain" narrative
everyone focuses on "ritual does AI on-chain" but that's not the innovation
the innovation is building a network call layer that can handle heterogeneous compute with verifiable results
AI is just the first use case. but the architecture works for any specialized computation:
- ZK proving for L2s that need prover networks
- TEE execution for privacy-preserving apps
- cross-chain state verification
- heavy cryptographic operations
- whatever specialized compute future apps need
ritual's network call infrastructure is the foundation that makes all of it work
ngl i thought this would be more straightforward when i started digging
"smart contract calls AI model, gets result back, cool"
but the actual implementation - node competition, sidecar execution, proof generation, callback delivery, consensus across heterogeneous hardware - it's way more intricate than the marketing suggests
which is good tbh. means they’re solving real problems.
@ritualnet on x if you wanna learn more!