
We Ran a Medical AI Model Inside a Hardware Enclave.

By Neurolix Protocol · Published May 8, 2026 · 10 min read · Source: Blockchain Tag

From Zero to Verified On-Chain Attestation — Here’s the Proof.

Google Cloud · AMD SEV · neurolix-tee-node-1 · May 5, 2026

The Inconvenient Truth About Modern AI

Every hospital sits on data that could save lives. Every bank runs trading models that move markets. Every law firm has documents that decide cases.

None of them can use modern AI on that data.

Not because the models don’t exist — GPT-4, Claude, Gemini, MedGemma are right there. Not because the compute doesn’t exist — AWS, Azure, Google Cloud sell GPUs by the second.

They can’t use it because the moment sensitive data touches a public cloud, it becomes a compliance liability. GDPR. HIPAA. SOC 2. Each one a regulatory landmine.

According to the Cisco AI Readiness Index 2025, data privacy is now the #1 stated barrier to enterprise AI adoption — ahead of cost, talent, and infrastructure.

This is not a future problem. This is a $45 billion problem today.

The AI industry has spent five years optimizing for scale: bigger models, more parameters, larger context windows. Nobody optimized for the one thing regulated industries actually need:

Cryptographic proof that sensitive data never leaves a secure boundary.

That’s the problem Neurolix is building to solve.

What Is a TEE, Really?

Most explanations of Trusted Execution Environments are written for security researchers. Here’s one that isn’t.

Imagine a sealed glass box inside a server. Anyone can verify the box exists. Anyone can verify what model is running inside. But nobody can see the data flowing through it — not the cloud provider, not the system administrator, not the protocol operator. Not us.

This isn’t a software promise enforced by policy. It’s a hardware boundary enforced by silicon.

Modern TEE technologies — AMD SEV-SNP, Intel TDX, AWS Nitro Enclaves — encrypt VM memory at the CPU level. Even an attacker with physical access to the data center cannot read what’s inside.

When a workload runs inside a TEE, three things happen:

  1. The CPU generates a cryptographic attestation token proving the enclave was active
  2. The data is processed without ever being decrypted outside the boundary
  3. The output emerges with a commitment hash — a unique fingerprint of the computation

Verify the attestation, verify the hash, and you have mathematical certainty that the work happened correctly and privately.
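The check described above can be sketched in a few lines. This is an illustrative client-side verifier, not the actual Neurolix API: the claim name mirrors the Google Cloud token shown later in this post, and SHA-256 is assumed as the commitment function.

```python
import hashlib

def verify_result(attestation: dict, output: bytes, expected_commitment: str) -> bool:
    """Sketch: confirm the enclave was active and the output matches the commitment."""
    # 1. The attestation must certify the enclave was active
    #    (claim name is illustrative, modeled on the GCP token payload).
    enclave_ok = attestation.get("instance_confidentiality") == 1
    # 2. Recompute the commitment: the SHA-256 fingerprint of the output.
    commitment_ok = hashlib.sha256(output).hexdigest() == expected_commitment
    return enclave_ok and commitment_ok
```

Flip a single bit of the output, or run outside an enclave, and the check fails.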

This is what Neurolix is designed to deliver — at scale, on regulated data, with on-chain SLA guarantees. The architecture has been validated in this proof of concept. The protocol is being built.


Hardware Isolation → Session Attestation → Cloud Verification → AI Execution

Field Report from the Build

What follows is the actual sequence of events. The terminal output is real. The on-chain transaction is verifiable by anyone reading this.

Step 1 — Spinning Up the Confidential VM

Google Cloud’s Confidential Computing offering supports AMD SEV-enabled instances out of the box. The configuration:
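The original configuration isn't reproduced here, but an SEV-enabled Confidential VM can be created along these lines. The instance name matches the post; the machine type, image, and zone are illustrative (SEV requires an AMD N2D-class machine type and a TERMINATE maintenance policy):

```shell
# Sketch: create an AMD SEV Confidential VM on Google Cloud.
# Values other than the instance name are illustrative.
gcloud compute instances create neurolix-tee-node-1 \
  --zone=us-central1-a \
  --machine-type=n2d-standard-4 \
  --confidential-compute \
  --maintenance-policy=TERMINATE \
  --image-family=ubuntu-2204-lts \
  --image-project=ubuntu-os-cloud
```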

First verification: confirming that SEV was actually enforcing memory encryption. The kernel log gave hardware-level confirmation that the enclave was live.
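On an SEV guest, that confirmation typically comes from the kernel log. A check along these lines works from inside the VM (the exact wording of the message varies by kernel version):

```shell
# Check from inside the VM that memory encryption is active.
sudo dmesg | grep -i sev
# On an SEV guest this reports a line similar to
# "AMD Memory Encryption Features active: SEV" (wording varies).
```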


Step 2 — Loading GPT-2 Inside the Enclave

We chose GPT-2 (124M parameters) intentionally for this PoC. It’s small enough to iterate fast on CPU-only hardware, large enough to be a real language model rather than a toy. The point was never to demonstrate state-of-the-art inference — it was to prove the architecture works end-to-end.

Inside the enclave:

from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

# Confidential medical inference (the prompt stands in for real patient data)
prompt = "Patient diagnostic report - confidential"
inputs = tokenizer(prompt, return_tensors='pt')
# Generation arguments are illustrative; the original post elided them.
output = model.generate(**inputs, max_new_tokens=50,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))

The inference ran. Honestly, the first time it worked I ran it twice just to make sure. The data — which inside a real deployment would be actual patient records — never touched any non-encrypted memory region.

Step 3 — The Attestation

This is where Neurolix becomes different from a “trust us” architecture.

The Confidential VM produced an attestation token, signed by Google Cloud’s attestation service. The decoded payload:

{
  "google": {
    "compute_engine": {
      "instance_confidentiality": 1,
      "instance_name": "neurolix-tee-node-1",
      "project_id": "project-55eb2dea-259c-4db5-a25",
      "zone": "us-central1-a"
    }
  },
  "iss": "https://accounts.google.com"
}

instance_confidentiality: 1 is the value that matters. It is Google certifying — cryptographically — that the enclave was active during the inference. Not us claiming it. Google certifying it.
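The attestation token is a JWT, so its claims segment can be inspected with nothing but the standard library. Note that this only *reads* the claims; a real verifier must also check Google's signature against Google's published keys, which this sketch deliberately omits:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode (NOT verify) the claims segment of a JWT attestation token."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```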


Step 4 — The Computation Commitment

The inference produced a unique commitment hash:

ec52836f23170a1b601dd7e475107f314ca004186707f69836f7615901a665bd

This hash is a fingerprint of the computation output — a verifiable record of what was processed. Combined with the Google Cloud attestation token, the two form independent but complementary proofs: one certifying the enclave was active, one certifying what was computed inside it. Modify any single bit and the hash changes completely. It’s the cryptographic equivalent of a signed, sealed, dated affidavit.
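The avalanche property is easy to demonstrate. SHA-256 is assumed here as the commitment function (the post does not name the exact scheme); flipping a single input bit produces a digest that differs in roughly half of its 256 bits:

```python
import hashlib

def commitment(output: bytes) -> str:
    """Hypothetical commitment: hex SHA-256 fingerprint of the output."""
    return hashlib.sha256(output).hexdigest()

original = b"Patient diagnostic report - confidential"
flipped = bytes([original[0] ^ 0x01]) + original[1:]  # flip one bit

a, b = commitment(original), commitment(flipped)
# Count how many of the 256 digest bits changed.
diff_bits = bin(int(a, 16) ^ int(b, 16)).count("1")
print(a, b, diff_bits)
```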

But a hash sitting on a server is just data. To make it verifiable forever, it needed to live on-chain.

Step 5 — Anchoring to Base Sepolia

We deployed NeurolixAttestation.sol to Base Sepolia testnet. A minimal contract — by design. No tokens, no governance, no complexity. One job: register attestations from the protocol.

After deployment, we called registerAttestation() with the GPT-2 commitment hash, the TEE type, the cloud provider, and the model identifier.

Transaction confirmed. The proof is now permanent.

Verifiable by anyone, forever: base-sepolia.blockscout.com/tx/0x6c9a8e68ddc3b96b70b09785e1efbc519b371132aaf7b7ac5e428954de010046

Step 6 — The Loop Closes

From cold infrastructure to verifiable on-chain attestation, end-to-end.

  1. Confidential VM (AMD SEV)
  2. GPT-2 inference inside enclave
  3. Attestation token signed by Google Cloud
  4. Computation commitment hash produced
  5. Hash anchored on Base Sepolia blockchain
  6. Globally verifiable, permanent proof

I didn’t expect it to work this cleanly on the first attempt.

TEE → Attestation → Blockchain. The loop is closed.

Proof of Execution — watch the protocol primitive run live

Honest Limits of This PoC

I could have waited six more months to publish this. I didn’t, and here’s exactly why — and what’s still missing.

This proof of concept demonstrates that the architecture works. It does not, on its own, demonstrate a production-ready protocol. The honest limits:

The model is small. GPT-2 124M is from 2019. Production workloads will require Llama 3 70B, Mixtral, Claude-class models. The architecture supports them. We have not yet tested them.

The enclave runs on a single cloud. True decentralization requires multi-cloud, multi-operator attestation. Phase 2 of the roadmap brings Azure Confidential Computing and AWS Nitro Enclaves into the network.

AMD SEV is a strong baseline, not the strongest TEE. AMD SEV-SNP and Intel TDX provide stronger memory integrity guarantees. Future iterations will support both.

The compute is CPU-only. Real AI workloads require GPUs inside confidential VMs — NVIDIA H100 with TEE-IO, or AMD MI300X confidential mode. Phase 3 milestone.

We chose to ship the PoC fast and document its limits openly, rather than wait for a perfect demo nobody could see.

The Smart Contract Story — Including What’s Still Broken

The Neurolix protocol consists of six smart contracts on Base L2. Over the past four days they went through four major iterations, audited internally with a structured Devil’s Advocate methodology — actively looking for vulnerabilities before the network goes live.

Issues found and resolved (v1 → v4): all of these are now fixed in v4, and the codebase is significantly hardened.

Issues identified and NOT yet resolved — to be fixed before mainnet:

  1. Heartbeat farming risk. The current heartbeat function does not enforce that the work was actually done — it can be called from a script. Mitigation: heartbeats must include a workloadCommitment plus an oracle-signed proof. Implementation pending.
  2. MEV exit during SLA breach. A node operator can call deregisterNode() while sessions are active, escaping slashing. Mitigation: deregisterNode must revert if any active session exists for the node. One-line fix, scheduled for v5.
  3. SLA parameter trust. Session parameters are currently set by the client unilaterally. A malicious client could set impossible SLAs to claim refunds. Mitigation: bilateral signature (client + miner) required on session creation. Architecture defined, implementation pending.
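The guard from item 2 really is a single precondition. Sketched here in Python pseudologic (the names are illustrative; the real fix lands in Solidity in v5):

```python
class SlashableExitError(Exception):
    """Raised when a node tries to exit while it still owes work."""

def deregister_node(node_id: str, nodes_with_active_sessions: set) -> None:
    # Planned v5 guard: revert if any active session references this node,
    # so an operator cannot escape slashing during an SLA breach.
    if node_id in nodes_with_active_sessions:
        raise SlashableExitError(f"node {node_id} has active sessions")
    # ... proceed with stake withdrawal and removal from the registry ...
```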

These three are blockers for mainnet. They are not blockers for the testnet PoC, where the goal is to validate the architecture and gather feedback from early users.

We document them publicly because that is the difference between security theater and actual security.

Why $OLIX Is the Fuel

The OLIX token is not a marketing layer wrapped around the protocol. It is the economic primitive that makes the protocol work.

Three roles, hardcoded in the contracts:

1. Payment for compute.

Every confidential AI session is paid in OLIX. No stablecoin escape hatch — using the network requires the token. Demand is anchored to real usage, not speculation.

2. Deflationary burn.

Every compute session burns a percentage of OLIX permanently. The hard cap is immutable — but circulating supply decreases with every job. More AI on Neurolix → structurally less OLIX in circulation.

3. Node operator collateral.

Running a node requires staking OLIX. Bad behavior gets slashed. Good behavior gets rewarded with a vesting structure designed to align long-term node operator incentives with protocol health.
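As a toy illustration of role 2, circulating supply under a per-session burn decreases linearly with usage. Every number here is made up for illustration; the real burn parameters will be in the tokenomics whitepaper:

```python
def supply_after(initial_supply: float, fee_per_session: float,
                 burn_rate: float, sessions: int) -> float:
    """Toy model: each session burns burn_rate * fee_per_session OLIX forever."""
    return initial_supply - sessions * burn_rate * fee_per_session

# Hypothetical numbers: 1B supply, 100 OLIX fee, 2% of each fee burned.
print(supply_after(1_000_000_000, 100, 0.02, 1_000_000))
```

More sessions means structurally less OLIX in circulation, which is the whole point of the mechanism.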

This is the difference between a token that exists to be sold and a token that exists because the protocol cannot function without it.

The complete tokenomics — including exact distribution, vesting schedules, and burn parameters — will be published as a dedicated whitepaper before mainnet launch.

Roadmap

Phase 1 — Hardening & Multi-Cloud

Resolution of open security issues. External audit. Azure + AWS attestation.

Phase 2 — Base Mainnet Launch

OLIX token deployment. NodeRegistry live. First node operators onboarded.

Phase 3 — GPU Confidential Compute

NVIDIA H100 with TEE-IO support. Production-grade LLM inference inside enclaves.

Phase 4 — Enterprise Pilot Program

First regulated-sector partners. Closed beta. Commercial agreements.

Phase 5 — General Availability

Open network. Permissionless node onboarding. Public SDK.

What This Is, and What It Isn’t

Neurolix is not a generic GPU compute marketplace. It is not competing on price per hour.

Neurolix is the first vertical DePIN product built specifically for AI teams operating on regulated data — healthcare, finance, biotech, legal.

Sectors where data privacy is not a feature. It is a legal requirement.

The thesis is simple: in the coming years, no serious enterprise will train AI on sensitive data without cryptographic proof of confidentiality.

Neurolix is building the protocol that will provide that proof — at scale, on-chain, with hardware guarantees.

Want to Follow the Build?

This is Day 7. Follow what comes next.

X: @NEUROLIX

Email: [email protected]

On-chain: base-sepolia.blockscout.com/tx/0x6c9a8e68…

If you work in regulated AI and these problems sound familiar, reach out. Even if just to compare notes.

Built by @BartLee99

---

This is the beginning of a build log. New posts every week.

Follow @NEUROLIX on X for daily progress updates. Subscribe to this Medium publication for technical deep-dives.

---

Disclaimer: This article is for informational and technical purposes only. $OLIX is a utility token designed to access Neurolix computational resources. It does not constitute a financial product, investment advice, or an offer to buy or sell securities. The Neurolix protocol is in active development and this document describes a proof of concept, not a production-ready system. Participation in any future token event involves risk. Please conduct your own research.

This article was originally published on Blockchain Tag and is republished here under RSS syndication for informational purposes. All rights and intellectual property remain with the original author. If you are the author and wish to have this article removed, please contact us at [email protected].
