User‑Owned AI on NEAR: Privacy As The Room Where It Happens
ArQ
Most people meet AI in a chat box.
You type a question, you watch tokens appear on screen, and if the answer looks right you move on. It feels intimate. Just you and the model.
But behind that clean interface, your words usually travel through a data center you cannot see, into logs you cannot inspect, and sometimes back into training pipelines you cannot influence. The place where your data actually lives is hidden.
The easiest way to understand what NEAR is building is to imagine that this place becomes visible. Not as a metaphor, but as a room with rules you can verify.
Two rooms
Imagine two glass rooms side by side.
In the first room a generic AI assistant runs on a central server. You see racks of machines and a tangle of cables. On the far wall, a board lists internal teams with access to logs. When you talk to the assistant, your voice echoes through the room and up into a balcony where unknown operators can listen in. They promise they will not, but there is no way for you to check.
In the second room the layout is similar. Racks of machines, cables, monitoring dashboards. The difference is that the middle of the room is sealed inside a steel cube with a thin window of light running along the top. You can see requests enter, see responses leave, but no one, not even the operator who owns the building, can look inside while it runs.
The second room is what NEAR is aiming for with NEAR AI Cloud. It uses confidential computing hardware from Intel and NVIDIA so that prompts and model outputs live only inside secure enclaves, never in the clear on the host machine. When a request finishes, the enclave can produce an attestation, a signed statement that proves the code ran inside genuine hardware with a specific measurement.
From the outside, that proof is the difference between “trust us” and “here is the receipt”.
What “user‑owned AI” really means
NEAR started talking publicly about “user‑owned AI” in 2024. The idea was not that every model would literally sit in a wallet. It was that users should have meaningful control over three things:
- Where the model runs. With NEAR AI Cloud, inference happens inside TEEs that can be verified through attestations, not on arbitrary servers.
- Who sees the data. The platform encrypts data in transit and keeps it in the enclave so the infrastructure provider cannot read prompts or responses, even with root access.
- How the agent acts on assets. On NEAR, agents can be wired into smart contracts and the chain abstraction stack, so they sign transactions through MPC networks instead of handing keys to one company.
In other words, ownership is not a slogan. It is the combination of verifiable compute, non‑custodial control of keys, and on‑chain logic that survives any single front end.
Why privacy has to move below the application
When AI first exploded into public use, most products treated privacy as a layer you put on top of a system.
There were checkboxes about data retention, setting pages for “do not use this for training”, and privacy policies written in legalese. The underlying infrastructure remained the same. Logs still existed. Access still depended on internal process rather than cryptographic proof.
NEAR’s recent public messaging flips the stack. In an agent economy, privacy becomes critical infrastructure, not a last‑minute configuration.
Agents that hold value need to see sensitive information. Portfolios, invoices, negotiation histories, medical data in some cases. If that information flows through infrastructure that can read and reuse it, you have to trust everyone who operates that infrastructure. As the number of agents and operators grows, that becomes impossible.
So NEAR’s approach is to push privacy down the stack:
- Into the GPU and CPU hardware that executes the model.
- Into the cryptographic protocols that sign transactions across chains.
- Into the execution layer that handles cross‑chain actions through Confidential Intents.
If the lower layers are private by design, every agent and application built on top inherits that property. You do not need each new project to reinvent its own privacy system.
NEAR AI Cloud from a builder’s eye view
The marketing line for NEAR AI Cloud is simple: private chat and inference for AI agents, backed by TEEs, with verifiable attestations.
From a builder’s perspective, the interesting details look like this:
- Intel TDX and NVIDIA Confidential Computing. NEAR AI Cloud runs confidential virtual machines on Intel TDX for CPU isolation and NVIDIA’s data center GPUs for model execution. Both vendors provide remote attestation services. That means a dapp or backend can verify that a given VM or container is running the expected code inside genuine secure hardware before sending it any sensitive data.
- Encrypted request pipeline. Prompts are encrypted before they enter the enclave. Inside the enclave, they are decrypted, processed by the model, and re‑encrypted before leaving. Outside observers, including NEAR AI Cloud operators, only see ciphertext.
- Attestation endpoint. NEAR exposes an endpoint that returns attestation reports for running workloads. Developers can validate those reports against Intel and NVIDIA attestation servers. This is what lets someone say “my agent only trusts results from enclaves with this exact measurement”.
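The “only trust this exact measurement” pattern can be sketched in a few lines. To be clear, this is a toy: the field names (`measurement`, `report_data`) and the HMAC-based signature below are illustrative stand-ins, not NEAR’s actual report format. Real TDX and GPU attestation reports are signed by vendor keys and verified against Intel’s and NVIDIA’s attestation services.

```python
import hashlib
import hmac
import json

# Enclave image hashes the agent is willing to trust (illustrative values).
TRUSTED_MEASUREMENTS = {
    "a3f1deadbeef": "near-ai-cloud-inference-v1",
}

def verify_report(report: dict, vendor_key: bytes) -> bool:
    """Accept a report only if its signature checks out AND its
    measurement matches one we explicitly trust."""
    body = json.dumps(report["body"], sort_keys=True).encode()
    expected_sig = hmac.new(vendor_key, body, hashlib.sha384).hexdigest()
    if not hmac.compare_digest(expected_sig, report["signature"]):
        return False  # tampered report or wrong signing key
    return report["body"]["measurement"] in TRUSTED_MEASUREMENTS

# Simulate a report produced by a "genuine" enclave.
key = b"vendor-attestation-key"
body = {"measurement": "a3f1deadbeef", "report_data": "nonce-123"}
sig = hmac.new(key, json.dumps(body, sort_keys=True).encode(),
               hashlib.sha384).hexdigest()

print(verify_report({"body": body, "signature": sig}, key))      # True
print(verify_report({"body": body, "signature": "bogus"}, key))  # False
```

The point of the pattern is that trust is anchored in a measurement you pin in your own code, not in whoever happens to operate the server.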
None of that changes the way a user types into a chat box. The UI can look identical to any other AI app. The difference is in who, or what, can peek inside the room while the model is thinking.
Confidential Intents — private actions for agents
Talking is one half of an agent. Acting is the other.
On NEAR, actions are increasingly routed through Intents — a system where users or agents specify outcomes and off‑chain solvers compete to execute them across chains. In early 2026, NEAR introduced Confidential Intents, which add a privacy execution layer on top of this architecture.
Public coverage describes three key properties:
- Sensitive parts of the strategy or transaction logic run inside secure enclaves instead of on public nodes.
- Only the minimum required data is revealed to the chains that need to settle value.
- Off‑chain components produce proofs that the intent was executed correctly, without exposing all intermediate details.
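The “reveal only the minimum” property above can be made concrete with a toy data model. The field names and the hash commitment here are my assumptions for illustration, not NEAR’s actual intent format: the public part is what settlement chains see, while the private part stays inside the enclave and is represented outside only by a commitment that a later proof can be checked against.

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class ConfidentialIntent:
    # Public: the minimum data the settling chains need.
    asset_in: str
    asset_out: str
    min_amount_out: int
    # Private: strategy details that never leave the enclave.
    private_context: dict = field(repr=False, default_factory=dict)

    def public_view(self) -> dict:
        """What gets published: settlement fields plus a hash commitment
        to the private context, never the plaintext context itself."""
        commitment = hashlib.sha256(
            json.dumps(self.private_context, sort_keys=True).encode()
        ).hexdigest()
        return {
            "asset_in": self.asset_in,
            "asset_out": self.asset_out,
            "min_amount_out": self.min_amount_out,
            "context_commitment": commitment,  # hash only, no strategy data
        }

intent = ConfidentialIntent(
    asset_in="USDC", asset_out="NEAR", min_amount_out=1000,
    private_context={"strategy": "twap", "slices": 12, "max_slippage": 0.003},
)
public = intent.public_view()
assert "strategy" not in json.dumps(public)  # nothing private leaks
```

A solver or observer watching the public side sees the trade’s settlement terms, but nothing about the strategy that produced them.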
For DeFi, that protects strategies from copy‑trading and front‑running. For agents, it protects the agent’s internal reasoning and private context.
Imagine an AI treasurer agent managing a project’s runway across several chains. It needs to see payroll data, vendor invoices, and internal forecasts. With Confidential Intents and NEAR AI Cloud, it can store and reason about that data inside enclaves, then route cross‑chain swaps and payments through Intents without dumping the full financial picture into a public mempool.
The on‑chain world still sees what it needs to — final transfers, balances, settlement — but the private context that informed those decisions stays sealed.
Chain abstraction and keys that no one owns alone
Privacy of data is one side of the coin. The other is control over keys.
NEAR’s chain abstraction stack, built around Chain Signatures, lets smart contracts on NEAR request signatures for other blockchains through an MPC network. Private keys for those external chains are never held by one server. Instead, they are split into fragments across many nodes, and any signature is produced collectively.
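The “fragments across many nodes” idea can be shown with the simplest possible scheme: additive secret sharing. This is only a conceptual sketch; Chain Signatures actually uses threshold signing over an MPC network, which never reconstructs the key anywhere. The toy below just demonstrates why an individual fragment is useless on its own.

```python
import secrets

PRIME = 2**255 - 19  # field modulus, an illustrative choice

def split_key(key: int, n_nodes: int) -> list[int]:
    """Split `key` into n additive shares; all n together rebuild it,
    any fewer reveal nothing about it."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_nodes - 1)]
    shares.append((key - sum(shares)) % PRIME)
    return shares

def combine(shares: list[int]) -> int:
    return sum(shares) % PRIME

key = secrets.randbelow(PRIME)
shares = split_key(key, 5)
assert combine(shares) == key      # all five nodes together recover it
assert combine(shares[:4]) != key  # four nodes learn nothing useful
```

Real MPC signing goes one step further: the nodes jointly produce a signature without ever running anything like `combine`, so the full key exists at no point in time on any machine.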
An AI agent that lives on NEAR can use this system to control assets on multiple chains without ever storing monolithic private keys on a single machine. When it wants to act, it:
- Computes a plan in NEAR AI Cloud inside an enclave.
- Expresses the desired outcome as an Intent or Confidential Intent.
- Relies on chain abstraction and MPC signing to execute across chains in a non‑custodial way.
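The three steps above can be sketched as an orchestration loop. None of these function names come from a real SDK; each is a stub standing in for the corresponding piece of infrastructure (enclave inference, intent formation, MPC execution).

```python
def plan_in_enclave(context: dict) -> dict:
    """Stand-in for NEAR AI Cloud inference inside a TEE."""
    return {"action": "swap", "asset_in": "USDC", "asset_out": "ETH",
            "amount": context["budget"]}

def to_intent(plan: dict) -> dict:
    """Stand-in for expressing the plan as a (Confidential) Intent."""
    return {"intent": plan["action"], "params": plan, "confidential": True}

def execute_via_mpc(intent: dict) -> str:
    """Stand-in for chain abstraction + MPC signing and settlement."""
    p = intent["params"]
    return f"settled:{p['asset_in']}->{p['asset_out']}:{p['amount']}"

receipt = execute_via_mpc(to_intent(plan_in_enclave({"budget": 500})))
print(receipt)  # settled:USDC->ETH:500
```

The separation matters: the planning step sees private context, the intent step exposes only the outcome, and the signing step never touches a whole key.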
No single provider ever has both the full view of the data and unilateral control over the keys. That combination is what makes “user‑owned” more than branding.
Why this matters now
Three trends are colliding at the same time:
- AI models are gaining the ability to act autonomously on behalf of users.
- On‑chain systems are maturing into global settlement layers for assets and contracts.
- Privacy tech is finally leaving the lab in the form of production TEEs, MPC networks, and protocols like Confidential Intents.
NEAR’s strategy is to sit at the intersection of these trends and offer an opinionated stack: AI inference in verifiable enclaves, transaction logic in private execution layers, and settlement across many chains through non‑custodial signing.
It is not the only way to build AI in Web3. But it is one of the clearest attempts to treat privacy as infrastructure rather than a feature request.
If the next generation of agents ends up managing our money, negotiating on our behalf, and seeing the parts of our lives that never make it into public feeds, then the room where they think and act will matter more than any front‑end chat box.
NEAR’s bet is that this room should be one you can inspect, verify, and ultimately own. Not as a metaphor, but as hardware, code, and cryptography you can point to.
That is what makes privacy, in this context, the real infrastructure play.