Portable AI Context Might Become More Important Than The Models Themselves
Vaggelis · 5 min read · Just now
One of the biggest problems in AI right now is not model quality anymore.
It is context.
Every AI tool is becoming smarter, faster, and cheaper. But despite all the progress, most systems still behave like they have short-term memory loss.
You switch from ChatGPT to Claude and lose continuity.
You open a new session and the AI forgets your project.
You use multiple agents and none of them share understanding.
You spend more time rebuilding context than actually doing work.
That is the problem Plurality and Oasis are trying to solve with portable AI context infrastructure.
And honestly, this feels like one of the most important infrastructure narratives emerging around AI.
The key idea is simple:
Your AI context should belong to you, not to the platform hosting the model.
Right now, context is trapped inside centralized silos.
Your preferences, workflows, memory, identity, communication style, project history, and behavioral patterns are effectively locked into individual platforms. The AI industry talks constantly about “personalized AI,” but most personalization today is platform-specific. The moment you leave the platform, the memory disappears with it.
That creates several problems at once.
- First, users become dependent on specific providers because all accumulated context lives there.
- Second, agents become fragmented. One AI assistant does not know what another one learned five minutes earlier.
- Third, privacy becomes a nightmare because centralized providers effectively hold the most intimate behavioral dataset ever created.
As AI systems become more integrated into daily life, context becomes more valuable than prompts themselves.
Your context is:
- your work history
- your communication patterns
- your research
- your intent
- your goals
- your preferences
- your relationships
- your identity layer
That is an incredibly sensitive dataset.
And this is where Oasis enters the picture.
Oasis Network has spent years building around confidential compute, privacy-preserving infrastructure, and trusted execution environments. Instead of positioning itself as another generic Layer 1 chain, it has increasingly focused on becoming infrastructure for confidential AI systems.
Plurality is now building portable AI context infrastructure on top of that stack.
The architecture matters because portable context without privacy creates a different kind of problem.
A lot of people hear “portable AI memory” and immediately think:
“Cool, my AI remembers me everywhere.”
But there is a darker side to that future too.
Imagine:
- cross-platform behavioral tracking
- unified surveillance profiles
- centralized memory graphs
- persistent psychological models
- invisible profiling across every AI interaction
Without privacy guarantees, portable context becomes one of the most dangerous surveillance systems ever built.
Plurality’s approach is trying to avoid that by making context:
- permissioned
- revocable
- encrypted
- portable
- user-controlled
Instead of the platform owning your memory, you own the context layer yourself. Conceptually, that is a massive shift.
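To make those properties concrete, here is a minimal sketch of what a user-controlled context record could look like. All names here are hypothetical illustrations, not Plurality’s actual schema: the user holds the encryption key, platforms only ever see ciphertext, and every grant is scoped and revocable.

```python
from dataclasses import dataclass, field

@dataclass
class Grant:
    """Permission the user gives one agent; revocable at any time."""
    agent_id: str
    scopes: set                    # e.g. {"preferences", "project:alpha"}
    revoked: bool = False

@dataclass
class ContextRecord:
    """User-owned context: the user keeps the key, hosts keep ciphertext."""
    owner: str
    ciphertext: bytes              # context encrypted client-side
    grants: list = field(default_factory=list)

    def authorize(self, agent_id: str, scope: str) -> bool:
        # An agent reads context only with a live, matching grant.
        return any(g.agent_id == agent_id and scope in g.scopes and not g.revoked
                   for g in self.grants)

    def revoke(self, agent_id: str) -> None:
        # Revocation flips the grant off; no provider cooperation needed.
        for g in self.grants:
            if g.agent_id == agent_id:
                g.revoked = True
```

The point of the sketch is the ownership inversion: permissions live in a structure the user controls, not in a provider’s account settings.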
Today’s AI ecosystem mostly works like this:
Platform owns:
- the model
- the memory
- the identity
- the permissions
- the retrieval layer
The user is basically renting intelligence from closed systems. Portable context flips that relationship.
The user becomes the owner of:
- memory
- preferences
- agent history
- contextual identity
- sharing permissions
And the models become interchangeable execution engines. That changes the power dynamics of AI dramatically.
If context becomes portable, then switching models becomes easy.
Suddenly:
- ChatGPT
- Claude
- Gemini
- Grok
- local models
- autonomous agents
all become interfaces into the same persistent intelligence layer. That is why this narrative matters so much. The real moat may not be the model itself anymore.
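The “interchangeable execution engines” idea can be sketched in a few lines. This is a toy illustration under my own assumptions (the store and function names are invented): any model call becomes a thin wrapper that reads from, and writes back to, the same user-owned memory.

```python
class InMemoryStore:
    """Toy stand-in for a user-owned context layer (all names hypothetical)."""
    def __init__(self):
        self._data = {}

    def load(self, user_id: str) -> str:
        return "\n".join(self._data.get(user_id, []))

    def append(self, user_id: str, entry: str) -> None:
        self._data.setdefault(user_id, []).append(entry)

def run_with_context(model_call, store: InMemoryStore, user_id: str, prompt: str) -> str:
    """Treat any model as an interface over the same persistent context.
    model_call wraps whichever engine you like: hosted, local, or an agent."""
    memory = store.load(user_id)                   # same memory, any model
    reply = model_call(f"{memory}\nUser: {prompt}")
    store.append(user_id, f"User: {prompt}\nAssistant: {reply}")
    return reply
```

Because the memory lives outside the model, swapping `model_call` from one provider to another costs nothing: continuity travels with the user.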
The moat may become:
- who owns context
- who controls memory
- who controls identity
- who manages permissions
- who secures retrieval
Plurality calls this an “Open Context Layer,” essentially a portable memory substrate for the agentic web. And when you think about where AI is heading, the use cases start becoming obvious.
Imagine:
- AI agents collaborating using shared but permissioned memory
- persistent workspaces across multiple LLMs
- AI systems that retain long-term project continuity
- personal AI companions with memory persistence
- context-aware browsers and productivity tools
- selective monetization of expertise/context
- cross-agent coordination without constant re-prompting
These systems require infrastructure that current AI platforms are not designed for. Most LLMs today are still fundamentally stateless systems pretending to have memory. Even current “memory” implementations are usually:
- centralized
- provider-controlled
- opaque
- difficult to export
- difficult to revoke
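By contrast, a user-controlled memory would make export and revocation first-class operations rather than afterthoughts. A minimal sketch, with hypothetical names and no claim about any real provider’s API:

```python
import json

class PortableMemory:
    """Hypothetical user-side memory where export and revocation are
    built in, unlike today's provider-locked 'memory' features."""
    def __init__(self):
        self.entries = []

    def remember(self, provider: str, fact: str) -> None:
        self.entries.append({"provider": provider, "fact": fact})

    def export(self) -> str:
        # The full memory leaves with the user as plain JSON.
        return json.dumps(self.entries)

    def forget_provider(self, provider: str) -> None:
        # Revocation: drop everything a given provider contributed.
        self.entries = [e for e in self.entries if e["provider"] != provider]
```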
That is not sustainable long term. The broader trend is becoming clearer across the Oasis ecosystem too. Plurality is not the only team building around this thesis.
Ekai is building a private context layer for AI agents on Oasis as well. Their “Contexto” system focuses on persistent AI memory, scoped retrieval, and confidential context storage using ROFL and Sapphire.
Both projects are converging toward the same idea. AI systems need:
- persistent memory
- privacy-preserving infrastructure
- portable identity
- verifiable compute
- secure context routing
without exposing raw user data.
And honestly, this feels much more important than another incremental improvement in benchmark scores. People underestimate how broken the current AI experience still is.
The average user constantly:
- repeats instructions
- rebuilds context
- loses workflows
- switches tools manually
- re-explains projects
- manages fragmented histories
That friction becomes catastrophic once agents start operating autonomously for long periods. An agent without persistent context is basically trapped in short-term cognition. This is especially important for multi-agent systems.
Future AI workflows will probably involve:
- research agents
- coding agents
- browsing agents
- financial agents
- communication agents
all coordinating with one another.
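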
Without a shared, permissioned context layer, those systems become fragmented and unreliable. Portable context is probably a prerequisite for the “agentic web” everyone keeps talking about.
Of course, there are still major open questions. For example:
- Who hosts the context?
- How is revocation enforced?
- How do agents authenticate context access?
- How are permissions standardized?
- How do you prevent context poisoning?
- How do you price context storage/retrieval?
- How do you audit memory usage?
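Two of those questions, revocation and agent authentication, at least have plausible shapes. One common pattern is short-lived, user-signed capability tokens checked against a revocation list on every access. The sketch below is my own illustration of that pattern, not anything Plurality or Oasis has specified:

```python
import hashlib, hmac, time

USER_KEY = b"user-held-signing-key"   # hypothetical: only the user signs grants
REVOKED_AGENTS = set()                # revocation list checked on every access

def issue_token(agent_id: str, scope: str, ttl_seconds: int = 3600) -> dict:
    """Short-lived, user-signed capability for one agent and one scope."""
    exp = int(time.time()) + ttl_seconds
    msg = f"{agent_id}:{scope}:{exp}".encode()
    sig = hmac.new(USER_KEY, msg, hashlib.sha256).hexdigest()
    return {"agent": agent_id, "scope": scope, "exp": exp, "sig": sig}

def check_token(token: dict) -> bool:
    """Access requires a valid signature, an unexpired token, and no revocation."""
    msg = f"{token['agent']}:{token['scope']}:{token['exp']}".encode()
    expected = hmac.new(USER_KEY, msg, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(token["sig"], expected)
            and token["exp"] > time.time()
            and token["agent"] not in REVOKED_AGENTS)
```

Expiry bounds the damage of a leaked token; the revocation list makes “forget this agent” enforceable without trusting the agent to comply.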
This is still very early infrastructure, but I think the direction is extremely important. For years, crypto tried to compete with AI. Now some projects are realizing the bigger opportunity may be enabling the infrastructure AI systems are missing:
- privacy
- ownership
- interoperability
- coordination
- trusted execution
- persistent identity
And Oasis seems increasingly focused on exactly that layer. Not “AI on blockchain”, but confidential infrastructure underneath agentic AI systems. That is a much more compelling thesis to me.