
Building the Alternative: Making private markets machine readable

By Vega · Published April 20, 2026 · 12 min read · Source: Fintech Tag

Before the rise of AI, our systems only supported human work. Data models were designed primarily to store records and execute the deterministic workflows we created. They did not need to encode the deeper operational logic of a domain; that logic was needed only by — and lived in the heads of — the actors: the user, or the engineer writing code for how the system should behave.

Now, we expect the system to be both the record and the actor.

Expectations are, however, one thing and reality another for AI in private markets. Despite advances in adjacent areas like investment-team analysis, adoption in fund and client operations remains relatively shallow. Core processes across the investor lifecycle are still manual, and AI is workflow-assistive at best. In short, advances in reasoning ability have not yet translated into industry-grade automation here.

The reasons are structural. This is a domain where data is low-quality and siloed, tolerance for error is close to zero, and traceability is fundamental given what is at stake for client trust. The workflows require a depth of niche operational knowledge that is not publicly available and is difficult to codify even for human understanding, let alone to use to train a model.

We think the most complex domains are the ones most worth tackling. Structural problems do not yield to generic solutions — this moment requires something built from the ground up.

This is not just a data problem. It is an operational intelligence problem.

LLMs are semantic by nature. In order to produce strong output, they need context. Foundational models have been trained on publicly available internet data, making them very effective at general tasks but less so in niche verticals like fund operations. In private markets, the operational logic of the client business is, by nature, private. It is context that lives in the heads of experienced professionals, and takes the form of vast domain expertise that standalone agents lack today.

Take the word ‘commitment’, for example. In private markets, this carries a precise meaning which differs from its use in law or relationships. Humans working in their verticals know this implicitly, but AI does not. For the reader, this might seem simple, but agents commonly misinterpret commitment as ‘dedication’.

Even a simple query relies on AI to make countless assumptions of this kind — and every one of them is a vector for hallucination. This is why horizontal AI appears so capable on the surface yet so often disappoints in fund operations. It fails when required to handle edge cases, interdependent processes, or links across the investor and fund lifecycle, or to enforce overarching constraints.

What results is a classic chicken-and-egg problem. We give the LLM insufficient context, it fails, we lose trust, and restrict its access further. And so it fails again. Many have framed the AI reliability issue as simply a data problem: ‘give the model more data, and it will do better.’ We disagree. This is not just a data access problem, it is an operational intelligence problem.

Today, most conversations around building vertical-specific AI approach the operational intelligence problem at the micro-level. The focus is on fine-tuning very detailed instructions that we give to each agent for each specific workflow.

This is important, but for AI to work effectively within complex systems, we also need to build in the macro view. Beyond fine-tuning the workflow, agents are still missing the complex, overarching domain context within which they must operate.

We call this the ‘institutional memory’: the collective knowledge of how a firm or industry operates, and how the different pieces link together.

This logic must be made explicit. The way we approach this at Vega is as an overarching context layer — a control lifecycle ontology — between our vertical agents and the platform that GPs and LPs do their work on.

We call this layer Vega Intelligence. It takes the unwritten industry context that experienced professionals carry in their heads and makes it machine readable for agents. In doing so, we are making it possible to build an entirely new class of systems for the industry: AI-native operational infrastructure.

Why are we talking about ontologies now?

Ontologies are not a new idea. They have existed for decades in fields such as knowledge representation and semantic web research. What has changed is not the concept itself, but why it matters now.

Technology has never needed the same explicit knowledge that we do. Software was historically built only to support human action, and for that a classical reference architecture was sufficient, with humans coding deterministic workflows. Now we are asking AI to act in non-deterministic ways, and to act very well — with the judgement, context, and nuance that a human would bring.

Straight out of the gate, we are asking for a great deal whilst providing very little context in return.

What we call ‘unwritten institutional memory’ works very well to align human action behind a collective goal, but not when you want a machine to perform the same action. When the system becomes the actor, the absence of explicit operational intelligence becomes a structural limitation.

In addition to this, fund operations is not generic knowledge work. Actions are legally binding, regulated, cross-entity, and state dependent. In order to meet the industry’s bar for a minimum viable system, output must be reproducible, traceable, validated against fund-specific constraints, and defensible under scrutiny.

We cannot expect AI to learn reliably from unstructured exposure, especially in as unforgiving a domain as this one, where mistakes can cost millions.

In essence, we are asking the lawyer to skip law school and jump straight into court to argue a multi-year class action lawsuit, then blaming their lack of training data when they fail.

The engineering perspective

Engineers often hear the word ‘ontology’ and assume one of two things:
(i) that it is a philosophical abstraction, or
(ii) that it is a glorified database schema.

In practice, it is neither. It is useful to see it as the type system of an industry.

In programming languages, a type system defines what objects exist, what operations are valid, and what invariants must hold across the system. If a developer tries to divide a string by an integer, the compiler immediately stops the program because the operation violates the rules of the type system. In complex industries, however, that equivalent “compiler” rarely exists. Actions take place across loosely coupled systems, and the governing rules are often enforced socially or procedurally rather than programmatically.
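To make the analogy concrete, here is the string-divided-by-an-integer case in Python. (In Python the check happens at runtime rather than at compile time, but the principle is the same: the type system refuses an operation that violates its rules.)

```python
# A type system rejects operations that violate its rules.
# In a compiled language this happens before the program ever runs;
# Python enforces it at runtime by raising a TypeError.
try:
    result = "ten shares" / 3  # dividing a string by an integer
except TypeError as exc:
    print(f"rejected: {exc}")
```

An operational domain without such a check simply executes the nonsensical action and leaves someone to discover the damage later.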

An operational ontology effectively introduces a type system for the real world. In private markets, for example, a Commitment represents a pledge of capital by an investor. A Capital Call is a state transition, which reduces unfunded commitment, and a Distribution is a capital flow, which increases realised returns. An Investor Right defined in a legal agreement may create an Operational Obligation that the GP must fulfil during the lifecycle of a fund. The ontology encodes these relationships and invariants so that systems can understand how the domain behaves rather than simply storing records about it.
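As a minimal sketch of what encoding such primitives might look like, the following uses Python dataclasses. The class names, fields, and invariant are our own simplification for illustration, not Vega's actual model:

```python
from dataclasses import dataclass


@dataclass
class Commitment:
    """A pledge of capital by an investor to a fund."""
    investor: str
    fund: str
    total: int
    unfunded: int


@dataclass
class CapitalCall:
    """A state transition: calling capital reduces unfunded commitment."""
    commitment: Commitment
    amount: int

    def apply(self) -> None:
        # Invariant: a call can never exceed what remains unfunded.
        if self.amount > self.commitment.unfunded:
            raise ValueError("capital call exceeds unfunded commitment")
        self.commitment.unfunded -= self.amount


c = Commitment(investor="LP-1", fund="Fund I",
               total=10_000_000, unfunded=10_000_000)
CapitalCall(commitment=c, amount=2_500_000).apply()
print(c.unfunded)  # → 7500000
```

The point is that the relationship between a commitment and a capital call lives in the system model itself, not in a human's head.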

Once this structure exists, systems can validate actions in the same way a compiler validates code. In practice, this means encoding lifecycle state, constraints, and relationships directly into the system model so that every action can be evaluated against the same operational graph of the fund.
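One way to picture this compiler-style validation is as a set of invariants evaluated against any proposed state change. The invariants and state names below are purely illustrative:

```python
from typing import Callable

# Each invariant inspects a proposed state and returns True if it holds.
Invariant = Callable[[dict], bool]

INVARIANTS: list[Invariant] = [
    lambda state: state["unfunded"] >= 0,            # never over-call capital
    lambda state: state["called"] <= state["committed"],  # calls bounded by pledge
]


def validate(state: dict) -> list[int]:
    """Return the indices of invariants the proposed state violates."""
    return [i for i, inv in enumerate(INVARIANTS) if not inv(state)]


proposed = {"committed": 10_000_000, "called": 12_000_000, "unfunded": -2_000_000}
print(validate(proposed))  # → [0, 1]
```

A real implementation would evaluate proposed actions against the full operational graph of the fund, but the shape is the same: invalid actions are caught before they execute, not reconciled after the fact.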

This becomes especially powerful when paired with AI agents. Instead of asking a model to infer the rules of the domain from raw data every time it acts, we provide a structured environment in which those rules are already defined. The agent reads the ontology, understands the primitives and constraints that govern the system, and performs reasoning within those boundaries. In doing so, the space for error narrows considerably, reducing hallucination risk whilst making automated decisions far more reproducible and defensible.

We are modelling the ontology of the industry, not just the firm

Most players building ontologies today are doing so at the firm level — their own, or for a client. Vega is building an ontology at the industry level, mapping the fundamental primitives and constraints of private markets fund operations as a whole.

In order to build infrastructure that lasts in a new era, and to act as a true partner for GPs as they navigate a changing competitive landscape, our approach to innovation needs to simultaneously reflect the strategic priorities of GPs and the most minute levels of detail that they handle every day.

Whilst building an industry-level ontology is more complex, it unlocks new possibilities.

Portability: A firm-level ontology only works for that firm. An industry-level ontology means every GP using Vega can operate with a shared understanding of how private markets work — the same primitives, the same lifecycle logic, the same constraint model. This is structural intelligence that has been designed as the new industry standard, and built into the fabric of the products that GPs buy from Vega.

Interoperability: Workflows don’t just happen inside one firm. A shared industry ontology makes true ecosystem connectivity possible — it becomes a common language. Private markets need their public markets moment, with GPs, LPs, admins, counsel, and others moving in concert and standardising together — and Vega acts as a vehicle for consensus to drive that change.

AI as an industry participant: Humans in fund operations are not just individuals in their firm — they are participants in the industry. The AI workforce for private markets needs to be the same, and thus able to reason within the overarching industry’s conventions — not just the firm. When agents truly understand how the broader industry works, they can better handle edge cases, cross-lifecycle logic, and novel situations that a firm-level model might break on. Particularly as GPs begin to navigate more uncharted waters (in retail and wealth, for example), being able to navigate the new becomes just as important as working flawlessly in the old.

This requires a player that is at once an outside disruptor and an expert insider.

Building the industry’s intelligence layer is a gargantuan task. It is not primarily a machine learning problem. It is a systems problem and a domain problem, and that combination of capabilities rarely exists together.

The concepts involved are not obscure. Ontologies, knowledge graphs, and constraint engines are established tools. The hard part is having the vantage point required to use them correctly. You need to have seen enough of the industry, at sufficient depth, across enough firms, and to have the technical know-how to map it in a constructive, machine-readable way. Whilst it is a technical challenge, it is also an access challenge, a trust challenge, and fundamentally a time challenge. We have spent the last three years focused exclusively on the client operating model for alternative asset managers, building the trust of the most sophisticated managers in the world.

We built Vega for a new era of private markets characterised by scale, complexity, and technology. Fund operations appear chaotic when viewed through individual workflows and isolated systems — which is the view for most, including GPs themselves. We have observed and built for structural patterns across the entire investor lifecycle, across asset classes, jurisdictions, fund types, and manager structures.

Three capabilities — each hard to build in isolation, and harder still to combine — make this possible:

(1) Cross-firm operational visibility: Vega operates across the investor lifecycle, across multiple GPs. This gives us rare scale and a bird’s-eye view: recurring lifecycle structures, common investor rights, consistent obligation patterns and constraints. What looks like a unique workflow inside one fund is often a variation of a deeper universal concept, and what looks like standard process at another might be a local artefact of a legacy system. Distinguishing the primitive from the parochial requires operating at the level of the industry.

(2) Deep domain knowledge: Private markets fund operations are legally binding, state-dependent, and highly regulated. The difference between two clauses in a side letter, for example, can determine capital allocation rights, reporting obligations, or eligibility to participate in investments. Capturing this correctly requires more than structuring data: it requires the institutional knowledge that comes from working directly alongside legal, fund operations, and investor relations teams across the entire fund lifecycle. That knowledge is not found in documentation; it is earned in practice.

(3) Systems thinking: Constructing an ontology is a fundamental architectural exercise. It requires identifying stable primitives, defining the lifecycle states they move through, modelling relationships between funds, investors, commitments, and obligations, and encoding the invariant constraints that must always hold.

Knowledge graphs allow complex relationships to be expressed and traversed. Constraint engines ensure actions respect the rules of the system. Agent schemas define how automation interacts with the operational model. Together, these tools allow institutional knowledge to be expressed not just as documentation, but as an executable structure that both humans and machines can reason over.
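As a toy illustration of the knowledge-graph piece, the relationships mentioned earlier (an investor right creating an obligation, a capital call reducing a commitment) can be stored as typed edges and traversed by both humans and agents. The edges below are our own example, not Vega's actual model:

```python
# A toy knowledge graph: (source, relation) -> list of targets.
GRAPH = {
    ("InvestorRight", "creates"): ["OperationalObligation"],
    ("Commitment", "reduced_by"): ["CapitalCall"],
    ("Fund", "has"): ["Commitment", "OperationalObligation"],
}


def related(entity: str) -> list[tuple[str, str]]:
    """All (relation, target) pairs reachable from an entity in one hop."""
    return [
        (rel, target)
        for (src, rel), targets in GRAPH.items()
        if src == entity
        for target in targets
    ]


print(related("Fund"))  # → [('has', 'Commitment'), ('has', 'OperationalObligation')]
```

A production system would use a dedicated graph store and richer edge semantics, but the principle holds: the same traversable structure answers questions for an analyst and constrains the actions of an agent.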

Private markets have reached a scale where operational fragmentation is becoming untenable. AI has reached a point where systems are expected to act, not just store. Vega sits at the intersection of both: close enough to the operational core to observe the industry’s structure, whilst sitting at the bleeding edge of innovation in the space to codify it. This requires being at once an outside disruptor with the engineering ambition to reimagine infrastructure, and a trusted insider with the credibility and understanding to get it right.

Building the alternative

About Vega

Vega is the AI-native operating system for alternative asset managers to service and scale their client base.

The founding team consists of alternatives specialists from investment firms such as KKR, Blackstone, Elliott, and Goldman Sachs, along with top product and engineering talent from fintech scale-ups like Revolut and Trade Republic. Vega has raised over $28M in funding from Apollo, Motive, Picus Capital, Citi Ventures, and 60+ senior executives from the alternative investments industry.

For more information, please visit vega-alts.com

Authors

Bart Zuber: Staff Engineer and Chapter Lead at Vega, leading AI architecture efforts.

Christy Nganjimi: AI Researcher at Vega and PhD candidate in the Department of Engineering Science at the University of Oxford, focusing on AI research.

Sara Saloo: Head of Strategy & Growth at Vega and former strategy consultant at BCG.

This article was originally published on Fintech Tag and is republished here under RSS syndication for informational purposes. All rights and intellectual property remain with the original author. If you are the author and wish to have this article removed, please contact us at [email protected].
