
The Pearl Report ¶PRL

By Xavi · Published May 14, 2026 · 32 min read
Tags: Bitcoin · Ethereum · AI & Crypto

AI Native Digital Money


[Figure: How Pearl changes the unit economics of AI]

1. The Pitch

Every era has a resource so fundamental it becomes money. Bitcoin turns energy into currency. Trustless, global and censorship-resistant.

AI is doing something even more powerful: it turns energy into intelligence. Deployable anywhere and useful for almost everything.

Pearl is the first asset natively produced by AI and natively secured by AI. Every GPU cycle producing LLM tokens can simultaneously mint Pearl tokens with marginal extra electricity, zero wasted compute and one unified primitive. 2-for-1.

Proof of work represents humanity’s demand for energy, monetized. Pearl represents humanity’s demand for intelligence, monetized.

— Pearl Research Labs (@prlnet), April 27, 2026

2. Quick Facts

Protocol & tokenomics

Mainnet, week 2 (April 27, 2026 launch)

Resources

3. Two Markets, One Grid: A Fight for Megawatts

Two of the largest energy-consuming digital markets on earth are now bidding for the same megawatts. Bitcoin mining, after a decade of growth, draws over 2% of US electricity to compute hashes whose only purpose is the proof itself. AI data centers, on a steeper curve, draw electricity to compute matrix multiplications that produce inference and training. Both demands keep rising. The grid does not.

[Figure: An energy collision]

AI is buying every megawatt it can find

The most authoritative recent projection comes from the International Energy Agency’s Energy and AI report. Total data center electricity consumption is on track to roughly double between 2025 and 2030, from about 485 TWh to 950 TWh, reaching around 3% of global demand. The AI subset of that total is projected to triple over the same period. AI-focused data center electricity grew 50% in 2025 alone, three times the rate of data centers overall. Power density per AI server has gone up roughly 11x between 2020 and 2025 and is projected to grow another 4x by 2027.

The capital response on the AI side is at a scale not seen for new electricity buildout in living memory. In September 2024, Microsoft signed a 20-year power purchase agreement with Constellation Energy to restart Unit 1 of the Three Mile Island nuclear plant for 835 MW dedicated entirely to Microsoft’s AI data centers, backed by a $1.6 billion revival cost and a $1 billion federal loan. Meta followed in late 2024 with a nuclear request for proposals targeting 1–4 GW of new generation, then announced deals in January 2026 with Oklo, TerraPower, and Vistra for up to 6.6 GW by 2035. The OpenAI / Oracle / SoftBank Stargate Project, formally announced January 21, 2025, intends to invest $500 billion over four years for AI infrastructure in the United States, with around 7 GW of planned capacity already mapped across six sites.

Bitcoin miners chase the margin

On the Bitcoin side, something stranger is happening: Texas is paying its biggest Bitcoin miners to stop mining. The state’s grid is so overloaded that the operator now writes checks to miners in exchange for switching their machines off during peak demand hours. One miner pocketed over $32 million in the summer of 2023 for shutting its rigs down during a heat wave. By 2024, these shutdown payments covered roughly 15% of Riot Platforms’ total annual electricity bill. The largest customers of the grid have become the largest beneficiaries of not using it. The trend is accelerating: by 2025, miners and AI data centers together account for about 10% of total grid consumption, up nearly 60% in one year.

The bigger signal is what these mining operators are now doing with their facilities. Core Scientific signed multi-tranche HPC hosting contracts with CoreWeave totaling around 500 MW and over $4.7 billion in projected revenue across 12 years, then accepted CoreWeave’s all-stock acquisition offer in 2025. Hut 8 signed a 15-year, $7 billion lease with Fluidstack for 245 MW of AI capacity at its Louisiana River Bend site. Iris Energy (now IREN) signed a $9.7 billion AI cloud services pact with Microsoft. TeraWulf signed a $9.5 billion agreement with Fluidstack for 168 MW in Texas, alongside an earlier 70 MW lease to G42 in New York. Applied Digital has gone further, purpose-building HPC sites from the ground up rather than retrofitting mining warehouses. The infrastructure operators most committed to Bitcoin mining have already voted with their power contracts that the future of those buildings is AI compute.

Compute is leaving the planet

If you want a measure of how severe the energy bottleneck has become, look at where the world’s largest AI buildouts are starting to be planned. In January 2026, SpaceX filed plans with the FCC for up to a million Starlink-connected satellites for an orbital AI data center constellation. Blue Origin announced its 5,400-satellite TeraWave constellation around the same time. China announced a 200,000-satellite buildout focused on in-orbit processing. Starcloud, a Y Combinator-backed startup, deployed an NVIDIA H100-class system in space in 2025 and trained a Google Gemini variant in orbit. Google itself published Project Suncatcher in November 2025, a feasibility study arguing that solar-powered orbital data centers become cost-competitive with terrestrial ones once Starship-class launch reaches around $200/kg and ~180 launches per year, projected for the mid-2030s. Latitude Media reported in May 2024 that energy had become “the primary bottleneck” for AI; eighteen months later, the response is launching the data centers off the planet.

The asymmetry

Pearl Research Labs has framed the collision in a single sentence:

Bitcoin’s security is competing with AI for energy. Pearl’s security scales with AI adoption.

Two systems are bidding for the same input. Bitcoin’s proof has no downstream value, and the operators most committed to it have already started reassigning their facilities to AI workloads. Pearl is the protocol that acknowledges the vote they’ve already cast.

4. What Pearl Is

Bitcoin pulled off the first transmutation: it turned electricity into digital scarcity, and the trick worked. Pearl is the second-order transmutation. The same kilowatt-hour now produces both intelligence (an AI inference or a training step) and scarcity (a Pearl block) in one motion.

The protocol is a Proof-of-Useful-Work blockchain built as a fork of Bitcoin. Where Bitcoin miners compute SHA-256 hashes that have no value beyond securing the network, Pearl miners compute matrix multiplications they would already be doing for AI training or inference. The block opening condition is satisfied as a side effect of the multiply, which means the energy that secures the chain is the same energy that produced an AI token, a fine-tuning gradient step, or any other GEMM-based workload.

[Figure: Pearl’s cuPOW consensus algorithm: the 6-step flow]

How cuPOW actually works

Step 1. Pick the matrices. The miner picks any two large tables of numbers, A and B. In practice, these are almost always the actual numbers an AI model is working with: the model’s weights and the input it’s about to process.

Step 2. Lock the inputs. The protocol generates a unique fingerprint of A, B, and the current state of the blockchain, using a hash function called BLAKE3. What this enforces: the miner can’t swap in different matrices halfway through, and can’t reuse the same work for a different block.

Step 3. Add anti-cheat noise. Two small random adjustment tables E and F are generated automatically from the fingerprint. The miner computes A’ = A + E and B’ = B + F, and multiplies the adjusted tables instead of the originals. What this enforces: a dishonest miner can’t shortcut by picking trivially easy inputs like all zeros. The adjustments make the work as hard as multiplying two completely random tables, no matter what the miner submitted.

Step 4. Multiply and hash. The miner multiplies the adjusted tables piece by piece, breaking the work into small chunks called tiles. After each tile is computed, its result gets fingerprinted. Once in a while, a tile’s fingerprint matches the current difficulty target. When that happens, the miner has won the right to publish the next block. What this enforces: the running fingerprint depends on every step of the computation. Skipping any step, or shortcutting the math, produces a different fingerprint that won’t match. The only way to find a winning tile is to actually do the multiplication.

Step 5. Recover the AI calculation. Because the noise added in Step 3 has simple structure, the miner can subtract it out at the end to recover the original A × B result: the actual AI calculation they wanted. This subtraction step is fast, much smaller than the cost of the multiplication itself.

Step 6. Prove the work. The miner generates a compact mathematical proof (under 60 KB) that all six steps were done correctly, using a cryptographic technique called a zk-SNARK. The rest of the network checks this proof in milliseconds, without re-doing the multiplication. Trust without redo.
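The six steps can be compressed into a toy Python walkthrough. This is a sketch, not the protocol: `blake2b` stands in for BLAKE3, the rank-1 noise tables and the "lowest hash" difficulty check are simplifying assumptions of mine, and the zk-SNARK of Step 6 is omitted entirely.

```python
import hashlib
import numpy as np

rng = np.random.default_rng(7)
n, tile = 64, 16

# Step 1: the miner picks A and B (in practice: weights and activations).
A = rng.standard_normal((n, n)).astype(np.float32)
B = rng.standard_normal((n, n)).astype(np.float32)

# Step 2: commit to the inputs plus current chain state (BLAKE3 in the
# real protocol; blake2b is a stand-in here).
commitment = hashlib.blake2b(
    A.tobytes() + B.tobytes() + b"prev-block-hash").digest()

# Step 3: derive structured noise from the commitment. Rank-1 tables are
# an illustrative choice that keeps the Step 5 correction cheap (O(n^2)).
noise = np.random.default_rng(int.from_bytes(commitment[:8], "big"))
u = noise.standard_normal((n, 1)).astype(np.float32)
v = noise.standard_normal((1, n)).astype(np.float32)
s = noise.standard_normal((n, 1)).astype(np.float32)
t = noise.standard_normal((1, n)).astype(np.float32)
Ap, Bp = A + u @ v, B + s @ t

# Step 4: multiply tile by tile, fingerprinting each tile's result and
# keeping the lowest hash as a proxy for the difficulty check.
C_prime = np.empty((n, n), dtype=np.float32)
best = None
for i in range(0, n, tile):
    for j in range(0, n, tile):
        block = Ap[i:i + tile, :] @ Bp[:, j:j + tile]
        C_prime[i:i + tile, j:j + tile] = block
        h = hashlib.blake2b(commitment + block.tobytes()).digest()
        best = h if best is None or h < best else best

# Step 5: subtract the structured cross-terms to recover the original
# A @ B, using only O(n^2) extra work thanks to the rank-1 structure.
C = C_prime - (A @ s) @ t - u @ (v @ B) - u @ ((v @ s) @ t)
assert np.allclose(C, A @ B, atol=1e-3)
```

The real scheme hashes tiles inside the GPU kernel and proves the whole pipeline with a zk-SNARK; the point of the sketch is only the shape of the flow: commit, perturb, multiply-and-hash, recover.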

The result is one of those rare engineering objects where the marginal overhead of mining is small. The whitepaper estimates around 10% added cost over a vanilla GEMM, which is the basis of the “two-for-one” framing.

Blind verification of useful compute

The ZK proof structure does one additional thing worth naming. The proof certifies that a valid MatMul happened with the correct perturbation; it does not reveal what A and B were. From the chain’s perspective, Pearl is blind to the contents of the workload it just secured. A bank can mine while running a proprietary model on internal trading signals. A pharma company can mine while running drug-target inference. The chain learns that compute happened, to spec. The contents of that compute stay private to whoever ran it. This property is blind verification of useful compute, not end-to-end private inference. Full-LLM proof-of-inference remains an open problem the team has published as part of the Polymath challenge.

Chain specs

Above the cuPOW protocol, Pearl is a deliberately conservative Bitcoin fork with several modern upgrades: faster blocks (194 seconds vs Bitcoin’s 10 minutes), a smoother emission curve with no halving shocks, a supply cap of 2.1 billion coins with half emitted in the first 4 years, and a more responsive difficulty adjustment. The chain also ships with post-quantum cryptography wired but not yet activated, so the upgrade can flip without a hard fork the day quantum becomes a practical threat. Full technical specifications are in Quick Facts and the whitepaper.

Fair launch

Pearl launched on April 27, 2026 as a fair-launch chain. No premine, no insider allocation, every coin emitted through Proof-of-Work from genesis. The team mined from day one to bootstrap the chain ahead of the public launch, consistent with the whitepaper’s intentionally low initial difficulty (nbits=0x1b00ffff). That bootstrap window is normal for a fair-launch chain (someone has to mine the first blocks), and the absence of a premine or allocation means the team’s only avenue to ¶PRL was the same as anyone else’s: turn on a GPU and mine. Two weeks in, the chain has 7,793 unique mining addresses across the first 49,147 blocks, with the top single address holding 2.64% of blocks mined to date and the top 10 combined at roughly 11.9%. That’s a healthier early distribution than most fair-launch chains achieve at this stage, and notably more spread out than Bitcoin’s pool concentration today.

5. The Theoretical Foundation

“It would be exciting to come up with a scheme in which evaluating the pricing function serves some additional purpose,” wrote Cynthia Dwork and Moni Naor in 1992, in the same paper that introduced the concept of a proof of work. Three decades later, in a 2019 blog post, Vitalik Buterin called the same idea “probably not feasible.” In April 2025, three researchers published a paper that quietly answered the question affirmatively, and Pearl is that paper’s reference implementation.

The paper is “Proofs of Useful Work from Arbitrary Matrix Multiplication” by Ilan Komargodski and Omri Weinstein, published as IACR ePrint 2025/685 and on arXiv as 2504.09971. Its main result, stated in plain language, is the construction of a Proof-of-Useful-Work protocol for the task of matrix multiplication with 1+o(1) multiplicative overhead over a naïve MatMul, where the prover (the miner) can pick the input matrices themselves. In plain words: a way to make useful matrix multiplication count as a proof of work for blockchain consensus, with essentially zero added cost compared to just doing the multiplication.

The “prover picks the input” property is what makes this hard. Most prior PoUW proposals required the verifier or the network to assign the computational task. That works for some research distributions like folding proteins or screening drug candidates, but it does not compose with the way production AI work happens. An AI lab’s matrices are determined by what it is training or inferring, not by what a blockchain network decides it would like computed today. The breakthrough in the Komargodski-Weinstein paper is that the miner can submit any A and B and the protocol still produces a PoW certificate of prescribed hardness, with negligible overhead, with no way for a malicious miner to game the system by picking easy inputs.

The security argument rests on a hardness conjecture. Breaking the protocol’s soundness, meaning finding a way to fool the verifier into accepting work that wasn’t done, would require solving a batched version of a low-rank linear-equation problem faster than is currently believed possible. In plain words: cheating the system would require solving a math problem that the cryptography community currently believes has no fast solution. The paper does not claim certainty. It claims a precise conjecture: no significantly faster algorithm exists for this problem, and breaking the protocol would require finding one. This is the same epistemic stance Bitcoin’s security has on SHA-256: provable resistance to currently-known attacks, with an open question about future advances.

A reader who wants to verify any of the above can read the paper directly. It is 23 pages, written in standard cryptography-paper style, and assumes graduate-level familiarity with cryptographic protocols and linear algebra. The Polymath project’s open-problems page, where Pearl Research Labs has opened seven mathematical and two economic problems to the community, is written for a broader audience and is a good entry point for non-specialists.

That “probably not feasible” verdict had a price tag. Ethereum spent seven years migrating from Proof-of-Work to Proof-of-Stake, completing The Merge in September 2022. The public rationale rested on two pillars: environmental concern (the same critique major asset managers were leveling at Bitcoin at the time, citing the energy intensity of SHA-256 mining) and the working assumption that useful Proof-of-Work was unreachable. Both pillars have since aged poorly. BlackRock, one of the loudest pre-2021 critics of Bitcoin’s energy footprint, filed for a spot Bitcoin ETF in June 2023 and now runs IBIT, the largest spot Bitcoin ETF by AUM. Meanwhile AI’s electricity demand has overtaken crypto’s by orders of magnitude with a fraction of the ESG attention. And the PoUW impossibility argument itself was disproved by the paper this section opens with.

The pricing consequence of choosing PoS over PoW is visible after the fact. As the influential Frax founder @samkazemian put it, “the genius of PoW over PoS isn’t decentralization but that PoW forces the L1 asset to be valued as a SoV scarce commodity rather than a P/E DCF asset, since you can’t coherently model cash flows in PoW. ETH was the only other credible SoV digital silver/oil in the world pre-PoS.” ETH has corrected roughly 74% against BTC since The Merge, the kind of re-rating you’d expect when a digital commodity gets reclassified as a yield-bearing cash-flow asset. Pearl ships as the thing ETH was once a candidate to be: a Proof-of-Work asset whose security mechanism happens to produce the work the world is already paying for.

6. Team

Pearl is the rare chain whose founders’ published research reads like the protocol’s design document.

In theoretical cryptography, a productive senior researcher publishes 5 to 10 strong papers a year. Ilan Komargodski has averaged 11 a year for the last five years, with 17 in 2023 and 13 in 2025. Komargodski’s pre-Pearl career has been almost entirely devoted to the exact primitives Pearl ships: BLAKE3-style commitments, oblivious computation, secret sharing, multi-party computation, time-lock puzzles, verifiable delay functions, program obfuscation, post-quantum schemes. He is a Scientist at NTT Research (since 2019) and faculty at the Hebrew University, Weizmann Ph.D. in 2017, two postdoc years at Cornell Tech.

Omri Weinstein sits on the complexity side of the same axis. Associate Professor at the Hebrew University, currently on leave from Columbia where he has been on the CS faculty since 2017. Princeton Ph.D. in 2015, then Simons Society Junior Fellow at NYU’s Courant Institute. NSF CAREER Award (2018, $500K) and Packard Fellowship finalist in 2019. His research covers complexity, communication, and information theory, with major work on the direct sum and product conjectures, the KRW composition conjecture, and dynamic data structure lower bounds. He also spent 2017–18 as Chief Scientist at Vast Data, the enterprise SSD storage company, and more recently served as a Senior Research Scientist at NVIDIA after their $300M acquisition of Deci AI in 2024. He knows what high-performance systems look like outside the seminar room.

Pearl is what happens when one of the most prolific cryptographers of the decade and a CAREER-award complexity theorist build a chain together. The protocol is the natural endpoint of research they have been pointing toward for years. The legal entity behind the operation is Impossible Labs Ltd., domiciled in Israel.

Two weeks in, an ecosystem has already begun forming around the chain. The Discord has grown to about 750 members, and independent contributors have shipped tools the official site doesn’t yet match. Lord of Pearls, built by @LordKuba, runs an independent full node and publishes real-time network metrics: top miners, holder rankings, orphan attribution, peer geographic distribution, refreshed every five seconds. The mining-address and distribution numbers I cited in §4 come from this explorer; methodology is documented and the underlying data is auditable against any pearld node.

7. The Two-for-One in Practice

The cleanest demonstration of what Pearl actually does sits in Table 1 of the whitepaper. On April 27, 2026, the team published Pearl-certified versions of Meta’s Llama 3.3 70B and 32B: modified so the model mines Pearl while serving inference. On a standard 4×H200 GPU server, the certified 70B model produces 18,291 tokens per second.

The chart below compares four ways of running the model across the server’s four H200 GPUs. TP (tensor parallelism) splits each matrix operation across GPUs. PP (pipeline parallelism) divides the model’s layers sequentially across GPUs. DP (data parallelism) replicates the model on each GPU and processes different inputs in parallel. Each strategy trades memory against throughput, and the right choice depends on model size and hardware budget.

[Figure: Llama benchmarks]

The headline 18,291 tokens/sec is the DP=4 result. Meta’s bf16 weights can’t run DP=4 on this hardware (four full model replicas don’t fit in VRAM). Pearl-certified can. In the two configurations both versions report data for (PP=4 and TP=4), Pearl-certified throughput equals or beats Meta’s reference. MMLU accuracy stays within a fraction of a percent of Meta across all tested configurations (MMLU is a standard benchmark that measures language model knowledge across academic subjects). So Pearl-certified Llama is at least as fast as running Meta’s own weights, and mines roughly 981 TMADs (Tera Multiply-Add operations) per second on the side in its fastest config.

The economic implication is straightforward. The whitepaper estimates the marginal cost overhead of running cuPOW at roughly 10% of an H100-hour, which at current cloud GPU prices works out to a fraction of a dollar per hour. The first-year emission rate is 59,931 ¶PRL per hour, distributed across the global hashrate. At current OTC pricing, the dollar value of that emission stream runs into the tens of thousands of dollars per hour across the entire network. Any GPU operator who allocates capacity to Pearl mining captures a share proportional to their fraction of the global hashrate. The threshold question for an AI lab considering integration is whether that share exceeds the ~10% overhead cost. For operators with non-trivial GPU capacity at today’s token price, the math closes.
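A back-of-envelope version of that threshold question, as a sketch: the ~10% overhead and the 59,931 ¶PRL/hr emission come from the text; the GPU counts, the $2.50/hr cost, and the $0.50 token price are hypothetical placeholders, not live network data.

```python
def hourly_mining_margin(gpus, network_gpus, gpu_hour_cost,
                         overhead_frac, emission_per_hour, token_price):
    """Hourly profit of mining on top of existing AI workloads, assuming
    reward share is proportional to share of network hashrate."""
    share = gpus / network_gpus
    revenue = share * emission_per_hour * token_price  # value of PRL earned
    cost = gpus * gpu_hour_cost * overhead_frac        # ~10% cuPOW overhead
    return revenue - cost

# A hypothetical operator: 1,000 of 100,000 network GPUs, $2.50/hr each,
# 10% overhead, 59,931 PRL/hr emission, $0.50 per PRL.
margin = hourly_mining_margin(1_000, 100_000, 2.50, 0.10, 59_931, 0.50)
# margin > 0 means the emission share exceeds the overhead cost.
```

The decision flips on the operator's share of global hashrate and the token price; everything else is fixed by the protocol and the GPU market.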

Worth noting on the model side: Simran Arora, an ML systems researcher at Together AI and incoming Caltech assistant professor, appears as a public contributor on the Pearl-certified Llama model card on HuggingFace. Her academic work on Monarch Mixer and other GEMM-based architectures sits in the same efficiency-research domain Pearl operates in. Together AI is one of the largest neutral GPU and inference platforms in the market.

The Pearl-certified Llama 3.3 70B and 32B models are ready to download and run today. The mining software is shipping in the open-source repository with native integration into vLLM (the standard tool for serving large language models in production), and a follow-on integration with SGLang (another widely-used inference framework) has been announced. An AI inference provider could start running this in production the same day the chain went live.

8. Pricing Pearl

The whitepaper proves a single equilibrium claim that does most of the work for understanding what ¶PRL should be worth. At any given moment, the marginal cost of mining equals the value of the tokens being minted:

τ · G(t) · D(t) = p(t) · e(t)

τ is the mining overhead fraction (~10% of an H100-hour for useful miners). G(t) is the number of GPUs devoted to mining. D(t) is the cost of compute per GPU-hour. p(t) is the token price. e(t) is the emission rate in ¶PRL per unit time.

[Figure: Pearl’s mining equilibrium graph]

Translated, the equation says the dollar cost of all mining at time t equals the dollar value of all tokens emitted at time t. If the left side were larger, mining would be unprofitable and miners would leave. If the right side were larger, mining would be profitable enough that new miners would enter until the equilibrium reasserts itself.

Rearranged, the equation gives p(t) = τ · G(t) · D(t) / e(t). The token price is proportional to the total mining cost (which is the total useful compute price of the network) divided by the emission rate. As the network grows (more GPUs in G) and as the global price of compute holds or rises (D stable or higher), the token price rises with the total dollar value of compute committed to the network.
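The rearranged relation is a one-liner in code. A sketch with placeholder inputs: only the ~10% overhead and the 59,931 ¶PRL/hr first-year emission rate come from the text; the GPU count and $/hr cost are hypothetical.

```python
def equilibrium_price(tau, gpus, compute_cost, emission_rate):
    """p(t) = tau * G(t) * D(t) / e(t): the price at which the hourly
    cost of mining equals the hourly value of emitted tokens."""
    return tau * gpus * compute_cost / emission_rate

# Hypothetical network: 10% overhead, 100,000 GPUs at $2.50/hr,
# 59,931 PRL emitted per hour.
p = equilibrium_price(0.10, 100_000, 2.50, 59_931)
# Doubling the GPUs (or the cost of compute) doubles the floor price.
```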

Where the model breaks

The model assumes equilibrium. In the early life of a chain that no AI lab has integrated in production yet, the equilibrium is mechanically broken. The token price sits well above the cost-of-compute floor because expectations of future growth are baked into today’s bids. The whitepaper calls this the “future expectations multiplier,” F = p·e / (τ·G·D), and notes that for a new chain F is typically much greater than 1. The price stays high on belief alone until enough miners enter to bring G(t) up to its equilibrium level. If belief evaporates before the miners arrive, F collapses and the price falls back to the compute-cost floor.
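The multiplier itself is just a ratio. A sketch with hypothetical placeholder inputs (only the 59,931 ¶PRL/hr emission and ~10% overhead come from the text):

```python
def expectations_multiplier(price, emission_rate, tau, gpus, compute_cost):
    """F = p * e / (tau * G * D). F near 1: price sits at the
    cost-of-compute floor. F >> 1: price is running on belief."""
    return (price * emission_rate) / (tau * gpus * compute_cost)

# A hypothetical $2.00 token, 59,931 PRL/hr emission, against 100,000
# GPUs at $2.50/hr with 10% overhead:
F = expectations_multiplier(2.00, 59_931, 0.10, 100_000, 2.50)
# F well above 1 here: the price embeds expected future growth in G.
```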

The honest version of the thesis is therefore conditional. If Pearl reaches and holds equilibrium, the token’s reference price is tied to global useful compute spend, which is one of the largest capital expenditure lines being deployed today. If Pearl fails to reach equilibrium (because no AI lab integrates the vLLM miner in production within a reasonable window, or because the FP4 and training upgrade gets stuck on unsolved Polymath problems), the token’s reference price collapses to whatever the rump of speculation supports. The claim holds if the integration happens. The integration is the gate.

9. Three Types of Players

The interesting analysis lives in the three-way tension among incumbent mining operators, AI labs and inference providers, and the neutral GPU clouds in between. Pearl changes the payoff matrix for each of them in a different way, and the order in which they integrate is the story of the next 12 months.

[Figure: The path to adoption]

Mining operators

The largest publicly-traded mining operators have already started repositioning, as the AI hosting deals catalogued in §3 (Core Scientific, Hut 8, IREN, TeraWulf, Applied Digital) make clear. Several of these firms now publicly expect 50–70% of revenue from AI by end of 2026.

The macro signal is less dramatic but pointing the same direction. Bitcoin’s network hashrate peaked around 1.1 ZH/s in late 2025 and has been grinding sideways-to-lower since, sitting around 950 EH/s on 7-day averages in mid-May 2026, roughly 15% off the high. That isn’t a 2021-style shock collapse, when regulatory shutdowns in China cut hashrate 60% in weeks before a V-shaped recovery in five months via ASICs migrating to North America. This is the longest retrace the network has seen, with Q1 2026 marking the first quarterly hashrate decline since 2020. BTC drew down from around $125k in October 2025 toward the $60k–$86k range, pushing hashprice (revenue per PH/s per day) to multi-year lows. To be fair to the cycle, Bitcoin’s built-in difficulty adjustment will eventually do its work and a BTC price recovery would pull hashrate back up. Even so, the operators most exposed to BTC mining have built a structural hedge that no longer depends on Bitcoin’s price cycles.

[Figure: What’s pushing miners toward AI hosting: the longest hashrate retrace since 2020, down 15% and profitability at multi-year lows.]

That reassignment is the part that matters for Pearl, and worth being precise about what reassignment means. Bitcoin’s SHA-256 ASICs cannot mine Pearl, which runs on GPUs. The pivot here is at the facility layer: the same operators converting their buildings to host AI and GPU workloads. The GPU capacity that comes online inside those buildings can mine Pearl on the side. The operator’s edge is the building, the power contract, the cooling, the operational staff, and a growing portfolio of GPU hosting deals with AI customers. Any of those GPU-hours can mine Pearl with zero marginal capital cost. For operators who have already pivoted, Pearl is pure additive revenue on top of the AI hosting business they’re already running. They have the operational discipline and a strategic incentive to capture the new monetary primitive native to their new business. Most likely first integrators.

AI labs and inference providers

AI labs and inference providers face the question from the opposite direction. Their unit economics tighten every quarter as models grow faster than hardware does, and any rebate on COGS is meaningful. The §7 threshold question resolves in favor of mining for operators with non-trivial GPU capacity. The brake is reputational and operational. Some AI labs may not want their compliance regime entangled with crypto mining at this stage. Some may not want the public association. Some may need their inference SLAs guaranteed without the ~10% overhead floor.

Once one major AI lab runs Pearl in production publicly, the cost of not mining becomes more visible than the cost of mining for the rest.

Neutral GPU clouds

The GPU clouds in between (CoreWeave, Lambda, Crusoe, and the long tail of regional operators) are the most interesting actor. They sell GPU-hours to AI labs at a markup over their cost. Pearl raises the floor on every hour they sell because an idle hour, instead of being lost revenue, becomes a meaningful share of ¶PRL emission. Better, they can offer Pearl mining as a value-added bundle: “Run your inference on us, we mine Pearl in your idle cycles, the rebate accrues to you (or to us, depending on the contract).”

Their pace of integration depends on customer demand. If AI labs ask for it, they integrate fast. If labs treat Pearl as a compliance question, the clouds wait.

10. What’s Strong, What’s Open

Pearl’s strengths cluster around one observation: every marketing claim is grounded in either a peer-reviewable proof (§5) or a shipping implementation with measurable benchmarks (§7). The fair launch produced healthy early distribution (§4). Two-week traction is exceptional by prior PoW-launch standards. The chain ships with post-quantum opcodes wired and reserved.

Two open items matter. First, the tokens-to-compute loop is half-closed at compute.pearlresearch.ai, where buyers pay USD for Pearl-mining compute and receive ¶PRL. The full vision is bidirectional settlement (¶PRL itself spent on compute, the chain settling the contract trustlessly), at which point Pearl evolves into something resembling an OpenRouter for GPUs with ¶PRL as the currency of computation. Until that ships, ¶PRL is a store of value tied to compute costs, not yet a working currency that flows back into compute.

Second, cuPOW as shipped handles INT8 inference well, but FP8/FP4 inference and full training workloads require the Pearl-GEMM extension currently in research. Until those land, large training runs and frontier FP4 inference cannot mine Pearl, and the window for frontier-AI-lab integration is narrower today than it will be in 12 months.

The Polymath project is where the team has published nine open problems: a faster noise scheme via Hadamard random rotation, FP8 and FP4 inference support, training support, a TensorCore-native hash function, a quantified ASIC-resistance analysis, a zero-overhead proof-of-inference scheme for entire LLM forward passes, plus two economic problems (what ¶PRL should be worth in a hypothetical zero-overhead regime, and what utilities a verifiable AI compute ledger enables: model tracing, AI-agent voting, a “stablecoin” for trading compute outside the dollar system). Anyone with a background in cryptography, GPU systems engineering, or mechanism design can submit work directly via pearlpolymath.com.

11. The Trade

The simplest way to describe the trade is this: Pearl is AI-native money. Its equilibrium price is tied to global AI compute spend, and global AI compute spend is one of the largest capital expenditure lines in human history.

[Figure: FDV Comparison (for fun)]

The mechanism comes from the equation in §8. As the network grows and as the cost of compute holds or rises, the token price rises with the total dollar value of compute committed to the network (under standard AI compute trends, where deployed compute doubles annually and compute costs halve every two years, the whitepaper’s model implies the token price slightly more than doubles every two years).

The reference surface is every MatMul performed on the global GPU base. Pearl’s addressable market is the operation that defines the arithmetic of the AI economy itself. Prior decentralized AI tokens have addressed narrower surface areas. Bittensor (TAO), the most prominent comparable, covers a network of subnets running heterogeneous ML tasks; its fully diluted valuation peaked above $15B in April 2024 and sits around $6.7B today. I covered Bittensor’s architecture in my TAO report from 2023. Filecoin, Render, and Akash address narrower DePIN verticals (storage, rendering, general compute marketplaces). Pearl’s surface area is larger than any of them: every matrix multiplication on earth, which under current AI growth trends is on track to be the majority of all compute.

How the trade actually behaves over the next 12 months

In equilibrium, the model in §8 says ¶PRL trades at the cost-anchored price. In disequilibrium (which is where the chain is today and will be for some time), ¶PRL trades on the future-expectations multiplier F. The interesting moments are the catalysts that move F:

Anyone considering an active position should track the four catalysts above with explicit milestones. Nothing here is investment advice.

12. Forward Outlook & What to Watch For

Beyond the four price-catalysts named in §11, two other signals will tell whether Pearl matures into a first-class L1 over the next 12 months.

Distribution maturation. At week 2, ¶PRL trades only OTC. There are no major centralized exchange listings yet, which means liquidity is thin, price discovery is slow, and any meaningful capital allocator faces execution risk on entries and exits. This typically resolves within months for a chain of Pearl’s caliber, but until it does, the practical investability ceiling stays low. Every fair-launch chain progresses through the same stages: OTC-only at launch, first DEX listings, a first tier-2 CEX, then progression toward tier-1 venues. Ravencoin took about 9 months to reach Binance. Kaspa took roughly 2 years before Kraken. Pearl’s setup is faster than either, in part because the AI x crypto narrative is now well-developed and exchange listing teams have built the diligence muscle. A reasonable expectation is a first major venue within 6 months, with progression beyond that depending on velocity of adoption and team relationships.

Network and security maturation. Pearl ships with post-quantum opcodes (OP_CHECKXMSSSIG and Pay-to-Merkle-Root) wired but not active. Activation timing depends on the security community’s threat assessment and would happen through a coordinated soft-fork. Hashrate distribution data published over time will confirm or break the early decentralization signal documented in §4; the Lord of Pearls explorer already exposes per-address share. A meaningful concentration attack surface only opens if a single operator passes ~25% of network hashrate; that threshold is worth tracking.
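The concentration check described above reduces to a one-line threshold test over per-address hashrate shares. A minimal sketch follows; the address keys and share values are made-up illustrative data, not real network figures, and the explorer’s actual export format is an assumption.

```python
# Hypothetical concentration check, assuming per-address hashrate shares
# (fractions summing to ~1.0) like those the Lord of Pearls explorer
# exposes. The sample data below is invented for illustration.

ATTACK_THRESHOLD = 0.25  # single-operator share the text flags as risky

def max_operator_share(shares: dict[str, float]) -> float:
    """Largest single-address fraction of total network hashrate."""
    return max(shares.values())

def concentration_flag(shares: dict[str, float]) -> bool:
    """True if any one operator exceeds the ~25% attack-surface threshold."""
    return max_operator_share(shares) > ATTACK_THRESHOLD

sample = {  # illustrative only, not real addresses or shares
    "addr_a": 0.12,
    "addr_b": 0.09,
    "addr_c": 0.07,
    "addr_rest": 0.72,  # long tail aggregated
}

print(concentration_flag(sample))
```

The hard part in practice is not the arithmetic but attribution: one operator can split hashrate across many payout addresses, so per-address share is a lower bound on true concentration.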

13. Series Note

The Ravencoin Report (2018) and the Kaspa Report (2023) were both written at moments that turned out to look like this one: a Proof-of-Work chain with an outstanding team and a genuinely novel consensus algorithm, caught early. Both projects went on to reach multi-billion-dollar valuations. Setups like that come around once every three to five years, and this one feels bigger than either of the prior two. That’s also why I want to be careful about what could go wrong, which is what the next section is about.

14. Author’s Notes

The bull case for Pearl is the bulk of this report. Three risks worth naming before signing off.

The integration gate (§8): if no major AI lab integrates the vLLM miner in production within 9 months, ¶PRL stays on belief alone. The roadmap gate (§10): if Polymath problems on FP4 and training stay unsolved, Pearl misses the window of relevance for the largest GPU workloads. The cryptanalysis risk: an unforeseen attack on cuPOW’s hardness conjecture could surface. The cryptographic community has a history of finding such attacks years after a scheme ships, and Pearl’s soundness rests on a direct-product hardness assumption without decades of public scrutiny yet.

The §11 catalysts are the inverse of the first two risks. What would also turn me bearish, outside that catalyst list: visible team turnover, or a CEX listing that fails to sustain volume after the pop fades.

The chain is at week 2. Most of what I’d want to see settled isn’t yet. But what is settled is unusually strong: published cryptography, shipped implementation, healthy early distribution, and active research engagement. Most chains can’t claim that at any stage of their lifecycle, let alone at the start.

Disclaimer

Nothing in this report is investment advice. The information here is for general informational purposes only. Investing in cryptocurrencies involves significant risk, including the potential loss of capital. Anyone considering exposure to ¶PRL or any other cryptocurrency should do their own research, consult appropriate professional advisors, and never commit more capital than they can afford to lose entirely.

The cuPOW protocol’s security rests on a hardness conjecture that has not yet been subjected to multi-year public scrutiny. The chain is two weeks into its mainnet life. Several technical capabilities discussed in this report are on the project’s roadmap rather than in production. The analysis here is based on information available as of May 2026 and may not reflect material developments that occur after publication.

Sources

Pearl official

Prior research by the author (PoW chains and decentralized compute)

Founder backgrounds

Energy and AI

Nuclear and grid response

Orbital data centers

Bitcoin miner pivots

Foundational / academic

