How Soroban’s CAP-0066 Killed My LayerZero Finding

By Dan23RR · Published April 10, 2026 · 11 min read · Source: DeFi Tag

An honest write-up of a High-severity finding I thought I had, and why the platform killed it before I could submit.

The moment

It was late. I was three files deep into the LayerZero V2 Stellar endpoint for the Code4rena audit, reading the DVN contract’s authorization flow line by line. My eyes landed on hash_call_data in dvn.rs:

fn hash_call_data(env: &Env, vid: u32, expiration: u64, calls: &Vec<Call>) -> BytesN<32> {
    let mut writer = BufferWriter::new(env);
    let data = writer
        .write_u32(vid)
        .write_u64(expiration)
        .write_bytes(&calls.to_xdr(env))
        .to_bytes();
    env.crypto().keccak256(&data).into()
}

Standard stuff. A multisig signs this hash, the contract recomputes it inside __check_auth, and checks the result against a UsedHash map to block replays. I scrolled to the storage declaration and stopped.

#[persistent(bool)]
#[default(false)]
UsedHash { hash: BytesN<32> },

Persistent storage. In Soroban, persistent storage has a Time-To-Live. If a key is not touched for long enough, the entry gets archived. And the declaration has a default: false. My pattern-matcher went off. Archived entry plus default value equals replay window. I spent the next four hours writing it up as High.

I was wrong. This post is about the reasoning that got me there, the thing Soroban does that I didn’t know about, and why a four-hour mistake is more valuable to publish than another confirmed Low.

Why I’m publishing a false positive

Published security research skews heavily toward confirmed findings. Those are the ones that pay, the ones that prove competence, the ones that fill bounty reports. False positives rarely escape private notes. The aggregate effect is that new auditors read confirmed finding after confirmed finding and come away thinking the job is a three-step loop: pattern-match, write up, collect.

The actual job is closer to the opposite. Most of the work is pattern-match, doubt, verify, kill your own finding. Publishing only the survivors creates a distorted picture of the craft and, worse, sets an expectation that a competent auditor is one who is never wrong.

I was wrong on this one. The reasoning chain that killed it is more useful — to me, and to anyone auditing Soroban with an EVM background — than another confirmed Low would have been. So this post exists.

Context, for readers who don’t live in Stellar

LayerZero V2 is an omnichain messaging protocol. Messages sent from one chain to another get verified off-chain by a set of Decentralized Verifier Networks (DVNs) that the application configures. Each DVN signs attestations and submits them on-chain through its own contract. On Stellar, LayerZero runs on Soroban, the Rust-based smart contract platform, and each DVN instance implements Soroban’s Custom Account Interface to authorize its own operations.

The replay protection lives in that authorization flow. When a DVN operation needs to run, a multisig of signers signs a hash of the call data. The DVN contract recomputes that hash inside __check_auth, looks it up in the UsedHash map, and reverts if the hash is already there. Otherwise it stores the hash as true and proceeds. Standard replay-guard pattern. The concern I had was not with the pattern itself, but with what happens to the UsedHash entry when Soroban's storage lifecycle touches it.

The finding as I first built it

Here is the check in auth.rs:

// 4. Replay protection
let hash = Self::hash_call_data(&env, vid, expiration, &calls);
if DvnStorage::used_hash(&env, &hash) {
    return Err(DvnError::HashAlreadyUsed);
}
DvnStorage::set_used_hash(&env, &hash, &true);

The macro-expanded storage declaration makes used_hash return bool, defaulting to false when the key is absent.

My argument went like this. Soroban persistent storage has a TTL. Persistent entries that are not touched get archived after the TTL expires, and the TTL is finite and short enough that an attacker with patience could reasonably wait it out. A UsedHash entry written today will, if nobody touches it, eventually have its TTL expire. Once the entry is archived, the contract reading it should — according to my EVM-shaped intuition — see the default value, which is false. And false means the hash has never been used. Which means the same signed attestation could be replayed against a DVN that had processed it weeks or months earlier.
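To make the flawed chain concrete, here is the mental model I was running, reduced to a toy. All names and types below are illustrative, not Soroban or LayerZero APIs: storage as a flat map where TTL expiry deletes the entry outright, so a later read falls back to the default.

```rust
use std::collections::HashMap;

// Toy model of the EVM-shaped intuition: a flat map where an absent key
// is indistinguishable from `false`. This is NOT how Soroban behaves,
// which is the point of this post.
struct NaiveStorage {
    used: HashMap<[u8; 32], bool>,
}

impl NaiveStorage {
    fn new() -> Self {
        Self { used: HashMap::new() }
    }

    // EVM-style read: a missing key silently returns the default.
    fn used_hash(&self, hash: &[u8; 32]) -> bool {
        *self.used.get(hash).unwrap_or(&false)
    }

    fn set_used_hash(&mut self, hash: [u8; 32]) {
        self.used.insert(hash, true);
    }

    // What I assumed TTL expiry did: delete the entry outright.
    fn expire(&mut self, hash: &[u8; 32]) {
        self.used.remove(hash);
    }
}
```

Under this model, `set_used_hash` followed by `expire` followed by `used_hash` returns false, which is exactly the replay window I wrote up. The model is wrong for Soroban, as the next section explains.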

I went looking for reasons this wouldn’t work. The first one I hit was sitting two dozen lines above the replay check, also in auth.rs:

if expiration <= env.ledger().timestamp() {
    return Err(DvnError::AuthDataExpired);
}

The TransactionAuthData struct carries its own expiration timestamp, and the contract rejects anything past it. That looked like it killed the attack immediately: if the signed authorization expires before the storage entry's TTL expires, the replay window is sealed by the application layer before the platform layer ever gets involved.

I should have stopped there. Instead I talked myself past it. The expiration field is set by whoever constructs the TransactionAuthData, and nothing in the contract forces it to be short. A DVN operator who wanted to pre-authorize a long-lived operation could in principle set expiration to a timestamp months or years in the future. In that edge case, the expiration check would not fire before the storage TTL had already archived the UsedHash entry. So, I reasoned, the storage-lifecycle angle was still live, even if narrower than I first thought. I wrote the finding up as High severity with a note about the preconditions, made myself coffee, and went to bed.
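The edge case I talked myself into reduces to a single comparison. A sketch, with an assumed illustrative TTL; the real archival window depends on network settings and on whether anything bumps the entry's TTL in the meantime:

```rust
// The replay angle is only conceivable if the signed authorization
// outlives the storage entry's archival point. Timestamps in seconds;
// the TTL passed in is an assumption for illustration, not a network
// constant.
fn replay_window_exists(expiration: u64, written_at: u64, ttl_seconds: u64) -> bool {
    let archived_at = written_at.saturating_add(ttl_seconds);
    expiration > archived_at
}
```

A short-lived authorization closes the window at the application layer; an expiration set months out keeps the apparent window open, which is the narrow precondition the finding came to rest on.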

What Soroban actually does

The flaw was not in the code. It was in my mental model of Soroban storage.

On EVM, storage is binary. A slot either has a value or it returns zero, and zero is indistinguishable from “never written.” My entire reasoning chain was built on transplanting that model onto Soroban: if the key is archived, the read returns the default, and the default means “not used.” That transplant is wrong, and it has been wrong since before Protocol 23.

Soroban has three states for a persistent entry, not two. The entry can be live (readable and writable normally), archived (TTL expired, not directly readable by contract code), or evicted (long-archived, moved out of active validator state). An archived entry is not the same thing as an absent entry, and it has never been the same thing. The state transition between live and archived is handled by the protocol, not by the contract, and the protocol has always refused to let contract code read through archival as if it were a return-to-default.
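The corrected model, sketched in plain Rust. This is a simplification of the protocol's actual ledger-state handling, not Soroban SDK code:

```rust
// Three lifecycle states for a persistent entry. "Archived" preserves
// the value; it is not the same as "absent", and contract code cannot
// read through it as if it were.
#[derive(Debug, Clone, PartialEq)]
enum EntryState {
    Live(bool),      // readable and writable normally
    Archived(bool),  // TTL expired; value preserved, not directly readable
    Evicted(bool),   // long-archived; moved out of active validator state
}

// What a contract-level read can observe. Only a genuinely absent key
// falls back to the declared default.
fn contract_read(entry: Option<&EntryState>) -> Result<bool, &'static str> {
    match entry {
        None => Ok(false), // never written: the default applies
        Some(EntryState::Live(v)) => Ok(*v),
        Some(EntryState::Archived(_)) | Some(EntryState::Evicted(_)) => {
            Err("archived entry: must be restored before it is readable")
        }
    }
}
```

The `None` arm is the only path to the default, and an archived `true` never reaches it.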

The piece I didn’t know sits in CAP-0066, which activated on Stellar mainnet with the Protocol 23 upgrade on September 3, 2025. CAP-0066 introduced automatic in-place restoration of archived persistent entries during InvokeHostFunctionOp. The mechanism has two branches, and both matter here — each independently kills the attack I was chasing.

Branch one: the transaction’s footprint declares the archived UsedHash entry in its restore list. When the client simulates the transaction against the Stellar RPC, the simulation auto-populates this list for any archived entry the contract is going to touch. Before the host function runs, the protocol restores the entry from the archive, charges the restoration fee to the transaction's resource budget, and makes the original value — true — available to the contract. DvnStorage::used_hash() then returns true, and the contract reverts with HashAlreadyUsed.

Branch two: the attacker hand-builds a transaction that does not declare the archived entry in the restore list, trying to force the contract to “see” the archived state as a default. The transaction does not run. The protocol rejects the invocation at the host-function boundary because the footprint refers to an archived entry that has not been restored. The contract code never executes.

There is no third branch. The attacker cannot construct a path where the contract observes false for a key that was previously true and is now archived. I had spent four hours writing up a High-severity vulnerability that was ruled out twice over by the platform and once more by the expiration field in TransactionAuthData, and I had not fully understood any of the three at the moment I started writing.
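The two branches, from the attacker's seat, as a sketch. Simplified: the real gate lives in the host's footprint validation, not in contract code.

```rust
#[derive(Debug, PartialEq)]
enum ReplayOutcome {
    RejectedBeforeExecution, // branch two: archived entry not in restore list
    RevertedHashAlreadyUsed, // branch one: restored, replay guard fires
}

// The archived UsedHash value is the original `true` by construction:
// it was written as true before the TTL expired, and archival preserves it.
fn attempt_replay(declared_in_restore_list: bool) -> ReplayOutcome {
    if !declared_in_restore_list {
        // The host refuses to run the invocation: the footprint references
        // an archived entry that has not been restored.
        ReplayOutcome::RejectedBeforeExecution
    } else {
        // The protocol restores the entry (the attacker's own transaction
        // pays the restoration fee), the contract reads the original
        // `true`, and the replay guard reverts.
        ReplayOutcome::RevertedHashAlreadyUsed
    }
}
```

There is no input to this function that yields a successful replay, which is the shape of the argument that killed the finding.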

What makes this more embarrassing in hindsight, and more interesting to publish, is that CAP-0066 did not create this protection. It streamlined it. Before Protocol 23, a developer who wanted to access an archived entry had to call RestoreFootprintOp manually to bring it back into live state before reading it. The manual step was friction, but the underlying invariant was the same: you could not read an archived entry as if it were a default. Soroban never had EVM's binary storage semantics. My mental model was not out of date relative to Protocol 23 — it was structurally incompatible with the platform from day one. CAP-0066 was simply the document I happened to read that forced me to notice.

The generalizable lesson

The specific mistake was about CAP-0066. The general mistake was larger: I was auditing a non-EVM platform with EVM-trained pattern recognition, and my patterns did not compose with the target’s semantics. Three things I’m taking forward from this, in descending order of how often I think they’ll matter.

The first is that storage lifecycle semantics are not portable across platforms. On EVM, “absent” means zero, and zero is a value the contract can read. On Soroban, “archived” does not mean “absent” and has never meant “absent”: the original value is preserved in the archive, and the only question is whether the archive entry has to be pulled back in manually (pre-Protocol 23) or gets pulled back in automatically via the footprint (Protocol 23 and later). Any finding that depends on a storage entry “being gone” must be verified against the platform’s archival rules, not against intuition carried over from another platform. If the reasoning chain contains the phrase “after the TTL expires, the contract will see the default value,” the next sentence should be a link to a protocol document.

The second is that when a finding dies, it dies. The instinct to soften it into the nearest plausible-sounding smaller claim is the same instinct that produced the finding in the first place, and it produces bad reports. When I first realized the auto-restoration killed the replay, my reflex was to reach for a consolation prize: “OK but an attacker can still force extra fees on the DVN somehow, right?” No, they cannot. In this specific case, if an attacker submits a replay against an archived hash, the attacker pays the restoration fee and then hits HashAlreadyUsed; if they try to skip the restore, their transaction is rejected before the contract runs. Nobody can force the DVN operator to pay anything. The severity does not "drop from High to Low" — it drops to zero. I had to sit with that rather than patch together a smaller finding to justify the four hours. Reclassifying dead findings as live-but-smaller is how auditors accumulate Lows they should not have submitted.

The third is that the protocol version matters at least as much as the contract version. Protocol 23 activated on Stellar mainnet on September 3, 2025, and that is the upgrade that shipped CAP-0066’s automatic restoration. The DVN contract looks identical against Protocol 22 and Protocol 23, but the operational attack surface is different between the two: pre-P23 a legitimate operator who let a UsedHash entry expire had to remember to call RestoreFootprintOp before reusing the same call path, while post-P23 the footprint pulls the entry back in on its own. Neither version ever let contract code observe the archived key as false. The lesson is not that the contract was safe only after P23 — it is that I had to pin my reasoning to the active protocol version of the target network before I was allowed to claim anything about what the contract could or could not observe. I had not done that work.

What I’m doing differently

Three concrete changes to my process after this one.

Before writing up any finding that depends on storage expiration, eviction, or reset behavior on a non-EVM platform, I will spend ten minutes searching the platform’s recent protocol proposals for anything related to the storage model. The search would have killed this finding in minute two. It cost me zero minutes to skip and four hours to recover from.

I will add an explicit “model check” step for every finding on an unfamiliar platform. For each assumption the finding relies on, I mark whether it is a generic correctness assumption or a platform-specific one. EVM training generates a lot of the second kind disguised as the first kind, and this mistake shows how easily they slip through when the pattern is strong enough to feel universal.

Before I format and submit any finding I’ve rated as High, I will spend thirty minutes adversarially trying to kill it myself. Not checking for typos. Trying to kill it. If it survives thirty minutes of hostile reading by its own author, it goes to the report. If it does not, it stays in my notes as a learning artifact. This post is what one of those learning artifacts looks like when it escapes my notes and becomes something I share.

Closing

The adrenaline spike in the late-night reading was real. The vulnerability wasn’t.

The four hours were not wasted. They produced this write-up, and a sharper mental model for the next Soroban audit I do, which is probably worth more than the bounty I thought I was chasing. The next time I see a persistent storage entry on a non-EVM chain and feel the pattern-match firing, I will pause longer and read the protocol docs first. Next audit, I'll probably still flag something that looks like a replay. I'll just be faster to kill it.

