Is It Time to Let AI Take Over Code Auditing?
By Ananthan Indu Rajasekharan | April 2026
The math has already changed. The real question isn’t whether AI belongs in the auditor’s chair; it’s how much of the chair we hand over, and what we keep behind glass.
What broke the old way of doing security
A “code audit” is exactly what it sounds like: experts read through software line by line, looking for bugs that an attacker could exploit. For DeFi protocols and crypto bridges, this is the last line of defence before real money is at risk. Until very recently, this work was something only highly trained humans could do well.
That changed in late 2025. In December, Anthropic’s security research team, the Frontier Red Team, published a study showing that today’s leading AI models, including Claude Opus 4.5, Claude Sonnet 4.5, and OpenAI’s GPT-5, could break smart contracts on their own. The team built a test set called SCONE-bench, comprising 405 real smart contracts that had been hacked between 2020 and 2025. The AI models, working autonomously, produced fully working attacks for 207 of them, a 51% success rate, equivalent to about $550 million in stolen funds in simulation [1].
The more striking number was the one that came next. The same researchers pointed the AI at 2,849 newly deployed contracts on BNB Chain that had no known bugs. The AI found two genuinely new vulnerabilities and built working attacks for them. The total compute bill for scanning all 2,849 contracts: $3,476. That works out to $1.22 per contract [1][2].
Four months later, on April 7, 2026, Anthropic announced something more powerful. Claude Mythos Preview is a frontier model that the company chose not to release publicly because of its cyber capabilities. Instead, it shared the model under a defensive program called Project Glasswing — a coalition that includes AWS, Apple, Google, Microsoft, JPMorgan, NVIDIA, and over 40 other organisations responsible for critical software [3][4]. To show what changed: when the previous model, Claude Opus 4.6, was given known Firefox bugs to exploit, it succeeded twice in several hundred tries. Mythos succeeded 181 times on the same test [5]. Anthropic put it bluntly: the same upgrades that make the AI better at fixing bugs make it better at exploiting them [5].
This isn’t happening in a vacuum. Security firm CrowdStrike publishes a yearly report tracking how fast attackers move once they’re inside a network. In 2021, the average attacker took 98 minutes to spread from one compromised computer to others. In 2024, that fell to 48 minutes. By 2025, it was 29 minutes. The fastest case CrowdStrike has on record was 27 seconds. AI-enabled attacks rose 89% year over year [6].
April 2026 made it real. On April 1, attackers drained Drift Protocol, a major Solana-based exchange, of roughly $285 million in 12 minutes. Investigators at Elliptic and TRM Labs traced it to North Korea’s Lazarus Group [7][8]. Seventeen days later, the Kelp DAO bridge was compromised for another $293 million [9]. Two attacks, half a billion dollars, in three weeks.
This is the world AI auditing has to be evaluated against.
The economics has already broken
Here is the cleanest way to see how far the math has shifted.
A traditional smart contract audit from a top-tier firm costs $50,000 to $500,000 and takes weeks. The AI study above scanned a smart contract for $1.22. Even allowing for the fact that the cheap scan and the expensive audit aren’t doing exactly the same thing, the gap is enormous and getting wider. The same Anthropic research found that the cost of producing a working exploit fell roughly 70% over the course of four Claude releases. An attacker today can achieve about 3.4 times as many successful attacks per dollar of compute as six months ago [1][2]. Anthropic’s projection: simulated exploit revenue is doubling every 1.3 months [10].
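A quick back-of-the-envelope check makes the gap concrete. The inputs below are the figures cited above; the Python is just the arithmetic, not anything from the studies themselves:

```python
# Back-of-the-envelope economics, using only the figures cited above.

audit_cost_low = 50_000   # USD: low end of a top-tier human audit
scan_cost = 1.22          # USD: per-contract AI scan (SCONE-bench follow-up)

# How many AI scans one human-audit budget buys:
print(f"{audit_cost_low / scan_cost:,.0f} scans per cheapest human audit")
# -> 40,984 scans

# If simulated exploit revenue doubles every 1.3 months, the implied
# growth over a year is 2^(12/1.3):
print(f"{2 ** (12 / 1.3):,.0f}x per year")
# -> roughly 600x
```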
What this means in practice: every smart contract ever deployed in DeFi can now be scanned for less than the cost of a single human audit. There is no version of human-led auditing at any reasonable budget or headcount that can keep up with how fast new contracts are shipping.
The defensive side has the same math working in its favour and is starting to use it. In April 2026, Coinbase unveiled an internal AI auditor called Frosty that completes a full smart contract audit in one to two hours at roughly 1/100th the cost of a manual review. According to Coinbase’s internal benchmarks, its F1 score (which balances catching real bugs with false alarms) was 1.5x that of the next-best tool [11]. The same month, blockchain security firm CertiK opened its AI Auditor for public testing. After six months of internal use, the tool flagged the correct vulnerability in 88.6% of 35 real-world Web3 hacks from 2026 [12].
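For readers unfamiliar with the metric Coinbase is citing: F1 is the harmonic mean of precision (what fraction of flagged findings are real) and recall (what fraction of real bugs get flagged), so it punishes both noise and misses. A minimal illustration, with made-up numbers:

```python
def f1_score(true_positives: int, false_positives: int, false_negatives: int) -> float:
    """Harmonic mean of precision and recall."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Hypothetical tool: finds 80 of 100 real bugs while raising 20 false alarms.
print(f1_score(true_positives=80, false_positives=20, false_negatives=20))  # 0.8
```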
Both teams went out of their way to say the same thing. Coinbase’s blog explicitly states that Frosty “does not replace the traditional human-led smart contract auditing process” and “can still miss deep and complex vulnerabilities” [11]. CertiK positions its tool for “pre-deployment self-review, protocol upgrade diffs, pre-audit triage, and post-audit” work [19]. Translation: it helps developers clean up obvious problems before and after human auditors do the deep work.
So if the question is whether AI belongs in code auditing, the economics has already answered it. The harder question is what role it should play.
The capability is real — but cuts both ways
The full list of what AI can now do in security is, frankly, hard to believe.
Mythos Preview’s confirmed findings include a 27-year-old denial-of-service flaw in OpenBSD (an operating system used in security-critical systems), a 17-year-old remote code execution flaw in FreeBSD’s network file sharing system, and thousands of additional high- and critical-severity bugs across major operating systems, web browsers, and core open-source infrastructure [5]. In one demonstration, the AI used a known vulnerability ID and the corresponding code change to produce a working Linux exploit that gave it root access, the highest level of system control, in under a day, for under $2,000 in compute [13].
To check whether the AI’s bug reports were trustworthy, Anthropic hired professional security contractors to manually review 198 of them. The contractors agreed exactly with the AI’s severity rating in 89% of cases and were within one severity level in 98% [5]. The UK’s AI Security Institute ran its own tests and found that Mythos succeeded on 73% of expert-level “capture the flag” hacking challenges, exercises that no AI model had been able to complete at all before April 2025 [14].
But the same capability that helps a defender find bugs is what an attacker can rent for $1.22 per contract. Anthropic has been transparent that disclosure of the thousands of vulnerabilities Mythos found follows a coordinated 90+45-day cycle (the standard window: 90 days to fix, 45 more if needed). Translation: while the disclosure process catches up, the overwhelming majority of these bugs remain unpatched [5].
Here is the structural problem this creates. Finding bugs is no longer the bottleneck. Fixing them is. Coordinating disclosure, getting patches written, getting them tested, getting them deployed across thousands of systems that all share the vulnerable code — that part is still mostly done by humans, and it doesn’t get faster just because the AI got faster at finding things.
This matters to anyone considering handing auditing to AI. The value of the next bug found by AI is approaching zero — there are already too many to fix. The value of the next bug triaged, prioritised, and actually fixed is enormous. An AI auditor that just produces a longer list of findings without telling anyone which ones matter is adding noise, not removing risk.
The other half of the problem: AI also writes the bugs
Any honest answer about AI auditing has to address the uncomfortable fact that the same kind of AI is also used to write the buggy code in the first place.
Veracode is one of the world’s largest application security companies. In 2025, it ran a study of over 100 different AI models across 80 coding tasks. The result: about 45% of all AI-generated code contained security flaws that match the OWASP Top 10 — a widely recognised list of the most dangerous web application vulnerabilities. For Java specifically, the failure rate was over 70%. For Python, C#, and JavaScript, it ranged from 38% to 45%. When the test focused on defending against cross-site scripting (one of the most common web attacks), the AI failed 86% of the time [15][16].
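To make the cross-site scripting figure concrete, here is the shape of that failure sketched in Python with Flask. The route and parameter names are invented for illustration; the study itself spanned many languages and frameworks:

```python
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

@app.route("/greet-unsafe")
def greet_unsafe():
    # Reflected XSS, the pattern the benchmark measures: untrusted input
    # echoed straight into HTML. ?name=<script>...</script> runs in the browser.
    name = request.args.get("name", "")
    return f"<h1>Hello, {name}</h1>"

@app.route("/greet-safe")
def greet_safe():
    # The one-line defence the models failed to apply 86% of the time:
    # escape the input before it touches markup.
    name = request.args.get("name", "")
    return f"<h1>Hello, {escape(name)}</h1>"
```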
Importantly, bigger AI models did not perform meaningfully better than smaller ones on security. This isn’t a problem that fixes itself by waiting for GPT-6 [16].
The picture inside actual companies looks worse. Application security firm Apiiro studied tens of thousands of code repositories at Fortune 50 enterprises between December 2024 and June 2025. Among developers using AI assistants, the number of monthly security findings rose from about 1,000 to over 10,000 — a tenfold increase in six months [17]. The AI did help with simple errors: typos in code dropped by 76%, and basic logic bugs dropped by 60%. But the dangerous mistakes went the other way. Privilege escalation paths — the kind of flaw that lets an attacker promote themselves from a normal user to an administrator — rose 322%. Architectural design flaws — the kind of problem that’s expensive to fix because it affects how the whole system fits together — rose 153%. Apiiro summarised it in one line: “AI is fixing the typos but creating the timebombs” [17][18]. The same study found AI-assisted developers accidentally exposed sensitive cloud credentials and API keys at nearly twice the rate of developers who didn’t use AI [17].
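The credential-exposure finding is easy to picture. Here is a toy version of the kind of secret-scanning check such findings come from; the two patterns are simplified illustrations, not Apiiro’s actual ruleset:

```python
import re

# Simplified illustrations of leaked-credential patterns. Real scanners
# use far larger, entropy-aware rulesets; these two are for shape only.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "hardcoded API key/secret": re.compile(
        r"(?i)(api[_-]?key|secret)\s*=\s*['\"][A-Za-z0-9/+=_-]{16,}['\"]"
    ),
}

def scan_for_secrets(source: str) -> list[str]:
    """Return the names of any credential patterns present in source text."""
    return [name for name, rx in SECRET_PATTERNS.items() if rx.search(source)]

snippet = 'aws_key = "AKIAIOSFODNN7EXAMPLE"  # pasted in by an AI assistant'
print(scan_for_secrets(snippet))  # ['AWS access key ID']
```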
Put these two findings together. The same broad type of AI is being used to write code — and is introducing 10,000 new security findings a month — and to break code, at $1.22 per scan. The cheapest way to attack many smart contracts in 2026 is to ask an AI to find the holes in code that another AI just wrote. “Vibecoded” smart contracts — protocols thrown together with AI assistance and minimal review — are the easiest targets in the entire ecosystem.
This is not an argument against AI auditing. It’s the strongest possible argument for it. If AI is writing the code, AI has to be reading the code. The combination people should fear is AI generation paired with human-only auditing — and that, unfortunately, is the setup most projects are currently shipping.
What “AI takes over” actually looks like
The phrase “let AI take over” suggests a simple yes-or-no choice. The actual industry doesn’t work that way. A smarter question: which parts of the auditing pipeline can AI run on its own, and which parts still need humans (or mathematical proof techniques) at the wheel?
The answer, looking across 2025–26, is now visible — and it’s bigger than any single AI model.
The starting gun: DARPA’s AI Cyber Challenge. In August 2025, the U.S. Defense Advanced Research Projects Agency held the final round of a two-year competition, AIxCC, at DEF CON 33, the world’s largest hacking conference. Seven teams built fully autonomous AI systems — no humans allowed once the systems were running — that competed to find and fix bugs in critical open-source software. The teams collectively found 54 of 70 planted vulnerabilities (77%), patched 43 of them (61%), and as a bonus discovered 18 brand-new real-world vulnerabilities in production code. Team Atlanta (a partnership of Georgia Tech, Korea’s KAIST and POSTECH, and Samsung Research) won the $4 million first prize with a system called ATLANTIS. Trail of Bits, a respected New York security firm, won $3 million for second place with a system called Buttercup. Theori took $1.5 million for third place [22][23]. Crucially, all seven systems were released as open-source software after the competition, and DARPA added another $1.4 million in prizes specifically to push the technology into real-world critical infrastructure use [22]. The point of AIxCC was to prove that AI vulnerability-finding had crossed a viability threshold — and to seed open-source defensive tools for everyone else to build on.
Trail of Bits goes AI-native. In a remarkably candid March 31, 2026, retrospective, Trail of Bits described what happened next inside its own walls. A year earlier, only 5% of the firm was on board with using AI in audits; the other 95% ranged from sceptical to actively opposed. As of March 2026, the firm runs 94 plugins, 201 specialised skills, and 84 specialised AI agents internally. On the right type of engagement, AI-augmented auditors are finding up to 200 bugs a week [24]. The firm open-sourced its skills repository so other teams can use it with Claude Code [25]. This isn’t vendor marketing — it’s a 14-year-old security firm restructuring its core paid audit work around AI tools and publishing the playbook for others. Telling detail: Coinbase’s Frosty actually uses Trail of Bits’ open-sourced skills in two of its workflow phases [11], which is exactly the cross-pollination that AIxCC’s open-source-everything policy was designed to create.
Real-time defence at the transaction level. While auditors review code before it ships, a different category of companies watches contracts after they go live. Hypernative is the leader in this space. The platform monitors 75+ blockchains in real time, identifies 300+ threat types, and reports detecting 98% of hacks more than 2 minutes before the malicious transaction is finalised — protecting roughly $100 billion in customer assets across 200+ customers [26][27]. In June 2025, the company raised a $40 million Series B round led by Ten Eleven Ventures and Ballistic Ventures [26]. On April 1, 2026, Hypernative announced a partnership with TRM Labs (a major blockchain investigations firm). The combined system can simulate and block a suspicious transaction before it executes, instead of just flagging it after the fact [27]. Forta Network, OpenZeppelin’s Defender platform, and Hexagate offer comparable real-time monitoring across overlapping chains [28].
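The “simulate and block before execution” flow is worth sketching, because it is what separates this category from after-the-fact alerting. In outline, with every name hypothetical (neither Hypernative nor TRM Labs has published a public API that looks like this):

```python
# Conceptual outline only; all names here are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class PendingTx:
    sender: str
    target: str
    calldata: bytes

def simulate(tx: PendingTx) -> dict:
    """Placeholder: a real system dry-runs the transaction against a forked
    chain state and diffs the resulting balances and permissions."""
    return {"drains_treasury": False, "counterparty_sanctioned": False}

def should_block(tx: PendingTx) -> bool:
    """Pre-transaction enforcement: veto before finality, not alert after."""
    outcome = simulate(tx)
    return outcome["drains_treasury"] or outcome["counterparty_sanctioned"]
```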
The developer-side toolkit has caught up, too. Cyfrin’s Aderyn is an open-source code analyser for Solidity (the main programming language for Ethereum smart contracts). In late 2025, Cyfrin added support for the Model Context Protocol (a standard for AI tools to communicate), so AI agents like Claude Code and Cursor can now call Aderyn directly as a tool [29]. OpenZeppelin — the firm whose code libraries are used in roughly half of all serious Ethereum contracts — shipped a similar tool, Contracts MCP, to bring its security standards into AI workflows [28]. Sherlock launched an AI auditing tool in beta in September 2025, trained on findings from its own audit competitions [28]. Remix IDE, the most popular development environment for Solidity, now ships with a built-in AI assistant that supports Anthropic, OpenAI, and Mistral models [28]. Even Certora — the leading firm in formal verification (a mathematical technique for proving that a contract behaves correctly) — partnered with Cork and Hypernative in November 2025 to align its mathematical proofs with AI-driven security tooling rather than compete against it [30].
This collective picture shows that AI auditing is not driven by any single model release. It’s a multi-vendor, multi-discipline build-out:
- Government-funded foundational research (AIxCC)
- Open-source tools any team can use (Buttercup, Aderyn, Trail of Bits’ skills repo)
- Commercial defensive products (Frosty, CertiK AI Auditor, Hypernative)
- Integration into the everyday tools developers already use (Remix, OpenZeppelin Contracts, MCP, Aderyn’s MCP server)
And every serious team across this stack agrees on one specific thing: AI helps with the parts of auditing that scale poorly with human attention, but it does not replace human judgment on the highest-stakes decisions.
What “highest-stakes” means here is specific. AI is not yet trusted to:
- Reason about how a protocol’s economics will hold up under adversarial conditions
- Decide whether the assumptions on which a contract is built are actually safe
- Design how the protocol’s governance and upgrade process should work
- Tell the difference between a theoretical bug and one that’s actually exploitable in the wild
What AI is trusted to do, and now does at machine speed:
- Scan code line by line for known bug patterns (a toy version of this is sketched after this list)
- Check whether the new code introduces problems that have appeared in past hacks
- Watch live transactions for signs of an active attack
- Generate proof-of-concept exploits to confirm whether a flagged finding is real
- Triage long lists of findings into prioritised lists that humans can actually act on
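To ground the first item on that list, here is a toy version of line-by-line pattern scanning over Solidity source, written in Python. The two patterns are real, well-known Solidity bug classes, but the regex matching is a deliberately naive illustration; production tools like Aderyn and Slither parse the code properly rather than pattern-match text:

```python
import re

# Two well-known Solidity bug classes, matched naively for illustration.
# Production analysers (Aderyn, Slither) parse the code; they don't regex it.
PATTERNS = {
    "tx.origin used for authorisation": re.compile(r"require\s*\(\s*tx\.origin"),
    "low-level call (check the return value)": re.compile(r"\.call\s*[({]"),
}

def scan(solidity_source: str) -> list[tuple[int, str]]:
    """Return (line_number, finding) pairs for every pattern hit."""
    findings = []
    for lineno, line in enumerate(solidity_source.splitlines(), start=1):
        for finding, rx in PATTERNS.items():
            if rx.search(line):
                findings.append((lineno, finding))
    return findings

sample = """
contract Vault {
    function withdraw() public {
        require(tx.origin == owner);
        msg.sender.call{value: balance}("");
    }
}
"""
print(scan(sample))
# [(4, 'tx.origin used for authorisation'), (5, 'low-level call (check the return value)')]
```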
The cleanest way to put this: AI is taking over code review, not code auditing. Those two have always been technically distinct, but the distinction collapsed in practice because human auditors did both. Splitting them apart again — letting AI handle breadth and pattern-matching at machine speed, while humans and mathematical proof tools handle the deepest reasoning — is the architecture that survives the new threat environment.
Where this leaves protocol teams
There’s a version of the title question with a clear answer: Should AI handle the bulk of vulnerability scanning, regression testing, exploit verification, and real-time threat monitoring? Yes. The economics and the capability case are both decisive, and any protocol holding meaningful capital that doesn’t have AI-led monitoring built in is operating on a security budget that no longer reflects how attackers work.
There’s another version that has a much harder answer: Should AI be the final word on whether a protocol is safe to deploy? Almost certainly not — and possibly not for a long time.
The Drift Protocol drain is the cleanest illustration of why. The attack didn’t exploit a smart contract bug. It was a multi-week social engineering operation that abused a legitimate Solana feature called “durable nonces” — a way to pre-sign transactions so they execute later instead of immediately. The attackers tricked members of Drift’s 5-person Security Council into signing transactions whose effects they didn’t fully understand, while also planting a fake collateral token via manipulated price feeds [7][8]. No code-level AI auditor would have caught this. The exploit lived in governance design, operational security, and human judgment — exactly the layers AI auditing is least equipped to handle.
The institutional response has converged on roughly this view. On April 7–8, 2026, U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an urgent closed-door meeting at Treasury headquarters with the CEOs of Bank of America, Citigroup, Goldman Sachs, Morgan Stanley, and Wells Fargo to discuss systemic cybersecurity risks from Mythos Preview and similar models [20][21]. The CrowdStrike report’s framing — 29-minute breakout time, 89% increase in AI-enabled attacks — sits behind that meeting [6]. None of the policy responses suggests a future where AI runs auditing alone. They suggest a future where AI handles everything humans can’t scale to, while humans and formal verification get pulled up to the layers where their judgment still matters most: the protocol-level economic reasoning, the adversarial assumption review, and the design of the upgrade and disclosure pipelines that decide whether a vulnerability becomes a $285M loss or a quiet patch note.
The minimum security stack in 2026
For protocols holding meaningful capital, the practical implication is a layered stack rather than a single decision. The baseline now looks like this:
- Multiple recent audits from top-tier firms. Still table stakes.
- Formal verification of critical invariants. Mathematical proofs that the most important properties of the contract hold under all possible inputs. CertiK has been explicit that its approach combines AI auditing with formal verification rather than replacing it [31].
- Real-time monitoring integration with platforms like Hypernative, Forta, or OpenZeppelin Defender. The difference between Drift’s 12-minute drain and a contained incident is whether something is actually watching the transactions live.
- Meaningful timelocks on upgrades. Drift had moved its Security Council to a 2-of-5 signing threshold and removed its timelock entirely on March 27 — four days before the exploit ran [8]. The pre-signed transactions executed instantly because there was no delay window left to catch them. (A minimal sketch of the delay-window idea follows this list.)
- Multi-signature thresholds that assume signers can be socially engineered. Drift’s signers signed transactions they didn’t fully understand. Any signing scheme that doesn’t account for that is one social engineering campaign away from an exploit [7].
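The timelock mechanism Drift removed is simple to state: every privileged action sits in a public queue for a fixed delay before it can execute, giving monitoring (human or AI) a window to veto it. A minimal sketch of the idea in Python, not any protocol’s actual implementation:

```python
import time

DELAY_SECONDS = 48 * 3600  # a common timelock window: 48 hours

class Timelock:
    """Toy model of an upgrade timelock: queue now, execute only after a delay."""

    def __init__(self, delay: int = DELAY_SECONDS):
        self.delay = delay
        self.queue: dict[str, float] = {}  # action id -> earliest execution time

    def schedule(self, action_id: str) -> float:
        eta = time.time() + self.delay
        self.queue[action_id] = eta
        return eta  # the public window in which watchers can raise the alarm

    def execute(self, action_id: str) -> None:
        eta = self.queue.get(action_id)
        if eta is None:
            raise PermissionError("action was never scheduled")
        if time.time() < eta:
            # This is the check Drift no longer had: with no delay window,
            # the pre-signed transactions executed the moment they landed.
            raise PermissionError("timelock has not expired")
        del self.queue[action_id]
        # ... perform the privileged action here ...
```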
The point is that no single layer of this stack is sufficient on its own, and AI auditing is one layer — probably the highest-leverage — but not a replacement for the others.
Closing
The honest answer to the title question is that AI has already taken over code review in any system that takes its security seriously. The slower, harder takeover of the parts of auditing that involve deep reasoning, economic modelling, governance design, and judgment about which findings actually matter has not happened, and may not happen for some time.
What has changed is that “letting AI in” is no longer optional. The structural advantage attackers now hold — finding bugs decoupled from fixing them, attacker economics improving 3.4x every six months, $1.22-per-contract scanning available to anyone with an API key — means defenders without AI are operating on a budget the attackers have already abandoned. The interesting question for protocol teams, auditors, and regulators in the next twelve months isn’t whether to use AI in auditing. It’s which layers of the audit stack are still load-bearing on humans, and how to keep those layers from quietly eroding as the AI tools around them keep maturing.
The defensive side is real. It is well-funded. It is also structurally at a disadvantage. That’s the situation everyone is working from. The protocols that survive the next phase will be the ones that internalise that asymmetry early and architect their security stack accordingly.
“The defender must defend every point; the attacker need only find one weak point.” — a security maxim adapted from Sun Tzu’s The Art of War
References
[1] Anthropic Frontier Red Team, “AI agents find $4.6M in blockchain smart contract exploits” (SCONE-bench research), December 2025. https://red.anthropic.com/2025/smart-contracts/
[2] CryptoSlate, “AI agents spend just $1.22 to shatter smart contract security, exposing a terrifying economic reality,” December 4, 2025. https://cryptoslate.com/anthropic-ai-smart-contract-exploits/
[3] Anthropic, “Project Glasswing: Securing critical software for the AI era,” April 7, 2026. https://www.anthropic.com/glasswing
[4] Fortune, “Anthropic is giving some firms early access to Claude Mythos to bolster cybersecurity defences,” April 7, 2026. https://fortune.com/2026/04/07/anthropic-claude-mythos-model-project-glasswing-cybersecurity/
[5] Anthropic Frontier Red Team, “Claude Mythos Preview” technical assessment, April 7, 2026. https://red.anthropic.com/2026/mythos-preview/
[6] CrowdStrike, “2026 Global Threat Report: AI Accelerates Adversaries and Reshapes the Attack Surface,” February 24, 2026. https://www.crowdstrike.com/
[7] CoinDesk, “Here is how Drift attackers drained more than $270 million using a Solana feature designed for convenience,” April 2, 2026. https://www.coindesk.com/tech/2026/04/02/how-a-solana-feature-designed-for-convenience-let-an-attacker-drain-usd270-million-from-drift
[8] News.Bitcoin.com, “Drift Protocol Hack 2026: What Happened, Who Lost Money, and What’s Next,” April 2026. https://news.bitcoin.com/drift-protocol-hack-2026-what-happened-who-lost-money-and-whats-next/
[9] Blockchain Reporter, “Drift Protocol And KelpDAO Lead April 2026’s Biggest DeFi Exploits,” April 21, 2026. https://blockchainreporter.net/drift-protocol-and-kelpdao-lead-april-2026s-biggest-defi-exploits
[10] AnChain.AI, “Anthropic Can Now Crack Smart Contracts — What AI Agents Mean for the Web3 Security Industry in 2026,” December 6, 2025. https://www.anchain.ai/blog/anthropic-red
[11] Coinbase Engineering Blog, “Consumer Protection Tuesday: AI-Powered Smart Contract Auditing at Coinbase” (Frosty), April 2026. https://www.coinbase.com/blog/consumer-protection-tuesday-ai-powered-smart-contract-auditing-at-coinbase
[12] Crypto Briefing, “CertiK unveils AI Auditor to improve early detection of blockchain vulnerabilities,” April 2026. https://cryptobriefing.com/certik-launches-ai-auditor-strengthen-real-time-web3-security-workflows/
[13] Help Net Security, “Anthropic’s new AI model finds and exploits zero-days across every major OS and browser,” April 8, 2026. https://www.helpnetsecurity.com/2026/04/08/anthropic-claude-mythos-preview-identify-vulnerabilities/
[14] Decrypt, “Anthropic Claude Mythos: Serious Threat or Overhyped? AI Security Institute Weighs In,” April 2026. https://decrypt.co/364141/anthropic-claude-mythos-serious-threat-overhyped-ai-security-institute
[15] Veracode press release, “AI-Generated Code Poses Major Security Risks in Nearly Half of All Development Tasks,” July 30, 2025. https://www.businesswire.com/news/home/20250730694951/en/
[16] Veracode, “2025 GenAI Code Security Report,” July 2025. https://www.veracode.com/wp-content/uploads/2025_GenAI_Code_Security_Report_Final.pdf
[17] Apiiro, “4x Velocity, 10x Vulnerabilities: AI Coding Assistants Are Shipping More Risks,” September 4, 2025. https://apiiro.com/blog/4x-velocity-10x-vulnerabilities-ai-coding-assistants-are-shipping-more-risks/
[18] The Register, “AI code assistants improve production of security problems,” September 5, 2025. https://www.theregister.com/2025/09/05/ai_code_assistants_security_problems/
[19] CoinTrust coverage of CertiK AI Auditor positioning, April 2026. https://www.cointrust.com/market-news/certik-launches-ai-auditor-to-strengthen-web3-security
[20] CNBC, “Powell, Bessent met with U.S. Bank CEOs over Anthropic’s Mythos AI cyber risks,” April 10, 2026. https://www.cnbc.com/2026/04/10/powell-bessent-us-bank-ceos-anthropic-mythos-ai-cyber.html
[21] Sullivan & Cromwell LLP memo, “Treasury Secretary and Federal Reserve Chair Warn Bank CEOs About Cybersecurity Risks Posed by Anthropic’s New AI Model,” April 15, 2026. https://www.sullcrom.com/insights/memo/2026/April/Treasury-Secretary-Federal-Reserve-Chair-Warn-Bank-CEOs-About-Cybersecurity-Risks-Posed-Anthropics-New-AI-Model
[22] DARPA, “AI Cyber Challenge marks pivotal inflection point for cyber defense” (AIxCC final results, DEF CON 33), August 8, 2025. https://www.darpa.mil/news/2025/aixcc-results
[23] Trail of Bits Blog, “Trail of Bits’ Buttercup wins 2nd place in AIxCC Challenge,” August 9, 2025. https://blog.trailofbits.com/2025/08/09/trail-of-bits-buttercup-wins-2nd-place-in-aixcc-challenge/
[24] Trail of Bits Blog, “How we made Trail of Bits AI-native (so far),” March 31, 2026. https://blog.trailofbits.com/2026/03/31/how-we-made-trail-of-bits-ai-native-so-far/
[25] Trail of Bits, open-source skills repository for Claude Code. https://github.com/trailofbits/skills
[26] Hypernative / Business Wire, “Hypernative Raises $40M Series B to Remove Security Barriers to Web3 Mass Adoption,” June 10, 2025. https://www.businesswire.com/news/home/20250610162307/en/
[27] TRM Labs, “TRM Labs and Hypernative Partner to Deliver Pre-transaction Enforcement and On-chain Security for Web3,” April 1, 2026. https://www.trmlabs.com/resources/blog/trm-labs-and-hypernative-partner-to-deliver-pre-transaction-enforcement-and-on-chain-security-for-web3
[28] SigIntZero, “AI in Smart Contract Auditing: Closing the Web3 Security Gap” (industry survey of AI auditing tools including OpenZeppelin Defender, OpenZeppelin Contracts MCP, Sherlock AI, Forta, Remix AI copilot), March 2026. https://sigintzero.com/blog/ai-auditing
[29] Cyfrin, Aderyn, open-source Rust-based static analyser with Model Context Protocol server. https://github.com/Cyfrin/aderyn — see also https://www.cyfrin.io/blog/top-10-smart-contract-auditing-companies
[30] Tech Startups, “Certora Partners with Cork and Hypernative to Set a New Standard for Web3 Security,” November 13, 2025.
[31] CertiK, “Smart Contract Audit” methodology (combination of AI auditing and formal verification). https://www.certik.com/en/products/smart-contract-audit