Researchers Propose New Way to Manage Financial Risk When AI Agents Fumble Trades

By Jason Nelson · Published April 8, 2026 · 3 min read · Source: Decrypt

A newly proposed agentic settlement standard would hold fees in escrow and bring underwriters into AI agent transactions.

Edited by Guillermo Jimenez
Image: Decrypt

In brief

As AI agents begin to handle payments, financial trades, and other transactions, there is growing concern over the financial risks that fall on the human behind the agent when those systems fail. A consortium of researchers argues that current AI safety techniques do not address that risk, and that insurance-style mechanisms are needed instead.

In a recent paper, researchers from Microsoft, Google DeepMind, Columbia University, and startups Virtuals Protocol and t54.ai proposed the Agentic Risk Standard, a settlement-layer framework designed to compensate users when an AI agent misexecutes a task, fails to deliver a service, or causes financial loss.

“Technical safeguards can offer only probabilistic reliability, whereas users in high-stakes settings often require enforceable guarantees over outcomes,” the paper said.

The authors argue that most current AI research focuses on improving how models behave, including reducing bias, making systems harder to manipulate, and making their decisions easier to understand.

“These risks are fundamentally product-level and cannot be eliminated by technical safeguards alone because agent behavior is inherently stochastic,” they wrote. “To address this gap between model-level reliability and user-facing assurance, we propose a complementary framework based on risk management.”

The Agentic Risk Standard adds financial safeguards to how AI jobs are handled. For simple tasks where the user only risks paying a service fee, payment is held in escrow and released only after the work is confirmed. For higher-risk tasks that require releasing money upfront, such as trading or currency exchanges, the system brings in an underwriter. The underwriter evaluates the risk, requires the service provider to post collateral, and repays the user if a covered failure happens.
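The two tiers described above can be sketched in code. The paper discussed here does not publish a reference implementation, so everything below — class names, the collateral ratio, and the settlement logic — is a hypothetical illustration of the escrow and underwriting flows, not the authors' design.

```python
from dataclasses import dataclass


@dataclass
class EscrowJob:
    """Low-risk tier: the user's only exposure is the service fee,
    which is held in escrow until the work is confirmed."""
    fee: float
    released: bool = False

    def settle(self, work_confirmed: bool) -> str:
        # Fee goes to the provider only after confirmation;
        # otherwise it is returned to the user.
        self.released = work_confirmed
        return "provider" if work_confirmed else "user"


@dataclass
class UnderwrittenJob:
    """High-risk tier: funds move upfront (e.g. a trade or currency
    exchange), so an underwriter prices the risk and requires the
    provider to post collateral."""
    principal: float
    collateral_ratio: float = 0.5  # hypothetical parameter

    @property
    def collateral(self) -> float:
        return self.principal * self.collateral_ratio

    def settle(self, covered_failure: bool) -> float:
        # On a covered failure the user is repaid: first from the
        # posted collateral, with the underwriter covering any
        # shortfall between collateral and principal.
        if covered_failure:
            from_collateral = min(self.principal, self.collateral)
            from_underwriter = max(0.0, self.principal - self.collateral)
            return from_collateral + from_underwriter
        return 0.0
```

In this toy version, `EscrowJob.settle` routes the fee to whichever party the confirmation outcome favors, while `UnderwrittenJob.settle` always makes the user whole on a covered failure, splitting the payout between collateral and the underwriter.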

The paper noted that non-financial harms such as hallucination, defamation, or psychological harm remain outside the framework.

The researchers said the system was tested using a simulation that ran 5,000 trials, adding that the experiment was limited and not designed to reflect real-world failure rates.
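The article reports only the trial count, not the simulation's parameters or payout rules. As a generic illustration of what a Monte Carlo run of this kind looks like, the sketch below invents a failure rate and simply counts covered failures — it is not the paper's experiment.

```python
import random


def simulate(trials: int = 5000, fail_rate: float = 0.05,
             seed: int = 0) -> float:
    """Toy Monte Carlo: fraction of trials in which a covered
    failure occurs (and the user would be repaid). The failure
    rate is an invented parameter, not an empirical figure."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(trials) if rng.random() < fail_rate)
    return failures / trials
```

With 5,000 trials the observed failure fraction converges near the configured rate, which is precisely why the authors caution that such a setup measures the mechanism's accounting, not real-world failure frequencies.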

“These results motivate future work on risk modeling for diverse failure modes, empirical measurement of failure frequencies under deployment-like conditions, and the design of underwriting and collateral schedules that remain robust under detector error and strategic behavior,” the study said.

This article was originally published on Decrypt and is republished here under RSS syndication for informational purposes. All rights and intellectual property remain with the original author. If you are the author and wish to have this article removed, please contact us at [email protected].
