The AI Trading Workflow That’s Going Viral Right Now
AI in Trading · 5 min read
Recently, I’ve been using AI trading agents more seriously in my research workflow.
Not because AI can print a bunch of money. It can’t. Most traders using AI are still just generating random indicators, curve-fitting backtests, and convincing themselves they found an edge.
The cool part isn’t “AI writes a strategy.”
The cool part is goal-driven trading research.
Instead of asking:
“Can you make me a profitable trading bot?”
You define a measurable goal:
“Reduce max drawdown by 20% while keeping net profit factor above 1.4 after fees, slippage, and walk-forward validation.”
That difference matters.
Because the first prompt has no stop condition. The second one has a measurable target.
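A stop condition is something a script can check. A minimal sketch of the goal above as code, assuming illustrative field names and metric values rather than any standard API:

```python
# A measurable research goal expressed as explicit thresholds that a
# backtest result either meets or fails. Field names are illustrative.

GOAL = {
    "max_drawdown_reduction": 0.20,  # reduce max DD by at least 20%
    "min_profit_factor": 1.4,        # after fees, slippage, walk-forward
}

def goal_met(baseline_dd: float, candidate_dd: float, profit_factor: float) -> bool:
    """Return True only when both conditions hold; this IS the stop condition."""
    dd_reduced = candidate_dd <= baseline_dd * (1 - GOAL["max_drawdown_reduction"])
    pf_ok = profit_factor >= GOAL["min_profit_factor"]
    return dd_reduced and pf_ok

print(goal_met(baseline_dd=0.30, candidate_dd=0.22, profit_factor=1.5))  # True
```

The vague prompt has no equivalent of `goal_met`: there is no function you could write that tells you when “profitable trading bot” is done.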
But Why? What’s the point?
Trading work usually falls into two buckets.
The first bucket is well-defined work.
Example:
“Add Binance futures support to my backtesting engine.”
You know the shape of the solution. You need API integration, order models, fee logic, funding rates, position sizing, and error handling.
That can be planned upfront.
The second bucket is exploratory work.
Example:
“Find a way to reduce false breakout entries in my BTC momentum system without killing upside.”
You don’t know the answer upfront.
Maybe the solution is volatility filtering.
Maybe it is session filtering.
Maybe it is volume confirmation.
Maybe the strategy has no edge and should be killed.
This is where goal-based AI work becomes powerful.
You give the agent a metric, a dataset, constraints, and permission to test multiple paths. Then it explores.
Bad trading goals vs good trading goals
Bad goal:
“Make this strategy better.”
This is useless.
Better means nothing. Better on what metric? Sharpe? CAGR? Drawdown? Win rate? Profit factor? Lower variance? Less exposure? More trades? Fewer trades?
A better goal:
“Improve the strategy’s out-of-sample Sharpe from 0.85 to at least 1.15 on BTC/ETH 4H data from 2019–2025, including 0.06% fees, 0.03% slippage, no lookahead bias, and max drawdown under 22%.”
Now the system knows when to stop.
Even better:
“If the target cannot be reached without overfitting, produce a rejection report explaining why the edge is likely not robust.”
That protects you from the most expensive failure in trading research: false confidence.
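Under the stated assumptions (0.06% fees, 0.03% slippage, 4H bars), the Sharpe and drawdown targets can be evaluated mechanically. A hedged sketch; the helper names and annualization constant are my own, not from any particular library:

```python
import numpy as np

FEES = 0.0006            # 0.06% per trade, from the goal above
SLIPPAGE = 0.0003        # 0.03% per trade
BARS_PER_YEAR = 6 * 365  # 4H bars: six per day

def evaluate(gross_returns: np.ndarray, trades_mask: np.ndarray):
    """Net out per-trade costs, then compute annualized Sharpe and max drawdown."""
    net = gross_returns - (FEES + SLIPPAGE) * trades_mask
    sharpe = net.mean() / net.std() * np.sqrt(BARS_PER_YEAR)
    equity = np.cumprod(1 + net)
    max_dd = 1 - (equity / np.maximum.accumulate(equity)).min()
    return float(sharpe), float(max_dd)

def goal_status(sharpe: float, max_dd: float) -> str:
    """Stop condition for the stated goal: OOS Sharpe >= 1.15, max DD < 22%."""
    if sharpe >= 1.15 and max_dd < 0.22:
        return "target met"
    return f"rejected: sharpe={sharpe:.2f}, max_dd={max_dd:.1%}"
```

The rejection branch matters as much as the success branch: a run that misses the target should produce a reason, not a retry loop.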
The key rule: do not accept proxy signals
A lot of trading research dies because people optimize proxy metrics.
High win rate does not mean edge.
High backtest return does not mean robustness.
Low drawdown on one period does not mean risk is controlled.
Great in-sample performance usually means nothing.
The agent should not accept proxy signals.
A valid result needs:
- out-of-sample validation
- transaction costs
- slippage
- position sizing rules
- drawdown limits
- regime testing
- no lookahead bias
- no survivorship bias
- clear failure conditions
If uncertainty remains, the goal is not achieved.
That one principle alone deletes most garbage trading systems.
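One way to enforce this rule is to encode the checklist as a hard gate. A minimal sketch, with check names mirroring the bullet list above (the names themselves are illustrative assumptions):

```python
# "No proxy signals" as code: a result counts only if every check passed.
# Anything missing or uncertain fails the gate.

REQUIRED_CHECKS = [
    "out_of_sample_validated",
    "transaction_costs_included",
    "slippage_modeled",
    "position_sizing_defined",
    "drawdown_limit_enforced",
    "regime_tested",
    "no_lookahead_bias",
    "no_survivorship_bias",
    "failure_conditions_defined",
]

def result_is_valid(checks: dict) -> bool:
    """If any required check is missing or not explicitly True, the goal is not achieved."""
    return all(checks.get(name) is True for name in REQUIRED_CHECKS)
```

Note the strict `is True`: an unanswered check is treated exactly like a failed one, which is the point.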
Give the agent a real surface to act on
A trading agent is only as useful as the environment it can touch.
If you ask it to improve a strategy but only give it a price CSV, it will probably overfit indicators.
Use real documents.
For real work, give it:
- your broker’s documentation
- clean historical data
- execution logs
- fee model
- slippage assumptions
- spread data if available
- existing backtest engine
- walk-forward framework
- risk limits
- benchmark strategy
- rejected previous experiments
- broker sandbox, not live capital
Do not give it live trading access until the research loop is boring, repeatable, and heavily constrained.
Live money is not a testing environment.
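That research surface can be declared explicitly, so the agent only touches what you hand it. A sketch with hypothetical paths and keys; none of these names come from a real framework:

```python
# Declaring the research environment up front. Every path and key here
# is an illustrative assumption, not a real config schema.

RESEARCH_ENV = {
    "data": {
        "historical_prices": "data/btc_4h_2019_2025.csv",  # clean, verified
        "execution_logs": "logs/fills.csv",
        "spread_data": None,  # optional; None if unavailable
    },
    "costs": {"fee_rate": 0.0006, "slippage_rate": 0.0003},
    "risk": {"max_drawdown": 0.22, "max_position_pct": 0.10},
    "engine": "backtest.walk_forward",  # existing framework, not a rewrite
    "benchmark": "buy_and_hold",
    "prior_experiments": "research/rejected/",
    "execution": "broker_sandbox",      # never live capital at this stage
}

def live_access_allowed(env: dict) -> bool:
    """Live trading stays off during research, unconditionally."""
    return False  # hard-coded: a research environment never trades live
```

Hard-coding `False` looks silly until you remember that the alternative is an agent deciding on its own when it is ready for live money.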
The right workflow
Use this sequence:
- Exploration branch: let the agent test hypotheses aggressively.
- Research report: force it to summarize what worked, what failed, and what was overfit.
- Clean spec: convert the useful findings into a precise trading system spec.
- Reimplementation: build the strategy again cleanly from the spec.
- Validation: run walk-forward, stress tests, cost sensitivity, and regime splits.
- Paper trading: only after the system survives research.
- Tiny capital: only after paper trading matches expected behavior.
The first branch is allowed to be messy.
But the shipped trading system is not.
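The sequence above can be modeled as ordered stages with promotion gates, so a messy exploration branch can never leak directly into shipping. A sketch under illustrative naming:

```python
# The workflow as ordered stages. A system advances one stage at a time,
# and only when the current stage's gate passes. Names mirror the list above.

STAGES = [
    "exploration",
    "research_report",
    "clean_spec",
    "reimplementation",
    "validation",
    "paper_trading",
    "tiny_capital",
]

def next_stage(current: str, gate_passed: bool) -> str:
    """Advance only on a passed gate; otherwise stay put. No stage-skipping."""
    i = STAGES.index(current)
    if gate_passed and i < len(STAGES) - 1:
        return STAGES[i + 1]
    return current
```

The useful property is that there is no path from `exploration` to `tiny_capital` that skips validation or paper trading.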
The Actual Trading Goal System
If you read this and still make the same mistake, you will probably ask AI to “find a strategy.”
That is still too vague.
The better move is to give the model a research contract.
A research contract has four parts:
- Objective
- Constraints
- Validation
- Rejection condition
If one is missing, the output is probably useless.
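A research contract can also be a structured object, so a missing part fails loudly instead of silently. A minimal sketch with assumed field names:

```python
from dataclasses import dataclass

# The four-part research contract as a structure. Field names are
# illustrative; the point is that all four parts are mandatory.

@dataclass
class ResearchContract:
    objective: str            # measurable target with a stop condition
    constraints: list         # fees, slippage, data range, risk limits
    validation: list          # out-of-sample, walk-forward, regime splits
    rejection_condition: str  # when to kill the idea and write the report

    def is_complete(self) -> bool:
        """An empty part makes the whole contract, and its output, suspect."""
        return all([self.objective, self.constraints,
                    self.validation, self.rejection_condition])
```

Used as a pre-flight check, `is_complete()` is what stands between you and another vague “make it better” prompt.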
Here is the exact structure I would use.
Prompt 1: Turn a vague idea into a real trading goal
Use this before you let AI build anything.
👉The full research contract prompts are on Substack:
- the goal-definition prompt,
- strategy kill test,
- long-running research prompt,
- clean rebuild prompt,
- and the final checklist before paper trading.