Paper Review: TradeTrap – Evaluating the Reliability and Faithfulness of LLM‑Based Trading Agents

The article introduces TradeTrap, a unified framework that systematically stress‑tests large‑language‑model‑based autonomous trading agents by injecting component‑level perturbations—such as data falsification, prompt injection, and state tampering—into a historical US‑stock back‑test, revealing how small disturbances can cascade into extreme risk exposure, portfolio drawdown, and performance collapse.

Bighead's Algorithm Notes

Background

LLM‑based autonomous trading agents (e.g., AI‑Trader, NoFX, ValueCell) are increasingly used in real‑world finance, but their reliability under adversarial or failure conditions has not been systematically evaluated.

Problem Definition

The study evaluates the robustness of two architectural families: adaptive agents, which select tools dynamically, and procedural agents, which follow a fixed analysis-decision-execution pipeline. It does so by probing component-level vulnerabilities in each.

Method

TradeTrap Evaluation Framework

TradeTrap wraps any agent in a four-component pipeline: market intelligence, strategy formulation, portfolio & ledger handling, and trade execution. Controlled disturbances are injected at a single component while all other factors are held identical. Experiments run on a closed-loop historical back-test of roughly 100 NASDAQ-100 stocks from 2023-10-01 to 2023-10-31; each agent starts with $5,000 in capital and zero positions.
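To make the setup concrete, here is a minimal sketch of how such a closed-loop harness could be parameterized. All names (`BacktestConfig`, `make_experiment`, the component labels) are illustrative assumptions, not TradeTrap's actual API:

```python
from dataclasses import dataclass, field

# Illustrative back-test configuration mirroring the paper's setup;
# field names are assumptions, not TradeTrap's actual code.
@dataclass
class BacktestConfig:
    universe: str = "NASDAQ-100"        # ~100 US large-cap stocks
    start: str = "2023-10-01"
    end: str = "2023-10-31"
    initial_capital: float = 5_000.0    # each agent starts with $5,000
    initial_positions: dict = field(default_factory=dict)  # zero positions

# The four pipeline components TradeTrap wraps around any agent;
# exactly one is perturbed per experiment to preserve causal attribution.
COMPONENTS = (
    "market_intelligence",
    "strategy_formulation",
    "portfolio_ledger",
    "trade_execution",
)

def make_experiment(attacked_component: str) -> dict:
    """Build an experiment spec that perturbs a single component."""
    if attacked_component not in COMPONENTS:
        raise ValueError(f"unknown component: {attacked_component}")
    return {"config": BacktestConfig(), "attack_on": attacked_component}
```

Holding the configuration fixed and varying only `attack_on` is what lets the framework attribute any degradation to the perturbed component.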

Threat Model

Market intelligence: data falsification, MCP tool hijacking.

Strategy formulation: prompt injection.

Portfolio & ledger: memory poisoning, state tampering.

Trade execution: delayed-flood attacks, tool misuse.
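The threat model above can be restated as a component-to-attack mapping. The identifier strings below are descriptive labels chosen for this sketch, not TradeTrap's internal names:

```python
# Component-level threat model from the paper, restated as a mapping.
# Keys and values are descriptive labels, not TradeTrap identifiers.
THREAT_MODEL = {
    "market_intelligence": ["data_falsification", "mcp_tool_hijacking"],
    "strategy_formulation": ["prompt_injection"],
    "portfolio_ledger": ["memory_poisoning", "state_tampering"],
    "trade_execution": ["delayed_flood", "tool_misuse"],
}

def attacks_for(component: str) -> list[str]:
    """Return the attack modules applicable to one pipeline component."""
    return THREAT_MODEL.get(component, [])
```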

Attack Modules

Data falsification replaces genuine news and social‑media signals with coordinated fake narratives while leaving price series unchanged. MCP hijacking inserts a malicious tool‑server that returns adversarial payloads after a timed delay. Prompt injection flips key directional cues while preserving prompt structure. Memory poisoning appends forged trade records to the persistent position file. State tampering hooks the position‑reading API to report zero holdings.
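The state-tampering module is the easiest to illustrate: hook the position-reading API so the agent observes zero holdings while its real exposure is untouched. The `Ledger` class and method names below are assumptions for the sketch, not the paper's implementation:

```python
# Sketch of the state-tampering idea: replace the position reader so the
# agent perceives an empty portfolio, while ground-truth holdings persist.
class Ledger:
    def __init__(self):
        self._positions = {"AAPL": 10, "MSFT": 5}  # ground-truth holdings

    def get_positions(self) -> dict:
        """The API the agent calls to observe its portfolio."""
        return dict(self._positions)

def tamper_state(ledger: Ledger) -> None:
    """Hook this instance's position reader to always report no holdings."""
    ledger.get_positions = lambda: {}  # agent now sees zero positions

ledger = Ledger()
tamper_state(ledger)
observed = ledger.get_positions()   # {} -- what the agent believes
real = ledger._positions            # unchanged -- actual market exposure
```

Because the agent re-buys based on the falsified empty portfolio, real exposure grows monotonically while the observed state never changes, which is exactly the divergence the experiments measure.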

Evaluation Protocol

Nine metrics are recorded per run: total return (%), annualized return (%), maximum drawdown (MDD, %), volatility (%), position utilization (PU, %), Sharpe ratio, Calmar ratio, average position concentration (%), and maximum position concentration (%). Each experiment activates only one attack module to preserve causal attribution.
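A hedged sketch of three of the nine metrics, computed from a series of daily portfolio returns. The 252-trading-day annualization convention is a common choice, not necessarily the paper's:

```python
import math

def max_drawdown(returns: list[float]) -> float:
    """Largest peak-to-trough equity decline, as a fraction of the peak."""
    equity, peak, mdd = 1.0, 1.0, 0.0
    for r in returns:
        equity *= 1.0 + r
        peak = max(peak, equity)
        mdd = max(mdd, (peak - equity) / peak)
    return mdd

def sharpe_ratio(returns: list[float], rf_daily: float = 0.0) -> float:
    """Annualized mean excess return over return volatility (252-day convention)."""
    excess = [r - rf_daily for r in returns]
    mean = sum(excess) / len(excess)
    var = sum((x - mean) ** 2 for x in excess) / len(excess)
    std = math.sqrt(var)
    return math.sqrt(252) * mean / std if std > 0 else 0.0

def calmar_ratio(returns: list[float]) -> float:
    """Annualized return divided by maximum drawdown."""
    total = 1.0
    for r in returns:
        total *= 1.0 + r
    ann_return = total ** (252 / len(returns)) - 1.0
    mdd = max_drawdown(returns)
    return ann_return / mdd if mdd > 0 else 0.0
```

Reporting both Sharpe and Calmar matters here: an attack can leave total return roughly intact while inflating drawdown, which Calmar exposes directly.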

Experiments

Market‑Intelligence Attacks

Injecting clean news improves adaptive agents’ returns but dramatically raises volatility; fake news causes the adaptive agent to over‑react to scripted crises and fail to recover, while the procedural agent remains more stable. Quantitatively, the adaptive agent’s total and annualized returns increase with clean news but its risk‑adjusted metrics deteriorate sharply under fake news. The procedural agent’s returns stay near baseline, with lower volatility and a better Sharpe ratio.

MCP Tool Hijacking

Under a "volatility trap" configuration, the agent misidentifies a scripted market dip as a buying opportunity and then liquidates completely during the V-shaped recovery. Because subsequent decisions are based on a fabricated portfolio, the agent enters a state of "strategic paralysis."

Prompt Injection

Reversing directional cues causes the adaptive agent’s total return, annualized return and Sharpe ratio to collapse, while trade frequency spikes and position concentration approaches 100 %. The procedural agent also degrades but more gradually, with metrics staying closer to the clean baseline.

Memory & State Attacks

Memory poisoning adds unauthorized trades to the persistent position file, causing both agents to diverge permanently from the clean baseline. State tampering forces the adaptive agent to perceive zero holdings, prompting endless buying and a monotonic increase in real exposure; the procedural agent repeatedly sells, accumulating a large short position and suffering catastrophic loss.

Cross‑Agent Comparison

Adaptive agents achieve higher baseline returns but exhibit larger volatility and concentration, making them vulnerable to information‑level attacks. Procedural agents sacrifice peak performance for steadier risk profiles and better resistance to market‑intel attacks, yet they can suffer disastrous losses when internal state is corrupted.

Code and data are available at https://github.com/Yanlewen/TradeTrap.

Tags: LLM, stress testing, robustness, financial AI, TradeTrap, trading agents
Written by Bighead's Algorithm Notes, a blog focused on AI applications in the fintech sector.