Paper Reading: TiMi – An Inference‑Driven Multi‑Agent System for Quantitative Trading

TiMi is a reasoning‑driven multi‑agent framework that decouples strategy development from minute‑level deployment. It leverages LLMs for semantic analysis, code generation, and mathematical reasoning, and achieves stable profits, high execution efficiency, and strong risk control across more than 200 stock and crypto trading pairs.

Bighead's Algorithm Notes

Background

Recent breakthroughs in large language models (LLMs) have enabled autonomous decision‑making agents. Existing financial trading agents, however, rely on role‑playing, suffer from emotional bias, depend on unstructured peripheral data, and require continuous inference during deployment.

Problem Definition

The authors identify three core issues: (1) market‑analysis paradigm: human‑like simulation introduces bias; (2) data selection: reliance on noisy, delayed unstructured information; (3) deployment efficiency: long‑running multi‑agent reasoning increases computational cost and latency, causing slippage.

Method

TiMi (Trade in Minutes) proposes a reasoning‑driven multi‑agent system that models each trading environment as a tuple (M, W, S, F, J), where M is the market, W the time window, S the strategy space, F the feedback signal, and J the evaluation function. The system implements three functions: analysis M×W → S, deployment M×S → F, and optimization S×F → S*. The goal is to maximize J(π_Θ) where π is a policy parameterized by Θ.
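The tuple and the three functions can be sketched in a few lines of Python. All names here (`TradingEnv`, `analyze`, `deploy`, `optimize`) are illustrative assumptions, not the paper's implementation; the point is the closed loop analysis → deployment → optimization.

```python
from dataclasses import dataclass

@dataclass
class TradingEnv:
    market: str          # M: the market (e.g. a trading pair)
    window: int          # W: time window in minutes
    strategies: list     # S: candidate strategy space
    feedback: dict       # F: feedback signals from simulation/deployment

def analyze(market: str, window: int) -> list:
    """analysis: M x W -> S (produce candidate strategies)."""
    return [f"{market}-momentum-{window}m", f"{market}-meanrev-{window}m"]

def deploy(market: str, strategy: str) -> dict:
    """deployment: M x S -> F (run the strategy, collect feedback)."""
    return {"strategy": strategy, "pnl": 0.0, "drawdown": 0.0}

def optimize(strategy: str, feedback: dict) -> str:
    """optimization: S x F -> S* (refine the strategy using feedback)."""
    return strategy + "-v2"

# One pass of the closed loop:
s = analyze("BTC/USDT", 15)[0]
f = deploy("BTC/USDT", s)
s_star = optimize(s, f)
```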

TiMi consists of four specialized agents:

Macro‑analysis agent A<sub>ma</sub> : identifies macro market patterns, defines technical indicators I, and generates a generic strategy set S.

Strategy‑adaptation agent A<sub>sa</sub> : customizes S for a specific trading pair P, producing S_P and initializing parameters Θ_P.

Robot‑evolution agent A<sub>be</sub> : creates and refines a programmable trading robot B based on the strategy and feedback.

Feedback‑reflection agent A<sub>fr</sub> : decomposes feedback F into refined feedback F* and optimized parameters Θ*.

The overall system can be expressed as a composition of these agents (see Figure 1). Decoupling is achieved through three stages:

Strategy stage : offline complex reasoning builds prototype robots B with initial parameters Θ.

Optimization stage : offline simulation collects feedback F, iterates agent interactions, and produces higher‑level robots.

Deployment stage : optimized robots run in real‑time with low latency, eliminating continuous inference.
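The three stages above can be sketched as follows. The function names, parameters, and thresholds are invented for illustration; what matters is that heavy LLM reasoning happens offline, and the deployed robot is a plain parameterized function with no inference in the minute-level loop.

```python
def strategy_stage():
    """Offline reasoning builds a prototype robot with initial parameters Theta."""
    params = {"entry_threshold": 0.01, "position_size": 0.1}
    def robot(price_change, p):
        # Minute-level decision: pure parameterized logic, no LLM calls.
        if price_change > p["entry_threshold"]:
            return "buy", p["position_size"]
        if price_change < -p["entry_threshold"]:
            return "sell", p["position_size"]
        return "hold", 0.0
    return robot, params

def optimization_stage(params, sim_feedback):
    """Offline simulation refines parameters from feedback F."""
    if sim_feedback["max_drawdown"] > 0.05:
        params = {**params, "position_size": params["position_size"] / 2}
    return params

robot, theta = strategy_stage()
theta = optimization_stage(theta, {"max_drawdown": 0.08})
action = robot(0.02, theta)   # deployment stage: low latency, no inference
```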

Hierarchical Programming Design splits a robot into three layers:

Strategy layer : decision logic derived from S_P (signal generation, position sizing, entry/exit rules).

Function layer : reusable components for technical indicators, data preprocessing, order execution.

Parameter layer : externally managed tunable values.
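A minimal sketch of the three-layer split might look like the following. The indicator, the connector stub, and the crossover rule are assumptions chosen for brevity, not the paper's code; the point is that each layer only reaches downward.

```python
# Parameter layer: tunable values managed outside the strategy code.
PARAMS = {"sma_fast": 5, "sma_slow": 20, "order_size": 1.0}

# Function layer: reusable components (indicators, order execution).
def sma(prices, n):
    """Simple moving average over the last n prices."""
    return sum(prices[-n:]) / n

def place_order(side, size):
    """Stub standing in for an exchange-connector call."""
    return {"side": side, "size": size}

# Strategy layer: decision logic derived from the adapted strategy S_P.
def decide(prices, params):
    fast = sma(prices, params["sma_fast"])
    slow = sma(prices, params["sma_slow"])
    if fast > slow:
        return place_order("buy", params["order_size"])
    if fast < slow:
        return place_order("sell", params["order_size"])
    return None
```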

The authors also propose three programming laws:

Functional cohesion : each component handles a single responsibility.

Unidirectional dependency : dependencies flow from high‑level to low‑level.

Parameter externalization : all adjustable values are extracted from code and centrally managed.
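Parameter externalization in particular can be sketched with a plain JSON document: the tunables live outside the code, so the feedback‑reflection agent can rewrite Θ without touching strategy logic. The config keys and the validation rule below are illustrative assumptions.

```python
import json

# Hypothetical externalized parameter document (not the paper's schema).
CONFIG_JSON = '{"entry_threshold": 0.012, "stop_loss": 0.03, "take_profit": 0.06}'

def load_params(raw: str) -> dict:
    """Parse and sanity-check externalized parameters before the robot uses them."""
    params = json.loads(raw)
    if not 0 < params["stop_loss"] < params["take_profit"]:
        raise ValueError("stop_loss must be positive and below take_profit")
    return params

theta = load_params(CONFIG_JSON)
```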

Mathematical‑driven Closed‑Loop Optimization uses the feedback‑reflection agent to formulate risk scenarios as linear programs, solving for feasible parameter spaces and maximizing performance (see Figure 2).
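As a toy illustration of the same idea, linear risk constraints carve out a feasible parameter region that is then searched for the best expected performance. The constraint coefficients and the objective below are invented for illustration (and the search is a brute-force grid rather than an LP solver); the paper's actual formulation is not reproduced here.

```python
def feasible(stop_loss, position_size):
    # Illustrative linear risk constraints:
    #   worst-case loss per trade: position_size * stop_loss <= 0.5% of equity
    #   exposure cap: position_size <= 20% of equity
    return position_size * stop_loss <= 0.005 and position_size <= 0.20

best = None
for sl_bp in range(10, 101, 5):       # stop-loss grid: 0.10% .. 1.00%
    for ps_pct in range(1, 21):       # position-size grid: 1% .. 20%
        sl, ps = sl_bp / 10000, ps_pct / 100
        if feasible(sl, ps):
            score = ps * (1 - sl)     # toy objective: prefer larger exposure
            if best is None or score > best[0]:
                best = (score, sl, ps)

score, stop_loss, position_size = best
```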

Implementation Details

Backbone LLMs: DeepSeek‑V3 for semantic analysis, Qwen2.5‑Coder‑32B‑Instruct for code generation, DeepSeek‑R1 for mathematical reasoning.

Hybrid inference combines local small models with API‑based large models for flexible performance‑efficiency trade‑offs.

Agents communicate via a mixed XML/JSON protocol, ensuring deterministic execution and post‑hoc verification.
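A hypothetical message in such a mixed protocol might use an XML envelope for deterministic routing and verification, with a JSON payload for the data. The schema, agent names, and field names below are assumptions, not the paper's specification.

```python
import json
import xml.etree.ElementTree as ET

# Illustrative inter-agent message: XML envelope, JSON payload.
msg = """
<message sender="A_fr" receiver="A_be" type="parameter_update">
  <payload>{"entry_threshold": 0.012, "position_size": 0.08}</payload>
</message>
"""

root = ET.fromstring(msg.strip())
assert root.get("sender") == "A_fr"   # post-hoc verification of routing fields
params = json.loads(root.find("payload").text)
```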

Deployment runs on a CPU‑only environment; robots are implemented in Python and connect to exchange APIs via standardized connectors.

Experiments

Evaluation on 2024 historical data across US index futures, major cryptocurrencies and alt‑coins (over 200 trading pairs) follows a three‑step pipeline: offline strategy development → historical simulation → real‑time trading.

Back‑test results show TiMi achieving high profitability and strict risk control, especially on volatile alt‑coins.

Real‑time trading yields annual returns of 6.4 % (US index futures), 8.0 % (major crypto) and 13.7 % (alt‑coins), with competitive drawdown control and the ability to exploit minute‑level market inefficiencies.

Action efficiency matches traditional quantitative methods, while capital utilization (profit‑to‑loss ratio = 1.53) outperforms grid‑based and existing agent baselines.

Analysis

Performance variance σ = 11.03 % with tail events < 2 %, indicating stable returns under market dynamics.

Ablation on the crypto market shows that removing the strategy‑adaptation agent A<sub>sa</sub> nearly doubles maximum drawdown, and a baseline without optimization is unstable, confirming the necessity of the full strategy‑deployment‑optimization chain.

Visualization of 15‑minute candlesticks for four representative crypto pairs demonstrates TiMi’s adaptive order strategy and effective risk management via the refined parameter matrices M_P and M_Q.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: LLM, risk control, Financial AI, multi‑agent system, quantitative trading, inference-driven, TiMi
Written by Bighead's Algorithm Notes, focused on AI applications in the fintech sector.
