Quantitative Finance Paper Roundup (Nov 15‑21, 2025)

This roundup presents six recent arXiv papers covering crypto portfolio optimization, Sharpe‑driven stock selection with liquidity constraints, ensemble deep reinforcement learning for stock trading, dynamic machine‑learning‑based stock recommendation, a risk‑sensitive trading framework, and a generative AI model for limit order book messages, each with reported empirical results.

Bighead's Algorithm Notes

Tfin Crypto: From Speculation to Optimization in Risk Managed Crypto Portfolio Allocation (paper link: https://arxiv.org/pdf/2511.13239v1) by Thanh Nguyen. The authors introduce Tfin Crypto, an end‑to‑end crypto portfolio allocation framework that shifts the emphasis from speculation to optimization. The workflow comprises four stages: universe selection, alpha backtesting, volatility‑aware portfolio optimization, and dynamic‑drawdown risk management. In a 30‑day live test on Binance Futures the system achieved an ROI of +16.68%, a Sharpe ratio of 5.72, and a maximum drawdown of 4.56%. It executed 227 trades, 131 of which were profitable (a 57.71% win rate), earning +1,137.49 USDT. These results surpass a buy‑and‑hold baseline (Sharpe 1.79, ROI 4.36%, MDD 4.96%) and several top leader‑copy bots on Binance.
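The headline metrics above (Sharpe ratio, maximum drawdown) can be reproduced for any equity curve. The snippet below is a minimal sketch assuming daily marks, a zero risk‑free rate, and the standard annualized‑Sharpe and peak‑to‑trough drawdown definitions; the equity series is toy data, not the paper's:

```python
import numpy as np

def sharpe_ratio(returns, periods_per_year=365):
    """Annualized Sharpe ratio of per-period returns (risk-free rate taken as 0)."""
    r = np.asarray(returns, dtype=float)
    s = r.std(ddof=1)
    return np.sqrt(periods_per_year) * r.mean() / s if s > 0 else 0.0

def max_drawdown(equity):
    """Largest peak-to-trough decline of an equity curve, as a fraction of the peak."""
    e = np.asarray(equity, dtype=float)
    peaks = np.maximum.accumulate(e)   # running high-water mark
    return float(np.max((peaks - e) / peaks))

equity = np.array([100.0, 102.0, 101.0, 105.0, 103.0, 108.0])  # toy account equity
daily = np.diff(equity) / equity[:-1]                          # daily simple returns
mdd = max_drawdown(equity)   # worst dip here is 105 -> 103
sr = sharpe_ratio(daily)
```

The 365‑day annualization reflects that crypto markets trade every calendar day; for equities one would use 252.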

Sharpe‑Driven Stock Selection and Liquidity‑Constrained Portfolio Optimization: Evidence from the Chinese Equity Market (paper link: https://arxiv.org/pdf/2511.13251v1) by Thanh Nguyen. The study proposes a three‑stage framework: (1) Sharpe‑ratio‑based universe selection, (2) liquidity‑adjusted mean‑variance optimization, and (3) multi‑level risk management implemented in an automated trading robot. Using daily price and volume data from the Chinese A‑share market (2023‑2025), the strategy delivers an annual return of 25%, Sharpe 1.71, and maximum drawdown 8.2%, outperforming a buy‑and‑hold benchmark (annual return 21%, Sharpe 1.62, drawdown 7.6%). The authors argue that incorporating liquidity‑aware risk‑adjusted selection enhances profitability and stability in emerging markets.
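A minimal sketch of what liquidity‑adjusted mean‑variance allocation can look like. The expected returns, covariance, ADV figures, and the cap‑and‑redistribute heuristic below are all illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

# Hypothetical inputs: annualized expected returns, return covariance, and each
# stock's average daily volume (ADV) in notional terms.
mu  = np.array([0.08, 0.12, 0.10])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.06]])
adv = np.array([6e6, 3e6, 2.5e6])
portfolio_notional = 1e6

# Liquidity constraint: cap each weight at what ~10% of ADV can absorb.
caps = np.minimum(1.0, 0.10 * adv / portfolio_notional)

# Unconstrained mean-variance direction (proportional to inv(cov) @ mu), normalized.
raw = np.linalg.solve(cov, mu)
w = raw / raw.sum()

# Waterfall: clip to liquidity caps, redistribute the shortfall to free names.
for _ in range(20):
    w = np.minimum(w, caps)
    shortfall = 1.0 - w.sum()
    if shortfall <= 1e-12:
        break
    free = w < caps - 1e-12
    w[free] += shortfall * w[free] / w[free].sum()

print(np.round(w, 4))  # the least liquid third name ends up pinned at its cap
```

The point of the liquidity adjustment is visible in the output: the optimizer would like more of the third stock, but its thin ADV caps the position.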

Deep Reinforcement Learning for Automated Stock Trading: An Ensemble Strategy (paper link: https://arxiv.org/pdf/2511.12120v1) by Hongyang Yang, Xiao‑Yang Liu, Shan Zhong, Anwar Walid. The paper presents an ensemble trading strategy that combines three actor‑critic algorithms—Proximal Policy Optimization (PPO), Advantage Actor‑Critic (A2C), and Deep Deterministic Policy Gradient (DDPG)—into a single agent. To mitigate memory consumption when training in a large continuous action space, a load‑on‑demand technique is employed. Experiments on 30 highly liquid Dow Jones stocks show that the ensemble outperforms both the Dow Jones Industrial Average and a traditional minimum‑variance portfolio in risk‑adjusted return as measured by the Sharpe ratio.
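The ensemble's core selection step—trade the next period with whichever agent validated best—can be sketched as follows. The three return series here are random stand‑ins; in the paper they would come from running the trained PPO, A2C, and DDPG agents over the same validation quarter:

```python
import numpy as np

rng = np.random.default_rng(0)

def sharpe(r):
    """Per-period (non-annualized) Sharpe ratio; enough for ranking agents."""
    r = np.asarray(r, dtype=float)
    s = r.std(ddof=1)
    return r.mean() / s if s > 0 else 0.0

# Stand-ins for each trained agent's daily returns on a 63-day validation window.
val_returns = {
    "PPO":  rng.normal(0.0008, 0.010, 63),
    "A2C":  rng.normal(0.0002, 0.012, 63),
    "DDPG": rng.normal(0.0005, 0.015, 63),
}

# Ensemble rule: trade the next quarter with the agent that scored the
# highest Sharpe ratio on the validation window.
best = max(val_returns, key=lambda k: sharpe(val_returns[k]))
```

Re‑running this selection every quarter is what lets the ensemble adapt: different algorithms tend to dominate in trending versus mean‑reverting regimes.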

A Practical Machine Learning Approach for Dynamic Stock Recommendation (paper link: https://arxiv.org/pdf/2511.12129v1) by Hongyang Yang, Xiao‑Yang Liu, Qingwei Wu. The authors propose a workflow that dynamically selects the top 20% of S&P 500 stocks. First, representative explanatory indicators are chosen. Five common machine‑learning models—linear regression, ridge regression, stepwise regression, random forest, and gradient‑boosted regression trees—are trained on rolling windows to predict quarterly log‑returns. For each period, the model with the lowest mean‑square error ranks the stocks. Portfolio allocation is then evaluated using equal‑weight, mean‑variance, and minimum‑variance schemes. Empirical results indicate superior Sharpe ratio and cumulative return compared with a long‑term single‑strategy benchmark on the S&P 500.
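The rolling model‑selection loop can be sketched with two of the five model families (OLS and ridge, both closed‑form) on synthetic panel data; the lowest‑validation‑MSE rule and top‑quantile ranking follow the workflow described above, while the data and features are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy panel: 5 stocks, 8 quarters, 3 explanatory indicators per stock,
# with next-quarter log-returns driven by a hidden linear signal plus noise.
n_stocks, n_feat = 5, 3
X = rng.normal(size=(8, n_stocks, n_feat))
beta_true = np.array([0.02, -0.01, 0.015])
y = X @ beta_true + rng.normal(0.0, 0.01, size=(8, n_stocks))

X_tr, y_tr = X[:6].reshape(-1, n_feat), y[:6].ravel()   # rolling training window
X_val, y_val = X[6], y[6]                               # validation quarter
X_next = X[7]                                           # quarter to trade

def fit_ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def fit_ridge(X, y, lam=1.0):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

betas = {"ols": fit_ols(X_tr, y_tr), "ridge": fit_ridge(X_tr, y_tr)}
mse = {m: np.mean((X_val @ b - y_val) ** 2) for m, b in betas.items()}
best = min(mse, key=mse.get)        # the paper's rule: lowest-MSE model ranks stocks

preds = X_next @ betas[best]
top = np.argsort(preds)[::-1][:1]   # top 20% of 5 stocks -> 1 pick
```

The selected names would then feed into the equal‑weight, mean‑variance, or minimum‑variance allocation schemes the paper compares.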

FINRS: A Risk‑Sensitive Trading Framework for Real Financial Markets (paper link: https://arxiv.org/pdf/2511.12599v1) by Bijia Liu, Ronghao Dang. The authors note that existing LLM‑based trading agents focus on single‑step prediction and lack integrated risk management. FINRS introduces hierarchical market analysis, dual‑decision agents, and multi‑time‑scale reward feedback to align trading actions with return targets and downside‑risk constraints. Experiments across multiple stocks and market conditions demonstrate that FINRS achieves higher profitability and stability than state‑of‑the‑art baselines.
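To illustrate the multi‑time‑scale, downside‑aware reward idea in the abstract above: the toy function below blends short‑ and long‑horizon P&L and penalizes long‑horizon losses extra. The weights and functional form are invented for illustration and are not taken from the paper:

```python
def risk_sensitive_reward(pnl_short, pnl_long,
                          w_short=1.0, w_long=0.5, dd_penalty=2.0):
    """Toy multi-time-scale reward: blend short- and long-horizon P&L and
    apply an extra penalty to downside on the long horizon."""
    downside = min(pnl_long, 0.0)          # only negative long-horizon outcomes
    return w_short * pnl_short + w_long * pnl_long + dd_penalty * downside

r_good = risk_sensitive_reward(0.01, 0.03)    # both horizons profitable
r_bad  = risk_sensitive_reward(0.01, -0.02)   # long-horizon loss penalized extra
```

The asymmetry is the point: an agent trained on such a reward is discouraged from chasing short‑term gains that expose it to longer‑horizon drawdowns.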

LOBERT: Generative AI Foundation Model for Limit Order Book Messages (paper link: https://arxiv.org/pdf/2511.12563v1) by Eljas Linna, Kestutis Baltakys, Alexandros Iosifidis, Juho Kanniainen. Modeling the dynamics of financial limit‑order books at the message level is challenging due to irregular event timing and rapid regime shifts. Prior LOB models require cumbersome data representations and lack adaptability. LOBERT addresses this by introducing a novel tokenization scheme that adapts the original BERT architecture: each multidimensional message becomes a single token while preserving continuous price, quantity, and time information. The model achieves leading performance on mid‑price movement prediction and next‑message tasks, while requiring shorter context lengths than previous approaches.
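To make the tokenization idea concrete, the sketch below maps each multidimensional order‑book message to a single continuous token by summing categorical embeddings with a linear projection of the numeric fields. The field layout, log transforms, and random projection matrices are assumptions for illustration, not LOBERT's published scheme:

```python
import numpy as np

rng = np.random.default_rng(2)
d_model = 8   # embedding width of the (toy) transformer

# Hypothetical message fields: (event type id, side, price, quantity, time delta)
messages = np.array([
    [1,  1, 100.25, 300, 0.004],   # new limit order, buy side
    [2, -1, 100.30, 100, 0.001],   # cancellation, sell side
    [3,  1, 100.27, 250, 0.012],   # execution, buy side
])

n_types = 4
W_cont = rng.normal(size=(3, d_model)) * 0.1   # projects (price, qty, dt)
E_type = rng.normal(size=(n_types, d_model)) * 0.1
E_side = rng.normal(size=(2, d_model)) * 0.1

def tokenize(msg):
    """One token per message: type/side embeddings plus a linear projection
    of the continuous fields, so no price/size binning is needed."""
    etype, side, price, qty, dt = msg
    cont = np.array([np.log(price), np.log1p(qty), np.log1p(dt)])
    return E_type[int(etype)] + E_side[0 if side > 0 else 1] + cont @ W_cont

tokens = np.stack([tokenize(m) for m in messages])  # shape (3, d_model)
```

Because each message collapses to one token while keeping its continuous values, a sequence of N messages needs only N positions of context, which is why shorter context lengths suffice compared with level‑snapshot representations.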

Tags: machine learning, deep reinforcement learning, cryptocurrency, quantitative finance, portfolio optimization, limit order book
Written by

Bighead's Algorithm Notes

Focused on AI applications in the fintech sector
