Quantitative Finance Paper Digest (Dec 27 2025 – Jan 2 2026)

This article curates recent quantitative finance research, summarizing four papers that explore generative‑AI‑enhanced portfolio construction, LLM‑driven alpha screening with reinforcement learning, a statistical test for look‑ahead bias in LLM forecasts, and a non‑stationarity‑complexity trade‑off framework for return prediction, each with links to the original arXiv PDFs and, where available, code.

Bighead's Algorithm Notes

Generative AI‑enhanced Sector‑based Investment Portfolio Construction

Paper link: https://arxiv.org/pdf/2512.24526v1

Large language models (LLMs) from OpenAI, Google, Anthropic, DeepSeek and xAI are prompted to select and weight 20 stocks within each S&P 500 industry index. The selected stocks are combined with classic portfolio‑optimization techniques. Out‑of‑sample evaluation uses two periods in 2025: a stable market (Jan–Mar) and a turbulent market (Apr–Jun). In the stable period, LLM‑weighted portfolios often achieve higher cumulative returns and Sharpe ratios than the corresponding industry index. In the turbulent period many LLM portfolios underperform, indicating difficulty adapting to regime shifts or high volatility. Hybridizing LLM‑based stock selection with traditional optimization improves both performance and consistency.
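The hybrid step described above pairs LLM stock selection with a classic optimizer. As a minimal sketch of that second stage (not the paper's implementation), the snippet below computes closed‑form minimum‑variance weights over a set of already‑selected stocks; the synthetic return matrix and the ridge term are illustrative assumptions:

```python
import numpy as np

def min_variance_weights(returns: np.ndarray) -> np.ndarray:
    """Closed-form minimum-variance weights for the selected stocks.

    returns: (T, N) matrix of daily returns for the N LLM-selected stocks.
    """
    cov = np.cov(returns, rowvar=False)
    # Small ridge term keeps the covariance matrix invertible on short windows.
    cov += 1e-6 * np.eye(cov.shape[0])
    inv_ones = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return inv_ones / inv_ones.sum()  # normalize so weights sum to 1

# Synthetic stand-in for 60 days of returns on 20 LLM-selected stocks.
rng = np.random.default_rng(0)
sim_returns = rng.normal(0.0005, 0.01, size=(60, 20))
w = min_variance_weights(sim_returns)
print(w.sum())  # fully invested: weights sum to 1
```

Any portfolio‑optimization technique (mean‑variance with a return target, risk parity, etc.) could slot in here; the point is that the LLM only narrows the universe, while the optimizer sets the weights.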

Alpha‑R1: Alpha Screening with LLM Reasoning via Reinforcement Learning

Paper link: https://arxiv.org/pdf/2512.23515v1

Code link: https://github.com/FinStep-AI/Alpha-R1

Alpha‑R1 is an 8‑billion‑parameter reasoning model trained with reinforcement learning to perform context‑aware alpha screening. The model reasons over factor logic and real‑time news, evaluates the relevance of each alpha, and activates or deactivates factors based on contextual consistency. Empirical tests across multiple asset pools show Alpha‑R1 consistently outperforms benchmark strategies and exhibits greater robustness to alpha decay.
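The activate/deactivate mechanic can be pictured as a relevance gate over a factor pool. The sketch below is a loose illustration, not Alpha‑R1's architecture: the `relevance` scores stand in for the model's contextual‑consistency judgments, and the threshold and weighting scheme are arbitrary assumptions:

```python
from dataclasses import dataclass

@dataclass
class Alpha:
    name: str
    signal: float     # the factor's current signal for an asset
    relevance: float  # hypothetical contextual-relevance score in [0, 1]

def screen_alphas(alphas, threshold=0.5):
    """Keep only factors whose contextual relevance clears the threshold,
    then blend the surviving signals, weighted by relevance."""
    active = [a for a in alphas if a.relevance >= threshold]
    if not active:
        return 0.0, []
    score = sum(a.signal * a.relevance for a in active) / sum(a.relevance for a in active)
    return score, [a.name for a in active]

pool = [
    Alpha("momentum", 0.8, 0.9),        # contextually supported -> kept
    Alpha("value", 0.3, 0.2),           # low relevance -> deactivated
    Alpha("news_sentiment", -0.4, 0.7), # relevant but negative signal
]
score, active = screen_alphas(pool)
print(active, round(score, 3))
```

In the actual system, the relevance judgment comes from the RL‑trained reasoning model reading factor logic and real‑time news, rather than from fixed scores.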

A Test of Lookahead Bias in LLM Forecasts

Paper link: https://arxiv.org/pdf/2512.23847v1

The authors propose a statistical test that estimates the probability a prompt appears in an LLM’s training corpus (Lookahead Propensity, LAP). They demonstrate a positive correlation between LAP and forecast accuracy, confirming the existence and magnitude of look‑ahead bias. The test is applied to two tasks: news‑headline forecasting of stock returns and earnings‑call transcript forecasting of capital expenditures.
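The diagnostic rests on a simple relationship: if forecasts are cleaner on prompts the model has likely memorized, LAP should correlate positively with accuracy. A toy sketch of that check, with synthetic data in place of the paper's LAP estimator and forecasting tasks:

```python
import numpy as np

def lap_accuracy_correlation(lap: np.ndarray, correct: np.ndarray) -> float:
    """Pearson correlation between estimated Lookahead Propensity and
    forecast correctness; a significantly positive value flags leakage."""
    return float(np.corrcoef(lap, correct.astype(float))[0, 1])

# Synthetic illustration (not the paper's data): prompts more likely to be
# in the training corpus are forecast correctly more often.
rng = np.random.default_rng(1)
lap = rng.uniform(0, 1, size=500)
correct = rng.uniform(0, 1, size=500) < 0.4 + 0.4 * lap
rho = lap_accuracy_correlation(lap, correct)
print(rho > 0)  # positive correlation indicates look-ahead bias
```

A real application would add a significance test on rho; the paper's contribution is the LAP estimator itself, which this sketch takes as given.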

The Nonstationarity‑Complexity Tradeoff in Return Prediction

Paper link: https://arxiv.org/pdf/2512.23596v1

The study identifies a trade‑off: more complex models reduce specification error but require longer training windows, which increase exposure to non‑stationarity. A model‑selection framework jointly optimizes model class and training‑window size via an adaptive competition on non‑stationary validation data. Theoretical analysis shows the method balances specification error, estimation variance, and non‑stationarity, achieving performance close to the hindsight‑optimal model. Applied to 17 industry‑portfolio returns, the approach improves out‑of‑sample R² by 14‑23 % relative to standard rolling‑window baselines. During NBER‑identified recessions (Gulf War, 2001, 2008), the method yields positive R² where baselines are negative and adds at least 80 basis points of R² in 2001. Corresponding trading strategies generate cumulative returns more than 31 % higher than industry benchmarks.
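The core mechanic of the framework is a joint search over model class and training‑window length, scored on recent data. A minimal sketch of that competition, using ridge‑regularized linear models of varying feature counts as the stand‑in "model classes" (the grids, validation length, and data are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def fit_ols(X, y):
    # Ridge-regularized least squares keeps short windows well-posed.
    A = X.T @ X + 1e-6 * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

def select_model(X, y, windows=(60, 120, 240), n_features=(1, 3, 5), val_len=20):
    """Adaptive competition: fit every (window, complexity) pair on its own
    trailing window and score it on the most recent validation block."""
    T = len(y)
    best, best_mse = None, np.inf
    for w in windows:
        for k in n_features:
            tr = slice(T - val_len - w, T - val_len)  # trailing training window
            va = slice(T - val_len, T)                # most recent block
            beta = fit_ols(X[tr, :k], y[tr])
            mse = np.mean((X[va, :k] @ beta - y[va]) ** 2)
            if mse < best_mse:
                best, best_mse = (w, k), mse
    return best, best_mse

# Synthetic predictors: only the first feature carries signal.
rng = np.random.default_rng(2)
X = rng.normal(size=(400, 5))
y = X[:, 0] * 0.5 + rng.normal(scale=0.1, size=400)
(best_w, best_k), mse = select_model(X, y)
print(best_w, best_k)
```

Longer windows cut estimation variance but, in truly non‑stationary data, drag in stale regimes; the validation block arbitrates that trade‑off, which is the intuition the paper formalizes.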

Tags: Generative AI, quantitative finance, portfolio optimization, Alpha Screening, LLMs, Lookahead Bias, Nonstationarity
Written by

Bighead's Algorithm Notes

Focused on AI applications in the fintech sector
