Recent Time-Series Paper Summaries (Sep 13‑19, 2025)

This article summarizes four recent time‑series forecasting papers, covering a universal delay‑embedding foundation model, a dual causal network that leverages exogenous variables, a distribution‑aware alignment plug‑in called TimeAlign, and a shapelet‑based framework for interpretable directional forecasting in noisy financial markets.

Bighead's Algorithm Notes

A Time‑Series Foundation Model by Universal Delay Embedding

Paper link: http://arxiv.org/pdf/2509.12080v1

Authors: Zijian Wang, Peng Tao, Jifan Shi, Rui Bao, Rui Liu, Luonan Chen

Universal Delay Embedding (UDE) is a pretrained foundation model that combines delay‑embedding representations with Koopman‑operator prediction. Leveraging Takens' embedding theorem, the method constructs two‑dimensional patches from Hankel matrices of the observed series; these patches preserve the underlying dynamics and topology. The patches are treated as image tokens for a self‑attention encoder, enabling a finite‑dimensional Koopman operator to perform linear prediction in a latent space. Benchmarks on multiple datasets and real‑world climate data show a mean‑squared‑error reduction of more than 20 % relative to state‑of‑the‑art foundation models, and fine‑tuning experiments demonstrate superior generalization. The learned dynamic representations and Koopman predictions provide interpretable subspaces and invariant dynamic codes.
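The core mechanics of delay embedding plus a linear Koopman-style operator can be sketched in a few lines. This is a minimal illustrative toy, not the authors' UDE implementation (which tokenizes Hankel patches for a self-attention encoder); here a DMD-style least-squares fit stands in for the learned Koopman operator, and the `hankel_embed`/`fit_koopman` names are mine:

```python
import numpy as np

def hankel_embed(x, d):
    """Delay-embed a 1-D series into a Hankel matrix with embedding dimension d.
    Column t is the delay vector [x_t, x_{t+1}, ..., x_{t+d-1}]."""
    n = len(x) - d + 1
    return np.stack([x[i:i + n] for i in range(d)], axis=0)  # shape (d, n)

def fit_koopman(H):
    """Least-squares estimate of a finite-dimensional linear operator K with
    H[:, 1:] ≈ K @ H[:, :-1] (the classic DMD approximation of the Koopman
    operator on delay coordinates)."""
    X, Y = H[:, :-1], H[:, 1:]
    return Y @ np.linalg.pinv(X)

# Toy example: a noiseless sinusoid evolves exactly linearly in delay
# coordinates (x_{t+1} = 2*cos(w)*x_t - x_{t-1}), so the fit is near-exact.
t = np.arange(200)
x = np.sin(0.1 * t)
H = hankel_embed(x, d=8)
K = fit_koopman(H)

# One-step linear prediction in the embedded space.
pred = K @ H[:, :-1]
err = float(np.max(np.abs(pred - H[:, 1:])))
```

For a clean sinusoid the residual `err` is at machine-precision level, which is exactly the property UDE exploits: dynamics that are nonlinear in the raw observable become (approximately) linear in a rich enough delay-embedded space.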

UDE illustration

DAG: A Dual Causal Network for Time Series Forecasting with Exogenous Variables

Paper link: http://arxiv.org/pdf/2509.14933v1

Authors: Xiangfei Qiu, Yuhan Zhu, Zhengyu Li, Hanyin Cheng, Xingjian Wu, Chenjuan Guo, Bin Yang, Jilin Hu

DAG introduces a universal framework that exploits both future exogenous variables and causal relations between endogenous and exogenous series. The architecture contains two parallel causal pathways:

Temporal‑causal pathway: a causal‑discovery module learns how historical exogenous variables affect future exogenous variables; a causal‑injection module incorporates this discovered influence when predicting future endogenous variables from historical endogenous data.

Channel‑causal pathway: a similar causal‑discovery module learns how historical exogenous variables influence historical endogenous variables; the corresponding injection module uses this information to enhance predictions that also leverage future exogenous variables.

The dual design enables the model to capture cross‑dimensional causality and to integrate future covariates, addressing two limitations of existing forecasting‑with‑exogenous‑variables (TSF‑X) methods: the neglect of causal relations between endogenous and exogenous channels, and the underuse of known future exogenous values.
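The value of the channel-causal pathway can be seen even in a linear toy. The sketch below is not the DAG architecture (no learned causal-discovery or injection modules); it only illustrates why "injecting" exogenous history as extra features helps when the exogenous series causally drives the endogenous one. All names and the data-generating process are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: an exogenous driver z causally determines the endogenous
# series y one step later: y[t] = 0.8 * z[t-1] + small noise.
T = 500
z = rng.normal(size=T)
y = np.empty(T)
y[0] = 0.0
y[1:] = 0.8 * z[:-1] + 0.05 * rng.normal(size=T - 1)

L = 4  # lookback window length

# Build (historical endogenous, historical exogenous, future endogenous) rows.
X_endo = np.stack([y[t - L:t] for t in range(L, T)])
X_exo = np.stack([z[t - L:t] for t in range(L, T)])
target = y[L:]

def fit_predict(X, t):
    """Ordinary least-squares fit, in-sample predictions."""
    w, *_ = np.linalg.lstsq(X, t, rcond=None)
    return X @ w

# Endogenous-only model vs. a model with the exogenous history injected.
err_endo = float(np.mean((fit_predict(X_endo, target) - target) ** 2))
err_dual = float(np.mean((fit_predict(np.hstack([X_endo, X_exo]), target) - target) ** 2))
```

Because `z` is i.i.d., the endogenous history carries almost no information about the next step, while the injected exogenous features recover the causal coefficient and drive the error down to the noise floor, the same intuition DAG operationalizes with learned causal discovery and injection.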

DAG architecture

Bridging Past and Future: Distribution‑Aware Alignment for Time Series Forecasting

Paper link: http://arxiv.org/pdf/2509.14181v1

Code link: https://github.com/TROUBADOUR000/TimeAlign

TimeAlign is a lightweight, plug‑and‑play framework that learns auxiliary features through a simple reconstruction task and feeds them back to any base predictor. Extensive experiments on eight benchmark datasets show consistent performance gains over strong baselines. Ablation analysis attributes the improvement mainly to correcting frequency mismatches between historical inputs and future targets. The authors provide a theoretical argument that the alignment increases the mutual information between learned representations and prediction targets. Because TimeAlign adds negligible computational overhead and does not depend on a specific architecture, it can be inserted into modern deep‑learning time‑series forecasting pipelines.

TimeAlign diagram

From Patterns to Predictions: A Shapelet‑Based Framework for Directional Forecasting in Noisy Financial Markets

Paper link: http://arxiv.org/pdf/2509.15040v1

Authors: Juwon Kim, Hyunwook Lee, Hyotaek Jeon, Seungmin Jin, Sungahn Ko

The proposed two‑stage framework first applies SIMPC to segment and cluster multivariate financial series, extracting cyclic patterns that are invariant to amplitude scaling and time warping, even when window sizes vary. In the second stage, JISC‑Net, a shapelet‑based classifier, consumes the initial segment of each extracted pattern and predicts the subsequent segment, yielding short‑term directional forecasts. Experiments on Bitcoin and three S&P 500 indices cover twelve metric‑data combinations; the method ranks first or second on eleven of them, outperforming existing deep‑learning baselines. The approach also provides explicit pattern structures that explain each prediction, offering transparent decision‑making.
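The basic shapelet-matching primitive behind such classifiers is easy to state: the distance between a shapelet and a series is the minimum z-normalized Euclidean distance over all same-length sliding windows, and a small distance signals the pattern's presence. The sketch below shows that primitive on toy data; it is not SIMPC or JISC-Net, and the "up-swing" shapelet and threshold-free comparison are illustrative assumptions:

```python
import numpy as np

def shapelet_distance(series, shapelet):
    """Minimum z-normalized Euclidean distance between a shapelet and any
    same-length sliding window of the series (standard shapelet matching,
    invariant to amplitude offset and scale)."""
    m = len(shapelet)
    s = (shapelet - shapelet.mean()) / (shapelet.std() + 1e-8)
    best = np.inf
    for i in range(len(series) - m + 1):
        w = series[i:i + m]
        w = (w - w.mean()) / (w.std() + 1e-8)
        best = min(best, float(np.linalg.norm(w - s)))
    return best

# Toy directional rule: a close match to an accelerating "up-swing" shapelet
# suggests an upward move; a poor match suggests the opposite.
up = np.array([0.0, 0.2, 0.5, 0.9, 1.4])
rising = np.linspace(0.0, 3.0, 30) ** 1.5   # contains an accelerating rise
falling = -np.linspace(0.0, 3.0, 30)        # monotone decline

d_rise = shapelet_distance(rising, up)
d_fall = shapelet_distance(falling, up)
```

Because each prediction is tied to a concrete matched subsequence, the decision can be shown to a user, which is the transparency argument the paper makes for noisy financial data.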

Shapelet framework illustration
Tags: forecasting, time series, representation learning, foundation model, financial markets, causal network
Written by Bighead's Algorithm Notes, a blog focused on AI applications in the fintech sector.