Time Series Paper Digest (Oct 11‑17 2025): FIRE, CauchyNet, EvoRate, CoRA

This digest covers four recent papers on time‑series forecasting: FIRE introduces a frequency‑domain decomposition with independent amplitude‑phase modeling and adaptive weighting; CauchyNet leverages holomorphic activations for compact, data‑efficient learning; EvoRate quantifies learnability via mutual information; and CoRA adds covariate‑aware adaptation to time‑series foundation models. All four report significant accuracy gains and improved interpretability.

Bighead's Algorithm Notes

FIRE: A Unified Frequency Domain Decomposition Framework for Interpretable and Robust Time Series Forecasting

Current time‑series forecasting methods, whether in the time or frequency domain, mainly use deep learning models based on linear layers or transformers. These approaches encode series in a black‑box manner and rely on trial‑and‑error optimization driven by prediction performance, limiting interpretability and theoretical understanding. Moreover, dynamic changes in data distribution across time and frequency pose a key challenge for accurate forecasting. The authors propose FIRE, a unified frequency‑domain decomposition framework that provides a mathematical abstraction for different types of series to achieve interpretable and robust forecasting. FIRE introduces four innovations: (i) independent modeling of amplitude and phase components, (ii) adaptive learning of frequency‑basis component weights, (iii) a targeted loss function, and (iv) a new training paradigm for sparse data. Extensive experiments show that FIRE consistently outperforms state‑of‑the‑art models on long‑term forecasting benchmarks, delivering superior prediction performance and markedly improved interpretability.

CauchyNet: Compact and Data‑Efficient Learning using Holomorphic Activation Functions

The paper introduces a novel neural network inspired by the Cauchy integral formula, named CauchyNet, for function approximation tasks such as time‑series forecasting and missing‑data imputation. By embedding real‑valued data into the complex plane, CauchyNet captures complex temporal dependencies and surpasses traditional real‑valued models in both prediction performance and computational efficiency. CauchyNet is grounded in the Cauchy integral formula and is supported by a universal approximation theorem, providing strong theoretical guarantees. The architecture employs complex‑valued activation functions, enabling robust learning from incomplete data while keeping the parameter count compact and reducing computational overhead. Experiments across diverse domains—including traffic, energy consumption, and epidemiological data—demonstrate that CauchyNet consistently achieves up to 50 % lower mean absolute error and uses fewer parameters than the best existing models, highlighting its potential for data‑driven predictive modeling in resource‑constrained and data‑scarce settings.
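To make the "embed into the complex plane, then apply a holomorphic activation" idea tangible, here is a minimal sketch assuming a Cauchy-kernel-style activation of the form sum_k r_k / (z - p_k). This functional form is our reading of "inspired by the Cauchy integral formula"; the paper's actual unit and parameterization may differ.

```python
import numpy as np

def cauchy_activation(z, poles, residues):
    """Holomorphic activation of Cauchy-kernel form:
    sum_k residues[k] / (z - poles[k]). Hypothetical form, holomorphic
    everywhere except at the poles, which are kept away from the data."""
    z = z[..., None]                       # broadcast over the pole axis
    return np.sum(residues / (z - poles), axis=-1)

rng = np.random.default_rng(0)
x = rng.normal(size=8)
z = x + 0.5j                               # embed real inputs into the complex plane
poles = np.array([2.0 + 2.0j, -2.0 - 2.0j])
residues = np.array([1.0 + 0j, 1.0 + 0j])
out = cauchy_activation(z, poles, residues)
print(out.shape, out.dtype)                # (8,) complex128
```

In a full model the poles and residues would be trainable, and a final real projection (e.g. taking the real part) would map back to real-valued predictions.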

How Patterns Dictate Learnability in Sequential Data

Sequential data—from financial series to natural language—has driven the rise of autoregressive models, yet these algorithms depend heavily on latent patterns that are often identified by human experts. Misunderstanding these patterns can cause model misspecification, increasing generalization error and degrading performance. The recently proposed Evolutionary Rate (EvoRate) metric addresses this by using the mutual information between the next data point and its past to guide regression order estimation and feature selection. Building on this idea, the authors introduce a universal framework based on predictive information, defined as the mutual information between past and future. This quantity naturally yields an information‑theoretic learning curve that quantifies the amount of predictive information available as the observation window grows. Using this formulation, the authors show that the presence or absence of temporal patterns fundamentally limits the learnability of sequence models: even an optimal predictor cannot exceed the intrinsic information constraints imposed by the data. Experiments on synthetic data validate the framework, demonstrating its ability to assess model sufficiency, quantify inherent dataset complexity, and reveal interpretable structure within sequential data.
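The core quantity, mutual information between the next point and its past, can be estimated directly on discrete synthetic data. The plug-in estimator below is a crude stand-in for the paper's framework, not its estimator: a deterministic alternating sequence carries about 1 bit of predictive information, while an i.i.d. coin flip carries essentially none.

```python
from collections import Counter
import numpy as np

def plugin_mutual_information(pairs):
    """Plug-in estimate (in bits) of I(past; next) from observed
    (past, next) symbol pairs -- a simple stand-in for the predictive
    information EvoRate is built on."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(p for p, _ in pairs)
    py = Counter(q for _, q in pairs)
    mi = 0.0
    for (a, b), c in joint.items():
        pab = c / n
        mi += pab * np.log2(pab * n * n / (px[a] * py[b]))
    return mi

rng = np.random.default_rng(0)
det = [i % 2 for i in range(10000)]              # alternating: past determines next
iid = rng.integers(0, 2, size=10000).tolist()    # fair coin: past is uninformative

mi_det = plugin_mutual_information(list(zip(det[:-1], det[1:])))
mi_iid = plugin_mutual_information(list(zip(iid[:-1], iid[1:])))
print(mi_det, mi_iid)
```

The gap between the two estimates illustrates the paper's point: no predictor, however powerful, can extract more than the roughly 0 bits the i.i.d. sequence contains.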

CoRA: Covariate‑Aware Adaptation of Time Series Foundation Models

Time‑Series Foundation Models (TSFMs) have shown strong impact through model capacity, scalability, and zero‑shot generalization. However, most TSFMs are pretrained on univariate series, because heterogeneous variable dependencies and backbone scalability on large multivariate datasets limit their applicability; as a result, they ignore the crucial covariate information present in real‑world prediction tasks. To close this gap, the authors propose CoRA, a generic covariate‑aware adaptation framework. CoRA leverages the pretrained backbone while efficiently integrating exogenous covariates from various modalities, including time series, language, and images, to improve prediction quality. Technically, CoRA preserves exact equivalence with the pretrained model at initialization and keeps its parameters consistent during adaptation. By freezing the backbone as a feature extractor, the authors empirically show that its output embeddings carry more information than the raw input data. CoRA also introduces a novel Granger Causal Embedding (GCE) that automatically assesses the predictive causality of each covariate for the target. The weighted embeddings are combined through a zero‑initialized conditional injection mechanism, which prevents catastrophic forgetting of the pretrained backbone while gradually incorporating exogenous information. Extensive experiments show that CoRA reduces mean‑squared error by 31.1 % compared with covariate‑aware deep predictors under full‑ or few‑shot training, and is compatible with a range of advanced TSFMs while extending covariate coverage to additional modalities, offering a practical paradigm for TSFM applications.
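The zero-initialized conditional injection can be sketched in a few lines: with the covariate projection initialized to zero, the adapted model reproduces the frozen backbone exactly at initialization, which is the equivalence property the paper highlights. Class and variable names here are hypothetical, not CoRA's API.

```python
import numpy as np

class ZeroInitInjection:
    """Zero-initialized conditional injection (illustrative sketch).
    At initialization the covariate branch contributes nothing, so the
    adapted model exactly reproduces the frozen backbone's output;
    training then gradually moves W away from zero."""
    def __init__(self, d_cov, d_model):
        self.W = np.zeros((d_cov, d_model))  # zero-init: no covariate signal yet

    def __call__(self, backbone_emb, cov_features):
        # Frozen-backbone embedding plus a learned covariate correction.
        return backbone_emb + cov_features @ self.W

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 16))        # embeddings from the frozen backbone
cov = rng.normal(size=(4, 8))         # exogenous covariate features
inject = ZeroInitInjection(8, 16)
print(np.array_equal(inject(emb, cov), emb))  # True at initialization
```

In CoRA the covariate features would additionally be weighted by the Granger Causal Embedding scores before injection; here they are passed through unweighted for brevity.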

Tags: deep learning, time series forecasting, AI research, covariate-aware adaptation, frequency domain decomposition, holomorphic neural networks, information-theoretic learnability
Written by

Bighead's Algorithm Notes

Focused on AI applications in the fintech sector
