How a 0.5 MB AI Model Tackles Global Supply‑Chain Challenges: Li‑Net in Action

Li‑Net is a 0.5 MB multi‑channel time‑series model co‑developed by SF Technology and Chinese universities. It achieves state‑of‑the‑art accuracy with linear‑complexity attention, runs on edge devices, and has been deployed across SF's global supply chain for demand forecasting, inventory optimization, and capacity planning, delivering measurable cost reductions.


Background and Publication: Li‑Net was jointly created by SF Technology, Zhejiang University, and Zhejiang University of Technology, targeting industrial‑grade multi‑channel time‑series prediction. The work was peer‑reviewed and presented at the ICDE 2026 conference, marking a rare blend of academic innovation and real‑world logistics applications.

Industry Pain Points: Global supply chains face three major challenges: (1) coupling of multiple variables, (2) strong non‑stationary fluctuations, and (3) difficulty integrating multimodal data. Conventional models suffer from channel‑wise bias, O(n²) attention complexity, and oversized architectures that cannot be deployed on edge devices, limiting real‑time prediction and large‑scale node scheduling.

Technical Innovation: Li‑Net adopts an Encoder‑Processor‑Decoder pipeline. It replaces simple feature concatenation with multimodal embeddings that guide attention. A dual‑dimensional (time + channel) Top‑K sparse attention reduces computational complexity to linear time. The backbone relies on a lightweight MLP, enabling high precision under low‑compute constraints.
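The dual‑dimensional Top‑K idea can be sketched as follows. This is an illustrative simplification, not Li‑Net's published implementation: the function and parameter names are invented for this sketch, the real model's projections, multimodal embeddings, and MLP backbone are omitted, and this naive version still materializes the full score matrix before masking, whereas a linear‑complexity implementation would avoid that.

```python
import numpy as np

def topk_sparse_attention(q, k, v, top_k):
    """Attention where each query attends only to its top_k highest-scoring keys."""
    scores = q @ k.T / np.sqrt(q.shape[-1])                     # (n_q, n_k) full scores
    # Keep only the top_k scores per query; mask the rest to -inf.
    idx = np.argpartition(scores, -top_k, axis=-1)[:, -top_k:]
    masked = np.full_like(scores, -np.inf)
    np.put_along_axis(masked, idx, np.take_along_axis(scores, idx, axis=-1), axis=-1)
    # Softmax over the surviving entries (masked entries contribute exp(-inf) = 0).
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def dual_dim_attention(x, top_k=4):
    """Apply Top-K sparse attention along the time axis, then the channel axis.

    x: (time, channels) multivariate series.
    """
    k_t = min(top_k, x.shape[0])
    out_time = topk_sparse_attention(x, x, x, k_t)              # mix across time steps
    xt = out_time.T                                             # (channels, time)
    k_c = min(top_k, xt.shape[0])
    out_chan = topk_sparse_attention(xt, xt, xt, k_c)           # mix across channels
    return out_chan.T                                           # back to (time, channels)
```

Restricting each query to its top‑k keys is what caps the per‑query cost; setting `top_k` to the full sequence length recovers ordinary dense attention.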

Model Performance: Across five public datasets and 24 configuration settings, Li‑Net secured 20 state‑of‑the‑art results, achieving an average MAE of 0.3443. It outperformed iTransformer, PatchTST, and TFT. The model size is only 0.5 MB, with inference latency between 0.4 s and 0.56 s and minimal training memory consumption, delivering an optimal balance of accuracy and efficiency.

Deployment and Business Value: The model has been rolled out at scale within SF's global supply chain for sales forecasting, inventory optimization, and capacity planning. It effectively lowers stock‑out and overstock costs, boosts turnover and fulfillment rates, and provides a quantifiable, reproducible AI solution that can be deployed on edge devices.

Audience Takeaways:

Acquire a practical solution to the core challenges of multi‑channel, multimodal, long‑horizon forecasting in industrial settings.

Learn the three key techniques—dual‑dimensional Top‑K sparse attention, multimodal navigation, and lightweight MLP backbone—that enable SOTA accuracy with a 0.5 MB model.

Understand the complete pipeline from research innovation to engineering implementation and business impact, and how to measure AI‑driven cost reduction and efficiency gains.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: AI, Supply Chain, time series forecasting, edge deployment, lightweight model, Li-Net
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
