How a 0.5 MB AI Model Tackles Global Supply‑Chain Challenges: Li‑Net Technology and Applications
This article presents Li‑Net, a 0.5 MB lightweight time‑series model co‑developed by SF Technology and two universities and accepted at ICDE 2026. Li‑Net tackles multi‑channel, non‑stationary, multimodal forecasting, achieves state‑of‑the‑art accuracy at low latency, and is deployed across SF's global logistics network to improve demand, inventory, and capacity planning while cutting costs.
Paper background and ICDE acceptance: Li‑Net was jointly created by SF Technology, Zhejiang University, and Zhejiang University of Technology, focusing on industrial‑grade multi‑channel time‑series prediction. The work was published at the ICDE 2026 conference, marking a rare combination of academic innovation and real‑world logistics deployment.
Industry pain points and challenges: Global supply chains suffer from three major issues: (1) coupling of multiple variables, (2) strong non‑stationary fluctuations, and (3) difficulty integrating multimodal data. Conventional models exhibit channel‑wise bias, O(n²) attention complexity, and oversized SOTA models that cannot run on edge devices, limiting real‑time prediction and large‑scale node scheduling.
Technical innovation and methodology: Li‑Net adopts an Encoder‑Processor‑Decoder pipeline. It replaces simple feature concatenation with multimodal embeddings that guide attention. A dual‑dimensional (time + channel) Top‑K sparse attention yields linear‑time computation. The backbone relies on a lightweight MLP, enabling high precision under low‑compute constraints.
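The paper's exact attention mechanism is not reproduced here, but the core Top‑K sparsity idea can be sketched: instead of letting each query attend to all keys (the source of O(n²) cost), keep only each query's `top_k` highest‑scoring keys and softmax over those. The function name, shapes, and single‑dimension treatment below are illustrative assumptions; Li‑Net applies this sparsification along both the time and channel dimensions.

```python
import numpy as np

def topk_sparse_attention(q, k, v, top_k):
    """Illustrative Top-K sparse attention: each query row attends
    only to its top_k highest-scoring keys; all other key positions
    are masked out before the softmax, so their weight is exactly 0.
    q, k, v: arrays of shape (T, d)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])            # (T, T) scaled dot products
    # Column indices of the top_k scores in each row (unordered within the set).
    idx = np.argpartition(scores, -top_k, axis=-1)[:, -top_k:]
    # Start from -inf everywhere, then copy back only the kept scores.
    masked = np.full_like(scores, -np.inf)
    np.put_along_axis(masked, idx,
                      np.take_along_axis(scores, idx, axis=-1), axis=-1)
    # Softmax over the surviving entries (-inf -> weight 0).
    w = np.exp(masked - masked.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v
```

In a practical implementation the dense (T, T) score matrix would itself be avoided (e.g., by block‑wise or routed scoring), which is what makes the overall computation scale linearly rather than quadratically; the sketch above only shows the selection‑then‑softmax logic.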
Model performance and results: Across five datasets and 24 configurations, Li‑Net achieved 20 SOTA records, with an average MAE of 0.3443, substantially surpassing iTransformer, PatchTST, and TFT. The model size is only 0.5 MB, inference latency ranges from 0.4 s to 0.56 s, and training memory usage is minimal, delivering an optimal balance of accuracy and efficiency.
Deployment and enterprise value: Li‑Net has been rolled out at scale in SF’s global supply‑chain scenarios, including sales forecasting, inventory optimization, and capacity planning. The deployment reduces stock‑out and overstock costs, improves turnover and fulfillment rates, and provides a quantifiable, reproducible, edge‑deployable AI solution for cost reduction and efficiency gains.
Audience takeaways: attendees learn (1) industrial solutions for multi‑channel, multimodal, long‑horizon forecasting that apply directly to logistics, retail, and warehousing; (2) how dual‑dimensional Top‑K sparse attention, multimodal embedding guidance, and a lightweight MLP backbone combine to reach SOTA accuracy with a 0.5 MB model on edge devices; (3) a complete pipeline from research innovation to engineering implementation and measurable business impact.
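To make the "0.5 MB on edge devices" claim concrete, a lightweight MLP backbone for forecasting can be sketched as a small window‑to‑horizon network with weights shared across channels. The function names, layer sizes, and channel‑independent design below are hypothetical illustrations, not Li‑Net's actual architecture; they only show why such a backbone's parameter footprint stays tiny.

```python
import numpy as np

def init_mlp(lookback, horizon, hidden, rng):
    """Two-layer MLP mapping a lookback window to a forecast horizon.
    Weights are shared across channels, so the parameter count does
    not grow with the number of series."""
    w1 = rng.normal(scale=lookback ** -0.5, size=(lookback, hidden))
    w2 = rng.normal(scale=hidden ** -0.5, size=(hidden, horizon))
    return w1, w2

def mlp_forecast(x, params):
    """x: (channels, lookback) history -> (channels, horizon) forecast."""
    w1, w2 = params
    h = np.maximum(x @ w1, 0.0)   # ReLU hidden layer
    return h @ w2
```

With an illustrative 96‑step lookback, 24‑step horizon, and 64 hidden units, the backbone holds 96×64 + 64×24 = 7,680 weights, around 30 KB in float32, which shows how an MLP‑centric design can stay well under a 0.5 MB budget.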
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
DataFunSummit
Official account of the DataFun community, dedicated to sharing big data and AI industry summit news and speaker talks, with regular downloadable resource packs.
