Ant Group’s AntBaiLing Model: Pushing AI Scaling Limits with Trillion‑Parameter Efficiency

Luo Ji, President of Ant Group’s Platform Technology Business Group, outlined how the AntBaiLing suite, featuring trillion‑parameter open‑source models, three efficiency breakthroughs, and a domestic compute cluster, is advancing AGI research and inclusive applications, especially in healthcare, while emphasizing ethical, trustworthy AI.


On November 8, at the 2025 World Internet Conference · Wuzhen Summit Frontier Artificial Intelligence Model Forum, Luo Ji, President of Ant Group’s Platform Technology Business Group, delivered a keynote on Ant’s continuous technological innovations that drive breakthroughs in large‑scale models.

He noted that Artificial General Intelligence (AGI) has advanced rapidly under the “Scaling Law”: mainstream flagship language models are now trained on over 20 TB of data and have entered the “trillion‑parameter” era, while tightening compute resources and rising costs constrain further progress.

Ant Group is pursuing a series of innovations—model‑parameter efficiency, data‑application efficiency, and compute efficiency (“three efficiencies”)—to enhance two key factors of intelligent experience: model response speed and knowledge scale, aiming to promote inclusive AGI applications.

Using the Ant BaiLing model as an example, Luo described concrete results. Ant BaiLing has built a full‑size, full‑modality, fully open‑source model suite spanning language, reasoning, and multimodal capabilities. In October, Ant released Ring‑1T, the world’s first open‑source trillion‑parameter “thinking” model, which demonstrates leading performance in logical reasoning and code generation, mathematics ability comparable to an International Mathematical Olympiad silver medalist, and strong results in healthcare and creative writing. Innovative training methods stabilized sequence lengths across both training and inference, nearly doubling training efficiency.

Ant has also achieved a breakthrough in controlling the number of tokens generated during inference: Ling‑1T adopts an evolutionary chain‑of‑thought approach that maintains correctness while significantly reducing token consumption, attaining a Pareto‑optimal balance between task effectiveness and compute cost.

On the compute side, Ant has deployed a domestically produced cluster with tens of thousands of accelerator cards that supports both its self‑developed models and mainstream open‑source models. Training stability exceeds 98%, performance rivals that of international clusters, and the infrastructure is fully applied to large‑model training and to inference services for security risk control.

Luo emphasized that the ideal future is not AI replacing humans but AI amplifying human capabilities, achieving high‑level human‑machine collaboration while ensuring ethical, trustworthy, and secure AI.

Ant BaiLing’s technology has been deployed across multiple domains, especially healthcare. This year Ant launched AQ, its first AGI‑native medical application, which serves nearly 800 million medical‑insurance‑code users, connects 5,000 hospitals, and advances inclusive medical intelligence.

As a major technological achievement of Ant Group, the BaiLing model, with its dual trillion‑parameter models (Ling‑1T and Ring‑1T), cutting‑edge architecture, and deep integration with domestic compute, showcases China’s competitiveness in AGI research. The World Internet Conference · Wuzhen Summit not only serves as a global platform for internet innovation but also highlights Chinese enterprises’ solid progress in AI technology and its inclusive applications.

Tags: large language models · Open-source · AGI · Model Efficiency
Written by AntTech

Technology is the core driver of Ant's future creation.