Breaking the AGI Wall: Scaling Laws, Multi‑Agent Collaboration & RL Insights
The Inclusion·外滩大会 forum explored how diminishing returns from massive models demand a shift toward cognitive reasoning, autonomous evolution, multi‑agent coordination, reinforcement learning, high‑quality data, and MoE diffusion models to bridge digital AI with the physical world.
AGI After Hitting the Wall: Cognitive Reasoning, Autonomous Evolution, Multi‑Agent Collaboration
After large‑scale models surpassed one trillion parameters, the marginal gains from further scaling have diminished, just as scaling laws predict, making brute‑force approaches insufficient for achieving general intelligence. Researchers argue that the next evolution requires AI systems that can reason, learn autonomously, and coordinate multiple agents, forming a “plug‑and‑play” AI grid.
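The diminishing-returns dynamic can be sketched with a Kaplan-style power law, where loss falls as a power of parameter count. The constants `n_c` and `alpha` below are illustrative placeholders, not fitted values for any real model:

```python
# Illustrative power-law scaling sketch: loss L(N) = (Nc / N) ** alpha.
# Nc and alpha are hypothetical constants chosen for illustration only.

def loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Predicted loss for a model with n_params parameters."""
    return (n_c / n_params) ** alpha

# Marginal gain from each 10x increase in parameters keeps shrinking.
sizes = [1e9, 1e10, 1e11, 1e12, 1e13]
gains = [loss(a) - loss(b) for a, b in zip(sizes, sizes[1:])]
print([round(g, 4) for g in gains])  # each entry smaller than the last
```

Under any curve of this shape, each additional order of magnitude of parameters buys less improvement than the last, which is the argument for shifting effort toward reasoning, self-learning, and coordination rather than raw scale.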
Insights from Leading Experts
Qiao Yu, chief scientist of the Shanghai AI Lab and vice‑dean of Shanghai Chuangzhi Academy, described the transition from single‑model optimization to system‑level intelligence characterized by deep cognitive reasoning, continuous self‑learning, and physical‑world interaction.
Zhou Jun, vice president of Ant Group’s Platform Technology Division, introduced the “Bailing” large model, emphasizing a virtuous cycle of high‑quality data, rigorous evaluation standards, and efficient algorithms that enable the model to self‑evolve across modalities (text, image, audio, video).
Reinforcement Learning: Turning Agent Orchestration into a Drag‑and‑Drop Game
Reinforcement learning, once the secret weapon behind AlphaGo, is now seen as the key to giving large models “hands and feet.” Wu Yi, assistant professor at Tsinghua University’s Institute of Interdisciplinary Information, presented AReaL, a reinforcement‑learning framework that simplifies multi‑agent workflow orchestration and fosters complex multi‑step reasoning.
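As a generic illustration of the RL loop that frameworks like AReaL build on (this is not AReaL's API, just a textbook REINFORCE update on a two-armed bandit):

```python
import math
import random

random.seed(0)

# Minimal REINFORCE sketch on a two-armed bandit. The reward values
# and learning rate are made up for illustration.
ARM_REWARD = [0.2, 0.8]          # hypothetical success probabilities
logits = [0.0, 0.0]              # policy parameters

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

lr, baseline = 0.1, 0.0
for step in range(2000):
    probs = softmax(logits)
    arm = random.choices([0, 1], weights=probs)[0]
    reward = 1.0 if random.random() < ARM_REWARD[arm] else 0.0
    baseline += 0.01 * (reward - baseline)       # running-average baseline
    advantage = reward - baseline
    # Policy-gradient update: grad log pi(arm) = one_hot(arm) - probs
    for a in range(2):
        grad = (1.0 if a == arm else 0.0) - probs[a]
        logits[a] += lr * advantage * grad

probs = softmax(logits)
print(round(probs[1], 2))  # policy learns to prefer the better arm
```

Multi-agent workflow orchestration layers this same learn-from-reward loop over sequences of agent actions instead of single bandit pulls.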
Bridging the Simulation‑to‑Reality Gap
In safety‑critical domains such as autonomous driving and robotics, real‑world trial‑and‑error is prohibitively costly. Researchers advocate extensive simulation (“crashing enough walls”) before deploying agents in the physical world. Xiong Xi, distinguished researcher at Tongji University’s School of Transportation, demonstrated how virtual traffic scenarios can teach safe driving strategies that transfer to real roads.
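One common way to “crash enough walls” in simulation is domain randomization: vary the simulator's parameters across runs so a policy does not overfit a single setting. A toy sketch, with a made-up braking model standing in for a real traffic simulator:

```python
import random

random.seed(2)

# Hypothetical domain-randomization sketch. The "simulator" is a toy
# stopping-distance formula, not a real traffic-scenario engine.

def simulate_braking(speed, friction):
    """Toy simulator: stopping distance (m) for a given road friction."""
    return speed ** 2 / (2 * friction * 9.81)

def safe_following_distance(speed, trials=1000):
    # Plan for the worst friction seen across randomized simulations.
    worst = 0.0
    for _ in range(trials):
        friction = random.uniform(0.3, 0.9)   # randomized road condition
        worst = max(worst, simulate_braking(speed, friction))
    return worst

d = safe_following_distance(20.0)   # 20 m/s, roughly 72 km/h
print(round(d, 1))
```

A strategy tuned against many randomized virtual conditions has a better chance of holding up on real roads than one tuned against a single idealized scenario.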
High‑Quality Data as a Strategic Asset
Liao Yunfa, deputy dean of China Academy of Information and Communications Technology (CAICT) East China Branch and deputy general manager of Shanghai Gongchuang Center, highlighted the industry‑wide “data‑cultivation” phase, where enterprises must build petabyte‑scale, high‑quality corpora. He outlined a three‑step strategy based on data‑governance standards (DCMM, DSMM) to create robust datasets that become a competitive moat for AI applications.
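As a hypothetical illustration of a single “data-cultivation” step (the DCMM and DSMM standards specify governance processes, not code), a minimal dedup-and-filter pass over a corpus might look like:

```python
import hashlib

# Toy sketch of one corpus-curation stage: exact deduplication plus a
# simple length filter. Real pipelines involve many more stages
# (quality scoring, PII removal, near-dup detection, and so on).

def curate(docs, min_chars=20):
    seen, kept = set(), []
    for doc in docs:
        text = " ".join(doc.split())              # normalize whitespace
        if len(text) < min_chars:
            continue                              # drop low-content docs
        digest = hashlib.sha256(text.encode()).hexdigest()
        if digest in seen:
            continue                              # drop exact duplicates
        seen.add(digest)
        kept.append(text)
    return kept

corpus = ["short",
          "A high-quality training document.",
          "A  high-quality training   document.",
          "Another distinct, sufficiently long document."]
print(curate(corpus))   # two unique, long-enough documents survive
```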
MoE Diffusion Language Model Breakthrough
Ant Group and Renmin University jointly released a MoE‑based diffusion language model (LLaDA‑MoE) trained on roughly 20 trillion tokens. The model surpasses previous dense diffusion models and comparable autoregressive models while offering several times faster inference, and it will be open‑sourced soon.
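The sparse-activation idea behind MoE models can be sketched as a toy top‑k router. The sizes, the router, and the experts below are invented for illustration and do not reflect LLaDA‑MoE's actual architecture:

```python
import math
import random

random.seed(1)

# Toy top-k mixture-of-experts routing: only a few experts run per
# input, which is why MoE models can be large yet cheap at inference.

NUM_EXPERTS, TOP_K, DIM = 8, 2, 4

# Each "expert" is a random linear map; the router is a linear scorer.
experts = [[[random.gauss(0, 1) for _ in range(DIM)] for _ in range(DIM)]
           for _ in range(NUM_EXPERTS)]
router_w = [[random.gauss(0, 1) for _ in range(DIM)]
            for _ in range(NUM_EXPERTS)]

def matvec(m, x):
    return [sum(w * v for w, v in zip(row, x)) for row in m]

def moe_layer(x):
    scores = [sum(w * v for w, v in zip(row, x)) for row in router_w]
    top = sorted(range(NUM_EXPERTS), key=lambda i: scores[i])[-TOP_K:]
    # Softmax over only the selected experts' scores.
    exp_s = [math.exp(scores[i]) for i in top]
    gates = [v / sum(exp_s) for v in exp_s]
    out = [0.0] * DIM
    for g, i in zip(gates, top):          # only TOP_K experts execute
        for d, v in enumerate(matvec(experts[i], x)):
            out[d] += g * v
    return out, top

out, chosen = moe_layer([1.0, -0.5, 0.3, 0.9])
print(len(chosen))  # only 2 of the 8 experts were activated
```

Combining this sparse routing with a diffusion-style (rather than autoregressive) decoding objective is the pairing the LLaDA‑MoE release highlights.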
The conference concluded that the future of AI lies in constructing “intelligent agent cities” where AI operates like power grids or transportation networks, linking the digital and physical worlds.