Celebrating 20 Years of Linux Kernel Innovation: Highlights from the 20th CLK Conference
The 20th China Linux Kernel (CLK) Conference in Shenzhen gathered nearly 500 developers on‑site and over 170,000 online viewers, showcasing five technical sub‑forums, keynote insights on AI‑driven kernel challenges, RISC‑V adoption, and a 31‑fold rise in Chinese kernel contributors, underscoring the rapid evolution of China’s kernel ecosystem.
On 1 November 2025, the 20th China Linux Kernel (CLK) Conference was held in Shenzhen. Organized by vivo and sponsored by Alibaba Cloud, Huawei, Volcano Engine, Loongson, Ant Open Source, OPPO and Tencent Cloud, the event marked the conference's 20th anniversary under the theme "Stay true to the original, embark on a new journey", focusing on Linux kernel evolution in the AI era.
Attendance reached nearly 500 developers on‑site, while the live‑stream attracted more than 170,000 online viewers.
The conference, initiated by Tsinghua University, Intel, Huawei, Alibaba Cloud, Fujitsu, Digi‑Tech, Tencent Cloud, OPPO, ByteDance, vivo, Ant Group and Loongson, aims to promote open‑source technology and foster community exchange, and has become the most influential Linux kernel summit in China.
Committee chair Lü Huijing and chief mentor Wu Fengguang reviewed the conference's growth: from a small technical seminar at Tsinghua in 2006 to today's flagship event, the number of speakers has increased tenfold, the number of Chinese kernel developers has grown 31-fold, and patch contributions have risen 36-fold. Since 2021, Chinese developers have contributed over 20% of global kernel patches, placing China in the world's leading tier.
Opening remarks by Professor Xia Wen (Harbin Institute of Technology, Shenzhen) highlighted future technical challenges and emphasized that every line of code lays a stronger foundation for the Linux ecosystem.
Keynote speaker Chen Junyan, senior director of vivo’s Software System Architecture Center, presented “Linux Kernel in the AI Agent Era”, analyzing bottlenecks in scheduling, memory, storage and metrics, and stressing the need for low‑latency, high‑concurrency execution environments for AI.
ByteDance engineer He Zhongkun shared “Large‑Scale Kernel Version Migration Practices”, detailing systematic stability building, fallback mechanisms, code integration, release verification, and demonstrating how to achieve “stable and fast” version iteration in massive data‑center clusters.
Alibaba Cloud senior expert Song Zhuo, together with ByteDance engineer Cui Yunhui and Xuantie kernel lead Guo Ren, delivered “Advancing RISC‑V into High‑Performance Data‑Center and Cloud Scenarios”. They reported 240+ RISC‑V‑related patches from Alibaba, discussed RAS, virtualization and performance monitoring, and presented practical experiences from both companies.
Guo Hanjun, chief architect of Huawei’s OS Kernel Lab, introduced “UB Bus OS Support and Key Scenarios”, describing unified addressing, memory pooling and low‑latency communication, and showcasing deployments in high‑performance databases, AI inference and big‑data workloads.
In a round‑table chaired by Liu Ruyi (vivo kernel team lead), six experts from Alibaba Cloud, Huawei, ByteDance, OpenCloudOS, Loongson and OPPO debated AI‑era kernel opportunities and challenges. They agreed that scheduling and memory management are core challenges for AI workloads, that heterogeneous architectures such as RISC‑V and ARM are reshaping kernel design, and that open‑source collaboration remains the cornerstone of progress.
The conference featured five sub‑forums:
Memory Management & Optimization – covering ZRAM multi‑compression, PMR parallel reclamation and ZCACHE asynchronous file compression.
File System & Storage – presenting F2FS Large Folios, XMFS cross‑node pooling, and Zoned Storage performance tuning.
Scheduling Performance & Debugging – discussing EEVDF scheduler optimizations, BPF user‑space tracing and continuous profiling systems.
Hardware Architecture & Heterogeneous Computing – showcasing RISC‑V vector extensions, the UB bus, and LoongArch binary translation.
AI Infrastructure & eBPF Applications – focusing on GD2FS distributed file system, eBPF performance analysis for large models, and GPU profiling.
The event concluded with a special contribution award presented to 17 outstanding experts, reflections on the 20‑year journey, and a pledge to continue advancing China’s Linux kernel technology to new heights.