GIAC 12th Conference Highlights: AI, Cloud‑Native, and Large‑Model Innovations
The GIAC 12th Global Internet Architecture Conference in Shenzhen gathered experts from Kuaishou, ByteDance, Tencent, iFlytek and Bilibili to discuss large‑model industrialization, generative AI, AI safety, cloud‑native and multimodal technologies, presenting 59 cutting‑edge architectural cases and insightful talks from leading innovators.
GIAC 12th Global Internet Architecture Conference Overview
On June 13‑14, msup and High Availability Architecture co‑hosted the 12th GIAC Global Internet Architecture Conference at the Sheraton Shenzhen Longbo Lin Tianrui. The event brought together technical experts from Kuaishou, ByteDance, Tencent, iFlytek, Bilibili and other leading companies to explore the core theme of large‑model industrialization, focusing on generative AI, AI safety, cloud‑native, and multimodal technologies across 59 frontier architecture cases.
Opening Ceremony and Keynote Speakers
The opening ceremony set out to help architects embrace AIGC, understand the technology directions shaping 2025, and deliver practical insights to more than 500 attendees. Five distinguished speakers shared their expertise.
Ye Lin – Kuaishou
Topic: "Reshaping AI Future: How Kuaishou Builds an Intelligent Productivity Base"
Kuaishou adopts a pragmatic approach to large‑model AI by focusing on deep vertical applications and broad horizontal business enablement rather than heavy investment in pre‑training. The company addresses three major challenges of large‑model deployment: cost reduction, application optimization, and out‑of‑the‑box usability. To achieve this, Kuaishou has built capabilities such as model engine optimization, knowledge‑base enhancement, fine‑tuning tools, and a development platform.
The AI foundation is constructed in stages: first, solidifying infrastructure with efficient liquid‑cooling data centers, self‑designed servers, and high‑performance networking; second, establishing a unified scheduling engine for diverse workloads; third, creating an AI application platform that integrates open‑source model marketplaces, knowledge bases, toolchains, low‑code orchestration, and promotes serverless AI applications for higher density and agility.
Kuaishou embeds large models into both platform and business layers, driving intelligent operations, resource scheduling, AI‑generated anchors, e‑commerce content creation, automated product descriptions, intelligent comment interaction, and content moderation, dramatically improving efficiency.
Ye Lin concluded that 2025 will be the "AI Application Year" and highlighted the need for AI systems with human‑like memory to advance intelligent assistants.
Liu Fuqiang – msup
Topic: "Engineer Culture Drives Organizational Innovation"
Liu introduced the "DRIVE" model and outlined five key practices:
Strategic Decoding and "Technology × Business" Re‑engineering: Reshape technical committees to align technology with business, achieving cost reduction and efficiency gains.
Team Refresh – Linking Strategy to Organizational Capability: Build training camps for key roles, define standards, and create learning paths to enhance team skills.
Innovation Mechanism – Cultivating Continuous Smart Emergence: Introduce external innovations and benchmark global best practices.
Bridging Gaps – From Technical Management to Tech‑Business Management: Develop entrepreneurial leadership and learn from benchmark cases.
Efficiency Focus – Eliminating Waste and Enhancing Customer Value: Build a digital‑efficiency platform to drive cost reduction and high‑efficiency flow.
Chen Lidong – Tencent Cloud
Topic: "Technical Innovation Practices of TencentOS Linux Server OS"
TencentOS, under development since 2010, has matured into a secure, high‑performance operating system supporting Tencent's massive services (WeChat, QQ, games, ads, payments). It has contributed to the open‑source virtualization and OpenJDK communities and has replaced Tencent's previous internal operating systems.
Key achievements include:
Security and stability for financial institutions and critical industries.
Cost‑efficiency improvements: 15‑45% CPU utilization boost, 30% memory cost reduction, 5‑30% server power savings.
AI innovation: qGPU for precise GPU slicing, and the TACO‑LLM acceleration module delivering up to 6.25× lower latency and 2× higher throughput.
Seamless migration of legacy workloads, with 100% RHEL/CentOS compatibility.
Wang Baoping – AI Product Humanization
Topic: "How AI Products Can Have Humanity"
Wang emphasized creating immersive environments (cinemas, churches, offline conferences) that restore users' "attention freedom" instead of competing for attention. He showcased products like Cursor, immersive translation, and YouMind, which lower creation barriers, assist users, and promote a flow state where creation becomes consumption.
Li Chaoming – FastExpress
Topic: "AI + Data + MCP, Redefining API"
Leveraging 15 years of logistics data, FastExpress built a nationwide intelligent logistics graph and introduced the first MCP Server for the industry. The API evolves from simple data exchange to an intelligent, predictive service powered by AI and MCP, enabling autonomous planning and execution of API calls.
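The shift described here — from an API that passively returns data to one that plans and chains its own calls — can be illustrated with a toy sketch. Nothing below is FastExpress's actual MCP Server: the tool names (`query_route`, `estimate_delivery`), the routing table, and the planner are hypothetical stand‑ins showing only the pattern of an agent selecting and composing tool calls, MCP‑style.

```python
# Toy sketch of an MCP-style tool registry plus a planner that chains calls.
# All tool names and data are hypothetical; this is not FastExpress's API.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Tool:
    name: str
    description: str
    fn: Callable[..., dict]


class ToolRegistry:
    """Minimal stand-in for an MCP server's tool listing and invocation."""

    def __init__(self) -> None:
        self._tools: Dict[str, Tool] = {}

    def register(self, name: str, description: str):
        def deco(fn):
            self._tools[name] = Tool(name, description, fn)
            return fn
        return deco

    def call(self, name: str, **kwargs) -> dict:
        return self._tools[name].fn(**kwargs)


registry = ToolRegistry()


@registry.register("query_route", "Look up the shipping route between two cities.")
def query_route(origin: str, destination: str) -> dict:
    # Hypothetical static routing table standing in for the logistics graph.
    routes = {("Shenzhen", "Beijing"): ["Shenzhen", "Wuhan", "Beijing"]}
    return {"route": routes.get((origin, destination), [origin, destination])}


@registry.register("estimate_delivery", "Estimate delivery time in hours for a route.")
def estimate_delivery(route: list) -> dict:
    # Hypothetical model: a flat 12 hours per hop.
    return {"hours": 12 * (len(route) - 1)}


def plan_and_execute(origin: str, destination: str) -> dict:
    """Instead of the caller wiring endpoints together, the 'agent' chains
    tool calls itself: the route lookup feeds the delivery estimate."""
    route = registry.call("query_route", origin=origin, destination=destination)["route"]
    eta = registry.call("estimate_delivery", route=route)["hours"]
    return {"route": route, "eta_hours": eta}


print(plan_and_execute("Shenzhen", "Beijing"))  # → 2 hops at 12 h each = 24 h
```

The point of the pattern is that the planning step lives server‑side with the tools: a client states intent ("when will a Shenzhen→Beijing parcel arrive?") and the service decides which calls to make and in what order.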
Vibe Coding Roundtable
Tim Yang moderated a discussion with Pu Songyang (Alibaba Cloud Front‑End), Zhu Hailin (Auto‑Coder author), and Li Yafei (ClackyAI CEO). They highlighted breakthroughs such as Cursor's "auto‑pilot" mode, Auto‑Coder's large‑scale code generation, and ClackyAI's AI‑driven Python module solutions. Participants agreed that Vibe Coding is reshaping development paradigms: as AI agents take on more of the coding itself, developers shift toward requirement definition, architecture design, and result verification.
Conclusion
The opening ceremony closed with attendees leaving well informed and looking forward to next year's gathering.
High Availability Architecture
