
Tongyi Xingchen Personalized Large Model: Technical Overview and Applications

This article summarizes the development background of large language models, Alibaba's progression in foundational and personalized AI, the design and capabilities of the Tongyi Xingchen personalized model, its multimodal and agent-based architecture, its industry use cases, and the safety and responsibility measures applied to ensure trustworthy deployment.

DataFunSummit

The talk introduces the background of large model development, tracing the field from early pre‑training models such as BERT and GPT‑1 through the evolution of instruction following, alignment, and plugin ecosystems, and highlighting the shift toward multimodal and agent‑based capabilities.

Alibaba’s AI research began in 2018 with pre‑training task design, leading to the trillion‑parameter multimodal M6 model, the AliceMind series, and later large‑scale models like PLUG and the 10‑trillion‑parameter version of M6. In 2023, these efforts were consolidated into the Tongyi series, including Tongyi Qianwen (base model), Tongyi Tingwu (speech), and Tongyi Wanxiang (image generation).

The focus then shifts to Tongyi Xingchen, a personalized large model aimed at delivering character‑driven, empathetic, and context‑aware interactions. It builds on the Tongyi Qianwen foundation, is fine‑tuned on large volumes of domain‑specific data (e.g., game scripts, character profiles), and supports long contexts (up to 16K tokens), tool usage, and memory mechanisms.
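The interaction between a character profile, retrieved memories, and a bounded context window can be sketched as below. This is an illustrative assembly routine, not the actual Tongyi Xingchen API: the function names, the packing order, and the rough 4‑characters‑per‑token estimate are all assumptions for illustration; only the 16K‑token limit comes from the talk.

```python
from __future__ import annotations

MAX_CONTEXT_TOKENS = 16_000  # the talk cites a context of up to 16K tokens


def estimate_tokens(text: str) -> int:
    """Crude token estimate (~4 characters per token) for budgeting."""
    return max(1, len(text) // 4)


def build_prompt(profile: str, memories: list[str], history: list[str]) -> list[str]:
    """Pack [character profile, memories, newest-fitting chat turns] under the budget."""
    prompt = [profile]  # the persona always leads the context
    budget = MAX_CONTEXT_TOKENS - estimate_tokens(profile)
    for memory in memories:  # long-term memory retrieved for this user
        cost = estimate_tokens(memory)
        if cost <= budget:
            prompt.append(memory)
            budget -= cost
    kept: list[str] = []
    for turn in reversed(history):  # prefer the most recent turns
        cost = estimate_tokens(turn)
        if cost > budget:
            break
        kept.append(turn)
        budget -= cost
    prompt.extend(reversed(kept))  # restore chronological order
    return prompt


profile = "You are Li Bai, a Tang-dynasty poet: playful, fond of wine and verse."
memories = ["User's name is Chen.", "User asked about moon imagery last week."]
history = ["User: Write me a couplet about autumn.", "Li Bai: (composes verse)"]
print(build_prompt(profile, memories, history))
```

The design point is that personalization data (the profile and memories) is budgeted before chat history, so the character stays consistent even when old turns must be dropped.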

Key application scenarios include emotional companions, customizable virtual pets, intelligent NPCs for games, professional services such as historical or psychological consulting, and IP/character replication for commercial use.

Four core technical challenges are identified: (1) transforming a generic model into a personalized one with human‑like traits; (2) enabling efficient collaboration between large and small models via an AI‑agent paradigm; (3) advancing multimodal capabilities to handle text‑image interactions; and (4) ensuring safety, alignment, and responsible AI behavior.

To address these, the ModelScope‑Agent framework integrates large‑model control with a suite of open‑source small models, providing tool retrieval, memory management, and API integration for complex tasks. The mPLUG‑Owl multimodal model introduces a vision encoder, modality‑adaptive modules, and a visual abstractor to fuse image and text information effectively.
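The large‑model‑as‑controller pattern that ModelScope‑Agent embodies can be sketched as a simple loop: the large model plans a tool call, the framework executes it against a registry of small models or APIs, and the observation is written to memory for the next planning step. The tool names, the `plan` stub, and the loop shape below are assumptions for illustration, not ModelScope‑Agent's actual interfaces.

```python
from __future__ import annotations
from typing import Callable

# Tool registry: stand-ins for the small open-source models / APIs the
# large-model controller can invoke.
TOOLS: dict[str, Callable[[str], str]] = {
    "web_search": lambda arg: f"results({arg})",
    "image_caption": lambda arg: f"caption({arg})",
}


def plan(task: str, memory: list[str]) -> tuple[str, str] | None:
    """Stand-in for the LLM controller: pick (tool, argument) or finish.

    A real system would prompt the large model with the task plus the
    accumulated memory; here the policy is hard-coded for illustration.
    """
    if not memory:
        return ("web_search", task)
    if len(memory) == 1:
        return ("image_caption", task)
    return None  # controller decides the task is complete


def run_agent(task: str) -> list[str]:
    memory: list[str] = []  # observation history shared across steps
    while (step := plan(task, memory)) is not None:
        tool, arg = step
        observation = TOOLS[tool](arg)  # framework executes the chosen tool
        memory.append(f"{tool} -> {observation}")
    return memory


print(run_agent("describe the poster"))
```

The separation matters: the large model only decides *what* to call, while execution, memory management, and API integration stay in the framework, which is how a single controller can coordinate many small specialized models.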

Safety and responsibility efforts include a "100 Bottles of Poison for AI" project, which solicits adversarial questions from experts across sociology, psychology, law, and human rights; the model is then detoxified using methods guided by those experts' principles, improving its ethical compliance.

Overall, the presentation outlines the technical roadmap, product vision, and responsible AI practices behind the Tongyi Xingchen personalized large model and its supporting ecosystem.

Tags: multimodal AI, personalization, large language models, AI safety, model agents
Written by

DataFunSummit

Official account of the DataFun community, dedicated to sharing big data and AI industry summit news and speaker talks, with regular downloadable resource packs.