How AI‑Native Transforms User Experience Management in Telecom Networks

This article examines how the AI‑Native approach reshapes the AISWare CEM platform by integrating large language models, Retrieval‑Augmented Generation, and atomic capability decomposition to improve user perception, streamline interactions, and enable intelligent diagnostic assistants for telecom operators.

AsiaInfo Technology: New Tech Exploration

Background

The AISWare Customer Experience Management (CEM) platform is being rebuilt using an AI‑Native approach. The goal is to embed large‑language‑model (LLM) capabilities throughout the product lifecycle so that user‑perception tasks such as experience monitoring, root‑cause analysis, and network‑issue localisation become faster, more accurate, and easier to operate.

AI‑Native Design Principles

Interaction method transformation: replace manual text‑box entry and mouse clicks with LLM‑driven conversational interfaces.

Interaction‑flow recommendation: the model suggests the most appropriate workflow (e.g., quality management, perception analysis, smart customer service) based on the user’s intent.

Automatic capability response: context‑aware function calls are generated automatically, lowering the barrier for users to invoke complex analytics.

Enhanced user experience: AI core technologies are tightly integrated into every stage of the user‑perception lifecycle.
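The "automatic capability response" principle is typically realised by exposing platform functions to the LLM as function‑call (tool) schemas. A minimal sketch of what such a schema and a parameter check might look like; the `get_kpi` tool name and its parameters are illustrative assumptions, not the platform's actual API:

```python
# Sketch: a CEM function exposed to the LLM as a callable tool.
# Tool name and parameters are hypothetical, for illustration only.
GET_KPI_TOOL = {
    "name": "get_kpi",
    "description": "Query a network KPI over a time range.",
    "parameters": {
        "type": "object",
        "properties": {
            "kpi_name": {"type": "string", "description": "KPI to query"},
            "start": {"type": "string", "description": "ISO-8601 start time"},
            "end": {"type": "string", "description": "ISO-8601 end time"},
        },
        "required": ["kpi_name", "start", "end"],
    },
}

def validate_call(tool: dict, args: dict) -> list:
    """Return required parameters missing from a model-generated call,
    so the platform can ask the user a follow-up question instead of
    invoking the API with incomplete arguments."""
    return [p for p in tool["parameters"]["required"] if p not in args]
```

With a schema like this, the model emits structured arguments rather than free text, and the platform can validate them before dispatching the call.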

RAG‑Based Knowledge Q&A

To overcome the difficulty of navigating dense product manuals, the platform adopts Retrieval‑Augmented Generation (RAG). A curated, non‑confidential knowledge base is indexed; at query time the system retrieves the passages most relevant to the user’s natural‑language question, supplies them to the LLM as context, and generates a step‑by‑step answer. Prompt engineering instructs the model to render function names as hyperlinks, enabling one‑click navigation to the corresponding UI component.
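The retrieve-then-generate loop can be sketched as follows. This toy scores passages by token overlap where a production system would use vector embeddings, and the knowledge‑base passages are invented for illustration:

```python
# Minimal RAG sketch: retrieve manual passages, then build a prompt that
# instructs the LLM to render function names as hyperlinks.
# Passages and the overlap-based scoring are illustrative stand-ins.

KNOWLEDGE_BASE = [
    {"id": "kpi-01", "text": "Use the KPI Query page to view indicators by time range."},
    {"id": "cmp-07", "text": "Complaint analysis locates the cell serving the user."},
]

def retrieve(question: str, k: int = 1) -> list:
    """Rank passages by shared lowercase tokens with the question."""
    q_tokens = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda p: len(q_tokens & set(p["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Combine retrieved context with the user's question."""
    context = "\n".join(p["text"] for p in retrieve(question))
    return (
        "Answer using only the context below. Render any product function "
        "name as a markdown hyperlink to its UI page.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```

The prompt both grounds the answer in retrieved manual text and carries the hyperlink‑rendering instruction mentioned above.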

RAG‑based CEM knowledge‑Q&A workflow

Atomic Capability Decomposition

All CEM functions are decomposed into atomic capabilities: minimal, independently callable units with explicit boundaries, required parameters, and API signatures. The orchestration pipeline follows four steps:

Select the appropriate atomic capability based on the user’s intent.

Extract the required parameters from the natural‑language request.

Invoke the capability’s REST/gRPC API.

Evaluate the API response and synthesize a user‑friendly answer.

For example, the question “How do I query a KPI?” triggers selection of the GetKPI atomic capability, automatic extraction of the KPI name and time range, an API call to the KPI service, and a formatted answer that includes a direct link to the KPI dashboard.
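The four steps can be sketched end to end. The capability registry, intent keywords, endpoint path, and the regex‑based parameter extraction below are all illustrative stand‑ins for the LLM‑driven versions the article describes:

```python
import re

# Sketch of the four-step orchestration loop. Names and endpoints are
# hypothetical; a real deployment would let the LLM pick the capability
# and extract parameters instead of keyword/regex matching.

CAPABILITIES = {
    "GetKPI": {
        "keywords": {"kpi", "indicator"},
        "endpoint": "/api/kpi/query",  # hypothetical REST path
    },
}

def select_capability(question: str):
    """Step 1: pick the capability whose keywords match the request."""
    tokens = set(re.findall(r"\w+", question.lower()))
    for name, cap in CAPABILITIES.items():
        if cap["keywords"] & tokens:
            return name
    return None

def extract_params(question: str) -> dict:
    """Step 2: pull concrete parameters out of the natural-language text."""
    m = re.search(r"kpi ['\"]?(\w+)", question.lower())
    return {"kpi_name": m.group(1)} if m else {}

def invoke(name: str, params: dict) -> dict:
    """Step 3: call the capability's API (stubbed out here)."""
    return {"capability": name, "params": params, "status": "ok"}

def answer(question: str) -> str:
    """Step 4: evaluate the response and synthesise a reply with a link."""
    name = select_capability(question)
    if name is None:
        return "Sorry, no matching capability was found."
    result = invoke(name, extract_params(question))
    return f"{name} returned {result['status']} — see [KPI dashboard](/kpi)."
```

Keeping each capability independently callable means new analytics can be registered without touching the selection or answer‑generation logic.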

Atomic capability structure

Diagnostic Assistant

The diagnostic assistant builds on the knowledge‑Q&A layer by adding expert‑curated scenario templates and intent‑recognition logic. When a complaint such as “User 139XXXXXXX reported a network issue on May 20” is received, the assistant:

Identifies the complaint scenario and extracts key entities (phone number, date).

Maps the scenario to a predefined sequence of atomic capabilities (e.g., LocateUser → AnalyzeNetworkFault → GenerateReport).

Executes each capability in order, evaluating intermediate results to decide whether further analysis is required.

Stops when a final diagnosis is produced and returns a concise, actionable summary.
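A minimal sketch of this scenario handling, assuming a regex‑based entity extractor and a hard‑coded scenario table; the real templates are expert‑curated, and the trigger pattern and capability names are illustrative:

```python
import re

# Sketch of scenario-template matching for the diagnostic assistant.
# Scenario table, trigger regex, and entity patterns are assumptions.

SCENARIOS = {
    "network_complaint": {
        "trigger": re.compile(r"network issue", re.IGNORECASE),
        "sequence": ["LocateUser", "AnalyzeNetworkFault", "GenerateReport"],
    },
}

def extract_entities(complaint: str) -> dict:
    """Pull key entities (phone number, date) out of the complaint text."""
    phone = re.search(r"\b1[\dX]{9,10}\b", complaint)   # masked CN mobile number
    date = re.search(r"\b[A-Za-z]+ \d{1,2}\b", complaint)
    return {
        "phone": phone.group(0) if phone else None,
        "date": date.group(0) if date else None,
    }

def diagnose(complaint: str) -> list:
    """Match a scenario, then run its capability sequence in order,
    stopping once the final report step closes the loop."""
    executed = []
    for scenario in SCENARIOS.values():
        if scenario["trigger"].search(complaint):
            for step in scenario["sequence"]:
                executed.append(step)          # stand-in for a real API call
                if step == "GenerateReport":   # final diagnosis produced
                    break
            break
    return executed
```

In the production assistant, the "evaluate intermediate results" step would decide dynamically whether to continue, branch, or stop, rather than following the fixed break condition used here.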

CEM perception‑diagnosis scenario

Implementation Workflow

Atomic capability selection & parameter extraction: the LLM parses the user request, chooses the most relevant atomic capability, and extracts concrete parameters (e.g., KPI ID, time window).

API invocation: the platform calls the capability’s endpoint using the extracted parameters.

Result evaluation & answer generation: the LLM formats the raw data according to a predefined template, adds hyperlinks when appropriate, and returns the final response to the user.
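The final step can be illustrated with a small template‑rendering helper. The template wording, the field names, and the function‑name‑to‑URL map are assumptions for illustration:

```python
# Sketch of result evaluation & answer generation: raw API data is rendered
# through a predefined template, and known function names become hyperlinks.
# Template text and the UI-page map are hypothetical.

UI_PAGES = {"KPI Query": "/cem/kpi-query"}  # function name -> UI URL

ANSWER_TEMPLATE = (
    "The {kpi} averaged {value}{unit} over {window}. "
    "Open {link} to drill down."
)

def linkify(function_name: str) -> str:
    """Render a known function name as a markdown hyperlink."""
    url = UI_PAGES.get(function_name)
    return f"[{function_name}]({url})" if url else function_name

def render_answer(raw: dict) -> str:
    """Fill the answer template from the raw API response."""
    return ANSWER_TEMPLATE.format(
        kpi=raw["kpi"], value=raw["value"], unit=raw["unit"],
        window=raw["window"], link=linkify("KPI Query"),
    )
```

For example, `render_answer({"kpi": "RRC setup success rate", "value": 99.2, "unit": "%", "window": "the last 24 h"})` yields a one‑sentence summary with a clickable link back to the KPI Query page.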

Atomic capability call flow

Future Directions

Perception evaluation of complaint language: use LLMs to generate intelligent response scripts for customer‑service agents based on the complaint text and the user’s network profile.

Task decomposition for complex perception problems: combine RAG with LLM reasoning to automatically break down multi‑step scenarios, select the optimal sequence of atomic capabilities, and close the task loop without manual intervention.

Conclusion

By integrating AI‑Native principles, RAG‑driven knowledge Q&A, and fine‑grained atomic capability orchestration, the CEM platform reduces query latency, eliminates manual navigation across multiple UI screens, and delivers a more intuitive, AI‑assisted experience. Over 20 perception‑focused assistants are already in production, and the roadmap targets fully autonomous capability selection as a step toward autonomous network evolution.

Tags: RAG, Telecom, AI‑Native, Atomic Capabilities, Diagnostic Assistant, Knowledge Q&A, User Experience Management
Written by

AsiaInfo Technology: New Tech Exploration

AsiaInfo's cutting‑edge ICT viewpoints and industry insights, featuring its latest technology and product case studies.
