Industry Insights

How AI/ML is Transforming 5G RAN: From O‑RAN to Self‑Intelligent Networks

The article examines how the massive rollout of 5G creates complex radio‑network challenges and how AI/ML, combined with O‑RAN standards and open‑ecosystem xAPP/rAPP architectures, can accelerate self‑intelligence in wireless networks, improve performance, reduce energy consumption, and pave the way for future large‑model‑driven automation.

AsiaInfo Technology: New Tech Exploration

1. AI/ML in 3GPP RAN standards

AI/ML research in the 3GPP RAN domain started with self-organising networks (SON) and minimisation of drive tests (MDT) in Release 17. Release 18 introduced the FS_NR_AIML_air study item, covering use cases such as CSI-feedback enhancement, beam management, precise positioning and a generic AI/ML framework for the air interface. Subsequent items address model transfer, network automation and AI/ML management. The standards mainly define the impact on interfaces (e.g., A1, O1, O2) rather than concrete implementation details.

2. O‑RAN RIC control loops

O‑RAN defines three hierarchical control loops that close the loop on radio resource management:

Non-Real-Time (Non-RT) RIC: execution time > 1 s; resides at the top of the O-RAN stack as part of the Service Management and Orchestration (SMO) framework.

Near-Real-Time (Near-RT) RIC: execution time 10 ms–1 s; runs alongside the O-CU/O-DU, collects data via the E2 interface, generates configuration policies and issues commands to the radio.

Real-Time (RT) control loop: execution time < 10 ms; embedded in the O-DU, directly configures radio resources.

These loops can coexist but must be coordinated to avoid conflicting configurations.
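The three latency tiers can be sketched as a simple routing rule that assigns a control decision to the loop whose execution-time window covers it. This is an illustration only: the bounds paraphrase the loop definitions above, and `route_to_loop` is a hypothetical helper, not an O-RAN API.

```python
# Illustrative sketch: route a control decision to the O-RAN loop whose
# latency budget covers it. route_to_loop is hypothetical, not part of
# any O-RAN specification.

LOOPS = [
    # (name, lower bound in seconds, upper bound in seconds)
    ("rt",      0.0,   0.010),          # RT loop in the O-DU: < 10 ms
    ("near_rt", 0.010, 1.0),            # Near-RT RIC: 10 ms - 1 s
    ("non_rt",  1.0,   float("inf")),   # Non-RT RIC: > 1 s
]

def route_to_loop(required_latency_s: float) -> str:
    """Pick the loop tier whose execution-time window covers the requirement."""
    for name, lo, hi in LOOPS:
        if lo <= required_latency_s < hi:
            return name
    raise ValueError("latency must be non-negative")

print(route_to_loop(0.005))   # beam-level action -> "rt"
print(route_to_loop(0.2))     # per-UE policy -> "near_rt"
print(route_to_loop(30.0))    # network-wide retune -> "non_rt"
```

In a real deployment the routing decision is made at design time, but making the windows explicit like this is one way to detect a use case that has been placed in the wrong tier.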

O-RAN control loops diagram

3. RAN algorithm taxonomy

Algorithms are grouped into two domains:

Device-domain (L1–L3): resource allocation, modulation and coding scheme (MCS) selection, transmit-power control, and beam management. These algorithms run on the base-station hardware and are tightly coupled to vendor-specific implementations.

Planning-optimisation domain: placement of baseband units, antenna parameter optimisation, and network-wide KPI tuning such as load balancing and mobility management. Most current AI/ML work targets this domain because it is less dependent on proprietary hardware.

RAN algorithm classification table
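The two-domain split can be expressed as a small lookup table. The algorithm names echo the examples above; the `domain_of` helper and the exact labels are hypothetical illustration, not a standardised taxonomy.

```python
# Illustrative encoding of the two-domain taxonomy described above.
# Algorithm names follow the examples in the text; the mapping itself
# is a hypothetical sketch.

TAXONOMY = {
    "device": {            # L1-L3, runs on base-station hardware
        "resource_allocation",
        "mcs_selection",
        "transmit_power_control",
        "beam_management",
    },
    "planning": {          # network-wide, less vendor-dependent
        "bbu_placement",
        "antenna_parameter_optimisation",
        "load_balancing",
        "mobility_management",
    },
}

def domain_of(algorithm: str) -> str:
    """Return which domain an algorithm belongs to, or 'unknown'."""
    for domain, algos in TAXONOMY.items():
        if algorithm in algos:
            return domain
    return "unknown"

print(domain_of("beam_management"))  # "device"
print(domain_of("load_balancing"))   # "planning"
```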

4. Integrated self‑intelligence platform

The platform follows the O‑RAN interface model (A1, O1, O2) and adds five core components:

Flexible data foundation: ingests multi-vendor performance, configuration and alarm data; exposes standardised xDR APIs; supports both the batch and real-time pipelines required by device-domain AI/ML.

Distributed non-RT RIC: provides model training, inference and continuous learning (supervised, unsupervised and reinforcement learning) across the network.

Centralised decision engine: a set of AI agents that translate network state or digital-twin feedback into high-level policies.

Intent-to-policy translation module: maps operator intents (e.g., "reduce energy consumption by 10 %") to concrete configuration actions.

Digital-twin simulation environment: a realistic radio-propagation and protocol simulator used to evaluate reinforcement-learning policies before deployment.

Next-generation wireless self-intelligence platform
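The intent-to-policy step can be sketched as a tiny translation function: a natural-language intent is parsed and expanded into configuration actions. The intent grammar, action names and `translate_intent` helper are all illustrative assumptions; real intent interfaces are far richer.

```python
# Hedged sketch of intent-to-policy translation: an operator intent such
# as "reduce energy consumption by 10 %" becomes concrete configuration
# actions. The grammar and action names are hypothetical.
import re

def translate_intent(intent: str) -> list[dict]:
    """Map a (very small) intent grammar to configuration actions."""
    m = re.search(r"reduce energy consumption by (\d+)\s*%", intent.lower())
    if m:
        target = int(m.group(1))
        return [
            {"action": "enable_cell_sleep", "scope": "low_load_cells"},
            {"action": "set_energy_saving_target", "percent": target},
        ]
    raise ValueError(f"unsupported intent: {intent!r}")

actions = translate_intent("Reduce energy consumption by 10 %")
print(actions[1]["percent"])  # 10
```

The design point is the separation of concerns: the module owns the mapping from goals to actions, while the decision engine and control loops own when and where those actions are applied.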

5. xApp/rApp ecosystem and performance gains

Non-RT RIC applications (rApps) implement macro-level policies such as radio resource management (RRM) and network-wide optimisation, while Near-RT RIC applications (xApps) act on faster timescales. By chaining simple apps (e.g., a traffic-prediction app, an analysis app and an execution app), developers can build complex closed-loop functions with low-code tools.
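The predict-analyse-execute chain can be sketched as three composed functions. The stage functions below are toy placeholders standing in for apps, not a RIC SDK; the load threshold and mock command names are assumptions.

```python
# Sketch of chaining simple apps into a closed loop, as described above:
# predict -> analyse -> execute. All stages are hypothetical placeholders.

def predict_load(history):
    """Toy traffic prediction: next value = mean of recent samples."""
    return sum(history) / len(history)

def analyse(predicted_load, threshold=0.8):
    """Turn a prediction into a recommendation (assumed 0.8 threshold)."""
    return "offload" if predicted_load > threshold else "hold"

def execute(recommendation):
    """Turn a recommendation into a (mock) configuration command."""
    if recommendation == "offload":
        return {"command": "adjust_handover_offset"}
    return {"command": "noop"}

def closed_loop(history):
    return execute(analyse(predict_load(history)))

print(closed_loop([0.9, 0.85, 0.95]))  # {'command': 'adjust_handover_offset'}
print(closed_loop([0.2, 0.3, 0.1]))    # {'command': 'noop'}
```

Because each stage has a narrow contract (samples in, prediction out; prediction in, recommendation out), stages can be swapped independently, which is what makes the low-code chaining described above workable.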

Field trials of multi-cell coordinated weight optimisation using multi-agent reinforcement learning reported:

Coverage increase of 5 percentage points

SINR improvement of 15.4 %

5G traffic uplift of 12.1 %

Massive MIMO antenna optimisation

Energy-saving loops that combine telemetry collection, a load-prediction xApp and a multi-cell energy-saving rApp achieved a 10–12 % reduction in power consumption, saving more than 200 MWh per year in a commercial network.

Wireless network energy saving
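As a back-of-the-envelope consistency check on the savings figure, a fleet of cells with a plausible baseline draw and a 10 % reduction lands in the same range. The fleet size and per-cell power below are illustrative assumptions, not figures from the trial.

```python
# Back-of-the-envelope check of the ">200 MWh/year" figure above.
# Fleet size and per-cell average power are assumed, not reported values.

HOURS_PER_YEAR = 8760

def annual_savings_mwh(n_cells: int, avg_cell_power_kw: float,
                       reduction: float) -> float:
    """Annual energy saved (MWh) for a fractional power reduction."""
    baseline_mwh = n_cells * avg_cell_power_kw * HOURS_PER_YEAR / 1000
    return baseline_mwh * reduction

# e.g. 500 cells at ~0.5 kW average draw, 10 % saving:
print(round(annual_savings_mwh(500, 0.5, 0.10), 1))  # 219.0 MWh/year
```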

6. Large language models for wireless self‑intelligence

Foundation models such as GPT‑4 can be fine‑tuned on domain‑specific wireless data to create expert models capable of planning, decision‑making, memory handling and tool‑calling. These models can act as the brain of the centralized decision engine, accelerating intent translation and policy generation.
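The tool-calling pattern mentioned above reduces to a dispatcher that runs whichever function a model selects. Everything below is a hypothetical sketch: the stub "model output", the tool names and the mock telemetry are assumptions, not any vendor's or model provider's API.

```python
# Minimal sketch of tool-calling for a wireless decision engine. A
# fine-tuned model would emit structured calls like the dict at the
# bottom; here the tools are mocks and no real model is involved.

def query_kpi(cell_id: str) -> dict:
    """Mock telemetry lookup for a cell."""
    return {"cell": cell_id, "prb_utilisation": 0.72}

def set_parameter(cell_id: str, name: str, value: float) -> dict:
    """Mock actuation: pretend to apply a configuration parameter."""
    return {"cell": cell_id, "applied": {name: value}}

TOOLS = {"query_kpi": query_kpi, "set_parameter": set_parameter}

def dispatch(tool_call: dict):
    """Run the tool the model chose; a real system would validate first."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["arguments"])

# A domain-tuned model would produce a structured call like this:
result = dispatch({"name": "query_kpi", "arguments": {"cell_id": "c-17"}})
print(result["prb_utilisation"])  # 0.72
```

The safety-relevant part is the dispatcher boundary: the model only ever names a tool from a fixed registry, so actuation stays behind functions the operator controls.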

7. Implementation considerations

To realise the platform, the following technical aspects are critical:

Data foundation : must support heterogeneous data formats, provide high‑throughput ingestion, and expose standardised xDR APIs for downstream AI/ML pipelines.

Distributed non‑RT RIC : should run on scalable compute clusters (CPU/GPU/ASIC) and expose model versioning, A/B testing and rollback mechanisms.

Digital twin : needs accurate channel‑model emulation (e.g., 3GPP TR 38.901) and protocol‑stack fidelity to evaluate reinforcement‑learning policies safely.

Policy coordination : the three O‑RAN control loops must exchange intent and capability information to avoid conflicting configuration actions.
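One concrete form of the coordination check above is detecting when two loops try to write the same parameter on the same cell before anything is applied. The action schema and loop labels below are illustrative assumptions.

```python
# Sketch of cross-loop conflict detection: flag (cell, parameter) pairs
# targeted by more than one control loop. The action schema is hypothetical.

def find_conflicts(actions: list[dict]) -> set[tuple]:
    """Return (cell, parameter) pairs written by more than one loop."""
    seen, conflicts = {}, set()
    for a in actions:
        key = (a["cell"], a["parameter"])
        if key in seen and seen[key] != a["loop"]:
            conflicts.add(key)
        seen[key] = a["loop"]
    return conflicts

actions = [
    {"loop": "non_rt",  "cell": "c-1", "parameter": "tx_power", "value": 40},
    {"loop": "near_rt", "cell": "c-1", "parameter": "tx_power", "value": 43},
    {"loop": "near_rt", "cell": "c-2", "parameter": "tilt",     "value": 4},
]
print(find_conflicts(actions))  # {('c-1', 'tx_power')}
```

A production coordinator would additionally arbitrate (e.g., by loop priority or intent precedence) rather than merely flag; the sketch covers only detection.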

8. Conclusion

The convergence of open O‑RAN interfaces, a modular xAPP/rAPP ecosystem and a robust, standards‑compliant data foundation enables operators to evolve from manual network‑planning tools to fully autonomous, self‑optimising wireless networks.

Tags: Network Automation, 5G, Industry Insights, AI/ML, Wireless Networks, RAN, O-RAN
Written by

AsiaInfo Technology: New Tech Exploration

AsiaInfo's cutting‑edge ICT viewpoints and industry insights, featuring its latest technology and product case studies.
