
AR & Lane-Level Navigation Technology Evolution and Practice

In a 2020 Cloud Expo session, Alibaba Amap’s senior map expert Wang Qianwei detailed the evolution and practice of AR‑enabled lane‑level navigation, explaining how real‑time visual perception, cloud‑integrated high‑precision positioning (including MT‑SLAM) and fine‑grained 3‑D road models combine to deliver video‑augmented guidance, lane‑change alerts and traffic‑light status with roughly 30 cm accuracy.


At the 2020 Cloud Expo (Sept 17-18, online), Alibaba Amap co-organized a "Smart Travel" session to share the thinking and practice behind a new-generation travel-life service platform built on DT+AI and a cloud-native architecture. Topics included high-precision maps, high-precision algorithms, intelligent spatiotemporal prediction models, autonomous driving, AR navigation, and lane-level technology.

This article is the second in a series documenting the presentations from that session.

Alibaba Amap senior map technology expert Wang Qianwei presented "AR & Lane-Level Navigation Technology Evolution and Practice", introducing its core technologies, current achievements, and future directions.

The presentation was organized into three parts:

Technical background

Current progress

Core technology

Previously, Amap provided road-level navigation based on global satellite positioning and digital maps. By introducing visual perception systems and lane-level data, it has created a real-scene lane-level navigation product that offers a "what-you-see-is-what-you-get" experience.

Key AR navigation features include video-augmented guidance that aligns closely with the real world, lane-change reminders when the vehicle is in the wrong lane, and real-time traffic-light status alerts, all of which have received positive user feedback.

Core technology breakdown

AR navigation requires three capabilities:

Real‑time perception of the surrounding environment

Lane‑level high‑precision positioning

Fine‑grained expression of road data

Real-time environment perception

Amap adopts low‑cost, widely used visual technology and employs lightweight deep‑learning models that can run in real time on low‑compute devices while maintaining high recognition accuracy. Optimization is performed in three areas:

1. Data: massive multi-source big-data fusion ensures diverse and comprehensive training samples.

2. Algorithms: network-model optimization and feature sharing improve accuracy.

3. Performance: knowledge distillation, model quantization, and multi-task tracking enable smooth operation on limited hardware (a distillation sketch follows below).
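To make the performance techniques above concrete, here is a minimal PyTorch sketch of the knowledge-distillation step: a small student network is trained against both ground-truth labels and a larger teacher's softened outputs. The temperature and loss weighting below are generic textbook choices, not parameters from the talk.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend a soft-target distillation loss with hard-label cross-entropy.

    T and alpha are illustrative defaults, not Amap's training recipe.
    """
    # Soft targets: match the teacher's temperature-scaled distribution.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # T^2 rescales gradients back to the hard-label magnitude
    # Hard targets: standard cross-entropy against ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

A distilled student can then be shrunk further for low-compute devices with standard quantization tooling such as torch.quantization.quantize_dynamic.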

High‑precision positioning

GPS alone suffers from insufficient accuracy and signal interference, especially in urban canyons and adverse weather. Amap therefore proposes a cloud-integrated visual positioning technique: a neural network regresses the device's absolute pose from on-device images matched against cloud-side visual big data, while on-device lane-line and road-edge detection provides relative positioning. Fusing the cloud and device results improves positioning accuracy by roughly an order of magnitude.
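As a toy illustration of why this fusion helps, the sketch below combines a noisy absolute estimate (standing in for the cloud-regressed pose) with a more precise relative lane-line measurement using a single Kalman-style update. The 1-D simplification and the variance values are assumptions for illustration, not Amap's published fusion scheme.

```python
def fuse_lateral(abs_lateral, abs_var, lane_offset, lane_var):
    """One Kalman-style update: combine two noisy estimates of the
    vehicle's lateral position, weighting each by its confidence."""
    gain = abs_var / (abs_var + lane_var)  # trust the lower-variance source more
    fused = abs_lateral + gain * (lane_offset - abs_lateral)
    fused_var = (1.0 - gain) * abs_var
    return fused, fused_var

# Example: the cloud pose says 1.2 m from lane center (sigma ~0.5 m); the
# on-device lane-line detector says 0.9 m (sigma ~0.1 m). The fused estimate
# leans toward the more precise visual measurement.
pos, var = fuse_lateral(1.2, 0.5**2, 0.9, 0.1**2)
print(f"fused lateral position: {pos:.2f} m (variance {var:.4f})")
```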

When network connectivity is unavailable, Amap employs Multi-Source Tightly-Coupled SLAM (MT-SLAM), which fuses low-cost GPS, inertial, and visual sensors to achieve high-precision pose estimation at low cost. Relative position accuracy reaches 30 cm in over 82% of cases.
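MT-SLAM itself is not published in detail, but the general shape of a multi-sensor fusion loop can be sketched with a simple, loosely coupled extended-Kalman-filter stand-in: inertial data propagates the state at high rate, and GPS or visual observations correct the accumulated drift. The state layout, noise values, and linear measurement models below are illustrative assumptions; a tightly coupled system would jointly optimize raw measurements instead.

```python
import numpy as np

class FusionEKF:
    def __init__(self):
        self.x = np.zeros(4)  # state: [px, py, vx, vy]
        self.P = np.eye(4)    # state covariance

    def predict(self, accel, dt):
        """IMU propagation: integrate acceleration into velocity and position."""
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt
        self.x = F @ self.x
        self.x[2:] += accel * dt
        self.P = F @ self.P @ F.T + np.eye(4) * 0.01  # process noise (assumed)

    def update(self, z, H, R):
        """Generic correction; z can be a GPS fix or a visual observation."""
        y = z - H @ self.x                    # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P

ekf = FusionEKF()
ekf.predict(np.array([0.2, 0.0]), dt=0.1)              # one 0.1 s inertial step
H_gps = np.hstack([np.eye(2), np.zeros((2, 2))])       # GPS observes position only
ekf.update(np.array([0.01, 0.0]), H_gps, np.eye(2) * 4.0)  # ~2 m GPS noise
```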

The integrated positioning engine combines standard-precision (road-level) and high-precision (lane-level) positioning, outputting both results independently while keeping them correlated, so it can support navigation and autonomous driving across all scenarios.
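One way to picture this dual output is a result object that always carries a road-level fix and, when confidence allows, a correlated lane-level fix referencing the same road link. The field names below are hypothetical, chosen only to illustrate keeping the two outputs independent yet linked; they are not Amap's actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RoadLevelFix:
    link_id: str        # matched road link in the standard-definition map
    offset_m: float     # longitudinal offset along the link
    heading_deg: float

@dataclass
class LaneLevelFix:
    link_id: str            # same link as the road-level fix (kept correlated)
    lane_index: int         # 0 = leftmost lane
    lateral_offset_m: float # signed offset from the lane centerline

@dataclass
class PositioningOutput:
    road: RoadLevelFix                    # always available (standard precision)
    lane: Optional[LaneLevelFix] = None   # only when lane-level confidence is high

def consume(out: PositioningOutput) -> None:
    # Downstream navigation can fall back gracefully when lane data is absent.
    if out.lane is not None:
        print(f"lane {out.lane.lane_index} on link {out.lane.link_id}")
    else:
        print(f"road-level only: link {out.road.link_id}")
```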

Fine‑grained road data expression

With lane-level high-precision positioning and real-time environment perception in place, the next step is to express the standard-precision map data at a finer granularity. Amap builds high-precision road models from SD point data and lane-attribute information, applying intersection segmentation, modeling, and reconstruction algorithms to produce 3-D road models. For real-scene guidance, planned-path information and guidance cues are combined with real-time road-image features and the fused high-precision position to construct the corresponding guidance-line model.
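The final drawing step reduces to classic camera geometry: transform the guidance line's 3-D points into the camera frame using the fused pose, then project them onto the video frame. The pinhole model and the intrinsic and pose values below are assumed for illustration only.

```python
import numpy as np

def project_guidance_line(points_world, R_wc, t_wc, K):
    """points_world: (N, 3) guidance-line vertices in world coordinates.
    R_wc, t_wc: world-to-camera rotation and translation from the fused pose.
    K: 3x3 camera intrinsic matrix. Returns (M, 2) pixel coordinates."""
    pts_cam = (R_wc @ points_world.T).T + t_wc  # world frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]        # keep points in front of the camera
    uv = (K @ pts_cam.T).T
    return uv[:, :2] / uv[:, 2:3]               # perspective divide

# Assumed intrinsics for a 1280x720 camera (focal length 1000 px).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
# A straight guidance line on the road surface, 1.5 m below the camera
# (y points down in camera convention), sampled 5-50 m ahead.
line = np.array([[0.0, 1.5, z] for z in np.arange(5.0, 50.0, 5.0)])
pixels = project_guidance_line(line, np.eye(3), np.zeros(3), K)
```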

The model has been deployed in real projects; the 3‑D lane‑level models closely reflect the real world, and AR navigation guidance lines align with most real‑scene roads.

For more animation demos and detailed content, please read the original article and watch the presentation replay.

Tags: computer vision, mapping, high-precision positioning, AR navigation, lane-level navigation, SLAM
Written by Amap Tech

Official Amap technology account showcasing all of Amap's technical innovations.