
FAST‑LIO2 LiDAR‑IMU SLAM Implementation and Evaluation on Qiji Autonomous Patrol Vehicles

This article presents the technical background of SLAM, explains why GNSS‑based navigation fails in complex urban environments, describes the selection and testing of several LiDAR‑IMU SLAM algorithms—including FAST‑LIO2—on Qiji unmanned vehicles, and details the hardware configuration, algorithmic improvements, experimental workflow, and positioning results achieved in a real‑world patrol project.

Zhengtong Technical Team

SLAM (Simultaneous Localization and Mapping) is a mapping and positioning technique widely used in robotics, autonomous driving, and augmented reality; it fuses data from cameras, LiDAR, and IMUs to build maps and estimate motion in real time.

Conventional GNSS‑based combined navigation systems struggle in tunnels, dense urban canyons, and environments with strong multipath or electromagnetic interference, prompting the need for a LiDAR‑IMU SLAM solution that can provide at least 10 Hz updates with sub‑10 cm accuracy for low‑speed Qiji unmanned vehicles.

Table 1 surveys existing LiDAR SLAM algorithms (Cartographer, hdl_graph_slam, LeGO‑LOAM, FAST‑LIO2). Field tests on Qiji platforms showed that algorithms tightly integrating solid‑state LiDAR and IMU data consistently outperformed the others in pose accuracy, robustness, and real‑time performance, with FAST‑LIO2 achieving the highest overall precision and the broadest compatibility across vehicle models.

FAST‑LIO (Fast LiDAR‑Inertial Odometry) addresses three limitations of earlier methods: degradation in feature‑poor or fast‑moving scenes, high computational load from massive LiDAR feature points, and motion distortion caused by asynchronous laser sampling.

FAST‑LIO mitigates these issues by (1) employing a tightly‑coupled iterative Kalman filter to fuse LiDAR features with IMU measurements, (2) introducing a novel Kalman gain computation that reduces the processing burden while remaining mathematically equivalent to the classic formulation, and (3) adding a backward‑propagation step to compensate for LiDAR scan distortion.
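The gain reformulation in point (2) follows from the matrix inversion lemma: instead of inverting a matrix whose size grows with the number of LiDAR measurements, one inverts matrices of the small, fixed state dimension. A minimal NumPy sketch with illustrative dimensions (not FAST‑LIO's actual state layout) showing the two forms agree:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 200                            # state dim (small) vs. number of LiDAR residuals (large)

H = rng.standard_normal((m, n))          # measurement Jacobian
P = np.eye(n) * 0.1                      # state covariance (n x n)
R = np.eye(m) * 0.01                     # measurement noise covariance (m x m)

# Classic gain: inverts an m x m matrix -- cost grows with the point count.
K_classic = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# FAST-LIO form: inverts only n x n matrices -- cost independent of m.
R_inv = np.linalg.inv(R)
K_fast = np.linalg.inv(H.T @ R_inv @ H + np.linalg.inv(P)) @ H.T @ R_inv

print(np.allclose(K_classic, K_fast))    # the two gains are numerically identical
```

With thousands of LiDAR points per scan, replacing the m×m inversion with an n×n one is what keeps the filter update real‑time on embedded hardware.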

FAST‑LIO2, the upgraded version, further lowers the computational load, improves accuracy, raises the odometry update rate (up to 100 Hz), and supports a variety of solid‑state LiDAR scanning patterns, improving device compatibility and practical usability.
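Part of FAST‑LIO2's cost reduction comes from registering raw scan points directly against small planar patches of the map, rather than extracting edge/plane features first. A simplified sketch of such a point‑to‑plane residual (the real implementation's neighbor search over an incremental k‑d tree and its residual weighting are omitted):

```python
import numpy as np

def point_to_plane_residual(point, neighbors):
    """Residual of `point` against a plane fit to its nearest map neighbors."""
    centroid = neighbors.mean(axis=0)
    # Plane normal = right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(neighbors - centroid)
    normal = vt[-1]
    return float(normal @ (point - centroid))

# A raw scan point 0.05 m above a locally planar patch of the map.
patch = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                  [1.0, 1.0, 0.0], [0.5, 0.5, 0.0]])
print(abs(point_to_plane_residual(np.array([0.5, 0.5, 0.05]), patch)))  # ≈ 0.05
```

Residuals like this, stacked over all scan points, form the measurement vector that the iterated Kalman filter minimizes at each update.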

The Tianjin Taida Street patrol project equips each unmanned vehicle with four Livox Mid‑70 solid‑state LiDARs (50 Hz IMU, 10 Hz point clouds) and an NVIDIA Jetson Xavier NX edge computing platform (21 TOPS INT8 AI performance, 6‑core Carmel CPU, Volta GPU with 384 CUDA cores and 48 Tensor cores, 2 NVDLA accelerators, 8 GB LPDDR4x memory). SLAM processing runs on the secondary compute node to preserve resources for the other autonomous‑driving modules.

SLAM positioning workflow: a start point is set before a GNSS‑degraded segment; the vehicle then records its SLAM pose at 10 Hz using the hdl_localization algorithm until it reaches the end point, where GNSS coverage resumes. The resulting map and pose data are visualized in RViz, demonstrating reliable real‑time localization while GNSS is unavailable.
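A 10 Hz pose stream like this is commonly exported for offline accuracy evaluation. A small sketch (with made‑up pose values) writing poses in the standard TUM trajectory format — timestamp, translation, quaternion — which tools such as evo can compare against a GNSS reference:

```python
# Hypothetical poses logged during the SLAM run: (t, x, y, z, qx, qy, qz, qw).
poses = [
    (0.0, 0.00, 0.00, 0.0, 0.0, 0.0, 0.0, 1.0),
    (0.1, 0.12, 0.01, 0.0, 0.0, 0.0, 0.0, 1.0),  # ~10 Hz spacing
]

# One whitespace-separated line per pose, as the TUM format expects.
lines = ["{:.6f} {:.3f} {:.3f} {:.3f} {:.6f} {:.6f} {:.6f} {:.6f}".format(*p)
         for p in poses]
print("\n".join(lines))
```

Logging in a standard format makes it straightforward to quantify drift over the GNSS‑degraded segment once the vehicle regains a ground‑truth reference at the end point.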

Tags: SLAM, LiDAR, sensor fusion, mapping, robotics, autonomous vehicles, FAST-LIO2
Written by Zhengtong Technical Team

How do 700+ nationwide projects deliver quality service? What inspiring stories lie behind dozens of product lines? Where is the efficient solution for tens of thousands of customer needs each year? This is Zhengtong Digital's technical practice sharing—a bridge connecting engineers and customers!
