Low-Cost Performance Optimization and Long-Term Control for Super Apps: Gaode Map Case Study
Gaode Map’s low‑cost, long‑term performance strategy for super apps combines an adaptive resource‑scheduling framework, full‑dimension monitoring, and closed‑loop control. It cut startup time by over 70%, reduced memory use by 30% and binary size by 20%, and delivered up to three‑fold speed gains on low‑end devices, all while preserving development efficiency.
As the mobile Internet matures, application technology is diversifying and businesses increasingly consolidate into platforms and super entry points, giving rise to super apps. Rapid business expansion conflicts with limited system resources, so a super app must simultaneously achieve high growth, a smooth experience, stable compatibility, and low resource cost.
Since 2019, Gaode Map APP has launched a series of performance‑optimization projects, conducting deep performance analysis and achieving significant improvements. The optimizations covered startup time, core interaction latency, in‑process memory, and package size, delivering multi‑fold performance gains, especially a 3×+ boost on low‑end devices.
Startup Optimization: Reduced startup time by over 70%, achieving map element display within 2 seconds while maintaining a stable low baseline.
Core Interaction Optimization: Achieved sub‑second response on high‑end devices and under 2 seconds on low‑end devices for search and routing, improving overall interaction smoothness.
In‑Process Memory Optimization: Reduced memory usage by ~30% across all device models, enhancing stability.
Package Size Reduction: Decreased the binary size by 20% on both platforms, improving installation conversion rates.
Performance Optimization Business Background
Gaode Map APP faced gradual performance degradation that was difficult to control, with startup latency visibly increasing over time. Rapid growth of features and users produced a codebase of over one million lines, numerous threads, and thousands of tasks, creating sustained performance pressure.
The environment is highly fragmented: Android spans 11 major versions, iOS spans 14, and device manufacturers apply their own custom modifications. In addition, runtime conditions such as battery level, temperature, and resource contention cause CPU, GPU, and memory availability to fluctuate, making consistent performance across all scenarios challenging.
Solution: Low‑Cost Optimization Migration and Long‑Term Control
To address these challenges, a self‑adaptive resource scheduling framework was built. It continuously senses the runtime environment, makes scheduling decisions, generates optimization strategies, and executes them without requiring additional business code, thus preserving development efficiency.
Adaptive Resource Scheduling Framework
The framework perceives four dimensions of the environment: hardware devices, business scenarios, user behavior, and system state. Based on this perception, it applies various scheduling rules:
Degradation Rule: On low‑end devices or when system alerts (e.g., memory, temperature) fire, high‑cost or low‑priority features are disabled.
Avoidance Rule: High‑priority tasks preempt low‑priority ones; for example, background tasks pause while the user interacts with the search UI.
Pre‑processing Rule: Predictive pre‑loading based on user habits, such as pre‑fetching search results if the user typically clicks after 3 seconds.
Congestion Control Rule: When resources are tight, the system reduces resource requests (e.g., limits thread concurrency) to prevent contention.
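The four rules above can be sketched as a single decision function. This is a minimal illustration, not Gaode's actual implementation; the class names, fields, and thresholds (e.g. the 85% CPU-load cutoff) are all assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the four scheduling rules; names and thresholds are illustrative.
public class SchedulingRules {
    public enum Action { DEGRADE, AVOID, PRELOAD, THROTTLE, NONE }

    public static class Environment {
        boolean lowEndDevice;
        boolean memoryAlert;         // a system memory-pressure alert has fired
        boolean userInteracting;     // e.g. user is typing in the search UI
        double predictedIdleSeconds; // learned from user habits
        double cpuLoad;              // 0.0 - 1.0
    }

    // Returns the actions the framework would take for the current environment.
    public static List<Action> decide(Environment env) {
        List<Action> actions = new ArrayList<>();
        // Degradation rule: disable costly features on weak devices or under alerts.
        if (env.lowEndDevice || env.memoryAlert) actions.add(Action.DEGRADE);
        // Avoidance rule: pause background work while the user interacts.
        if (env.userInteracting) actions.add(Action.AVOID);
        // Pre-processing rule: prefetch when the user typically pauses before acting.
        if (!env.userInteracting && env.predictedIdleSeconds >= 3.0) actions.add(Action.PRELOAD);
        // Congestion-control rule: throttle requests when resources are tight.
        if (env.cpuLoad > 0.85) actions.add(Action.THROTTLE);
        if (actions.isEmpty()) actions.add(Action.NONE);
        return actions;
    }
}
```

The point of centralizing the rules this way is that business code never calls them directly; the framework evaluates the environment and applies the resulting actions.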
Strategy execution involves task control via memory caches, databases, thread pools, and network libraries, as well as hardware tuning (e.g., raising CPU frequency and binding high‑priority threads to big cores).
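Two of these execution mechanisms, limiting thread-pool concurrency under congestion and boosting latency-critical threads, can be sketched as follows. This is a simplified assumption of how such controls might look: the pool sizes are invented, and real big-core binding would require native affinity calls on Android, which are only noted in a comment here.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of strategy execution: shrinking thread-pool concurrency
// under congestion and raising the priority of latency-critical threads.
public class StrategyExecutor {
    private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
            4, 4, 30, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

    // Congestion control: cap concurrency when the scheduler reports contention.
    public void onCongestion(boolean congested) {
        int size = congested ? 1 : 4;
        if (size < pool.getCorePoolSize()) {
            pool.setCorePoolSize(size);      // shrink: core first, then max
            pool.setMaximumPoolSize(size);
        } else {
            pool.setMaximumPoolSize(size);   // grow: max first, then core
            pool.setCorePoolSize(size);
        }
    }

    // High-priority path: on Android this could additionally bind the thread
    // to big cores via native sched_setaffinity; here we only raise priority.
    public Thread runCritical(Runnable task) {
        Thread t = new Thread(task, "critical-path");
        t.setPriority(Thread.MAX_PRIORITY);
        t.start();
        return t;
    }

    public int currentConcurrency() {
        return pool.getMaximumPoolSize();
    }
}
```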
Continuous monitoring feeds back environment status, scheduling decisions, execution records, business impact, and resource consumption to refine future strategies.
Full‑Dimension Resource Monitoring
Given the long and complex technical stack, traditional performance debugging required extensive manual code review. By establishing a code‑module association database and collecting cost and call‑stack data, the system can automatically detect abnormal cost spikes, trace them back to the responsible code, and route the issue to the appropriate owner, enabling parallel problem solving.
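The core of this attribution pipeline, a code-module association map plus a baseline comparison that routes anomalies to an owner, could look roughly like the sketch below. The module names, owners, and the spike factor are hypothetical; the real system presumably also ingests call-stack data, which is omitted here.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Sketch of automated cost-spike attribution: a code-module association map
// routes anomalous costs to an owner. Module and owner names are invented.
public class CostAttribution {
    private final Map<String, String> moduleOwner = new HashMap<>();
    private final Map<String, Double> baselineMs = new HashMap<>();

    public void register(String module, String owner, double baseline) {
        moduleOwner.put(module, owner);
        baselineMs.put(module, baseline);
    }

    // Flags a module whose measured cost exceeds its baseline by the given
    // factor, returning the owner the issue should be routed to.
    public Optional<String> detectSpike(String module, double measuredMs, double factor) {
        Double base = baselineMs.get(module);
        if (base != null && measuredMs > base * factor) {
            return Optional.ofNullable(moduleOwner.get(module));
        }
        return Optional.empty();
    }
}
```

Because each spike is routed directly to a responsible team, many issues can be investigated in parallel instead of funneling through a single performance group.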
Control Process System
A closed‑loop control system is embedded throughout the app lifecycle: pre‑release monitoring catches issues early, integration monitoring validates each change, and online real‑time monitoring with dynamic policy deployment ensures rapid remediation.
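The integration-monitoring stage of such a loop amounts to a regression gate: each change's measured metric is compared against the controlled baseline and blocked if it regresses beyond a tolerance. A minimal sketch, with illustrative numbers (the 2-second startup baseline echoes the figure above; the 5% tolerance is an assumption):

```java
// Sketch of an integration-stage performance gate in the closed loop.
// Baseline and tolerance values are illustrative.
public class PerfGate {
    private final double baselineMs;
    private final double tolerance; // e.g. 0.05 = allow a 5% regression

    public PerfGate(double baselineMs, double tolerance) {
        this.baselineMs = baselineMs;
        this.tolerance = tolerance;
    }

    // Returns true if the measured startup time passes the gate.
    public boolean passes(double measuredMs) {
        return measuredMs <= baselineMs * (1.0 + tolerance);
    }
}
```

Run against every merge, a gate like this keeps the "stable low baseline" stable instead of letting regressions accumulate until the next big optimization project.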
Summary and Reflections
Determination Beats Solutions
Super‑app performance issues involve multiple business lines; top‑down commitment and determination are crucial for successful optimization.
Attacking and Defending Are Both Hard
Continuous iteration demands robust control mechanisms to prevent performance regression, balancing standard rules, tooling, and team alignment.
Performance Optimization Is an Ongoing Journey
Future work includes cloud‑native and edge‑intelligent techniques to personalize performance experiences based on user and device characteristics.
Amap Tech
Official Amap technology account showcasing all of Amap's technical innovations.