
Design and Evolution of Vivo Mall Product System: Architecture, Challenges, and Solutions

This article details the evolution of Vivo's e‑commerce product system from a monolithic design to a modular, high‑performance backend, describing architectural changes, challenges such as stability, scalability, and data consistency, and the technical solutions implemented to address them.


1. Introduction

With rapid user growth, the Vivo official mall v1.0 monolithic architecture showed drawbacks such as bloated modules, low development efficiency, performance bottlenecks, and difficult maintenance. Since 2017, a v2.0 upgrade has been underway, vertically splitting the system by business modules to create independent services that support the main site.

The product module, being the core of the entire flow, suffered from performance degradation due to its increasing size, making a service‑oriented transformation essential.

2. Product System Evolution

The product module was extracted from the mall and turned into an independent product system, gradually becoming a foundational service for the mall, search, membership, marketing, and other subsystems.

Initially, the product system contained many unrelated business modules (e.g., promotional activities, flash‑sale, inventory). To improve extensibility and maintainability, activities and gifts were separated into the promotion system, flash‑sale was isolated as its own service, and a consignment subsystem was created for third‑party product categories.

Inventory management faced issues such as a single quantity field per product, fragmented activity inventory, and reliance on actual stock for both sales and promotions. An inventory center was established to synchronize with ECMS, calculate expected shipping warehouses, and provide low‑stock alerts.

3. Challenges

As the lowest‑level system, the product service must ensure stability, high performance, and data consistency.

3.1 Stability

Avoid single‑node bottlenecks by scaling nodes based on load testing.

Implement business‑level rate limiting and degradation to prioritize core interfaces.

Set reasonable timeouts for Redis and database calls.

Standardize logging and integrate with monitoring and alerting platforms.

Use circuit breakers for external dependencies.
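The circuit-breaker idea above can be sketched in a few lines. This is a minimal, illustrative implementation (not Vivo's actual component): after a configurable number of consecutive failures the breaker opens and rejects calls to the external dependency, then allows a trial call once a reset timeout has elapsed.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after `max_failures` consecutive
    failures, rejects calls while open, and retries after `reset_timeout`."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: dependency call rejected")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

Production systems typically reach for a library (e.g. Resilience4j or Sentinel in the Java ecosystem) rather than hand-rolling this, but the state machine is the same.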

3.2 High Performance

Multi‑level caching is employed: hot data is served from local cache, otherwise from Redis. The database uses a read‑write separation architecture. Rate‑limiting components protect database interfaces from traffic spikes.
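The multi-level read path can be sketched as follows. This is a simplified stand-in (plain dicts play the roles of the local cache and Redis; the real system would use something like Caffeine plus a Redis client), showing the lookup order: local cache, then shared cache, then database, populating the upper levels on the way back.

```python
import time

class TwoLevelCache:
    """Two-level read path: local in-process cache, then a shared cache
    (a dict stands in for Redis here), then the database loader."""

    def __init__(self, shared_cache, db_loader, local_ttl=15.0):
        self.local = {}            # key -> (value, expires_at)
        self.shared = shared_cache
        self.db_loader = db_loader
        self.local_ttl = local_ttl

    def get(self, key):
        hit = self.local.get(key)
        if hit is not None and hit[1] > time.monotonic():
            return hit[0]                      # level 1: local hit
        value = self.shared.get(key)
        if value is None:
            value = self.db_loader(key)        # level 3: database
            self.shared[key] = value           # populate the shared cache
        self.local[key] = (value, time.monotonic() + self.local_ttl)
        return value
```

Because the local level absorbs repeated reads, the database loader runs at most once per key until the entries expire.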

Two solutions were explored for Redis key explosion caused by product list caching:

Solution 1: Iterate over the parameter list, fetching each key individually from Redis.

This reduces memory pressure but increases network calls.

Solution 2: Enhance the Redis component with pipelining (Redis Cluster does not support cross‑slot MGET). Keys are grouped by hash slot and each group is sent as a single batch, yielding over a 50% performance improvement and eliminating the key‑explosion issue.
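The slot-grouping step works because Redis Cluster assigns every key to one of 16,384 hash slots via CRC16 (the CRC-16/XMODEM variant). A sketch of the grouping logic, ignoring hash tags (`{...}`) for simplicity:

```python
from collections import defaultdict

def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the checksum Redis Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of Redis Cluster's 16384 hash slots."""
    return crc16(key.encode()) % 16384

def group_keys_by_slot(keys):
    """Group keys so each group can be fetched in one pipelined batch."""
    groups = defaultdict(list)
    for key in keys:
        groups[key_slot(key)].append(key)
    return dict(groups)
```

Each group maps to a single cluster node, so the client can send one pipelined batch per slot instead of one round trip per key.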

Hotspot traffic (e.g., new product launches) caused Redis node imbalance. Solutions included key hashing, local caching with Caffeine (later replaced by a custom hotspot cache with dynamic detection and cluster broadcast), and short‑lived caches (≤15 seconds).
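The dynamic hotspot detection mentioned above can be sketched as a rolling access counter: keys whose access count exceeds a threshold within a window are promoted to a short-lived local cache (here capped at 15 seconds, matching the short-TTL approach described). All names and thresholds below are illustrative, not Vivo's actual component.

```python
import time
from collections import defaultdict

class HotKeyDetector:
    """Count key accesses in a rolling window; keys exceeding the threshold
    are flagged hot and served from a short-lived local cache (<= 15 s)."""

    def __init__(self, threshold=100, window=1.0, ttl=15.0):
        self.threshold = threshold
        self.window = window
        self.ttl = ttl
        self.counts = defaultdict(int)
        self.window_start = time.monotonic()
        self.hot = {}  # key -> expiry timestamp

    def record(self, key):
        now = time.monotonic()
        if now - self.window_start > self.window:
            self.counts.clear()             # start a fresh counting window
            self.window_start = now
        self.counts[key] += 1
        if self.counts[key] >= self.threshold:
            self.hot[key] = now + self.ttl  # promote to the local hot cache

    def is_hot(self, key):
        expiry = self.hot.get(key)
        if expiry is None or expiry < time.monotonic():
            self.hot.pop(key, None)         # stale entries expire naturally
            return False
        return True
```

In a clustered deployment, a promotion event would additionally be broadcast to peer nodes so every instance serves the hot key locally.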

3.3 Data Consistency

Redis consistency is handled via the Cache‑Aside pattern: reads check cache first, fall back to DB and populate cache; writes update DB then invalidate cache.
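The Cache-Aside read and write paths can be sketched directly from the description above (dicts again stand in for Redis and the database). Note the write path invalidates the cached copy rather than updating it, which avoids writing stale data under concurrent updates.

```python
class CacheAside:
    """Cache-aside: reads fall back to the DB and populate the cache;
    writes update the DB first, then invalidate (not update) the cache."""

    def __init__(self, cache, db):
        self.cache = cache   # dict standing in for Redis
        self.db = db         # dict standing in for the product tables

    def read(self, key):
        value = self.cache.get(key)
        if value is None:
            value = self.db.get(key)
            if value is not None:
                self.cache[key] = value   # repopulate on a cache miss
        return value

    def write(self, key, value):
        self.db[key] = value              # 1. update the database
        self.cache.pop(key, None)         # 2. invalidate the cached copy
```

The next read after a write misses the cache and reloads the fresh value from the database.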

Cross‑database transactions for inventory were initially managed with exception handling and local rollbacks, later replaced by the open‑source Seata distributed transaction framework.
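The initial exception-handling-and-local-rollback approach amounts to manual compensation: if the second database's local transaction fails, undo the first. A sketch with hypothetical operation names (Seata's AT mode automates exactly this coordination):

```python
def transfer_inventory(deduct_sales_stock, add_activity_stock,
                       restore_sales_stock, sku, qty):
    """Manual compensation across two databases: if the second local
    transaction fails, undo the first one and rethrow. Operation names
    are hypothetical; a framework like Seata automates this pattern."""
    deduct_sales_stock(sku, qty)           # local transaction on DB 1
    try:
        add_activity_stock(sku, qty)       # local transaction on DB 2
    except Exception:
        restore_sales_stock(sku, qty)      # compensate DB 1
        raise
```

The weakness of hand-rolled compensation (and a reason to adopt a framework) is that the compensating step itself can fail or be skipped on a crash, leaving the databases inconsistent.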

4. Summary

The article describes how Vivo’s product system was split and refined into a dedicated, high‑performance backend service, outlining the architectural decisions, encountered technical challenges, and the solutions adopted, with future work planned for inventory evolution and distributed transaction handling.

Tags: backend, e-commerce, system architecture, microservices, caching, distributed transactions
Written by Architect

Professional architect sharing high‑quality architecture insights. Topics include high‑availability, high‑performance, high‑stability architectures, big data, machine learning, Java, system and distributed architecture, AI, and practical large‑scale architecture case studies. Open to ideas‑driven architects who enjoy sharing and learning.
