Practical Experience of an E‑commerce Platform’s Offline and Real‑time Data Warehouse

This article shares the practical architecture, technology selection, implementation details, and evolution of an e‑commerce platform’s offline and real‑time data warehouses, covering data modeling, processing pipelines, system components such as Hive, Spark, Flink, ClickHouse, Doris, and Hudi, and the lessons learned from multiple production deployments.

DataFunTalk

Introduction – The article presents the author’s experience building an e‑commerce data warehouse, including offline, real‑time, and data‑application layers, and summarizes the key challenges and solutions.

Background – The e‑commerce app has over 200 million registered users and 10 million daily active users. Core data consists of structured order data stored in MySQL (1.5 billion orders to date, ~100 k new orders per day) and semi‑structured click‑stream data (~1 billion records per day, 1.16 TB in ORC format). The existing data marts were fragmented, with duplicated logic and poor consistency.

Technical Selection – The company’s central data platform (“data mid‑end”) provides offline clusters (Hive, HDFS, YARN, Spark, Presto) and real‑time clusters (Flink, ClickHouse). Offline jobs run on a YARN‑managed 400‑node fleet, mainly on Spark, with some high‑priority tasks on a customized Presto engine. Real‑time ingestion lands in an rt_ods Hive schema (≈1,000 tables). ClickHouse was adopted for interactive queries because of its speed and stability; the cluster has 28 nodes (14 shards × 2 replicas), each with 84 CPU cores and 256 GB RAM.

Offline Data Warehouse – The warehouse follows a three‑layer model: ODS (raw source), DW (DIM, DWD, DWS), and DM (data‑mart). ODS stores snapshots, incremental data, and history. DW contains dimension tables (DIM), detailed fact tables (DWD), and aggregated tables (DWS) that are synchronized to ClickHouse for self‑service analytics. DM stores data for specific applications in Hive, MySQL, ClickHouse, and HBase. Core e‑commerce tables include order, order‑item, user, product, and click‑stream wide tables.
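The DWD → DWS step described above can be sketched in Spark SQL / HiveQL roughly as follows. The table and column names (`dwd_order_item`, `dim_product`, `dws_order_day`) are illustrative placeholders, not the platform’s actual schema:

```sql
-- Hypothetical daily DWS aggregation: join detailed order-item facts (DWD)
-- with the product dimension (DIM) and write one partition per business date.
INSERT OVERWRITE TABLE dws.dws_order_day PARTITION (dt = '${bizdate}')
SELECT
    o.user_id,
    d.category_id,
    COUNT(DISTINCT o.order_id) AS order_cnt,
    SUM(o.pay_amount)          AS pay_amount
FROM dwd.dwd_order_item o
JOIN dim.dim_product d
  ON o.product_id = d.product_id
WHERE o.dt = '${bizdate}'
GROUP BY o.user_id, d.category_id;
```

A table at this grain (user × category × day) is what would then be synchronized to ClickHouse for self‑service analytics.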

Real‑time Data Warehouse – Real‑time needs (WeChat performance bot, live dashboards, flash‑sale screens) led to a Lambda‑style architecture. Initial implementation used hourly batch jobs that wrote aggregated results to MySQL for Java‑driven reporting. Later, a second‑generation pipeline was built with FlinkSQL jobs that read CDC order logs and click‑stream logs, enrich them with dimension data (product and user), and write to ClickHouse. Real‑time tables use ReplicatedReplacingMergeTree (orders) and ReplicatedMergeTree (click‑stream). Data is retained for three days in ClickHouse.
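A minimal sketch of what the real‑time order table’s ClickHouse DDL might look like; the database, cluster name, ZooKeeper path, and columns are assumptions, not the author’s actual schema. ReplacingMergeTree keeps, after background merges, the row with the highest version per sorting key, which absorbs CDC updates; the TTL clause enforces the three‑day retention mentioned above:

```sql
-- Illustrative DDL for the real-time order table (names are hypothetical).
CREATE TABLE rt.orders ON CLUSTER ck_cluster
(
    order_id     UInt64,
    user_id      UInt64,
    product_id   UInt64,
    pay_amount   Decimal(18, 2),
    order_status UInt8,
    event_time   DateTime,
    _version     UInt64          -- CDC log offset or update timestamp
)
ENGINE = ReplicatedReplacingMergeTree(
    '/clickhouse/tables/{shard}/rt/orders', '{replica}', _version)
PARTITION BY toYYYYMMDD(event_time)
ORDER BY (order_id)
TTL event_time + INTERVAL 3 DAY;
```

The click‑stream table would use plain ReplicatedMergeTree instead, since click events are append‑only and never updated.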

Data Applications – Based on the warehouse, several applications were delivered: self‑service analysis and fixed reports via QuickBI, WeChat performance bot (Java), user‑product profiling stored in HBase, and a flash‑sale big‑screen built with Vue. The self‑service dataset includes a 256‑column order wide table with JSON‑encoded user and product tags, heavily used across the organization.
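A self‑service query over the JSON‑encoded tags might look like the following sketch; the column `user_tags` and its keys are illustrative assumptions, though `JSONExtractString` is a standard ClickHouse function:

```sql
-- Hypothetical self-service query: top cities by GMV, read from the
-- 256-column order wide table with a JSON-encoded user-tag column.
SELECT
    JSONExtractString(user_tags, 'city') AS city,
    COUNT(DISTINCT user_id)              AS buyers,
    SUM(pay_amount)                      AS gmv
FROM dm.order_wide
WHERE dt >= '2024-01-01'
GROUP BY city
ORDER BY gmv DESC
LIMIT 10;
```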

Future Evolution and Streaming‑Batch Unification – After a year of stable operation, the second‑generation real‑time architecture showed several pain points: slow reconciliation between offline and real‑time results, dimension tables refreshed only once a day, divergent SQL implementations of the same logic across batch and streaming, ClickHouse storage bottlenecks, and anomalies triggered by stale dimension data. The third‑generation design adopts a lake‑house approach, using Apache Hudi for streaming writes and reads and Apache Doris as the OLAP engine. Hudi provides incremental ingestion, indexing, and compatibility with the existing HDFS storage, while Doris offers vectorized execution, storage‑compute separation, and flexible indexing. The new architecture unifies the batch and streaming pipelines, eliminates the data lag between them, and improves query performance.
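A unified Hudi table in this design could be declared once in Flink SQL and shared by both streaming and batch readers; the path and table name below are illustrative, while the `WITH` options are standard Hudi Flink connector options. MERGE_ON_READ trades some read-side merging cost for near‑real‑time visibility of streaming upserts:

```sql
-- Sketch of a Flink SQL Hudi table in the lake-house design (names and
-- paths are hypothetical). The primary key drives Hudi's upsert/indexing.
CREATE TABLE hudi_orders (
    order_id   BIGINT,
    user_id    BIGINT,
    pay_amount DECIMAL(18, 2),
    event_time TIMESTAMP(3),
    dt         STRING,
    PRIMARY KEY (order_id) NOT ENFORCED
) PARTITIONED BY (dt) WITH (
    'connector'              = 'hudi',
    'path'                   = 'hdfs:///warehouse/rt/hudi_orders',
    'table.type'             = 'MERGE_ON_READ',
    'read.streaming.enabled' = 'true'   -- downstream jobs read incrementally
);
```

Because the same table serves incremental streaming reads and full batch scans, the divergent batch and streaming SQL implementations collapse into one definition.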

Conclusion – The evolution from a pure batch warehouse to a unified streaming‑batch lake‑house demonstrates how modern big‑data technologies (Flink, Hudi, Doris) can address real‑world e‑commerce analytics requirements, providing low‑latency, high‑throughput, and maintainable data services.

Tags: e-commerce, Big Data, Flink, ClickHouse, data warehouse, Hudi, Doris
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
