Tencent's Application of Apache Iceberg for Real‑Time Data Lake Ingestion, Governance, and Query Optimization
This article explains how Tencent leverages Apache Iceberg together with Flink to build a real‑time data lake pipeline, covering data ingestion, Iceberg's snapshot‑based read/write model, compaction and governance services, Z‑order based query optimization, performance results, and future roadmap.
The presentation introduces Tencent's use of Apache Iceberg as a table format that sits between storage and compute, providing ACID semantics, snapshot isolation, and multi‑version data management. Iceberg's key features include snapshot‑based read/write separation, stream‑batch unified writes, engine‑agnostic connectors, and support for table, schema, and partition evolution.
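The snapshot model described above can be illustrated with a minimal sketch. The classes and names here are hypothetical stand-ins, not Iceberg's actual API: each commit appends an immutable snapshot, and a reader pins one snapshot, so concurrent commits never change what that reader sees.

```python
# Hypothetical sketch of snapshot-based multi-version reads (not Iceberg's
# real API): commits append immutable snapshots; readers pin one snapshot.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class Snapshot:
    snapshot_id: int
    data_files: Tuple[str, ...]   # immutable file list for this version

class Table:
    def __init__(self):
        self.snapshots: List[Snapshot] = []

    def commit(self, new_files):
        # A commit produces a NEW snapshot; earlier snapshots stay readable.
        current = self.snapshots[-1].data_files if self.snapshots else ()
        snap = Snapshot(len(self.snapshots) + 1, current + tuple(new_files))
        self.snapshots.append(snap)
        return snap

    def current_snapshot(self):
        return self.snapshots[-1]

table = Table()
table.commit(["f1.parquet"])
reader_view = table.current_snapshot()   # reader pins snapshot 1
table.commit(["f2.parquet"])             # writer commits snapshot 2
assert reader_view.data_files == ("f1.parquet",)   # reader's view unchanged
```

Because old snapshots remain addressable, this same structure also supports time travel and the incremental reads mentioned below.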
Data ingestion is built on Flink, where a Flink Iceberg sink is split into multiple writers and a single committer to ensure that only committed snapshots become visible, enabling incremental reads and real‑time lake ingestion from sources such as Kafka or binlog.
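The writer/committer split can be sketched as follows. This is an illustrative model, not the Flink connector's actual classes: parallel writers emit data files during a checkpoint interval, but only the single committer publishes them, atomically, as one snapshot when the checkpoint completes, so readers never observe a half-written checkpoint.

```python
# Hypothetical sketch of the Flink sink's writer/committer split: files from
# parallel writers stay invisible until the single committer publishes them
# as one atomic snapshot at checkpoint completion.
class Committer:
    def __init__(self):
        self.committed = []   # snapshots visible to readers
        self.pending = []     # files written but not yet committed

    def collect(self, writer_files):
        self.pending.extend(writer_files)   # invisible to readers

    def on_checkpoint_complete(self):
        # Atomically turn all pending files into one visible snapshot.
        self.committed.append(tuple(self.pending))
        self.pending = []

committer = Committer()
committer.collect(["w0-part0.parquet"])   # writer subtask 0
committer.collect(["w1-part0.parquet"])   # writer subtask 1
assert committer.committed == []          # nothing visible mid-checkpoint
committer.on_checkpoint_complete()        # checkpoint barrier completes
assert committer.committed == [("w0-part0.parquet", "w1-part0.parquet")]
```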
To address challenges of continuous commits in streaming jobs—such as small file explosion and metadata growth—Tencent developed a data‑governance platform comprising four services: Compaction (asynchronous small‑file merging), Expiration (snapshot cleanup), Clustering (multi‑dimensional data re‑distribution using Z‑order), and Cleaning (TTL‑based old data removal).
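The Compaction service's core idea, merging many small files into a few larger ones, can be sketched as a greedy bin-packing plan. The thresholds and function names below are illustrative assumptions, not Iceberg's or Tencent's actual defaults:

```python
# Illustrative small-file compaction planner (thresholds are assumptions):
# files below a size threshold are greedily packed into rewrite groups that
# stay under a target size; each group is then rewritten as one larger file.
def plan_compaction(file_sizes_mb, small_threshold_mb=32, target_mb=128):
    small = [s for s in file_sizes_mb if s < small_threshold_mb]
    groups, current, current_size = [], [], 0
    for size in sorted(small, reverse=True):
        if current and current_size + size > target_mb:
            groups.append(current)
            current, current_size = [], 0
        current.append(size)
        current_size += size
    if current:
        groups.append(current)
    return groups

# Six small files collapse into one rewrite group under the 128 MB target;
# the 200 MB and 130 MB files are left alone.
groups = plan_compaction([5, 10, 200, 30, 8, 130, 25, 12])
assert all(sum(g) <= 128 for g in groups)
```

Running this asynchronously keeps file counts bounded without blocking the streaming commit path.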
The overall workflow starts with metric reporting of Iceberg events, rule‑based task scheduling, and execution of compaction, expiration, clustering, and cleaning jobs, all monitored via dashboards. This architecture keeps file counts and metadata size under control while maintaining low‑latency ingestion.
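The rule-based scheduling step can be pictured as metric thresholds mapped to governance jobs. The metric names and threshold values here are invented for illustration; the source does not disclose Tencent's actual rules:

```python
# Illustrative rule engine (metric names and thresholds are assumptions):
# reported Iceberg table metrics are matched against rules, and each match
# schedules the corresponding governance job.
RULES = {
    "compaction": lambda m: m["small_file_count"] > 100,
    "expiration": lambda m: m["snapshot_count"] > 50,
    "clustering": lambda m: m["files_scanned_per_query"] > 1000,
    "cleaning":   lambda m: m["oldest_partition_age_days"] > 30,
}

def schedule(metrics):
    return [job for job, rule in RULES.items() if rule(metrics)]

metrics = {"small_file_count": 250, "snapshot_count": 12,
           "files_scanned_per_query": 4000, "oldest_partition_age_days": 7}
assert schedule(metrics) == ["compaction", "clustering"]
```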
For query performance, the team applies Z‑order indexing based on GeoHash to co‑locate rows that share common query predicates, dramatically reducing the number of files scanned. Benchmarks show a substantial speed‑up for filtered count queries after optimization.
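At its core, Z-order clustering sorts rows by a Morton key that interleaves the bits of multiple columns, so rows close in every dimension receive nearby sort keys and land in the same files, letting per-file min/max statistics prune files for predicates on any of the interleaved columns. A minimal two-column sketch:

```python
# Minimal sketch of a two-dimensional Z-order (Morton) key: interleave the
# bits of x and y so rows close in both dimensions sort near each other.
def z_order_key(x: int, y: int, bits: int = 16) -> int:
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)       # even bit positions from x
        key |= ((y >> i) & 1) << (2 * i + 1)   # odd bit positions from y
    return key

rows = [(3, 5), (0, 0), (7, 1), (2, 2)]
rows.sort(key=lambda r: z_order_key(*r))
# After sorting, a range of Z keys covers a compact region of (x, y) space,
# so files written in this order have tight min/max bounds on both columns.
```

GeoHash, mentioned in the talk, applies the same bit-interleaving idea to latitude/longitude.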
Future plans include enhancing Iceberg's indexing beyond Z‑order, improving incremental read semantics for rewrite operations, extending SQL‑based management, and further integrating the governance platform with more compute engines to provide an end‑to‑end lake‑to‑analysis solution.
DataFunTalk