Interview on Data Lakehouse: Current Applications, Challenges, and Evolution
This interview with NetEase data‑lake technology manager Ma Jin explains the distinction between data lakes and lakehouses, reviews the evolution of table‑format technologies such as Iceberg, Hudi and Delta Lake, evaluates feature maturity and performance trade‑offs, and discusses systematic versus non‑systematic adoption in enterprises.
The article begins by clarifying that data lakes and data lakehouses are not the same: a data lake focuses on storing structured and unstructured data, while a lakehouse adds queryable, ACID-compliant capabilities on top of that storage through a table-format layer.
It then outlines the evolution of lakehouse technology, noting that the concept originated with Databricks and that the open-source projects each set out to address a different gap: Delta Lake offers tight Spark integration, Apache Hudi focuses on incremental upserts, and Apache Iceberg emphasizes table-format standardization.
The interview assesses the maturity of key lakehouse components: file formats and DML/SQL support are stable; ACID transactions, rollback, and schema evolution are relatively mature; change-data-capture and time travel are of medium maturity; and streaming-batch integration, efficient concurrent updates, and fine-grained concurrency control remain immature.
Feature importance is ranked with ACID, rollback, and concurrency control at the top, followed by change-data-capture, time travel, and schema evolution. The discussion highlights that table-format lakehouses enable "stream-batch integration," allowing near-real-time data processing for AI and BI workloads.
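To make the ranked features concrete, the sketch below is a deliberately simplified, hypothetical in-memory model of how snapshot-based table formats such as Iceberg or Delta Lake support ACID commits, time travel, and rollback; real implementations persist this metadata as files alongside the data, and all names here (`ToyTable`, `Snapshot`) are invented for illustration.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Snapshot:
    """One immutable table version: a version number plus its data files."""
    version: int
    files: tuple


class ToyTable:
    """Toy snapshot log illustrating commit, time travel, and rollback."""

    def __init__(self):
        # Version 0 is the empty table.
        self._snapshots = [Snapshot(0, ())]

    def commit(self, added_files):
        # Each commit appends a new immutable snapshot; readers pinned to
        # older versions are never affected (snapshot isolation).
        head = self._snapshots[-1]
        new = Snapshot(head.version + 1, head.files + tuple(added_files))
        self._snapshots.append(new)
        return new.version

    def read(self, version=None):
        # Time travel: read any historical version by number;
        # None means the latest version.
        if version is None:
            return self._snapshots[-1].files
        return self._snapshots[version].files

    def rollback(self, version):
        # Rollback re-points the table at an older snapshot's file list
        # via a fresh commit; no data files are rewritten or deleted.
        old = self._snapshots[version]
        new = Snapshot(self._snapshots[-1].version + 1, old.files)
        self._snapshots.append(new)
        return new.version


table = ToyTable()
table.commit(["a.parquet"])   # version 1
table.commit(["b.parquet"])   # version 2
print(table.read(1))          # time travel to version 1
table.rollback(1)             # restore version 1's contents as version 3
print(table.read())
```

Because snapshots are append-only, rollback is cheap metadata bookkeeping rather than a data rewrite, which is one reason these features rank as relatively mature in the interview's assessment.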
Two user archetypes are described: systematic users (large teams with extensive data‑platform tooling) and non‑systematic users (small, lightweight teams). Systematic users often avoid lakehouse features like ACID because their workloads are primarily batch‑oriented, while non‑systematic users benefit more from the flexibility and cost advantages.
The article concludes that lakehouses offer clear cost savings and a unified data model, but large enterprises may prioritize extreme query performance (favoring OLAP engines like ClickHouse or Doris) over the modest performance gains of lakehouses. Adoption decisions therefore depend on cost‑benefit analysis, team size, and the need for real‑time capabilities.
DataFunTalk
Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.