Plato: Tencent’s Open‑Source Engine Cutting Billion‑Node Graph Jobs to Minutes
Plato, the high‑performance graph computing framework newly open‑sourced by Tencent’s TGraph project, delivers industry‑leading speed and memory efficiency on billion‑node social network graphs. It achieves minute‑level processing with as few as ten servers and supports a wide range of graph algorithms and graph learning tasks.
Introduction
Tencent’s TGraph team has open‑sourced the high‑performance graph computing framework Plato, reaching industry‑leading performance and bringing ultra‑large‑scale graph computation into the minute‑level era.
Significance
Graphs are an effective way to represent and analyze massive data, essential in social networks, recommendation systems, cybersecurity, text retrieval, and biomedical research. Computing performance is a key factor for successful graph mining, especially for Tencent’s social network with over a billion nodes, where existing distributed graph frameworks cannot meet the required speed or resource constraints.
Plato reduces algorithm execution time from days to minutes, improves performance by one to two orders of magnitude, and lowers resource requirements to as few as ten servers, creating substantial business value for core services such as WeChat.
Overview
Plato’s open‑source repository: https://github.com/tencent/plato
Key contributions:
Achieves performance one to two orders of magnitude higher than Spark GraphX, enabling algorithms that previously took days to finish in hours or minutes.
Reduces memory consumption by 1–2 orders of magnitude, allowing large‑scale graph jobs on modest clusters (≈10 servers) instead of hundreds.
Originated from Tencent’s massive social‑network graphs but adapts to other graph types, advancing the state of ultra‑large‑scale graph computing.
Core Capabilities
Plato provides two main capabilities:
Offline graph computation at Tencent‑scale.
Graph representation learning at Tencent‑scale.
The overall architecture runs on generic x86 clusters (Kubernetes, YARN, etc.) and supports multiple file systems such as HDFS and Ceph.
The core is Plato’s adaptive graph computation engine, offering sparse/dense adaptive modes, shared‑memory mode, and pipeline mode, along with graph partitioning, representation, and multi‑level communication scheduling.
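Plato itself is a distributed C++ engine, but the sparse/dense adaptive idea can be sketched on a single machine. In a direction‑optimizing BFS (the same push/pull principle), the engine pushes from the active frontier while it is small, and switches to pulling over all unvisited vertices once the frontier grows large. The function and parameter names below are illustrative, not Plato's API:

```python
def bfs_adaptive(adj, src, switch_ratio=0.05):
    """Direction-optimizing BFS over an undirected adjacency list:
    push (sparse mode) while the frontier is small, pull (dense mode)
    once it exceeds switch_ratio of all vertices."""
    n = len(adj)
    dist = [-1] * n
    dist[src] = 0
    frontier = [src]
    level = 0
    while frontier:
        level += 1
        nxt = []
        if len(frontier) < switch_ratio * n:
            # sparse/push mode: expand only from active vertices
            for u in frontier:
                for v in adj[u]:
                    if dist[v] == -1:
                        dist[v] = level
                        nxt.append(v)
        else:
            # dense/pull mode: every unvisited vertex scans its own neighbors
            fs = set(frontier)
            for v in range(n):
                if dist[v] == -1 and any(u in fs for u in adj[v]):
                    dist[v] = level
                    nxt.append(v)
        frontier = nxt
    return dist
```

The payoff of the dense pass is that a vertex can stop scanning as soon as it finds one frontier neighbor, which is much cheaper than pushing along every frontier edge when most of the graph is active.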
On top of the engine, Plato supplies layered APIs, algorithm libraries, and solution toolkits that integrate offline results with other machine‑learning pipelines.
Open‑Source Algorithms
Graph Features: tree depth/width, node/edge counts, density, degree distribution, N‑order degree, HyperANF.
Node Centrality: KCore, PageRank, Closeness, Betweenness.
Connectivity & Community Detection: Connected‑Component, LPA, HANP.
Graph Representation Learning: Node2Vec‑RandomWalk, Metapath‑RandomWalk.
Clustering: FastUnfolding.
Other: BFS, co‑occurrence computation.
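Several of the listed algorithms are conceptually simple even though running them at Tencent scale is not. LPA (Label Propagation), for example, repeatedly assigns each node the most frequent label among its neighbors until labels stabilize. A minimal single‑machine sketch (illustrative only, not Plato's distributed implementation):

```python
from collections import Counter

def label_propagation(adj, max_iters=10):
    """Synchronous LPA: every node adopts the most frequent label among
    its neighbors; dense subgraphs converge to a shared community label."""
    labels = list(range(len(adj)))
    for _ in range(max_iters):
        changed = False
        new = labels[:]
        for v, nbrs in enumerate(adj):
            if not nbrs:
                continue
            counts = Counter(labels[u] for u in nbrs)
            # break ties deterministically: highest count, then smallest label
            best = min(counts, key=lambda lab: (-counts[lab], lab))
            if best != labels[v]:
                new[v] = best
                changed = True
        labels = new
        if not changed:
            break
    return labels
```

On two triangles joined by a single edge, the nodes of each triangle end up sharing one label, i.e. two communities are recovered.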
Upcoming Open‑Source Algorithms
Network Embedding: LINE, Word2Vec, GraphVite.
GNN: GCN, GraphSage.
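The core operation of a GCN layer is symmetric‑normalized neighborhood aggregation, H' = ReLU(D̂^(-1/2) Â D̂^(-1/2) H W) with Â = A + I. A minimal NumPy sketch of one such layer, assuming a small dense adjacency matrix (again illustrative, not Plato's implementation):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 @ H @ W).
    A: (n, n) adjacency matrix, H: (n, f_in) features, W: (f_in, f_out) weights."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # degrees including self-loop
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```

Stacking such layers mixes features from progressively larger neighborhoods, which is exactly what makes the aggregation step communication‑heavy on billion‑node graphs.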
Performance Comparison
Plato outperforms mainstream distributed graph frameworks. Benchmarks on PageRank and LPA show Plato is 1–2 orders of magnitude faster than Spark GraphX.
Memory usage is also 1–2 orders of magnitude lower, enabling large‑scale jobs on small clusters.
In real business scenarios, Plato delivers excellent performance on typical Tencent‑scale workloads such as co‑occurrence computation, Node2Vec, LINE, and GraphSage.
Acknowledgments
Thanks to Gemini, KnightKing, and other outstanding graph‑computing frameworks for their contributions to Plato’s engine and algorithms. We hope Plato’s ultra‑large‑scale capabilities benefit developers and researchers in the graph computing community.
WeChat Backend Team
Official account of the WeChat backend development team, sharing their experience in large-scale distributed system development.