Scalable Graph Neural Architecture Search System (PaSca) – WWW 2022 Best Student Paper
PaSca is a scalable graph neural architecture search system that separates message aggregation from feature updates. It explores a design space of over 150,000 GNN architectures with multi-objective optimization, yielding models that outperform traditional GNNs in accuracy, memory usage, and speed. The system has been open-sourced, deployed at Tencent for risk control, recommendation, and fraud detection, and won the WWW 2022 Best Student Paper award.
The paper "PaSca: a Graph Neural Architecture Search System under the Scalable Paradigm" was jointly developed by Peking University DAIR Lab and Tencent TEG Machine Learning Platform's Angel Graph team, and won the Best Student Paper Award at WWW 2022.
Live broadcast details: theme "Scalable Graph Neural Architecture Search System | WWW2022", June 1, 14:30–16:00; speaker Zhang Wentao (PhD student at Peking University and researcher on Tencent's Angel Graph team).
Problem addressed: low scalability and high modeling cost of existing GNNs on massive graphs.
Contributions:
Proposed a new scalable GNN modeling paradigm (SGAP) that decouples message aggregation from feature updates, reducing communication overhead in distributed settings.
Designed an extensive search space containing over 150,000 GNN architectures.
Implemented an automated multi‑objective neural architecture search system that optimizes prediction performance, memory usage, and training/inference efficiency.
Open‑sourced the system as SGL (GitHub: https://github.com/PKU-DAIR/SGL).
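The key idea behind SGAP can be sketched in a few lines: because message aggregation depends only on the graph structure and not on any trainable parameters, the k-hop propagated features can be precomputed once, and the trainable update step then operates on fixed matrices. The function names, the mean combinator, and the toy graph below are illustrative assumptions, not the SGL API.

```python
# Minimal sketch of the SGAP idea (illustrative, not the SGL API):
# graph-dependent aggregation is precomputed once, so the trainable
# update step sees only fixed feature matrices.
import numpy as np

def normalized_adj(adj):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def sgap_preprocess(adj, features, k):
    """Stage 1: aggregate k-hop messages ahead of training (no parameters)."""
    a_hat = normalized_adj(adj)
    msgs = [features]
    for _ in range(k):
        msgs.append(a_hat @ msgs[-1])
    return msgs  # k+1 propagated feature matrices

def mean_combine(msgs):
    """One simple combination operator: average the per-hop messages."""
    return np.mean(np.stack(msgs), axis=0)

# Toy 4-node path graph with 8-dimensional features.
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
x = np.random.rand(4, 8)
combined = mean_combine(sgap_preprocess(adj, x, k=2))
print(combined.shape)  # (4, 8): ready for a plain, graph-free MLP update step
```

Because the MLP update never touches the adjacency matrix, distributed training needs no per-step neighbor communication, which is the scalability win the paper describes.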
System overview: Input graph data and search objectives are fed into the PaSca engine, which searches the space, evaluates candidates with a distributed validation engine, and outputs scalable GNN models.
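The suggest-and-evaluate loop above can be sketched as follows. The search space, objective values, and random-sampling strategy are toy stand-ins for illustration only; PaSca's real engine uses a much larger space and a distributed validation engine.

```python
# Hypothetical sketch of the search loop: sample candidate designs,
# evaluate each on multiple objectives, and report the most accurate.
# The space and evaluator below are toy stand-ins, not PaSca's engine.
import itertools
import random

SEARCH_SPACE = {
    "agg_hops": [2, 3, 5],           # how many propagation hops to precompute
    "combine": ["mean", "concat", "last"],
    "mlp_layers": [1, 2, 3],
}

def evaluate(config):
    """Stand-in for distributed validation: returns (accuracy, latency_ms)."""
    rng = random.Random(hash(tuple(sorted(config.items()))) % (2**32))
    acc = 0.7 + 0.05 * config["agg_hops"] / 5 + rng.uniform(0, 0.05)
    latency = 10 * config["agg_hops"] + 5 * config["mlp_layers"]
    return acc, latency

def search(n_trials=20):
    """Random search over the full candidate grid."""
    candidates = [dict(zip(SEARCH_SPACE, vals))
                  for vals in itertools.product(*SEARCH_SPACE.values())]
    trials = random.sample(candidates, min(n_trials, len(candidates)))
    return [(cfg, *evaluate(cfg)) for cfg in trials]

results = search()
best = max(results, key=lambda r: r[1])  # most accurate candidate seen
print(best[0])
```

In the real system, the single accuracy criterion here is replaced by multi-objective optimization over accuracy, memory, and efficiency.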
Experimental results on ten real‑world datasets demonstrate:
The SGAP‑based models achieve higher scalability than traditional message‑passing GNNs (e.g., PaSca‑APPNP vs. GraphSAGE).
The searched models (PaSca‑V1/V2/V3) balance multiple objectives and achieve competitive or superior prediction accuracy with shorter training time.
Pareto analysis shows trade‑offs between accuracy and inference latency.
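The trade-off analysis above rests on Pareto dominance: a model is kept only if no other model is at least as good on both objectives and strictly better on one. A small sketch with made-up accuracy/latency numbers:

```python
# Sketch of a Pareto-front filter over (accuracy, latency) pairs.
# Model names and numbers are invented for illustration.
def pareto_front(points):
    """points: list of (name, accuracy, latency_ms); higher accuracy and
    lower latency are better. Returns the non-dominated subset."""
    front = []
    for name, acc, lat in points:
        dominated = any(a >= acc and l <= lat and (a > acc or l < lat)
                        for _, a, l in points)
        if not dominated:
            front.append((name, acc, lat))
    return front

models = [("A", 0.95, 120), ("B", 0.93, 40), ("C", 0.92, 60), ("D", 0.96, 200)]
print(pareto_front(models))  # C is dominated by B; A, B, and D remain
```

Each surviving point represents a different accuracy/latency compromise, which is how the searched PaSca variants can target different deployment constraints.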
Industrial impact: The technology has been deployed in Tencent's internal platforms for financial risk control, video recommendation, social network anti‑fraud, and user similarity recommendation, yielding measurable improvements (e.g., a 1.6% increase in click‑through rate and a 10% increase in fraud detection coverage).
Links: WWW 2022 award page, paper DOI https://dl.acm.org/doi/10.1145/3485447.3511986.
Tencent Cloud Developer