Graph Attention Multi‑Layer Perceptron (GAMLP) and Node‑Dependent Local Smoothing (NDLS) for Scalable and Flexible Graph Neural Networks
This presentation introduces Tencent Angel Graph's NDLS and GAMLP techniques that address GNN scalability and flexibility by adaptively selecting propagation depth per node, employing node‑wise feature and label propagation with attention mechanisms, and demonstrating superior performance on large‑scale and sparse graph benchmarks.
In real‑world scenarios, graph data is massive and heterogeneous, posing scalability and flexibility challenges for traditional GNNs.
The talk introduces two Tencent Angel Graph solutions: Node‑Dependent Local Smoothing (NDLS) and Graph Attention Multi‑Layer Perceptron (GAMLP). NDLS determines an optimal propagation depth for each node by comparing its intermediate features to a steady‑state feature, thereby mitigating over‑ and under‑smoothing.
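The per-node depth selection can be sketched as follows. This is a minimal illustration, not the paper's implementation: the true steady-state feature has a closed form in NDLS, but here a long-run propagation is used as a proxy for it, and the function name, `eps`, and `max_hops` are all illustrative choices.

```python
import numpy as np

def ndls_smoothing(adj, X, max_hops=16, eps=0.05):
    """Node-Dependent Local Smoothing (sketch).
    adj: dense (n, n) binary adjacency matrix, X: (n, d) node features.
    Each node i gets its own propagation depth k_i: the smallest number of
    hops after which its feature is within eps of the steady state."""
    n = adj.shape[0]
    # Symmetric normalization with self-loops: A_hat = D^-1/2 (A + I) D^-1/2
    A = adj + np.eye(n)
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # Precompute multi-hop features X, A_hat X, A_hat^2 X, ...
    hops = [X]
    for _ in range(max_hops):
        hops.append(A_hat @ hops[-1])
    H = np.stack(hops)                                 # (max_hops+1, n, d)
    steady = H[-1]                                     # proxy for x^(inf)
    # Per node: smallest k with ||x_i^(k) - x_i^(inf)|| <= eps * ||x_i^(inf)||
    dist = np.linalg.norm(H - steady[None], axis=2)    # (max_hops+1, n)
    thresh = eps * np.linalg.norm(steady, axis=1)      # (n,)
    k_i = (dist > thresh[None]).sum(axis=0).clip(1, max_hops)
    # Smooth each node over its own first k_i hop representations
    out = np.zeros_like(X)
    for i in range(n):
        out[i] = H[: k_i[i] + 1, i].mean(axis=0)
    return out, k_i
```

Because high-degree nodes converge to the steady state in fewer hops, they receive small `k_i` (avoiding over-smoothing), while sparse, low-degree nodes receive larger `k_i` (avoiding under-smoothing).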
GAMLP builds on NDLS by performing node‑wise feature and label propagation, then combines them with attention mechanisms (Recursive Attention and JK‑Attention) before feeding the result to an MLP. The model is fully decoupled, highly scalable, and can be used with any downstream predictor.
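The GAMLP forward pass can be sketched as a node-wise attention over precomputed hop features followed by an MLP. This simplified sketch uses a plain dot-product attention rather than the paper's full Recursive or JK-Attention mechanisms, and every parameter name (`s`, `W1`, `b1`, `W2`, `b2`) is illustrative:

```python
import numpy as np

def softmax(z, axis=0):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def gamlp_forward(hops, s, W1, b1, W2, b2):
    """GAMLP-style forward pass (simplified sketch).
    hops: (K+1, n, d) multi-hop propagated features, precomputed offline
    so that training touches only the MLP (the decoupled design).
    s: (d,) attention scoring vector; W1, b1, W2, b2: 2-layer MLP weights."""
    scores = np.einsum('knd,d->kn', hops, s)   # score of each hop, per node
    w = softmax(scores, axis=0)                # (K+1, n) node-wise attention
    z = np.einsum('kn,knd->nd', w, hops)       # attention-combined feature
    h = np.maximum(z @ W1 + b1, 0.0)           # ReLU hidden layer
    return h @ W2 + b2                         # class logits
```

Because propagation is precomputed and the trainable part is just the attention vector plus an MLP, mini-batch training needs no neighborhood sampling, which is what makes the model scale; the same combined feature `z` can also be handed to any other downstream predictor.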
Extensive experiments on transductive and inductive benchmarks (including OGB datasets) show that both NDLS and GAMLP consistently outperform baselines, achieve deep propagation without over‑smoothing, and maintain competitive training speed even on billion‑node graphs.
The conclusions highlight the methods’ advantages in scalability, flexibility, and efficiency, and discuss remaining limitations such as the lack of end‑to‑end training and the simple averaging of multi‑hop features.
DataFunSummit