Adaptive Universal Generalized PageRank Graph Neural Network (GPR‑GNN): Solving Generality and Over‑Smoothing in Graph Neural Networks
This article presents the Adaptive Universal Generalized PageRank Graph Neural Network (GPR‑GNN), explains the two main limitations of existing GNNs—lack of generality across homophilic and heterophilic graphs and the over‑smoothing problem—and demonstrates through synthetic and real‑world experiments that GPR‑GNN achieves robust node classification while remaining interpretable and parameter‑efficient.
The talk introduces a recent ICLR paper, Adaptive Universal Generalized PageRank Graph Neural Network (GPR‑GNN), authored by Ph.D. students from UIUC. The authors aim to improve graph neural networks (GNNs) for biomedical applications such as gene–disease and drug–disease association prediction.
Background on GNNs: Traditional GNNs stack multiple propagation layers to aggregate neighbor information, achieving strong performance on node classification, graph classification, and link prediction tasks. However, most existing models assume homophily (similar nodes connect) and suffer from two pervasive issues:
Limited generality: they perform poorly on heterophilic graphs, where dissimilar nodes are linked.
Over‑smoothing: deep stacking quickly drives node representations to indistinguishable values, restricting practical depth to 2–4 layers.
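The over‑smoothing effect is easy to reproduce numerically. The toy sketch below uses an assumed 4‑node graph and simple row‑normalized mean propagation (a simplification for illustration, not the paper's exact operator) to show node representations collapsing toward a common value after repeated propagation:

```python
import numpy as np

# Toy illustration of over-smoothing (assumed 4-node graph; the
# row-normalized mean propagation here is a simplification).
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])
A_hat = A + np.eye(4)                          # add self-loops
P = A_hat / A_hat.sum(axis=1, keepdims=True)   # row-normalized propagation

H = np.random.RandomState(0).randn(4, 3)       # random node features

def spread(X):
    # How far node representations are from their common mean.
    return np.linalg.norm(X - X.mean(axis=0))

before = spread(H)
for _ in range(20):                            # 20 propagation steps
    H = P @ H
after = spread(H)                              # much smaller: rows nearly identical
```

After 20 steps every row of H is nearly the same vector, so the nodes have become indistinguishable to any downstream classifier.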
Existing GNN architectures such as GCN, GAT, GraphSAGE, JK‑Net, and GCN‑Cheby follow a stacking‑layer paradigm, which does not resolve the above problems.
Proposed solution – GPR‑GNN: The model consists of a single‑layer MLP for feature extraction followed by K steps of graph propagation. The propagation steps produce representations H_0, …, H_K, which are linearly combined using learnable GPR weights. End‑to‑end training jointly optimizes node feature extraction and graph propagation while requiring only one additional scalar per propagation step, keeping the parameter count low.
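The forward pass described above can be sketched in a few lines of numpy. The shapes, the untrained one‑layer MLP, and the PPR‑style initial weights below are assumptions for illustration; in the actual model the MLP weights and the gamma vector are trained end to end:

```python
import numpy as np

# Minimal sketch of a GPR-GNN-style forward pass (illustrative only).
def gpr_gnn_forward(X, A, W, gamma):
    """X: node features (n, f); A: adjacency (n, n);
    W: one-layer MLP weight (f, c); gamma: K+1 GPR weights."""
    # Symmetrically normalized adjacency with self-loops (as in GCN).
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    S = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

    H = np.maximum(X @ W, 0.0)        # single-layer MLP (ReLU): H_0
    Z = gamma[0] * H                  # gamma_0 * H_0
    for k in range(1, len(gamma)):
        H = S @ H                     # one propagation step -> H_k
        Z = Z + gamma[k] * H          # weighted GPR combination
    return Z

rng = np.random.RandomState(0)
X = rng.randn(5, 4)                       # 5 nodes, 4 features
A = (rng.rand(5, 5) < 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T            # symmetric, no self-loops
gamma = 0.5 ** np.arange(11) * 0.5        # PPR-style init, alpha = 0.5, K = 10
Z = gpr_gnn_forward(X, A, rng.randn(4, 3), gamma)
```

Only the K+1 entries of `gamma` are added on top of the MLP, which is why the parameter overhead stays negligible.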
The authors show that GPR‑GNN can:
Achieve generality by allowing GPR weights to be negative, enabling high‑pass filtering for heterophilic graphs.
Avoid over‑smoothing, because training adaptively shrinks the weights of unnecessary deep propagation steps toward zero.
Remain interpretable: learned weights correspond to coefficients of a polynomial graph filter, which can be examined to understand low‑pass vs. high‑pass behavior.
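This spectral interpretation can be checked directly: the learned weights define the polynomial filter g(λ) = Σ_k γ_k (1 − λ)^k over the normalized‑Laplacian eigenvalues λ ∈ [0, 2]. The weight values below are made up for illustration, but they show how all‑positive weights act as a low‑pass filter while alternating signs act as a high‑pass filter:

```python
import numpy as np

# Filter response of GPR weights: g(lam) = sum_k gamma_k * (1 - lam)**k,
# where lam ranges over normalized-Laplacian eigenvalues in [0, 2].
def filter_response(gamma, lam):
    return sum(g * (1.0 - lam) ** k for k, g in enumerate(gamma))

lam = np.linspace(0.0, 2.0, 201)
low_pass  = filter_response([0.5, 0.25, 0.125, 0.125], lam)    # all positive
high_pass = filter_response([0.5, -0.25, 0.125, -0.125], lam)  # alternating sign

# Positive weights amplify smooth (low-frequency, lam near 0) components...
low_smooth_gt_rough = abs(low_pass[0]) > abs(low_pass[-1])
# ...while alternating signs amplify rough (high-frequency, lam near 2) ones.
high_rough_gt_smooth = abs(high_pass[-1]) > abs(high_pass[0])
```

Inspecting the sign pattern of the trained weights therefore reveals whether the model learned low‑pass (homophilic) or high‑pass (heterophilic) behavior.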
Experiments:
Synthetic data (cSBM): Varying the homophily/heterophily parameter (ϕ) demonstrates that GPR‑GNN maintains high accuracy across both regimes, while traditional GNNs degrade on heterophilic graphs.
Real‑world benchmarks: GPR‑GNN consistently outperforms baseline GNNs on several public datasets, regardless of graph homophily.
Interpretability study: Learned GPR weights are positive on homophilic graphs (low‑pass behavior) and alternate in sign on heterophilic graphs (high‑pass behavior), matching theoretical expectations.
Over‑smoothing test: Initializing the weights to concentrate on the final step yields poor accuracy (~50%); after training, earlier‑step weights grow and the final‑step weight shrinks, raising accuracy to ~99% even with 10 propagation steps.
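The synthetic setting is easy to approximate: a two‑community stochastic block model whose edge probabilities are steered by a homophily parameter ϕ. The function name and parameterization below are illustrative (the paper's cSBM additionally attaches Gaussian node features), but they show how ϕ > 0 yields homophilic graphs and ϕ < 0 heterophilic ones:

```python
import numpy as np

# Rough sketch of a two-community SBM with tunable homophily phi in
# (-1, 1); parameterization is illustrative, not the paper's exact cSBM.
def sbm_two_blocks(n, p_avg, phi, seed=0):
    rng = np.random.RandomState(seed)
    y = np.repeat([0, 1], n // 2)            # community labels
    p_in  = p_avg * (1 + phi)                # within-community edge prob
    p_out = p_avg * (1 - phi)                # cross-community edge prob
    same = (y[:, None] == y[None, :])
    probs = np.where(same, p_in, p_out)
    A = (rng.rand(n, n) < probs).astype(float)
    A = np.triu(A, 1); A = A + A.T           # symmetric, no self-loops
    return A, y

A_homo, y = sbm_two_blocks(100, 0.1, 0.8)     # phi > 0: homophilic
A_hetero, _ = sbm_two_blocks(100, 0.1, -0.8)  # phi < 0: heterophilic
same = (y[:, None] == y[None, :])
homo_frac = A_homo[same].sum() / A_homo.sum()        # mostly within-community edges
hetero_frac = A_hetero[same].sum() / A_hetero.sum()  # mostly cross-community edges
```

Sweeping ϕ from −1 to 1 with such a generator reproduces the homophilic‑to‑heterophilic spectrum on which the models are compared.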
Conclusion and future directions: GPR‑GNN provides a general, over‑smoothing‑resistant, and interpretable GNN framework with few parameters. Future work includes replacing the MLP with more expressive networks, learning GPR weights via attention mechanisms, and extending the architecture to graph‑level representation learning with pooling layers.
DataFunSummit
Official account of the DataFun community, dedicated to sharing big data and AI industry summit news and speaker talks, with regular downloadable resource packs.