Advances in Graph Neural Architecture Search: GASSO, DHGAS, GAUSS, GRACES, G‑RNA and the AutoGL Library
This article surveys recent progress in automated graph machine learning, covering graph neural architecture search techniques such as GASSO, DHGAS, GAUSS, GRACES, and G‑RNA, discusses scalability and robustness challenges, and introduces the open‑source AutoGL library and the NAS‑Bench‑Graph benchmark.
Many complex systems can be represented as graphs (social, biological, information networks), whose diverse sizes and structures pose challenges for graph machine learning.
Graph Neural Networks (GNNs) are the dominant paradigm for graph analysis, with message‑passing GNNs extending convolution to irregular data. Traditional graph ML relies on manually designed, task‑specific architectures, which is labor‑intensive and hard to transfer across tasks — motivating the use of AutoML for graphs.
Graph Neural Architecture Search (Graph NAS) automates the design of GNNs. It consists of a search space (micro, macro, pooling, hyper‑parameters), search strategies (reinforcement learning, evolution, differentiable methods), and performance evaluation strategies (weight sharing, surrogate models).
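To make these three components concrete, here is a toy sketch in pure Python: a micro search space of per‑layer choices, random sampling as the search strategy, and a mocked evaluator standing in for training. The dimension names and options are illustrative, not the exact space of any of the surveyed papers.

```python
import random

# Toy micro search space: per-layer design choices a Graph NAS method explores.
SEARCH_SPACE = {
    "conv": ["gcn", "gat", "sage", "gin"],
    "aggregate": ["sum", "mean", "max"],
    "activation": ["relu", "elu", "tanh"],
}

def sample_architecture(rng):
    """Search strategy (here: random sampling) picks one option per dimension."""
    return {dim: rng.choice(opts) for dim, opts in SEARCH_SPACE.items()}

def evaluate(arch):
    """Performance-evaluation stub; a real system would train the GNN (or
    query a weight-sharing super-network) and return validation accuracy."""
    rng = random.Random(str(sorted(arch.items())))  # deterministic mock score
    return rng.random()

def random_search(trials=20, seed=0):
    """Sample `trials` architectures and keep the best-scoring one."""
    rng = random.Random(seed)
    return max((sample_architecture(rng) for _ in range(trials)), key=evaluate)

best = random_search()
```

Reinforcement‑learning, evolutionary, or differentiable strategies replace `random_search` with a smarter sampler, and weight sharing or surrogate models replace `evaluate` with something far cheaper than full training.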
Graph Structure Construction : Observed graph structures are not necessarily optimal for the downstream task, so the graph structure and the GNN architecture should be optimized jointly. The GASSO algorithm (Graph Architecture Search with Structure Optimization) introduces differentiable edge masks and a three‑level optimization (graph structure, GNN parameters, architecture) to find mutually optimal structures and models.
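The core idea behind a differentiable edge mask can be sketched in a few lines: each edge carries a learnable logit, squashed through a sigmoid into a soft weight in (0, 1), so the structure level of the three‑level loop can be updated by gradient descent. The graph, gradients, and step below are all illustrative mocks, not GASSO's actual implementation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical 3-edge graph; each edge carries a learnable mask logit.
edges = {(0, 1): 0.0, (1, 2): 0.0, (2, 3): 0.0}

def masked_weight(logit):
    """Relax the binary keep/drop decision into a soft weight in (0, 1),
    making the graph structure differentiable."""
    return sigmoid(logit)

def structure_step(edges, grads, lr=0.1):
    """One gradient step on the structure level; `grads` would come from
    back-propagating the task loss through the masked GNN (mocked here)."""
    return {e: logit - lr * grads[e] for e, logit in edges.items()}

grads = {(0, 1): 0.5, (1, 2): -0.5, (2, 3): 0.0}  # illustrative gradients
edges = structure_step(edges, grads)
weights = {e: masked_weight(logit) for e, logit in edges.items()}
```

After each structure step, GNN parameters and the architecture are updated at their own levels, alternating until structure and model are mutually adapted.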
Dynamic Heterogeneous Graphs : DHGAS (Dynamic Heterogeneous Graph Attention Search) addresses time‑varying and multi‑type graphs by using attention mechanisms to capture spatio‑temporal dependencies, defining a parameterization space and a localization space, and employing a one‑shot NAS strategy with a three‑stage training process.
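One way to see why DHGAS needs both a parameterization space and a localization space: the model must decide how many distinct attention parameter sets to allocate (parameterization) and which (node type, time step) slot each one serves (localization). The enumeration below is a hypothetical back‑of‑the‑envelope sketch of that combined space, not DHGAS's actual formulation.

```python
import itertools

# Toy dynamic heterogeneous graph: two node types over three time steps.
node_types = ["user", "item"]
time_steps = [0, 1, 2]

def localization_space(num_param_sets):
    """Each (type, time) slot independently picks one of the shared
    attention parameter sets; return the slots and the number of possible
    assignments, showing how quickly the joint space grows."""
    slots = list(itertools.product(node_types, time_steps))
    return slots, num_param_sets ** len(slots)

slots, space_size = localization_space(num_param_sets=2)
```

Even this tiny example yields 2^6 = 64 assignments, which is why DHGAS relies on a one‑shot super‑network and staged training rather than exhaustive enumeration.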
Scalability : GAUSS (Graph Architecture Search at Scale) tackles billion‑node graphs by jointly sampling architectures and sub‑graphs, using importance sampling, peer learning, and back‑propagation on the sampled super‑network, achieving up to 1000× scale improvement on a single V100 GPU.
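The sub‑graph side of GAUSS's joint sampling can be sketched as degree‑weighted importance sampling followed by taking the induced sub‑graph, which is what the sampled architecture is then trained on. The graph and the degree‑proportional proposal below are illustrative simplifications; the paper's estimator is more involved.

```python
import random

# Toy adjacency lists; a billion-node graph would never fit in memory,
# so training operates on sampled sub-graphs instead.
graph = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2], 4: [5], 5: [4]}

def importance_sample_nodes(graph, k, rng):
    """Importance sampling sketch: weight nodes by degree so highly
    connected nodes are retained more often (one plausible proposal)."""
    nodes = list(graph)
    weights = [len(graph[n]) for n in nodes]
    return set(rng.choices(nodes, weights=weights, k=k))

def induced_subgraph(graph, keep):
    """Restrict the graph to the sampled nodes before running a training
    step of the sampled architecture on it."""
    return {n: [m for m in graph[n] if m in keep] for n in graph if n in keep}

rng = random.Random(0)
sub = induced_subgraph(graph, importance_sample_nodes(graph, k=4, rng=rng))
```

In GAUSS the architecture is sampled alongside the sub‑graph, and gradients from the sub‑graph batch update the shared super‑network, which is what keeps memory bounded on a single GPU.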
Robustness : GRACES customizes GNN architectures for each shifted graph distribution via self‑supervised encoding and prototype‑based architecture selection. G‑RNA introduces a robust search space with edge‑mask operators and a KL‑based robustness metric, optimized by multi‑objective evolutionary search.
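The KL‑based robustness idea in G‑RNA can be illustrated directly: compare a model's prediction distributions before and after a structural perturbation, and prefer architectures whose predictions shift least. The averaging below is a sketch of that intuition with made‑up prediction values; the exact metric follows the paper.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) between two discrete prediction distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def robustness_score(clean_preds, attacked_preds):
    """Average divergence between per-node predictions on the clean and
    perturbed graphs; smaller means the architecture is more robust."""
    kls = [kl_divergence(p, q) for p, q in zip(clean_preds, attacked_preds)]
    return sum(kls) / len(kls)

clean = [[0.9, 0.1], [0.2, 0.8]]            # illustrative 2-class outputs
attacked_mild = [[0.85, 0.15], [0.25, 0.75]]  # predictions barely move
attacked_hard = [[0.4, 0.6], [0.7, 0.3]]      # predictions flip
assert robustness_score(clean, attacked_mild) < robustness_score(clean, attacked_hard)
```

Plugging such a score into a multi‑objective evolutionary search lets G‑RNA trade accuracy against robustness when ranking candidate architectures.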
Across benchmark datasets, experiments show these methods outperforming strong baselines on accuracy while also improving scalability and robustness.
AutoGL Library and Evaluation : AutoGL is the first open‑source library dedicated to automated graph machine learning, offering modules for feature engineering, NAS, hyper‑parameter optimization, model training, and ensemble. NAS‑Bench‑Graph provides a standardized benchmark with 26,206 architectures evaluated on nine datasets, enabling reproducible and efficient Graph NAS research.
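What makes a tabular benchmark like NAS‑Bench‑Graph efficient is that architecture performance is precomputed once, so search algorithms query a table instead of retraining. The snippet below sketches that workflow; the architecture keys, dataset names, and accuracy numbers are invented for illustration and are not real NAS‑Bench‑Graph entries or its actual API.

```python
# Hypothetical precomputed results: (layer-1 op, layer-2 op) -> accuracy.
BENCH = {
    ("gcn", "gat"): {"cora": 0.815, "citeseer": 0.703},
    ("gat", "gin"): {"cora": 0.802, "citeseer": 0.696},
    ("sage", "sage"): {"cora": 0.798, "citeseer": 0.689},
}

def query(arch, dataset):
    """Constant-time table lookup replaces hours of GNN training,
    which is what makes large-scale NAS-method comparison reproducible."""
    return BENCH[arch][dataset]

def best_on(dataset):
    """Any search strategy can be evaluated by how quickly it finds this."""
    return max(BENCH, key=lambda a: BENCH[a][dataset])

assert query(best_on("cora"), "cora") == 0.815
```

Because every method queries the same fixed table, differences in results reflect the search strategy alone rather than training noise or hardware.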
The article concludes with a Q&A discussing industrial deployments (e.g., Alibaba risk control) and the difficulty of a universal GNN architecture for all graph types.
DataFunTalk