
Awsome-GNN-Acceleration


To scale GNNs to extremely large graphs, existing works can be classified into the following types.


Illustration of different graph sampling methods. Red nodes are the nodes selected in the current batch as n(L); blue nodes are sampled in the 1st layer as n(1); green nodes are sampled in the 2nd layer as n(0). n(0) and n(1) form Block(1), and n(1) and n(L) form Block(2) (here L = 2). The node-wise sampling method samples 2 neighbors for each node (e.g., sampling v1 and v6 for v3 in layer 1). The layer-wise sampling method samples 3 nodes for each GNN layer. The graph-wise sampling method samples one sub-graph shared by all layers. A minimal code sketch of each strategy follows its list below.

Node-wise sampling

  1. Inductive Representation Learning on Large Graphs [NIPS 2017] [paper] [code]
  2. Graph Convolutional Neural Networks for Web-Scale Recommender Systems [KDD 2018] [paper] [code]
  3. Stochastic Training of Graph Convolutional Networks with Variance Reduction [ICML 2018] [paper] [code]
  4. Blocking-based Neighbor Sampling for Large-scale Graph Neural Networks [IJCAI 2021] [paper]
  5. Bandit Samplers for Training Graph Neural Networks [NeurIPS 2020] [paper] [code]
  6. Performance-Adaptive Sampling Strategy Towards Fast and Accurate Graph Neural Networks [KDD 2021] [paper] [code]
  7. Hierarchical Graph Transformer with Adaptive Node Sampling [NeurIPS 2022] [paper] [code]
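
As a concrete illustration, here is a minimal, framework-free sketch of GraphSAGE-style fixed-fanout sampling; the adjacency-list format, function names, and toy graph are illustrative assumptions, not code from the papers above.

```python
import random

def sample_block(adj, seeds, fanout):
    """One sampling step: keep at most `fanout` neighbors per seed node.

    Returns the sampled message-flow edges and the next frontier."""
    edges, frontier = [], set()
    for v in seeds:
        for u in random.sample(adj[v], min(fanout, len(adj[v]))):
            edges.append((u, v))          # message flows u -> v
            frontier.add(u)
    return edges, frontier

def node_wise_sampling(adj, batch, fanouts):
    """GraphSAGE-style sampling: one block per GNN layer, outermost first."""
    blocks, seeds = [], set(batch)
    for fanout in fanouts:                # e.g. [2, 2] for a 2-layer GNN
        edges, frontier = sample_block(adj, seeds, fanout)
        blocks.append(edges)
        seeds = seeds | frontier          # sampled nodes become next seeds
    return blocks

# Toy graph as an adjacency list; nodes 0 and 4 form the current batch.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3, 4], 3: [0, 2], 4: [2]}
print(node_wise_sampling(adj, batch=[0, 4], fanouts=[2, 2]))
```

Each returned block is the bipartite message-flow graph consumed by one GNN layer; production samplers (e.g., PyG's NeighborLoader or DGL's NeighborSampler) implement the same logic over compressed sparse storage.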

Layer-wise sampling

  1. FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling [ICLR 2018] [paper] [code]
  2. Adaptive Sampling Towards Fast Graph Representation Learning [NeurIPS 2018] [paper] [code_pytorch] [code_tensorflow]
  3. Layer-Dependent Importance Sampling for Training Deep and Large Graph Convolutional Networks [NeurIPS 2019] [paper] [code]
  4. GRAPES: Learning to Sample Graphs for Scalable Graph Neural Networks [NeurIPS 2023] [paper] [code]
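
A minimal sketch of FastGCN-style layer-wise importance sampling follows; the importance distribution q, proportional to the squared column norms of the normalized adjacency matrix, comes from the FastGCN paper, while the function name and toy graph are illustrative assumptions.

```python
import numpy as np

def layer_wise_sampling(adj_norm, batch, nodes_per_layer, seed=0):
    """FastGCN-style sketch: per GNN layer, draw a fixed number of nodes
    with probability q proportional to the squared column norms of the
    normalized adjacency matrix."""
    rng = np.random.default_rng(seed)
    q = np.square(adj_norm).sum(axis=0)
    q = q / q.sum()
    layers = [np.asarray(batch)]          # layers[0] is the output batch
    for size in nodes_per_layer:
        layers.append(rng.choice(len(q), size=size, replace=False, p=q))
    return layers

# Toy 5-node graph; proper symmetric normalization is omitted for brevity.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 1],
              [0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0]], dtype=float)
print(layer_wise_sampling(A, batch=[0, 3], nodes_per_layer=[3, 3]))
```

To keep the layer estimator unbiased, each sampled node's contribution is rescaled by 1/(size · q[u]) during aggregation, which this sketch omits.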

Graph-wise sampling

  1. Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks [KDD 2019] [paper] [code]
  2. GraphSAINT: Graph Sampling Based Inductive Learning Method [ICLR 2020] [paper] [code]
  3. Large-Scale Learnable Graph Convolutional Networks [KDD 2018] [paper] [code]
  4. Minimal Variance Sampling with Provable Guarantees for Fast Training of Graph Neural Networks [KDD 2020] [paper] [code]
  5. GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings [ICML 2021] [paper] [code]
  6. Decoupling the Depth and Scope of Graph Neural Networks [NeurIPS 2021] [paper] [code]
  7. Ripple Walk Training: A Subgraph-based Training Framework for Large and Deep Graph Neural Network [IJCNN 2021] [paper] [code]
  8. LMC: Fast Training of GNNs via Subgraph Sampling with Provable Convergence [ICLR 2023] [paper] [code]
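
In the same spirit, a minimal Cluster-GCN-flavored sketch: partition the nodes once, then train each step on the subgraph induced by a few randomly chosen clusters. A random partition stands in for the METIS partitioning the paper actually uses; all names are illustrative.

```python
import numpy as np

def make_partitions(num_nodes, num_parts, rng):
    """Cluster-GCN partitions with METIS; a random split stands in here."""
    return np.array_split(rng.permutation(num_nodes), num_parts)

def induced_subgraph(edges, nodes):
    """Keep only edges whose endpoints both lie in the chosen clusters."""
    keep = set(int(n) for n in nodes)
    return [(u, v) for u, v in edges if u in keep and v in keep]

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (2, 5)]
parts = make_partitions(num_nodes=6, num_parts=3, rng=rng)
chosen = np.concatenate([parts[i] for i in rng.choice(3, 2, replace=False)])
print(chosen, induced_subgraph(edges, chosen))    # one training batch
```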


Linear model

Illustration of SGAP for linear models: propagation over the graph is precomputed once, and a simple model is then trained on the propagated features.

Simple model without attention

  1. Simplifying Graph Convolutional Networks [ICML 2019] [paper] [code]
  2. Scalable Graph Neural Networks via Bidirectional Propagation [NeurIPS 2020] [paper] [code]
  3. SIGN: Scalable Inception Graph Neural Networks [ICML 2020] [paper] [code]
  4. Simple Spectral Graph Convolution [ICLR 2021] [paper] [code]
  5. Approximate Graph Propagation [KDD 2021] [paper] [code]
  6. Predict then Propagate: Graph Neural Networks Meet Personalized PageRank [ICLR 2019] [paper] [code]
  7. Combining Label Propagation and Simple Models Outperforms Graph Neural Networks [ICLR 2021] [paper] [code]
  8. Adaptive Propagation Graph Convolutional Network [TNNLS 2020] [paper] [code]
  9. Scaling Graph Neural Networks with Approximate PageRank [KDD 2020] [paper] [code]
  10. Node Dependent Local Smoothing for Scalable Graph Learning [NeurIPS 2021] [paper] [code]
  11. NAFS: A Simple yet Tough-to-Beat Baseline for Graph Representation Learning [ICML 2022] [paper] [code]
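
Most entries above share one trick, seen most plainly in SGC: collapse propagation into a one-off precomputation and train a plain linear model on the result. A minimal PyTorch sketch, with an identity matrix standing in for the normalized adjacency S:

```python
import torch
import torch.nn.functional as F

def sgc_precompute(adj_norm, features, k):
    """SGC: drop the nonlinearities so propagation collapses into a single
    precomputation X' = S^k X, done once before any training starts."""
    x = features
    for _ in range(k):
        x = adj_norm @ x
    return x

n, d, c = 100, 16, 4
adj_norm = torch.eye(n)                   # stands in for D^-1/2 (A+I) D^-1/2
x = sgc_precompute(adj_norm, torch.randn(n, d), k=2)
labels = torch.randint(0, c, (n,))

clf = torch.nn.Linear(d, c)               # training is just logistic regression
opt = torch.optim.Adam(clf.parameters(), lr=0.01)
for _ in range(50):
    opt.zero_grad()
    loss = F.cross_entropy(clf(x), labels)
    loss.backward()
    opt.step()
```

Because the expensive S^k X product never touches the training loop, mini-batching reduces to row slicing and the graph is not needed at all after precomputation.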

Complex model with attention

  1. Scalable and Adaptive Graph Neural Networks with Self-Label-Enhanced Training [arXiv 2021] [paper] [code]
  2. Graph Attention Multi-Layer Perceptron [KDD 2022] [paper] [code]
  3. PaSca: A Graph Neural Architecture Search System under the Scalable Paradigm [WWW 2022] [paper] [code]
  4. Towards Deeper Graph Neural Networks [KDD 2020] [paper] [code]
  5. Node-wise Diffusion for Scalable Graph Learning [WWW 2023] [paper] [code]
  6. Scalable Decoupling Graph Neural Network with Feature-Oriented Optimization [VLDB 2023] [paper] [code]
  7. GRAND+: Scalable Graph Random Neural Networks [WWW 2022] [paper] [code]
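
These methods still precompute the multi-hop features offline; what changes is that the combination across hops is learned with attention. A minimal sketch in the spirit of GAMLP/SAGN (the module and tensor names are illustrative assumptions):

```python
import torch
import torch.nn as nn

class HopAttentionCombine(nn.Module):
    """Learn per-node weights over precomputed k-hop feature matrices."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, hop_feats):                      # [num_hops, n, dim]
        alpha = torch.softmax(self.score(hop_feats).squeeze(-1), dim=0)
        return (alpha.unsqueeze(-1) * hop_feats).sum(dim=0)   # [n, dim]

hops = torch.stack([torch.randn(10, 16) for _ in range(3)])  # X, SX, S^2X
print(HopAttentionCombine(16)(hops).shape)                   # (10, 16)
```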

GNN2GNN

  1. Distilling Knowledge from Graph Convolutional Networks [CVPR 2020] [paper] [code]
  2. TinyGNN: Learning Efficient Graph Neural Networks [KDD 2020] [paper] [code]
  3. On Representation Knowledge Distillation for Graph Neural Networks [TNNLS 2022] [paper] [code]
  4. Graph-Free Knowledge Distillation for Graph Neural Networks [IJCAI 2021] [paper] [code]
  5. Knowledge Distillation as Efficient Pre-training: Faster Convergence, Higher Data-efficiency, and Better Transferability [CVPR 2022] [paper] [code]
  6. Geometric Knowledge Distillation: Topology Compression for Graph Neural Networks [NeurIPS 2022] [paper] [code]
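
The common denominator of these works is a teacher-student objective. A minimal sketch of the classic logit-distillation loss that GNN-to-GNN methods build on; temperature T and mixing weight alpha are the usual hyperparameters, and the toy tensors are illustrative:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Cross-entropy on labels plus KL between temperature-softened logits."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                      # T^2 keeps the gradient scale comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# toy check: 8 nodes, 5 classes
s = torch.randn(8, 5, requires_grad=True)
t = torch.randn(8, 5)
print(kd_loss(s, t, torch.randint(0, 5, (8,))).item())
```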

GNN2MLP

  1. Graph-MLP: Node Classification without Message Passing in Graph [arXiv 2021] [paper] [code]
  2. Graph-less Neural Networks: Teaching Old MLPs New Tricks via Distillation [ICLR 2022] [paper] [code]
  3. Extract the Knowledge of Graph Neural Networks and Go Beyond It: An Effective Knowledge Distillation Framework [WWW 2021] [paper] [code]
  4. Learning MLPs on Graphs: A Unified View of Effectiveness, Robustness, and Efficiency [ICLR 2023] [paper] [code]
  5. VQGraph: Graph Vector-Quantization for Bridging GNNs and MLPs [ICLR 2024] [paper] [code]
  6. Quantifying the Knowledge in GNNs for Reliable Distillation into MLPs [ICML 2023] [paper] [code]
  7. Propagate & Distill: Towards Effective Graph Learners Using Propagation-Embracing MLPs [paper]
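
For GNN-to-MLP distillation the student drops message passing entirely, as in GLNN: it fits the teacher GNN's soft predictions from raw features, so inference needs no graph. A minimal sketch with random tensors standing in for real features and a trained teacher:

```python
import torch
import torch.nn.functional as F

n, d, c = 200, 32, 5
features = torch.randn(n, d)
teacher_probs = torch.softmax(torch.randn(n, c), dim=-1)  # GNN output stand-in
labels = torch.randint(0, c, (n,))

student = torch.nn.Sequential(                 # plain MLP, no graph input
    torch.nn.Linear(d, 64), torch.nn.ReLU(), torch.nn.Linear(64, c))
opt = torch.optim.Adam(student.parameters(), lr=0.01)
for _ in range(100):
    opt.zero_grad()
    logits = student(features)
    loss = F.cross_entropy(logits, labels) + F.kl_div(
        F.log_softmax(logits, dim=-1), teacher_probs, reduction="batchmean")
    loss.backward()
    opt.step()
```
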
Quantization

  1. Learned Low Precision Graph Neural Networks [arXiv 2020] [paper]
  2. Degree-Quant: Quantization-Aware Training for Graph Neural Networks [ICLR 2021] [paper] [code]
  3. SGQuant: Squeezing the Last Bit on Graph Neural Networks with Specialized Quantization [ICTAI 2020] [paper] [code]
  4. VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization [NeurIPS 2021] [paper] [code]
  5. A2Q: Aggregation-Aware Quantization for Graph Neural Networks [ICLR 2023] [paper] [code]
  6. EPQuant: A Graph Neural Network Compression Approach Based on Product Quantization [NC 2022] [paper]
  7. Low-bit Quantization for Deep Graph Neural Networks with Smoothness-aware Message Propagation [CIKM 2023] [paper]
  8. Haar Wavelet Feature Compression for Quantized Graph Convolutional Networks [TNNLS 2023] [paper]
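
The building block these methods share is quantization-aware training: quantize in the forward pass, pass gradients straight through in the backward pass. A minimal uniform fake-quantization sketch; the bit-width and scaling rule are simplified assumptions, and methods like Degree-Quant add degree-dependent masking on top:

```python
import torch

class FakeQuant(torch.autograd.Function):
    """Uniform fake quantization with a straight-through estimator."""
    @staticmethod
    def forward(ctx, x, num_bits=8):
        qmax = 2 ** (num_bits - 1) - 1
        scale = x.abs().max().clamp(min=1e-8) / qmax
        return torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None           # straight-through: pass gradient as-is

x = torch.randn(4, 4, requires_grad=True)
y = FakeQuant.apply(x, 4)               # quantize messages/weights to 4 bits
y.sum().backward()
print(x.grad.abs().sum().item())        # gradients flow despite the rounding
```
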
Pruning

  1. A Unified Lottery Ticket Hypothesis for Graph Neural Networks [ICML 2021] [paper] [code]
  2. Accelerating Large Scale Real-Time GNN Inference using Channel Pruning [VLDB 2021] [paper] [code]
  3. Inductive Lottery Ticket Learning for Graph Neural Networks [OpenReview 2021] [paper] [code]
  4. Early-Bird GCNs: Graph-Network Co-optimization towards More Efficient GCN Training and Inference via Drawing Early-Bird Lottery Tickets [AAAI 2022] [paper] [code]
  5. Searching Lottery Tickets in Graph Neural Networks: A Dual Perspective [ICLR 2023] [paper]
  6. Rethinking Graph Lottery Tickets: Graph Sparsity Matters [ICLR 2023] [paper]
  7. The Snowflake Hypothesis: Training Deep GNN with One Node One Receptive Field [arXiv 2023] [paper]
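
A minimal sketch of the magnitude-based masking these lottery-ticket works start from; following UGS, both the weight matrices and per-edge scores are sparsified, with the sparsity levels and names being illustrative assumptions:

```python
import torch

def magnitude_mask(t, sparsity):
    """Zero out the lowest-magnitude `sparsity` fraction of entries."""
    k = int(t.numel() * sparsity)
    if k == 0:
        return torch.ones_like(t)
    threshold = t.abs().flatten().kthvalue(k).values
    return (t.abs() > threshold).float()

# UGS-style: jointly sparsify model weights AND per-edge graph scores.
weight = torch.randn(16, 16)
edge_score = torch.rand(100)              # one score per edge of the graph
w_mask = magnitude_mask(weight, sparsity=0.5)
e_mask = magnitude_mask(edge_score, sparsity=0.3)
print(w_mask.mean().item(), e_mask.mean().item())   # fraction of entries kept
```
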
Binarization

  1. Bi-GCN: Binary Graph Convolutional Network [CVPR 2021] [paper] [code]
  2. Binarized Graph Neural Network [WWW 2021] [paper]
  3. Binary Graph Neural Networks [CVPR 2021] [paper] [code]
  4. Meta-Aggregator: Learning to Aggregate for 1-bit Graph Neural Networks [ICCV 2021] [paper]
  5. BitGNN: Unleashing the Performance Potential of Binary Graph Neural Networks on GPUs [ICS 2023] [paper]
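
A minimal sketch of the standard 1-bit recipe these papers build on: sign() in the forward pass and a clipped straight-through estimator in the backward pass (the rescaling by mean |w| used in practice is noted in a comment but omitted):

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Binarize to +-1; gradients pass through where |x| <= 1."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()   # clip where |x| > 1

w = torch.randn(8, 8, requires_grad=True)
wb = BinarizeSTE.apply(w)          # +-1 weights; often rescaled by mean(|w|)
wb.sum().backward()
print(wb.unique(), w.grad.abs().sum().item())
```
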
Graph condensation

  1. Graph Condensation for Graph Neural Networks [ICLR 2022] [paper] [code]
  2. Condensing Graphs via One-Step Gradient Matching [KDD 2022] [paper] [code]
  3. Graph Condensation via Receptive Field Distribution Matching [arXiv 2022] [paper]
  4. Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data [arXiv 2023] [paper] [code]
  5. Graph Condensation via Eigenbasis Matching [arXiv 2023] [paper]
  6. Kernel Ridge Regression-Based Graph Dataset Distillation [KDD 2023] [paper] [code]
  7. Graph Condensation for Inductive Node Representation Learning [ICDE 2024] [paper]
  8. Fast Graph Condensation with Structure-based Neural Tangent Kernel [arXiv 2023] [paper]
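
A minimal gradient-matching sketch in the spirit of GCond, reduced to learnable synthetic features with a linear classifier; the full method also learns a synthetic adjacency and uses a GNN, so every size and model choice here is a toy assumption:

```python
import torch
import torch.nn.functional as F

n_real, n_syn, d, c = 500, 20, 16, 4
x_real = torch.randn(n_real, d)
y_real = torch.randint(0, c, (n_real,))
x_syn = torch.randn(n_syn, d, requires_grad=True)   # learnable synthetic nodes
y_syn = torch.arange(n_syn) % c                     # fixed, balanced labels
opt = torch.optim.Adam([x_syn], lr=0.01)

for _ in range(100):
    model = torch.nn.Linear(d, c)                   # fresh random classifier
    g_real = torch.autograd.grad(
        F.cross_entropy(model(x_real), y_real), tuple(model.parameters()))
    g_syn = torch.autograd.grad(
        F.cross_entropy(model(x_syn), y_syn), tuple(model.parameters()),
        create_graph=True)
    # push the synthetic data to induce the same training gradients
    loss = sum((a - b).pow(2).sum() for a, b in zip(g_real, g_syn))
    opt.zero_grad()
    loss.backward()
    opt.step()
```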