## Description

Official repository implementation for "Characterizing Massive Activations of Attention Mechanism in Graph Neural Networks".

This project analyzes the behavior of models taken from three upstream repositories, included (with due modifications) in the directories graphtransformer, SAN, and gnn-lspe.
Some code is integrated from labstructbioinf/EdgeGat.


## Setup

The three repositories are based on graphdeeplearning/benchmarking-gnns; please refer to their installation guide, 01_benchmark_installation.md, to set up CUDA, but without installing their environment.
A conda environment with this project's dependencies is provided in gnnma_gpu.yml.
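A minimal sketch of creating the environment from the provided file (the environment name is whatever gnnma_gpu.yml declares; `gnnma_gpu` below is an assumption):

```bash
# Create the environment from the provided file; the environment name
# is defined inside gnnma_gpu.yml (assumed here to be "gnnma_gpu").
conda env create -f gnnma_gpu.yml
conda activate gnnma_gpu
```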

Please refer to the three repositories for instructions on downloading the datasets (except for TOX21 in GraphTransformer, which is loaded via dgllife at runtime).


## Run experiments

The model/dataset configurations are assigned to the three repositories in the following way:

- graphtransformer: GT with ZINC and TOX21
- SAN: SAN with ZINC
- gnn-lspe: all the others (GT with PROTEINS, SAN with TOX21 and PROTEINS, GraphiT)

When running an experiment, the program prints two paths just before exiting:

- MA logs: a log file containing the activation values used for plots; it is suggested to store these paths in plot/malogs.py so they can be used to generate the plots.
- checkpoints: a directory containing the trained model's weights, to be used for further testing or attacks.

Commands to run experiments can be found in graphtransformer/scripts, SAN/scripts, and gnn-lspe/scripts.
These commands should be run from the corresponding repository's main directory (i.e., graphtransformer, SAN, or gnn-lspe); see the sketch below.
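A sketch of the expected workflow, using graphtransformer as an example (`<experiment_script>.sh` is a placeholder; the actual script names are in each repository's scripts directory):

```bash
# Run from the repository's main directory; <experiment_script>.sh is a
# placeholder for one of the scripts under graphtransformer/scripts.
cd graphtransformer
bash scripts/<experiment_script>.sh
```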


## Plots

Plots can be made using the notebooks in plot.
The logs must first be generated by running the experiments, and their paths stored in plot/malogs.py.
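Assuming Jupyter is available in the environment (this README does not say so, so this is an assumption), the notebooks can be opened with:

```bash
# Assumes Jupyter is installed in the active environment; the plotting
# notebooks live in the plot directory.
jupyter notebook plot
```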


## Attack

The attack results on GT with TOX21 can be reproduced using the commands in graphtransformer/scripts/attack. Optionally, model checkpoints produced by previously run experiments can be reused by adding the command-line parameters --epochs 0 --weights <checkpoint>, as sketched below.
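A sketch of reusing a saved checkpoint (`<attack_command>` and `<checkpoint>` are placeholders; --epochs 0 and --weights are the parameters named above):

```bash
# From the graphtransformer directory; <attack_command> stands for one
# of the commands in graphtransformer/scripts/attack. Adding
# --epochs 0 skips retraining, and --weights loads the checkpoint
# produced by a previously run experiment.
<attack_command> --epochs 0 --weights <checkpoint>
```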