msorbi/gnn-ma

Source code for the paper "Characterizing Massive Activations of Attention Mechanism in Graph Neural Networks".

Description

Official repository implementation for "Characterizing Massive Activations of Attention Mechanism in Graph Neural Networks".

This project analyzes the behavior of models taken from three existing repositories, which are included, with due modifications, in the directories graphtransformer, SAN, and gnn-lspe.
Some code is integrated from labstructbioinf/EdgeGat.


Setup

The three repositories are based on graphdeeplearning/benchmarking-gnns; please refer to their installation guide, 01_benchmark_installation.md, to set up CUDA, but do not install their environment.
A conda environment providing the dependencies for this project is available in gnnma_gpu.yml.
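Assuming conda is already installed and gnnma_gpu.yml sits at the project root, creating the environment is a standard one-liner; the environment name to activate is whatever the `name:` field in the yml defines (shown here as a placeholder):

```shell
# Create the project's conda environment from the provided spec file.
conda env create -f gnnma_gpu.yml
# Activate it; replace <env-name> with the name defined inside gnnma_gpu.yml.
conda activate <env-name>
```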

Please refer to the three repositories to download the datasets (except for TOX21 in GraphTransformer, which is loaded using dgllife at runtime).


Run experiments

The model/dataset configurations are assigned to the three repositories in the following way:

  • graphtransformer: GT with ZINC and TOX21
  • SAN: SAN with ZINC
  • gnn-lspe: all the others (GT with PROTEINS, SAN with TOX21 and PROTEINS, GraphiT)

When running an experiment, two paths are printed by the program just before it exits:

  • MA logs: a log file containing activation values, to be used for plots; it is suggested to store these paths in plot/malogs.py so they can be used to generate the plots
  • checkpoints: a directory containing the trained model's weights, to be used for further testing or attacks

Commands to run experiments can be found in graphtransformer/scripts, SAN/scripts, and gnn-lspe/scripts.
Such commands should be run from the corresponding repository's main directory (i.e., graphtransformer, SAN, or gnn-lspe).
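For example, launching one of the provided experiment scripts might look roughly like this; the script name below is a placeholder, since the actual names are listed under each scripts directory:

```shell
# Move into the repository whose configuration you want to run.
cd graphtransformer
# <script>.sh is a placeholder for one of the scripts under scripts/.
bash scripts/<script>.sh
```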


Plots

Plots can be generated using the notebooks in plot.
The logs must have been produced beforehand by running the experiments, and their paths stored in plot/malogs.py.
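The exact format of plot/malogs.py is not shown in this repository overview; a minimal sketch, assuming it is a plain module mapping experiment names to the "MA logs" paths printed at the end of each run (all names and paths below are hypothetical):

```python
# plot/malogs.py -- hypothetical sketch; experiment names and paths
# are illustrative, not the repository's actual entries.
# Map each experiment to the "MA logs" path printed at the end of its run.
MA_LOGS = {
    "gt_zinc": "graphtransformer/out/ma_logs/gt_zinc.log",
    "san_zinc": "SAN/out/ma_logs/san_zinc.log",
}

def get_log_path(experiment: str) -> str:
    """Return the stored MA log path for a given experiment name."""
    return MA_LOGS[experiment]
```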


Attack

The attack results on GT with TOX21 can be reproduced using the commands in graphtransformer/scripts/attack (optionally reusing model checkpoints produced by previously run experiments, by adding the command-line parameters --epochs 0 --weights <checkpoint>).
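Concretely, and assuming the attack scripts forward extra command-line parameters to the underlying program, reusing a saved checkpoint might look like this; the script name is a placeholder, and only the --epochs 0 --weights flags come from the note above:

```shell
# Run from graphtransformer/. <script>.sh stands for one of the scripts
# under scripts/attack, and <checkpoint> for a checkpoint path printed
# at the end of a previous experiment.
bash scripts/attack/<script>.sh --epochs 0 --weights <checkpoint>
```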

