From 2a8b8e7fd71ce8c1faa638a4f9c48c013ed56497 Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Thu, 3 Oct 2024 07:20:34 +0000 Subject: [PATCH] build based on 3fe2c76 --- dev/.documenter-siteinfo.json | 2 +- dev/api/basic/index.html | 6 +-- dev/api/conv/index.html | 40 ++++++++--------- dev/api/gnngraph/index.html | 44 +++++++++---------- dev/api/heterograph/index.html | 6 +-- dev/api/messagepassing/index.html | 4 +- dev/api/pool/index.html | 6 +-- dev/api/temporalconv/index.html | 12 ++--- dev/api/temporalgraph/index.html | 10 ++--- dev/api/utils/index.html | 4 +- dev/datasets/index.html | 2 +- dev/dev/index.html | 2 +- dev/gnngraph/index.html | 2 +- dev/gsoc/index.html | 2 +- dev/heterograph/index.html | 2 +- dev/index.html | 2 +- dev/messagepassing/index.html | 2 +- dev/models/index.html | 2 +- dev/search_index.js | 2 +- dev/temporalgraph/index.html | 2 +- dev/tutorials/index.html | 2 +- .../gnn_intro_pluto/index.html | 12 ++--- .../graph_classification_pluto/index.html | 2 +- .../node_classification_pluto/index.html | 10 ++--- 24 files changed, 90 insertions(+), 90 deletions(-) diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index a2be8694..2e7fd3d8 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-09-26T09:48:50","documenter_version":"1.7.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-10-03T07:20:24","documenter_version":"1.7.0"}} \ No newline at end of file diff --git a/dev/api/basic/index.html b/dev/api/basic/index.html index 8347c207..f3230c99 100644 --- a/dev/api/basic/index.html +++ b/dev/api/basic/index.html @@ -9,7 +9,7 @@ julia> dotdec(g, rand(2, 5)) 1×6 Matrix{Float64}: - 0.345098 0.458305 0.106353 0.345098 0.458305 0.106353source
GraphNeuralNetworks.GNNChainType
GNNChain(layers...)
+ 0.345098  0.458305  0.106353  0.345098  0.458305  0.106353
source
GraphNeuralNetworks.GNNChainType
GNNChain(layers...)
 GNNChain(name = layer, ...)

Collects multiple layers / functions to be called in sequence on a given input graph and input node features.

It allows composing layers in a sequential fashion, as Flux.Chain does, propagating the output of each layer to the next one. In addition, GNNChain handles the input graph as well, providing it as a first argument only to layers subtyping the GNNLayer abstract type.

GNNChain supports indexing and slicing, m[2] or m[1:end-1], and if names are given, m[:name] == m[1] etc.
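For instance, a minimal sketch of the indexing and naming behaviour described above (layer sizes are illustrative):

using Flux, GraphNeuralNetworks

m = GNNChain(enc = GCNConv(2 => 5), dec = Dense(5 => 1))
m[:enc] == m[1]   # named and positional indexing refer to the same layer
m[1:end-1]        # slicing returns a GNNChain containing only the first layer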

Examples

julia> using Flux, GraphNeuralNetworks
 
 julia> m = GNNChain(GCNConv(2=>5), 
@@ -41,7 +41,7 @@
  2.90053  2.90053  2.90053  2.90053  2.90053  2.90053
 
 julia> m2[:enc](g, x) == m(g, x)
-true
source
GraphNeuralNetworks.GNNLayerType
abstract type GNNLayer end

An abstract type from which graph neural network layers are derived.

See also GNNChain.

source
GraphNeuralNetworks.WithGraphType
WithGraph(model, g::GNNGraph; traingraph=false)

A type wrapping the model and tying it to the graph g. In the forward pass, it can only take feature arrays as inputs, returning model(g, x...; kws...).

If traingraph=false, the graph's parameters won't be part of the trainable parameters in the gradient updates.

Examples

g = GNNGraph([1,2,3], [2,3,1])
+true
source
GraphNeuralNetworks.GNNLayerType
abstract type GNNLayer end

An abstract type from which graph neural network layers are derived.

See also GNNChain.

source
GraphNeuralNetworks.WithGraphType
WithGraph(model, g::GNNGraph; traingraph=false)

A type wrapping the model and tying it to the graph g. In the forward pass, it can only take feature arrays as inputs, returning model(g, x...; kws...).

If traingraph=false, the graph's parameters won't be part of the trainable parameters in the gradient updates.

Examples

g = GNNGraph([1,2,3], [2,3,1])
 x = rand(Float32, 2, 3)
 model = SAGEConv(2 => 3)
 wg = WithGraph(model, g)
@@ -51,4 +51,4 @@
 g2 = GNNGraph([1,1,2,3], [2,4,1,1])
 x2 = rand(Float32, 2, 4)
 # WithGraph will ignore the internal graph if fed with a new one. 
-@assert wg(g2, x2) == model(g2, x2)
source
+@assert wg(g2, x2) == model(g2, x2)source diff --git a/dev/api/conv/index.html b/dev/api/conv/index.html index 2b60d070..d336020a 100644 --- a/dev/api/conv/index.html +++ b/dev/api/conv/index.html @@ -10,7 +10,7 @@ l = AGNNConv(init_beta=2.0f0) # forward pass -y = l(g, x) source
GraphNeuralNetworks.CGConvType
CGConv((in, ein) => out, act=identity; bias=true, init=glorot_uniform, residual=false)
+y = l(g, x)   
source
GraphNeuralNetworks.CGConvType
CGConv((in, ein) => out, act=identity; bias=true, init=glorot_uniform, residual=false)
 CGConv(in => out, ...)

The crystal graph convolutional layer from the paper Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties. Performs the operation

\[\mathbf{x}_i' = \mathbf{x}_i + \sum_{j\in N(i)}\sigma(W_f \mathbf{z}_{ij} + \mathbf{b}_f)\, act(W_s \mathbf{z}_{ij} + \mathbf{b}_s)\]

where $\mathbf{z}_{ij}$ is the node and edge features concatenation $[\mathbf{x}_i; \mathbf{x}_j; \mathbf{e}_{j\to i}]$ and $\sigma$ is the sigmoid function. The residual $\mathbf{x}_i$ is added only if residual=true and the output size is the same as the input size.

Arguments

  • in: The dimension of input node features.
  • ein: The dimension of input edge features.

If ein is not given, it is assumed that no edge features are passed as input in the forward pass.

  • out: The dimension of output node features.
  • act: Activation function.
  • bias: Add learnable bias.
  • init: Weights' initializer.
  • residual: Add a residual connection.

Examples

g = rand_graph(5, 6)
 x = rand(Float32, 2, g.num_nodes)
 e = rand(Float32, 3, g.num_edges)
@@ -20,7 +20,7 @@
 
 # No edge features
 l = CGConv(2 => 4, tanh)
-y = l(g, x)    # size: (4, num_nodes)
source
GraphNeuralNetworks.ChebConvType
ChebConv(in => out, k; bias=true, init=glorot_uniform)

Chebyshev spectral graph convolutional layer from the paper Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering.

Implements

\[X' = \sum^{K-1}_{k=0} W^{(k)} Z^{(k)}\]

where $Z^{(k)}$ is the $k$-th term of Chebyshev polynomials, and can be calculated by the following recursive form:

\[\begin{aligned} +y = l(g, x) # size: (4, num_nodes)

source
GraphNeuralNetworks.ChebConvType
ChebConv(in => out, k; bias=true, init=glorot_uniform)

Chebyshev spectral graph convolutional layer from the paper Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering.

Implements

\[X' = \sum^{K-1}_{k=0} W^{(k)} Z^{(k)}\]

where $Z^{(k)}$ is the $k$-th term of Chebyshev polynomials, and can be calculated by the following recursive form:

\[\begin{aligned} Z^{(0)} &= X \\ Z^{(1)} &= \hat{L} X \\ Z^{(k)} &= 2 \hat{L} Z^{(k-1)} - Z^{(k-2)} @@ -34,7 +34,7 @@ l = ChebConv(3 => 5, 5) # forward pass -y = l(g, x) # size: 5 × num_nodes

source
GraphNeuralNetworks.DConvType
DConv(ch::Pair{Int, Int}, k::Int; init = glorot_uniform, bias = true)

Diffusion convolution layer from the paper Diffusion Convolutional Recurrent Neural Networks: Data-Driven Traffic Forecasting.

Arguments

  • ch: Pair of input and output dimensions.
  • k: Number of diffusion steps.
  • init: Weights' initializer. Default glorot_uniform.
  • bias: Add learnable bias. Default true.

Examples

julia> g = GNNGraph(rand(10, 10), ndata = rand(Float32, 2, 10));
+y = l(g, x)       # size:  5 × num_nodes
source
GraphNeuralNetworks.DConvType
DConv(ch::Pair{Int, Int}, k::Int; init = glorot_uniform, bias = true)

Diffusion convolution layer from the paper Diffusion Convolutional Recurrent Neural Networks: Data-Driven Traffic Forecasting.

Arguments

  • ch: Pair of input and output dimensions.
  • k: Number of diffusion steps.
  • init: Weights' initializer. Default glorot_uniform.
  • bias: Add learnable bias. Default true.

Examples

julia> g = GNNGraph(rand(10, 10), ndata = rand(Float32, 2, 10));
 
 julia> dconv = DConv(2 => 4, 4)
 DConv(2 => 4, 4)
@@ -42,7 +42,7 @@
 julia> y = dconv(g, g.ndata.x);
 
 julia> size(y)
-(4, 10)
source
GraphNeuralNetworks.EGNNConvType
EGNNConv((in, ein) => out; hidden_size=2in, residual=false)
+(4, 10)
source
GraphNeuralNetworks.EGNNConvType
EGNNConv((in, ein) => out; hidden_size=2in, residual=false)
 EGNNConv(in => out; hidden_size=2in, residual=false)

Equivariant Graph Convolutional Layer from E(n) Equivariant Graph Neural Networks.

The layer performs the following operation:

\[\begin{aligned} \mathbf{m}_{j\to i} &=\phi_e(\mathbf{h}_i, \mathbf{h}_j, \lVert\mathbf{x}_i-\mathbf{x}_j\rVert^2, \mathbf{e}_{j\to i}),\\ \mathbf{x}_i' &= \mathbf{x}_i + C_i\sum_{j\in\mathcal{N}(i)}(\mathbf{x}_i-\mathbf{x}_j)\phi_x(\mathbf{m}_{j\to i}),\\ @@ -52,7 +52,7 @@ h = randn(Float32, 5, g.num_nodes) x = randn(Float32, 3, g.num_nodes) egnn = EGNNConv(5 => 6, 10) -hnew, xnew = egnn(g, h, x)

source
GraphNeuralNetworks.EdgeConvType
EdgeConv(nn; aggr=max)

Edge convolutional layer from the paper Dynamic Graph CNN for Learning on Point Clouds.

Performs the operation

\[\mathbf{x}_i' = \square_{j \in N(i)}\, nn([\mathbf{x}_i; \mathbf{x}_j - \mathbf{x}_i])\]

where nn generally denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.

Arguments

  • nn: A (possibly learnable) function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).

Examples:

# create data
+hnew, xnew = egnn(g, h, x)
source
GraphNeuralNetworks.EdgeConvType
EdgeConv(nn; aggr=max)

Edge convolutional layer from the paper Dynamic Graph CNN for Learning on Point Clouds.

Performs the operation

\[\mathbf{x}_i' = \square_{j \in N(i)}\, nn([\mathbf{x}_i; \mathbf{x}_j - \mathbf{x}_i])\]

where nn generally denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.

Arguments

  • nn: A (possibly learnable) function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).

Examples:

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 in_channel = 3
@@ -63,7 +63,7 @@
 l = EdgeConv(Dense(2 * in_channel, out_channel), aggr = +)
 
 # forward pass
-y = l(g, x)
source
GraphNeuralNetworks.GATConvType
GATConv(in => out, [σ; heads, concat, init, bias, negative_slope, add_self_loops])
+y = l(g, x)
source
GraphNeuralNetworks.GATConvType
GATConv(in => out, [σ; heads, concat, init, bias, negative_slope, add_self_loops])
 GATConv((in, ein) => out, ...)

Graph attentional layer from the paper Graph Attention Networks.

Implements the operation

\[\mathbf{x}_i' = \sum_{j \in N(i) \cup \{i\}} \alpha_{ij} W \mathbf{x}_j\]

where the attention coefficients $\alpha_{ij}$ are given by

\[\alpha_{ij} = \frac{1}{z_i} \exp(LeakyReLU(\mathbf{a}^T [W \mathbf{x}_i; W \mathbf{x}_j]))\]

with $z_i$ a normalization factor.

In case ein > 0 is given, edge features of dimension ein will be expected in the forward pass and the attention coefficients will be calculated as

\[\alpha_{ij} = \frac{1}{z_i} \exp(LeakyReLU(\mathbf{a}^T [W_e \mathbf{e}_{j\to i}; W \mathbf{x}_i; W \mathbf{x}_j]))\]

Arguments

  • in: The dimension of input node features.
  • ein: The dimension of input edge features. Default 0 (i.e. no edge features passed in the forward).
  • out: The dimension of output node features.
  • σ: Activation function. Default identity.
  • bias: Learn the additive bias if true. Default true.
  • heads: Number of attention heads. Default 1.
  • concat: Concatenate layer output or not. If not, layer output is averaged over the heads. Default true.
  • negative_slope: The parameter of LeakyReLU. Default 0.2.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default true.
  • dropout: Dropout probability on the normalized attention coefficient. Default 0.0.

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
@@ -76,7 +76,7 @@
 l = GATConv(in_channel => out_channel, add_self_loops = false, bias = false; heads=2, concat=true)
 
 # forward pass
-y = l(g, x)       
source
GraphNeuralNetworks.GATv2ConvType
GATv2Conv(in => out, [σ; heads, concat, init, bias, negative_slope, add_self_loops])
+y = l(g, x)       
source
GraphNeuralNetworks.GATv2ConvType
GATv2Conv(in => out, [σ; heads, concat, init, bias, negative_slope, add_self_loops])
 GATv2Conv((in, ein) => out, ...)

GATv2 attentional layer from the paper How Attentive are Graph Attention Networks?.

Implements the operation

\[\mathbf{x}_i' = \sum_{j \in N(i) \cup \{i\}} \alpha_{ij} W_1 \mathbf{x}_j\]

where the attention coefficients $\alpha_{ij}$ are given by

\[\alpha_{ij} = \frac{1}{z_i} \exp(\mathbf{a}^T LeakyReLU(W_2 \mathbf{x}_i + W_1 \mathbf{x}_j))\]

with $z_i$ a normalization factor.

In case ein > 0 is given, edge features of dimension ein will be expected in the forward pass and the attention coefficients will be calculated as

\[\alpha_{ij} = \frac{1}{z_i} \exp(\mathbf{a}^T LeakyReLU(W_3 \mathbf{e}_{j\to i} + W_2 \mathbf{x}_i + W_1 \mathbf{x}_j)).\]

Arguments

  • in: The dimension of input node features.
  • ein: The dimension of input edge features. Default 0 (i.e. no edge features passed in the forward).
  • out: The dimension of output node features.
  • σ: Activation function. Default identity.
  • bias: Learn the additive bias if true. Default true.
  • heads: Number of attention heads. Default 1.
  • concat: Concatenate layer output or not. If not, layer output is averaged over the heads. Default true.
  • negative_slope: The parameter of LeakyReLU. Default 0.2.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default true.
  • dropout: Dropout probability on the normalized attention coefficient. Default 0.0.

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
@@ -93,7 +93,7 @@
 e = randn(Float32, ein, length(s))
 
 # forward pass
-y = l(g, x, e)    
source
GraphNeuralNetworks.GCNConvType
GCNConv(in => out, σ=identity; [bias, init, add_self_loops, use_edge_weight])

Graph convolutional layer from the paper Semi-supervised Classification with Graph Convolutional Networks.

Performs the operation

\[\mathbf{x}'_i = \sum_{j\in N(i)} a_{ij} W \mathbf{x}_j\]

where $a_{ij} = 1 / \sqrt{|N(i)||N(j)|}$ is a normalization factor computed from the node degrees.

If the input graph has weighted edges and use_edge_weight=true, then $a_{ij}$ will be computed as

\[a_{ij} = \frac{e_{j\to i}}{\sqrt{\sum_{j \in N(i)} e_{j\to i}} \sqrt{\sum_{i \in N(j)} e_{i\to j}}}\]

The input to the layer is a node feature array X of size (num_features, num_nodes) and optionally an edge weight vector.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • σ: Activation function. Default identity.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.

Forward

(::GCNConv)(g::GNNGraph, x, edge_weight = nothing; norm_fn = d -> 1 ./ sqrt.(d), conv_weight = nothing) -> AbstractMatrix

Takes as input a graph g, a node feature matrix x of size [in, num_nodes], and optionally an edge weight vector. Returns a node feature matrix of size [out, num_nodes].

The norm_fn parameter allows for custom normalization of the graph convolution operation by passing a function as an argument. By default, it computes $\frac{1}{\sqrt{d}}$, i.e. the inverse square root of the degree (d) of each node in the graph. If conv_weight is an AbstractMatrix of size [out, in], then the convolution is performed using that weight matrix instead of the weights stored in the model.
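A self-contained sketch of the norm_fn and conv_weight keywords described above (the graph, sizes, and the alternative normalization are illustrative):

using GraphNeuralNetworks

g = rand_graph(5, 10)
x = rand(Float32, 3, g.num_nodes)
l = GCNConv(3 => 5)

# mean-style normalization 1/d instead of the default 1/sqrt(d)
y = l(g, x; norm_fn = d -> 1 ./ d)

# run the convolution with an externally supplied weight matrix of size [out, in]
W = rand(Float32, 5, 3)
y = l(g, x; conv_weight = W)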

Examples

# create data
+y = l(g, x, e)    
source
GraphNeuralNetworks.GCNConvType
GCNConv(in => out, σ=identity; [bias, init, add_self_loops, use_edge_weight])

Graph convolutional layer from the paper Semi-supervised Classification with Graph Convolutional Networks.

Performs the operation

\[\mathbf{x}'_i = \sum_{j\in N(i)} a_{ij} W \mathbf{x}_j\]

where $a_{ij} = 1 / \sqrt{|N(i)||N(j)|}$ is a normalization factor computed from the node degrees.

If the input graph has weighted edges and use_edge_weight=true, then $a_{ij}$ will be computed as

\[a_{ij} = \frac{e_{j\to i}}{\sqrt{\sum_{j \in N(i)} e_{j\to i}} \sqrt{\sum_{i \in N(j)} e_{i\to j}}}\]

The input to the layer is a node feature array X of size (num_features, num_nodes) and optionally an edge weight vector.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • σ: Activation function. Default identity.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.

Forward

(::GCNConv)(g::GNNGraph, x, edge_weight = nothing; norm_fn = d -> 1 ./ sqrt.(d), conv_weight = nothing) -> AbstractMatrix

Takes as input a graph g, a node feature matrix x of size [in, num_nodes], and optionally an edge weight vector. Returns a node feature matrix of size [out, num_nodes].

The norm_fn parameter allows for custom normalization of the graph convolution operation by passing a function as an argument. By default, it computes $\frac{1}{\sqrt{d}}$, i.e. the inverse square root of the degree (d) of each node in the graph. If conv_weight is an AbstractMatrix of size [out, in], then the convolution is performed using that weight matrix instead of the weights stored in the model.

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 g = GNNGraph(s, t)
@@ -113,7 +113,7 @@
 # Edge weights can also be embedded in the graph.
 g = GNNGraph(s, t, w)
 l = GCNConv(3 => 5, use_edge_weight=true) 
-y = l(g, x) # same as l(g, x, w) 
source
GraphNeuralNetworks.GINConvType
GINConv(f, ϵ; aggr=+)

Graph Isomorphism convolutional layer from the paper How Powerful are Graph Neural Networks?.

Implements the graph convolution

\[\mathbf{x}_i' = f_\Theta\left((1 + \epsilon) \mathbf{x}_i + \sum_{j \in N(i)} \mathbf{x}_j \right)\]

where $f_\Theta$ typically denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.

Arguments

  • f: A (possibly learnable) function acting on node features.
  • ϵ: Weighting factor.

Examples:

# create data
+y = l(g, x) # same as l(g, x, w) 
source
GraphNeuralNetworks.GINConvType
GINConv(f, ϵ; aggr=+)

Graph Isomorphism convolutional layer from the paper How Powerful are Graph Neural Networks?.

Implements the graph convolution

\[\mathbf{x}_i' = f_\Theta\left((1 + \epsilon) \mathbf{x}_i + \sum_{j \in N(i)} \mathbf{x}_j \right)\]

where $f_\Theta$ typically denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.

Arguments

  • f: A (possibly learnable) function acting on node features.
  • ϵ: Weighting factor.

Examples:

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 in_channel = 3
@@ -127,7 +127,7 @@
 l = GINConv(nn, 0.01f0, aggr = mean)
 
 # forward pass
-y = l(g, x)  
source
GraphNeuralNetworks.GMMConvType
GMMConv((in, ein) => out, σ=identity; K=1, bias=true, init=glorot_uniform, residual=false)

Graph mixture model convolution layer from the paper Geometric deep learning on graphs and manifolds using mixture model CNNs. Performs the operation

\[\mathbf{x}_i' = \mathbf{x}_i + \frac{1}{|N(i)|} \sum_{j\in N(i)}\frac{1}{K}\sum_{k=1}^K \mathbf{w}_k(\mathbf{e}_{j\to i}) \odot \Theta_k \mathbf{x}_j\]

where $w^a_{k}(e^a)$ for feature a and kernel k is given by

\[w^a_{k}(e^a) = \exp(-\frac{1}{2}(e^a - \mu^a_k)^T (\Sigma^{-1})^a_k(e^a - \mu^a_k))\]

$\Theta_k, \mu^a_k, (\Sigma^{-1})^a_k$ are learnable parameters.

The input to the layer is a node feature array x of size (num_features, num_nodes) and an edge pseudo-coordinate array e of size (num_features, num_edges). The residual $\mathbf{x}_i$ is added only if residual=true and the output size is the same as the input size.

Arguments

  • in: Number of input node features.
  • ein: Number of input edge features.
  • out: Number of output features.
  • σ: Activation function. Default identity.
  • K: Number of kernels. Default 1.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • residual: Residual connection. Default false.

Examples

# create data
+y = l(g, x)  
source
GraphNeuralNetworks.GMMConvType
GMMConv((in, ein) => out, σ=identity; K=1, bias=true, init=glorot_uniform, residual=false)

Graph mixture model convolution layer from the paper Geometric deep learning on graphs and manifolds using mixture model CNNs. Performs the operation

\[\mathbf{x}_i' = \mathbf{x}_i + \frac{1}{|N(i)|} \sum_{j\in N(i)}\frac{1}{K}\sum_{k=1}^K \mathbf{w}_k(\mathbf{e}_{j\to i}) \odot \Theta_k \mathbf{x}_j\]

where $w^a_{k}(e^a)$ for feature a and kernel k is given by

\[w^a_{k}(e^a) = \exp(-\frac{1}{2}(e^a - \mu^a_k)^T (\Sigma^{-1})^a_k(e^a - \mu^a_k))\]

$\Theta_k, \mu^a_k, (\Sigma^{-1})^a_k$ are learnable parameters.

The input to the layer is a node feature array x of size (num_features, num_nodes) and an edge pseudo-coordinate array e of size (num_features, num_edges). The residual $\mathbf{x}_i$ is added only if residual=true and the output size is the same as the input size.

Arguments

  • in: Number of input node features.
  • ein: Number of input edge features.
  • out: Number of output features.
  • σ: Activation function. Default identity.
  • K: Number of kernels. Default 1.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • residual: Residual connection. Default false.

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 g = GNNGraph(s,t)
@@ -139,7 +139,7 @@
 l = GMMConv((nin, ein) => out, K=K)
 
 # forward pass
-l(g, x, e)
source
GraphNeuralNetworks.GatedGraphConvType
GatedGraphConv(out, num_layers; aggr=+, init=glorot_uniform)

Gated graph convolution layer from Gated Graph Sequence Neural Networks.

Implements the recursion

\[\begin{aligned} +l(g, x, e)

source
GraphNeuralNetworks.GatedGraphConvType
GatedGraphConv(out, num_layers; aggr=+, init=glorot_uniform)

Gated graph convolution layer from Gated Graph Sequence Neural Networks.

Implements the recursion

\[\begin{aligned} \mathbf{h}^{(0)}_i &= [\mathbf{x}_i; \mathbf{0}] \\ \mathbf{h}^{(l)}_i &= GRU(\mathbf{h}^{(l-1)}_i, \square_{j \in N(i)} W \mathbf{h}^{(l-1)}_j) \end{aligned}\]

where $\mathbf{h}^{(l)}_i$ denotes the hidden state of node $i$ after the $l$-th GRU step. The dimension of the input $\mathbf{x}_i$ needs to be less than or equal to out.

Arguments

  • out: The dimension of output features.
  • num_layers: The number of recursion steps.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • init: Weight initialization function.

Examples:

# create data
@@ -153,7 +153,7 @@
 l = GatedGraphConv(out_channel, num_layers)
 
 # forward pass
-y = l(g, x)   
source
GraphNeuralNetworks.GraphConvType
GraphConv(in => out, σ=identity; aggr=+, bias=true, init=glorot_uniform)

Graph convolution layer from the paper Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks.

Performs:

\[\mathbf{x}_i' = W_1 \mathbf{x}_i + \square_{j \in \mathcal{N}(i)} W_2 \mathbf{x}_j\]

where the aggregation type is selected by aggr.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • σ: Activation function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples

# create data
+y = l(g, x)   
source
GraphNeuralNetworks.GraphConvType
GraphConv(in => out, σ=identity; aggr=+, bias=true, init=glorot_uniform)

Graph convolution layer from the paper Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks.

Performs:

\[\mathbf{x}_i' = W_1 \mathbf{x}_i + \square_{j \in \mathcal{N}(i)} W_2 \mathbf{x}_j\]

where the aggregation type is selected by aggr.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • σ: Activation function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 in_channel = 3
@@ -165,7 +165,7 @@
 l = GraphConv(in_channel => out_channel, relu, bias = false, aggr = mean)
 
 # forward pass
-y = l(g, x)       
source
GraphNeuralNetworks.MEGNetConvType
MEGNetConv(ϕe, ϕv; aggr=mean)
+y = l(g, x)       
source
GraphNeuralNetworks.MEGNetConvType
MEGNetConv(ϕe, ϕv; aggr=mean)
 MEGNetConv(in => out; aggr=mean)

Convolution from the paper Graph Networks as a Universal Machine Learning Framework for Molecules and Crystals. In the forward pass, it takes as inputs node features x and edge features e and returns updated features x' and e' according to

\[\begin{aligned} \mathbf{e}_{i\to j}' = \phi_e([\mathbf{x}_i;\, \mathbf{x}_j;\, \mathbf{e}_{i\to j}]),\\ \mathbf{x}_{i}' = \phi_v([\mathbf{x}_i;\, \square_{j\in \mathcal{N}(i)}\,\mathbf{e}_{j\to i}']). @@ -173,7 +173,7 @@ x = randn(Float32, 3, 10) e = randn(Float32, 3, 30) m = MEGNetConv(3 => 3) -x′, e′ = m(g, x, e)

source
GraphNeuralNetworks.NNConvType
NNConv(in => out, f, σ=identity; aggr=+, bias=true, init=glorot_uniform)

The continuous kernel-based convolutional operator from the Neural Message Passing for Quantum Chemistry paper. This convolution is also known as the edge-conditioned convolution from the Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs paper.

Performs the operation

\[\mathbf{x}_i' = W \mathbf{x}_i + \square_{j \in N(i)} f_\Theta(\mathbf{e}_{j\to i})\,\mathbf{x}_j\]

where $f_\Theta$ denotes a learnable function (e.g. a linear layer or a multi-layer perceptron). Given an input of batched edge features e of size (num_edge_features, num_edges), the function f will return a batched array of matrices of size (out, in, num_edges). For convenience, functions returning a single (out*in, num_edges) matrix are also allowed.

Arguments

  • in: The dimension of input node features.
  • out: The dimension of output node features.
  • f: A (possibly learnable) function acting on edge features.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • σ: Activation function.
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples:

n_in = 3
+x′, e′ = m(g, x, e)
source
GraphNeuralNetworks.NNConvType
NNConv(in => out, f, σ=identity; aggr=+, bias=true, init=glorot_uniform)

The continuous kernel-based convolutional operator from the Neural Message Passing for Quantum Chemistry paper. This convolution is also known as the edge-conditioned convolution from the Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs paper.

Performs the operation

\[\mathbf{x}_i' = W \mathbf{x}_i + \square_{j \in N(i)} f_\Theta(\mathbf{e}_{j\to i})\,\mathbf{x}_j\]

where $f_\Theta$ denotes a learnable function (e.g. a linear layer or a multi-layer perceptron). Given an input of batched edge features e of size (num_edge_features, num_edges), the function f will return a batched array of matrices of size (out, in, num_edges). For convenience, functions returning a single (out*in, num_edges) matrix are also allowed.

Arguments

  • in: The dimension of input node features.
  • out: The dimension of output node features.
  • f: A (possibly learnable) function acting on edge features.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • σ: Activation function.
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples:

n_in = 3
 n_in_edge = 10
 n_out = 5
 
@@ -192,7 +192,7 @@
 e = randn(Float32, n_in_edge, g.num_edges)
 
 # forward pass
-y = l(g, x, e)  
source
GraphNeuralNetworks.ResGatedGraphConvType
ResGatedGraphConv(in => out, act=identity; init=glorot_uniform, bias=true)

The residual gated graph convolutional operator from the Residual Gated Graph ConvNets paper.

The layer's forward pass is given by

\[\mathbf{x}_i' = act\big(U\mathbf{x}_i + \sum_{j \in N(i)} \eta_{ij} V \mathbf{x}_j\big),\]

where the edge gates $\eta_{ij}$ are given by

\[\eta_{ij} = sigmoid(A\mathbf{x}_i + B\mathbf{x}_j).\]

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • act: Activation function.
  • init: Weight matrices' initializing function.
  • bias: Learn an additive bias if true.

Examples:

# create data
+y = l(g, x, e)  
source
GraphNeuralNetworks.ResGatedGraphConvType
ResGatedGraphConv(in => out, act=identity; init=glorot_uniform, bias=true)

The residual gated graph convolutional operator from the Residual Gated Graph ConvNets paper.

The layer's forward pass is given by

\[\mathbf{x}_i' = act\big(U\mathbf{x}_i + \sum_{j \in N(i)} \eta_{ij} V \mathbf{x}_j\big),\]

where the edge gates $\eta_{ij}$ are given by

\[\eta_{ij} = sigmoid(A\mathbf{x}_i + B\mathbf{x}_j).\]

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • act: Activation function.
  • init: Weight matrices' initializing function.
  • bias: Learn an additive bias if true.

Examples:

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 in_channel = 3
@@ -203,7 +203,7 @@
 l = ResGatedGraphConv(in_channel => out_channel, tanh, bias = true)
 
 # forward pass
-y = l(g, x)  
source
GraphNeuralNetworks.SAGEConvType
SAGEConv(in => out, σ=identity; aggr=mean, bias=true, init=glorot_uniform)

GraphSAGE convolution layer from the paper Inductive Representation Learning on Large Graphs.

Performs:

\[\mathbf{x}_i' = W \cdot [\mathbf{x}_i; \square_{j \in \mathcal{N}(i)} \mathbf{x}_j]\]

where the aggregation type is selected by aggr.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • σ: Activation function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples:

# create data
+y = l(g, x)  
source
GraphNeuralNetworks.SAGEConvType
SAGEConv(in => out, σ=identity; aggr=mean, bias=true, init=glorot_uniform)

GraphSAGE convolution layer from the paper Inductive Representation Learning on Large Graphs.

Performs:

\[\mathbf{x}_i' = W \cdot [\mathbf{x}_i; \square_{j \in \mathcal{N}(i)} \mathbf{x}_j]\]

where the aggregation type is selected by aggr.

Arguments

  • in: The dimension of input features.
  • out: The dimension of output features.
  • σ: Activation function.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • bias: Add learnable bias.
  • init: Weights' initializer.

Examples:

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 in_channel = 3
@@ -214,7 +214,7 @@
 l = SAGEConv(in_channel => out_channel, tanh, bias = false, aggr = +)
 
 # forward pass
-y = l(g, x)   
source
GraphNeuralNetworks.SGConvType
SGConv(in => out, k=1; [bias, init, add_self_loops, use_edge_weight])

SGC layer from the paper Simplifying Graph Convolutional Networks. Performs the operation

\[H^{K} = (\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2})^K X \Theta\]

where $\tilde{A}$ is $A + I$.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k : Number of hops k. Default 1.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. Default false.

Examples

# create data
+y = l(g, x)   
source
GraphNeuralNetworks.SGConvType
SGConv(in => out, k=1; [bias, init, add_self_loops, use_edge_weight])

SGC layer from the paper Simplifying Graph Convolutional Networks. Performs the operation

\[H^{K} = (\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2})^K X \Theta\]

where $\tilde{A}$ is $A + I$.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k : Number of hops k. Default 1.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. Default false.

Examples

# create data
 s = [1,1,2,3]
 t = [2,3,1,1]
 g = GNNGraph(s, t)
@@ -233,7 +233,7 @@
 # Edge weights can also be embedded in the graph.
 g = GNNGraph(s, t, w)
 l = SGConv(3 => 5, add_self_loops = true, use_edge_weight=true) 
-y = l(g, x) # same as l(g, x, w) 
source
GraphNeuralNetworks.TAGConvType
TAGConv(in => out, k=3; bias=true, init=glorot_uniform, add_self_loops=true, use_edge_weight=false)

TAGConv layer from Topology Adaptive Graph Convolutional Networks. This layer extends the idea of graph convolutions by applying filters that adapt to the topology of the data. It performs the operation:

\[H^{K} = {\sum}_{k=0}^K (D^{-1/2} A D^{-1/2})^{k} X {\Theta}_{k}\]

where A is the adjacency matrix of the graph, D is the degree matrix, X is the input feature matrix, and ${\Theta}_{k}$ is a unique weight matrix for each hop k.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Maximum number of hops to consider. Default is 3.
  • bias: Whether to include a learnable bias term. Default is true.
  • init: Initialization function for the weights. Default is glorot_uniform.
  • add_self_loops: Whether to add self-loops to the adjacency matrix. Default is true.
  • use_edge_weight: If true, edge weights are considered in the computation (if available). Default is false.

Examples

# Example graph data
+y = l(g, x) # same as l(g, x, w) 
source
GraphNeuralNetworks.TAGConvType
TAGConv(in => out, k=3; bias=true, init=glorot_uniform, add_self_loops=true, use_edge_weight=false)

TAGConv layer from Topology Adaptive Graph Convolutional Networks. This layer extends the idea of graph convolutions by applying filters that adapt to the topology of the data. It performs the operation:

\[H^{K} = {\sum}_{k=0}^K (D^{-1/2} A D^{-1/2})^{k} X {\Theta}_{k}\]

where A is the adjacency matrix of the graph, D is the degree matrix, X is the input feature matrix, and ${\Theta}_{k}$ is a unique weight matrix for each hop k.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Maximum number of hops to consider. Default is 3.
  • bias: Whether to include a learnable bias term. Default is true.
  • init: Initialization function for the weights. Default is glorot_uniform.
  • add_self_loops: Whether to add self-loops to the adjacency matrix. Default is true.
  • use_edge_weight: If true, edge weights are considered in the computation (if available). Default is false.

Examples

# Example graph data
 s = [1, 1, 2, 3]
 t = [2, 3, 1, 1]
 g = GNNGraph(s, t)  # Create a graph
@@ -243,7 +243,7 @@
 l = TAGConv(3 => 5, k=3; add_self_loops=true)
 
 # Apply the TAGConv layer
-y = l(g, x)  # Output size: 5 × num_nodes
source
GraphNeuralNetworks.TransformerConvType
TransformerConv((in, ein) => out; [heads, concat, init, add_self_loops, bias_qkv,
+y = l(g, x)  # Output size: 5 × num_nodes
source
GraphNeuralNetworks.TransformerConvType
TransformerConv((in, ein) => out; [heads, concat, init, add_self_loops, bias_qkv,
     bias_root, root_weight, gating, skip_connection, batch_norm, ff_channels]))

The transformer-like multi head attention convolutional operator from the Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification paper, which also considers edge features. It further contains options to also be configured as the transformer-like convolutional operator from the Attention, Learn to Solve Routing Problems! paper, including a successive feed-forward network as well as skip layers and batch normalization.

The layer's basic forward pass is given by

\[x_i' = W_1x_i + \sum_{j\in N(i)} \alpha_{ij} (W_2 x_j + W_6e_{ij})\]

where the attention scores are

\[\alpha_{ij} = \mathrm{softmax}\left(\frac{(W_3x_i)^T(W_4x_j+ W_6e_{ij})}{\sqrt{d}}\right).\]

Optionally, a gating mechanism can combine the aggregated value with the transformed root node features via

\[x'_i = \beta_i W_1 x_i + (1 - \beta_i) \underbrace{\left(\sum_{j \in \mathcal{N}(i)} \alpha_{i,j} W_2 x_j \right)}_{=m_i}\]

with

\[\beta_i = \textrm{sigmoid}(W_5^{\top} [ W_1 x_i, m_i, W_1 x_i - m_i ]).\]

Arguments

  • in: Dimension of input features, which also corresponds to the dimension of the output features.
  • ein: Dimension of the edge features; if 0, no edge features will be used.
  • out: Dimension of the output.
  • heads: Number of heads in output. Default 1.
  • concat: Concatenate layer output or not. If not, layer output is averaged over the heads. Default true.
  • init: Weight matrices' initializing function. Default glorot_uniform.
  • add_self_loops: Add self loops to the input graph. Default false.
  • bias_qkv: If set, bias is used in the key, query and value transformations for nodes. Default true.
  • bias_root: If set, the layer will also learn an additive bias for the root when root weight is used. Default true.
  • root_weight: If set, the layer will add the transformed root node features to the output. Default true.
  • gating: If set, will combine aggregation and transformed root node features by a gating mechanism. Default false.
  • skip_connection: If set, a skip connection will be made from the input and added to the output. Default false.
  • batch_norm: If set, a batch normalization will be applied to the output. Default false.
  • ff_channels: If positive, a feed-forward NN is appended, with its first layer having the given number of hidden nodes; this NN also gets a skip connection and batch normalization if the respective parameters are set. Default: 0.

Examples

N, in_channel, out_channel = 4, 3, 5
@@ -252,4 +252,4 @@
 l = TransformerConv((in_channel, ein) => in_channel; heads, gating = true, bias_qkv = true)
 x = rand(Float32, in_channel, N)
 e = rand(Float32, ein, g.num_edges)
-l(g, x, e)
source
+l(g, x, e)source diff --git a/dev/api/gnngraph/index.html b/dev/api/gnngraph/index.html index 06fac6df..b14f9e09 100644 --- a/dev/api/gnngraph/index.html +++ b/dev/api/gnngraph/index.html @@ -1,5 +1,5 @@ -GNNGraph · GraphNeuralNetworks.jl

GNNGraph

Documentation page for the graph type GNNGraph provided by GraphNeuralNetworks.jl and related methods.

Besides the methods documented here, one can rely on the large set of functionalities given by Graphs.jl thanks to the fact that GNNGraph inherits from Graphs.AbstractGraph.

Index

GNNGraph type

GNNGraphs.GNNGraphType
GNNGraph(data; [graph_type, ndata, edata, gdata, num_nodes, graph_indicator, dir])
+GNNGraph · GraphNeuralNetworks.jl

GNNGraph

Documentation page for the graph type GNNGraph provided by GraphNeuralNetworks.jl and related methods.

Besides the methods documented here, one can rely on the large set of functionalities given by Graphs.jl thanks to the fact that GNNGraph inherits from Graphs.AbstractGraph.

Index

GNNGraph type

GNNGraphs.GNNGraphType
GNNGraph(data; [graph_type, ndata, edata, gdata, num_nodes, graph_indicator, dir])
 GNNGraph(g::GNNGraph; [ndata, edata, gdata])

A type representing a graph structure that also stores feature arrays associated to nodes, edges, and the graph itself.

The feature arrays are stored in the fields ndata, edata, and gdata as DataStore objects offering a convenient dictionary-like and namedtuple-like interface. The features can be passed at construction time or added later.

A GNNGraph can be constructed out of different data objects expressing the connections inside the graph. The internal representation type is determined by graph_type.

When constructed from another GNNGraph, the internal graph representation is preserved and shared. The node/edge/graph features are retained as well, unless explicitly set by the keyword arguments ndata, edata, and gdata.

A GNNGraph can also represent multiple graphs batched together (see MLUtils.batch or SparseArrays.blockdiag). The field g.graph_indicator contains the graph membership of each node.

GNNGraphs are always directed graphs, therefore each edge is defined by a source node and a target node (see edge_index). Self loops (edges connecting a node to itself) and multiple edges (more than one edge between the same pair of nodes) are supported.

A GNNGraph is a Graphs.jl's AbstractGraph, therefore it supports most functionality from that library.

Arguments

  • data: Some data representing the graph topology. Possible type are
    • An adjacency matrix
    • An adjacency list.
    • A tuple containing the source and target vectors (COO representation)
    • A Graphs.jl' graph.
  • graph_type: A keyword argument that specifies the underlying representation used by the GNNGraph. Currently supported values are
    • :coo. Graph represented as a tuple (source, target), such that the k-th edge connects the node source[k] to node target[k]. Optionally, also edge weights can be given: (source, target, weights).
    • :sparse. A sparse adjacency matrix representation.
    • :dense. A dense adjacency matrix representation.
    Defaults to :coo, currently the most supported type.
  • dir: The assumed edge direction when given adjacency matrix or adjacency list input data g. Possible values are :out and :in. Default :out.
  • num_nodes: The number of nodes. If not specified, inferred from g. Default nothing.
  • graph_indicator: For batched graphs, a vector containing the graph assignment of each node. Default nothing.
  • ndata: Node features. An array or named tuple of arrays whose last dimension has size num_nodes.
  • edata: Edge features. An array or named tuple of arrays whose last dimension has size num_edges.
  • gdata: Graph features. An array or named tuple of arrays whose last dimension has size num_graphs.

Examples

using GraphNeuralNetworks
 
 # Construct from adjacency list representation
@@ -35,7 +35,7 @@
 # Both source and target are vectors of length num_edges
 source, target = edge_index(g)

A GNNGraph can be sent to the GPU using e.g. Flux's gpu function:

# Send to gpu
 using Flux, CUDA
-g = g |> Flux.gpu
source
Base.copyFunction
copy(g::GNNGraph; deep=false)

Create a copy of g. If deep is true, then copy will be a deep copy (equivalent to deepcopy(g)), otherwise it will be a shallow copy with the same underlying graph data.

source

DataStore

Base.copyFunction
copy(g::GNNGraph; deep=false)

Create a copy of g. If deep is true, then copy will be a deep copy (equivalent to deepcopy(g)), otherwise it will be a shallow copy with the same underlying graph data.

source

DataStore

GNNGraphs.DataStoreType
DataStore([n, data])
 DataStore([n,] k1 = x1, k2 = x2, ...)

A container for feature arrays. The optional argument n enforces that numobs(x) == n for each array contained in the datastore.

At construction time, the data can be provided as any iterable of pairs of symbols and arrays, or as keyword arguments:

julia> ds = DataStore(3, x = rand(Float32, 2, 3), y = rand(Float32, 3))
 DataStore(3) with 2 elements:
   y = 3-element Vector{Float32}
@@ -79,8 +79,8 @@
 julia> ds2.a
 2-element Vector{Float64}:
  1.0
- 1.0
source

Query

GNNGraphs.adjacency_listMethod
adjacency_list(g; dir=:out)
-adjacency_list(g, nodes; dir=:out)

Return the adjacency list representation (a vector of vectors) of the graph g.

Denoting the returned adjacency list by a: if dir=:out, then a[i] will contain the neighbors of node i through outgoing edges; if dir=:in, it will contain neighbors from incoming edges instead.

If nodes is given, return the neighborhood of the nodes in nodes only.
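A small sketch of the behaviour described above (the graph is illustrative):

using GraphNeuralNetworks

g = GNNGraph([1, 2, 3], [2, 3, 1])   # directed cycle 1 → 2 → 3 → 1
adjacency_list(g)                    # [[2], [3], [1]] with the default dir=:out
adjacency_list(g; dir = :in)         # [[3], [1], [2]]
adjacency_list(g, [1, 2])            # neighborhoods of nodes 1 and 2 only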

source
GNNGraphs.edge_indexMethod
edge_index(g::GNNGraph)

Return a tuple containing two vectors, respectively storing the source and target nodes for each edge in g.

s, t = edge_index(g)
source
GNNGraphs.edge_indexMethod
edge_index(g::GNNHeteroGraph, [edge_t])

Return a tuple containing two vectors, respectively storing the source and target nodes for each edge in g of type edge_t = (src_t, rel_t, trg_t).

If edge_t is not provided, it will error if g has more than one edge type.

source
GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNGraph; edges=false)

Return a vector containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph. If edges=true, return the graph membership of each edge instead.

source
GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNHeteroGraph, [node_t])

Return a Dict of vectors containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph for each node type. If node_t is provided, return the graph membership of each node of type node_t instead.

See also batch.

source
GNNGraphs.has_isolated_nodesMethod
has_isolated_nodes(g::GNNGraph; dir=:out)

Return true if the graph g contains nodes with out-degree (if dir=:out) or in-degree (if dir = :in) equal to zero.
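A brief sketch (the graphs are illustrative):

using GraphNeuralNetworks

g = GNNGraph([1, 2, 3], [2, 3, 1])             # directed cycle, no isolated nodes
has_isolated_nodes(g)                          # false
g2 = GNNGraph([1, 2], [2, 1], num_nodes = 3)   # node 3 has no edges
has_isolated_nodes(g2)                         # true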

source
GNNGraphs.is_bidirectedMethod
is_bidirected(g::GNNGraph)

Check if the directed graph g essentially corresponds to an undirected graph, i.e. if for each edge it also contains the reverse edge.
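For example (the graphs are illustrative):

using GraphNeuralNetworks

is_bidirected(GNNGraph([1, 2], [2, 1]))   # true, each edge has its reverse
is_bidirected(GNNGraph([1, 2], [2, 3]))   # false, the reverse edges 2→1 and 3→2 are missing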

source
GNNGraphs.khop_adjFunction
khop_adj(g::GNNGraph,k::Int,T::DataType=eltype(g); dir=:out, weighted=true)

Return $A^k$ where $A$ is the adjacency matrix of the graph 'g'.
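A quick sketch (the graph is illustrative; the relation to adjacency_matrix follows from the definition):

using GraphNeuralNetworks

g = GNNGraph([1, 2, 3], [2, 3, 1])
khop_adj(g, 2)   # should match adjacency_matrix(g)^2 up to storage type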

source
GNNGraphs.laplacian_lambda_maxFunction
laplacian_lambda_max(g::GNNGraph, T=Float32; add_self_loops=false, dir=:out)

Return the largest eigenvalue of the normalized symmetric Laplacian of the graph g.

If the graph is batched from multiple graphs, return the list of the largest eigenvalue for each graph.

source
GNNGraphs.normalized_laplacianFunction
normalized_laplacian(g, T=Float32; add_self_loops=false, dir=:out)

Normalized Laplacian matrix of graph g.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • add_self_loops: add self-loops while calculating the matrix.
  • dir: the edge directionality considered (:out, :in, :both).
source
GNNGraphs.scaled_laplacianFunction
scaled_laplacian(g, T=Float32; dir=:out)

Scaled Laplacian matrix of graph g, defined as $\hat{L} = \frac{2}{\lambda_{max}} L - I$ where $L$ is the normalized Laplacian matrix.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • dir: the edge directionality considered (:out, :in, :both).
source
Graphs.LinAlg.adjacency_matrixFunction
adjacency_matrix(g::GNNGraph, T=eltype(g); dir=:out, weighted=true)

Return the adjacency matrix A for the graph g.

If dir=:out, A[i,j] > 0 denotes the presence of an edge from node i to node j. If dir=:in instead, A[i,j] > 0 denotes the presence of an edge from node j to node i.

User may specify the eltype T of the returned matrix.

If weighted=true, A will contain the edge weights if any; otherwise the elements of A will be either 0 or 1.
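A small sketch of the weighted keyword (the weights are illustrative):

using Graphs, GraphNeuralNetworks

g = GNNGraph([1, 2], [2, 3], [0.5, 2.0])   # two weighted edges
adjacency_matrix(g)                        # entries 0.5 at (1,2) and 2.0 at (2,3)
adjacency_matrix(g; weighted = false)      # binary 0/1 entries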

source
Graphs.degreeMethod
degree(g::GNNGraph, T=nothing; dir=:out, edge_weight=true)

Return a vector containing the degrees of the nodes in g.

The gradient is propagated through this function only if edge_weight is true or a vector.

Arguments

  • g: A graph.
  • T: Element type of the returned vector. If nothing, it is chosen based on the graph type and will be an integer if edge_weight = false. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two.
  • edge_weight: If true and the graph contains weighted edges, the degree will be weighted. Set to false instead to just count the number of outgoing/ingoing edges. Finally, you can also pass a vector of weights to be used instead of the graph's own weights. Default true.
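A short sketch of the dir and edge_weight options (the graph and weights are illustrative):

using Graphs, GraphNeuralNetworks

g = GNNGraph([1, 1, 2], [2, 3, 3], [0.5, 0.5, 1.0])   # three weighted edges
degree(g)                         # weighted out-degrees: [1.0, 1.0, 0.0]
degree(g; edge_weight = false)    # plain edge counts: [2, 1, 0]
degree(g; dir = :in)              # weighted in-degrees: [0.0, 0.5, 1.5]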
source
Graphs.degreeMethod
degree(g::GNNHeteroGraph, edge_type::EType; dir = :in)

Return a vector containing the degrees of the nodes in the GNNHeteroGraph g for the given edge_type.

Arguments

  • g: A graph.
  • edge_type: A tuple of symbols (source_t, edge_t, target_t) representing the edge type.
  • T: Element type of the returned vector. If nothing, it is chosen based on the graph type. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two. Default dir = :out.
source
Graphs.neighborsMethod
neighbors(g::GNNGraph, i::Integer; dir=:out)

Return the neighbors of node i in the graph g. If dir=:out, return the neighbors through outgoing edges. If dir=:in, return the neighbors through incoming edges.

See also outneighbors, inneighbors.

source

Transform

Query

GNNGraphs.adjacency_listMethod
adjacency_list(g; dir=:out)
+adjacency_list(g, nodes; dir=:out)

Return the adjacency list representation (a vector of vectors) of the graph g.

Denoting the returned adjacency list by a: if dir=:out, then a[i] will contain the neighbors of node i through outgoing edges; if dir=:in, it will contain neighbors from incoming edges instead.

If nodes is given, return the neighborhood of the nodes in nodes only.

source
GNNGraphs.edge_indexMethod
edge_index(g::GNNGraph)

Return a tuple containing two vectors, respectively storing the source and target nodes for each edge in g.

s, t = edge_index(g)
source
GNNGraphs.edge_indexMethod
edge_index(g::GNNHeteroGraph, [edge_t])

Return a tuple containing two vectors, respectively storing the source and target nodes for each edge in g of type edge_t = (src_t, rel_t, trg_t).

If edge_t is not provided, it will error if g has more than one edge type.

source
GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNGraph; edges=false)

Return a vector containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph. If edges=true, return the graph membership of each edge instead.

source
GNNGraphs.graph_indicatorMethod
graph_indicator(g::GNNHeteroGraph, [node_t])

Return a Dict of vectors containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph for each node type. If node_t is provided, return the graph membership of each node of type node_t instead.

See also batch.

source
GNNGraphs.has_isolated_nodesMethod
has_isolated_nodes(g::GNNGraph; dir=:out)

Return true if the graph g contains nodes with out-degree (if dir=:out) or in-degree (if dir = :in) equal to zero.

source
GNNGraphs.is_bidirectedMethod
is_bidirected(g::GNNGraph)

Check if the directed graph g essentially corresponds to an undirected graph, i.e. if for each edge it also contains the reverse edge.

source
GNNGraphs.khop_adjFunction
khop_adj(g::GNNGraph,k::Int,T::DataType=eltype(g); dir=:out, weighted=true)

Return $A^k$ where $A$ is the adjacency matrix of the graph 'g'.

source
GNNGraphs.laplacian_lambda_maxFunction
laplacian_lambda_max(g::GNNGraph, T=Float32; add_self_loops=false, dir=:out)

Return the largest eigenvalue of the normalized symmetric Laplacian of the graph g.

If the graph is batched from multiple graphs, return the list of the largest eigenvalue for each graph.

source
GNNGraphs.normalized_laplacianFunction
normalized_laplacian(g, T=Float32; add_self_loops=false, dir=:out)

Normalized Laplacian matrix of graph g.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • add_self_loops: add self-loops while calculating the matrix.
  • dir: the edge directionality considered (:out, :in, :both).
source
GNNGraphs.scaled_laplacianFunction
scaled_laplacian(g, T=Float32; dir=:out)

Scaled Laplacian matrix of graph g, defined as $\hat{L} = \frac{2}{\lambda_{max}} L - I$ where $L$ is the normalized Laplacian matrix.

Arguments

  • g: A GNNGraph.
  • T: result element type.
  • dir: the edge directionality considered (:out, :in, :both).
source
Graphs.LinAlg.adjacency_matrixFunction
adjacency_matrix(g::GNNGraph, T=eltype(g); dir=:out, weighted=true)

Return the adjacency matrix A for the graph g.

If dir=:out, A[i,j] > 0 denotes the presence of an edge from node i to node j. If dir=:in instead, A[i,j] > 0 denotes the presence of an edge from node j to node i.

User may specify the eltype T of the returned matrix.

If weighted=true, A will contain the edge weights if any; otherwise the elements of A will be either 0 or 1.

source
Graphs.degreeMethod
degree(g::GNNGraph, T=nothing; dir=:out, edge_weight=true)

Return a vector containing the degrees of the nodes in g.

The gradient is propagated through this function only if edge_weight is true or a vector.

Arguments

  • g: A graph.
  • T: Element type of the returned vector. If nothing, it is chosen based on the graph type and will be an integer if edge_weight = false. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two.
  • edge_weight: If true and the graph contains weighted edges, the degree will be weighted. Set to false instead to just count the number of outgoing/ingoing edges. Finally, you can also pass a vector of weights to be used instead of the graph's own weights. Default true.
source
Graphs.degreeMethod
degree(g::GNNHeteroGraph, edge_type::EType; dir = :in)

Return a vector containing the degrees of the nodes in the GNNHeteroGraph g for the given edge_type.

Arguments

  • g: A graph.
  • edge_type: A tuple of symbols (source_t, edge_t, target_t) representing the edge type.
  • T: Element type of the returned vector. If nothing, it is chosen based on the graph type. Default nothing.
  • dir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two. Default dir = :out.
source
Graphs.neighborsMethod
neighbors(g::GNNGraph, i::Integer; dir=:out)

Return the neighbors of node i in the graph g. If dir=:out, return the neighbors through outgoing edges. If dir=:in, return the neighbors through incoming edges.

See also outneighbors, inneighbors.

source

Transform

GNNGraphs.add_edgesMethod
add_edges(g::GNNGraph, s::AbstractVector, t::AbstractVector; [edata])
 add_edges(g::GNNGraph, (s, t); [edata])
 add_edges(g::GNNGraph, (s, t, w); [edata])

Add to graph g the edges with source nodes s and target nodes t. Optionally, pass the edge weight w and the features edata for the new edges. Returns a new graph sharing part of the underlying data with g.

If s or t contain nodes that are not already present in the graph, they are added to the graph as well.

Examples

julia> s, t = [1, 2, 3, 3, 4], [2, 3, 4, 4, 4];
 
@@ -102,12 +102,12 @@
 julia> add_edges(g, [1,2], [2,3])
 GNNGraph:
     num_nodes: 3
-    num_edges: 2
source
GNNGraphs.add_edgesMethod
add_edges(g::GNNHeteroGraph, edge_t, s, t; [edata, num_nodes])
 add_edges(g::GNNHeteroGraph, edge_t => (s, t); [edata, num_nodes])
-add_edges(g::GNNHeteroGraph, edge_t => (s, t, w); [edata, num_nodes])

Add to heterograph g edges of type edge_t with source node vector s and target node vector t. Optionally, pass the edge weights w or the features edata for the new edges. edge_t is a triplet of symbols (src_t, rel_t, dst_t).

If the edge type is not already present in the graph, it is added. If it involves new node types, they are added to the graph as well. In this case, a dictionary or named tuple of num_nodes can be passed to specify the number of nodes of the new types, otherwise the number of nodes is inferred from the maximum node id in s and t.

source
GNNGraphs.add_nodesMethod
add_nodes(g::GNNGraph, n; [ndata])

Add n new nodes to graph g. In the new graph, these nodes will have indexes from g.num_nodes + 1 to g.num_nodes + n.

source
GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNGraph)

Return a graph with the same features as g but also adding edges connecting the nodes to themselves.

Nodes with already existing self-loops will obtain a second self-loop.

If the graph has edge weights, the new edges will have weight 1.
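A minimal sketch (the graph is illustrative):

using GraphNeuralNetworks

g = GNNGraph([1, 2], [2, 3])   # 3 nodes, 2 edges
g2 = add_self_loops(g)
g2.num_edges                   # 5, the 2 original edges plus one self-loop per node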

source
GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNHeteroGraph, edge_t::EType)
-add_self_loops(g::GNNHeteroGraph)

If the source node type is the same as the destination node type in edge_t, return a graph with the same features as g but also add self-loops of the specified type, edge_t. Otherwise, it returns g unchanged.

Nodes with already existing self-loops of type edge_t will obtain a second set of self-loops of the same type.

If the graph has edge weights for edges of type edge_t, the new edges will have weight 1.

If no edges of type edge_t exist, or all existing edges have no weight, then all new self loops will have no weight.

If edge_t is not passed as an argument, a self-loop is added to each node for every edge type in the graph whose source and destination node types are the same. This iterates over all edge types present in the graph, applying the self-loop addition to each applicable edge type.

source
GNNGraphs.getgraphMethod
getgraph(g::GNNGraph, i; nmap=false)

Return the subgraph of g induced by those nodes j for which g.graph_indicator[j] == i or, if i is a collection, g.graph_indicator[j] ∈ i. In other words, it extracts the component graphs from a batched graph.

If nmap=true, return also a vector v mapping the new nodes to the old ones. The node i in the subgraph will correspond to the node v[i] in g.
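
A usage sketch (illustrative only; the component graphs depend on the random generator):

using GraphNeuralNetworks, MLUtils

gbatched = MLUtils.batch([rand_graph(4, 6), rand_graph(5, 8)])

g2 = getgraph(gbatched, 2)                  # the second component graph
g2, v = getgraph(gbatched, 2, nmap = true)  # v maps the new node ids to the ids in gbatched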

source
GNNGraphs.negative_sampleMethod
negative_sample(g::GNNGraph; 
+add_edges(g::GNNHeteroGraph, edge_t => (s, t, w); [edata, num_nodes])

Add to heterograph g edges of type edge_t with source node vector s and target node vector t. Optionally, pass the edge weights w or the features edata for the new edges. edge_t is a triplet of symbols (src_t, rel_t, dst_t).

If the edge type is not already present in the graph, it is added. If it involves new node types, they are added to the graph as well. In this case, a dictionary or named tuple of num_nodes can be passed to specify the number of nodes of the new types, otherwise the number of nodes is inferred from the maximum node id in s and t.

source
GNNGraphs.add_nodesMethod
add_nodes(g::GNNGraph, n; [ndata])

Add n new nodes to graph g. In the new graph, these nodes will have indexes from g.num_nodes + 1 to g.num_nodes + n.

source
GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNGraph)

Return a graph with the same features as g but also adding edges connecting the nodes to themselves.

Nodes with already existing self-loops will obtain a second self-loop.

If the graph has edge weights, the new edges will have weight 1.

source
GNNGraphs.add_self_loopsMethod
add_self_loops(g::GNNHeteroGraph, edge_t::EType)
+add_self_loops(g::GNNHeteroGraph)

If the source node type is the same as the destination node type in edge_t, return a graph with the same features as g but also add self-loops of the specified type, edge_t. Otherwise, it returns g unchanged.

Nodes with already existing self-loops of type edge_t will obtain a second set of self-loops of the same type.

If the graph has edge weights for edges of type edge_t, the new edges will have weight 1.

If no edges of type edge_t exist, or all existing edges have no weight, then all new self loops will have no weight.

If edge_t is not passed as an argument, a self-loop is added to each node for every edge type in the graph whose source and destination node types are the same. This iterates over all edge types present in the graph, applying the self-loop addition to each applicable edge type.

source
GNNGraphs.getgraphMethod
getgraph(g::GNNGraph, i; nmap=false)

Return the subgraph of g induced by those nodes j for which g.graph_indicator[j] == i or, if i is a collection, g.graph_indicator[j] ∈ i. In other words, it extracts the component graphs from a batched graph.

If nmap=true, return also a vector v mapping the new nodes to the old ones. The node i in the subgraph will correspond to the node v[i] in g.

source
GNNGraphs.negative_sampleMethod
negative_sample(g::GNNGraph; 
                 num_neg_edges = g.num_edges, 
-                bidirected = is_bidirected(g))

Return a graph containing random negative edges (i.e. non-edges) from graph g as edges.

If bidirected=true, the output graph will be bidirected and there will be no leakage from the origin graph.

See also is_bidirected.
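
A minimal sketch (not from the original docs; the sampled non-edges depend on the random generator):

g = rand_graph(10, 30)

neg = negative_sample(g)  # by default, as many negative edges as g.num_edges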

source
GNNGraphs.perturb_edgesMethod
perturb_edges([rng], g::GNNGraph, perturb_ratio)

Return a new graph obtained from g by adding random edges, based on a specified perturb_ratio. The perturb_ratio determines the fraction of new edges to add relative to the current number of edges in the graph. These new edges are added without creating self-loops.

The function returns a new GNNGraph instance that shares some of the underlying data with g but includes the additional edges. The nodes for the new edges are selected randomly, and no edge data (edata) or weights (w) are assigned to these new edges.

Arguments

  • g::GNNGraph: The graph to be perturbed.
  • perturb_ratio: The ratio of the number of new edges to add relative to the current number of edges in the graph. For example, a perturb_ratio of 0.1 means that 10% of the current number of edges will be added as new random edges.
  • rng: An optional random number generator to ensure reproducible results.

Examples

julia> g = GNNGraph((s, t, w))
+                bidirected = is_bidirected(g))

Return a graph containing random negative edges (i.e. non-edges) from graph g as edges.

If bidirected=true, the output graph will be bidirected and there will be no leakage from the origin graph.

See also is_bidirected.

source
GNNGraphs.perturb_edgesMethod
perturb_edges([rng], g::GNNGraph, perturb_ratio)

Return a new graph obtained from g by adding random edges, based on a specified perturb_ratio. The perturb_ratio determines the fraction of new edges to add relative to the current number of edges in the graph. These new edges are added without creating self-loops.

The function returns a new GNNGraph instance that shares some of the underlying data with g but includes the additional edges. The nodes for the new edges are selected randomly, and no edge data (edata) or weights (w) are assigned to these new edges.

Arguments

  • g::GNNGraph: The graph to be perturbed.
  • perturb_ratio: The ratio of the number of new edges to add relative to the current number of edges in the graph. For example, a perturb_ratio of 0.1 means that 10% of the current number of edges will be added as new random edges.
  • rng: An optional random number generator to ensure reproducible results.

Examples

julia> g = GNNGraph((s, t, w))
 GNNGraph:
   num_nodes: 4
   num_edges: 5
@@ -115,7 +115,7 @@
 julia> perturbed_g = perturb_edges(g, 0.2)
 GNNGraph:
   num_nodes: 4
-  num_edges: 6
source
GNNGraphs.ppr_diffusionMethod
ppr_diffusion(g::GNNGraph{<:COO_T}, alpha =0.85f0) -> GNNGraph

Calculates the Personalized PageRank (PPR) diffusion based on the edge weight matrix of a GNNGraph and updates the graph with new edge weights derived from the PPR matrix. Reference paper: The PageRank citation ranking: Bringing order to the web.

The function performs the following steps:

  1. Constructs a modified adjacency matrix A using the graph's edge weights, where A is adjusted by (α - 1) * A + I, with α being the damping factor (alpha_f32) and I the identity matrix.
  2. Normalizes A to ensure each column sums to 1, representing transition probabilities.
  3. Applies the PPR formula α * (I + (α - 1) * A)^-1 to compute the diffusion matrix.
  4. Updates the original edge weights of the graph based on the PPR diffusion matrix, assigning new weights for each edge from the PPR matrix.

Arguments

  • g::GNNGraph: The input graph for which PPR diffusion is to be calculated. It should have edge weights available.
  • alpha_f32::Float32: The damping factor used in PPR calculation, controlling the teleport probability in the random walk. Defaults to 0.85f0.

Returns

  • A new GNNGraph instance with the same structure as g but with updated edge weights according to the PPR diffusion calculation.
source
GNNGraphs.rand_edge_splitMethod
rand_edge_split(g::GNNGraph, frac; bidirected=is_bidirected(g)) -> g1, g2

Randomly partition the edges in g to form two graphs, g1 and g2. Both will have the same number of nodes as g. g1 will contain a fraction frac of the original edges, while g2 will contain the rest.

If bidirected = true makes sure that an edge and its reverse go into the same split. This option is supported only for bidirected graphs with no self-loops and multi-edges.

rand_edge_split is typically used to create train/test splits in link prediction tasks.

source
GNNGraphs.ppr_diffusionMethod
ppr_diffusion(g::GNNGraph{<:COO_T}, alpha =0.85f0) -> GNNGraph

Calculates the Personalized PageRank (PPR) diffusion based on the edge weight matrix of a GNNGraph and updates the graph with new edge weights derived from the PPR matrix. Reference paper: The PageRank citation ranking: Bringing order to the web.

The function performs the following steps:

  1. Constructs a modified adjacency matrix A using the graph's edge weights, where A is adjusted by (α - 1) * A + I, with α being the damping factor (alpha_f32) and I the identity matrix.
  2. Normalizes A to ensure each column sums to 1, representing transition probabilities.
  3. Applies the PPR formula α * (I + (α - 1) * A)^-1 to compute the diffusion matrix.
  4. Updates the original edge weights of the graph based on the PPR diffusion matrix, assigning new weights for each edge from the PPR matrix.

Arguments

  • g::GNNGraph: The input graph for which PPR diffusion is to be calculated. It should have edge weights available.
  • alpha_f32::Float32: The damping factor used in PPR calculation, controlling the teleport probability in the random walk. Defaults to 0.85f0.

Returns

  • A new GNNGraph instance with the same structure as g but with updated edge weights according to the PPR diffusion calculation.
source
GNNGraphs.rand_edge_splitMethod
rand_edge_split(g::GNNGraph, frac; bidirected=is_bidirected(g)) -> g1, g2

Randomly partition the edges in g to form two graphs, g1 and g2. Both will have the same number of nodes as g. g1 will contain a fraction frac of the original edges, while g2 will contain the rest.

If bidirected = true makes sure that an edge and its reverse go into the same split. This option is supported only for bidirected graphs with no self-loops and multi-edges.

rand_edge_split is typically used to create train/test splits in link prediction tasks.

source
GNNGraphs.remove_edgesMethod
remove_edges(g::GNNGraph, edges_to_remove::AbstractVector{<:Integer})
 remove_edges(g::GNNGraph, p=0.5)

Remove specified edges from a GNNGraph, either by specifying edge indices or by randomly removing edges with a given probability.

Arguments

  • g: The input graph from which edges will be removed.
  • edges_to_remove: Vector of edge indices to be removed. This argument is only required for the first method.
  • p: Probability of removing each edge. This argument is only required for the second method and defaults to 0.5.

Returns

A new GNNGraph with the specified edges removed.

Example

julia> using GraphNeuralNetworks
 
 # Construct a GNNGraph
@@ -138,7 +138,7 @@
 julia> g_new
 GNNGraph:
   num_nodes: 3
-  num_edges: 2
source
GNNGraphs.remove_nodesMethod
remove_nodes(g::GNNGraph, p)

Returns a new graph obtained by dropping nodes from g with independent probabilities p.

Examples

julia> g = GNNGraph([1, 1, 2, 2, 3, 4], [1, 2, 3, 1, 3, 1])
+  num_edges: 2
source
GNNGraphs.remove_nodesMethod
remove_nodes(g::GNNGraph, p)

Returns a new graph obtained by dropping nodes from g with independent probabilities p.

Examples

julia> g = GNNGraph([1, 1, 2, 2, 3, 4], [1, 2, 3, 1, 3, 1])
 GNNGraph:
   num_nodes: 4
   num_edges: 6
@@ -146,7 +146,7 @@
 julia> g_new = remove_nodes(g, 0.5)
 GNNGraph:
   num_nodes: 2
-  num_edges: 2
source
GNNGraphs.remove_nodesMethod
remove_nodes(g::GNNGraph, nodes_to_remove::AbstractVector)

Remove specified nodes, and their associated edges, from a GNNGraph. This operation reindexes the remaining nodes to maintain a continuous sequence of node indices, starting from 1. Similarly, edges are reindexed to account for the removal of edges connected to the removed nodes.

Arguments

  • g: The input graph from which nodes (and their edges) will be removed.
  • nodes_to_remove: Vector of node indices to be removed.

Returns

A new GNNGraph with the specified nodes and all edges associated with these nodes removed.

Example

using GraphNeuralNetworks
+  num_edges: 2
source
GNNGraphs.remove_nodesMethod
remove_nodes(g::GNNGraph, nodes_to_remove::AbstractVector)

Remove specified nodes, and their associated edges, from a GNNGraph. This operation reindexes the remaining nodes to maintain a continuous sequence of node indices, starting from 1. Similarly, edges are reindexed to account for the removal of edges connected to the removed nodes.

Arguments

  • g: The input graph from which nodes (and their edges) will be removed.
  • nodes_to_remove: Vector of node indices to be removed.

Returns

A new GNNGraph with the specified nodes and all edges associated with these nodes removed.

Example

using GraphNeuralNetworks
 
 g = GNNGraph([1, 1, 2, 2, 3], [2, 3, 1, 3, 1])
 
@@ -154,7 +154,7 @@
 g_new = remove_nodes(g, [2, 3])
 
 # g_new now does not contain nodes 2 and 3, and any edges that were connected to these nodes.
-println(g_new)
source
GNNGraphs.to_bidirectedMethod
to_bidirected(g)

Adds a reverse edge for each edge in the graph, then calls remove_multi_edges with mean aggregation to simplify the graph.

See also is_bidirected.

Examples

julia> s, t = [1, 2, 3, 3, 4], [2, 3, 4, 4, 4];
 
 julia> w = [1.0, 2.0, 3.0, 4.0, 5.0];
 
@@ -195,7 +195,7 @@
  20.0
  35.0
  35.0
- 50.0
source
GNNGraphs.to_unidirectedMethod
to_unidirected(g::GNNGraph)

Return a graph that, for each multiple edge between two nodes in g, keeps only one edge in a single direction.
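
For illustration (a sketch, not from the original docstring):

s, t = [1, 2, 2, 3], [2, 1, 3, 2]

g = GNNGraph(s, t)       # every edge present in both directions

ug = to_unidirected(g)   # keeps a single direction for each pair of connected nodes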

source
MLUtils.batchMethod
batch(gs::Vector{<:GNNGraph})

Batch together multiple GNNGraphs into a single one containing the total number of original nodes and edges.

Equivalent to SparseArrays.blockdiag. See also MLUtils.unbatch.

Examples

julia> g1 = rand_graph(4, 6, ndata=ones(8, 4))
+ 50.0
source
GNNGraphs.to_unidirectedMethod
to_unidirected(g::GNNGraph)

Return a graph that, for each multiple edge between two nodes in g, keeps only one edge in a single direction.

source
MLUtils.batchMethod
batch(gs::Vector{<:GNNGraph})

Batch together multiple GNNGraphs into a single one containing the total number of original nodes and edges.

Equivalent to SparseArrays.blockdiag. See also MLUtils.unbatch.

Examples

julia> g1 = rand_graph(4, 6, ndata=ones(8, 4))
 GNNGraph:
     num_nodes = 4
     num_edges = 6
@@ -226,7 +226,7 @@
  1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
  1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
  1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
- 1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
source
MLUtils.unbatchMethod
unbatch(g::GNNGraph)

Opposite of the MLUtils.batch operation, returns an array of the individual graphs batched together in g.

See also MLUtils.batch and getgraph.

Examples

julia> gbatched = MLUtils.batch([rand_graph(5, 6), rand_graph(10, 8), rand_graph(4,2)])
+ 1.0  1.0  1.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
source
MLUtils.unbatchMethod
unbatch(g::GNNGraph)

Opposite of the MLUtils.batch operation, returns an array of the individual graphs batched together in g.

See also MLUtils.batch and getgraph.

Examples

julia> gbatched = MLUtils.batch([rand_graph(5, 6), rand_graph(10, 8), rand_graph(4,2)])
 GNNGraph:
     num_nodes = 19
     num_edges = 16
@@ -244,8 +244,8 @@
 
  GNNGraph:
     num_nodes = 4
-    num_edges = 2
source

Utils

GNNGraphs.sort_edge_indexFunction
sort_edge_index(ei::Tuple) -> u', v'
-sort_edge_index(u, v) -> u', v'

Return a sorted version of the tuple of vectors ei = (u, v), applying a common permutation to u and v. The sorting is lexicographic, that is, the pairs (ui, vi) are sorted first according to ui and then according to vi.
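
A small illustrative sketch (not part of the original docstring):

u = [3, 1, 2, 1]
v = [1, 3, 2, 2]

u2, v2 = sort_edge_index(u, v)  # the pairs (u[k], v[k]) are now sorted lexicographically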

source
GNNGraphs.color_refinementFunction
color_refinement(g::GNNGraph, [x0]) -> x, num_colors, niters

The color refinement algorithm for graph coloring. Given a graph g and an initial coloring x0, the algorithm iteratively refines the coloring until a fixed point is reached.

At each iteration the algorithm computes a hash of the coloring and the sorted list of colors of the neighbors of each node. This hash is used to determine if the coloring has changed.

\[x_i' = \mathrm{hashmap}\big((x_i, \mathrm{sort}([x_j \text{ for } j \in N(i)]))\big).\]

This algorithm is related to the 1-Weisfeiler-Lehman algorithm for graph isomorphism testing.

Arguments

  • g::GNNGraph: The graph to color.
  • x0::AbstractVector{<:Integer}: The initial coloring. If not provided, all nodes are colored with 1.

Returns

  • x::AbstractVector{<:Integer}: The final coloring.
  • num_colors::Int: The number of colors used.
  • niters::Int: The number of iterations until convergence.
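
A usage sketch (illustrative; the resulting coloring depends on the random graph):

g = rand_graph(8, 20)

x, num_colors, niters = color_refinement(g)  # starts from the all-ones coloring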
source

Generate

Utils

GNNGraphs.sort_edge_indexFunction
sort_edge_index(ei::Tuple) -> u', v'
+sort_edge_index(u, v) -> u', v'

Return a sorted version of the tuple of vectors ei = (u, v), applying a common permutation to u and v. The sorting is lexicographic, that is, the pairs (ui, vi) are sorted first according to ui and then according to vi.

source
GNNGraphs.color_refinementFunction
color_refinement(g::GNNGraph, [x0]) -> x, num_colors, niters

The color refinement algorithm for graph coloring. Given a graph g and an initial coloring x0, the algorithm iteratively refines the coloring until a fixed point is reached.

At each iteration the algorithm computes a hash of the coloring and the sorted list of colors of the neighbors of each node. This hash is used to determine if the coloring has changed.

\[x_i' = \mathrm{hashmap}\big((x_i, \mathrm{sort}([x_j \text{ for } j \in N(i)]))\big).\]

This algorithm is related to the 1-Weisfeiler-Lehman algorithm for graph isomorphism testing.

Arguments

  • g::GNNGraph: The graph to color.
  • x0::AbstractVector{<:Integer}: The initial coloring. If not provided, all nodes are colored with 1.

Returns

  • x::AbstractVector{<:Integer}: The final coloring.
  • num_colors::Int: The number of colors used.
  • niters::Int: The number of iterations until convergence.
source

Generate

GNNGraphs.knn_graphMethod
knn_graph(points::AbstractMatrix, 
           k::Int; 
           graph_indicator = nothing,
           self_loops = false, 
@@ -266,7 +266,7 @@
     num_nodes = 10
     num_edges = 30
     num_graphs = 2
-
source
GNNGraphs.rand_bipartite_heterographMethod
rand_bipartite_heterograph([rng,] 
                            (n1, n2), (m12, m21); 
                            bidirected = true, 
                            node_t = (:A, :B), 
@@ -300,7 +300,7 @@
 julia> g = rand_bipartite_heterograph((10, 15), (20, 0), node_t=(:user, :item), edge_t=:-, bidirected=false)
 GNNHeteroGraph:
   num_nodes: Dict(:item => 15, :user => 10)
-  num_edges: Dict((:item, :-, :user) => 0, (:user, :-, :item) => 20)
source
GNNGraphs.rand_graphMethod
rand_graph([rng,] n, m; bidirected=true, edge_weight = nothing, kws...)

Generate a random (Erdős-Rényi) GNNGraph with n nodes and m edges.

If bidirected=true the reverse edge of each edge will be present. If bidirected=false instead, m unrelated edges are generated. In any case, the output graph will contain no self-loops or multi-edges.

A vector can be passed as edge_weight. Its length has to be equal to m in the directed case, and m÷2 in the bidirected one.

Pass a random number generator as the first argument to make the generation reproducible.

Additional keyword arguments will be passed to the GNNGraph constructor.

Examples

julia> g = rand_graph(5, 4, bidirected=false)
+  num_edges: Dict((:item, :-, :user) => 0, (:user, :-, :item) => 20)
source
GNNGraphs.rand_graphMethod
rand_graph([rng,] n, m; bidirected=true, edge_weight = nothing, kws...)

Generate a random (Erdős-Rényi) GNNGraph with n nodes and m edges.

If bidirected=true the reverse edge of each edge will be present. If bidirected=false instead, m unrelated edges are generated. In any case, the output graph will contain no self-loops or multi-edges.

A vector can be passed as edge_weight. Its length has to be equal to m in the directed case, and m÷2 in the bidirected one.

Pass a random number generator as the first argument to make the generation reproducible.

Additional keyword arguments will be passed to the GNNGraph constructor.

Examples

julia> g = rand_graph(5, 4, bidirected=false)
 GNNGraph:
     num_nodes = 5
     num_edges = 4
@@ -318,11 +318,11 @@
 
 # Each edge has a reverse
 julia> edge_index(g)
-([1, 3, 3, 4], [3, 4, 1, 3])
source
GNNGraphs.rand_heterographFunction
rand_heterograph([rng,] n, m; bidirected=false, kws...)

Construct a GNNHeteroGraph with random edges and with the number of nodes and edges specified by n and m respectively. n and m can be any iterable of pairs specifying node/edge types and their numbers.

Pass a random number generator as a first argument to make the generation reproducible.

Setting bidirected=true will generate a bidirected graph, i.e. each edge will have a reverse edge. Therefore, for each edge type (:A, :rel, :B) a corresponding reverse edge type (:B, :rel, :A) will be generated.

Additional keyword arguments will be passed to the GNNHeteroGraph constructor.

Examples

julia> g = rand_heterograph((:user => 10, :movie => 20),
+([1, 3, 3, 4], [3, 4, 1, 3])
source
GNNGraphs.rand_heterographFunction
rand_heterograph([rng,] n, m; bidirected=false, kws...)

Construct a GNNHeteroGraph with random edges and with the number of nodes and edges specified by n and m respectively. n and m can be any iterable of pairs specifying node/edge types and their numbers.

Pass a random number generator as a first argument to make the generation reproducible.

Setting bidirected=true will generate a bidirected graph, i.e. each edge will have a reverse edge. Therefore, for each edge type (:A, :rel, :B) a corresponding reverse edge type (:B, :rel, :A) will be generated.

Additional keyword arguments will be passed to the GNNHeteroGraph constructor.

Examples

julia> g = rand_heterograph((:user => 10, :movie => 20),
                             (:user, :rate, :movie) => 30)
 GNNHeteroGraph:
   num_nodes: (:user => 10, :movie => 20)         
-  num_edges: ((:user, :rate, :movie) => 30,)
source

Operators

Base.intersectFunction

" intersect(g1::GNNGraph, g2::GNNGraph)

Intersect two graphs by keeping only the common edges.
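
For illustration, a hypothetical example (not from the original docs):

g1 = GNNGraph([1, 2, 3], [2, 3, 1])
g2 = GNNGraph([1, 3, 2], [2, 1, 2])

g12 = intersect(g1, g2)  # keeps only the edges appearing in both graphs, here 1→2 and 3→1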

source

Sampling

GNNGraphs.sample_neighborsFunction
sample_neighbors(g, nodes, K=-1; dir=:in, replace=false, dropnodes=false)

Sample neighboring edges of the given nodes and return the induced subgraph. For each node, a number of inbound (or outbound when dir = :out) edges will be randomly chosen. If dropnodes=false, the graph returned will then contain all the nodes in the original graph, but only the sampled edges.

The returned graph will contain an edge feature EID corresponding to the id of the edge in the original graph. If dropnodes=true, it will also contain a node feature NID with the node ids in the original graph.

Arguments

  • g. The graph.
  • nodes. A list of node IDs to sample neighbors from.
  • K. The maximum number of edges to be sampled for each node. If -1, all the neighboring edges will be selected.
  • dir. Determines whether to sample inbound (:in) or outbound (:out) edges. Default :in.
  • replace. If true, sample with replacement.
  • dropnodes. If true, the resulting subgraph will contain only the nodes involved in the sampled edges.

Examples

julia> g = rand_graph(20, 100)
+  num_edges: ((:user, :rate, :movie) => 30,)
source

Operators

Base.intersectFunction

" intersect(g1::GNNGraph, g2::GNNGraph)

Intersect two graphs by keeping only the common edges.

source

Sampling

GNNGraphs.sample_neighborsFunction
sample_neighbors(g, nodes, K=-1; dir=:in, replace=false, dropnodes=false)

Sample neighboring edges of the given nodes and return the induced subgraph. For each node, a number of inbound (or outbound when dir = :out) edges will be randomly chosen. If dropnodes=false, the graph returned will then contain all the nodes in the original graph, but only the sampled edges.

The returned graph will contain an edge feature EID corresponding to the id of the edge in the original graph. If dropnodes=true, it will also contain a node feature NID with the node ids in the original graph.

Arguments

  • g. The graph.
  • nodes. A list of node IDs to sample neighbors from.
  • K. The maximum number of edges to be sampled for each node. If -1, all the neighboring edges will be selected.
  • dir. Determines whether to sample inbound (:in) or outbound (:out) edges. Default :in.
  • replace. If true, sample with replacement.
  • dropnodes. If true, the resulting subgraph will contain only the nodes involved in the sampled edges.

Examples

julia> g = rand_graph(20, 100)
 GNNGraph:
     num_nodes = 20
     num_edges = 100
@@ -361,4 +361,4 @@
     num_nodes = 20
     num_edges = 10
     edata:
-        EID => (10,)
source
+ EID => (10,)
source
diff --git a/dev/api/heterograph/index.html b/dev/api/heterograph/index.html index 5101a615..7afeefa4 100644 --- a/dev/api/heterograph/index.html +++ b/dev/api/heterograph/index.html @@ -40,7 +40,7 @@ julia> hg.ndata[:A].x 2×10 Matrix{Float64}: 0.825882 0.0797502 0.245813 0.142281 0.231253 0.685025 0.821457 0.888838 0.571347 0.53165 - 0.631286 0.316292 0.705325 0.239211 0.533007 0.249233 0.473736 0.595475 0.0623298 0.159307

See also GNNGraph for a homogeneous graph type and rand_heterograph for a function to generate random heterographs.

source
GNNGraphs.edge_type_subgraphMethod
edge_type_subgraph(g::GNNHeteroGraph, edge_ts)

Return a subgraph of g that contains only the edges of type edge_ts. Edge types can be specified as a single edge type (i.e. a tuple containing 3 symbols) or a vector of edge types.
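
A minimal sketch (assuming the default :A/:B node types and :to relation of rand_bipartite_heterograph):

g = rand_bipartite_heterograph((2, 2), (4, 0), bidirected = false)

sg = edge_type_subgraph(g, (:A, :to, :B))  # keep only the (:A, :to, :B) relation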

source
GNNGraphs.num_edge_typesMethod
num_edge_types(g)

Return the number of edge types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique edge types.

source
GNNGraphs.num_node_typesMethod
num_node_types(g)

Return the number of node types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique node types.
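
For illustration (a sketch, not from the original docs):

g = rand_bipartite_heterograph((2, 2), (4, 0), bidirected = false)

num_node_types(g)  # 2, the node types :A and :B
num_edge_types(g)  # the number of distinct relations stored in g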

source
Graphs.has_edgeMethod
has_edge(g::GNNHeteroGraph, edge_t, i, j)

Return true if there is an edge of type edge_t from node i to node j in g.

Examples

julia> g = rand_bipartite_heterograph((2, 2), (4, 0), bidirected=false)
+    0.631286  0.316292   0.705325  0.239211  0.533007  0.249233  0.473736  0.595475  0.0623298  0.159307

See also GNNGraph for a homogeneous graph type and rand_heterograph for a function to generate random heterographs.

source
GNNGraphs.edge_type_subgraphMethod
edge_type_subgraph(g::GNNHeteroGraph, edge_ts)

Return a subgraph of g that contains only the edges of type edge_ts. Edge types can be specified as a single edge type (i.e. a tuple containing 3 symbols) or a vector of edge types.

source
GNNGraphs.num_edge_typesMethod
num_edge_types(g)

Return the number of edge types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique edge types.

source
GNNGraphs.num_node_typesMethod
num_node_types(g)

Return the number of node types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique node types.

source
Graphs.has_edgeMethod
has_edge(g::GNNHeteroGraph, edge_t, i, j)

Return true if there is an edge of type edge_t from node i to node j in g.

Examples

julia> g = rand_bipartite_heterograph((2, 2), (4, 0), bidirected=false)
 GNNHeteroGraph:
   num_nodes: (:A => 2, :B => 2)
   num_edges: ((:A, :to, :B) => 4, (:B, :to, :A) => 0)
@@ -49,7 +49,7 @@
 true
 
 julia> has_edge(g, (:B,:to,:A), 1, 1)
-false
source

Heterogeneous Graph Convolutions

Heterogeneous graph convolutions are implemented in the type HeteroGraphConv. HeteroGraphConv relies on standard graph convolutional layers to perform message passing on the different relations. See the table at this page for the supported layers.

GraphNeuralNetworks.HeteroGraphConvType
HeteroGraphConv(itr; aggr = +)
+false
source

Heterogeneous Graph Convolutions

Heterogeneous graph convolutions are implemented in the type HeteroGraphConv. HeteroGraphConv relies on standard graph convolutional layers to perform message passing on the different relations. See the table at this page for the supported layers.

GraphNeuralNetworks.HeteroGraphConvType
HeteroGraphConv(itr; aggr = +)
 HeteroGraphConv(pairs...; aggr = +)

A convolutional layer for heterogeneous graphs.

The itr argument is an iterator of pairs of the form edge_t => layer, where edge_t is a 3-tuple of the form (src_node_type, edge_type, dst_node_type), and layer is a convolutional layer for homogeneous graphs.

Each convolution is applied to the corresponding relation. Since a node type can be involved in multiple relations, the single convolution outputs have to be aggregated using the aggr function. The default is to sum the outputs.

Forward Arguments

  • g::GNNHeteroGraph: The input graph.
  • x::Union{NamedTuple,Dict}: The input node features. The keys are node types and the values are node feature tensors.

Examples

julia> g = rand_bipartite_heterograph((10, 15), 20)
 GNNHeteroGraph:
   num_nodes: Dict(:A => 10, :B => 15)
@@ -63,4 +63,4 @@
 julia> y = layer(g, x); # output is a named tuple
 
 julia> size(y.A) == (32, 10) && size(y.B) == (32, 15)
-true
source
+truesource diff --git a/dev/api/messagepassing/index.html b/dev/api/messagepassing/index.html index 3cfcf951..43c0d0e1 100644 --- a/dev/api/messagepassing/index.html +++ b/dev/api/messagepassing/index.html @@ -1,6 +1,6 @@ Message Passing · GraphNeuralNetworks.jl

Message Passing

Index

Interface

GNNlib.apply_edgesFunction
apply_edges(fmsg, g; [xi, xj, e])
-apply_edges(fmsg, g, xi, xj, e=nothing)

Returns the message from node j to node i applying the message function fmsg on the edges in graph g. In the message-passing scheme, the incoming messages from the neighborhood of i will later be aggregated in order to update the features of node i (see aggregate_neighbors).

The function fmsg operates on batches of edges, therefore xi, xj, and e are tensors whose last dimension is the batch size, or can be named tuples of such tensors.

Arguments

  • g: An AbstractGNNGraph.
  • xi: An array or a named tuple containing arrays whose last dimension's size is g.num_nodes. It will be appropriately materialized on the target node of each edge (see also edge_index).
  • xj: As xi, but now to be materialized on each edge's source node.
  • e: An array or a named tuple containing arrays whose last dimension's size is g.num_edges.
  • fmsg: A function that takes as inputs the edge-materialized xi, xj, and e. These are arrays (or named tuples of arrays) whose last dimension's size is the size of a batch of edges. The output of fmsg has to be an array (or a named tuple of arrays) with the same batch size. If also layer is passed to propagate, the signature of fmsg has to be fmsg(layer, xi, xj, e) instead of fmsg(xi, xj, e).

See also propagate and aggregate_neighbors.
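
A minimal sketch of the keyword form (illustrative; the message function here is an arbitrary example):

g = rand_graph(5, 8)
x = rand(Float32, 3, g.num_nodes)

# one message per edge: difference between source and target node features
m = apply_edges((xi, xj, e) -> xj .- xi, g, xi = x, xj = x)

size(m)  # (3, g.num_edges)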

source
GNNlib.aggregate_neighborsFunction
aggregate_neighbors(g, aggr, m)

Given a graph g, edge features m, and an aggregation operator aggr (e.g +, min, max, mean), returns the new node features

\[\mathbf{x}_i = \square_{j \in \mathcal{N}(i)} \mathbf{m}_{j\to i}\]

Neighborhood aggregation is the second step of propagate, where it comes after apply_edges.

source
GNNlib.propagateFunction
propagate(fmsg, g, aggr; [xi, xj, e])
+apply_edges(fmsg, g, xi, xj, e=nothing)

Returns the message from node j to node i applying the message function fmsg on the edges in graph g. In the message-passing scheme, the incoming messages from the neighborhood of i will later be aggregated in order to update the features of node i (see aggregate_neighbors).

The function fmsg operates on batches of edges, therefore xi, xj, and e are tensors whose last dimension is the batch size, or can be named tuples of such tensors.

Arguments

  • g: An AbstractGNNGraph.
  • xi: An array or a named tuple containing arrays whose last dimension's size is g.num_nodes. It will be appropriately materialized on the target node of each edge (see also edge_index).
  • xj: As xi, but now to be materialized on each edge's source node.
  • e: An array or a named tuple containing arrays whose last dimension's size is g.num_edges.
  • fmsg: A function that takes as inputs the edge-materialized xi, xj, and e. These are arrays (or named tuples of arrays) whose last dimension's size is the size of a batch of edges. The output of fmsg has to be an array (or a named tuple of arrays) with the same batch size. If also layer is passed to propagate, the signature of fmsg has to be fmsg(layer, xi, xj, e) instead of fmsg(xi, xj, e).

See also propagate and aggregate_neighbors.

source
GNNlib.aggregate_neighborsFunction
aggregate_neighbors(g, aggr, m)

Given a graph g, edge features m, and an aggregation operator aggr (e.g +, min, max, mean), returns the new node features

\[\mathbf{x}_i = \square_{j \in \mathcal{N}(i)} \mathbf{m}_{j\to i}\]

Neighborhood aggregation is the second step of propagate, where it comes after apply_edges.

source
GNNlib.propagateFunction
propagate(fmsg, g, aggr; [xi, xj, e])
 propagate(fmsg, g, aggr, xi, xj, e=nothing)

Performs message passing on graph g. Takes care of materializing the node features on each edge, applying the message function fmsg, and returning an aggregated message $\bar{\mathbf{m}}$ (depending on the return value of fmsg, an array or a named tuple of arrays with last dimension's size g.num_nodes).

It can be decomposed in two steps:

m = apply_edges(fmsg, g, xi, xj, e)
 m̄ = aggregate_neighbors(g, aggr, m)

GNN layers typically call propagate in their forward pass, providing as input f a closure.

Arguments

  • g: A GNNGraph.
  • xi: An array or a named tuple containing arrays whose last dimension's size is g.num_nodes. It will be appropriately materialized on the target node of each edge (see also edge_index).
  • xj: As xi, but to be materialized on edges' sources.
  • e: An array or a named tuple containing arrays whose last dimension's size is g.num_edges.
  • fmsg: A generic function that will be passed over to apply_edges. Has to take as inputs the edge-materialized xi, xj, and e (arrays or named tuples of arrays whose last dimension's size is the size of a batch of edges). Its output has to be an array or a named tuple of arrays with the same batch size. If also layer is passed to propagate, the signature of fmsg has to be fmsg(layer, xi, xj, e) instead of fmsg(xi, xj, e).
  • aggr: Neighborhood aggregation operator. Use +, mean, max, or min.

Examples

using GraphNeuralNetworks, Flux
 
@@ -26,4 +26,4 @@
 end
 
 l = GNNConv(10 => 20)
-l(g, x)

See also apply_edges and aggregate_neighbors.

source

Built-in message functions

GNNlib.e_mul_xjFunction
e_mul_xj(xi, xj, e) = reshape(e, (...)) .* xj

Reshape e into broadcast compatible shape with xj (by prepending singleton dimensions) then perform broadcasted multiplication.

source
GNNlib.w_mul_xjFunction
w_mul_xj(xi, xj, w) = reshape(w, (...)) .* xj

Similar to e_mul_xj but specialized on scalar edge features (weights).
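
For illustration, a sketch of how these message functions are typically combined with propagate (not from the original docs):

g = rand_graph(5, 8)
x = rand(Float32, 3, g.num_nodes)
w = rand(Float32, g.num_edges)                  # one scalar weight per edge

y = propagate(e_mul_xj, g, +, xj = x, e = w)    # weighted sum of the neighboring features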

source
+l(g, x)

See also apply_edges and aggregate_neighbors.

source

Built-in message functions

GNNlib.copy_xiFunction
copy_xi(xi, xj, e) = xi
source
GNNlib.copy_xjFunction
copy_xj(xi, xj, e) = xj
source
GNNlib.xi_dot_xjFunction
xi_dot_xj(xi, xj, e) = sum(xi .* xj, dims=1)
source
GNNlib.xi_sub_xjFunction
xi_sub_xj(xi, xj, e) = xi .- xj
source
GNNlib.xj_sub_xiFunction
xj_sub_xi(xi, xj, e) = xj .- xi
source
GNNlib.e_mul_xjFunction
e_mul_xj(xi, xj, e) = reshape(e, (...)) .* xj

Reshape e into broadcast compatible shape with xj (by prepending singleton dimensions) then perform broadcasted multiplication.

source
GNNlib.w_mul_xjFunction
w_mul_xj(xi, xj, w) = reshape(w, (...)) .* xj

Similar to e_mul_xj but specialized on scalar edge features (weights).

source
diff --git a/dev/api/pool/index.html b/dev/api/pool/index.html index c6774bd5..d9bd937f 100644 --- a/dev/api/pool/index.html +++ b/dev/api/pool/index.html @@ -13,7 +13,7 @@ u = pool(g, g.ndata.x) -@assert size(u) == (chout, g.num_graphs)source
GraphNeuralNetworks.GlobalPoolType
GlobalPool(aggr)

Global pooling layer for graph neural networks. Takes a graph and node features as inputs and performs the operation

\[\mathbf{u}_V = \square_{i \in V} \mathbf{x}_i\]

where $V$ is the set of nodes of the input graph and the type of aggregation represented by $\square$ is selected by the aggr argument. Commonly used aggregations are mean, max, and +.

See also reduce_nodes.

Examples

using Flux, GraphNeuralNetworks, Graphs
+@assert size(u) == (chout, g.num_graphs)
source
GraphNeuralNetworks.GlobalPoolType
GlobalPool(aggr)

Global pooling layer for graph neural networks. Takes a graph and node features as inputs and performs the operation

\[\mathbf{u}_V = \square_{i \in V} \mathbf{x}_i\]

where $V$ is the set of nodes of the input graph and the type of aggregation represented by $\square$ is selected by the aggr argument. Commonly used aggregations are mean, max, and +.

See also reduce_nodes.

Examples

using Flux, GraphNeuralNetworks, Graphs
 
 pool = GlobalPool(mean)
 
@@ -24,7 +24,7 @@
 
 g = Flux.batch([GNNGraph(erdos_renyi(10, 4)) for _ in 1:5])
 X = rand(32, 50)
-pool(g, X) # => 32x5 matrix
source
GraphNeuralNetworks.Set2SetType
Set2Set(n_in, n_iters, n_layers = 1)

Set2Set layer from the paper Order Matters: Sequence to sequence for sets.

For each graph in the batch, the layer computes an output vector of size 2*n_in by iterating the following steps n_iters times:

\[\mathbf{q} = \mathrm{LSTM}(\mathbf{q}_{t-1}^*) +pool(g, X) # => 32x5 matrix

source
GraphNeuralNetworks.Set2SetType
Set2Set(n_in, n_iters, n_layers = 1)

Set2Set layer from the paper Order Matters: Sequence to sequence for sets.

For each graph in the batch, the layer computes an output vector of size 2*n_in by iterating the following steps n_iters times:

\[\mathbf{q} = \mathrm{LSTM}(\mathbf{q}_{t-1}^*) \alpha_{i} = \frac{\exp(\mathbf{q}^T \mathbf{x}_i)}{\sum_{j=1}^N \exp(\mathbf{q}^T \mathbf{x}_j)} \mathbf{r} = \sum_{i=1}^N \alpha_{i} \mathbf{x}_i -\mathbf{q}^*_t = [\mathbf{q}; \mathbf{r}]\]

where N is the number of nodes in the graph, LSTM is a Long-Short-Term-Memory network with n_layers layers, input size 2*n_in and output size n_in.

Given a batch of graphs g and node features x, the layer returns a matrix of size (2*n_in, n_graphs).

source
GraphNeuralNetworks.TopKPoolType
TopKPool(adj, k, in_channel)

Top-k pooling layer.

Arguments

  • adj: Adjacency matrix of a graph.
  • k: Top-k nodes are selected to pool together.
  • in_channel: The dimension of input channel.
source
+\mathbf{q}^*_t = [\mathbf{q}; \mathbf{r}]\]

where N is the number of nodes in the graph, LSTM is a Long-Short-Term-Memory network with n_layers layers, input size 2*n_in and output size n_in.

Given a batch of graphs g and node features x, the layer returns a matrix of size (2*n_in, n_graphs).

source
GraphNeuralNetworks.TopKPoolType
TopKPool(adj, k, in_channel)

Top-k pooling layer.

Arguments

  • adj: Adjacency matrix of a graph.
  • k: Top-k nodes are selected to pool together.
  • in_channel: The dimension of input channel.
source
diff --git a/dev/api/temporalconv/index.html b/dev/api/temporalconv/index.html index b4662b97..56a29a1f 100644 --- a/dev/api/temporalconv/index.html +++ b/dev/api/temporalconv/index.html @@ -14,7 +14,7 @@ julia> y = a3tgcn(rand_graph(5, 10), rand(Float32, 2, 5, 20)); julia> size(y) -(6, 5)
Batch size changes

Failing to call reset! when the input batch size changes can lead to unexpected behavior.

source
GraphNeuralNetworks.EvolveGCNOType
EvolveGCNO(ch; bias = true, init = glorot_uniform, init_state = Flux.zeros32)

Evolving Graph Convolutional Network (EvolveGCNO) layer from the paper EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs.

Performs a Graph Convolutional layer with parameters derived from a Long Short-Term Memory (LSTM) layer across the snapshots of the temporal graph.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial hidden state of the LSTM layer. Default zeros32.

Examples

julia> tg = TemporalSnapshotsGNNGraph([rand_graph(10,20; ndata = rand(4,10)), rand_graph(10,14; ndata = rand(4,10)), rand_graph(10,22; ndata = rand(4,10))])
+(6, 5)
Batch size changes

Failing to call reset! when the input batch size changes can lead to unexpected behavior.

source
GraphNeuralNetworks.EvolveGCNOType
EvolveGCNO(ch; bias = true, init = glorot_uniform, init_state = Flux.zeros32)

Evolving Graph Convolutional Network (EvolveGCNO) layer from the paper EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs.

Performs a Graph Convolutional layer with parameters derived from a Long Short-Term Memory (LSTM) layer across the snapshots of the temporal graph.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial hidden state of the LSTM layer. Default zeros32.

Examples

julia> tg = TemporalSnapshotsGNNGraph([rand_graph(10,20; ndata = rand(4,10)), rand_graph(10,14; ndata = rand(4,10)), rand_graph(10,22; ndata = rand(4,10))])
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10, 10]
   num_edges: [20, 14, 22]
@@ -27,7 +27,7 @@
 (3,)
 
 julia> size(ev(tg, tg.ndata.x)[1])
-(5, 10)
source
GraphNeuralNetworks.DCGRUMethod
DCGRU(in => out, k, n; [bias, init, init_state])

Diffusion Convolutional Recurrent Neural Network (DCGRU) layer from the paper Diffusion Convolutional Recurrent Neural Network: Data-driven Traffic Forecasting.

Performs a Diffusion Convolutional layer to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Diffusion step.
  • n: Number of nodes in the graph.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial hidden state of the LSTM layer. Default zeros32.

Examples

julia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);
+(5, 10)
source
GraphNeuralNetworks.DCGRUMethod
DCGRU(in => out, k, n; [bias, init, init_state])

Diffusion Convolutional Recurrent Neural Network (DCGRU) layer from the paper Diffusion Convolutional Recurrent Neural Network: Data-driven Traffic Forecasting.

Performs a Diffusion Convolutional layer to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Diffusion step.
  • n: Number of nodes in the graph.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial hidden state of the LSTM layer. Default zeros32.

Examples

julia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);
 
 julia> dcgru = DCGRU(2 => 5, 2, g1.num_nodes);
 
@@ -41,7 +41,7 @@
 julia> z = dcgru(g2, x2);
 
 julia> size(z)
-(5, 5, 30)
source
GraphNeuralNetworks.GConvGRUMethod
GConvGRU(in => out, k, n; [bias, init, init_state])

Graph Convolutional Gated Recurrent Unit (GConvGRU) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.

Performs a layer of ChebConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Chebyshev polynomial order.
  • n: Number of nodes in the graph.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial hidden state of the GRU layer. Default zeros32.

Examples

julia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);
+(5, 5, 30)
source
GraphNeuralNetworks.GConvGRUMethod
GConvGRU(in => out, k, n; [bias, init, init_state])

Graph Convolutional Gated Recurrent Unit (GConvGRU) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.

Performs a layer of ChebConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Chebyshev polynomial order.
  • n: Number of nodes in the graph.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial hidden state of the GRU layer. Default zeros32.

Examples

julia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);
 
 julia> ggru = GConvGRU(2 => 5, 2, g1.num_nodes);
 
@@ -55,7 +55,7 @@
 julia> z = ggru(g2, x2);
 
 julia> size(z)
-(5, 5, 30)
source
GraphNeuralNetworks.GConvLSTMMethod
GConvLSTM(in => out, k, n; [bias, init, init_state])

Graph Convolutional Long Short-Term Memory (GConvLSTM) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.

Performs a layer of ChebConv to model spatial dependencies, followed by a Long Short-Term Memory (LSTM) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Chebyshev polynomial order.
  • n: Number of nodes in the graph.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial hidden state of the LSTM layer. Default zeros32.

Examples

julia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);
+(5, 5, 30)
source
GraphNeuralNetworks.GConvLSTMMethod
GConvLSTM(in => out, k, n; [bias, init, init_state])

Graph Convolutional Long Short-Term Memory (GConvLSTM) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.

Performs a layer of ChebConv to model spatial dependencies, followed by a Long Short-Term Memory (LSTM) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • k: Chebyshev polynomial order.
  • n: Number of nodes in the graph.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial hidden state of the LSTM layer. Default zeros32.

Examples

julia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);
 
 julia> gclstm = GConvLSTM(2 => 5, 2, g1.num_nodes);
 
@@ -69,7 +69,7 @@
 julia> z = gclstm(g2, x2);
 
 julia> size(z)
-(5, 5, 30)
source
GraphNeuralNetworks.TGCNMethod
TGCN(in => out; [bias, init, init_state, add_self_loops, use_edge_weight])

Temporal Graph Convolutional Network (T-GCN) recurrent layer from the paper T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction.

Performs a layer of GCNConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial hidden state of the GRU layer. Default zeros32.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.

Examples

julia> tgcn = TGCN(2 => 6)
+(5, 5, 30)
source
GraphNeuralNetworks.TGCNMethod
TGCN(in => out; [bias, init, init_state, add_self_loops, use_edge_weight])

Temporal Graph Convolutional Network (T-GCN) recurrent layer from the paper T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction.

Performs a layer of GCNConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.

Arguments

  • in: Number of input features.
  • out: Number of output features.
  • bias: Add learnable bias. Default true.
  • init: Weights' initializer. Default glorot_uniform.
  • init_state: Initial hidden state of the GRU layer. Default zeros32.
  • add_self_loops: Add self loops to the graph before performing the convolution. Default false.
  • use_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.

Examples

julia> tgcn = TGCN(2 => 6)
 Recur(
   TGCNCell(
     GCNConv(2 => 6, σ),                 # 18 parameters
@@ -91,4 +91,4 @@
 julia> Flux.reset!(tgcn);
 
 julia> tgcn(rand_graph(5, 10), rand(Float32, 2, 5, 20)) |> size # batch size of 20
-(6, 5, 20)
Batch size changes

Failing to call reset! when the input batch size changes can lead to unexpected behavior.

source
+(6, 5, 20)
Batch size changes

Failing to call reset! when the input batch size changes can lead to unexpected behavior.

source diff --git a/dev/api/temporalgraph/index.html b/dev/api/temporalgraph/index.html index a3dd3003..845004a5 100644 --- a/dev/api/temporalgraph/index.html +++ b/dev/api/temporalgraph/index.html @@ -17,7 +17,7 @@ num_edges: [20, 20, 20, 20, 20] num_snapshots: 5 tgdata: - x = 4-element Vector{Float64}source
GNNGraphs.add_snapshotMethod
add_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int, g::GNNGraph)

Return a TemporalSnapshotsGNNGraph created starting from tg by adding the snapshot g at time index t.

Examples

julia> using GraphNeuralNetworks
+        x = 4-element Vector{Float64}
source
GNNGraphs.add_snapshotMethod
add_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int, g::GNNGraph)

Return a TemporalSnapshotsGNNGraph created starting from tg by adding the snapshot g at time index t.

Examples

julia> using GraphNeuralNetworks
 
 julia> snapshots = [rand_graph(10, 20) for i in 1:5];
 
@@ -31,7 +31,7 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10, 10, 10, 10, 10]
   num_edges: [20, 20, 16, 20, 20, 20]
-  num_snapshots: 6
source
GNNGraphs.remove_snapshotMethod
remove_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int)

Return a TemporalSnapshotsGNNGraph created starting from tg by removing the snapshot at time index t.

Examples

julia> using GraphNeuralNetworks
+  num_snapshots: 6
source
GNNGraphs.remove_snapshotMethod
remove_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int)

Return a TemporalSnapshotsGNNGraph created starting from tg by removing the snapshot at time index t.

Examples

julia> using GraphNeuralNetworks
 
 julia> snapshots = [rand_graph(10,20), rand_graph(10,14), rand_graph(10,22)];
 
@@ -45,7 +45,7 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10]
   num_edges: [20, 22]
-  num_snapshots: 2
source

TemporalSnapshotsGNNGraph random generators

GNNGraphs.rand_temporal_radius_graphFunction
rand_temporal_radius_graph(number_nodes::Int, 
+  num_snapshots: 2
source

TemporalSnapshotsGNNGraph random generators

GNNGraphs.rand_temporal_radius_graphFunction
rand_temporal_radius_graph(number_nodes::Int, 
                            number_snapshots::Int,
                            speed::AbstractFloat,
                            r::AbstractFloat;
@@ -57,7 +57,7 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10, 10, 10, 10]
   num_edges: [90, 90, 90, 90, 90]
-  num_snapshots: 5
source
GNNGraphs.rand_temporal_hyperbolic_graphFunction
rand_temporal_hyperbolic_graph(number_nodes::Int, 
+  num_snapshots: 5
source
GNNGraphs.rand_temporal_hyperbolic_graphFunction
rand_temporal_hyperbolic_graph(number_nodes::Int, 
                                number_snapshots::Int;
                                α::Real,
                                R::Real,
@@ -70,4 +70,4 @@
 TemporalSnapshotsGNNGraph:
   num_nodes: [10, 10, 10, 10, 10]
   num_edges: [44, 46, 48, 42, 38]
-  num_snapshots: 5

References

Section D of the paper Dynamic Hidden-Variable Network Models and the paper Hyperbolic Geometry of Complex Networks

source
+ num_snapshots: 5

References

Section D of the paper Dynamic Hidden-Variable Network Models and the paper Hyperbolic Geometry of Complex Networks

source diff --git a/dev/api/utils/index.html b/dev/api/utils/index.html index 2c7f1013..b912e910 100644 --- a/dev/api/utils/index.html +++ b/dev/api/utils/index.html @@ -1,3 +1,3 @@ -Utils · GraphNeuralNetworks.jl

Utility Functions

Index

Docs

Graph-wise operations

GNNlib.reduce_nodesFunction
reduce_nodes(aggr, g, x)

For a batched graph g, return the graph-wise aggregation of the node features x. The aggregation operator aggr can be +, mean, max, or min. The returned array will have last dimension g.num_graphs.

See also: reduce_edges.
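
A minimal sketch (illustrative only):

g = MLUtils.batch([rand_graph(4, 6), rand_graph(5, 8)])
x = rand(Float32, 3, g.num_nodes)

u = reduce_nodes(+, g, x)  # size (3, g.num_graphs)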

source
reduce_nodes(aggr, indicator::AbstractVector, x)

Return the graph-wise aggregation of the node features x given the graph indicator indicator. The aggregation operator aggr can be +, mean, max, or min.

See also graph_indicator.

source
GNNlib.reduce_edgesFunction
reduce_edges(aggr, g, e)

For a batched graph g, return the graph-wise aggregation of the edge features e. The aggregation operator aggr can be +, mean, max, or min. The returned array will have last dimension g.num_graphs.

source
GNNlib.broadcast_nodesFunction
broadcast_nodes(g, x)

Graph-wise broadcast array x of size (*, g.num_graphs) to size (*, g.num_nodes).
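
For illustration (a sketch, not from the original docs):

g = MLUtils.batch([rand_graph(4, 6), rand_graph(5, 8)])
u = rand(Float32, 3, g.num_graphs)  # one column per graph

x = broadcast_nodes(g, u)           # size (3, g.num_nodes), graph-wise repetition of u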

source
GNNlib.broadcast_edgesFunction
broadcast_edges(g, x)

Graph-wise broadcast array x of size (*, g.num_graphs) to size (*, g.num_edges).

source

Neighborhood operations

GNNlib.softmax_edge_neighborsFunction
softmax_edge_neighbors(g, e)

Softmax over each node's neighborhood of the edge features e.

\[\mathbf{e}'_{j\to i} = \frac{e^{\mathbf{e}_{j\to i}}} - {\sum_{j'\in N(i)} e^{\mathbf{e}_{j'\to i}}}.\]

source

NNlib

Primitive functions implemented in NNlib.jl:

+Utils · GraphNeuralNetworks.jl

Utility Functions

Index

Docs

Graph-wise operations

GNNlib.reduce_nodesFunction
reduce_nodes(aggr, g, x)

For a batched graph g, return the graph-wise aggregation of the node features x. The aggregation operator aggr can be +, mean, max, or min. The returned array will have last dimension g.num_graphs.

See also: reduce_edges.

source
reduce_nodes(aggr, indicator::AbstractVector, x)

Return the graph-wise aggregation of the node features x given the graph indicator indicator. The aggregation operator aggr can be +, mean, max, or min.

See also graph_indicator.

source
GNNlib.reduce_edgesFunction
reduce_edges(aggr, g, e)

For a batched graph g, return the graph-wise aggregation of the edge features e. The aggregation operator aggr can be +, mean, max, or min. The returned array will have last dimension g.num_graphs.

source
GNNlib.broadcast_nodesFunction
broadcast_nodes(g, x)

Graph-wise broadcast array x of size (*, g.num_graphs) to size (*, g.num_nodes).

source
GNNlib.broadcast_edgesFunction
broadcast_edges(g, x)

Graph-wise broadcast array x of size (*, g.num_graphs) to size (*, g.num_edges).

source

Neighborhood operations

GNNlib.softmax_edge_neighborsFunction
softmax_edge_neighbors(g, e)

Softmax over each node's neighborhood of the edge features e.

\[\mathbf{e}'_{j\to i} = \frac{e^{\mathbf{e}_{j\to i}}} + {\sum_{j'\in N(i)} e^{\mathbf{e}_{j'\to i}}}.\]

source

NNlib

Primitive functions implemented in NNlib.jl:

diff --git a/dev/datasets/index.html b/dev/datasets/index.html index 4e404dfe..a8007ac0 100644 --- a/dev/datasets/index.html +++ b/dev/datasets/index.html @@ -10,4 +10,4 @@ targets => 2708-element Vector{Int64} train_mask => 2708-element BitVector val_mask => 2708-element BitVector - test_mask => 2708-element BitVectorsource + test_mask => 2708-element BitVectorsource diff --git a/dev/dev/index.html b/dev/dev/index.html index be5dd3c5..6f258a6a 100644 --- a/dev/dev/index.html +++ b/dev/dev/index.html @@ -24,4 +24,4 @@ julia> @load "perf_pr_20210803_mymachine.jld2" julia> compare(dfpr, dfmaster)

Caching tutorials

Tutorials in GraphNeuralNetworks.jl are written in Pluto and rendered using DemoCards.jl and PlutoStaticHTML.jl. Rendering a Pluto notebook is time and resource-consuming, especially in a CI environment. So we use the caching functionality provided by PlutoStaticHTML.jl to reduce CI time.

If you are contributing a new tutorial or making changes to an existing notebook, generate the docs locally before committing/pushing. For caching to work, the cache environment (your local machine) and the Documenter CI must use the same Julia version (e.g. "v1.9.1"; the patch number must match as well). So use the Documenter CI Julia version when generating docs locally.

julia --version # check julia version before generating docs
julia --project=docs docs/make.jl

Note: Use juliaup for easy switching of Julia versions.

During the doc generation process, DemoCards.jl stores the cached notebooks in docs/pluto_output, so include any changes made in this folder in your git commit. Remember that every file in this folder is machine-generated and should not be edited manually.

git add docs/pluto_output # add generated cache

Check the documenter CI logs to ensure that it used the local cache:

diff --git a/dev/gnngraph/index.html b/dev/gnngraph/index.html index 779ca904..ef499f88 100644 --- a/dev/gnngraph/index.html +++ b/dev/gnngraph/index.html @@ -167,4 +167,4 @@ julia> GNNGraph(gd) GNNGraph: num_nodes: 10 - num_edges: 20 + num_edges: 20 diff --git a/dev/gsoc/index.html b/dev/gsoc/index.html index 78f11d25..ed0d935a 100644 --- a/dev/gsoc/index.html +++ b/dev/gsoc/index.html @@ -1,2 +1,2 @@ -Summer Of Code · GraphNeuralNetworks.jl
+Summer Of Code · GraphNeuralNetworks.jl
diff --git a/dev/heterograph/index.html b/dev/heterograph/index.html index 4d9032ee..29fd23e2 100644 --- a/dev/heterograph/index.html +++ b/dev/heterograph/index.html @@ -81,4 +81,4 @@ @assert g.num_nodes[:A] == 80 @assert size(g.ndata[:A].x) == (3, 80) # ... -end

Graph convolutions on heterographs

See HeteroGraphConv for how to perform convolutions on heterogeneous graphs.
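As a rough sketch of such a convolution (the edge type, feature sizes, and inner layer below are illustrative assumptions; see the HeteroGraphConv docstring for the authoritative interface):

using GraphNeuralNetworks

g = GNNHeteroGraph((:user, :rate, :movie) => ([1,1,2,3], [7,13,5,7]))  # 3 users, 13 movies
g[:user].x  = rand(Float32, 8, 3)
g[:movie].x = rand(Float32, 8, 13)

# One inner layer per relation; here a single GraphConv acting on the (:user, :rate, :movie) edges.
layer = HeteroGraphConv((:user, :rate, :movie) => GraphConv(8 => 16))

y = layer(g, (; user = g[:user].x, movie = g[:movie].x))
size(y.movie)   # expected (16, 13): messages flow from :user to :movie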

diff --git a/dev/index.html b/dev/index.html index c49a23db..f1823eec 100644 --- a/dev/index.html +++ b/dev/index.html @@ -37,4 +37,4 @@ end @info (; epoch, train_loss=loss(model, train_loader), test_loss=loss(model, test_loader)) -end +end diff --git a/dev/messagepassing/index.html b/dev/messagepassing/index.html index a4473b63..caf8fa5a 100644 --- a/dev/messagepassing/index.html +++ b/dev/messagepassing/index.html @@ -76,4 +76,4 @@ x = propagate(message, g, +, xj=x) return l.σ.(l.weight * x .+ l.bias) -end

See the GATConv implementation here for a more complex example.

Built-in message functions

In order to exploit the optimized specializations of propagate, it is recommended to use built-in message functions such as copy_xj whenever possible.
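For instance, a minimal sketch of neighborhood sum-aggregation with the built-in copy_xj message (the graph size and feature dimension are illustrative assumptions):

using GraphNeuralNetworks

g = rand_graph(10, 30)
x = rand(Float32, 4, g.num_nodes)

# copy_xj forwards the source-node features xj unchanged as the message,
# so this sums, for every node, the features of its neighbors.
y = propagate(copy_xj, g, +, xj = x)
size(y)   # (4, 10)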

diff --git a/dev/models/index.html b/dev/models/index.html index 4b9284b5..7078e00f 100644 --- a/dev/models/index.html +++ b/dev/models/index.html @@ -66,4 +66,4 @@ X = randn(Float32, din, 10) # Pass only X as input, the model already contains the graph. -y = model(X)

An example of WithGraph usage is given in the graph neural ODE script in the examples folder.

diff --git a/dev/search_index.js b/dev/search_index.js index 76a0c6ee..d1784fc3 100644 --- a/dev/search_index.js +++ b/dev/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"CurrentModule = GNNGraphs","category":"page"},{"location":"api/gnngraph/#GNNGraph","page":"GNNGraph","title":"GNNGraph","text":"","category":"section"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"Documentation page for the graph type GNNGraph provided by GraphNeuralNetworks.jl and related methods. ","category":"page"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"Besides the methods documented here, one can rely on the large set of functionalities given by Graphs.jl thanks to the fact that GNNGraph inherits from Graphs.AbstractGraph.","category":"page"},{"location":"api/gnngraph/#Index","page":"GNNGraph","title":"Index","text":"","category":"section"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"Order = [:type, :function]\nPages = [\"gnngraph.md\"]","category":"page"},{"location":"api/gnngraph/#GNNGraph-type","page":"GNNGraph","title":"GNNGraph type","text":"","category":"section"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"GNNGraph\nBase.copy","category":"page"},{"location":"api/gnngraph/#GNNGraphs.GNNGraph","page":"GNNGraph","title":"GNNGraphs.GNNGraph","text":"GNNGraph(data; [graph_type, ndata, edata, gdata, num_nodes, graph_indicator, dir])\nGNNGraph(g::GNNGraph; [ndata, edata, gdata])\n\nA type representing a graph structure that also stores feature arrays associated to nodes, edges, and the graph itself.\n\nThe feature arrays are stored in the fields ndata, edata, and gdata as DataStore objects offering a convenient dictionary-like and namedtuple-like interface. The features can be passed at construction time or added later.\n\nA GNNGraph can be constructed out of different data objects expressing the connections inside the graph. The internal representation type is determined by graph_type.\n\nWhen constructed from another GNNGraph, the internal graph representation is preserved and shared. The node/edge/graph features are retained as well, unless explicitely set by the keyword arguments ndata, edata, and gdata.\n\nA GNNGraph can also represent multiple graphs batched togheter (see MLUtils.batch or SparseArrays.blockdiag). The field g.graph_indicator contains the graph membership of each node.\n\nGNNGraphs are always directed graphs, therefore each edge is defined by a source node and a target node (see edge_index). Self loops (edges connecting a node to itself) and multiple edges (more than one edge between the same pair of nodes) are supported.\n\nA GNNGraph is a Graphs.jl's AbstractGraph, therefore it supports most functionality from that library.\n\nArguments\n\ndata: Some data representing the graph topology. Possible type are\nAn adjacency matrix\nAn adjacency list.\nA tuple containing the source and target vectors (COO representation)\nA Graphs.jl' graph.\ngraph_type: A keyword argument that specifies the underlying representation used by the GNNGraph. Currently supported values are\n:coo. Graph represented as a tuple (source, target), such that the k-th edge connects the node source[k] to node target[k]. Optionally, also edge weights can be given: (source, target, weights).\n:sparse. A sparse adjacency matrix representation.\n:dense. 
A dense adjacency matrix representation.\nDefaults to :coo, currently the most supported type.\ndir: The assumed edge direction when given adjacency matrix or adjacency list input data g. Possible values are :out and :in. Default :out.\nnum_nodes: The number of nodes. If not specified, inferred from g. Default nothing.\ngraph_indicator: For batched graphs, a vector containing the graph assignment of each node. Default nothing.\nndata: Node features. An array or named tuple of arrays whose last dimension has size num_nodes.\nedata: Edge features. An array or named tuple of arrays whose last dimension has size num_edges.\ngdata: Graph features. An array or named tuple of arrays whose last dimension has size num_graphs.\n\nExamples\n\nusing GraphNeuralNetworks\n\n# Construct from adjacency list representation\ndata = [[2,3], [1,4,5], [1], [2,5], [2,4]]\ng = GNNGraph(data)\n\n# Number of nodes, edges, and batched graphs\ng.num_nodes # 5\ng.num_edges # 10\ng.num_graphs # 1\n\n# Same graph in COO representation\ns = [1,1,2,2,2,3,4,4,5,5]\nt = [2,3,1,4,5,3,2,5,2,4]\ng = GNNGraph(s, t)\n\n# From a Graphs' graph\ng = GNNGraph(erdos_renyi(100, 20))\n\n# Add 2 node feature arrays at creation time\ng = GNNGraph(g, ndata = (x=rand(100, g.num_nodes), y=rand(g.num_nodes)))\n\n# Add 1 edge feature array, after the graph creation\ng.edata.z = rand(16, g.num_edges)\n\n# Add node features and edge features with default names `x` and `e`\ng = GNNGraph(g, ndata = rand(100, g.num_nodes), edata = rand(16, g.num_edges))\n\ng.ndata.x # or just g.x\ng.edata.e # or just g.e\n\n# Collect edges' source and target nodes.\n# Both source and target are vectors of length num_edges\nsource, target = edge_index(g)\n\nA GNNGraph can be sent to the GPU using e.g. Flux's gpu function:\n\n# Send to gpu\nusing Flux, CUDA\ng = g |> Flux.gpu\n\n\n\n\n\n","category":"type"},{"location":"api/gnngraph/#Base.copy","page":"GNNGraph","title":"Base.copy","text":"copy(g::GNNGraph; deep=false)\n\nCreate a copy of g. If deep is true, then copy will be a deep copy (equivalent to deepcopy(g)), otherwise it will be a shallow copy with the same underlying graph data.\n\n\n\n\n\n","category":"function"},{"location":"api/gnngraph/#DataStore","page":"GNNGraph","title":"DataStore","text":"","category":"section"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"Modules = [GNNGraphs]\nPages = [\"datastore.jl\"]\nPrivate = false","category":"page"},{"location":"api/gnngraph/#GNNGraphs.DataStore","page":"GNNGraph","title":"GNNGraphs.DataStore","text":"DataStore([n, data])\nDataStore([n,] k1 = x1, k2 = x2, ...)\n\nA container for feature arrays. 
The optional argument n enforces that numobs(x) == n for each array contained in the datastore.\n\nAt construction time, the data can be provided as any iterables of pairs of symbols and arrays or as keyword arguments:\n\njulia> ds = DataStore(3, x = rand(Float32, 2, 3), y = rand(Float32, 3))\nDataStore(3) with 2 elements:\n y = 3-element Vector{Float32}\n x = 2×3 Matrix{Float32}\n\njulia> ds = DataStore(3, Dict(:x => rand(Float32, 2, 3), :y => rand(Float32, 3))); # equivalent to above\n\njulia> ds = DataStore(3, (x = rand(Float32, 2, 3), y = rand(Float32, 30)))\nERROR: AssertionError: DataStore: data[y] has 30 observations, but n = 3\nStacktrace:\n [1] DataStore(n::Int64, data::Dict{Symbol, Any})\n @ GNNGraphs ~/.julia/dev/GNNGraphs/datastore.jl:54\n [2] DataStore(n::Int64, data::NamedTuple{(:x, :y), Tuple{Matrix{Float32}, Vector{Float32}}})\n @ GNNGraphs ~/.julia/dev/GNNGraphs/datastore.jl:73\n [3] top-level scope\n @ REPL[13]:1\n\njulia> ds = DataStore(x = randFloat32, 2, 3), y = rand(Float32, 30)) # no checks\nDataStore() with 2 elements:\n y = 30-element Vector{Float32}\n x = 2×3 Matrix{Float32}\n y = 30-element Vector{Float64}\n x = 2×3 Matrix{Float64}\n\nThe DataStore has an interface similar to both dictionaries and named tuples. Arrays can be accessed and added using either the indexing or the property syntax:\n\njulia> ds = DataStore(x = ones(Float32, 2, 3), y = zeros(Float32, 3))\nDataStore() with 2 elements:\n y = 3-element Vector{Float32}\n x = 2×3 Matrix{Float32}\n\njulia> ds.x # same as `ds[:x]`\n2×3 Matrix{Float32}:\n 1.0 1.0 1.0\n 1.0 1.0 1.0\n\njulia> ds.z = zeros(Float32, 3) # Add new feature array `z`. Same as `ds[:z] = rand(Float32, 3)`\n3-element Vector{Float64}:\n0.0\n0.0\n0.0\n\nThe DataStore can be iterated over, and the keys and values can be accessed using keys(ds) and values(ds). map(f, ds) applies the function f to each feature array:\n\njulia> ds = DataStore(a = zeros(2), b = zeros(2));\n\njulia> ds2 = map(x -> x .+ 1, ds)\n\njulia> ds2.a\n2-element Vector{Float64}:\n 1.0\n 1.0\n\n\n\n\n\n","category":"type"},{"location":"api/gnngraph/#Query","page":"GNNGraph","title":"Query","text":"","category":"section"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"Modules = [GNNGraphs]\nPages = [\"query.jl\"]\nPrivate = false","category":"page"},{"location":"api/gnngraph/#GNNGraphs.adjacency_list-Tuple{GNNGraph, Any}","page":"GNNGraph","title":"GNNGraphs.adjacency_list","text":"adjacency_list(g; dir=:out)\nadjacency_list(g, nodes; dir=:out)\n\nReturn the adjacency list representation (a vector of vectors) of the graph g.\n\nCalling a the adjacency list, if dir=:out than a[i] will contain the neighbors of node i through outgoing edges. 
If dir=:in, it will contain neighbors from incoming edges instead.\n\nIf nodes is given, return the neighborhood of the nodes in nodes only.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.edge_index-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}}","page":"GNNGraph","title":"GNNGraphs.edge_index","text":"edge_index(g::GNNGraph)\n\nReturn a tuple containing two vectors, respectively storing the source and target nodes for each edges in g.\n\ns, t = edge_index(g)\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.edge_index-Tuple{GNNHeteroGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}, Tuple{Symbol, Symbol, Symbol}}","page":"GNNGraph","title":"GNNGraphs.edge_index","text":"edge_index(g::GNNHeteroGraph, [edge_t])\n\nReturn a tuple containing two vectors, respectively storing the source and target nodes for each edges in g of type edge_t = (src_t, rel_t, trg_t).\n\nIf edge_t is not provided, it will error if g has more than one edge type.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.graph_indicator-Tuple{GNNGraph}","page":"GNNGraph","title":"GNNGraphs.graph_indicator","text":"graph_indicator(g::GNNGraph; edges=false)\n\nReturn a vector containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph. If edges=true, return the graph membership of each edge instead.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.graph_indicator-Tuple{GNNHeteroGraph}","page":"GNNGraph","title":"GNNGraphs.graph_indicator","text":"graph_indicator(g::GNNHeteroGraph, [node_t])\n\nReturn a Dict of vectors containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph for each node type. If node_t is provided, return the graph membership of each node of type node_t instead.\n\nSee also batch.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.has_isolated_nodes-Tuple{GNNGraph}","page":"GNNGraph","title":"GNNGraphs.has_isolated_nodes","text":"has_isolated_nodes(g::GNNGraph; dir=:out)\n\nReturn true if the graph g contains nodes with out-degree (if dir=:out) or in-degree (if dir = :in) equal to zero.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.has_multi_edges-Tuple{GNNGraph}","page":"GNNGraph","title":"GNNGraphs.has_multi_edges","text":"has_multi_edges(g::GNNGraph)\n\nReturn true if g has any multiple edges.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.is_bidirected-Tuple{GNNGraph}","page":"GNNGraph","title":"GNNGraphs.is_bidirected","text":"is_bidirected(g::GNNGraph)\n\nCheck if the directed graph g essentially corresponds to an undirected graph, i.e. if for each edge it also contains the reverse edge. 
\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.khop_adj","page":"GNNGraph","title":"GNNGraphs.khop_adj","text":"khop_adj(g::GNNGraph,k::Int,T::DataType=eltype(g); dir=:out, weighted=true)\n\nReturn A^k where A is the adjacency matrix of the graph 'g'.\n\n\n\n\n\n","category":"function"},{"location":"api/gnngraph/#GNNGraphs.laplacian_lambda_max","page":"GNNGraph","title":"GNNGraphs.laplacian_lambda_max","text":"laplacian_lambda_max(g::GNNGraph, T=Float32; add_self_loops=false, dir=:out)\n\nReturn the largest eigenvalue of the normalized symmetric Laplacian of the graph g.\n\nIf the graph is batched from multiple graphs, return the list of the largest eigenvalue for each graph.\n\n\n\n\n\n","category":"function"},{"location":"api/gnngraph/#GNNGraphs.normalized_laplacian","page":"GNNGraph","title":"GNNGraphs.normalized_laplacian","text":"normalized_laplacian(g, T=Float32; add_self_loops=false, dir=:out)\n\nNormalized Laplacian matrix of graph g.\n\nArguments\n\ng: A GNNGraph.\nT: result element type.\nadd_self_loops: add self-loops while calculating the matrix.\ndir: the edge directionality considered (:out, :in, :both).\n\n\n\n\n\n","category":"function"},{"location":"api/gnngraph/#GNNGraphs.scaled_laplacian","page":"GNNGraph","title":"GNNGraphs.scaled_laplacian","text":"scaled_laplacian(g, T=Float32; dir=:out)\n\nScaled Laplacian matrix of graph g, defined as hatL = frac2lambda_max L - I where L is the normalized Laplacian matrix.\n\nArguments\n\ng: A GNNGraph.\nT: result element type.\ndir: the edge directionality considered (:out, :in, :both).\n\n\n\n\n\n","category":"function"},{"location":"api/gnngraph/#Graphs.LinAlg.adjacency_matrix","page":"GNNGraph","title":"Graphs.LinAlg.adjacency_matrix","text":"adjacency_matrix(g::GNNGraph, T=eltype(g); dir=:out, weighted=true)\n\nReturn the adjacency matrix A for the graph g. \n\nIf dir=:out, A[i,j] > 0 denotes the presence of an edge from node i to node j. If dir=:in instead, A[i,j] > 0 denotes the presence of an edge from node j to node i.\n\nUser may specify the eltype T of the returned matrix. \n\nIf weighted=true, the A will contain the edge weights if any, otherwise the elements of A will be either 0 or 1.\n\n\n\n\n\n","category":"function"},{"location":"api/gnngraph/#Graphs.degree-Union{Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}}, Tuple{TT}, Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}, TT}} where TT<:Union{Nothing, Type{<:Number}}","page":"GNNGraph","title":"Graphs.degree","text":"degree(g::GNNGraph, T=nothing; dir=:out, edge_weight=true)\n\nReturn a vector containing the degrees of the nodes in g.\n\nThe gradient is propagated through this function only if edge_weight is true or a vector.\n\nArguments\n\ng: A graph.\nT: Element type of the returned vector. If nothing, is chosen based on the graph type and will be an integer if edge_weight = false. Default nothing.\ndir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two.\nedge_weight: If true and the graph contains weighted edges, the degree will be weighted. Set to false instead to just count the number of outgoing/ingoing edges. Finally, you can also pass a vector of weights to be used instead of the graph's own weights. 
Default true.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#Graphs.degree-Union{Tuple{TT}, Tuple{GNNHeteroGraph, Tuple{Symbol, Symbol, Symbol}}, Tuple{GNNHeteroGraph, Tuple{Symbol, Symbol, Symbol}, TT}} where TT<:Union{Nothing, Type{<:Number}}","page":"GNNGraph","title":"Graphs.degree","text":"degree(g::GNNHeteroGraph, edge_type::EType; dir = :in)\n\nReturn a vector containing the degrees of the nodes in g GNNHeteroGraph given edge_type.\n\nArguments\n\ng: A graph.\nedge_type: A tuple of symbols (source_t, edge_t, target_t) representing the edge type.\nT: Element type of the returned vector. If nothing, is chosen based on the graph type. Default nothing.\ndir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two. Default dir = :out.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#Graphs.has_self_loops-Tuple{GNNGraph}","page":"GNNGraph","title":"Graphs.has_self_loops","text":"has_self_loops(g::GNNGraph)\n\nReturn true if g has any self loops.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#Graphs.inneighbors-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}, Integer}","page":"GNNGraph","title":"Graphs.inneighbors","text":"inneighbors(g::GNNGraph, i::Integer)\n\nReturn the neighbors of node i in the graph g through incoming edges.\n\nSee also neighbors and outneighbors.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#Graphs.outneighbors-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}, Integer}","page":"GNNGraph","title":"Graphs.outneighbors","text":"outneighbors(g::GNNGraph, i::Integer)\n\nReturn the neighbors of node i in the graph g through outgoing edges.\n\nSee also neighbors and inneighbors.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"Graphs.neighbors(::GNNGraph, ::Integer)","category":"page"},{"location":"api/gnngraph/#Graphs.neighbors-Tuple{GNNGraph, Integer}","page":"GNNGraph","title":"Graphs.neighbors","text":"neighbors(g::GNNGraph, i::Integer; dir=:out)\n\nReturn the neighbors of node i in the graph g. If dir=:out, return the neighbors through outgoing edges. If dir=:in, return the neighbors through incoming edges.\n\nSee also outneighbors, inneighbors.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#Transform","page":"GNNGraph","title":"Transform","text":"","category":"section"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"Modules = [GNNGraphs]\nPages = [\"transform.jl\"]\nPrivate = false","category":"page"},{"location":"api/gnngraph/#GNNGraphs.add_edges-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}, AbstractVector, AbstractVector}","page":"GNNGraph","title":"GNNGraphs.add_edges","text":"add_edges(g::GNNGraph, s::AbstractVector, t::AbstractVector; [edata])\nadd_edges(g::GNNGraph, (s, t); [edata])\nadd_edges(g::GNNGraph, (s, t, w); [edata])\n\nAdd to graph g the edges with source nodes s and target nodes t. Optionally, pass the edge weight w and the features edata for the new edges. 
Returns a new graph sharing part of the underlying data with g.\n\nIf the s or t contain nodes that are not already present in the graph, they are added to the graph as well.\n\nExamples\n\njulia> s, t = [1, 2, 3, 3, 4], [2, 3, 4, 4, 4];\n\njulia> w = Float32[1.0, 2.0, 3.0, 4.0, 5.0];\n\njulia> g = GNNGraph((s, t, w))\nGNNGraph:\n num_nodes: 4\n num_edges: 5\n\njulia> add_edges(g, ([2, 3], [4, 1], [10.0, 20.0]))\nGNNGraph:\n num_nodes: 4\n num_edges: 7\n\njulia> g = GNNGraph()\nGNNGraph:\n num_nodes: 0\n num_edges: 0\n\njulia> add_edges(g, [1,2], [2,3])\nGNNGraph:\n num_nodes: 3\n num_edges: 2\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.add_edges-Tuple{GNNHeteroGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}, Tuple{Symbol, Symbol, Symbol}, AbstractVector, AbstractVector}","page":"GNNGraph","title":"GNNGraphs.add_edges","text":"add_edges(g::GNNHeteroGraph, edge_t, s, t; [edata, num_nodes])\nadd_edges(g::GNNHeteroGraph, edge_t => (s, t); [edata, num_nodes])\nadd_edges(g::GNNHeteroGraph, edge_t => (s, t, w); [edata, num_nodes])\n\nAdd to heterograph g edges of type edge_t with source node vector s and target node vector t. Optionally, pass the edge weights w or the features edata for the new edges. edge_t is a triplet of symbols (src_t, rel_t, dst_t). \n\nIf the edge type is not already present in the graph, it is added. If it involves new node types, they are added to the graph as well. In this case, a dictionary or named tuple of num_nodes can be passed to specify the number of nodes of the new types, otherwise the number of nodes is inferred from the maximum node id in s and t.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.add_nodes-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}, Integer}","page":"GNNGraph","title":"GNNGraphs.add_nodes","text":"add_nodes(g::GNNGraph, n; [ndata])\n\nAdd n new nodes to graph g. In the new graph, these nodes will have indexes from g.num_nodes + 1 to g.num_nodes + n.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.add_self_loops-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}}","page":"GNNGraph","title":"GNNGraphs.add_self_loops","text":"add_self_loops(g::GNNGraph)\n\nReturn a graph with the same features as g but also adding edges connecting the nodes to themselves.\n\nNodes with already existing self-loops will obtain a second self-loop.\n\nIf the graphs has edge weights, the new edges will have weight 1.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.add_self_loops-Tuple{GNNHeteroGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}, Tuple{Symbol, Symbol, Symbol}}","page":"GNNGraph","title":"GNNGraphs.add_self_loops","text":"add_self_loops(g::GNNHeteroGraph, edge_t::EType)\nadd_self_loops(g::GNNHeteroGraph)\n\nIf the source node type is the same as the destination node type in edge_t, return a graph with the same features as g but also add self-loops of the specified type, edge_t. 
Otherwise, it returns g unchanged.\n\nNodes with already existing self-loops of type edge_t will obtain a second set of self-loops of the same type.\n\nIf the graph has edge weights for edges of type edge_t, the new edges will have weight 1.\n\nIf no edges of type edge_t exist, or all existing edges have no weight, then all new self loops will have no weight.\n\nIf edge_t is not passed as argument, for the entire graph self-loop is added to each node for every edge type in the graph where the source and destination node types are the same. This iterates over all edge types present in the graph, applying the self-loop addition logic to each applicable edge type.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.getgraph-Tuple{GNNGraph, Int64}","page":"GNNGraph","title":"GNNGraphs.getgraph","text":"getgraph(g::GNNGraph, i; nmap=false)\n\nReturn the subgraph of g induced by those nodes j for which g.graph_indicator[j] == i or, if i is a collection, g.graph_indicator[j] ∈ i. In other words, it extract the component graphs from a batched graph. \n\nIf nmap=true, return also a vector v mapping the new nodes to the old ones. The node i in the subgraph will correspond to the node v[i] in g.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.negative_sample-Tuple{GNNGraph}","page":"GNNGraph","title":"GNNGraphs.negative_sample","text":"negative_sample(g::GNNGraph; \n num_neg_edges = g.num_edges, \n bidirected = is_bidirected(g))\n\nReturn a graph containing random negative edges (i.e. non-edges) from graph g as edges.\n\nIf bidirected=true, the output graph will be bidirected and there will be no leakage from the origin graph. \n\nSee also is_bidirected.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.perturb_edges-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}, AbstractFloat}","page":"GNNGraph","title":"GNNGraphs.perturb_edges","text":"perturb_edges([rng], g::GNNGraph, perturb_ratio)\n\nReturn a new graph obtained from g by adding random edges, based on a specified perturb_ratio. The perturb_ratio determines the fraction of new edges to add relative to the current number of edges in the graph. These new edges are added without creating self-loops. \n\nThe function returns a new GNNGraph instance that shares some of the underlying data with g but includes the additional edges. The nodes for the new edges are selected randomly, and no edge data (edata) or weights (w) are assigned to these new edges.\n\nArguments\n\ng::GNNGraph: The graph to be perturbed.\nperturb_ratio: The ratio of the number of new edges to add relative to the current number of edges in the graph. 
For example, a perturb_ratio of 0.1 means that 10% of the current number of edges will be added as new random edges.\nrng: An optionalrandom number generator to ensure reproducible results.\n\nExamples\n\njulia> g = GNNGraph((s, t, w))\nGNNGraph:\n num_nodes: 4\n num_edges: 5\n\njulia> perturbed_g = perturb_edges(g, 0.2)\nGNNGraph:\n num_nodes: 4\n num_edges: 6\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.ppr_diffusion-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}}","page":"GNNGraph","title":"GNNGraphs.ppr_diffusion","text":"ppr_diffusion(g::GNNGraph{<:COO_T}, alpha =0.85f0) -> GNNGraph\n\nCalculates the Personalized PageRank (PPR) diffusion based on the edge weight matrix of a GNNGraph and updates the graph with new edge weights derived from the PPR matrix. References paper: The pagerank citation ranking: Bringing order to the web\n\nThe function performs the following steps:\n\nConstructs a modified adjacency matrix A using the graph's edge weights, where A is adjusted by (α - 1) * A + I, with α being the damping factor (alpha_f32) and I the identity matrix.\nNormalizes A to ensure each column sums to 1, representing transition probabilities.\nApplies the PPR formula α * (I + (α - 1) * A)^-1 to compute the diffusion matrix.\nUpdates the original edge weights of the graph based on the PPR diffusion matrix, assigning new weights for each edge from the PPR matrix.\n\nArguments\n\ng::GNNGraph: The input graph for which PPR diffusion is to be calculated. It should have edge weights available.\nalpha_f32::Float32: The damping factor used in PPR calculation, controlling the teleport probability in the random walk. Defaults to 0.85f0.\n\nReturns\n\nA new GNNGraph instance with the same structure as g but with updated edge weights according to the PPR diffusion calculation.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.rand_edge_split-Tuple{GNNGraph, Any}","page":"GNNGraph","title":"GNNGraphs.rand_edge_split","text":"rand_edge_split(g::GNNGraph, frac; bidirected=is_bidirected(g)) -> g1, g2\n\nRandomly partition the edges in g to form two graphs, g1 and g2. Both will have the same number of nodes as g. g1 will contain a fraction frac of the original edges, while g2 wil contain the rest.\n\nIf bidirected = true makes sure that an edge and its reverse go into the same split. This option is supported only for bidirected graphs with no self-loops and multi-edges.\n\nrand_edge_split is tipically used to create train/test splits in link prediction tasks.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.random_walk_pe-Tuple{GNNGraph, Int64}","page":"GNNGraph","title":"GNNGraphs.random_walk_pe","text":"random_walk_pe(g, walk_length)\n\nReturn the random walk positional encoding from the paper Graph Neural Networks with Learnable Structural and Positional Representations of the given graph g and the length of the walk walk_length as a matrix of size (walk_length, g.num_nodes). 
\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.remove_edges-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}, AbstractVector{<:Integer}}","page":"GNNGraph","title":"GNNGraphs.remove_edges","text":"remove_edges(g::GNNGraph, edges_to_remove::AbstractVector{<:Integer})\nremove_edges(g::GNNGraph, p=0.5)\n\nRemove specified edges from a GNNGraph, either by specifying edge indices or by randomly removing edges with a given probability.\n\nArguments\n\ng: The input graph from which edges will be removed.\nedges_to_remove: Vector of edge indices to be removed. This argument is only required for the first method.\np: Probability of removing each edge. This argument is only required for the second method and defaults to 0.5.\n\nReturns\n\nA new GNNGraph with the specified edges removed.\n\nExample\n\njulia> using GraphNeuralNetworks\n\n# Construct a GNNGraph\njulia> g = GNNGraph([1, 1, 2, 2, 3], [2, 3, 1, 3, 1])\nGNNGraph:\n num_nodes: 3\n num_edges: 5\n \n# Remove the second edge\njulia> g_new = remove_edges(g, [2]);\n\njulia> g_new\nGNNGraph:\n num_nodes: 3\n num_edges: 4\n\n# Remove edges with a probability of 0.5\njulia> g_new = remove_edges(g, 0.5);\n\njulia> g_new\nGNNGraph:\n num_nodes: 3\n num_edges: 2\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.remove_multi_edges-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}}","page":"GNNGraph","title":"GNNGraphs.remove_multi_edges","text":"remove_multi_edges(g::GNNGraph; aggr=+)\n\nRemove multiple edges (also called parallel edges or repeated edges) from graph g. Possible edge features are aggregated according to aggr, that can take value +,min, max or mean.\n\nSee also remove_self_loops, has_multi_edges, and to_bidirected.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.remove_nodes-Tuple{GNNGraph, AbstractFloat}","page":"GNNGraph","title":"GNNGraphs.remove_nodes","text":"remove_nodes(g::GNNGraph, p)\n\nReturns a new graph obtained by dropping nodes from g with independent probabilities p. \n\nExamples\n\njulia> g = GNNGraph([1, 1, 2, 2, 3, 4], [1, 2, 3, 1, 3, 1])\nGNNGraph:\n num_nodes: 4\n num_edges: 6\n\njulia> g_new = remove_nodes(g, 0.5)\nGNNGraph:\n num_nodes: 2\n num_edges: 2\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.remove_nodes-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}, AbstractVector}","page":"GNNGraph","title":"GNNGraphs.remove_nodes","text":"remove_nodes(g::GNNGraph, nodes_to_remove::AbstractVector)\n\nRemove specified nodes, and their associated edges, from a GNNGraph. This operation reindexes the remaining nodes to maintain a continuous sequence of node indices, starting from 1. Similarly, edges are reindexed to account for the removal of edges connected to the removed nodes.\n\nArguments\n\ng: The input graph from which nodes (and their edges) will be removed.\nnodes_to_remove: Vector of node indices to be removed.\n\nReturns\n\nA new GNNGraph with the specified nodes and all edges associated with these nodes removed. 
\n\nExample\n\nusing GraphNeuralNetworks\n\ng = GNNGraph([1, 1, 2, 2, 3], [2, 3, 1, 3, 1])\n\n# Remove nodes with indices 2 and 3, for example\ng_new = remove_nodes(g, [2, 3])\n\n# g_new now does not contain nodes 2 and 3, and any edges that were connected to these nodes.\nprintln(g_new)\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.remove_self_loops-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}}","page":"GNNGraph","title":"GNNGraphs.remove_self_loops","text":"remove_self_loops(g::GNNGraph)\n\nReturn a graph constructed from g where self-loops (edges from a node to itself) are removed. \n\nSee also add_self_loops and remove_multi_edges.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.set_edge_weight-Tuple{GNNGraph, AbstractVector}","page":"GNNGraph","title":"GNNGraphs.set_edge_weight","text":"set_edge_weight(g::GNNGraph, w::AbstractVector)\n\nSet w as edge weights in the returned graph. \n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.to_bidirected-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}}","page":"GNNGraph","title":"GNNGraphs.to_bidirected","text":"to_bidirected(g)\n\nAdds a reverse edge for each edge in the graph, then calls remove_multi_edges with mean aggregation to simplify the graph. \n\nSee also is_bidirected. \n\nExamples\n\njulia> s, t = [1, 2, 3, 3, 4], [2, 3, 4, 4, 4];\n\njulia> w = [1.0, 2.0, 3.0, 4.0, 5.0];\n\njulia> e = [10.0, 20.0, 30.0, 40.0, 50.0];\n\njulia> g = GNNGraph(s, t, w, edata = e)\nGNNGraph:\n num_nodes = 4\n num_edges = 5\n edata:\n e => (5,)\n\njulia> g2 = to_bidirected(g)\nGNNGraph:\n num_nodes = 4\n num_edges = 7\n edata:\n e => (7,)\n\njulia> edge_index(g2)\n([1, 2, 2, 3, 3, 4, 4], [2, 1, 3, 2, 4, 3, 4])\n\njulia> get_edge_weight(g2)\n7-element Vector{Float64}:\n 1.0\n 1.0\n 2.0\n 2.0\n 3.5\n 3.5\n 5.0\n\njulia> g2.edata.e\n7-element Vector{Float64}:\n 10.0\n 10.0\n 20.0\n 20.0\n 35.0\n 35.0\n 50.0\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.to_unidirected-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}}","page":"GNNGraph","title":"GNNGraphs.to_unidirected","text":"to_unidirected(g::GNNGraph)\n\nReturn a graph that for each multiple edge between two nodes in g keeps only an edge in one direction.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#MLUtils.batch-Tuple{AbstractVector{<:GNNGraph}}","page":"GNNGraph","title":"MLUtils.batch","text":"batch(gs::Vector{<:GNNGraph})\n\nBatch together multiple GNNGraphs into a single one containing the total number of original nodes and edges.\n\nEquivalent to SparseArrays.blockdiag. 
See also MLUtils.unbatch.\n\nExamples\n\njulia> g1 = rand_graph(4, 6, ndata=ones(8, 4))\nGNNGraph:\n num_nodes = 4\n num_edges = 6\n ndata:\n x => (8, 4)\n\njulia> g2 = rand_graph(7, 4, ndata=zeros(8, 7))\nGNNGraph:\n num_nodes = 7\n num_edges = 4\n ndata:\n x => (8, 7)\n\njulia> g12 = MLUtils.batch([g1, g2])\nGNNGraph:\n num_nodes = 11\n num_edges = 10\n num_graphs = 2\n ndata:\n x => (8, 11)\n\njulia> g12.ndata.x\n8×11 Matrix{Float64}:\n 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#MLUtils.unbatch-Union{Tuple{GNNGraph{T}}, Tuple{T}} where T<:(Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}})","page":"GNNGraph","title":"MLUtils.unbatch","text":"unbatch(g::GNNGraph)\n\nOpposite of the MLUtils.batch operation, returns an array of the individual graphs batched together in g.\n\nSee also MLUtils.batch and getgraph.\n\nExamples\n\njulia> gbatched = MLUtils.batch([rand_graph(5, 6), rand_graph(10, 8), rand_graph(4,2)])\nGNNGraph:\n num_nodes = 19\n num_edges = 16\n num_graphs = 3\n\njulia> MLUtils.unbatch(gbatched)\n3-element Vector{GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}}:\n GNNGraph:\n num_nodes = 5\n num_edges = 6\n\n GNNGraph:\n num_nodes = 10\n num_edges = 8\n\n GNNGraph:\n num_nodes = 4\n num_edges = 2\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#SparseArrays.blockdiag-Tuple{GNNGraph, Vararg{GNNGraph}}","page":"GNNGraph","title":"SparseArrays.blockdiag","text":"blockdiag(xs::GNNGraph...)\n\nEquivalent to MLUtils.batch.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#Utils","page":"GNNGraph","title":"Utils","text":"","category":"section"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"GNNGraphs.sort_edge_index\nGNNGraphs.color_refinement","category":"page"},{"location":"api/gnngraph/#GNNGraphs.sort_edge_index","page":"GNNGraph","title":"GNNGraphs.sort_edge_index","text":"sort_edge_index(ei::Tuple) -> u', v'\nsort_edge_index(u, v) -> u', v'\n\nReturn a sorted version of the tuple of vectors ei = (u, v), applying a common permutation to u and v. The sorting is lexycographic, that is the pairs (ui, vi) are sorted first according to the ui and then according to vi. \n\n\n\n\n\n","category":"function"},{"location":"api/gnngraph/#GNNGraphs.color_refinement","page":"GNNGraph","title":"GNNGraphs.color_refinement","text":"color_refinement(g::GNNGraph, [x0]) -> x, num_colors, niters\n\nThe color refinement algorithm for graph coloring. Given a graph g and an initial coloring x0, the algorithm iteratively refines the coloring until a fixed point is reached.\n\nAt each iteration the algorithm computes a hash of the coloring and the sorted list of colors of the neighbors of each node. This hash is used to determine if the coloring has changed.\n\nmath x_i' = hashmap((x_i, sort([x_j for j \\in N(i)]))).`\n\nThis algorithm is related to the 1-Weisfeiler-Lehman algorithm for graph isomorphism testing.\n\nArguments\n\ng::GNNGraph: The graph to color.\nx0::AbstractVector{<:Integer}: The initial coloring. 
If not provided, all nodes are colored with 1.\n\nReturns\n\nx::AbstractVector{<:Integer}: The final coloring.\nnum_colors::Int: The number of colors used.\nniters::Int: The number of iterations until convergence.\n\n\n\n\n\n","category":"function"},{"location":"api/gnngraph/#Generate","page":"GNNGraph","title":"Generate","text":"","category":"section"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"Modules = [GNNGraphs]\nPages = [\"generate.jl\"]\nPrivate = false\nFilter = t -> typeof(t) <: Function && t!=rand_temporal_radius_graph && t!=rand_temporal_hyperbolic_graph\n","category":"page"},{"location":"api/gnngraph/#GNNGraphs.knn_graph-Tuple{AbstractMatrix, Int64}","page":"GNNGraph","title":"GNNGraphs.knn_graph","text":"knn_graph(points::AbstractMatrix, \n k::Int; \n graph_indicator = nothing,\n self_loops = false, \n dir = :in, \n kws...)\n\nCreate a k-nearest neighbor graph where each node is linked to its k closest points. \n\nArguments\n\npoints: A numfeatures × numnodes matrix storing the Euclidean positions of the nodes.\nk: The number of neighbors considered in the kNN algorithm.\ngraph_indicator: Either nothing or a vector containing the graph assignment of each node, in which case the returned graph will be a batch of graphs. \nself_loops: If true, consider the node itself among its k nearest neighbors, in which case the graph will contain self-loops. \ndir: The direction of the edges. If dir=:in edges go from the k neighbors to the central node. If dir=:out we have the opposite direction.\nkws: Further keyword arguments will be passed to the GNNGraph constructor.\n\nExamples\n\njulia> n, k = 10, 3;\n\njulia> x = rand(Float32, 3, n);\n\njulia> g = knn_graph(x, k)\nGNNGraph:\n num_nodes = 10\n num_edges = 30\n\njulia> graph_indicator = [1,1,1,1,1,2,2,2,2,2];\n\njulia> g = knn_graph(x, k; graph_indicator)\nGNNGraph:\n num_nodes = 10\n num_edges = 30\n num_graphs = 2\n\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.radius_graph-Tuple{AbstractMatrix, AbstractFloat}","page":"GNNGraph","title":"GNNGraphs.radius_graph","text":"radius_graph(points::AbstractMatrix, \n r::AbstractFloat; \n graph_indicator = nothing,\n self_loops = false, \n dir = :in, \n kws...)\n\nCreate a graph where each node is linked to its neighbors within a given distance r. \n\nArguments\n\npoints: A numfeatures × numnodes matrix storing the Euclidean positions of the nodes.\nr: The radius.\ngraph_indicator: Either nothing or a vector containing the graph assignment of each node, in which case the returned graph will be a batch of graphs. \nself_loops: If true, consider the node itself among its neighbors, in which case the graph will contain self-loops. \ndir: The direction of the edges. If dir=:in edges go from the neighbors to the central node. 
If dir=:out we have the opposite direction.\nkws: Further keyword arguments will be passed to the GNNGraph constructor.\n\nExamples\n\njulia> n, r = 10, 0.75;\n\njulia> x = rand(Float32, 3, n);\n\njulia> g = radius_graph(x, r)\nGNNGraph:\n num_nodes = 10\n num_edges = 46\n\njulia> graph_indicator = [1,1,1,1,1,2,2,2,2,2];\n\njulia> g = radius_graph(x, r; graph_indicator)\nGNNGraph:\n num_nodes = 10\n num_edges = 20\n num_graphs = 2\n\n\nReferences\n\nSection B paragraphs 1 and 2 of the paper Dynamic Hidden-Variable Network Models\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.rand_bipartite_heterograph-Tuple{Any, Any}","page":"GNNGraph","title":"GNNGraphs.rand_bipartite_heterograph","text":"rand_bipartite_heterograph([rng,] \n (n1, n2), (m12, m21); \n bidirected = true, \n node_t = (:A, :B), \n edge_t = :to, \n kws...)\n\nConstruct an GNNHeteroGraph with random edges representing a bipartite graph. The graph will have two types of nodes, and edges will only connect nodes of different types.\n\nThe first argument is a tuple (n1, n2) specifying the number of nodes of each type. The second argument is a tuple (m12, m21) specifying the number of edges connecting nodes of type 1 to nodes of type 2 and vice versa.\n\nThe type of nodes and edges can be specified with the node_t and edge_t keyword arguments, which default to (:A, :B) and :to respectively.\n\nIf bidirected=true (default), the reverse edge of each edge will be present. In this case m12 == m21 is required.\n\nA random number generator can be passed as the first argument to make the generation reproducible.\n\nAdditional keyword arguments will be passed to the GNNHeteroGraph constructor.\n\nSee rand_heterograph for a more general version.\n\nExamples\n\njulia> g = rand_bipartite_heterograph((10, 15), 20)\nGNNHeteroGraph:\n num_nodes: (:A => 10, :B => 15)\n num_edges: ((:A, :to, :B) => 20, (:B, :to, :A) => 20)\n\njulia> g = rand_bipartite_heterograph((10, 15), (20, 0), node_t=(:user, :item), edge_t=:-, bidirected=false)\nGNNHeteroGraph:\n num_nodes: Dict(:item => 15, :user => 10)\n num_edges: Dict((:item, :-, :user) => 0, (:user, :-, :item) => 20)\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.rand_graph-Tuple{Integer, Integer}","page":"GNNGraph","title":"GNNGraphs.rand_graph","text":"rand_graph([rng,] n, m; bidirected=true, edge_weight = nothing, kws...)\n\nGenerate a random (Erdós-Renyi) GNNGraph with n nodes and m edges.\n\nIf bidirected=true the reverse edge of each edge will be present. If bidirected=false instead, m unrelated edges are generated. In any case, the output graph will contain no self-loops or multi-edges.\n\nA vector can be passed as edge_weight. 
Its length has to be equal to m in the directed case, and m÷2 in the bidirected one.\n\nPass a random number generator as the first argument to make the generation reproducible.\n\nAdditional keyword arguments will be passed to the GNNGraph constructor.\n\nExamples\n\njulia> g = rand_graph(5, 4, bidirected=false)\nGNNGraph:\n num_nodes = 5\n num_edges = 4\n\njulia> edge_index(g)\n([1, 3, 3, 4], [5, 4, 5, 2])\n\n# In the bidirected case, edge data will be duplicated on the reverse edges if needed.\njulia> g = rand_graph(5, 4, edata=rand(Float32, 16, 2))\nGNNGraph:\n num_nodes = 5\n num_edges = 4\n edata:\n e => (16, 4)\n\n# Each edge has a reverse\njulia> edge_index(g)\n([1, 3, 3, 4], [3, 4, 1, 3])\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.rand_heterograph","page":"GNNGraph","title":"GNNGraphs.rand_heterograph","text":"rand_heterograph([rng,] n, m; bidirected=false, kws...)\n\nConstruct an GNNHeteroGraph with random edges and with number of nodes and edges specified by n and m respectively. n and m can be any iterable of pairs specifing node/edge types and their numbers.\n\nPass a random number generator as a first argument to make the generation reproducible.\n\nSetting bidirected=true will generate a bidirected graph, i.e. each edge will have a reverse edge. Therefore, for each edge type (:A, :rel, :B) a corresponding reverse edge type (:B, :rel, :A) will be generated.\n\nAdditional keyword arguments will be passed to the GNNHeteroGraph constructor.\n\nExamples\n\njulia> g = rand_heterograph((:user => 10, :movie => 20),\n (:user, :rate, :movie) => 30)\nGNNHeteroGraph:\n num_nodes: (:user => 10, :movie => 20) \n num_edges: ((:user, :rate, :movie) => 30,)\n\n\n\n\n\n","category":"function"},{"location":"api/gnngraph/#Operators","page":"GNNGraph","title":"Operators","text":"","category":"section"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"Modules = [GNNGraphs]\nPages = [\"operators.jl\"]\nPrivate = false","category":"page"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"Base.intersect","category":"page"},{"location":"api/gnngraph/#Base.intersect","page":"GNNGraph","title":"Base.intersect","text":"\" intersect(g1::GNNGraph, g2::GNNGraph)\n\nIntersect two graphs by keeping only the common edges.\n\n\n\n\n\n","category":"function"},{"location":"api/gnngraph/#Sampling","page":"GNNGraph","title":"Sampling","text":"","category":"section"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"Modules = [GNNGraphs]\nPages = [\"sampling.jl\"]\nPrivate = false","category":"page"},{"location":"api/gnngraph/#GNNGraphs.sample_neighbors","page":"GNNGraph","title":"GNNGraphs.sample_neighbors","text":"sample_neighbors(g, nodes, K=-1; dir=:in, replace=false, dropnodes=false)\n\nSample neighboring edges of the given nodes and return the induced subgraph. For each node, a number of inbound (or outbound when dir = :out) edges will be randomly chosen. Ifdropnodes=false`, the graph returned will then contain all the nodes in the original graph, but only the sampled edges.\n\nThe returned graph will contain an edge feature EID corresponding to the id of the edge in the original graph. If dropnodes=true, it will also contain a node feature NID with the node ids in the original graph.\n\nArguments\n\ng. The graph.\nnodes. A list of node IDs to sample neighbors from.\nK. The maximum number of edges to be sampled for each node. If -1, all the neighboring edges will be selected.\ndir. 
Determines whether to sample inbound (:in) or outbound (`:out) edges (Default :in).\nreplace. If true, sample with replacement.\ndropnodes. If true, the resulting subgraph will contain only the nodes involved in the sampled edges.\n\nExamples\n\njulia> g = rand_graph(20, 100)\nGNNGraph:\n num_nodes = 20\n num_edges = 100\n\njulia> sample_neighbors(g, 2:3)\nGNNGraph:\n num_nodes = 20\n num_edges = 9\n edata:\n EID => (9,)\n\njulia> sg = sample_neighbors(g, 2:3, dropnodes=true)\nGNNGraph:\n num_nodes = 10\n num_edges = 9\n ndata:\n NID => (10,)\n edata:\n EID => (9,)\n\njulia> sg.ndata.NID\n10-element Vector{Int64}:\n 2\n 3\n 17\n 14\n 18\n 15\n 16\n 20\n 7\n 10\n\njulia> sample_neighbors(g, 2:3, 5, replace=true)\nGNNGraph:\n num_nodes = 20\n num_edges = 10\n edata:\n EID => (10,)\n\n\n\n\n\n","category":"function"},{"location":"heterograph/#Heterogeneous-Graphs","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"","category":"section"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"Heterogeneous graphs (also called heterographs), are graphs where each node has a type, that we denote with symbols such as :user and :movie. Relations such as :rate or :like can connect nodes of different types. We call a triplet (source_node_type, relation_type, target_node_type) the type of a edge, e.g. (:user, :rate, :movie).","category":"page"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"Different node/edge types can store different groups of features and this makes heterographs a very flexible modeling tools and data containers. In GraphNeuralNetworks.jl heterographs are implemented in the type GNNHeteroGraph.","category":"page"},{"location":"heterograph/#Creating-a-Heterograph","page":"Heterogeneous Graphs","title":"Creating a Heterograph","text":"","category":"section"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"A heterograph can be created empty or by passing pairs edge_type => data to the constructor.","category":"page"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"julia> g = GNNHeteroGraph()\nGNNHeteroGraph:\n num_nodes: Dict()\n num_edges: Dict()\n \njulia> g = GNNHeteroGraph((:user, :like, :actor) => ([1,2,2,3], [1,3,2,9]),\n (:user, :rate, :movie) => ([1,1,2,3], [7,13,5,7]))\nGNNHeteroGraph:\n num_nodes: Dict(:actor => 9, :movie => 13, :user => 3)\n num_edges: Dict((:user, :like, :actor) => 4, (:user, :rate, :movie) => 4)\n\njulia> g = GNNHeteroGraph((:user, :rate, :movie) => ([1,1,2,3], [7,13,5,7]))\nGNNHeteroGraph:\n num_nodes: Dict(:movie => 13, :user => 3)\n num_edges: Dict((:user, :rate, :movie) => 4)","category":"page"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"New relations, possibly with new node types, can be added with the function add_edges.","category":"page"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"julia> g = add_edges(g, (:user, :like, :actor) => ([1,2,3,3,3], [3,5,1,9,4]))\nGNNHeteroGraph:\n num_nodes: Dict(:actor => 9, :movie => 13, :user => 3)\n num_edges: Dict((:user, :like, :actor) => 5, (:user, :rate, :movie) => 4)","category":"page"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"See rand_heterograph, rand_bipartite_heterograph for generating random heterographs. 
","category":"page"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"julia> g = rand_bipartite_heterograph((10, 15), 20)\nGNNHeteroGraph:\n num_nodes: Dict(:A => 10, :B => 15)\n num_edges: Dict((:A, :to, :B) => 20, (:B, :to, :A) => 20)","category":"page"},{"location":"heterograph/#Basic-Queries","page":"Heterogeneous Graphs","title":"Basic Queries","text":"","category":"section"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"Basic queries are similar to those for homogeneous graphs:","category":"page"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"julia> g = GNNHeteroGraph((:user, :rate, :movie) => ([1,1,2,3], [7,13,5,7]))\nGNNHeteroGraph:\n num_nodes: Dict(:movie => 13, :user => 3)\n num_edges: Dict((:user, :rate, :movie) => 4)\n\njulia> g.num_nodes\nDict{Symbol, Int64} with 2 entries:\n :user => 3\n :movie => 13\n\njulia> g.num_edges\nDict{Tuple{Symbol, Symbol, Symbol}, Int64} with 1 entry:\n (:user, :rate, :movie) => 4\n\n# source and target node for a given relation\njulia> edge_index(g, (:user, :rate, :movie))\n([1, 1, 2, 3], [7, 13, 5, 7])\n\n# node types\njulia> g.ntypes\n2-element Vector{Symbol}:\n :user\n :movie\n\n# edge types\njulia> g.etypes\n1-element Vector{Tuple{Symbol, Symbol, Symbol}}:\n (:user, :rate, :movie)","category":"page"},{"location":"heterograph/#Data-Features","page":"Heterogeneous Graphs","title":"Data Features","text":"","category":"section"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"Node, edge, and graph features can be added at construction time or later using:","category":"page"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"# equivalent to g.ndata[:user][:x] = ...\njulia> g[:user].x = rand(Float32, 64, 3);\n\njulia> g[:movie].z = rand(Float32, 64, 13);\n\n# equivalent to g.edata[(:user, :rate, :movie)][:e] = ...\njulia> g[:user, :rate, :movie].e = rand(Float32, 64, 4);\n\njulia> g\nGNNHeteroGraph:\n num_nodes: Dict(:movie => 13, :user => 3)\n num_edges: Dict((:user, :rate, :movie) => 4)\n ndata:\n :movie => DataStore(z = [64×13 Matrix{Float32}])\n :user => DataStore(x = [64×3 Matrix{Float32}])\n edata:\n (:user, :rate, :movie) => DataStore(e = [64×4 Matrix{Float32}])","category":"page"},{"location":"heterograph/#Batching","page":"Heterogeneous Graphs","title":"Batching","text":"","category":"section"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"Similarly to graphs, also heterographs can be batched together.","category":"page"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"julia> gs = [rand_bipartite_heterograph((5, 10), 20) for _ in 1:32];\n\njulia> Flux.batch(gs)\nGNNHeteroGraph:\n num_nodes: Dict(:A => 160, :B => 320)\n num_edges: Dict((:A, :to, :B) => 640, (:B, :to, :A) => 640)\n num_graphs: 32","category":"page"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"Batching is automatically performed by the DataLoader iterator when the collate option is set to true.","category":"page"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"using Flux: DataLoader\n\ndata = [rand_bipartite_heterograph((5, 10), 20, \n ndata=Dict(:A=>rand(Float32, 3, 5))) \n for _ in 1:320];\n\ntrain_loader = DataLoader(data, batchsize=16, 
shuffle=true, collate=true)\n\nfor g in train_loader\n @assert g.num_graphs == 16\n @assert g.num_nodes[:A] == 80\n @assert size(g.ndata[:A].x) == (3, 80) \n # ...\nend","category":"page"},{"location":"heterograph/#Graph-convolutions-on-heterographs","page":"Heterogeneous Graphs","title":"Graph convolutions on heterographs","text":"","category":"section"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"See HeteroGraphConv for how to perform convolutions on heterogeneous graphs.","category":"page"},{"location":"datasets/#Datasets","page":"Datasets","title":"Datasets","text":"","category":"section"},{"location":"datasets/","page":"Datasets","title":"Datasets","text":"GraphNeuralNetworks.jl doesn't come with its own datasets, but leverages those available in the Julia (and non-Julia) ecosystem. In particular, the examples in the GraphNeuralNetworks.jl repository make use of the MLDatasets.jl package. There you will find common graph datasets such as Cora, PubMed, Citeseer, TUDataset and many others.","category":"page"},{"location":"datasets/","page":"Datasets","title":"Datasets","text":"GraphNeuralNetworks.jl provides the mldataset2gnngraph method for interfacing with MLDatasets.jl.","category":"page"},{"location":"datasets/","page":"Datasets","title":"Datasets","text":"mldataset2gnngraph","category":"page"},{"location":"datasets/#GNNGraphs.mldataset2gnngraph","page":"Datasets","title":"GNNGraphs.mldataset2gnngraph","text":"mldataset2gnngraph(dataset)\n\nConvert a graph dataset from the package MLDatasets.jl into one or many GNNGraphs.\n\nExamples\n\njulia> using MLDatasets, GraphNeuralNetworks\n\njulia> mldataset2gnngraph(Cora())\nGNNGraph:\n num_nodes = 2708\n num_edges = 10556\n ndata:\n features => 1433×2708 Matrix{Float32}\n targets => 2708-element Vector{Int64}\n train_mask => 2708-element BitVector\n val_mask => 2708-element BitVector\n test_mask => 2708-element BitVector\n\n\n\n\n\n","category":"function"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"EditURL = \"/home/runner/work/GraphNeuralNetworks.jl/GraphNeuralNetworks.jl/GraphNeuralNetworks/docs/tutorials/index.md\"","category":"page"},{"location":"tutorials/#tutorials","page":"Tutorials","title":"Tutorials","text":"","category":"section"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"","category":"page"},{"location":"tutorials/#Introductory-tutorials","page":"Tutorials","title":"Introductory tutorials","text":"","category":"section"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
\n
\n
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"A beginner level introduction to graph machine learning using GraphNeuralNetworks.jl","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"(Image: card-cover-image)","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
\n
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"Hands-on introduction to Graph Neural Networks","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
\n
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
\n
\n
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"Tutorial for Graph Classification using GraphNeuralNetworks.jl","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"(Image: card-cover-image)","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
\n
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"Graph Classification with Graph Neural Networks","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
\n
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
\n
\n
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"Tutorial for Node classification using GraphNeuralNetworks.jl","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"(Image: card-cover-image)","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
\n
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"Node Classification with Graph Neural Networks","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
\n
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"","category":"page"},{"location":"tutorials/#Contributions","page":"Tutorials","title":"Contributions","text":"","category":"section"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"If you have a suggestion on adding new tutorials, feel free to create a new issue here. Users are invited to contribute demonstrations of their own. If you want to contribute new tutorials and looking for inspiration, checkout these tutorials from PyTorch Geometric. You are expected to use Pluto.jl notebooks with DemoCards.jl. Please check out existing tutorials for more details.","category":"page"},{"location":"dev/#Developer-Notes","page":"Developer Notes","title":"Developer Notes","text":"","category":"section"},{"location":"dev/#Develop-and-Managing-the-Monorepo","page":"Developer Notes","title":"Develop and Managing the Monorepo","text":"","category":"section"},{"location":"dev/#Development-Enviroment","page":"Developer Notes","title":"Development Enviroment","text":"","category":"section"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"GraphNeuralNetworks.jl is package hosted in a monorepo that contains multiple packages. The GraphNeuralNetworks.jl package depends on GNNGraphs.jl, also hosted in the same monorepo.","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"pkg> activate .\n\npkg> dev ./GNNGraphs","category":"page"},{"location":"dev/#Add-a-New-Layer","page":"Developer Notes","title":"Add a New Layer","text":"","category":"section"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"To add a new graph convolutional layer and make it available in both the Flux-based frontend (GraphNeuralNetworks.jl) and the Lux-based frontend (GNNLux), you need to:","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"Add the functional version to GNNlib\nAdd the stateful version to GraphNeuralNetworks\nAdd the stateless version to GNNLux\nAdd the layer to the table in docs/api/conv.md","category":"page"},{"location":"dev/#Versions-and-Tagging","page":"Developer Notes","title":"Versions and Tagging","text":"","category":"section"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"Each PR should update the version number in the Porject.toml file of each involved package if needed by semnatic versioning. For instance, when adding new features GNNGraphs could move from \"1.17.5\" to \"1.18.0-DEV\". The \"DEV\" will be removed when the package is tagged and released. Pay also attention to updating the compat bounds, e.g. 
GraphNeuralNetworks might require a newer version of GNNGraphs.","category":"page"},{"location":"dev/#Generate-Documentation-Locally","page":"Developer Notes","title":"Generate Documentation Locally","text":"","category":"section"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"For generating the documentation locally","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"cd docs\njulia","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"(@v1.10) pkg> activate .\n Activating project at `~/.julia/dev/GraphNeuralNetworks/docs`\n\n(docs) pkg> dev ../ ../GNNGraphs/\n Resolving package versions...\n No Changes to `~/.julia/dev/GraphNeuralNetworks/docs/Project.toml`\n No Changes to `~/.julia/dev/GraphNeuralNetworks/docs/Manifest.toml`\n\njulia> include(\"make.jl\")","category":"page"},{"location":"dev/#Benchmarking","page":"Developer Notes","title":"Benchmarking","text":"","category":"section"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"You can benchmark the effect on performance of your commits using the script perf/perf.jl.","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"First, checkout and benchmark the master branch:","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"julia> include(\"perf.jl\")\n\njulia> df = run_benchmarks()\n\n# observe results\njulia> for g in groupby(df, :layer); println(g, \"\\n\"); end\n\njulia> @save \"perf_master_20210803_mymachine.jld2\" dfmaster=df","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"Now checkout your branch and do the same:","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"julia> df = run_benchmarks()\n\njulia> @save \"perf_pr_20210803_mymachine.jld2\" dfpr=df","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"Finally, compare the results:","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"julia> @load \"perf_master_20210803_mymachine.jld2\"\n\njulia> @load \"perf_pr_20210803_mymachine.jld2\"\n\njulia> compare(dfpr, dfmaster)","category":"page"},{"location":"dev/#Caching-tutorials","page":"Developer Notes","title":"Caching tutorials","text":"","category":"section"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"Tutorials in GraphNeuralNetworks.jl are written in Pluto and rendered using DemoCards.jl and PlutoStaticHTML.jl. Rendering a Pluto notebook is time and resource-consuming, especially in a CI environment. So we use the caching functionality provided by PlutoStaticHTML.jl to reduce CI time.","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"If you are contributing a new tutorial or making changes to the existing notebook, generate the docs locally before committing/pushing. For caching to work, the cache environment(your local) and the documenter CI should have the same Julia version (e.g. \"v1.9.1\", also the patch number must match). 
So use the documenter CI Julia version for generating docs locally.","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"julia --version # check julia version before generating docs\njulia --project=docs docs/make.jl","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"Note: Use juliaup for easy switching of Julia versions.","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"During the doc generation process, DemoCards.jl stores the cache notebooks in docs/pluto_output. So add any changes made in this folder in your git commit. Remember that every file in this folder is machine-generated and should not be edited manually.","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"git add docs/pluto_output # add generated cache","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"Check the documenter CI logs to ensure that it used the local cache:","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"(Image: )","category":"page"},{"location":"api/utils/","page":"Utils","title":"Utils","text":"CurrentModule = GraphNeuralNetworks","category":"page"},{"location":"api/utils/#Utility-Functions","page":"Utils","title":"Utility Functions","text":"","category":"section"},{"location":"api/utils/#Index","page":"Utils","title":"Index","text":"","category":"section"},{"location":"api/utils/","page":"Utils","title":"Utils","text":"Order = [:type, :function]\nPages = [\"utils.md\"]","category":"page"},{"location":"api/utils/#Docs","page":"Utils","title":"Docs","text":"","category":"section"},{"location":"api/utils/#Graph-wise-operations","page":"Utils","title":"Graph-wise operations","text":"","category":"section"},{"location":"api/utils/","page":"Utils","title":"Utils","text":"GraphNeuralNetworks.reduce_nodes\nGraphNeuralNetworks.reduce_edges\nGraphNeuralNetworks.softmax_nodes\nGraphNeuralNetworks.softmax_edges\nGraphNeuralNetworks.broadcast_nodes\nGraphNeuralNetworks.broadcast_edges","category":"page"},{"location":"api/utils/#GNNlib.reduce_nodes","page":"Utils","title":"GNNlib.reduce_nodes","text":"reduce_nodes(aggr, g, x)\n\nFor a batched graph g, return the graph-wise aggregation of the node features x. The aggregation operator aggr can be +, mean, max, or min. The returned array will have last dimension g.num_graphs.\n\nSee also: reduce_edges.\n\n\n\n\n\nreduce_nodes(aggr, indicator::AbstractVector, x)\n\nReturn the graph-wise aggregation of the node features x given the graph indicator indicator. The aggregation operator aggr can be +, mean, max, or min.\n\nSee also graph_indicator.\n\n\n\n\n\n","category":"function"},{"location":"api/utils/#GNNlib.reduce_edges","page":"Utils","title":"GNNlib.reduce_edges","text":"reduce_edges(aggr, g, e)\n\nFor a batched graph g, return the graph-wise aggregation of the edge features e. The aggregation operator aggr can be +, mean, max, or min. 
The returned array will have last dimension g.num_graphs.\n\n\n\n\n\n","category":"function"},{"location":"api/utils/#GNNlib.softmax_nodes","page":"Utils","title":"GNNlib.softmax_nodes","text":"softmax_nodes(g, x)\n\nGraph-wise softmax of the node features x.\n\n\n\n\n\n","category":"function"},{"location":"api/utils/#GNNlib.softmax_edges","page":"Utils","title":"GNNlib.softmax_edges","text":"softmax_edges(g, e)\n\nGraph-wise softmax of the edge features e.\n\n\n\n\n\n","category":"function"},{"location":"api/utils/#GNNlib.broadcast_nodes","page":"Utils","title":"GNNlib.broadcast_nodes","text":"broadcast_nodes(g, x)\n\nGraph-wise broadcast array x of size (*, g.num_graphs) to size (*, g.num_nodes).\n\n\n\n\n\n","category":"function"},{"location":"api/utils/#GNNlib.broadcast_edges","page":"Utils","title":"GNNlib.broadcast_edges","text":"broadcast_edges(g, x)\n\nGraph-wise broadcast array x of size (*, g.num_graphs) to size (*, g.num_edges).\n\n\n\n\n\n","category":"function"},{"location":"api/utils/#Neighborhood-operations","page":"Utils","title":"Neighborhood operations","text":"","category":"section"},{"location":"api/utils/","page":"Utils","title":"Utils","text":"GraphNeuralNetworks.softmax_edge_neighbors","category":"page"},{"location":"api/utils/#GNNlib.softmax_edge_neighbors","page":"Utils","title":"GNNlib.softmax_edge_neighbors","text":"softmax_edge_neighbors(g, e)\n\nSoftmax over each node's neighborhood of the edge features e.\n\nmathbfe_jto i = frace^mathbfe_jto i\n sum_jin N(i) e^mathbfe_jto i\n\n\n\n\n\n","category":"function"},{"location":"api/utils/#NNlib","page":"Utils","title":"NNlib","text":"","category":"section"},{"location":"api/utils/","page":"Utils","title":"Utils","text":"Primitive functions implemented in NNlib.jl:","category":"page"},{"location":"api/utils/","page":"Utils","title":"Utils","text":"gather!\ngather\nscatter!\nscatter","category":"page"},{"location":"gnngraph/#Working-with-GNNGraph","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"","category":"section"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"The fundamental graph type in GraphNeuralNetworks.jl is the GNNGraph. A GNNGraph g is a directed graph with nodes labeled from 1 to g.num_nodes. The underlying implementation allows for efficient application of graph neural network operators, gpu movement, and storage of node/edge/graph related feature arrays.","category":"page"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"GNNGraph inherits from Graphs.jl's AbstractGraph, therefore it supports most functionality from that library. 
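As a minimal sketch of what this inheritance gives you (assuming a GNNGraph g built as in the examples below, and that Graphs is loaded), the usual Graphs.jl queries can be called directly:\n\nusing Graphs\n\nnv(g)               # number of nodes\nne(g)               # number of edges\noutneighbors(g, 1)  # nodes reachable from node 1\nhas_edge(g, 1, 2)   # check whether the edge 1 -> 2 exists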
","category":"page"},{"location":"gnngraph/#Graph-Creation","page":"Working with GNNGraph","title":"Graph Creation","text":"","category":"section"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"A GNNGraph can be created from several different data sources encoding the graph topology:","category":"page"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"using GraphNeuralNetworks, Graphs, SparseArrays\n\n\n# Construct a GNNGraph from from a Graphs.jl's graph\nlg = erdos_renyi(10, 30)\ng = GNNGraph(lg)\n\n# Same as above using convenience method rand_graph\ng = rand_graph(10, 60)\n\n# From an adjacency matrix\nA = sprand(10, 10, 0.3)\ng = GNNGraph(A)\n\n# From an adjacency list\nadjlist = [[2,3], [1,3], [1,2,4], [3]]\ng = GNNGraph(adjlist)\n\n# From COO representation\nsource = [1,1,2,2,3,3,3,4]\ntarget = [2,3,1,3,1,2,4,3]\ng = GNNGraph(source, target)","category":"page"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"See also the related methods Graphs.adjacency_matrix, edge_index, and adjacency_list.","category":"page"},{"location":"gnngraph/#Basic-Queries","page":"Working with GNNGraph","title":"Basic Queries","text":"","category":"section"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"julia> source = [1,1,2,2,3,3,3,4];\n\njulia> target = [2,3,1,3,1,2,4,3];\n\njulia> g = GNNGraph(source, target)\nGNNGraph:\n num_nodes: 4\n num_edges: 8\n\n\njulia> @assert g.num_nodes == 4 # number of nodes\n\njulia> @assert g.num_edges == 8 # number of edges\n\njulia> @assert g.num_graphs == 1 # number of subgraphs (a GNNGraph can batch many graphs together)\n\njulia> is_directed(g) # a GNNGraph is always directed\ntrue\n\njulia> is_bidirected(g) # for each edge, also the reverse edge is present\ntrue\n\njulia> has_self_loops(g)\nfalse\n\njulia> has_multi_edges(g) \nfalse","category":"page"},{"location":"gnngraph/#Data-Features","page":"Working with GNNGraph","title":"Data Features","text":"","category":"section"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"One or more arrays can be associated to nodes, edges, and (sub)graphs of a GNNGraph. They will be stored in the fields g.ndata, g.edata, and g.gdata respectively.","category":"page"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"The data fields are DataStore objects. DataStores conveniently offer an interface similar to both dictionaries and named tuples. 
Similarly to dictionaries, DataStores support addition of new features after creation time.","category":"page"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"The array contained in the datastores have last dimension equal to num_nodes (in ndata), num_edges (in edata), or num_graphs (in gdata) respectively.","category":"page"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"# Create a graph with a single feature array `x` associated to nodes\ng = rand_graph(10, 60, ndata = (; x = rand(Float32, 32, 10)))\n\ng.ndata.x # access the features\n\n# Equivalent definition passing directly the array\ng = rand_graph(10, 60, ndata = rand(Float32, 32, 10))\n\ng.ndata.x # `:x` is the default name for node features\n\ng.ndata.z = rand(Float32, 3, 10) # add new feature array `z`\n\n# For convenience, we can access the features through the shortcut\ng.x \n\n# You can have multiple feature arrays\ng = rand_graph(10, 60, ndata = (; x=rand(Float32, 32, 10), y=rand(Float32, 10)))\n\ng.ndata.y, g.ndata.x # or g.x, g.y\n\n# Attach an array with edge features.\n# Since `GNNGraph`s are directed, the number of edges\n# will be double that of the original Graphs' undirected graph.\ng = GNNGraph(erdos_renyi(10, 30), edata = rand(Float32, 60))\n@assert g.num_edges == 60\n\ng.edata.e # or g.e\n\n# If we pass only half of the edge features, they will be copied\n# on the reversed edges.\ng = GNNGraph(erdos_renyi(10, 30), edata = rand(Float32, 30))\n\n\n# Create a new graph from previous one, inheriting edge data\n# but replacing node data\ng′ = GNNGraph(g, ndata =(; z = ones(Float32, 16, 10)))\n\ng′.z\ng′.e","category":"page"},{"location":"gnngraph/#Edge-weights","page":"Working with GNNGraph","title":"Edge weights","text":"","category":"section"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"It is common to denote scalar edge features as edge weights. The GNNGraph has specific support for edge weights: they can be stored as part of internal representations of the graph (COO or adjacency matrix). 
Some graph convolutional layers, most notably the GCNConv, can use the edge weights to perform weighted sums over the nodes' neighborhoods.","category":"page"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"julia> source = [1, 1, 2, 2, 3, 3];\n\njulia> target = [2, 3, 1, 3, 1, 2];\n\njulia> weight = [1.0, 0.5, 2.1, 2.3, 4, 4.1];\n\njulia> g = GNNGraph(source, target, weight)\nGNNGraph:\n num_nodes: 3\n num_edges: 6\n\njulia> get_edge_weight(g)\n6-element Vector{Float64}:\n 1.0\n 0.5\n 2.1\n 2.3\n 4.0\n 4.1","category":"page"},{"location":"gnngraph/#Batches-and-Subgraphs","page":"Working with GNNGraph","title":"Batches and Subgraphs","text":"","category":"section"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"Multiple GNNGraphs can be batched together into a single graph that contains the total number of the original nodes and where the original graphs are disjoint subgraphs.","category":"page"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"using Flux\nusing Flux: DataLoader\n\ndata = [rand_graph(10, 30, ndata=rand(Float32, 3, 10)) for _ in 1:160]\ngall = Flux.batch(data)\n\n# gall is a GNNGraph containing many graphs\n@assert gall.num_graphs == 160 \n@assert gall.num_nodes == 1600 # 10 nodes x 160 graphs\n@assert gall.num_edges == 4800 # 30 undirected edges x 160 graphs\n\n# Let's create a mini-batch from gall\ng23 = getgraph(gall, 2:3)\n@assert g23.num_graphs == 2\n@assert g23.num_nodes == 20 # 10 nodes x 2 graphs\n@assert g23.num_edges == 60 # 30 undirected edges X 2 graphs\n\n# We can pass a GNNGraph to Flux's DataLoader\ntrain_loader = DataLoader(gall, batchsize=16, shuffle=true)\n\nfor g in train_loader\n @assert g.num_graphs == 16\n @assert g.num_nodes == 160\n @assert size(g.ndata.x) = (3, 160) \n # .....\nend\n\n# Access the nodes' graph memberships \ngraph_indicator(gall)","category":"page"},{"location":"gnngraph/#DataLoader-and-mini-batch-iteration","page":"Working with GNNGraph","title":"DataLoader and mini-batch iteration","text":"","category":"section"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"While constructing a batched graph and passing it to the DataLoader is always an option for mini-batch iteration, the recommended way for better performance is to pass an array of graphs directly and set the collate option to true:","category":"page"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"using Flux: DataLoader\n\ndata = [rand_graph(10, 30, ndata=rand(Float32, 3, 10)) for _ in 1:320]\n\ntrain_loader = DataLoader(data, batchsize=16, shuffle=true, collate=true)\n\nfor g in train_loader\n @assert g.num_graphs == 16\n @assert g.num_nodes == 160\n @assert size(g.ndata.x) = (3, 160) \n # .....\nend","category":"page"},{"location":"gnngraph/#Graph-Manipulation","page":"Working with GNNGraph","title":"Graph Manipulation","text":"","category":"section"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"g′ = add_self_loops(g)\ng′ = remove_self_loops(g)\ng′ = add_edges(g, [1, 2], [2, 3]) # add edges 1->2 and 2->3","category":"page"},{"location":"gnngraph/#GPU-movement","page":"Working with GNNGraph","title":"GPU movement","text":"","category":"section"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"Move a GNNGraph to a CUDA device using Flux.gpu method. 
","category":"page"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"using CUDA, Flux\n\ng_gpu = g |> Flux.gpu","category":"page"},{"location":"gnngraph/#Integration-with-Graphs.jl","page":"Working with GNNGraph","title":"Integration with Graphs.jl","text":"","category":"section"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"Since GNNGraph <: Graphs.AbstractGraph, we can use any functionality from Graphs.jl for querying and analyzing the graph structure. Moreover, a GNNGraph can be easily constructed from a Graphs.Graph or a Graphs.DiGraph:","category":"page"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"julia> import Graphs\n\njulia> using GraphNeuralNetworks\n\n# A Graphs.jl undirected graph\njulia> gu = Graphs.erdos_renyi(10, 20) \n{10, 20} undirected simple Int64 graph\n\n# Since GNNGraphs are undirected, the edges are doubled when converting \n# to GNNGraph\njulia> GNNGraph(gu)\nGNNGraph:\n num_nodes: 10\n num_edges: 40\n\n# A Graphs.jl directed graph\njulia> gd = Graphs.erdos_renyi(10, 20, is_directed=true)\n{10, 20} directed simple Int64 graph\n\njulia> GNNGraph(gd)\nGNNGraph:\n num_nodes: 10\n num_edges: 20","category":"page"},{"location":"gsoc/#Graph-Neural-Networks-Summer-of-Code","page":"Summer Of Code","title":"Graph Neural Networks - Summer of Code","text":"","category":"section"},{"location":"gsoc/","page":"Summer Of Code","title":"Summer Of Code","text":"Potential candidates to Google Summer of Code's scholarships can find out about the available projects involving GraphNeuralNetworks.jl on the dedicated page in the Julia Language website.","category":"page"},{"location":"models/#Models","page":"Model Building","title":"Models","text":"","category":"section"},{"location":"models/","page":"Model Building","title":"Model Building","text":"GraphNeuralNetworks.jl provides common graph convolutional layers by which you can assemble arbitrarily deep or complex models. GNN layers are compatible with Flux.jl ones, therefore expert Flux users are promptly able to define and train their models. ","category":"page"},{"location":"models/","page":"Model Building","title":"Model Building","text":"In what follows, we discuss two different styles for model creation: the explicit modeling style, more verbose but more flexible, and the implicit modeling style based on GNNChain, more concise but less flexible.","category":"page"},{"location":"models/#Explicit-modeling","page":"Model Building","title":"Explicit modeling","text":"","category":"section"},{"location":"models/","page":"Model Building","title":"Model Building","text":"In the explicit modeling style, the model is created according to the following steps:","category":"page"},{"location":"models/","page":"Model Building","title":"Model Building","text":"Define a new type for your model (GNN in the example below). Layers and submodels are fields.\nApply Flux.@layer to the new type to make it Flux's compatible (parameters' collection, gpu movement, etc...)\nOptionally define a convenience constructor for your model.\nDefine the forward pass by implementing the call method for your type.\nInstantiate the model. 
","category":"page"},{"location":"models/","page":"Model Building","title":"Model Building","text":"Here is an example of this construction:","category":"page"},{"location":"models/","page":"Model Building","title":"Model Building","text":"using Flux, Graphs, GraphNeuralNetworks\n\nstruct GNN # step 1\n conv1\n bn\n conv2\n dropout\n dense\nend\n\nFlux.@layer GNN # step 2\n\nfunction GNN(din::Int, d::Int, dout::Int) # step 3 \n GNN(GCNConv(din => d),\n BatchNorm(d),\n GraphConv(d => d, relu),\n Dropout(0.5),\n Dense(d, dout))\nend\n\nfunction (model::GNN)(g::GNNGraph, x) # step 4\n x = model.conv1(g, x)\n x = relu.(model.bn(x))\n x = model.conv2(g, x)\n x = model.dropout(x)\n x = model.dense(x)\n return x \nend\n\ndin, d, dout = 3, 4, 2 \nmodel = GNN(din, d, dout) # step 5\n\ng = rand_graph(10, 30)\nX = randn(Float32, din, 10) \n\ny = model(g, X) # output size: (dout, g.num_nodes)\ngrad = gradient(model -> sum(model(g, X)), model)","category":"page"},{"location":"models/#Implicit-modeling-with-GNNChains","page":"Model Building","title":"Implicit modeling with GNNChains","text":"","category":"section"},{"location":"models/","page":"Model Building","title":"Model Building","text":"While very flexible, the way in which we defined GNN model definition in last section is a bit verbose. In order to simplify things, we provide the GNNChain type. It is very similar to Flux's well known Chain. It allows to compose layers in a sequential fashion as Chain does, propagating the output of each layer to the next one. In addition, GNNChain handles propagates the input graph as well, providing it as a first argument to layers subtyping the GNNLayer abstract type. ","category":"page"},{"location":"models/","page":"Model Building","title":"Model Building","text":"Using GNNChain, the previous example becomes","category":"page"},{"location":"models/","page":"Model Building","title":"Model Building","text":"using Flux, Graphs, GraphNeuralNetworks\n\ndin, d, dout = 3, 4, 2 \ng = rand_graph(10, 30)\nX = randn(Float32, din, 10)\n\nmodel = GNNChain(GCNConv(din => d),\n BatchNorm(d),\n x -> relu.(x),\n GCNConv(d => d, relu),\n Dropout(0.5),\n Dense(d, dout))","category":"page"},{"location":"models/","page":"Model Building","title":"Model Building","text":"The GNNChain only propagates the graph and the node features. More complex scenarios, e.g. when also edge features are updated, have to be handled using the explicit definition of the forward pass. ","category":"page"},{"location":"models/","page":"Model Building","title":"Model Building","text":"A GNNChain opportunely propagates the graph into the branches created by the Flux.Parallel layer:","category":"page"},{"location":"models/","page":"Model Building","title":"Model Building","text":"AddResidual(l) = Parallel(+, identity, l) # implementing a skip/residual connection\n\nmodel = GNNChain( ResGatedGraphConv(din => d, relu),\n AddResidual(ResGatedGraphConv(d => d, relu)),\n AddResidual(ResGatedGraphConv(d => d, relu)),\n AddResidual(ResGatedGraphConv(d => d, relu)),\n GlobalPooling(mean),\n Dense(d, dout))\n\ny = model(g, X) # output size: (dout, g.num_graphs)","category":"page"},{"location":"models/#Embedding-a-graph-in-the-model","page":"Model Building","title":"Embedding a graph in the model","text":"","category":"section"},{"location":"models/","page":"Model Building","title":"Model Building","text":"Sometimes it is useful to consider a specific graph as a part of a model instead of its input. 
GraphNeuralNetworks.jl provides the WithGraph type to deal with this scenario.","category":"page"},{"location":"models/","page":"Model Building","title":"Model Building","text":"chain = GNNChain(GCNConv(din => d, relu),\n GCNConv(d => d))\n\n\ng = rand_graph(10, 30)\n\nmodel = WithGraph(chain, g)\n\nX = randn(Float32, din, 10)\n\n# Pass only X as input, the model already contains the graph.\ny = model(X) ","category":"page"},{"location":"models/","page":"Model Building","title":"Model Building","text":"An example of WithGraph usage is given in the graph neural ODE script in the examples folder.","category":"page"},{"location":"api/pool/","page":"Pooling Layers","title":"Pooling Layers","text":"CurrentModule = GraphNeuralNetworks","category":"page"},{"location":"api/pool/#Pooling-Layers","page":"Pooling Layers","title":"Pooling Layers","text":"","category":"section"},{"location":"api/pool/#Index","page":"Pooling Layers","title":"Index","text":"","category":"section"},{"location":"api/pool/","page":"Pooling Layers","title":"Pooling Layers","text":"Order = [:type, :function]\nPages = [\"pool.md\"]","category":"page"},{"location":"api/pool/#Docs","page":"Pooling Layers","title":"Docs","text":"","category":"section"},{"location":"api/pool/","page":"Pooling Layers","title":"Pooling Layers","text":"Modules = [GraphNeuralNetworks]\nPages = [\"layers/pool.jl\"]\nPrivate = false","category":"page"},{"location":"api/pool/#GraphNeuralNetworks.GlobalAttentionPool","page":"Pooling Layers","title":"GraphNeuralNetworks.GlobalAttentionPool","text":"GlobalAttentionPool(fgate, ffeat=identity)\n\nGlobal soft attention layer from the Gated Graph Sequence Neural Networks paper\n\nmathbfu_V = sum_iin V alpha_i f_feat(mathbfx_i)\n\nwhere the coefficients alpha_i are given by a softmax_nodes operation:\n\nalpha_i = frace^f_gate(mathbfx_i)\n sum_iin V e^f_gate(mathbfx_i)\n\nArguments\n\nfgate: The function f_gate mathbbR^D_in to mathbbR. It is tipically expressed by a neural network.\nffeat: The function f_feat mathbbR^D_in to mathbbR^D_out. It is tipically expressed by a neural network.\n\nExamples\n\nchin = 6\nchout = 5 \n\nfgate = Dense(chin, 1)\nffeat = Dense(chin, chout)\npool = GlobalAttentionPool(fgate, ffeat)\n\ng = Flux.batch([GNNGraph(random_regular_graph(10, 4), \n ndata=rand(Float32, chin, 10)) \n for i=1:3])\n\nu = pool(g, g.ndata.x)\n\n@assert size(u) == (chout, g.num_graphs)\n\n\n\n\n\n","category":"type"},{"location":"api/pool/#GraphNeuralNetworks.GlobalPool","page":"Pooling Layers","title":"GraphNeuralNetworks.GlobalPool","text":"GlobalPool(aggr)\n\nGlobal pooling layer for graph neural networks. Takes a graph and feature nodes as inputs and performs the operation\n\nmathbfu_V = square_i in V mathbfx_i\n\nwhere V is the set of nodes of the input graph and the type of aggregation represented by square is selected by the aggr argument. 
Commonly used aggregations are mean, max, and +.\n\nSee also reduce_nodes.\n\nExamples\n\nusing Flux, GraphNeuralNetworks, Graphs\n\npool = GlobalPool(mean)\n\ng = GNNGraph(erdos_renyi(10, 4))\nX = rand(32, 10)\npool(g, X) # => 32x1 matrix\n\n\ng = Flux.batch([GNNGraph(erdos_renyi(10, 4)) for _ in 1:5])\nX = rand(32, 50)\npool(g, X) # => 32x5 matrix\n\n\n\n\n\n","category":"type"},{"location":"api/pool/#GraphNeuralNetworks.Set2Set","page":"Pooling Layers","title":"GraphNeuralNetworks.Set2Set","text":"Set2Set(n_in, n_iters, n_layers = 1)\n\nSet2Set layer from the paper Order Matters: Sequence to sequence for sets.\n\nFor each graph in the batch, the layer computes an output vector of size 2*n_in by iterating the following steps n_iters times:\n\nmathbfq = mathrmLSTM(mathbfq_t-1^*)\nalpha_i = fracexp(mathbfq^T mathbfx_i)sum_j=1^N exp(mathbfq^T mathbfx_j) \nmathbfr = sum_i=1^N alpha_i mathbfx_i\nmathbfq^*_t = mathbfq mathbfr\n\nwhere N is the number of nodes in the graph, LSTM is a Long-Short-Term-Memory network with n_layers layers, input size 2*n_in and output size n_in.\n\nGiven a batch of graphs g and node features x, the layer returns a matrix of size (2*n_in, n_graphs). ```\n\n\n\n\n\n","category":"type"},{"location":"api/pool/#GraphNeuralNetworks.TopKPool","page":"Pooling Layers","title":"GraphNeuralNetworks.TopKPool","text":"TopKPool(adj, k, in_channel)\n\nTop-k pooling layer.\n\nArguments\n\nadj: Adjacency matrix of a graph.\nk: Top-k nodes are selected to pool together.\nin_channel: The dimension of input channel.\n\n\n\n\n\n","category":"type"},{"location":"messagepassing/#Message-Passing","page":"Message Passing","title":"Message Passing","text":"","category":"section"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"A generic message passing on graph takes the form","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"beginaligned\nmathbfm_jto i = phi(mathbfx_i mathbfx_j mathbfe_jto i) \nbarmathbfm_i = square_jin N(i) mathbfm_jto i \nmathbfx_i = gamma_x(mathbfx_i barmathbfm_i)\nmathbfe_jto i^prime = gamma_e(mathbfe_j to imathbfm_j to i)\nendaligned","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"where we refer to phi as to the message function, and to gamma_x and gamma_e as to the node update and edge update function respectively. The aggregation square is over the neighborhood N(i) of node i, and it is usually equal either to sum, to max or to a mean operation. ","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"In GraphNeuralNetworks.jl, the message passing mechanism is exposed by the propagate function. propagate takes care of materializing the node features on each edge, applying the message function, performing the aggregation, and returning barmathbfm. It is then left to the user to perform further node and edge updates, manipulating arrays of size D_node times num_nodes and D_edge times num_edges.","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"propagate is composed of two steps, also available as two independent methods:","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"apply_edges materializes node features on edges and applies the message function. 
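A simple concrete instance of this scheme takes the message to be just the neighbor feature x_j and the aggregation to be the sum: each node then receives the sum of its neighbors' features, which is the elementary operation behind many graph convolutions. 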
\naggregate_neighbors applies a reduction operator on the messages coming from the neighborhood of each node.","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"The whole propagation mechanism internally relies on the NNlib.gather and NNlib.scatter methods.","category":"page"},{"location":"messagepassing/#Examples","page":"Message Passing","title":"Examples","text":"","category":"section"},{"location":"messagepassing/#Basic-use-of-apply_edges-and-propagate","page":"Message Passing","title":"Basic use of apply_edges and propagate","text":"","category":"section"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"The function apply_edges can be used to broadcast node data on each edge and produce new edge data.","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"julia> using GraphNeuralNetworks, Graphs, Statistics\n\njulia> g = rand_graph(10, 20)\nGNNGraph:\n num_nodes = 10\n num_edges = 20\n\njulia> x = ones(2,10);\n\njulia> z = 2ones(2,10);\n\n# Return an edge features arrays (D × num_edges)\njulia> apply_edges((xi, xj, e) -> xi .+ xj, g, xi=x, xj=z)\n2×20 Matrix{Float64}:\n 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0\n 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0\n\n# now returning a named tuple\njulia> apply_edges((xi, xj, e) -> (a=xi .+ xj, b=xi .- xj), g, xi=x, xj=z)\n(a = [3.0 3.0 … 3.0 3.0; 3.0 3.0 … 3.0 3.0], b = [-1.0 -1.0 … -1.0 -1.0; -1.0 -1.0 … -1.0 -1.0])\n\n# Here we provide a named tuple input\njulia> apply_edges((xi, xj, e) -> xi.a + xi.b .* xj, g, xi=(a=x,b=z), xj=z)\n2×20 Matrix{Float64}:\n 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0\n 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"The function propagate instead performs the apply_edges operation but then also applies a reduction over each node's neighborhood (see aggregate_neighbors).","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"julia> propagate((xi, xj, e) -> xi .+ xj, g, +, xi=x, xj=z)\n2×10 Matrix{Float64}:\n 3.0 6.0 9.0 9.0 0.0 6.0 6.0 3.0 15.0 3.0\n 3.0 6.0 9.0 9.0 0.0 6.0 6.0 3.0 15.0 3.0\n\n# Previous output can be understood by looking at the degree\njulia> degree(g)\n10-element Vector{Int64}:\n 1\n 2\n 3\n 3\n 0\n 2\n 2\n 1\n 5\n 1","category":"page"},{"location":"messagepassing/#Implementing-a-custom-Graph-Convolutional-Layer","page":"Message Passing","title":"Implementing a custom Graph Convolutional Layer","text":"","category":"section"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"Let's implement a simple graph convolutional layer using the message passing framework. 
The convolution reads ","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"mathbfx_i = W cdot sum_j in N(i) mathbfx_j","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"We will also add a bias and an activation function.","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"using Flux, Graphs, GraphNeuralNetworks\n\nstruct GCN{A<:AbstractMatrix, B, F} <: GNNLayer\n weight::A\n bias::B\n σ::F\nend\n\nFlux.@layer GCN # allow gpu movement, select trainable params etc...\n\nfunction GCN(ch::Pair{Int,Int}, σ=identity)\n in, out = ch\n W = Flux.glorot_uniform(out, in)\n b = zeros(Float32, out)\n GCN(W, b, σ)\nend\n\nfunction (l::GCN)(g::GNNGraph, x::AbstractMatrix{T}) where T\n @assert size(x, 2) == g.num_nodes\n\n # Computes messages from source/neighbour nodes (j) to target/root nodes (i).\n # The message function will have to handle matrices of size (*, num_edges).\n # In this simple case we just let the neighbor features go through.\n message(xi, xj, e) = xj \n\n # The + operator gives the sum aggregation.\n # `mean`, `max`, `min`, and `*` are other possibilities.\n x = propagate(message, g, +, xj=x) \n\n return l.σ.(l.weight * x .+ l.bias)\nend","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"See the GATConv implementation here for a more complex example.","category":"page"},{"location":"messagepassing/#Built-in-message-functions","page":"Message Passing","title":"Built-in message functions","text":"","category":"section"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"In order to exploit optimized specializations of the propagate, it is recommended to use built-in message functions such as copy_xj whenever possible. ","category":"page"},{"location":"api/basic/","page":"Basic Layers","title":"Basic Layers","text":"CurrentModule = GraphNeuralNetworks","category":"page"},{"location":"api/basic/#Basic-Layers","page":"Basic Layers","title":"Basic Layers","text":"","category":"section"},{"location":"api/basic/#Index","page":"Basic Layers","title":"Index","text":"","category":"section"},{"location":"api/basic/","page":"Basic Layers","title":"Basic Layers","text":"Modules = [GraphNeuralNetworks]\nPages = [\"basic.md\"]","category":"page"},{"location":"api/basic/#Docs","page":"Basic Layers","title":"Docs","text":"","category":"section"},{"location":"api/basic/","page":"Basic Layers","title":"Basic Layers","text":"Modules = [GraphNeuralNetworks]\nPages = [\"layers/basic.jl\"]\nPrivate = false","category":"page"},{"location":"api/basic/#GraphNeuralNetworks.DotDecoder","page":"Basic Layers","title":"GraphNeuralNetworks.DotDecoder","text":"DotDecoder()\n\nA graph neural network layer that for given input graph g and node features x, returns the dot product x_i ⋅ xj on each edge. \n\nExamples\n\njulia> g = rand_graph(5, 6)\nGNNGraph:\n num_nodes = 5\n num_edges = 6\n\njulia> dotdec = DotDecoder()\nDotDecoder()\n\njulia> dotdec(g, rand(2, 5))\n1×6 Matrix{Float64}:\n 0.345098 0.458305 0.106353 0.345098 0.458305 0.106353\n\n\n\n\n\n","category":"type"},{"location":"api/basic/#GraphNeuralNetworks.GNNChain","page":"Basic Layers","title":"GraphNeuralNetworks.GNNChain","text":"GNNChain(layers...)\nGNNChain(name = layer, ...)\n\nCollects multiple layers / functions to be called in sequence on given input graph and input node features. 
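For example (a minimal sketch, assuming a graph g and a node feature matrix x of size num_features × g.num_nodes), the neighborhood sum used in the layer above can be written with a built-in message function as\n\nx_aggregated = propagate(copy_xj, g, +, xj = x)  # copy_xj passes the neighbor features through unchanged\n\nwhich computes the same result as the hand-written message function while dispatching to an optimized implementation. 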
\n\nIt allows to compose layers in a sequential fashion as Flux.Chain does, propagating the output of each layer to the next one. In addition, GNNChain handles the input graph as well, providing it as a first argument only to layers subtyping the GNNLayer abstract type. \n\nGNNChain supports indexing and slicing, m[2] or m[1:end-1], and if names are given, m[:name] == m[1] etc.\n\nExamples\n\njulia> using Flux, GraphNeuralNetworks\n\njulia> m = GNNChain(GCNConv(2=>5), \n BatchNorm(5), \n x -> relu.(x), \n Dense(5, 4))\nGNNChain(GCNConv(2 => 5), BatchNorm(5), #7, Dense(5 => 4))\n\njulia> x = randn(Float32, 2, 3);\n\njulia> g = rand_graph(3, 6)\nGNNGraph:\n num_nodes = 3\n num_edges = 6\n\njulia> m(g, x)\n4×3 Matrix{Float32}:\n -0.795592 -0.795592 -0.795592\n -0.736409 -0.736409 -0.736409\n 0.994925 0.994925 0.994925\n 0.857549 0.857549 0.857549\n\njulia> m2 = GNNChain(enc = m, \n dec = DotDecoder())\nGNNChain(enc = GNNChain(GCNConv(2 => 5), BatchNorm(5), #7, Dense(5 => 4)), dec = DotDecoder())\n\njulia> m2(g, x)\n1×6 Matrix{Float32}:\n 2.90053 2.90053 2.90053 2.90053 2.90053 2.90053\n\njulia> m2[:enc](g, x) == m(g, x)\ntrue\n\n\n\n\n\n","category":"type"},{"location":"api/basic/#GraphNeuralNetworks.GNNLayer","page":"Basic Layers","title":"GraphNeuralNetworks.GNNLayer","text":"abstract type GNNLayer end\n\nAn abstract type from which graph neural network layers are derived.\n\nSee also GNNChain.\n\n\n\n\n\n","category":"type"},{"location":"api/basic/#GraphNeuralNetworks.WithGraph","page":"Basic Layers","title":"GraphNeuralNetworks.WithGraph","text":"WithGraph(model, g::GNNGraph; traingraph=false)\n\nA type wrapping the model and tying it to the graph g. In the forward pass, can only take feature arrays as inputs, returning model(g, x...; kws...).\n\nIf traingraph=false, the graph's parameters won't be part of the trainable parameters in the gradient updates.\n\nExamples\n\ng = GNNGraph([1,2,3], [2,3,1])\nx = rand(Float32, 2, 3)\nmodel = SAGEConv(2 => 3)\nwg = WithGraph(model, g)\n# No need to feed the graph to `wg`\n@assert wg(x) == model(g, x)\n\ng2 = GNNGraph([1,1,2,3], [2,4,1,1])\nx2 = rand(Float32, 2, 4)\n# WithGraph will ignore the internal graph if fed with a new one. \n@assert wg(g2, x2) == model(g2, x2)\n\n\n\n\n\n","category":"type"},{"location":"api/temporalgraph/#Temporal-Graphs","page":"Temporal Graphs","title":"Temporal Graphs","text":"","category":"section"},{"location":"api/temporalgraph/#TemporalSnapshotsGNNGraph","page":"Temporal Graphs","title":"TemporalSnapshotsGNNGraph","text":"","category":"section"},{"location":"api/temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"Documentation page for the graph type TemporalSnapshotsGNNGraph and related methods, representing time varying graphs with time varying features.","category":"page"},{"location":"api/temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"Modules = [GNNGraphs]\nPages = [\"temporalsnapshotsgnngraph.jl\"]\nPrivate = false","category":"page"},{"location":"api/temporalgraph/#GNNGraphs.TemporalSnapshotsGNNGraph","page":"Temporal Graphs","title":"GNNGraphs.TemporalSnapshotsGNNGraph","text":"TemporalSnapshotsGNNGraph(snapshots::AbstractVector{<:GNNGraph})\n\nA type representing a temporal graph as a sequence of snapshots. In this case a snapshot is a GNNGraph.\n\nTemporalSnapshotsGNNGraph can store the feature array associated to the graph itself as a DataStore object, and it uses the DataStore objects of each snapshot for the node and edge features. 
The features can be passed at construction time or added later.\n\nConstructor Arguments\n\nsnapshot: a vector of snapshots, where each snapshot must have the same number of nodes.\n\nExamples\n\njulia> using GraphNeuralNetworks\n\njulia> snapshots = [rand_graph(10,20) for i in 1:5];\n\njulia> tg = TemporalSnapshotsGNNGraph(snapshots)\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10, 10, 10]\n num_edges: [20, 20, 20, 20, 20]\n num_snapshots: 5\n\njulia> tg.tgdata.x = rand(4); # add temporal graph feature\n\njulia> tg # show temporal graph with new feature\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10, 10, 10]\n num_edges: [20, 20, 20, 20, 20]\n num_snapshots: 5\n tgdata:\n x = 4-element Vector{Float64}\n\n\n\n\n\n","category":"type"},{"location":"api/temporalgraph/#GNNGraphs.add_snapshot-Tuple{TemporalSnapshotsGNNGraph, Int64, GNNGraph}","page":"Temporal Graphs","title":"GNNGraphs.add_snapshot","text":"add_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int, g::GNNGraph)\n\nReturn a TemporalSnapshotsGNNGraph created starting from tg by adding the snapshot g at time index t.\n\nExamples\n\njulia> using GraphNeuralNetworks\n\njulia> snapshots = [rand_graph(10, 20) for i in 1:5];\n\njulia> tg = TemporalSnapshotsGNNGraph(snapshots)\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10, 10, 10]\n num_edges: [20, 20, 20, 20, 20]\n num_snapshots: 5\n\njulia> new_tg = add_snapshot(tg, 3, rand_graph(10, 16)) # add a new snapshot at time 3\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10, 10, 10, 10]\n num_edges: [20, 20, 16, 20, 20, 20]\n num_snapshots: 6\n\n\n\n\n\n","category":"method"},{"location":"api/temporalgraph/#GNNGraphs.remove_snapshot-Tuple{TemporalSnapshotsGNNGraph, Int64}","page":"Temporal Graphs","title":"GNNGraphs.remove_snapshot","text":"remove_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int)\n\nReturn a TemporalSnapshotsGNNGraph created starting from tg by removing the snapshot at time index t.\n\nExamples\n\njulia> using GraphNeuralNetworks\n\njulia> snapshots = [rand_graph(10,20), rand_graph(10,14), rand_graph(10,22)];\n\njulia> tg = TemporalSnapshotsGNNGraph(snapshots)\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10]\n num_edges: [20, 14, 22]\n num_snapshots: 3\n\njulia> new_tg = remove_snapshot(tg, 2) # remove snapshot at time 2\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10]\n num_edges: [20, 22]\n num_snapshots: 2\n\n\n\n\n\n","category":"method"},{"location":"api/temporalgraph/#TemporalSnapshotsGNNGraph-random-generators","page":"Temporal Graphs","title":"TemporalSnapshotsGNNGraph random generators","text":"","category":"section"},{"location":"api/temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"rand_temporal_radius_graph\nrand_temporal_hyperbolic_graph","category":"page"},{"location":"api/temporalgraph/#GNNGraphs.rand_temporal_radius_graph","page":"Temporal Graphs","title":"GNNGraphs.rand_temporal_radius_graph","text":"rand_temporal_radius_graph(number_nodes::Int, \n number_snapshots::Int,\n speed::AbstractFloat,\n r::AbstractFloat;\n self_loops = false,\n dir = :in,\n kws...)\n\nCreate a random temporal graph given number_nodes nodes and number_snapshots snapshots. First, the positions of the nodes are randomly generated in the unit square. Two nodes are connected if their distance is less than a given radius r. Each following snapshot is obtained by applying the same construction to new positions obtained as follows. 
For each snapshot, the new positions of the points are determined by applying random independent displacement vectors to the previous positions. The direction of the displacement is chosen uniformly at random and its length is chosen uniformly in [0, speed]. Then the connections are recomputed. If a point happens to move outside the boundary, its position is updated as if it had bounced off the boundary.\n\nArguments\n\nnumber_nodes: The number of nodes of each snapshot.\nnumber_snapshots: The number of snapshots.\nspeed: The speed to update the nodes.\nr: The radius of connection.\nself_loops: If true, consider the node itself among its neighbors, in which case the graph will contain self-loops. \ndir: The direction of the edges. If dir=:in edges go from the neighbors to the central node. If dir=:out we have the opposite direction.\nkws: Further keyword arguments will be passed to the GNNGraph constructor of each snapshot.\n\nExample\n\njulia> n, snaps, s, r = 10, 5, 0.1, 1.5;\n\njulia> tg = rand_temporal_radius_graph(n,snaps,s,r) # complete graph at each snapshot\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10, 10, 10]\n num_edges: [90, 90, 90, 90, 90]\n num_snapshots: 5\n\n\n\n\n\n","category":"function"},{"location":"api/temporalgraph/#GNNGraphs.rand_temporal_hyperbolic_graph","page":"Temporal Graphs","title":"GNNGraphs.rand_temporal_hyperbolic_graph","text":"rand_temporal_hyperbolic_graph(number_nodes::Int, \n number_snapshots::Int;\n α::Real,\n R::Real,\n speed::Real,\n ζ::Real=1,\n self_loop = false,\n kws...)\n\nCreate a random temporal graph given number_nodes nodes and number_snapshots snapshots. First, the positions of the nodes are generated with a quasi-uniform distribution (depending on the parameter α) in hyperbolic space within a disk of radius R. Two nodes are connected if their hyperbolic distance is less than R. Each following snapshot is created in order to keep the same initial distribution.\n\nArguments\n\nnumber_nodes: The number of nodes of each snapshot.\nnumber_snapshots: The number of snapshots.\nα: The parameter that controls the position of the points. If α=ζ, the points are uniformly distributed on the disk of radius R. If α>ζ, the points are more concentrated in the center of the disk. 
If α<ζ, the points are more concentrated at the boundary of the disk.\nR: The radius of the disk and of connection.\nspeed: The speed to update the nodes.\nζ: The parameter that controls the curvature of the disk.\nself_loops: If true, consider the node itself among its neighbors, in which case the graph will contain self-loops.\nkws: Further keyword arguments will be passed to the GNNGraph constructor of each snapshot.\n\nExample\n\njulia> n, snaps, α, R, speed, ζ = 10, 5, 1.0, 4.0, 0.1, 1.0;\n\njulia> thg = rand_temporal_hyperbolic_graph(n, snaps; α, R, speed, ζ)\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10, 10, 10]\n num_edges: [44, 46, 48, 42, 38]\n num_snapshots: 5\n\nReferences\n\nSection D of the paper Dynamic Hidden-Variable Network Models and the paper Hyperbolic Geometry of Complex Networks\n\n\n\n\n\n","category":"function"},{"location":"api/heterograph/#Hetereogeneous-Graphs","page":"Heterogeneous Graphs","title":"Hetereogeneous Graphs","text":"","category":"section"},{"location":"api/heterograph/#GNNHeteroGraph","page":"Heterogeneous Graphs","title":"GNNHeteroGraph","text":"","category":"section"},{"location":"api/heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"Documentation page for the type GNNHeteroGraph representing heterogeneous graphs, where nodes and edges can have different types.","category":"page"},{"location":"api/heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"Modules = [GNNGraphs]\nPages = [\"gnnheterograph.jl\"]\nPrivate = false","category":"page"},{"location":"api/heterograph/#GNNGraphs.GNNHeteroGraph","page":"Heterogeneous Graphs","title":"GNNGraphs.GNNHeteroGraph","text":"GNNHeteroGraph(data; [ndata, edata, gdata, num_nodes])\nGNNHeteroGraph(pairs...; [ndata, edata, gdata, num_nodes])\n\nA type representing a heterogeneous graph structure. It is similar to GNNGraph but nodes and edges are of different types.\n\nConstructor Arguments\n\ndata: A dictionary or an iterable object that maps (source_type, edge_type, target_type) triples to (source, target) index vectors (or to (source, target, weight) if also edge weights are present).\npairs: Passing multiple relations as pairs is equivalent to passing data=Dict(pairs...).\nndata: Node features. A dictionary of arrays or named tuple of arrays. The size of the last dimension of each array must be given by g.num_nodes.\nedata: Edge features. A dictionary of arrays or named tuple of arrays. Default nothing. The size of the last dimension of each array must be given by g.num_edges. Default nothing.\ngdata: Graph features. An array or named tuple of arrays whose last dimension has size num_graphs. Default nothing.\nnum_nodes: The number of nodes for each type. If not specified, inferred from data. 
Default nothing.\n\nFields\n\ngraph: A dictionary that maps (sourcetype, edgetype, target_type) triples to (source, target) index vectors.\nnum_nodes: The number of nodes for each type.\nnum_edges: The number of edges for each type.\nndata: Node features.\nedata: Edge features.\ngdata: Graph features.\nntypes: The node types.\netypes: The edge types.\n\nExamples\n\njulia> using GraphNeuralNetworks\n\njulia> nA, nB = 10, 20;\n\njulia> num_nodes = Dict(:A => nA, :B => nB);\n\njulia> edges1 = (rand(1:nA, 20), rand(1:nB, 20))\n([4, 8, 6, 3, 4, 7, 2, 7, 3, 2, 3, 4, 9, 4, 2, 9, 10, 1, 3, 9], [6, 4, 20, 8, 16, 7, 12, 16, 5, 4, 6, 20, 11, 19, 17, 9, 12, 2, 18, 12])\n\njulia> edges2 = (rand(1:nB, 30), rand(1:nA, 30))\n([17, 5, 2, 4, 5, 3, 8, 7, 9, 7 … 19, 8, 20, 7, 16, 2, 9, 15, 8, 13], [1, 1, 3, 1, 1, 3, 2, 7, 4, 4 … 7, 10, 6, 3, 4, 9, 1, 5, 8, 5])\n\njulia> data = ((:A, :rel1, :B) => edges1, (:B, :rel2, :A) => edges2);\n\njulia> hg = GNNHeteroGraph(data; num_nodes)\nGNNHeteroGraph:\n num_nodes: (:A => 10, :B => 20)\n num_edges: ((:A, :rel1, :B) => 20, (:B, :rel2, :A) => 30)\n\njulia> hg.num_edges\nDict{Tuple{Symbol, Symbol, Symbol}, Int64} with 2 entries:\n(:A, :rel1, :B) => 20\n(:B, :rel2, :A) => 30\n\n# Let's add some node features\njulia> ndata = Dict(:A => (x = rand(2, nA), y = rand(3, num_nodes[:A])),\n :B => rand(10, nB));\n\njulia> hg = GNNHeteroGraph(data; num_nodes, ndata)\nGNNHeteroGraph:\n num_nodes: (:A => 10, :B => 20)\n num_edges: ((:A, :rel1, :B) => 20, (:B, :rel2, :A) => 30)\n ndata:\n :A => (x = 2×10 Matrix{Float64}, y = 3×10 Matrix{Float64})\n :B => x = 10×20 Matrix{Float64}\n\n# Access features of nodes of type :A\njulia> hg.ndata[:A].x\n2×10 Matrix{Float64}:\n 0.825882 0.0797502 0.245813 0.142281 0.231253 0.685025 0.821457 0.888838 0.571347 0.53165\n 0.631286 0.316292 0.705325 0.239211 0.533007 0.249233 0.473736 0.595475 0.0623298 0.159307\n\nSee also GNNGraph for a homogeneous graph type and rand_heterograph for a function to generate random heterographs.\n\n\n\n\n\n","category":"type"},{"location":"api/heterograph/#GNNGraphs.edge_type_subgraph-Tuple{GNNHeteroGraph, Tuple{Symbol, Symbol, Symbol}}","page":"Heterogeneous Graphs","title":"GNNGraphs.edge_type_subgraph","text":"edge_type_subgraph(g::GNNHeteroGraph, edge_ts)\n\nReturn a subgraph of g that contains only the edges of type edge_ts. Edge types can be specified as a single edge type (i.e. a tuple containing 3 symbols) or a vector of edge types.\n\n\n\n\n\n","category":"method"},{"location":"api/heterograph/#GNNGraphs.num_edge_types-Tuple{GNNGraph}","page":"Heterogeneous Graphs","title":"GNNGraphs.num_edge_types","text":"num_edge_types(g)\n\nReturn the number of edge types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique edge types.\n\n\n\n\n\n","category":"method"},{"location":"api/heterograph/#GNNGraphs.num_node_types-Tuple{GNNGraph}","page":"Heterogeneous Graphs","title":"GNNGraphs.num_node_types","text":"num_node_types(g)\n\nReturn the number of node types in the graph. For GNNGraphs, this is always 1. 
For GNNHeteroGraphs, this is the number of unique node types.\n\n\n\n\n\n","category":"method"},{"location":"api/heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"Graphs.has_edge(::GNNHeteroGraph, ::Tuple{Symbol, Symbol, Symbol}, ::Integer, ::Integer)","category":"page"},{"location":"api/heterograph/#Graphs.has_edge-Tuple{GNNHeteroGraph, Tuple{Symbol, Symbol, Symbol}, Integer, Integer}","page":"Heterogeneous Graphs","title":"Graphs.has_edge","text":"has_edge(g::GNNHeteroGraph, edge_t, i, j)\n\nReturn true if there is an edge of type edge_t from node i to node j in g.\n\nExamples\n\njulia> g = rand_bipartite_heterograph((2, 2), (4, 0), bidirected=false)\nGNNHeteroGraph:\n num_nodes: (:A => 2, :B => 2)\n num_edges: ((:A, :to, :B) => 4, (:B, :to, :A) => 0)\n\njulia> has_edge(g, (:A,:to,:B), 1, 1)\ntrue\n\njulia> has_edge(g, (:B,:to,:A), 1, 1)\nfalse\n\n\n\n\n\n","category":"method"},{"location":"api/heterograph/#Heterogeneous-Graph-Convolutions","page":"Heterogeneous Graphs","title":"Heterogeneous Graph Convolutions","text":"","category":"section"},{"location":"api/heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"Heterogeneous graph convolutions are implemented in the type HeteroGraphConv. HeteroGraphConv relies on standard graph convolutional layers to perform message passing on the different relations. See the table at this page for the supported layers.","category":"page"},{"location":"api/heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"HeteroGraphConv","category":"page"},{"location":"api/heterograph/#GraphNeuralNetworks.HeteroGraphConv","page":"Heterogeneous Graphs","title":"GraphNeuralNetworks.HeteroGraphConv","text":"HeteroGraphConv(itr; aggr = +)\nHeteroGraphConv(pairs...; aggr = +)\n\nA convolutional layer for heterogeneous graphs.\n\nThe itr argument is an iterator of pairs of the form edge_t => layer, where edge_t is a 3-tuple of the form (src_node_type, edge_type, dst_node_type), and layer is a convolutional layers for homogeneous graphs. \n\nEach convolution is applied to the corresponding relation. Since a node type can be involved in multiple relations, the single convolution outputs have to be aggregated using the aggr function. The default is to sum the outputs.\n\nForward Arguments\n\ng::GNNHeteroGraph: The input graph.\nx::Union{NamedTuple,Dict}: The input node features. 
The keys are node types and the values are node feature tensors.\n\nExamples\n\njulia> g = rand_bipartite_heterograph((10, 15), 20)\nGNNHeteroGraph:\n num_nodes: Dict(:A => 10, :B => 15)\n num_edges: Dict((:A, :to, :B) => 20, (:B, :to, :A) => 20)\n\njulia> x = (A = rand(Float32, 64, 10), B = rand(Float32, 64, 15));\n\njulia> layer = HeteroGraphConv((:A, :to, :B) => GraphConv(64 => 32, relu),\n (:B, :to, :A) => GraphConv(64 => 32, relu));\n\njulia> y = layer(g, x); # output is a named tuple\n\njulia> size(y.A) == (32, 10) && size(y.B) == (32, 15)\ntrue\n\n\n\n\n\n","category":"type"},{"location":"api/temporalconv/","page":"Temporal Graph-Convolutional Layers","title":"Temporal Graph-Convolutional Layers","text":"CurrentModule = GraphNeuralNetworks","category":"page"},{"location":"api/temporalconv/#Temporal-Graph-Convolutional-Layers","page":"Temporal Graph-Convolutional Layers","title":"Temporal Graph-Convolutional Layers","text":"","category":"section"},{"location":"api/temporalconv/","page":"Temporal Graph-Convolutional Layers","title":"Temporal Graph-Convolutional Layers","text":"Convolutions for time-varying graphs (temporal graphs) such as the TemporalSnapshotsGNNGraph.","category":"page"},{"location":"api/temporalconv/#Docs","page":"Temporal Graph-Convolutional Layers","title":"Docs","text":"","category":"section"},{"location":"api/temporalconv/","page":"Temporal Graph-Convolutional Layers","title":"Temporal Graph-Convolutional Layers","text":"Modules = [GraphNeuralNetworks]\nPages = [\"layers/temporalconv.jl\"]\nPrivate = false","category":"page"},{"location":"api/temporalconv/#GraphNeuralNetworks.A3TGCN","page":"Temporal Graph-Convolutional Layers","title":"GraphNeuralNetworks.A3TGCN","text":"A3TGCN(in => out; [bias, init, init_state, add_self_loops, use_edge_weight])\n\nAttention Temporal Graph Convolutional Network (A3T-GCN) model from the paper A3T-GCN: Attention Temporal Graph Convolutional Network for Traffic Forecasting.\n\nPerforms a TGCN layer, followed by a soft attention layer.\n\nArguments\n\nin: Number of input features.\nout: Number of output features.\nbias: Add learnable bias. Default true.\ninit: Weights' initializer. Default glorot_uniform.\ninit_state: Initial state of the hidden stat of the GRU layer. Default zeros32.\nadd_self_loops: Add self loops to the graph before performing the convolution. Default false.\nuse_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.\n\nExamples\n\njulia> a3tgcn = A3TGCN(2 => 6)\nA3TGCN(2 => 6)\n\njulia> g, x = rand_graph(5, 10), rand(Float32, 2, 5);\n\njulia> y = a3tgcn(g,x);\n\njulia> size(y)\n(6, 5)\n\njulia> Flux.reset!(a3tgcn);\n\njulia> y = a3tgcn(rand_graph(5, 10), rand(Float32, 2, 5, 20));\n\njulia> size(y)\n(6, 5)\n\nwarning: Batch size changes\nFailing to call reset! 
when the input batch size changes can lead to unexpected behavior.\n\n\n\n\n\n","category":"type"},{"location":"api/temporalconv/#GraphNeuralNetworks.EvolveGCNO","page":"Temporal Graph-Convolutional Layers","title":"GraphNeuralNetworks.EvolveGCNO","text":"EvolveGCNO(ch; bias = true, init = glorot_uniform, init_state = Flux.zeros32)\n\nEvolving Graph Convolutional Network (EvolveGCNO) layer from the paper EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs.\n\nPerfoms a Graph Convolutional layer with parameters derived from a Long Short-Term Memory (LSTM) layer across the snapshots of the temporal graph.\n\nArguments\n\nin: Number of input features.\nout: Number of output features.\nbias: Add learnable bias. Default true.\ninit: Weights' initializer. Default glorot_uniform.\ninit_state: Initial state of the hidden stat of the LSTM layer. Default zeros32.\n\nExamples\n\njulia> tg = TemporalSnapshotsGNNGraph([rand_graph(10,20; ndata = rand(4,10)), rand_graph(10,14; ndata = rand(4,10)), rand_graph(10,22; ndata = rand(4,10))])\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10]\n num_edges: [20, 14, 22]\n num_snapshots: 3\n\njulia> ev = EvolveGCNO(4 => 5)\nEvolveGCNO(4 => 5)\n\njulia> size(ev(tg, tg.ndata.x))\n(3,)\n\njulia> size(ev(tg, tg.ndata.x)[1])\n(5, 10)\n\n\n\n\n\n","category":"type"},{"location":"api/temporalconv/#GraphNeuralNetworks.DCGRU-Tuple{Any, Any, Any}","page":"Temporal Graph-Convolutional Layers","title":"GraphNeuralNetworks.DCGRU","text":"DCGRU(in => out, k, n; [bias, init, init_state])\n\nDiffusion Convolutional Recurrent Neural Network (DCGRU) layer from the paper Diffusion Convolutional Recurrent Neural Network: Data-driven Traffic Forecasting.\n\nPerforms a Diffusion Convolutional layer to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.\n\nArguments\n\nin: Number of input features.\nout: Number of output features.\nk: Diffusion step.\nn: Number of nodes in the graph.\nbias: Add learnable bias. Default true.\ninit: Weights' initializer. Default glorot_uniform.\ninit_state: Initial state of the hidden stat of the LSTM layer. Default zeros32.\n\nExamples\n\njulia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);\n\njulia> dcgru = DCGRU(2 => 5, 2, g1.num_nodes);\n\njulia> y = dcgru(g1, x1);\n\njulia> size(y)\n(5, 5)\n\njulia> g2, x2 = rand_graph(5, 10), rand(Float32, 2, 5, 30);\n\njulia> z = dcgru(g2, x2);\n\njulia> size(z)\n(5, 5, 30)\n\n\n\n\n\n","category":"method"},{"location":"api/temporalconv/#GraphNeuralNetworks.GConvGRU-Tuple{Any, Any, Any}","page":"Temporal Graph-Convolutional Layers","title":"GraphNeuralNetworks.GConvGRU","text":"GConvGRU(in => out, k, n; [bias, init, init_state])\n\nGraph Convolutional Gated Recurrent Unit (GConvGRU) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.\n\nPerforms a layer of ChebConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.\n\nArguments\n\nin: Number of input features.\nout: Number of output features.\nk: Chebyshev polynomial order.\nn: Number of nodes in the graph.\nbias: Add learnable bias. Default true.\ninit: Weights' initializer. Default glorot_uniform.\ninit_state: Initial state of the hidden stat of the GRU layer. 
Default zeros32.\n\nExamples\n\njulia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);\n\njulia> ggru = GConvGRU(2 => 5, 2, g1.num_nodes);\n\njulia> y = ggru(g1, x1);\n\njulia> size(y)\n(5, 5)\n\njulia> g2, x2 = rand_graph(5, 10), rand(Float32, 2, 5, 30);\n\njulia> z = ggru(g2, x2);\n\njulia> size(z)\n(5, 5, 30)\n\n\n\n\n\n","category":"method"},{"location":"api/temporalconv/#GraphNeuralNetworks.GConvLSTM-Tuple{Any, Any, Any}","page":"Temporal Graph-Convolutional Layers","title":"GraphNeuralNetworks.GConvLSTM","text":"GConvLSTM(in => out, k, n; [bias, init, init_state])\n\nGraph Convolutional Long Short-Term Memory (GConvLSTM) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks. \n\nPerforms a layer of ChebConv to model spatial dependencies, followed by a Long Short-Term Memory (LSTM) cell to model temporal dependencies.\n\nArguments\n\nin: Number of input features.\nout: Number of output features.\nk: Chebyshev polynomial order.\nn: Number of nodes in the graph.\nbias: Add learnable bias. Default true.\ninit: Weights' initializer. Default glorot_uniform.\ninit_state: Initial state of the hidden stat of the LSTM layer. Default zeros32.\n\nExamples\n\njulia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);\n\njulia> gclstm = GConvLSTM(2 => 5, 2, g1.num_nodes);\n\njulia> y = gclstm(g1, x1);\n\njulia> size(y)\n(5, 5)\n\njulia> g2, x2 = rand_graph(5, 10), rand(Float32, 2, 5, 30);\n\njulia> z = gclstm(g2, x2);\n\njulia> size(z)\n(5, 5, 30)\n\n\n\n\n\n","category":"method"},{"location":"api/temporalconv/#GraphNeuralNetworks.TGCN-Tuple{Any}","page":"Temporal Graph-Convolutional Layers","title":"GraphNeuralNetworks.TGCN","text":"TGCN(in => out; [bias, init, init_state, add_self_loops, use_edge_weight])\n\nTemporal Graph Convolutional Network (T-GCN) recurrent layer from the paper T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction.\n\nPerforms a layer of GCNConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.\n\nArguments\n\nin: Number of input features.\nout: Number of output features.\nbias: Add learnable bias. Default true.\ninit: Weights' initializer. Default glorot_uniform.\ninit_state: Initial state of the hidden stat of the GRU layer. Default zeros32.\nadd_self_loops: Add self loops to the graph before performing the convolution. Default false.\nuse_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.\n\nExamples\n\njulia> tgcn = TGCN(2 => 6)\nRecur(\n TGCNCell(\n GCNConv(2 => 6, σ), # 18 parameters\n GRUv3Cell(6 => 6), # 240 parameters\n Float32[0.0; 0.0; … ; 0.0; 0.0;;], # 6 parameters (all zero)\n 2,\n 6,\n ),\n) # Total: 8 trainable arrays, 264 parameters,\n # plus 1 non-trainable, 6 parameters, summarysize 1.492 KiB.\n\njulia> g, x = rand_graph(5, 10), rand(Float32, 2, 5);\n\njulia> y = tgcn(g, x);\n\njulia> size(y)\n(6, 5)\n\njulia> Flux.reset!(tgcn);\n\njulia> tgcn(rand_graph(5, 10), rand(Float32, 2, 5, 20)) |> size # batch size of 20\n(6, 5, 20)\n\nwarning: Batch size changes\nFailing to call reset! 
when the input batch size changes can lead to unexpected behavior.\n\n\n\n\n\n","category":"method"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/#Graph-Classification-with-Graph-Neural-Networks","page":"Graph Classification with Graph Neural Networks","title":"Graph Classification with Graph Neural Networks","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/","page":"Graph Classification with Graph Neural Networks","title":"Graph Classification with Graph Neural Networks","text":"(Image: Source code) (Image: Author) (Image: Update time)","category":"page"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/","page":"Graph Classification with Graph Neural Networks","title":"Graph Classification with Graph Neural Networks","text":"\n\n\n\n
begin\n    using Flux\n    using Flux: onecold, onehotbatch, logitcrossentropy\n    using Flux: DataLoader\n    using GraphNeuralNetworks\n    using MLDatasets\n    using MLUtils\n    using LinearAlgebra, Random, Statistics\n\n    ENV[\"DATADEPS_ALWAYS_ACCEPT\"] = \"true\"  # don't ask for dataset download confirmation\n    Random.seed!(17) # for reproducibility\nend;
\n\n\n\n

This Pluto notebook is a Julia adaptation of the PyTorch Geometric tutorials that can be found here.

In this tutorial session we will have a closer look at how to apply Graph Neural Networks (GNNs) to the task of graph classification. Graph classification refers to the problem of classifying entire graphs (in contrast to nodes), given a dataset of graphs, based on some structural graph properties. Here, we want to embed entire graphs in such a way that they are linearly separable for the task at hand.

The most common task for graph classification is molecular property prediction, in which molecules are represented as graphs, and the task may be to infer whether a molecule inhibits HIV virus replication or not.

The TU Dortmund University has collected a wide range of different graph classification datasets, known as the TUDatasets, which are also accessible via MLDatasets.jl. Let's load and inspect one of the smaller ones, the MUTAG dataset:

\n\n
dataset = TUDataset(\"MUTAG\")
\n
dataset TUDataset:\n  name        =>    MUTAG\n  metadata    =>    Dict{String, Any} with 1 entry\n  graphs      =>    188-element Vector{MLDatasets.Graph}\n  graph_data  =>    (targets = \"188-element Vector{Int64}\",)\n  num_nodes   =>    3371\n  num_edges   =>    7442\n  num_graphs  =>    188
\n\n
dataset.graph_data.targets |> union
\n
2-element Vector{Int64}:\n  1\n -1
\n\n
g1, y1 = dataset[1] #get the first graph and target
\n
(graphs = Graph(17, 38), targets = 1)
\n\n
reduce(vcat, g.node_data.targets for (g, _) in dataset) |> union
\n
7-element Vector{Int64}:\n 0\n 1\n 2\n 3\n 4\n 5\n 6
\n\n
reduce(vcat, g.edge_data.targets for (g, _) in dataset) |> union
\n
4-element Vector{Int64}:\n 0\n 1\n 2\n 3
\n\n\n

This dataset provides 188 different graphs, and the task is to classify each graph into one out of two classes.

By inspecting the first graph object of the dataset, we can see that it comes with 17 nodes and 38 edges. It also comes with exactly one graph label, and provides additional node labels (7 classes) and edge labels (4 classes). However, for the sake of simplicity, we will not make use of edge labels.

\n\n\n

We now convert the MLDatasets.jl graph types to our GNNGraphs and also one-hot encode both the node labels (which will be used as input features) and the graph labels (which we want to predict):

\n\n
begin\n    graphs = mldataset2gnngraph(dataset)\n    graphs = [GNNGraph(g,\n                       ndata = Float32.(onehotbatch(g.ndata.targets, 0:6)),\n                       edata = nothing)\n              for g in graphs]\n    y = onehotbatch(dataset.graph_data.targets, [-1, 1])\nend
\n
2×188 OneHotMatrix(::Vector{UInt32}) with eltype Bool:\n ⋅  1  1  ⋅  1  ⋅  1  ⋅  1  ⋅  ⋅  ⋅  ⋅  1  …  ⋅  ⋅  ⋅  1  ⋅  1  1  ⋅  ⋅  1  1  ⋅  1\n 1  ⋅  ⋅  1  ⋅  1  ⋅  1  ⋅  1  1  1  1  ⋅     1  1  1  ⋅  1  ⋅  ⋅  1  1  ⋅  ⋅  1  ⋅
\n\n\n

We have some useful utilities for working with graph datasets, e.g., we can shuffle the dataset and use the first 150 graphs as training graphs, while using the remaining ones for testing:

\n\n
train_data, test_data = splitobs((graphs, y), at = 150, shuffle = true) |> getobs
\n
((GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}[GNNGraph(16, 34) with x: 7×16 data, GNNGraph(22, 50) with x: 7×22 data, GNNGraph(23, 54) with x: 7×23 data, GNNGraph(11, 22) with x: 7×11 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(19, 44) with x: 7×19 data, GNNGraph(16, 34) with x: 7×16 data, GNNGraph(14, 30) with x: 7×14 data, GNNGraph(18, 38) with x: 7×18 data  …  GNNGraph(12, 26) with x: 7×12 data, GNNGraph(19, 40) with x: 7×19 data, GNNGraph(19, 44) with x: 7×19 data, GNNGraph(26, 60) with x: 7×26 data, GNNGraph(20, 44) with x: 7×20 data, GNNGraph(20, 44) with x: 7×20 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(19, 44) with x: 7×19 data, GNNGraph(19, 42) with x: 7×19 data, GNNGraph(22, 50) with x: 7×22 data], Bool[0 0 … 0 0; 1 1 … 1 1]), (GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}[GNNGraph(26, 60) with x: 7×26 data, GNNGraph(15, 34) with x: 7×15 data, GNNGraph(11, 22) with x: 7×11 data, GNNGraph(24, 50) with x: 7×24 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(21, 44) with x: 7×21 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(12, 26) with x: 7×12 data, GNNGraph(17, 38) with x: 7×17 data  …  GNNGraph(12, 26) with x: 7×12 data, GNNGraph(23, 52) with x: 7×23 data, GNNGraph(12, 24) with x: 7×12 data, GNNGraph(23, 50) with x: 7×23 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(18, 40) with x: 7×18 data, GNNGraph(16, 36) with x: 7×16 data, GNNGraph(13, 26) with x: 7×13 data, GNNGraph(28, 62) with x: 7×28 data, GNNGraph(11, 22) with x: 7×11 data], Bool[0 0 … 0 1; 1 1 … 1 0]))
\n\n
begin\n    train_loader = DataLoader(train_data, batchsize = 32, shuffle = true)\n    test_loader = DataLoader(test_data, batchsize = 32, shuffle = false)\nend
\n
2-element DataLoader(::Tuple{Vector{GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}}, OneHotArrays.OneHotMatrix{UInt32, Vector{UInt32}}}, batchsize=32)\n  with first element:\n  (32-element Vector{GraphNeuralNetworks.GNNGraphs.GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}}, 2×32 OneHotMatrix(::Vector{UInt32}) with eltype Bool,)
\n\n\n

Here, we opt for a batch_size of 32, leading to 5 (randomly shuffled) mini-batches, containing all \\(4 \\cdot 32+22 = 150\\) graphs.

\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/#Mini-batching-of-graphs","page":"Graph Classification with Graph Neural Networks","title":"Mini-batching of graphs","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/","page":"Graph Classification with Graph Neural Networks","title":"Graph Classification with Graph Neural Networks","text":"
\n

Since graphs in graph classification datasets are usually small, a good idea is to batch the graphs before inputting them into a Graph Neural Network to guarantee full GPU utilization. In the image or language domain, this procedure is typically achieved by rescaling or padding each example into a set of equally-sized shapes, and examples are then grouped in an additional dimension. The length of this dimension is then equal to the number of examples grouped in a mini-batch and is typically referred to as the batchsize.

However, for GNNs the two approaches described above are either not feasible or may result in a lot of unnecessary memory consumption. Therefore, GraphNeuralNetworks.jl opts for another approach to achieve parallelization across a number of examples. Here, adjacency matrices are stacked in a diagonal fashion (creating a giant graph that holds multiple isolated subgraphs), and node and target features are simply concatenated in the node dimension (the last dimension).

This procedure has some crucial advantages over other batching procedures:

  1. GNN operators that rely on a message passing scheme do not need to be modified since messages are not exchanged between two nodes that belong to different graphs.

  2. There is no computational or memory overhead since adjacency matrices are saved in a sparse fashion holding only non-zero entries, i.e., the edges.

GraphNeuralNetworks.jl can batch multiple graphs into a single giant graph:

\n\n
vec_gs, _ = first(train_loader)
\n
(GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}[GNNGraph(19, 44) with x: 7×19 data, GNNGraph(20, 46) with x: 7×20 data, GNNGraph(15, 34) with x: 7×15 data, GNNGraph(25, 56) with x: 7×25 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(20, 44) with x: 7×20 data, GNNGraph(16, 34) with x: 7×16 data, GNNGraph(11, 22) with x: 7×11 data, GNNGraph(19, 44) with x: 7×19 data, GNNGraph(20, 44) with x: 7×20 data  …  GNNGraph(12, 24) with x: 7×12 data, GNNGraph(12, 26) with x: 7×12 data, GNNGraph(16, 36) with x: 7×16 data, GNNGraph(11, 22) with x: 7×11 data, GNNGraph(22, 50) with x: 7×22 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(14, 30) with x: 7×14 data, GNNGraph(16, 34) with x: 7×16 data, GNNGraph(22, 50) with x: 7×22 data, GNNGraph(23, 54) with x: 7×23 data], Bool[0 0 … 0 0; 1 1 … 1 1])
\n\n
MLUtils.batch(vec_gs)
\n
GNNGraph:\n  num_nodes: 575\n  num_edges: 1276\n  num_graphs: 32\n  ndata:\n\tx = 7×575 Matrix{Float32}
\n\n\n

Each batched graph object is equipped with a graph_indicator vector, which maps each node to its respective graph in the batch:

$$\\textrm{graph\\_indicator} = [1, \\ldots, 1, 2, \\ldots, 2, 3, \\ldots ]$$
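As a minimal sketch, reusing the vec_gs mini-batch loaded above, this indicator can be inspected directly (graph_indicator is a field of every batched GNNGraph):

julia> g_batch = MLUtils.batch(vec_gs);

julia> g_batch.graph_indicator   # one graph index per node, e.g. [1, 1, …, 1, 2, 2, …]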

\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/#Training-a-Graph-Neural-Network-(GNN)","page":"Graph Classification with Graph Neural Networks","title":"Training a Graph Neural Network (GNN)","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/","page":"Graph Classification with Graph Neural Networks","title":"Graph Classification with Graph Neural Networks","text":"
\n

Training a GNN for graph classification usually follows a simple recipe:

  1. Embed each node by performing multiple rounds of message passing

  2. Aggregate node embeddings into a unified graph embedding (readout layer)

  3. Train a final classifier on the graph embedding

There exist multiple readout layers in the literature, but the most common one is simply to take the average of the node embeddings:

$$\\mathbf{x}_{\\mathcal{G}} = \\frac{1}{|\\mathcal{V}|} \\sum_{v \\in \\mathcal{V}} \\mathcal{x}^{(L)}_v$$

GraphNeuralNetworks.jl provides this functionality via GlobalPool(mean), which takes in the node embeddings of all nodes in the mini-batch and the assignment vector graph_indicator to compute a graph embedding of size [hidden_channels, batchsize].
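As a rough sketch of this call, assuming the batched graph g_batch from the mini-batching section above and a hypothetical 64-dimensional node embedding matrix h (standing in for the output of the message-passing layers):

julia> pool = GlobalPool(mean);

julia> h = rand(Float32, 64, g_batch.num_nodes);   # hypothetical node embeddings for the whole batch

julia> size(pool(g_batch, h))   # (64, 32): one embedding per graph in the mini-batch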

The final architecture for applying GNNs to the task of graph classification then looks as follows and allows for complete end-to-end training:

\n\n
function create_model(nin, nh, nout)\n    GNNChain(GCNConv(nin => nh, relu),\n             GCNConv(nh => nh, relu),\n             GCNConv(nh => nh),\n             GlobalPool(mean),\n             Dropout(0.5),\n             Dense(nh, nout))\nend
\n
create_model (generic function with 1 method)
\n\n\n

Here, we again make use of the GCNConv with \\(\\mathrm{ReLU}(x) = \\max(x, 0)\\) activation for obtaining localized node embeddings, before we apply our final classifier on top of a graph readout layer.

Let's train our network for a few epochs to see how well it performs on the training as well as test set:

\n\n
function eval_loss_accuracy(model, data_loader, device)\n    loss = 0.0\n    acc = 0.0\n    ntot = 0\n    for (g, y) in data_loader\n        g, y = MLUtils.batch(g) |> device, y |> device\n        n = length(y)\n        ŷ = model(g, g.ndata.x)\n        loss += logitcrossentropy(ŷ, y) * n\n        acc += mean((ŷ .> 0) .== y) * n\n        ntot += n\n    end\n    return (loss = round(loss / ntot, digits = 4),\n            acc = round(acc * 100 / ntot, digits = 2))\nend
\n
eval_loss_accuracy (generic function with 1 method)
\n\n
function train!(model; epochs = 200, η = 1e-3, infotime = 10)\n    # device = Flux.gpu # uncomment this for GPU training\n    device = Flux.cpu\n    model = model |> device\n    opt = Flux.setup(Adam(η), model)\n\n    function report(epoch)\n        train = eval_loss_accuracy(model, train_loader, device)\n        test = eval_loss_accuracy(model, test_loader, device)\n        @info (; epoch, train, test)\n    end\n\n    report(0)\n    for epoch in 1:epochs\n        for (g, y) in train_loader\n            g, y = MLUtils.batch(g) |> device, y |> device\n            grad = Flux.gradient(model) do model\n                ŷ = model(g, g.ndata.x)\n                logitcrossentropy(ŷ, y)\n            end\n            Flux.update!(opt, model, grad[1])\n        end\n        epoch % infotime == 0 && report(epoch)\n    end\nend
\n
train! (generic function with 1 method)
\n\n
begin\n    nin = 7\n    nh = 64\n    nout = 2\n    model = create_model(nin, nh, nout)\n    train!(model)\nend
\n\n\n\n

As one can see, our model reaches around 74% test accuracy. The fluctuations in accuracy can be explained by the rather small dataset (only 38 test graphs) and usually disappear once GNNs are applied to larger datasets.

(Optional) Exercise

Can we do better than this? As multiple papers pointed out (Xu et al. (2018), Morris et al. (2018)), applying neighborhood normalization decreases the expressivity of GNNs in distinguishing certain graph structures. An alternative formulation (Morris et al. (2018)) omits neighborhood normalization completely and adds a simple skip-connection to the GNN layer in order to preserve central node information:

$$\\mathbf{x}_i^{(\\ell+1)} = \\mathbf{W}^{(\\ell + 1)}_1 \\mathbf{x}_i^{(\\ell)} + \\mathbf{W}^{(\\ell + 1)}_2 \\sum_{j \\in \\mathcal{N}(i)} \\mathbf{x}_j^{(\\ell)}$$

This layer is implemented under the name GraphConv in GraphNeuralNetworks.jl.

As an exercise, you are invited to complete the following code so that it makes use of GraphConv rather than GCNConv. This should bring you close to 82% test accuracy.
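One possible sketch of such a model, reusing the nin, nh, nout arguments from above (this is just an illustration, not the notebook's reference solution):

function create_graphconv_model(nin, nh, nout)
    GNNChain(GraphConv(nin => nh, relu),
             GraphConv(nh => nh, relu),
             GraphConv(nh => nh),
             GlobalPool(mean),
             Dropout(0.5),
             Dense(nh, nout))
end

It can then be trained with the same train! function, simply by constructing the model with create_graphconv_model instead of create_model.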

\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/#Conclusion","page":"Graph Classification with Graph Neural Networks","title":"Conclusion","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/","page":"Graph Classification with Graph Neural Networks","title":"Graph Classification with Graph Neural Networks","text":"
\n

In this chapter, you have learned how to apply GNNs to the task of graph classification. You have learned how graphs can be batched together for better GPU utilization, and how to apply readout layers for obtaining graph embeddings rather than node embeddings.

\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/","page":"Graph Classification with Graph Neural Networks","title":"Graph Classification with Graph Neural Networks","text":"EditURL = \"https://github.com/CarloLucibello/GraphNeuralNetworks.jl/blob/master/docs/src/tutorials/introductory_tutorials/graph_classification_pluto.jl\"","category":"page"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/","page":"Graph Classification with Graph Neural Networks","title":"Graph Classification with Graph Neural Networks","text":"","category":"page"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/","page":"Graph Classification with Graph Neural Networks","title":"Graph Classification with Graph Neural Networks","text":"This page was generated using DemoCards.jl. and PlutoStaticHTML.jl","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/#Node-Classification-with-Graph-Neural-Networks","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"(Image: Source code) (Image: Author) (Image: Update time)","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"\n\n\n\n\n

In this tutorial, we will be learning how to use Graph Neural Networks (GNNs) for node classification. Given the ground-truth labels of only a small subset of nodes, we want to infer the labels for all the remaining nodes (transductive learning).

\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/#Import","page":"Node Classification with Graph Neural Networks","title":"Import","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"
\n

Let us start off by importing some libraries. We will be using Flux.jl and GraphNeuralNetworks.jl for our tutorial.

\n\n
begin\n    using MLDatasets\n    using GraphNeuralNetworks\n    using Flux\n    using Flux: onecold, onehotbatch, logitcrossentropy\n    using Plots\n    using PlutoUI\n    using TSne\n    using Random\n    using Statistics\n\n    ENV[\"DATADEPS_ALWAYS_ACCEPT\"] = \"true\"\n    Random.seed!(17) # for reproducibility\nend;
\n\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/#Visualize","page":"Node Classification with Graph Neural Networks","title":"Visualize","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"
\n

We want to visualize the outputs of our model using t-distributed stochastic neighbor embedding (t-SNE), which embeds the output embeddings onto a 2D plane.

\n\n
function visualize_tsne(out, targets)\n    z = tsne(out, 2)\n    scatter(z[:, 1], z[:, 2], color = Int.(targets[1:size(z, 1)]), leg = false)\nend
\n
visualize_tsne (generic function with 1 method)
\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/#Dataset:-Cora","page":"Node Classification with Graph Neural Networks","title":"Dataset: Cora","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"
\n

For our tutorial, we will be using the Cora dataset. Cora is a citation network of 2708 documents, each classified into one of seven classes, connected by 5429 links. Each node represents an article/document, and an edge connects two nodes if one of them cites the other.

Each publication in the dataset is described by a 0/1-valued word vector indicating the absence/presence of the corresponding word from the dictionary. The dictionary consists of 1433 unique words.

This dataset was first introduced by Yang et al. (2016) as one of the datasets of the Planetoid benchmark suite. We will be using MLDatasets.jl for an easy access to this dataset.

\n\n
dataset = Cora()
\n
dataset Cora:\n  metadata  =>    Dict{String, Any} with 3 entries\n  graphs    =>    1-element Vector{MLDatasets.Graph}
\n\n\n

Datasets in MLDatasets.jl have metadata containing information about the dataset itself.

\n\n
dataset.metadata
\n
Dict{String, Any} with 3 entries:\n  \"name\"        => \"cora\"\n  \"classes\"     => [1, 2, 3, 4, 5, 6, 7]\n  \"num_classes\" => 7
\n\n\n

The graphs field of the dataset contains the graph data, stored as a vector of MLDatasets.Graph objects. The Cora dataset contains only 1 graph.

\n\n
dataset.graphs
\n
1-element Vector{MLDatasets.Graph}:\n Graph(2708, 10556)
\n\n\n

There is only one graph in the dataset. Its node_data contains features indicating whether certain words are present or not, and targets indicating the class of each document. We convert the single-graph dataset to a GNNGraph.

\n\n
g = mldataset2gnngraph(dataset)
\n
GNNGraph:\n  num_nodes: 2708\n  num_edges: 10556\n  ndata:\n\tval_mask = 2708-element BitVector\n\ttargets = 2708-element Vector{Int64}\n\ttest_mask = 2708-element BitVector\n\tfeatures = 1433×2708 Matrix{Float32}\n\ttrain_mask = 2708-element BitVector
\n\n
with_terminal() do\n    # Gather some statistics about the graph.\n    println(\"Number of nodes: $(g.num_nodes)\")\n    println(\"Number of edges: $(g.num_edges)\")\n    println(\"Average node degree: $(g.num_edges / g.num_nodes)\")\n    println(\"Number of training nodes: $(sum(g.ndata.train_mask))\")\n    println(\"Training node label rate: $(mean(g.ndata.train_mask))\")\n    # println(\"Has isolated nodes: $(has_isolated_nodes(g))\")\n    println(\"Has self-loops: $(has_self_loops(g))\")\n    println(\"Is undirected: $(is_bidirected(g))\")\nend
\n
Number of nodes: 2708\nNumber of edges: 10556\nAverage node degree: 3.8980797636632203\nNumber of training nodes: 140\nTraining node label rate: 0.051698670605613\nHas self-loops: false\nIs undirected: true\n
\n\n\n

Overall, this dataset is quite similar to the previously used KarateClub network. We can see that the Cora network holds 2,708 nodes and 10,556 edges, resulting in an average node degree of 3.9. For training this dataset, we are given the ground-truth categories of 140 nodes (20 for each class). This results in a training node label rate of only 5%.

We can further see that this network is undirected and that there are no isolated nodes (each document has at least one citation).

\n\n
begin\n    x = g.ndata.features\n    # we onehot encode both the node labels (what we want to predict):\n    y = onehotbatch(g.ndata.targets, 1:7)\n    train_mask = g.ndata.train_mask\n    num_features = size(x)[1]\n    hidden_channels = 16\n    num_classes = dataset.metadata[\"num_classes\"]\nend;
\n\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/#Multi-layer-Perception-Network-(MLP)","page":"Node Classification with Graph Neural Networks","title":"Multi-layer Perception Network (MLP)","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"
\n

In theory, we should be able to infer the category of a document solely based on its content, i.e. its bag-of-words feature representation, without taking any relational information into account.

Let's verify that by constructing a simple MLP that solely operates on input node features (using shared weights across all nodes):

\n\n
begin\n    struct MLP\n        layers::NamedTuple\n    end\n\n    Flux.@layer :expand MLP\n\n    function MLP(num_features, num_classes, hidden_channels; drop_rate = 0.5)\n        layers = (hidden = Dense(num_features => hidden_channels),\n                  drop = Dropout(drop_rate),\n                  classifier = Dense(hidden_channels => num_classes))\n        return MLP(layers)\n    end\n\n    function (model::MLP)(x::AbstractMatrix)\n        l = model.layers\n        x = l.hidden(x)\n        x = relu(x)\n        x = l.drop(x)\n        x = l.classifier(x)\n        return x\n    end\nend
\n\n\n\n

Training a Multilayer Perceptron

Our MLP is defined by two linear layers and enhanced by ReLU non-linearity and Dropout. Here, we first reduce the 1433-dimensional feature vector to a low-dimensional embedding (hidden_channels=16), while the second linear layer acts as a classifier that should map each low-dimensional node embedding to one of the 7 classes.

Let's train our simple MLP following a procedure similar to the one described in the first part of this tutorial. We again make use of the cross-entropy loss and the Adam optimizer. This time, we also define an accuracy function to evaluate how well our final model performs on the test node set (whose labels have not been observed during training).

\n\n
function train(model::MLP, data::AbstractMatrix, epochs::Int, opt)\n    Flux.trainmode!(model)\n\n    for epoch in 1:epochs\n        loss, grad = Flux.withgradient(model) do model\n            ŷ = model(data)\n            logitcrossentropy(ŷ[:, train_mask], y[:, train_mask])\n        end\n\n        Flux.update!(opt, model, grad[1])\n        if epoch % 200 == 0\n            @show epoch, loss\n        end\n    end\nend
\n
train (generic function with 1 method)
\n\n
function accuracy(model::MLP, x::AbstractMatrix, y::Flux.OneHotArray, mask::BitVector)\n    Flux.testmode!(model)\n    mean(onecold(model(x))[mask] .== onecold(y)[mask])\nend
\n
accuracy (generic function with 1 method)
\n\n
begin\n    mlp = MLP(num_features, num_classes, hidden_channels)\n    opt_mlp = Flux.setup(Adam(1e-3), mlp)\n    epochs = 2000\n    train(mlp, g.ndata.features, epochs, opt_mlp)\nend
\n\n\n\n

After training the model, we can call the accuracy function to see how well our model performs on unseen labels. Here, we are interested in the accuracy of the model, i.e., the ratio of correctly classified nodes:

\n\n
accuracy(mlp, g.ndata.features, y, .!train_mask)
\n
0.4517133956386293
\n\n\n

As one can see, our MLP performs rather poorly, with only about 45% test accuracy. But why does the MLP not perform better? The main reason is that the model suffers from heavy overfitting, since it only has access to a small number of training nodes, and therefore it generalizes poorly to unseen node representations.

It also fails to incorporate an important bias into the model: Cited papers are very likely related to the category of a document. That is exactly where Graph Neural Networks come into play and can help to boost the performance of our model.

\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/#Training-a-Graph-Convolutional-Neural-Network-(GNN)","page":"Node Classification with Graph Neural Networks","title":"Training a Graph Convolutional Neural Network (GNN)","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"
\n

Following up on the first part of this tutorial, we replace the Dense linear layers with the GCNConv module. To recap, the GCN layer (Kipf et al. (2017)) is defined as

$$\\mathbf{x}_v^{(\\ell + 1)} = \\mathbf{W}^{(\\ell + 1)} \\sum_{w \\in \\mathcal{N}(v) \\, \\cup \\, \\{ v \\}} \\frac{1}{c_{w,v}} \\cdot \\mathbf{x}_w^{(\\ell)}$$

where \\(\\mathbf{W}^{(\\ell + 1)}\\) denotes a trainable weight matrix of shape [num_output_features, num_input_features] and \\(c_{w,v}\\) refers to a fixed normalization coefficient for each edge. In contrast, a single Linear layer is defined as

$$\\mathbf{x}_v^{(\\ell + 1)} = \\mathbf{W}^{(\\ell + 1)} \\mathbf{x}_v^{(\\ell)}$$

which does not make use of neighboring node information.

\n\n
begin\n    struct GCN\n        layers::NamedTuple\n    end\n\n    Flux.@layer GCN # provides parameter collection, gpu movement and more\n\n    function GCN(num_features, num_classes, hidden_channels; drop_rate = 0.5)\n        layers = (conv1 = GCNConv(num_features => hidden_channels),\n                  drop = Dropout(drop_rate),\n                  conv2 = GCNConv(hidden_channels => num_classes))\n        return GCN(layers)\n    end\n\n    function (gcn::GCN)(g::GNNGraph, x::AbstractMatrix)\n        l = gcn.layers\n        x = l.conv1(g, x)\n        x = relu.(x)\n        x = l.drop(x)\n        x = l.conv2(g, x)\n        return x\n    end\nend
\n\n\n\n

Now let's visualize the node embeddings of our untrained GCN network.

\n\n
begin\n    gcn = GCN(num_features, num_classes, hidden_channels)\n    h_untrained = gcn(g, x) |> transpose\n    visualize_tsne(h_untrained, g.ndata.targets)\nend
\n\n\n\n

We can certainly do better by training our model. The training and testing procedure is once again the same, but this time we make use of the node features x and the graph g as input to our GCN model.

\n\n
function train(model::GCN, g::GNNGraph, x::AbstractMatrix, epochs::Int, opt)\n    Flux.trainmode!(model)\n\n    for epoch in 1:epochs\n        loss, grad = Flux.withgradient(model) do model\n            ŷ = model(g, x)\n            logitcrossentropy(ŷ[:, train_mask], y[:, train_mask])\n        end\n\n        Flux.update!(opt, model, grad[1])\n        if epoch % 200 == 0\n            @show epoch, loss\n        end\n    end\nend
\n
train (generic function with 2 methods)
\n\n
function accuracy(model::GCN, g::GNNGraph, x::AbstractMatrix, y::Flux.OneHotArray,\n                  mask::BitVector)\n    Flux.testmode!(model)\n    mean(onecold(model(g, x))[mask] .== onecold(y)[mask])\nend
\n
accuracy (generic function with 2 methods)
\n\n
begin\n    opt_gcn = Flux.setup(Adam(1e-2), gcn)\n    train(gcn, g, x, epochs, opt_gcn)\nend
\n\n\n\n

Now let's evaluate the accuracy of our trained GCN.

\n\n
with_terminal() do\n    train_accuracy = accuracy(gcn, g, g.ndata.features, y, train_mask)\n    test_accuracy = accuracy(gcn, g, g.ndata.features, y, .!train_mask)\n\n    println(\"Train accuracy: $(train_accuracy)\")\n    println(\"Test accuracy: $(test_accuracy)\")\nend
\n
Train accuracy: 1.0\nTest accuracy: 0.7476635514018691\n
\n\n\n

There it is! By simply swapping the linear layers with GNN layers, we reach about 74.8% test accuracy! This is in stark contrast to the roughly 45% test accuracy obtained by our MLP, indicating that relational information plays a crucial role in obtaining better performance.

We can also verify that once again by looking at the output embeddings of our trained model, which now produces a far better clustering of nodes of the same category.

\n\n
begin\n    Flux.testmode!(gcn) # inference mode\n\n    out_trained = gcn(g, x) |> transpose\n    visualize_tsne(out_trained, g.ndata.targets)\nend
\n\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/#(Optional)-Exercises","page":"Node Classification with Graph Neural Networks","title":"(Optional) Exercises","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"
\n
  1. To achieve better model performance and to avoid overfitting, it is usually a good idea to select the best model based on an additional validation set. The Cora dataset provides a validation node set as g.ndata.val_mask, but we haven't used it yet. Can you modify the code to select and test the model with the highest validation performance? This should bring test performance to 82% accuracy.

  2. How does GCN behave when increasing the hidden feature dimensionality or the number of layers? Does increasing the number of layers help at all?

  3. You can try to use different GNN layers to see how model performance changes. What happens if you swap out all GCNConv instances with GATConv layers that make use of attention? Try to write a 2-layer GAT model that makes use of 8 attention heads in the first layer and 1 attention head in the second layer, uses a dropout ratio of 0.6 inside and outside each GATConv call, and uses a hidden_channels dimension of 8 per head (a possible sketch is given below).
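A minimal sketch of exercise 3, mirroring the GCN struct defined earlier (the struct name GAT and the elu activation are illustrative choices, not prescribed by the exercise):

begin
    struct GAT
        layers::NamedTuple
    end

    Flux.@layer GAT

    function GAT(num_features, num_classes; hidden = 8, heads = 8, drop_rate = 0.6)
        layers = (drop1 = Dropout(drop_rate),
                  conv1 = GATConv(num_features => hidden, elu; heads, dropout = drop_rate),
                  drop2 = Dropout(drop_rate),
                  conv2 = GATConv(hidden * heads => num_classes; heads = 1, dropout = drop_rate))
        return GAT(layers)
    end

    function (m::GAT)(g::GNNGraph, x::AbstractMatrix)
        l = m.layers
        x = l.drop1(x)        # dropout outside the first GATConv
        x = l.conv1(g, x)     # 8 heads × 8 hidden channels, concatenated to 64
        x = l.drop2(x)
        return l.conv2(g, x)  # a single head producing the class scores
    end
end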

\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/#Conclusion","page":"Node Classification with Graph Neural Networks","title":"Conclusion","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"
\n

In this tutorial, we have seen how to apply GNNs to real-world problems, and, in particular, how they can effectively be used for boosting a model's performance. In the next tutorial, we will look into how GNNs can be used for the task of graph classification.

\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"EditURL = \"https://github.com/CarloLucibello/GraphNeuralNetworks.jl/blob/master/docs/src/tutorials/introductory_tutorials/node_classification_pluto.jl\"","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"This page was generated using DemoCards.jl. and PlutoStaticHTML.jl","category":"page"},{"location":"temporalgraph/#Temporal-Graphs","page":"Temporal Graphs","title":"Temporal Graphs","text":"","category":"section"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"Temporal Graphs are graphs with time varying topologies and node features. In GraphNeuralNetworks.jl temporal graphs with fixed number of nodes over time are supported by the TemporalSnapshotsGNNGraph type.","category":"page"},{"location":"temporalgraph/#Creating-a-TemporalSnapshotsGNNGraph","page":"Temporal Graphs","title":"Creating a TemporalSnapshotsGNNGraph","text":"","category":"section"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"A temporal graph can be created by passing a list of snapshots to the constructor. Each snapshot is a GNNGraph. ","category":"page"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"julia> snapshots = [rand_graph(10,20) for i in 1:5];\n\njulia> tg = TemporalSnapshotsGNNGraph(snapshots)\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10, 10, 10]\n num_edges: [20, 20, 20, 20, 20]\n num_snapshots: 5","category":"page"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"A new temporal graph can be created by adding or removing snapshots to an existing temporal graph. ","category":"page"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"julia> new_tg = add_snapshot(tg, 3, rand_graph(10, 16)) # add a new snapshot at time 3\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10, 10, 10, 10]\n num_edges: [20, 20, 16, 20, 20, 20]\n num_snapshots: 6","category":"page"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"julia> snapshots = [rand_graph(10,20), rand_graph(10,14), rand_graph(10,22)];\n\njulia> tg = TemporalSnapshotsGNNGraph(snapshots)\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10]\n num_edges: [20, 14, 22]\n num_snapshots: 3\n\njulia> new_tg = remove_snapshot(tg, 2) # remove snapshot at time 2\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10]\n num_edges: [20, 22]\n num_snapshots: 2","category":"page"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"See rand_temporal_radius_graph and rand_temporal_hyperbolic_graph for generating random temporal graphs. 
","category":"page"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"julia> tg = rand_temporal_radius_graph(10, 3, 0.1, 0.5)\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10]\n num_edges: [32, 30, 34]\n num_snapshots: 3","category":"page"},{"location":"temporalgraph/#Basic-Queries","page":"Temporal Graphs","title":"Basic Queries","text":"","category":"section"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"Basic queries are similar to those for GNNGraphs:","category":"page"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"julia> snapshots = [rand_graph(10,20), rand_graph(10,14), rand_graph(10,22)];\n\njulia> tg = TemporalSnapshotsGNNGraph(snapshots)\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10]\n num_edges: [20, 14, 22]\n num_snapshots: 3\n\njulia> tg.num_nodes # number of nodes in each snapshot\n3-element Vector{Int64}:\n 10\n 10\n 10\n\njulia> tg.num_edges # number of edges in each snapshot\n3-element Vector{Int64}:\n 20\n 14\n 22\n\njulia> tg.num_snapshots # number of snapshots\n3\n\njulia> tg.snapshots # list of snapshots\n3-element Vector{GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}}:\n GNNGraph(10, 20) with no data\n GNNGraph(10, 14) with no data\n GNNGraph(10, 22) with no data\n\njulia> tg.snapshots[1] # first snapshot, same as tg[1]\nGNNGraph:\n num_nodes: 10\n num_edges: 20","category":"page"},{"location":"temporalgraph/#Data-Features","page":"Temporal Graphs","title":"Data Features","text":"","category":"section"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"Node, edge, and graph features can be added at construction time or later using:","category":"page"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"julia> snapshots = [rand_graph(10,20; ndata = rand(3,10)), rand_graph(10,14; ndata = rand(4,10)), rand_graph(10,22; ndata = rand(5,10))]; # node features at construction time\n\njulia> tg = TemporalSnapshotsGNNGraph(snapshots);\n\njulia> tg.tgdata.y = rand(3,1); # graph features after construction\n\njulia> tg\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10]\n num_edges: [20, 14, 22]\n num_snapshots: 3\n tgdata:\n y = 3×1 Matrix{Float64}\n\njulia> tg.ndata # vector of Datastore for node features\n3-element Vector{DataStore}:\n DataStore(10) with 1 element:\n x = 3×10 Matrix{Float64}\n DataStore(10) with 1 element:\n x = 4×10 Matrix{Float64}\n DataStore(10) with 1 element:\n x = 5×10 Matrix{Float64}\n\njulia> typeof(tg.ndata.x) # vector containing the x feature of each snapshot\nVector{Matrix{Float64}}","category":"page"},{"location":"temporalgraph/#Graph-convolutions-on-TemporalSnapshotsGNNGraph","page":"Temporal Graphs","title":"Graph convolutions on TemporalSnapshotsGNNGraph","text":"","category":"section"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"A graph convolutional layer can be applied to each snapshot independently, in the next example we apply a GINConv layer to each snapshot of a TemporalSnapshotsGNNGraph. The list of compatible graph convolution layers can be found here. 
","category":"page"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"julia> using GraphNeuralNetworks, Flux\n\njulia> snapshots = [rand_graph(10, 20; ndata = rand(3, 10)), rand_graph(10, 14; ndata = rand(3, 10))];\n\njulia> tg = TemporalSnapshotsGNNGraph(snapshots);\n\njulia> m = GINConv(Dense(3 => 1), 0.4);\n\njulia> output = m(tg, tg.ndata.x);\n\njulia> size(output[1])\n(1, 10)","category":"page"},{"location":"api/conv/","page":"Convolutional Layers","title":"Convolutional Layers","text":"CurrentModule = GraphNeuralNetworks","category":"page"},{"location":"api/conv/#Convolutional-Layers","page":"Convolutional Layers","title":"Convolutional Layers","text":"","category":"section"},{"location":"api/conv/","page":"Convolutional Layers","title":"Convolutional Layers","text":"Many different types of graphs convolutional layers have been proposed in the literature. Choosing the right layer for your application could involve a lot of exploration. Some of the most commonly used layers are the GCNConv and the GATv2Conv. Multiple graph convolutional layers are typically stacked together to create a graph neural network model (see GNNChain).","category":"page"},{"location":"api/conv/","page":"Convolutional Layers","title":"Convolutional Layers","text":"The table below lists all graph convolutional layers implemented in the GraphNeuralNetworks.jl. It also highlights the presence of some additional capabilities with respect to basic message passing:","category":"page"},{"location":"api/conv/","page":"Convolutional Layers","title":"Convolutional Layers","text":"Sparse Ops: implements message passing as multiplication by sparse adjacency matrix instead of the gather/scatter mechanism. This can lead to better CPU performances but it is not supported on GPU yet. \nEdge Weight: supports scalar weights (or equivalently scalar features) on edges. 
\nEdge Features: supports feature vectors on edges.\nHeterograph: supports heterogeneous graphs (see GNNHeteroGraph).\nTemporalSnapshotsGNNGraphs: supports temporal graphs (see TemporalSnapshotsGNNGraph) by applying the convolution layers to each snapshot independently.","category":"page"},{"location":"api/conv/","page":"Convolutional Layers","title":"Convolutional Layers","text":"Layer Sparse Ops Edge Weight Edge Features Heterograph TemporalSnapshotsGNNGraphs\nAGNNConv ✓ \nCGConv ✓ ✓ ✓\nChebConv ✓\nEGNNConv ✓ \nEdgeConv ✓ \nGATConv ✓ ✓ ✓\nGATv2Conv ✓ ✓ ✓\nGatedGraphConv ✓ ✓\nGCNConv ✓ ✓ ✓ \nGINConv ✓ ✓ ✓\nGMMConv ✓ \nGraphConv ✓ ✓ ✓\nMEGNetConv ✓ \nNNConv ✓ \nResGatedGraphConv ✓ ✓\nSAGEConv ✓ ✓ ✓\nSGConv ✓ ✓\nTransformerConv ✓ ","category":"page"},{"location":"api/conv/#Docs","page":"Convolutional Layers","title":"Docs","text":"","category":"section"},{"location":"api/conv/","page":"Convolutional Layers","title":"Convolutional Layers","text":"Modules = [GraphNeuralNetworks]\nPages = [\"layers/conv.jl\"]\nPrivate = false","category":"page"},{"location":"api/conv/#GraphNeuralNetworks.AGNNConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.AGNNConv","text":"AGNNConv(; init_beta=1.0f0, trainable=true, add_self_loops=true)\n\nAttention-based Graph Neural Network layer from paper Attention-based Graph Neural Network for Semi-Supervised Learning.\n\nThe forward pass is given by\n\nmathbfx_i = sum_j in N(i) alpha_ij mathbfx_j\n\nwhere the attention coefficients alpha_ij are given by\n\nalpha_ij =frace^beta cos(mathbfx_i mathbfx_j)\n sum_je^beta cos(mathbfx_i mathbfx_j)\n\nwith the cosine distance defined by\n\ncos(mathbfx_i mathbfx_j) = \n fracmathbfx_i cdot mathbfx_jlVertmathbfx_irVert lVertmathbfx_jrVert\n\nand beta a trainable parameter if trainable=true.\n\nArguments\n\ninit_beta: The initial value of beta. Default 1.0f0.\ntrainable: If true, beta is trainable. Default true.\nadd_self_loops: Add self loops to the graph before performing the convolution. Default true.\n\nExamples:\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\ng = GNNGraph(s, t)\n\n# create layer\nl = AGNNConv(init_beta=2.0f0)\n\n# forward pass\ny = l(g, x) \n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.CGConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.CGConv","text":"CGConv((in, ein) => out, act=identity; bias=true, init=glorot_uniform, residual=false)\nCGConv(in => out, ...)\n\nThe crystal graph convolutional layer from the paper Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties. Performs the operation\n\nmathbfx_i = mathbfx_i + sum_jin N(i)sigma(W_f mathbfz_ij + mathbfb_f) act(W_s mathbfz_ij + mathbfb_s)\n\nwhere mathbfz_ij is the node and edge features concatenation mathbfx_i mathbfx_j mathbfe_jto i and sigma is the sigmoid function. The residual mathbfx_i is added only if residual=true and the output size is the same as the input size.\n\nArguments\n\nin: The dimension of input node features.\nein: The dimension of input edge features. 
\n\nIf ein is not given, assumes that no edge features are passed as input in the forward pass.\n\nout: The dimension of output node features.\nact: Activation function.\nbias: Add learnable bias.\ninit: Weights' initializer.\nresidual: Add a residual connection.\n\nExamples\n\ng = rand_graph(5, 6)\nx = rand(Float32, 2, g.num_nodes)\ne = rand(Float32, 3, g.num_edges)\n\nl = CGConv((2, 3) => 4, tanh)\ny = l(g, x, e) # size: (4, num_nodes)\n\n# No edge features\nl = CGConv(2 => 4, tanh)\ny = l(g, x) # size: (4, num_nodes)\n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.ChebConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.ChebConv","text":"ChebConv(in => out, k; bias=true, init=glorot_uniform)\n\nChebyshev spectral graph convolutional layer from paper Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering.\n\nImplements\n\nX = sum^K-1_k=0 W^(k) Z^(k)\n\nwhere Z^(k) is the k-th term of Chebyshev polynomials, and can be calculated by the following recursive form:\n\nbeginaligned\nZ^(0) = X \nZ^(1) = hatL X \nZ^(k) = 2 hatL Z^(k-1) - Z^(k-2)\nendaligned\n\nwith hatL the scaled_laplacian.\n\nArguments\n\nin: The dimension of input features.\nout: The dimension of output features.\nk: The order of Chebyshev polynomial.\nbias: Add learnable bias.\ninit: Weights' initializer.\n\nExamples\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\ng = GNNGraph(s, t)\nx = randn(Float32, 3, g.num_nodes)\n\n# create layer\nl = ChebConv(3 => 5, 5) \n\n# forward pass\ny = l(g, x) # size: 5 × num_nodes\n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.DConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.DConv","text":"DConv(ch::Pair{Int, Int}, k::Int; init = glorot_uniform, bias = true)\n\nDiffusion convolution layer from the paper Diffusion Convolutional Recurrent Neural Networks: Data-Driven Traffic Forecasting.\n\nArguments\n\nch: Pair of input and output dimensions.\nk: Number of diffusion steps.\ninit: Weights' initializer. Default glorot_uniform.\nbias: Add learnable bias. Default true.\n\nExamples\n\njulia> g = GNNGraph(rand(10, 10), ndata = rand(Float32, 2, 10));\n\njulia> dconv = DConv(2 => 4, 4)\nDConv(2 => 4, 4)\n\njulia> y = dconv(g, g.ndata.x);\n\njulia> size(y)\n(4, 10)\n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.EGNNConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.EGNNConv","text":"EGNNConv((in, ein) => out; hidden_size=2in, residual=false)\nEGNNConv(in => out; hidden_size=2in, residual=false)\n\nEquivariant Graph Convolutional Layer from E(n) Equivariant Graph Neural Networks.\n\nThe layer performs the following operation:\n\nbeginaligned\nmathbfm_jto i =phi_e(mathbfh_i mathbfh_j lVertmathbfx_i-mathbfx_jrVert^2 mathbfe_jto i)\nmathbfx_i = mathbfx_i + C_isum_jinmathcalN(i)(mathbfx_i-mathbfx_j)phi_x(mathbfm_jto i)\nmathbfm_i = C_isum_jinmathcalN(i) mathbfm_jto i\nmathbfh_i = mathbfh_i + phi_h(mathbfh_i mathbfm_i)\nendaligned\n\nwhere mathbfh_i, mathbfx_i, mathbfe_jto i are invariant node features, equivariant node features, and edge features respectively. phi_e, phi_h, and phi_x are two-layer MLPs. C is a constant for normalization, computed as 1mathcalN(i).\n\nConstructor Arguments\n\nin: Number of input features for h.\nout: Number of output features for h.\nein: Number of input edge features.\nhidden_size: Hidden representation size.\nresidual: If true, add a residual connection. Only possible if in == out. 
Default false.\n\nForward Pass\n\nl(g, x, h, e=nothing)\n\nForward Pass Arguments:\n\ng : The graph.\nx : Matrix of equivariant node coordinates.\nh : Matrix of invariant node features.\ne : Matrix of invariant edge features. Default nothing.\n\nReturns updated h and x.\n\nExamples\n\ng = rand_graph(10, 10)\nh = randn(Float32, 5, g.num_nodes)\nx = randn(Float32, 3, g.num_nodes)\negnn = EGNNConv(5 => 6, 10)\nhnew, xnew = egnn(g, h, x)\n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.EdgeConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.EdgeConv","text":"EdgeConv(nn; aggr=max)\n\nEdge convolutional layer from paper Dynamic Graph CNN for Learning on Point Clouds.\n\nPerforms the operation\n\nmathbfx_i = square_j in N(i) nn(mathbfx_i mathbfx_j - mathbfx_i)\n\nwhere nn generally denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.\n\nArguments\n\nnn: A (possibly learnable) function. \naggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).\n\nExamples:\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\nin_channel = 3\nout_channel = 5\ng = GNNGraph(s, t)\n\n# create layer\nl = EdgeConv(Dense(2 * in_channel, out_channel), aggr = +)\n\n# forward pass\ny = l(g, x)\n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.GATConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.GATConv","text":"GATConv(in => out, [σ; heads, concat, init, bias, negative_slope, add_self_loops])\nGATConv((in, ein) => out, ...)\n\nGraph attentional layer from the paper Graph Attention Networks.\n\nImplements the operation\n\nmathbfx_i = sum_j in N(i) cup i alpha_ij W mathbfx_j\n\nwhere the attention coefficients alpha_ij are given by\n\nalpha_ij = frac1z_i exp(LeakyReLU(mathbfa^T W mathbfx_i W mathbfx_j))\n\nwith z_i a normalization factor. \n\nIn case ein > 0 is given, edge features of dimension ein will be expected in the forward pass and the attention coefficients will be calculated as \n\nalpha_ij = frac1z_i exp(LeakyReLU(mathbfa^T W_e mathbfe_jto i W mathbfx_i W mathbfx_j))\n\nArguments\n\nin: The dimension of input node features.\nein: The dimension of input edge features. Default 0 (i.e. no edge features passed in the forward).\nout: The dimension of output node features.\nσ: Activation function. Default identity.\nbias: Learn the additive bias if true. Default true.\nheads: Number attention heads. Default 1.\nconcat: Concatenate layer output or not. If not, layer output is averaged over the heads. Default true.\nnegative_slope: The parameter of LeakyReLU.Default 0.2.\nadd_self_loops: Add self loops to the graph before performing the convolution. Default true.\ndropout: Dropout probability on the normalized attention coefficient. 
Default 0.0.\n\nExamples\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\nin_channel = 3\nout_channel = 5\ng = GNNGraph(s, t)\nx = randn(Float32, 3, g.num_nodes)\n\n# create layer\nl = GATConv(in_channel => out_channel, add_self_loops = false, bias = false; heads=2, concat=true)\n\n# forward pass\ny = l(g, x) \n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.GATv2Conv","page":"Convolutional Layers","title":"GraphNeuralNetworks.GATv2Conv","text":"GATv2Conv(in => out, [σ; heads, concat, init, bias, negative_slope, add_self_loops])\nGATv2Conv((in, ein) => out, ...)\n\nGATv2 attentional layer from the paper How Attentive are Graph Attention Networks?.\n\nImplements the operation\n\nmathbfx_i = sum_j in N(i) cup i alpha_ij W_1 mathbfx_j\n\nwhere the attention coefficients alpha_ij are given by\n\nalpha_ij = frac1z_i exp(mathbfa^T LeakyReLU(W_2 mathbfx_i + W_1 mathbfx_j))\n\nwith z_i a normalization factor.\n\nIn case ein > 0 is given, edge features of dimension ein will be expected in the forward pass and the attention coefficients will be calculated as \n\nalpha_ij = frac1z_i exp(mathbfa^T LeakyReLU(W_3 mathbfe_jto i + W_2 mathbfx_i + W_1 mathbfx_j))\n\nArguments\n\nin: The dimension of input node features.\nein: The dimension of input edge features. Default 0 (i.e. no edge features passed in the forward).\nout: The dimension of output node features.\nσ: Activation function. Default identity.\nbias: Learn the additive bias if true. Default true.\nheads: Number attention heads. Default 1.\nconcat: Concatenate layer output or not. If not, layer output is averaged over the heads. Default true.\nnegative_slope: The parameter of LeakyReLU.Default 0.2.\nadd_self_loops: Add self loops to the graph before performing the convolution. Default true.\ndropout: Dropout probability on the normalized attention coefficient. Default 0.0.\n\nExamples\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\nin_channel = 3\nout_channel = 5\nein = 3\ng = GNNGraph(s, t)\nx = randn(Float32, 3, g.num_nodes)\n\n# create layer\nl = GATv2Conv((in_channel, ein) => out_channel, add_self_loops = false)\n\n# edge features\ne = randn(Float32, ein, length(s))\n\n# forward pass\ny = l(g, x, e) \n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.GCNConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.GCNConv","text":"GCNConv(in => out, σ=identity; [bias, init, add_self_loops, use_edge_weight])\n\nGraph convolutional layer from paper Semi-supervised Classification with Graph Convolutional Networks.\n\nPerforms the operation\n\nmathbfx_i = sum_jin N(i) a_ij W mathbfx_j\n\nwhere a_ij = 1 sqrtN(i)N(j) is a normalization factor computed from the node degrees. \n\nIf the input graph has weighted edges and use_edge_weight=true, than a_ij will be computed as\n\na_ij = frace_jto isqrtsum_j in N(i) e_jto i sqrtsum_i in N(j) e_ito j\n\nThe input to the layer is a node feature array X of size (num_features, num_nodes) and optionally an edge weight vector.\n\nArguments\n\nin: Number of input features.\nout: Number of output features.\nσ: Activation function. Default identity.\nbias: Add learnable bias. Default true.\ninit: Weights' initializer. Default glorot_uniform.\nadd_self_loops: Add self loops to the graph before performing the convolution. Default false.\nuse_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. 
Default false.\n\nForward\n\n(::GCNConv)(g::GNNGraph, x, edge_weight = nothing; norm_fn = d -> 1 ./ sqrt.(d), conv_weight = nothing) -> AbstractMatrix\n\nTakes as input a graph g, a node feature matrix x of size [in, num_nodes], and optionally an edge weight vector. Returns a node feature matrix of size [out, num_nodes].\n\nThe norm_fn parameter allows for custom normalization of the graph convolution operation by passing a function as argument. By default, it computes frac1sqrtd i.e the inverse square root of the degree (d) of each node in the graph. If conv_weight is an AbstractMatrix of size [out, in], then the convolution is performed using that weight matrix instead of the weights stored in the model.\n\nExamples\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\ng = GNNGraph(s, t)\nx = randn(Float32, 3, g.num_nodes)\n\n# create layer\nl = GCNConv(3 => 5) \n\n# forward pass\ny = l(g, x) # size: 5 × num_nodes\n\n# convolution with edge weights and custom normalization function\nw = [1.1, 0.1, 2.3, 0.5]\ncustom_norm_fn(d) = 1 ./ sqrt.(d + 1) # Custom normalization function\ny = l(g, x, w; norm_fn = custom_norm_fn)\n\n# Edge weights can also be embedded in the graph.\ng = GNNGraph(s, t, w)\nl = GCNConv(3 => 5, use_edge_weight=true) \ny = l(g, x) # same as l(g, x, w) \n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.GINConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.GINConv","text":"GINConv(f, ϵ; aggr=+)\n\nGraph Isomorphism convolutional layer from paper How Powerful are Graph Neural Networks?.\n\nImplements the graph convolution\n\nmathbfx_i = f_Thetaleft((1 + epsilon) mathbfx_i + sum_j in N(i) mathbfx_j right)\n\nwhere f_Theta typically denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.\n\nArguments\n\nf: A (possibly learnable) function acting on node features. \nϵ: Weighting factor.\n\nExamples:\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\nin_channel = 3\nout_channel = 5\ng = GNNGraph(s, t)\n\n# create dense layer\nnn = Dense(in_channel, out_channel)\n\n# create layer\nl = GINConv(nn, 0.01f0, aggr = mean)\n\n# forward pass\ny = l(g, x) \n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.GMMConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.GMMConv","text":"GMMConv((in, ein) => out, σ=identity; K=1, bias=true, init=glorot_uniform, residual=false)\n\nGraph mixture model convolution layer from the paper Geometric deep learning on graphs and manifolds using mixture model CNNs Performs the operation\n\nmathbfx_i = mathbfx_i + frac1N(i) sum_jin N(i)frac1Ksum_k=1^K mathbfw_k(mathbfe_jto i) odot Theta_k mathbfx_j\n\nwhere w^a_k(e^a) for feature a and kernel k is given by\n\nw^a_k(e^a) = exp(-frac12(e^a - mu^a_k)^T (Sigma^-1)^a_k(e^a - mu^a_k))\n\nTheta_k mu^a_k (Sigma^-1)^a_k are learnable parameters.\n\nThe input to the layer is a node feature array x of size (num_features, num_nodes) and edge pseudo-coordinate array e of size (num_features, num_edges) The residual mathbfx_i is added only if residual=true and the output size is the same as the input size.\n\nArguments\n\nin: Number of input node features.\nein: Number of input edge features.\nout: Number of output features.\nσ: Activation function. Default identity.\nK: Number of kernels. Default 1.\nbias: Add learnable bias. Default true.\ninit: Weights' initializer. Default glorot_uniform.\nresidual: Residual conncetion. 
Default false.\n\nExamples\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\ng = GNNGraph(s,t)\nnin, ein, out, K = 4, 10, 7, 8 \nx = randn(Float32, nin, g.num_nodes)\ne = randn(Float32, ein, g.num_edges)\n\n# create layer\nl = GMMConv((nin, ein) => out, K=K)\n\n# forward pass\nl(g, x, e)\n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.GatedGraphConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.GatedGraphConv","text":"GatedGraphConv(out, num_layers; aggr=+, init=glorot_uniform)\n\nGated graph convolution layer from Gated Graph Sequence Neural Networks.\n\nImplements the recursion\n\nbeginaligned\nmathbfh^(0)_i = mathbfx_i mathbf0 \nmathbfh^(l)_i = GRU(mathbfh^(l-1)_i square_j in N(i) W mathbfh^(l-1)_j)\nendaligned\n\nwhere mathbfh^(l)_i denotes the l-th hidden variables passing through GRU. The dimension of input mathbfx_i needs to be less or equal to out.\n\nArguments\n\nout: The dimension of output features.\nnum_layers: The number of recursion steps.\naggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).\ninit: Weight initialization function.\n\nExamples:\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\nout_channel = 5\nnum_layers = 3\ng = GNNGraph(s, t)\n\n# create layer\nl = GatedGraphConv(out_channel, num_layers)\n\n# forward pass\ny = l(g, x) \n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.GraphConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.GraphConv","text":"GraphConv(in => out, σ=identity; aggr=+, bias=true, init=glorot_uniform)\n\nGraph convolution layer from Reference: Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks.\n\nPerforms:\n\nmathbfx_i = W_1 mathbfx_i + square_j in mathcalN(i) W_2 mathbfx_j\n\nwhere the aggregation type is selected by aggr.\n\nArguments\n\nin: The dimension of input features.\nout: The dimension of output features.\nσ: Activation function.\naggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).\nbias: Add learnable bias.\ninit: Weights' initializer.\n\nExamples\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\nin_channel = 3\nout_channel = 5\ng = GNNGraph(s, t)\nx = randn(Float32, 3, g.num_nodes)\n\n# create layer\nl = GraphConv(in_channel => out_channel, relu, bias = false, aggr = mean)\n\n# forward pass\ny = l(g, x) \n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.MEGNetConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.MEGNetConv","text":"MEGNetConv(ϕe, ϕv; aggr=mean)\nMEGNetConv(in => out; aggr=mean)\n\nConvolution from Graph Networks as a Universal Machine Learning Framework for Molecules and Crystals paper. 
In the forward pass, takes as inputs node features x and edge features e and returns updated features x' and e' according to \n\nbeginaligned\nmathbfe_ito j = phi_e(mathbfx_i mathbfx_j mathbfe_ito j)\nmathbfx_i = phi_v(mathbfx_i square_jin mathcalN(i)mathbfe_jto i)\nendaligned\n\naggr defines the aggregation to be performed.\n\nIf the neural networks ϕe and ϕv are not provided, they will be constructed from the in and out arguments instead as multi-layer perceptron with one hidden layer and relu activations.\n\nExamples\n\ng = rand_graph(10, 30)\nx = randn(Float32, 3, 10)\ne = randn(Float32, 3, 30)\nm = MEGNetConv(3 => 3)\nx′, e′ = m(g, x, e)\n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.NNConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.NNConv","text":"NNConv(in => out, f, σ=identity; aggr=+, bias=true, init=glorot_uniform)\n\nThe continuous kernel-based convolutional operator from the Neural Message Passing for Quantum Chemistry paper. This convolution is also known as the edge-conditioned convolution from the Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs paper.\n\nPerforms the operation\n\nmathbfx_i = W mathbfx_i + square_j in N(i) f_Theta(mathbfe_jto i)mathbfx_j\n\nwhere f_Theta denotes a learnable function (e.g. a linear layer or a multi-layer perceptron). Given an input of batched edge features e of size (num_edge_features, num_edges), the function f will return an batched matrices array whose size is (out, in, num_edges). For convenience, also functions returning a single (out*in, num_edges) matrix are allowed.\n\nArguments\n\nin: The dimension of input node features.\nout: The dimension of output node features.\nf: A (possibly learnable) function acting on edge features.\naggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).\nσ: Activation function.\nbias: Add learnable bias.\ninit: Weights' initializer.\n\nExamples:\n\nn_in = 3\nn_in_edge = 10\nn_out = 5\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\ng = GNNGraph(s, t)\n\n# create dense layer\nnn = Dense(n_in_edge => n_out * n_in)\n\n# create layer\nl = NNConv(n_in => n_out, nn, tanh, bias = true, aggr = +)\n\nx = randn(Float32, n_in, g.num_nodes)\ne = randn(Float32, n_in_edge, g.num_edges)\n\n# forward pass\ny = l(g, x, e) \n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.ResGatedGraphConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.ResGatedGraphConv","text":"ResGatedGraphConv(in => out, act=identity; init=glorot_uniform, bias=true)\n\nThe residual gated graph convolutional operator from the Residual Gated Graph ConvNets paper.\n\nThe layer's forward pass is given by\n\nmathbfx_i = actbig(Umathbfx_i + sum_j in N(i) eta_ij V mathbfx_jbig)\n\nwhere the edge gates eta_ij are given by\n\neta_ij = sigmoid(Amathbfx_i + Bmathbfx_j)\n\nArguments\n\nin: The dimension of input features.\nout: The dimension of output features.\nact: Activation function.\ninit: Weight matrices' initializing function. 
\nbias: Learn an additive bias if true.\n\nExamples:\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\nin_channel = 3\nout_channel = 5\ng = GNNGraph(s, t)\n\n# create layer\nl = ResGatedGraphConv(in_channel => out_channel, tanh, bias = true)\n\n# forward pass\ny = l(g, x) \n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.SAGEConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.SAGEConv","text":"SAGEConv(in => out, σ=identity; aggr=mean, bias=true, init=glorot_uniform)\n\nGraphSAGE convolution layer from paper Inductive Representation Learning on Large Graphs.\n\nPerforms:\n\nmathbfx_i = W cdot mathbfx_i square_j in mathcalN(i) mathbfx_j\n\nwhere the aggregation type is selected by aggr.\n\nArguments\n\nin: The dimension of input features.\nout: The dimension of output features.\nσ: Activation function.\naggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).\nbias: Add learnable bias.\ninit: Weights' initializer.\n\nExamples:\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\nin_channel = 3\nout_channel = 5\ng = GNNGraph(s, t)\n\n# create layer\nl = SAGEConv(in_channel => out_channel, tanh, bias = false, aggr = +)\n\n# forward pass\ny = l(g, x) \n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.SGConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.SGConv","text":"SGConv(int => out, k=1; [bias, init, add_self_loops, use_edge_weight])\n\nSGC layer from Simplifying Graph Convolutional Networks Performs operation\n\nH^K = (tildeD^-12 tildeA tildeD^-12)^K X Theta\n\nwhere tildeA is A + I.\n\nArguments\n\nin: Number of input features.\nout: Number of output features.\nk : Number of hops k. Default 1.\nbias: Add learnable bias. Default true.\ninit: Weights' initializer. Default glorot_uniform.\nadd_self_loops: Add self loops to the graph before performing the convolution. Default false.\nuse_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. Default false.\n\nExamples\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\ng = GNNGraph(s, t)\nx = randn(Float32, 3, g.num_nodes)\n\n# create layer\nl = SGConv(3 => 5; add_self_loops = true) \n\n# forward pass\ny = l(g, x) # size: 5 × num_nodes\n\n# convolution with edge weights\nw = [1.1, 0.1, 2.3, 0.5]\ny = l(g, x, w)\n\n# Edge weights can also be embedded in the graph.\ng = GNNGraph(s, t, w)\nl = SGConv(3 => 5, add_self_loops = true, use_edge_weight=true) \ny = l(g, x) # same as l(g, x, w) \n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.TAGConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.TAGConv","text":"TAGConv(in => out, k=3; bias=true, init=glorot_uniform, add_self_loops=true, use_edge_weight=false)\n\nTAGConv layer from Topology Adaptive Graph Convolutional Networks. This layer extends the idea of graph convolutions by applying filters that adapt to the topology of the data. It performs the operation:\n\nH^K = sum_k=0^K (D^-12 A D^-12)^k X Theta_k\n\nwhere A is the adjacency matrix of the graph, D is the degree matrix, X is the input feature matrix, and Theta_k is a unique weight matrix for each hop k.\n\nArguments\n\nin: Number of input features.\nout: Number of output features.\nk: Maximum number of hops to consider. Default is 3.\nbias: Whether to include a learnable bias term. Default is true.\ninit: Initialization function for the weights. 
Default is glorot_uniform.\nadd_self_loops: Whether to add self-loops to the adjacency matrix. Default is true.\nuse_edge_weight: If true, edge weights are considered in the computation (if available). Default is false.\n\nExamples\n\n# Example graph data\ns = [1, 1, 2, 3]\nt = [2, 3, 1, 1]\ng = GNNGraph(s, t) # Create a graph\nx = randn(Float32, 3, g.num_nodes) # Random features for each node\n\n# Create a TAGConv layer\nl = TAGConv(3 => 5, k=3; add_self_loops=true)\n\n# Apply the TAGConv layer\ny = l(g, x) # Output size: 5 × num_nodes\n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.TransformerConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.TransformerConv","text":"TransformerConv((in, ein) => out; [heads, concat, init, add_self_loops, bias_qkv,\n bias_root, root_weight, gating, skip_connection, batch_norm, ff_channels]))\n\nThe transformer-like multi head attention convolutional operator from the Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification paper, which also considers edge features. It further contains options to also be configured as the transformer-like convolutional operator from the Attention, Learn to Solve Routing Problems! paper, including a successive feed-forward network as well as skip layers and batch normalization.\n\nThe layer's basic forward pass is given by\n\nx_i = W_1x_i + sum_jin N(i) alpha_ij (W_2 x_j + W_6e_ij)\n\nwhere the attention scores are\n\nalpha_ij = mathrmsoftmaxleft(frac(W_3x_i)^T(W_4x_j+\nW_6e_ij)sqrtdright)\n\nOptionally, a combination of the aggregated value with transformed root node features by a gating mechanism via\n\nx_i = beta_i W_1 x_i + (1 - beta_i) underbraceleft(sum_j in mathcalN(i)\nalpha_ij W_2 x_j right)_=m_i\n\nwith\n\nbeta_i = textrmsigmoid(W_5^top W_1 x_i m_i W_1 x_i - m_i )\n\ncan be performed.\n\nArguments\n\nin: Dimension of input features, which also corresponds to the dimension of the output features.\nein: Dimension of the edge features; if 0, no edge features will be used.\nout: Dimension of the output.\nheads: Number of heads in output. Default 1.\nconcat: Concatenate layer output or not. If not, layer output is averaged over the heads. Default true.\ninit: Weight matrices' initializing function. Default glorot_uniform.\nadd_self_loops: Add self loops to the input graph. Default false.\nbias_qkv: If set, bias is used in the key, query and value transformations for nodes. Default true.\nbias_root: If set, the layer will also learn an additive bias for the root when root weight is used. Default true.\nroot_weight: If set, the layer will add the transformed root node features to the output. Default true.\ngating: If set, will combine aggregation and transformed root node features by a gating mechanism. Default false.\nskip_connection: If set, a skip connection will be made from the input and added to the output. Default false.\nbatch_norm: If set, a batch normalization will be applied to the output. Default false.\nff_channels: If positive, a feed-forward NN is appended, with the first having the given number of hidden nodes; this NN also gets a skip connection and batch normalization if the respective parameters are set. 
Default: 0.\n\nExamples\n\nN, in_channel, out_channel = 4, 3, 5\nein, heads = 2, 3\ng = GNNGraph([1,1,2,4], [2,3,1,1])\nl = TransformerConv((in_channel, ein) => in_channel; heads, gating = true, bias_qkv = true)\nx = rand(Float32, in_channel, N)\ne = rand(Float32, ein, g.num_edges)\nl(g, x, e)\n\n\n\n\n\n","category":"type"},{"location":"api/messagepassing/","page":"Message Passing","title":"Message Passing","text":"CurrentModule = GraphNeuralNetworks","category":"page"},{"location":"api/messagepassing/#Message-Passing","page":"Message Passing","title":"Message Passing","text":"","category":"section"},{"location":"api/messagepassing/#Index","page":"Message Passing","title":"Index","text":"","category":"section"},{"location":"api/messagepassing/","page":"Message Passing","title":"Message Passing","text":"Order = [:type, :function]\nPages = [\"messagepassing.md\"]","category":"page"},{"location":"api/messagepassing/#Interface","page":"Message Passing","title":"Interface","text":"","category":"section"},{"location":"api/messagepassing/","page":"Message Passing","title":"Message Passing","text":"GNNlib.apply_edges\nGNNlib.aggregate_neighbors\nGNNlib.propagate","category":"page"},{"location":"api/messagepassing/#GNNlib.apply_edges","page":"Message Passing","title":"GNNlib.apply_edges","text":"apply_edges(fmsg, g; [xi, xj, e])\napply_edges(fmsg, g, xi, xj, e=nothing)\n\nReturns the message from node j to node i applying the message function fmsg on the edges in graph g. In the message-passing scheme, the incoming messages from the neighborhood of i will later be aggregated in order to update the features of node i (see aggregate_neighbors).\n\nThe function fmsg operates on batches of edges, therefore xi, xj, and e are tensors whose last dimension is the batch size, or can be named tuples of such tensors.\n\nArguments\n\ng: An AbstractGNNGraph.\nxi: An array or a named tuple containing arrays whose last dimension's size is g.num_nodes. It will be appropriately materialized on the target node of each edge (see also edge_index).\nxj: As xi, but now to be materialized on each edge's source node. \ne: An array or a named tuple containing arrays whose last dimension's size is g.num_edges.\nfmsg: A function that takes as inputs the edge-materialized xi, xj, and e. These are arrays (or named tuples of arrays) whose last dimension' size is the size of a batch of edges. The output of f has to be an array (or a named tuple of arrays) with the same batch size. If also layer is passed to propagate, the signature of fmsg has to be fmsg(layer, xi, xj, e) instead of fmsg(xi, xj, e).\n\nSee also propagate and aggregate_neighbors.\n\n\n\n\n\n","category":"function"},{"location":"api/messagepassing/#GNNlib.aggregate_neighbors","page":"Message Passing","title":"GNNlib.aggregate_neighbors","text":"aggregate_neighbors(g, aggr, m)\n\nGiven a graph g, edge features m, and an aggregation operator aggr (e.g +, min, max, mean), returns the new node features \n\nmathbfx_i = square_j in mathcalN(i) mathbfm_jto i\n\nNeighborhood aggregation is the second step of propagate, where it comes after apply_edges.\n\n\n\n\n\n","category":"function"},{"location":"api/messagepassing/#GNNlib.propagate","page":"Message Passing","title":"GNNlib.propagate","text":"propagate(fmsg, g, aggr; [xi, xj, e])\npropagate(fmsg, g, aggr xi, xj, e=nothing)\n\nPerforms message passing on graph g. 
Takes care of materializing the node features on each edge, applying the message function fmsg, and returning an aggregated message barmathbfm (depending on the return value of fmsg, an array or a named tuple of arrays with last dimension's size g.num_nodes).\n\nIt can be decomposed in two steps:\n\nm = apply_edges(fmsg, g, xi, xj, e)\nm̄ = aggregate_neighbors(g, aggr, m)\n\nGNN layers typically call propagate in their forward pass, providing as input f a closure. \n\nArguments\n\ng: A GNNGraph.\nxi: An array or a named tuple containing arrays whose last dimension's size is g.num_nodes. It will be appropriately materialized on the target node of each edge (see also edge_index).\nxj: As xj, but to be materialized on edges' sources. \ne: An array or a named tuple containing arrays whose last dimension's size is g.num_edges.\nfmsg: A generic function that will be passed over to apply_edges. Has to take as inputs the edge-materialized xi, xj, and e (arrays or named tuples of arrays whose last dimension' size is the size of a batch of edges). Its output has to be an array or a named tuple of arrays with the same batch size. If also layer is passed to propagate, the signature of fmsg has to be fmsg(layer, xi, xj, e) instead of fmsg(xi, xj, e).\naggr: Neighborhood aggregation operator. Use +, mean, max, or min. \n\nExamples\n\nusing GraphNeuralNetworks, Flux\n\nstruct GNNConv <: GNNLayer\n W\n b\n σ\nend\n\nFlux.@layer GNNConv\n\nfunction GNNConv(ch::Pair{Int,Int}, σ=identity)\n in, out = ch\n W = Flux.glorot_uniform(out, in)\n b = zeros(Float32, out)\n GNNConv(W, b, σ)\nend\n\nfunction (l::GNNConv)(g::GNNGraph, x::AbstractMatrix)\n message(xi, xj, e) = l.W * xj\n m̄ = propagate(message, g, +, xj=x)\n return l.σ.(m̄ .+ l.bias)\nend\n\nl = GNNConv(10 => 20)\nl(g, x)\n\nSee also apply_edges and aggregate_neighbors.\n\n\n\n\n\n","category":"function"},{"location":"api/messagepassing/#Built-in-message-functions","page":"Message Passing","title":"Built-in message functions","text":"","category":"section"},{"location":"api/messagepassing/","page":"Message Passing","title":"Message Passing","text":"GNNlib.copy_xi\nGNNlib.copy_xj\nGNNlib.xi_dot_xj\nGNNlib.xi_sub_xj\nGNNlib.xj_sub_xi\nGNNlib.e_mul_xj\nGNNlib.w_mul_xj","category":"page"},{"location":"api/messagepassing/#GNNlib.copy_xi","page":"Message Passing","title":"GNNlib.copy_xi","text":"copy_xi(xi, xj, e) = xi\n\n\n\n\n\n","category":"function"},{"location":"api/messagepassing/#GNNlib.copy_xj","page":"Message Passing","title":"GNNlib.copy_xj","text":"copy_xj(xi, xj, e) = xj\n\n\n\n\n\n","category":"function"},{"location":"api/messagepassing/#GNNlib.xi_dot_xj","page":"Message Passing","title":"GNNlib.xi_dot_xj","text":"xi_dot_xj(xi, xj, e) = sum(xi .* xj, dims=1)\n\n\n\n\n\n","category":"function"},{"location":"api/messagepassing/#GNNlib.xi_sub_xj","page":"Message Passing","title":"GNNlib.xi_sub_xj","text":"xi_sub_xj(xi, xj, e) = xi .- xj\n\n\n\n\n\n","category":"function"},{"location":"api/messagepassing/#GNNlib.xj_sub_xi","page":"Message Passing","title":"GNNlib.xj_sub_xi","text":"xj_sub_xi(xi, xj, e) = xj .- xi\n\n\n\n\n\n","category":"function"},{"location":"api/messagepassing/#GNNlib.e_mul_xj","page":"Message Passing","title":"GNNlib.e_mul_xj","text":"e_mul_xj(xi, xj, e) = reshape(e, (...)) .* xj\n\nReshape e into broadcast compatible shape with xj (by prepending singleton dimensions) then perform broadcasted multiplication.\n\n\n\n\n\n","category":"function"},{"location":"api/messagepassing/#GNNlib.w_mul_xj","page":"Message 
Passing","title":"GNNlib.w_mul_xj","text":"w_mul_xj(xi, xj, w) = reshape(w, (...)) .* xj\n\nSimilar to e_mul_xj but specialized on scalar edge features (weights).\n\n\n\n\n\n","category":"function"},{"location":"#GraphNeuralNetworks","page":"Home","title":"GraphNeuralNetworks","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This is the documentation page for GraphNeuralNetworks.jl, a graph neural network library written in Julia and based on the deep learning framework Flux.jl. GraphNeuralNetworks.jl is largely inspired by PyTorch Geometric, Deep Graph Library, and GeometricFlux.jl.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Among its features:","category":"page"},{"location":"","page":"Home","title":"Home","text":"Implements common graph convolutional layers.\nSupports computations on batched graphs. \nEasy to define custom layers.\nCUDA support.\nIntegration with Graphs.jl.\nExamples of node, edge, and graph level machine learning tasks. ","category":"page"},{"location":"#Package-overview","page":"Home","title":"Package overview","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Let's give a brief overview of the package by solving a graph regression problem with synthetic data. ","category":"page"},{"location":"","page":"Home","title":"Home","text":"Usage examples on real datasets can be found in the examples folder. ","category":"page"},{"location":"#Data-preparation","page":"Home","title":"Data preparation","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"We create a dataset consisting in multiple random graphs and associated data features. ","category":"page"},{"location":"","page":"Home","title":"Home","text":"using GraphNeuralNetworks, Graphs, Flux, CUDA, Statistics, MLUtils\nusing Flux: DataLoader\n\nall_graphs = GNNGraph[]\n\nfor _ in 1:1000\n g = rand_graph(10, 40, \n ndata=(; x = randn(Float32, 16,10)), # input node features\n gdata=(; y = randn(Float32))) # regression target \n push!(all_graphs, g)\nend","category":"page"},{"location":"#Model-building","page":"Home","title":"Model building","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"We concisely define our model as a GNNChain containing two graph convolutional layers. If CUDA is available, our model will live on the gpu.","category":"page"},{"location":"","page":"Home","title":"Home","text":"device = CUDA.functional() ? Flux.gpu : Flux.cpu;\n\nmodel = GNNChain(GCNConv(16 => 64),\n BatchNorm(64), # Apply batch normalization on node features (nodes dimension is batch dimension)\n x -> relu.(x), \n GCNConv(64 => 64, relu),\n GlobalPool(mean), # aggregate node-wise features into graph-wise features\n Dense(64, 1)) |> device\n\nopt = Flux.setup(Adam(1f-4), model)","category":"page"},{"location":"#Training","page":"Home","title":"Training","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Finally, we use a standard Flux training pipeline to fit our dataset. We use Flux's DataLoader to iterate over mini-batches of graphs that are glued together into a single GNNGraph using the Flux.batch method. This is what happens under the hood when creating a DataLoader with the collate=true option. 
","category":"page"},{"location":"","page":"Home","title":"Home","text":"train_graphs, test_graphs = MLUtils.splitobs(all_graphs, at=0.8)\n\ntrain_loader = DataLoader(train_graphs, \n batchsize=32, shuffle=true, collate=true)\ntest_loader = DataLoader(test_graphs, \n batchsize=32, shuffle=false, collate=true)\n\nloss(model, g::GNNGraph) = mean((vec(model(g, g.x)) - g.y).^2)\n\nloss(model, loader) = mean(loss(model, g |> device) for g in loader)\n\nfor epoch in 1:100\n for g in train_loader\n g = g |> device\n grad = gradient(model -> loss(model, g), model)\n Flux.update!(opt, model, grad[1])\n end\n\n @info (; epoch, train_loss=loss(model, train_loader), test_loss=loss(model, test_loader))\nend","category":"page"},{"location":"tutorials/introductory_tutorials/gnn_intro_pluto/#Hands-on-introduction-to-Graph-Neural-Networks","page":"Hands-on introduction to Graph Neural Networks","title":"Hands-on introduction to Graph Neural Networks","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/gnn_intro_pluto/","page":"Hands-on introduction to Graph Neural Networks","title":"Hands-on introduction to Graph Neural Networks","text":"(Image: Source code) (Image: Author) (Image: Update time)","category":"page"},{"location":"tutorials/introductory_tutorials/gnn_intro_pluto/","page":"Hands-on introduction to Graph Neural Networks","title":"Hands-on introduction to Graph Neural Networks","text":"\n\n\n\n\n

This Pluto notebook is a Julia adaptation of the PyTorch Geometric tutorials that can be found here.

Recently, deep learning on graphs has emerged as one of the hottest research fields in the deep learning community. Here, Graph Neural Networks (GNNs) aim to generalize classical deep learning concepts to irregularly structured data (in contrast to images or text) and to enable neural networks to reason about objects and their relations.

This is done by following a simple neural message passing scheme, where node features \\(\\mathbf{x}_i^{(\\ell)}\\) of all nodes \\(i \\in \\mathcal{V}\\) in a graph \\(\\mathcal{G} = (\\mathcal{V}, \\mathcal{E})\\) are iteratively updated by aggregating localized information from their neighbors \\(\\mathcal{N}(i)\\):

$$\\mathbf{x}_i^{(\\ell + 1)} = f^{(\\ell + 1)}_{\\theta} \\left( \\mathbf{x}_i^{(\\ell)}, \\left\\{ \\mathbf{x}_j^{(\\ell)} : j \\in \\mathcal{N}(i) \\right\\} \\right)$$
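As a rough sketch (not part of the original notebook), this generic update can be expressed with the propagate function provided by the library; here the message is simply the neighbor's feature and the aggregation is a sum, whereas real layers learn the functions involved:

using GraphNeuralNetworks

g = rand_graph(5, 10)                      # a small random graph: 5 nodes, 10 edges
x = randn(Float32, 3, g.num_nodes)         # node features x_i, one column per node
message(xi, xj, e) = xj                    # send x_j along each edge j -> i
x_new = propagate(message, g, +; xj = x)   # sum the incoming messages for every node i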

This tutorial will introduce you to some fundamental concepts regarding deep learning on graphs via Graph Neural Networks based on the GraphNeuralNetworks.jl library. GraphNeuralNetworks.jl is an extension library to the popular deep learning framework Flux.jl, and consists of various methods and utilities to ease the implementation of Graph Neural Networks.

Let's first import the packages we need:

\n\n
begin\n    using Flux\n    using Flux: onecold, onehotbatch, logitcrossentropy\n    using MLDatasets\n    using LinearAlgebra, Random, Statistics\n    import GraphMakie\n    import CairoMakie as Makie\n    using Graphs\n    using PlutoUI\n    using GraphNeuralNetworks\nend
\n\n\n
begin\n    ENV[\"DATADEPS_ALWAYS_ACCEPT\"] = \"true\"  # don't ask for dataset download confirmation\n    Random.seed!(17) # for reproducibility\nend;
\n\n\n\n

Following Kipf et al. (2017), let's dive into the world of GNNs by looking at a simple graph-structured example, the well-known Zachary's karate club network. This graph describes a social network of 34 members of a karate club and documents links between members who interacted outside the club. Here, we are interested in detecting communities that arise from the members' interactions.

GraphNeuralNetworks.jl provides utilities to convert MLDatasets.jl's datasets to its own type:

\n\n
dataset = MLDatasets.KarateClub()
\n
dataset KarateClub:\n  metadata  =>    Dict{String, Any} with 0 entries\n  graphs    =>    1-element Vector{MLDatasets.Graph}
\n\n\n

After initializing the KarateClub dataset, we can first inspect some of its properties. For example, we can see that this dataset holds exactly one graph. Furthermore, each node is assigned to one of exactly 4 classes, which represent the community it belongs to.

\n\n
karate = dataset[1]
\n
Graph:\n  num_nodes   =>    34\n  num_edges   =>    156\n  edge_index  =>    (\"156-element Vector{Int64}\", \"156-element Vector{Int64}\")\n  node_data   =>    (labels_clubs = \"34-element Vector{Int64}\", labels_comm = \"34-element Vector{Int64}\")\n  edge_data   =>    nothing
\n\n
karate.node_data.labels_comm
\n
34-element Vector{Int64}:\n 1\n 1\n 1\n 1\n 3\n 3\n 3\n ⋮\n 2\n 0\n 0\n 2\n 0\n 0
\n\n\n

Now we convert the single-graph dataset to a GNNGraph. Moreover, we add an array of node features, a 34-dimensional feature vector for each node which uniquely describes the members of the karate club. We also add a training mask selecting the nodes to be used for training in our semi-supervised node classification task.

\n\n
begin\n    # convert a MLDataset.jl's dataset to a GNNGraphs (or a collection of graphs)\n    g = mldataset2gnngraph(dataset)\n\n    x = zeros(Float32, g.num_nodes, g.num_nodes)\n    x[diagind(x)] .= 1\n\n    train_mask = [true, false, false, false, true, false, false, false, true,\n        false, false, false, false, false, false, false, false, false, false, false,\n        false, false, false, false, true, false, false, false, false, false,\n        false, false, false, false]\n\n    labels = g.ndata.labels_comm\n    y = onehotbatch(labels, 0:3)\n\n    g = GNNGraph(g, ndata = (; x, y, train_mask))\nend
\n
GNNGraph:\n  num_nodes: 34\n  num_edges: 156\n  ndata:\n\ty = 4×34 OneHotMatrix(::Vector{UInt32}) with eltype Bool\n\ttrain_mask = 34-element Vector{Bool}\n\tx = 34×34 Matrix{Float32}
\n\n\n

Let's now look at the underlying graph in more detail:

\n\n
with_terminal() do\n    # Gather some statistics about the graph.\n    println(\"Number of nodes: $(g.num_nodes)\")\n    println(\"Number of edges: $(g.num_edges)\")\n    println(\"Average node degree: $(g.num_edges / g.num_nodes)\")\n    println(\"Number of training nodes: $(sum(g.ndata.train_mask))\")\n    println(\"Training node label rate: $(mean(g.ndata.train_mask))\")\n    # println(\"Has isolated nodes: $(has_isolated_nodes(g))\")\n    println(\"Has self-loops: $(has_self_loops(g))\")\n    println(\"Is undirected: $(is_bidirected(g))\")\nend
\n
Number of nodes: 34\nNumber of edges: 156\nAverage node degree: 4.588235294117647\nNumber of training nodes: 4\nTraining node label rate: 0.11764705882352941\nHas self-loops: false\nIs undirected: true\n
\n\n\n

Each graph in GraphNeuralNetworks.jl is represented by a GNNGraph object, which holds all the information to describe its graph representation. We can print the data object anytime via print(g) to receive a short summary about its attributes and their shapes.

The g object holds 3 attributes: g.ndata for node features, g.edata for edge features, and g.gdata for graph-level features.

These attributes are DataStore containers with a dictionary-like and NamedTuple-like interface that can store multiple feature arrays: we can access a specific feature array, e.g. x, with g.ndata.x.
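For instance, on the karate-club graph constructed above (the shapes in the comments reflect the construction cell):

size(g.ndata.x)             # (34, 34) - one-hot node features
size(g.ndata.y)             # (4, 34)  - one-hot community labels
length(g.ndata.train_mask)  # 34       - boolean mask of the labelled nodes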

In our task, g.ndata.train_mask describes for which nodes we already know their community assignments. In total, we are only aware of the ground-truth labels of 4 nodes (one for each community), and the task is to infer the community assignment for the remaining nodes.

The g object also provides some utility functions to infer some basic properties of the underlying graph. For example, we can easily infer whether there exist isolated nodes in the graph (i.e. there exists no edge to any node), whether the graph contains self-loops (i.e., \\((v, v) \\in \\mathcal{E}\\)), or whether the graph is bidirected (i.e., for each edge \\((v, w) \\in \\mathcal{E}\\) there also exists the edge \\((w, v) \\in \\mathcal{E}\\)).
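A quick check on the graph above (expected results in the comments, consistent with the statistics printed earlier):

has_isolated_nodes(g)  # false - every club member has at least one connection
has_self_loops(g)      # false
is_bidirected(g)       # true  - each edge is stored in both directions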

Let us now inspect the edge_index method:

\n\n
edge_index(g)
\n
([1, 1, 1, 1, 1, 1, 1, 1, 1, 1  …  34, 34, 34, 34, 34, 34, 34, 34, 34, 34], [2, 3, 4, 5, 6, 7, 8, 9, 11, 12  …  21, 23, 24, 27, 28, 29, 30, 31, 32, 33])
\n\n\n

By printing edge_index(g), we can understand how GraphNeuralNetworks.jl represents graph connectivity internally. edge_index returns a tuple of two vectors of node indices: for each edge, the first vector stores the index of the source node and the second vector stores the index of the destination node.

This representation is known as the COO format (coordinate format) commonly used for representing sparse matrices. Instead of holding the adjacency information in a dense representation \\(\\mathbf{A} \\in \\{ 0, 1 \\}^{|\\mathcal{V}| \\times |\\mathcal{V}|}\\), GraphNeuralNetworks.jl represents graphs sparsely, which refers to only holding the coordinates/values for which entries in \\(\\mathbf{A}\\) are non-zero.

Importantly, GraphNeuralNetworks.jl does not distinguish between directed and undirected graphs, and treats undirected graphs as a special case of directed graphs in which reverse edges exist for every entry in the edge_index.
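To make the convention concrete, here is a small sketch (not part of the notebook) with a directed triangle:

s, t = [1, 2, 3], [2, 3, 1]    # COO source and target vectors
tri = GNNGraph(s, t)           # edges 1 -> 2, 2 -> 3, 3 -> 1
adjacency_matrix(tri)          # 3×3 sparse matrix with nonzeros at (1,2), (2,3), (3,1)
is_bidirected(tri)             # false, since the reverse edges were not added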

Since a GNNGraph is an AbstractGraph from the Graphs.jl library, it supports graph algorithms and visualization tools from the wider Julia graph ecosystem:

\n\n
GraphMakie.graphplot(g |> to_unidirected, node_size = 20, node_color = labels,\n                     arrow_show = false)
\n\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/gnn_intro_pluto/#Implementing-Graph-Neural-Networks","page":"Hands-on introduction to Graph Neural Networks","title":"Implementing Graph Neural Networks","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/gnn_intro_pluto/","page":"Hands-on introduction to Graph Neural Networks","title":"Hands-on introduction to Graph Neural Networks","text":"
\n

After learning about GraphNeuralNetworks.jl's data handling, it's time to implement our first Graph Neural Network!

For this, we will use one of the simplest GNN operators, the GCN layer (Kipf et al. (2017)), which is defined as

$$\\mathbf{x}_v^{(\\ell + 1)} = \\mathbf{W}^{(\\ell + 1)} \\sum_{w \\in \\mathcal{N}(v) \\, \\cup \\, \\{ v \\}} \\frac{1}{c_{w,v}} \\cdot \\mathbf{x}_w^{(\\ell)}$$

where \\(\\mathbf{W}^{(\\ell + 1)}\\) denotes a trainable weight matrix of shape [num_output_features, num_input_features] and \\(c_{w,v}\\) refers to a fixed normalization coefficient for each edge.

GraphNeuralNetworks.jl implements this layer via GCNConv, which is executed by passing in the graph g (carrying the COO connectivity returned by edge_index) together with the node feature representation x.
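A minimal standalone usage sketch (the 34 => 4 size matches the first layer of the model defined below):

layer = GCNConv(34 => 4)    # maps 34 input features to 4 output features per node
h1 = layer(g, g.ndata.x)    # returns a 4 × g.num_nodes matrix of node features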

With this, we are ready to create our first Graph Neural Network by defining our network architecture:

\n\n
begin\n    struct GCN\n        layers::NamedTuple\n    end\n\n    Flux.@layer GCN # provides parameter collection, gpu movement and more\n\n    function GCN(num_features, num_classes)\n        layers = (conv1 = GCNConv(num_features => 4),\n                  conv2 = GCNConv(4 => 4),\n                  conv3 = GCNConv(4 => 2),\n                  classifier = Dense(2, num_classes))\n        return GCN(layers)\n    end\n\n    function (gcn::GCN)(g::GNNGraph, x::AbstractMatrix)\n        l = gcn.layers\n        x = l.conv1(g, x)\n        x = tanh.(x)\n        x = l.conv2(g, x)\n        x = tanh.(x)\n        x = l.conv3(g, x)\n        x = tanh.(x)  # Final GNN embedding space.\n        out = l.classifier(x)\n        # Apply a final (linear) classifier.\n        return out, x\n    end\nend
\n\n\n\n

Here, we first initialize all of our building blocks in the constructor and define the computation flow of our network in the call method. We first define and stack three graph convolution layers, which corresponds to aggregating 3-hop neighborhood information around each node (all nodes up to 3 \"hops\" away). In addition, the GCNConv layers reduce the node feature dimensionality to \\(2\\), i.e., \\(34 \\rightarrow 4 \\rightarrow 4 \\rightarrow 2\\). Each GCNConv layer is enhanced by a tanh non-linearity.

After that, we apply a single linear transformation (Flux.Dense) that acts as a classifier to map our nodes to 1 out of the 4 classes/communities.

We return both the output of the final classifier and the final node embeddings produced by our GNN. We then initialize our final model via GCN(), and printing the model produces a summary of all of its sub-modules.

Embedding the Karate Club Network

Let's take a look at the node embeddings produced by our GNN. Here, we pass in the initial node features x and the graph information g to the model, and visualize its 2-dimensional embedding.

\n\n
begin\n    num_features = 34\n    num_classes = 4\n    gcn = GCN(num_features, num_classes)\nend
\n
GCN((conv1 = GCNConv(34 => 4), conv2 = GCNConv(4 => 4), conv3 = GCNConv(4 => 2), classifier = Dense(2 => 4)))  # 182 parameters
\n\n
_, h = gcn(g, g.ndata.x)
\n
(Float32[-0.005740445 -0.01884863 … 0.0049703615 0.004798473; -0.003971542 -0.00664741 … 0.002226899 0.0007848764; 0.00782498 0.03179506 … -0.00793192 -0.008960456; 0.03580918 0.015548051 … -0.011664602 0.010523779], Float32[-0.023297783 -0.03627783 … 0.012548346 0.003526861; 0.036984775 0.008870006 … -0.010684907 0.013719623])
\n\n
function visualize_embeddings(h; colors = nothing)\n    xs = h[1, :] |> vec\n    ys = h[2, :] |> vec\n    # color by the given `colors` (falling back to gray) instead of the global `labels`\n    Makie.scatter(xs, ys, color = something(colors, :gray), markersize = 20)\nend
\n
visualize_embeddings (generic function with 1 method)
\n\n
visualize_embeddings(h, colors = labels)
\n\n\n\n

Remarkably, even before training the weights of our model, the model produces an embedding of nodes that closely resembles the community-structure of the graph. Nodes of the same color (community) are already closely clustered together in the embedding space, although the weights of our model are initialized completely at random and we have not yet performed any training so far! This leads to the conclusion that GNNs introduce a strong inductive bias, leading to similar embeddings for nodes that are close to each other in the input graph.

Training on the Karate Club Network

But can we do better? Let's look at an example on how to train our network parameters based on the knowledge of the community assignments of 4 nodes in the graph (one for each community).

Since everything in our model is differentiable and parameterized, we can add some labels, train the model and observe how the embeddings react. Here, we make use of a semi-supervised or transductive learning procedure: we simply train against one node per class, but are allowed to make use of the complete input graph data.

Training our model is very similar to training any other Flux model. In addition to defining our network architecture, we define a loss criterion (here, logitcrossentropy) and initialize a stochastic gradient optimizer (here, Adam). After that, we perform multiple rounds of optimization, where each round consists of a forward and backward pass to compute the gradients of our model parameters w.r.t. the loss derived from the forward pass. If you are not new to Flux, this scheme should appear familiar to you.

Note that our semi-supervised learning scenario is achieved by the following line:

loss = logitcrossentropy(ŷ[:,train_mask], y[:,train_mask])

While we compute node embeddings for all of our nodes, we only make use of the training nodes for computing the loss. Here, this is implemented by filtering the classifier output ŷ and the ground-truth labels y so that they only contain the nodes in the train_mask.
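As a side note (not part of the original notebook), this filtering is plain boolean column indexing; with 4 training nodes the masked arrays keep one column per labelled node:

ŷ[:, train_mask]   # 4 × 4 matrix of logits for the training nodes
y[:, train_mask]   # 4 × 4 one-hot labels for the training nodes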

Let us now start training and see how our node embeddings evolve over time (best experienced by explicitly running the code):

\n\n
begin\n    model = GCN(num_features, num_classes)\n    opt = Flux.setup(Adam(1e-2), model)\n    epochs = 2000\n\n    emb = h\n    function report(epoch, loss, h)\n        # p = visualize_embeddings(h)\n        @info (; epoch, loss)\n    end\n\n    report(0, 10.0, emb)\n    for epoch in 1:epochs\n        loss, grad = Flux.withgradient(model) do model\n            ŷ, emb = model(g, g.ndata.x)\n            logitcrossentropy(ŷ[:, train_mask], y[:, train_mask])\n        end\n\n        Flux.update!(opt, model, grad[1])\n        if epoch % 200 == 0\n            report(epoch, loss, emb)\n        end\n    end\nend
\n\n\n
ŷ, emb_final = model(g, g.ndata.x)
\n
(Float32[-8.298183 -6.7843485 … 7.992106 7.9739995; 6.4358 5.3154697 … -6.552024 -6.5364995; -0.27719232 1.0525229 … 0.90657854 0.9205904; -0.5221016 -1.515696 … 1.1027651 1.0865755], Float32[0.99810773 0.6356816 … -0.9999997 -0.9999999; -0.9964947 -0.99806273 … 0.999749 0.9951793])
\n\n
# train accuracy\nmean(onecold(ŷ[:, train_mask]) .== onecold(y[:, train_mask]))
\n
1.0
\n\n
# test accuracy\nmean(onecold(ŷ[:, .!train_mask]) .== onecold(y[:, .!train_mask]))
\n
0.8
\n\n
visualize_embeddings(emb_final, colors = labels)
\n\n\n\n

As one can see, our 3-layer GCN model manages to linearly separate the communities and classify most of the nodes correctly.

Furthermore, we did all of this with a few lines of code, thanks to GraphNeuralNetworks.jl, which helped us out with data handling and GNN implementations.

\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/gnn_intro_pluto/","page":"Hands-on introduction to Graph Neural Networks","title":"Hands-on introduction to Graph Neural Networks","text":"EditURL = \"https://github.com/CarloLucibello/GraphNeuralNetworks.jl/blob/master/docs/src/tutorials/introductory_tutorials/gnn_intro_pluto.jl\"","category":"page"},{"location":"tutorials/introductory_tutorials/gnn_intro_pluto/","page":"Hands-on introduction to Graph Neural Networks","title":"Hands-on introduction to Graph Neural Networks","text":"","category":"page"},{"location":"tutorials/introductory_tutorials/gnn_intro_pluto/","page":"Hands-on introduction to Graph Neural Networks","title":"Hands-on introduction to Graph Neural Networks","text":"This page was generated using DemoCards.jl. and PlutoStaticHTML.jl","category":"page"}] +[{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"CurrentModule = GNNGraphs","category":"page"},{"location":"api/gnngraph/#GNNGraph","page":"GNNGraph","title":"GNNGraph","text":"","category":"section"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"Documentation page for the graph type GNNGraph provided by GraphNeuralNetworks.jl and related methods. ","category":"page"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"Besides the methods documented here, one can rely on the large set of functionalities given by Graphs.jl thanks to the fact that GNNGraph inherits from Graphs.AbstractGraph.","category":"page"},{"location":"api/gnngraph/#Index","page":"GNNGraph","title":"Index","text":"","category":"section"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"Order = [:type, :function]\nPages = [\"gnngraph.md\"]","category":"page"},{"location":"api/gnngraph/#GNNGraph-type","page":"GNNGraph","title":"GNNGraph type","text":"","category":"section"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"GNNGraph\nBase.copy","category":"page"},{"location":"api/gnngraph/#GNNGraphs.GNNGraph","page":"GNNGraph","title":"GNNGraphs.GNNGraph","text":"GNNGraph(data; [graph_type, ndata, edata, gdata, num_nodes, graph_indicator, dir])\nGNNGraph(g::GNNGraph; [ndata, edata, gdata])\n\nA type representing a graph structure that also stores feature arrays associated to nodes, edges, and the graph itself.\n\nThe feature arrays are stored in the fields ndata, edata, and gdata as DataStore objects offering a convenient dictionary-like and namedtuple-like interface. The features can be passed at construction time or added later.\n\nA GNNGraph can be constructed out of different data objects expressing the connections inside the graph. The internal representation type is determined by graph_type.\n\nWhen constructed from another GNNGraph, the internal graph representation is preserved and shared. The node/edge/graph features are retained as well, unless explicitely set by the keyword arguments ndata, edata, and gdata.\n\nA GNNGraph can also represent multiple graphs batched togheter (see MLUtils.batch or SparseArrays.blockdiag). The field g.graph_indicator contains the graph membership of each node.\n\nGNNGraphs are always directed graphs, therefore each edge is defined by a source node and a target node (see edge_index). 
Self loops (edges connecting a node to itself) and multiple edges (more than one edge between the same pair of nodes) are supported.\n\nA GNNGraph is a Graphs.jl's AbstractGraph, therefore it supports most functionality from that library.\n\nArguments\n\ndata: Some data representing the graph topology. Possible type are\nAn adjacency matrix\nAn adjacency list.\nA tuple containing the source and target vectors (COO representation)\nA Graphs.jl' graph.\ngraph_type: A keyword argument that specifies the underlying representation used by the GNNGraph. Currently supported values are\n:coo. Graph represented as a tuple (source, target), such that the k-th edge connects the node source[k] to node target[k]. Optionally, also edge weights can be given: (source, target, weights).\n:sparse. A sparse adjacency matrix representation.\n:dense. A dense adjacency matrix representation.\nDefaults to :coo, currently the most supported type.\ndir: The assumed edge direction when given adjacency matrix or adjacency list input data g. Possible values are :out and :in. Default :out.\nnum_nodes: The number of nodes. If not specified, inferred from g. Default nothing.\ngraph_indicator: For batched graphs, a vector containing the graph assignment of each node. Default nothing.\nndata: Node features. An array or named tuple of arrays whose last dimension has size num_nodes.\nedata: Edge features. An array or named tuple of arrays whose last dimension has size num_edges.\ngdata: Graph features. An array or named tuple of arrays whose last dimension has size num_graphs.\n\nExamples\n\nusing GraphNeuralNetworks\n\n# Construct from adjacency list representation\ndata = [[2,3], [1,4,5], [1], [2,5], [2,4]]\ng = GNNGraph(data)\n\n# Number of nodes, edges, and batched graphs\ng.num_nodes # 5\ng.num_edges # 10\ng.num_graphs # 1\n\n# Same graph in COO representation\ns = [1,1,2,2,2,3,4,4,5,5]\nt = [2,3,1,4,5,3,2,5,2,4]\ng = GNNGraph(s, t)\n\n# From a Graphs' graph\ng = GNNGraph(erdos_renyi(100, 20))\n\n# Add 2 node feature arrays at creation time\ng = GNNGraph(g, ndata = (x=rand(100, g.num_nodes), y=rand(g.num_nodes)))\n\n# Add 1 edge feature array, after the graph creation\ng.edata.z = rand(16, g.num_edges)\n\n# Add node features and edge features with default names `x` and `e`\ng = GNNGraph(g, ndata = rand(100, g.num_nodes), edata = rand(16, g.num_edges))\n\ng.ndata.x # or just g.x\ng.edata.e # or just g.e\n\n# Collect edges' source and target nodes.\n# Both source and target are vectors of length num_edges\nsource, target = edge_index(g)\n\nA GNNGraph can be sent to the GPU using e.g. Flux's gpu function:\n\n# Send to gpu\nusing Flux, CUDA\ng = g |> Flux.gpu\n\n\n\n\n\n","category":"type"},{"location":"api/gnngraph/#Base.copy","page":"GNNGraph","title":"Base.copy","text":"copy(g::GNNGraph; deep=false)\n\nCreate a copy of g. If deep is true, then copy will be a deep copy (equivalent to deepcopy(g)), otherwise it will be a shallow copy with the same underlying graph data.\n\n\n\n\n\n","category":"function"},{"location":"api/gnngraph/#DataStore","page":"GNNGraph","title":"DataStore","text":"","category":"section"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"Modules = [GNNGraphs]\nPages = [\"datastore.jl\"]\nPrivate = false","category":"page"},{"location":"api/gnngraph/#GNNGraphs.DataStore","page":"GNNGraph","title":"GNNGraphs.DataStore","text":"DataStore([n, data])\nDataStore([n,] k1 = x1, k2 = x2, ...)\n\nA container for feature arrays. 
The optional argument n enforces that numobs(x) == n for each array contained in the datastore.\n\nAt construction time, the data can be provided as any iterables of pairs of symbols and arrays or as keyword arguments:\n\njulia> ds = DataStore(3, x = rand(Float32, 2, 3), y = rand(Float32, 3))\nDataStore(3) with 2 elements:\n y = 3-element Vector{Float32}\n x = 2×3 Matrix{Float32}\n\njulia> ds = DataStore(3, Dict(:x => rand(Float32, 2, 3), :y => rand(Float32, 3))); # equivalent to above\n\njulia> ds = DataStore(3, (x = rand(Float32, 2, 3), y = rand(Float32, 30)))\nERROR: AssertionError: DataStore: data[y] has 30 observations, but n = 3\nStacktrace:\n [1] DataStore(n::Int64, data::Dict{Symbol, Any})\n @ GNNGraphs ~/.julia/dev/GNNGraphs/datastore.jl:54\n [2] DataStore(n::Int64, data::NamedTuple{(:x, :y), Tuple{Matrix{Float32}, Vector{Float32}}})\n @ GNNGraphs ~/.julia/dev/GNNGraphs/datastore.jl:73\n [3] top-level scope\n @ REPL[13]:1\n\njulia> ds = DataStore(x = randFloat32, 2, 3), y = rand(Float32, 30)) # no checks\nDataStore() with 2 elements:\n y = 30-element Vector{Float32}\n x = 2×3 Matrix{Float32}\n y = 30-element Vector{Float64}\n x = 2×3 Matrix{Float64}\n\nThe DataStore has an interface similar to both dictionaries and named tuples. Arrays can be accessed and added using either the indexing or the property syntax:\n\njulia> ds = DataStore(x = ones(Float32, 2, 3), y = zeros(Float32, 3))\nDataStore() with 2 elements:\n y = 3-element Vector{Float32}\n x = 2×3 Matrix{Float32}\n\njulia> ds.x # same as `ds[:x]`\n2×3 Matrix{Float32}:\n 1.0 1.0 1.0\n 1.0 1.0 1.0\n\njulia> ds.z = zeros(Float32, 3) # Add new feature array `z`. Same as `ds[:z] = rand(Float32, 3)`\n3-element Vector{Float64}:\n0.0\n0.0\n0.0\n\nThe DataStore can be iterated over, and the keys and values can be accessed using keys(ds) and values(ds). map(f, ds) applies the function f to each feature array:\n\njulia> ds = DataStore(a = zeros(2), b = zeros(2));\n\njulia> ds2 = map(x -> x .+ 1, ds)\n\njulia> ds2.a\n2-element Vector{Float64}:\n 1.0\n 1.0\n\n\n\n\n\n","category":"type"},{"location":"api/gnngraph/#Query","page":"GNNGraph","title":"Query","text":"","category":"section"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"Modules = [GNNGraphs]\nPages = [\"query.jl\"]\nPrivate = false","category":"page"},{"location":"api/gnngraph/#GNNGraphs.adjacency_list-Tuple{GNNGraph, Any}","page":"GNNGraph","title":"GNNGraphs.adjacency_list","text":"adjacency_list(g; dir=:out)\nadjacency_list(g, nodes; dir=:out)\n\nReturn the adjacency list representation (a vector of vectors) of the graph g.\n\nCalling a the adjacency list, if dir=:out than a[i] will contain the neighbors of node i through outgoing edges. 
If dir=:in, it will contain neighbors from incoming edges instead.\n\nIf nodes is given, return the neighborhood of the nodes in nodes only.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.edge_index-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}}","page":"GNNGraph","title":"GNNGraphs.edge_index","text":"edge_index(g::GNNGraph)\n\nReturn a tuple containing two vectors, respectively storing the source and target nodes for each edges in g.\n\ns, t = edge_index(g)\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.edge_index-Tuple{GNNHeteroGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}, Tuple{Symbol, Symbol, Symbol}}","page":"GNNGraph","title":"GNNGraphs.edge_index","text":"edge_index(g::GNNHeteroGraph, [edge_t])\n\nReturn a tuple containing two vectors, respectively storing the source and target nodes for each edges in g of type edge_t = (src_t, rel_t, trg_t).\n\nIf edge_t is not provided, it will error if g has more than one edge type.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.graph_indicator-Tuple{GNNGraph}","page":"GNNGraph","title":"GNNGraphs.graph_indicator","text":"graph_indicator(g::GNNGraph; edges=false)\n\nReturn a vector containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph. If edges=true, return the graph membership of each edge instead.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.graph_indicator-Tuple{GNNHeteroGraph}","page":"GNNGraph","title":"GNNGraphs.graph_indicator","text":"graph_indicator(g::GNNHeteroGraph, [node_t])\n\nReturn a Dict of vectors containing the graph membership (an integer from 1 to g.num_graphs) of each node in the graph for each node type. If node_t is provided, return the graph membership of each node of type node_t instead.\n\nSee also batch.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.has_isolated_nodes-Tuple{GNNGraph}","page":"GNNGraph","title":"GNNGraphs.has_isolated_nodes","text":"has_isolated_nodes(g::GNNGraph; dir=:out)\n\nReturn true if the graph g contains nodes with out-degree (if dir=:out) or in-degree (if dir = :in) equal to zero.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.has_multi_edges-Tuple{GNNGraph}","page":"GNNGraph","title":"GNNGraphs.has_multi_edges","text":"has_multi_edges(g::GNNGraph)\n\nReturn true if g has any multiple edges.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.is_bidirected-Tuple{GNNGraph}","page":"GNNGraph","title":"GNNGraphs.is_bidirected","text":"is_bidirected(g::GNNGraph)\n\nCheck if the directed graph g essentially corresponds to an undirected graph, i.e. if for each edge it also contains the reverse edge. 
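\n\nAn illustrative example (a sketch added here for clarity, not from the original docstring; it uses two tiny hand-built graphs):\n\njulia> is_bidirected(GNNGraph([1,2,3], [2,3,1]))  # edges 1→2, 2→3, 3→1 have no reverse\nfalse\n\njulia> is_bidirected(GNNGraph([1,2,2,3], [2,1,3,2]))  # every edge has its reverse\ntrue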
\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.khop_adj","page":"GNNGraph","title":"GNNGraphs.khop_adj","text":"khop_adj(g::GNNGraph,k::Int,T::DataType=eltype(g); dir=:out, weighted=true)\n\nReturn A^k where A is the adjacency matrix of the graph 'g'.\n\n\n\n\n\n","category":"function"},{"location":"api/gnngraph/#GNNGraphs.laplacian_lambda_max","page":"GNNGraph","title":"GNNGraphs.laplacian_lambda_max","text":"laplacian_lambda_max(g::GNNGraph, T=Float32; add_self_loops=false, dir=:out)\n\nReturn the largest eigenvalue of the normalized symmetric Laplacian of the graph g.\n\nIf the graph is batched from multiple graphs, return the list of the largest eigenvalue for each graph.\n\n\n\n\n\n","category":"function"},{"location":"api/gnngraph/#GNNGraphs.normalized_laplacian","page":"GNNGraph","title":"GNNGraphs.normalized_laplacian","text":"normalized_laplacian(g, T=Float32; add_self_loops=false, dir=:out)\n\nNormalized Laplacian matrix of graph g.\n\nArguments\n\ng: A GNNGraph.\nT: result element type.\nadd_self_loops: add self-loops while calculating the matrix.\ndir: the edge directionality considered (:out, :in, :both).\n\n\n\n\n\n","category":"function"},{"location":"api/gnngraph/#GNNGraphs.scaled_laplacian","page":"GNNGraph","title":"GNNGraphs.scaled_laplacian","text":"scaled_laplacian(g, T=Float32; dir=:out)\n\nScaled Laplacian matrix of graph g, defined as L̂ = (2 / λ_max) L - I, where L is the normalized Laplacian matrix and λ_max is its largest eigenvalue.\n\nArguments\n\ng: A GNNGraph.\nT: result element type.\ndir: the edge directionality considered (:out, :in, :both).\n\n\n\n\n\n","category":"function"},{"location":"api/gnngraph/#Graphs.LinAlg.adjacency_matrix","page":"GNNGraph","title":"Graphs.LinAlg.adjacency_matrix","text":"adjacency_matrix(g::GNNGraph, T=eltype(g); dir=:out, weighted=true)\n\nReturn the adjacency matrix A for the graph g. \n\nIf dir=:out, A[i,j] > 0 denotes the presence of an edge from node i to node j. If dir=:in instead, A[i,j] > 0 denotes the presence of an edge from node j to node i.\n\nThe user may specify the eltype T of the returned matrix. \n\nIf weighted=true, A will contain the edge weights if any, otherwise the elements of A will be either 0 or 1.\n\n\n\n\n\n","category":"function"},{"location":"api/gnngraph/#Graphs.degree-Union{Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}}, Tuple{TT}, Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}, TT}} where TT<:Union{Nothing, Type{<:Number}}","page":"GNNGraph","title":"Graphs.degree","text":"degree(g::GNNGraph, T=nothing; dir=:out, edge_weight=true)\n\nReturn a vector containing the degrees of the nodes in g.\n\nThe gradient is propagated through this function only if edge_weight is true or a vector.\n\nArguments\n\ng: A graph.\nT: Element type of the returned vector. If nothing, is chosen based on the graph type and will be an integer if edge_weight = false. Default nothing.\ndir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two.\nedge_weight: If true and the graph contains weighted edges, the degree will be weighted. Set to false instead to just count the number of outgoing/ingoing edges. Finally, you can also pass a vector of weights to be used instead of the graph's own weights. 
Default true.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#Graphs.degree-Union{Tuple{TT}, Tuple{GNNHeteroGraph, Tuple{Symbol, Symbol, Symbol}}, Tuple{GNNHeteroGraph, Tuple{Symbol, Symbol, Symbol}, TT}} where TT<:Union{Nothing, Type{<:Number}}","page":"GNNGraph","title":"Graphs.degree","text":"degree(g::GNNHeteroGraph, edge_type::EType; dir = :in)\n\nReturn a vector containing the degrees of the nodes in g GNNHeteroGraph given edge_type.\n\nArguments\n\ng: A graph.\nedge_type: A tuple of symbols (source_t, edge_t, target_t) representing the edge type.\nT: Element type of the returned vector. If nothing, is chosen based on the graph type. Default nothing.\ndir: For dir = :out the degree of a node is counted based on the outgoing edges. For dir = :in, the ingoing edges are used. If dir = :both we have the sum of the two. Default dir = :out.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#Graphs.has_self_loops-Tuple{GNNGraph}","page":"GNNGraph","title":"Graphs.has_self_loops","text":"has_self_loops(g::GNNGraph)\n\nReturn true if g has any self loops.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#Graphs.inneighbors-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}, Integer}","page":"GNNGraph","title":"Graphs.inneighbors","text":"inneighbors(g::GNNGraph, i::Integer)\n\nReturn the neighbors of node i in the graph g through incoming edges.\n\nSee also neighbors and outneighbors.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#Graphs.outneighbors-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}, Integer}","page":"GNNGraph","title":"Graphs.outneighbors","text":"outneighbors(g::GNNGraph, i::Integer)\n\nReturn the neighbors of node i in the graph g through outgoing edges.\n\nSee also neighbors and inneighbors.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"Graphs.neighbors(::GNNGraph, ::Integer)","category":"page"},{"location":"api/gnngraph/#Graphs.neighbors-Tuple{GNNGraph, Integer}","page":"GNNGraph","title":"Graphs.neighbors","text":"neighbors(g::GNNGraph, i::Integer; dir=:out)\n\nReturn the neighbors of node i in the graph g. If dir=:out, return the neighbors through outgoing edges. If dir=:in, return the neighbors through incoming edges.\n\nSee also outneighbors, inneighbors.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#Transform","page":"GNNGraph","title":"Transform","text":"","category":"section"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"Modules = [GNNGraphs]\nPages = [\"transform.jl\"]\nPrivate = false","category":"page"},{"location":"api/gnngraph/#GNNGraphs.add_edges-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}, AbstractVector, AbstractVector}","page":"GNNGraph","title":"GNNGraphs.add_edges","text":"add_edges(g::GNNGraph, s::AbstractVector, t::AbstractVector; [edata])\nadd_edges(g::GNNGraph, (s, t); [edata])\nadd_edges(g::GNNGraph, (s, t, w); [edata])\n\nAdd to graph g the edges with source nodes s and target nodes t. Optionally, pass the edge weight w and the features edata for the new edges. 
Returns a new graph sharing part of the underlying data with g.\n\nIf the s or t contain nodes that are not already present in the graph, they are added to the graph as well.\n\nExamples\n\njulia> s, t = [1, 2, 3, 3, 4], [2, 3, 4, 4, 4];\n\njulia> w = Float32[1.0, 2.0, 3.0, 4.0, 5.0];\n\njulia> g = GNNGraph((s, t, w))\nGNNGraph:\n num_nodes: 4\n num_edges: 5\n\njulia> add_edges(g, ([2, 3], [4, 1], [10.0, 20.0]))\nGNNGraph:\n num_nodes: 4\n num_edges: 7\n\njulia> g = GNNGraph()\nGNNGraph:\n num_nodes: 0\n num_edges: 0\n\njulia> add_edges(g, [1,2], [2,3])\nGNNGraph:\n num_nodes: 3\n num_edges: 2\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.add_edges-Tuple{GNNHeteroGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}, Tuple{Symbol, Symbol, Symbol}, AbstractVector, AbstractVector}","page":"GNNGraph","title":"GNNGraphs.add_edges","text":"add_edges(g::GNNHeteroGraph, edge_t, s, t; [edata, num_nodes])\nadd_edges(g::GNNHeteroGraph, edge_t => (s, t); [edata, num_nodes])\nadd_edges(g::GNNHeteroGraph, edge_t => (s, t, w); [edata, num_nodes])\n\nAdd to heterograph g edges of type edge_t with source node vector s and target node vector t. Optionally, pass the edge weights w or the features edata for the new edges. edge_t is a triplet of symbols (src_t, rel_t, dst_t). \n\nIf the edge type is not already present in the graph, it is added. If it involves new node types, they are added to the graph as well. In this case, a dictionary or named tuple of num_nodes can be passed to specify the number of nodes of the new types, otherwise the number of nodes is inferred from the maximum node id in s and t.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.add_nodes-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}, Integer}","page":"GNNGraph","title":"GNNGraphs.add_nodes","text":"add_nodes(g::GNNGraph, n; [ndata])\n\nAdd n new nodes to graph g. In the new graph, these nodes will have indexes from g.num_nodes + 1 to g.num_nodes + n.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.add_self_loops-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}}","page":"GNNGraph","title":"GNNGraphs.add_self_loops","text":"add_self_loops(g::GNNGraph)\n\nReturn a graph with the same features as g but also adding edges connecting the nodes to themselves.\n\nNodes with already existing self-loops will obtain a second self-loop.\n\nIf the graphs has edge weights, the new edges will have weight 1.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.add_self_loops-Tuple{GNNHeteroGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}, Tuple{Symbol, Symbol, Symbol}}","page":"GNNGraph","title":"GNNGraphs.add_self_loops","text":"add_self_loops(g::GNNHeteroGraph, edge_t::EType)\nadd_self_loops(g::GNNHeteroGraph)\n\nIf the source node type is the same as the destination node type in edge_t, return a graph with the same features as g but also add self-loops of the specified type, edge_t. 
Otherwise, it returns g unchanged.\n\nNodes with already existing self-loops of type edge_t will obtain a second set of self-loops of the same type.\n\nIf the graph has edge weights for edges of type edge_t, the new edges will have weight 1.\n\nIf no edges of type edge_t exist, or all existing edges have no weight, then all new self loops will have no weight.\n\nIf edge_t is not passed as argument, for the entire graph self-loop is added to each node for every edge type in the graph where the source and destination node types are the same. This iterates over all edge types present in the graph, applying the self-loop addition logic to each applicable edge type.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.getgraph-Tuple{GNNGraph, Int64}","page":"GNNGraph","title":"GNNGraphs.getgraph","text":"getgraph(g::GNNGraph, i; nmap=false)\n\nReturn the subgraph of g induced by those nodes j for which g.graph_indicator[j] == i or, if i is a collection, g.graph_indicator[j] ∈ i. In other words, it extract the component graphs from a batched graph. \n\nIf nmap=true, return also a vector v mapping the new nodes to the old ones. The node i in the subgraph will correspond to the node v[i] in g.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.negative_sample-Tuple{GNNGraph}","page":"GNNGraph","title":"GNNGraphs.negative_sample","text":"negative_sample(g::GNNGraph; \n num_neg_edges = g.num_edges, \n bidirected = is_bidirected(g))\n\nReturn a graph containing random negative edges (i.e. non-edges) from graph g as edges.\n\nIf bidirected=true, the output graph will be bidirected and there will be no leakage from the origin graph. \n\nSee also is_bidirected.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.perturb_edges-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}, AbstractFloat}","page":"GNNGraph","title":"GNNGraphs.perturb_edges","text":"perturb_edges([rng], g::GNNGraph, perturb_ratio)\n\nReturn a new graph obtained from g by adding random edges, based on a specified perturb_ratio. The perturb_ratio determines the fraction of new edges to add relative to the current number of edges in the graph. These new edges are added without creating self-loops. \n\nThe function returns a new GNNGraph instance that shares some of the underlying data with g but includes the additional edges. The nodes for the new edges are selected randomly, and no edge data (edata) or weights (w) are assigned to these new edges.\n\nArguments\n\ng::GNNGraph: The graph to be perturbed.\nperturb_ratio: The ratio of the number of new edges to add relative to the current number of edges in the graph. 
For example, a perturb_ratio of 0.1 means that 10% of the current number of edges will be added as new random edges.\nrng: An optional random number generator to ensure reproducible results.\n\nExamples\n\njulia> s, t, w = [1, 2, 3, 3, 4], [2, 3, 4, 4, 4], [1.0, 2.0, 3.0, 4.0, 5.0];\n\njulia> g = GNNGraph((s, t, w))\nGNNGraph:\n num_nodes: 4\n num_edges: 5\n\njulia> perturbed_g = perturb_edges(g, 0.2)\nGNNGraph:\n num_nodes: 4\n num_edges: 6\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.ppr_diffusion-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}}","page":"GNNGraph","title":"GNNGraphs.ppr_diffusion","text":"ppr_diffusion(g::GNNGraph{<:COO_T}, alpha = 0.85f0) -> GNNGraph\n\nCalculates the Personalized PageRank (PPR) diffusion based on the edge weight matrix of a GNNGraph and updates the graph with new edge weights derived from the PPR matrix. References paper: The pagerank citation ranking: Bringing order to the web\n\nThe function performs the following steps:\n\nConstructs a modified adjacency matrix A using the graph's edge weights, where A is adjusted by (α - 1) * A + I, with α being the damping factor (alpha_f32) and I the identity matrix.\nNormalizes A to ensure each column sums to 1, representing transition probabilities.\nApplies the PPR formula α * (I + (α - 1) * A)^-1 to compute the diffusion matrix.\nUpdates the original edge weights of the graph based on the PPR diffusion matrix, assigning new weights for each edge from the PPR matrix.\n\nArguments\n\ng::GNNGraph: The input graph for which PPR diffusion is to be calculated. It should have edge weights available.\nalpha_f32::Float32: The damping factor used in PPR calculation, controlling the teleport probability in the random walk. Defaults to 0.85f0.\n\nReturns\n\nA new GNNGraph instance with the same structure as g but with updated edge weights according to the PPR diffusion calculation.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.rand_edge_split-Tuple{GNNGraph, Any}","page":"GNNGraph","title":"GNNGraphs.rand_edge_split","text":"rand_edge_split(g::GNNGraph, frac; bidirected=is_bidirected(g)) -> g1, g2\n\nRandomly partition the edges in g to form two graphs, g1 and g2. Both will have the same number of nodes as g. g1 will contain a fraction frac of the original edges, while g2 will contain the rest.\n\nIf bidirected = true, an edge and its reverse are guaranteed to go into the same split. This option is supported only for bidirected graphs with no self-loops and multi-edges.\n\nrand_edge_split is typically used to create train/test splits in link prediction tasks.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.random_walk_pe-Tuple{GNNGraph, Int64}","page":"GNNGraph","title":"GNNGraphs.random_walk_pe","text":"random_walk_pe(g, walk_length)\n\nReturn the random walk positional encoding from the paper Graph Neural Networks with Learnable Structural and Positional Representations of the given graph g and the length of the walk walk_length as a matrix of size (walk_length, g.num_nodes). 
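\n\nA minimal usage sketch (illustrative only, not from the original docstring; the rand_graph call just builds an example graph and the encoding values depend on it):\n\njulia> g = rand_graph(6, 10);\n\njulia> pe = random_walk_pe(g, 3);\n\njulia> size(pe)  # (walk_length, g.num_nodes)\n(3, 6)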
\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.remove_edges-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}, AbstractVector{<:Integer}}","page":"GNNGraph","title":"GNNGraphs.remove_edges","text":"remove_edges(g::GNNGraph, edges_to_remove::AbstractVector{<:Integer})\nremove_edges(g::GNNGraph, p=0.5)\n\nRemove specified edges from a GNNGraph, either by specifying edge indices or by randomly removing edges with a given probability.\n\nArguments\n\ng: The input graph from which edges will be removed.\nedges_to_remove: Vector of edge indices to be removed. This argument is only required for the first method.\np: Probability of removing each edge. This argument is only required for the second method and defaults to 0.5.\n\nReturns\n\nA new GNNGraph with the specified edges removed.\n\nExample\n\njulia> using GraphNeuralNetworks\n\n# Construct a GNNGraph\njulia> g = GNNGraph([1, 1, 2, 2, 3], [2, 3, 1, 3, 1])\nGNNGraph:\n num_nodes: 3\n num_edges: 5\n \n# Remove the second edge\njulia> g_new = remove_edges(g, [2]);\n\njulia> g_new\nGNNGraph:\n num_nodes: 3\n num_edges: 4\n\n# Remove edges with a probability of 0.5\njulia> g_new = remove_edges(g, 0.5);\n\njulia> g_new\nGNNGraph:\n num_nodes: 3\n num_edges: 2\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.remove_multi_edges-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}}","page":"GNNGraph","title":"GNNGraphs.remove_multi_edges","text":"remove_multi_edges(g::GNNGraph; aggr=+)\n\nRemove multiple edges (also called parallel edges or repeated edges) from graph g. Possible edge features are aggregated according to aggr, that can take value +,min, max or mean.\n\nSee also remove_self_loops, has_multi_edges, and to_bidirected.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.remove_nodes-Tuple{GNNGraph, AbstractFloat}","page":"GNNGraph","title":"GNNGraphs.remove_nodes","text":"remove_nodes(g::GNNGraph, p)\n\nReturns a new graph obtained by dropping nodes from g with independent probabilities p. \n\nExamples\n\njulia> g = GNNGraph([1, 1, 2, 2, 3, 4], [1, 2, 3, 1, 3, 1])\nGNNGraph:\n num_nodes: 4\n num_edges: 6\n\njulia> g_new = remove_nodes(g, 0.5)\nGNNGraph:\n num_nodes: 2\n num_edges: 2\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.remove_nodes-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}, AbstractVector}","page":"GNNGraph","title":"GNNGraphs.remove_nodes","text":"remove_nodes(g::GNNGraph, nodes_to_remove::AbstractVector)\n\nRemove specified nodes, and their associated edges, from a GNNGraph. This operation reindexes the remaining nodes to maintain a continuous sequence of node indices, starting from 1. Similarly, edges are reindexed to account for the removal of edges connected to the removed nodes.\n\nArguments\n\ng: The input graph from which nodes (and their edges) will be removed.\nnodes_to_remove: Vector of node indices to be removed.\n\nReturns\n\nA new GNNGraph with the specified nodes and all edges associated with these nodes removed. 
\n\nExample\n\nusing GraphNeuralNetworks\n\ng = GNNGraph([1, 1, 2, 2, 3], [2, 3, 1, 3, 1])\n\n# Remove nodes with indices 2 and 3, for example\ng_new = remove_nodes(g, [2, 3])\n\n# g_new now does not contain nodes 2 and 3, and any edges that were connected to these nodes.\nprintln(g_new)\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.remove_self_loops-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}}","page":"GNNGraph","title":"GNNGraphs.remove_self_loops","text":"remove_self_loops(g::GNNGraph)\n\nReturn a graph constructed from g where self-loops (edges from a node to itself) are removed. \n\nSee also add_self_loops and remove_multi_edges.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.set_edge_weight-Tuple{GNNGraph, AbstractVector}","page":"GNNGraph","title":"GNNGraphs.set_edge_weight","text":"set_edge_weight(g::GNNGraph, w::AbstractVector)\n\nSet w as edge weights in the returned graph. \n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.to_bidirected-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}}","page":"GNNGraph","title":"GNNGraphs.to_bidirected","text":"to_bidirected(g)\n\nAdds a reverse edge for each edge in the graph, then calls remove_multi_edges with mean aggregation to simplify the graph. \n\nSee also is_bidirected. \n\nExamples\n\njulia> s, t = [1, 2, 3, 3, 4], [2, 3, 4, 4, 4];\n\njulia> w = [1.0, 2.0, 3.0, 4.0, 5.0];\n\njulia> e = [10.0, 20.0, 30.0, 40.0, 50.0];\n\njulia> g = GNNGraph(s, t, w, edata = e)\nGNNGraph:\n num_nodes = 4\n num_edges = 5\n edata:\n e => (5,)\n\njulia> g2 = to_bidirected(g)\nGNNGraph:\n num_nodes = 4\n num_edges = 7\n edata:\n e => (7,)\n\njulia> edge_index(g2)\n([1, 2, 2, 3, 3, 4, 4], [2, 1, 3, 2, 4, 3, 4])\n\njulia> get_edge_weight(g2)\n7-element Vector{Float64}:\n 1.0\n 1.0\n 2.0\n 2.0\n 3.5\n 3.5\n 5.0\n\njulia> g2.edata.e\n7-element Vector{Float64}:\n 10.0\n 10.0\n 20.0\n 20.0\n 35.0\n 35.0\n 50.0\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.to_unidirected-Tuple{GNNGraph{<:Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}}}}","page":"GNNGraph","title":"GNNGraphs.to_unidirected","text":"to_unidirected(g::GNNGraph)\n\nReturn a graph that for each multiple edge between two nodes in g keeps only an edge in one direction.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#MLUtils.batch-Tuple{AbstractVector{<:GNNGraph}}","page":"GNNGraph","title":"MLUtils.batch","text":"batch(gs::Vector{<:GNNGraph})\n\nBatch together multiple GNNGraphs into a single one containing the total number of original nodes and edges.\n\nEquivalent to SparseArrays.blockdiag. 
See also MLUtils.unbatch.\n\nExamples\n\njulia> g1 = rand_graph(4, 6, ndata=ones(8, 4))\nGNNGraph:\n num_nodes = 4\n num_edges = 6\n ndata:\n x => (8, 4)\n\njulia> g2 = rand_graph(7, 4, ndata=zeros(8, 7))\nGNNGraph:\n num_nodes = 7\n num_edges = 4\n ndata:\n x => (8, 7)\n\njulia> g12 = MLUtils.batch([g1, g2])\nGNNGraph:\n num_nodes = 11\n num_edges = 10\n num_graphs = 2\n ndata:\n x => (8, 11)\n\njulia> g12.ndata.x\n8×11 Matrix{Float64}:\n 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#MLUtils.unbatch-Union{Tuple{GNNGraph{T}}, Tuple{T}} where T<:(Tuple{T, T, V} where {T<:(AbstractVector{<:Integer}), V<:Union{Nothing, AbstractVector}})","page":"GNNGraph","title":"MLUtils.unbatch","text":"unbatch(g::GNNGraph)\n\nOpposite of the MLUtils.batch operation, returns an array of the individual graphs batched together in g.\n\nSee also MLUtils.batch and getgraph.\n\nExamples\n\njulia> gbatched = MLUtils.batch([rand_graph(5, 6), rand_graph(10, 8), rand_graph(4,2)])\nGNNGraph:\n num_nodes = 19\n num_edges = 16\n num_graphs = 3\n\njulia> MLUtils.unbatch(gbatched)\n3-element Vector{GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}}:\n GNNGraph:\n num_nodes = 5\n num_edges = 6\n\n GNNGraph:\n num_nodes = 10\n num_edges = 8\n\n GNNGraph:\n num_nodes = 4\n num_edges = 2\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#SparseArrays.blockdiag-Tuple{GNNGraph, Vararg{GNNGraph}}","page":"GNNGraph","title":"SparseArrays.blockdiag","text":"blockdiag(xs::GNNGraph...)\n\nEquivalent to MLUtils.batch.\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#Utils","page":"GNNGraph","title":"Utils","text":"","category":"section"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"GNNGraphs.sort_edge_index\nGNNGraphs.color_refinement","category":"page"},{"location":"api/gnngraph/#GNNGraphs.sort_edge_index","page":"GNNGraph","title":"GNNGraphs.sort_edge_index","text":"sort_edge_index(ei::Tuple) -> u', v'\nsort_edge_index(u, v) -> u', v'\n\nReturn a sorted version of the tuple of vectors ei = (u, v), applying a common permutation to u and v. The sorting is lexicographic, that is, the pairs (ui, vi) are sorted first according to ui and then according to vi. \n\n\n\n\n\n","category":"function"},{"location":"api/gnngraph/#GNNGraphs.color_refinement","page":"GNNGraph","title":"GNNGraphs.color_refinement","text":"color_refinement(g::GNNGraph, [x0]) -> x, num_colors, niters\n\nThe color refinement algorithm for graph coloring. Given a graph g and an initial coloring x0, the algorithm iteratively refines the coloring until a fixed point is reached.\n\nAt each iteration the algorithm computes a hash of the coloring and the sorted list of colors of the neighbors of each node. This hash is used to determine if the coloring has changed.\n\nx_i' = hashmap((x_i, sort([x_j for j in N(i)])))\n\nThis algorithm is related to the 1-Weisfeiler-Lehman algorithm for graph isomorphism testing.\n\nArguments\n\ng::GNNGraph: The graph to color.\nx0::AbstractVector{<:Integer}: The initial coloring. 
If not provided, all nodes are colored with 1.\n\nReturns\n\nx::AbstractVector{<:Integer}: The final coloring.\nnum_colors::Int: The number of colors used.\nniters::Int: The number of iterations until convergence.\n\n\n\n\n\n","category":"function"},{"location":"api/gnngraph/#Generate","page":"GNNGraph","title":"Generate","text":"","category":"section"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"Modules = [GNNGraphs]\nPages = [\"generate.jl\"]\nPrivate = false\nFilter = t -> typeof(t) <: Function && t!=rand_temporal_radius_graph && t!=rand_temporal_hyperbolic_graph\n","category":"page"},{"location":"api/gnngraph/#GNNGraphs.knn_graph-Tuple{AbstractMatrix, Int64}","page":"GNNGraph","title":"GNNGraphs.knn_graph","text":"knn_graph(points::AbstractMatrix, \n k::Int; \n graph_indicator = nothing,\n self_loops = false, \n dir = :in, \n kws...)\n\nCreate a k-nearest neighbor graph where each node is linked to its k closest points. \n\nArguments\n\npoints: A numfeatures × numnodes matrix storing the Euclidean positions of the nodes.\nk: The number of neighbors considered in the kNN algorithm.\ngraph_indicator: Either nothing or a vector containing the graph assignment of each node, in which case the returned graph will be a batch of graphs. \nself_loops: If true, consider the node itself among its k nearest neighbors, in which case the graph will contain self-loops. \ndir: The direction of the edges. If dir=:in edges go from the k neighbors to the central node. If dir=:out we have the opposite direction.\nkws: Further keyword arguments will be passed to the GNNGraph constructor.\n\nExamples\n\njulia> n, k = 10, 3;\n\njulia> x = rand(Float32, 3, n);\n\njulia> g = knn_graph(x, k)\nGNNGraph:\n num_nodes = 10\n num_edges = 30\n\njulia> graph_indicator = [1,1,1,1,1,2,2,2,2,2];\n\njulia> g = knn_graph(x, k; graph_indicator)\nGNNGraph:\n num_nodes = 10\n num_edges = 30\n num_graphs = 2\n\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.radius_graph-Tuple{AbstractMatrix, AbstractFloat}","page":"GNNGraph","title":"GNNGraphs.radius_graph","text":"radius_graph(points::AbstractMatrix, \n r::AbstractFloat; \n graph_indicator = nothing,\n self_loops = false, \n dir = :in, \n kws...)\n\nCreate a graph where each node is linked to its neighbors within a given distance r. \n\nArguments\n\npoints: A numfeatures × numnodes matrix storing the Euclidean positions of the nodes.\nr: The radius.\ngraph_indicator: Either nothing or a vector containing the graph assignment of each node, in which case the returned graph will be a batch of graphs. \nself_loops: If true, consider the node itself among its neighbors, in which case the graph will contain self-loops. \ndir: The direction of the edges. If dir=:in edges go from the neighbors to the central node. 
If dir=:out we have the opposite direction.\nkws: Further keyword arguments will be passed to the GNNGraph constructor.\n\nExamples\n\njulia> n, r = 10, 0.75;\n\njulia> x = rand(Float32, 3, n);\n\njulia> g = radius_graph(x, r)\nGNNGraph:\n num_nodes = 10\n num_edges = 46\n\njulia> graph_indicator = [1,1,1,1,1,2,2,2,2,2];\n\njulia> g = radius_graph(x, r; graph_indicator)\nGNNGraph:\n num_nodes = 10\n num_edges = 20\n num_graphs = 2\n\n\nReferences\n\nSection B paragraphs 1 and 2 of the paper Dynamic Hidden-Variable Network Models\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.rand_bipartite_heterograph-Tuple{Any, Any}","page":"GNNGraph","title":"GNNGraphs.rand_bipartite_heterograph","text":"rand_bipartite_heterograph([rng,] \n (n1, n2), (m12, m21); \n bidirected = true, \n node_t = (:A, :B), \n edge_t = :to, \n kws...)\n\nConstruct an GNNHeteroGraph with random edges representing a bipartite graph. The graph will have two types of nodes, and edges will only connect nodes of different types.\n\nThe first argument is a tuple (n1, n2) specifying the number of nodes of each type. The second argument is a tuple (m12, m21) specifying the number of edges connecting nodes of type 1 to nodes of type 2 and vice versa.\n\nThe type of nodes and edges can be specified with the node_t and edge_t keyword arguments, which default to (:A, :B) and :to respectively.\n\nIf bidirected=true (default), the reverse edge of each edge will be present. In this case m12 == m21 is required.\n\nA random number generator can be passed as the first argument to make the generation reproducible.\n\nAdditional keyword arguments will be passed to the GNNHeteroGraph constructor.\n\nSee rand_heterograph for a more general version.\n\nExamples\n\njulia> g = rand_bipartite_heterograph((10, 15), 20)\nGNNHeteroGraph:\n num_nodes: (:A => 10, :B => 15)\n num_edges: ((:A, :to, :B) => 20, (:B, :to, :A) => 20)\n\njulia> g = rand_bipartite_heterograph((10, 15), (20, 0), node_t=(:user, :item), edge_t=:-, bidirected=false)\nGNNHeteroGraph:\n num_nodes: Dict(:item => 15, :user => 10)\n num_edges: Dict((:item, :-, :user) => 0, (:user, :-, :item) => 20)\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.rand_graph-Tuple{Integer, Integer}","page":"GNNGraph","title":"GNNGraphs.rand_graph","text":"rand_graph([rng,] n, m; bidirected=true, edge_weight = nothing, kws...)\n\nGenerate a random (Erdós-Renyi) GNNGraph with n nodes and m edges.\n\nIf bidirected=true the reverse edge of each edge will be present. If bidirected=false instead, m unrelated edges are generated. In any case, the output graph will contain no self-loops or multi-edges.\n\nA vector can be passed as edge_weight. 
Its length has to be equal to m in the directed case, and m÷2 in the bidirected one.\n\nPass a random number generator as the first argument to make the generation reproducible.\n\nAdditional keyword arguments will be passed to the GNNGraph constructor.\n\nExamples\n\njulia> g = rand_graph(5, 4, bidirected=false)\nGNNGraph:\n num_nodes = 5\n num_edges = 4\n\njulia> edge_index(g)\n([1, 3, 3, 4], [5, 4, 5, 2])\n\n# In the bidirected case, edge data will be duplicated on the reverse edges if needed.\njulia> g = rand_graph(5, 4, edata=rand(Float32, 16, 2))\nGNNGraph:\n num_nodes = 5\n num_edges = 4\n edata:\n e => (16, 4)\n\n# Each edge has a reverse\njulia> edge_index(g)\n([1, 3, 3, 4], [3, 4, 1, 3])\n\n\n\n\n\n","category":"method"},{"location":"api/gnngraph/#GNNGraphs.rand_heterograph","page":"GNNGraph","title":"GNNGraphs.rand_heterograph","text":"rand_heterograph([rng,] n, m; bidirected=false, kws...)\n\nConstruct a GNNHeteroGraph with random edges and with number of nodes and edges specified by n and m respectively. n and m can be any iterable of pairs specifying node/edge types and their numbers.\n\nPass a random number generator as a first argument to make the generation reproducible.\n\nSetting bidirected=true will generate a bidirected graph, i.e. each edge will have a reverse edge. Therefore, for each edge type (:A, :rel, :B) a corresponding reverse edge type (:B, :rel, :A) will be generated.\n\nAdditional keyword arguments will be passed to the GNNHeteroGraph constructor.\n\nExamples\n\njulia> g = rand_heterograph((:user => 10, :movie => 20),\n (:user, :rate, :movie) => 30)\nGNNHeteroGraph:\n num_nodes: (:user => 10, :movie => 20) \n num_edges: ((:user, :rate, :movie) => 30,)\n\n\n\n\n\n","category":"function"},{"location":"api/gnngraph/#Operators","page":"GNNGraph","title":"Operators","text":"","category":"section"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"Modules = [GNNGraphs]\nPages = [\"operators.jl\"]\nPrivate = false","category":"page"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"Base.intersect","category":"page"},{"location":"api/gnngraph/#Base.intersect","page":"GNNGraph","title":"Base.intersect","text":"intersect(g1::GNNGraph, g2::GNNGraph)\n\nIntersect two graphs by keeping only the common edges.\n\n\n\n\n\n","category":"function"},{"location":"api/gnngraph/#Sampling","page":"GNNGraph","title":"Sampling","text":"","category":"section"},{"location":"api/gnngraph/","page":"GNNGraph","title":"GNNGraph","text":"Modules = [GNNGraphs]\nPages = [\"sampling.jl\"]\nPrivate = false","category":"page"},{"location":"api/gnngraph/#GNNGraphs.sample_neighbors","page":"GNNGraph","title":"GNNGraphs.sample_neighbors","text":"sample_neighbors(g, nodes, K=-1; dir=:in, replace=false, dropnodes=false)\n\nSample neighboring edges of the given nodes and return the induced subgraph. For each node, a number of inbound (or outbound when dir = :out) edges will be randomly chosen. If dropnodes=false, the graph returned will then contain all the nodes in the original graph, but only the sampled edges.\n\nThe returned graph will contain an edge feature EID corresponding to the id of the edge in the original graph. If dropnodes=true, it will also contain a node feature NID with the node ids in the original graph.\n\nArguments\n\ng. The graph.\nnodes. A list of node IDs to sample neighbors from.\nK. The maximum number of edges to be sampled for each node. If -1, all the neighboring edges will be selected.\ndir. 
Determines whether to sample inbound (:in) or outbound (`:out) edges (Default :in).\nreplace. If true, sample with replacement.\ndropnodes. If true, the resulting subgraph will contain only the nodes involved in the sampled edges.\n\nExamples\n\njulia> g = rand_graph(20, 100)\nGNNGraph:\n num_nodes = 20\n num_edges = 100\n\njulia> sample_neighbors(g, 2:3)\nGNNGraph:\n num_nodes = 20\n num_edges = 9\n edata:\n EID => (9,)\n\njulia> sg = sample_neighbors(g, 2:3, dropnodes=true)\nGNNGraph:\n num_nodes = 10\n num_edges = 9\n ndata:\n NID => (10,)\n edata:\n EID => (9,)\n\njulia> sg.ndata.NID\n10-element Vector{Int64}:\n 2\n 3\n 17\n 14\n 18\n 15\n 16\n 20\n 7\n 10\n\njulia> sample_neighbors(g, 2:3, 5, replace=true)\nGNNGraph:\n num_nodes = 20\n num_edges = 10\n edata:\n EID => (10,)\n\n\n\n\n\n","category":"function"},{"location":"heterograph/#Heterogeneous-Graphs","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"","category":"section"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"Heterogeneous graphs (also called heterographs), are graphs where each node has a type, that we denote with symbols such as :user and :movie. Relations such as :rate or :like can connect nodes of different types. We call a triplet (source_node_type, relation_type, target_node_type) the type of a edge, e.g. (:user, :rate, :movie).","category":"page"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"Different node/edge types can store different groups of features and this makes heterographs a very flexible modeling tools and data containers. In GraphNeuralNetworks.jl heterographs are implemented in the type GNNHeteroGraph.","category":"page"},{"location":"heterograph/#Creating-a-Heterograph","page":"Heterogeneous Graphs","title":"Creating a Heterograph","text":"","category":"section"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"A heterograph can be created empty or by passing pairs edge_type => data to the constructor.","category":"page"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"julia> g = GNNHeteroGraph()\nGNNHeteroGraph:\n num_nodes: Dict()\n num_edges: Dict()\n \njulia> g = GNNHeteroGraph((:user, :like, :actor) => ([1,2,2,3], [1,3,2,9]),\n (:user, :rate, :movie) => ([1,1,2,3], [7,13,5,7]))\nGNNHeteroGraph:\n num_nodes: Dict(:actor => 9, :movie => 13, :user => 3)\n num_edges: Dict((:user, :like, :actor) => 4, (:user, :rate, :movie) => 4)\n\njulia> g = GNNHeteroGraph((:user, :rate, :movie) => ([1,1,2,3], [7,13,5,7]))\nGNNHeteroGraph:\n num_nodes: Dict(:movie => 13, :user => 3)\n num_edges: Dict((:user, :rate, :movie) => 4)","category":"page"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"New relations, possibly with new node types, can be added with the function add_edges.","category":"page"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"julia> g = add_edges(g, (:user, :like, :actor) => ([1,2,3,3,3], [3,5,1,9,4]))\nGNNHeteroGraph:\n num_nodes: Dict(:actor => 9, :movie => 13, :user => 3)\n num_edges: Dict((:user, :like, :actor) => 5, (:user, :rate, :movie) => 4)","category":"page"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"See rand_heterograph, rand_bipartite_heterograph for generating random heterographs. 
","category":"page"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"julia> g = rand_bipartite_heterograph((10, 15), 20)\nGNNHeteroGraph:\n num_nodes: Dict(:A => 10, :B => 15)\n num_edges: Dict((:A, :to, :B) => 20, (:B, :to, :A) => 20)","category":"page"},{"location":"heterograph/#Basic-Queries","page":"Heterogeneous Graphs","title":"Basic Queries","text":"","category":"section"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"Basic queries are similar to those for homogeneous graphs:","category":"page"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"julia> g = GNNHeteroGraph((:user, :rate, :movie) => ([1,1,2,3], [7,13,5,7]))\nGNNHeteroGraph:\n num_nodes: Dict(:movie => 13, :user => 3)\n num_edges: Dict((:user, :rate, :movie) => 4)\n\njulia> g.num_nodes\nDict{Symbol, Int64} with 2 entries:\n :user => 3\n :movie => 13\n\njulia> g.num_edges\nDict{Tuple{Symbol, Symbol, Symbol}, Int64} with 1 entry:\n (:user, :rate, :movie) => 4\n\n# source and target node for a given relation\njulia> edge_index(g, (:user, :rate, :movie))\n([1, 1, 2, 3], [7, 13, 5, 7])\n\n# node types\njulia> g.ntypes\n2-element Vector{Symbol}:\n :user\n :movie\n\n# edge types\njulia> g.etypes\n1-element Vector{Tuple{Symbol, Symbol, Symbol}}:\n (:user, :rate, :movie)","category":"page"},{"location":"heterograph/#Data-Features","page":"Heterogeneous Graphs","title":"Data Features","text":"","category":"section"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"Node, edge, and graph features can be added at construction time or later using:","category":"page"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"# equivalent to g.ndata[:user][:x] = ...\njulia> g[:user].x = rand(Float32, 64, 3);\n\njulia> g[:movie].z = rand(Float32, 64, 13);\n\n# equivalent to g.edata[(:user, :rate, :movie)][:e] = ...\njulia> g[:user, :rate, :movie].e = rand(Float32, 64, 4);\n\njulia> g\nGNNHeteroGraph:\n num_nodes: Dict(:movie => 13, :user => 3)\n num_edges: Dict((:user, :rate, :movie) => 4)\n ndata:\n :movie => DataStore(z = [64×13 Matrix{Float32}])\n :user => DataStore(x = [64×3 Matrix{Float32}])\n edata:\n (:user, :rate, :movie) => DataStore(e = [64×4 Matrix{Float32}])","category":"page"},{"location":"heterograph/#Batching","page":"Heterogeneous Graphs","title":"Batching","text":"","category":"section"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"Similarly to graphs, also heterographs can be batched together.","category":"page"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"julia> gs = [rand_bipartite_heterograph((5, 10), 20) for _ in 1:32];\n\njulia> Flux.batch(gs)\nGNNHeteroGraph:\n num_nodes: Dict(:A => 160, :B => 320)\n num_edges: Dict((:A, :to, :B) => 640, (:B, :to, :A) => 640)\n num_graphs: 32","category":"page"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"Batching is automatically performed by the DataLoader iterator when the collate option is set to true.","category":"page"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"using Flux: DataLoader\n\ndata = [rand_bipartite_heterograph((5, 10), 20, \n ndata=Dict(:A=>rand(Float32, 3, 5))) \n for _ in 1:320];\n\ntrain_loader = DataLoader(data, batchsize=16, 
shuffle=true, collate=true)\n\nfor g in train_loader\n @assert g.num_graphs == 16\n @assert g.num_nodes[:A] == 80\n @assert size(g.ndata[:A].x) == (3, 80) \n # ...\nend","category":"page"},{"location":"heterograph/#Graph-convolutions-on-heterographs","page":"Heterogeneous Graphs","title":"Graph convolutions on heterographs","text":"","category":"section"},{"location":"heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"See HeteroGraphConv for how to perform convolutions on heterogeneous graphs.","category":"page"},{"location":"datasets/#Datasets","page":"Datasets","title":"Datasets","text":"","category":"section"},{"location":"datasets/","page":"Datasets","title":"Datasets","text":"GraphNeuralNetworks.jl doesn't come with its own datasets, but leverages those available in the Julia (and non-Julia) ecosystem. In particular, the examples in the GraphNeuralNetworks.jl repository make use of the MLDatasets.jl package. There you will find common graph datasets such as Cora, PubMed, Citeseer, TUDataset and many others.","category":"page"},{"location":"datasets/","page":"Datasets","title":"Datasets","text":"GraphNeuralNetworks.jl provides the mldataset2gnngraph method for interfacing with MLDatasets.jl.","category":"page"},{"location":"datasets/","page":"Datasets","title":"Datasets","text":"mldataset2gnngraph","category":"page"},{"location":"datasets/#GNNGraphs.mldataset2gnngraph","page":"Datasets","title":"GNNGraphs.mldataset2gnngraph","text":"mldataset2gnngraph(dataset)\n\nConvert a graph dataset from the package MLDatasets.jl into one or many GNNGraphs.\n\nExamples\n\njulia> using MLDatasets, GraphNeuralNetworks\n\njulia> mldataset2gnngraph(Cora())\nGNNGraph:\n num_nodes = 2708\n num_edges = 10556\n ndata:\n features => 1433×2708 Matrix{Float32}\n targets => 2708-element Vector{Int64}\n train_mask => 2708-element BitVector\n val_mask => 2708-element BitVector\n test_mask => 2708-element BitVector\n\n\n\n\n\n","category":"function"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"EditURL = \"/home/runner/work/GraphNeuralNetworks.jl/GraphNeuralNetworks.jl/GraphNeuralNetworks/docs/tutorials/index.md\"","category":"page"},{"location":"tutorials/#tutorials","page":"Tutorials","title":"Tutorials","text":"","category":"section"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"","category":"page"},{"location":"tutorials/#Introductory-tutorials","page":"Tutorials","title":"Introductory tutorials","text":"","category":"section"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
\n
\n
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"A beginner level introduction to graph machine learning using GraphNeuralNetworks.jl","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"(Image: card-cover-image)","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
\n
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"Hands-on introduction to Graph Neural Networks","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
\n
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
\n
\n
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"Tutorial for Graph Classification using GraphNeuralNetworks.jl","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"(Image: card-cover-image)","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
\n
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"Graph Classification with Graph Neural Networks","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
\n
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
\n
\n
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"Tutorial for Node classification using GraphNeuralNetworks.jl","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"(Image: card-cover-image)","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
\n
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"Node Classification with Graph Neural Networks","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
\n
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"
","category":"page"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"","category":"page"},{"location":"tutorials/#Contributions","page":"Tutorials","title":"Contributions","text":"","category":"section"},{"location":"tutorials/","page":"Tutorials","title":"Tutorials","text":"If you have a suggestion on adding new tutorials, feel free to create a new issue here. Users are invited to contribute demonstrations of their own. If you want to contribute new tutorials and looking for inspiration, checkout these tutorials from PyTorch Geometric. You are expected to use Pluto.jl notebooks with DemoCards.jl. Please check out existing tutorials for more details.","category":"page"},{"location":"dev/#Developer-Notes","page":"Developer Notes","title":"Developer Notes","text":"","category":"section"},{"location":"dev/#Develop-and-Managing-the-Monorepo","page":"Developer Notes","title":"Develop and Managing the Monorepo","text":"","category":"section"},{"location":"dev/#Development-Enviroment","page":"Developer Notes","title":"Development Enviroment","text":"","category":"section"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"GraphNeuralNetworks.jl is package hosted in a monorepo that contains multiple packages. The GraphNeuralNetworks.jl package depends on GNNGraphs.jl, also hosted in the same monorepo.","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"pkg> activate .\n\npkg> dev ./GNNGraphs","category":"page"},{"location":"dev/#Add-a-New-Layer","page":"Developer Notes","title":"Add a New Layer","text":"","category":"section"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"To add a new graph convolutional layer and make it available in both the Flux-based frontend (GraphNeuralNetworks.jl) and the Lux-based frontend (GNNLux), you need to:","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"Add the functional version to GNNlib\nAdd the stateful version to GraphNeuralNetworks\nAdd the stateless version to GNNLux\nAdd the layer to the table in docs/api/conv.md","category":"page"},{"location":"dev/#Versions-and-Tagging","page":"Developer Notes","title":"Versions and Tagging","text":"","category":"section"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"Each PR should update the version number in the Porject.toml file of each involved package if needed by semnatic versioning. For instance, when adding new features GNNGraphs could move from \"1.17.5\" to \"1.18.0-DEV\". The \"DEV\" will be removed when the package is tagged and released. Pay also attention to updating the compat bounds, e.g. 
GraphNeuralNetworks might require a newer version of GNNGraphs.","category":"page"},{"location":"dev/#Generate-Documentation-Locally","page":"Developer Notes","title":"Generate Documentation Locally","text":"","category":"section"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"For generating the documentation locally","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"cd docs\njulia","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"(@v1.10) pkg> activate .\n Activating project at `~/.julia/dev/GraphNeuralNetworks/docs`\n\n(docs) pkg> dev ../ ../GNNGraphs/\n Resolving package versions...\n No Changes to `~/.julia/dev/GraphNeuralNetworks/docs/Project.toml`\n No Changes to `~/.julia/dev/GraphNeuralNetworks/docs/Manifest.toml`\n\njulia> include(\"make.jl\")","category":"page"},{"location":"dev/#Benchmarking","page":"Developer Notes","title":"Benchmarking","text":"","category":"section"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"You can benchmark the effect on performance of your commits using the script perf/perf.jl.","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"First, checkout and benchmark the master branch:","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"julia> include(\"perf.jl\")\n\njulia> df = run_benchmarks()\n\n# observe results\njulia> for g in groupby(df, :layer); println(g, \"\\n\"); end\n\njulia> @save \"perf_master_20210803_mymachine.jld2\" dfmaster=df","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"Now checkout your branch and do the same:","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"julia> df = run_benchmarks()\n\njulia> @save \"perf_pr_20210803_mymachine.jld2\" dfpr=df","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"Finally, compare the results:","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"julia> @load \"perf_master_20210803_mymachine.jld2\"\n\njulia> @load \"perf_pr_20210803_mymachine.jld2\"\n\njulia> compare(dfpr, dfmaster)","category":"page"},{"location":"dev/#Caching-tutorials","page":"Developer Notes","title":"Caching tutorials","text":"","category":"section"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"Tutorials in GraphNeuralNetworks.jl are written in Pluto and rendered using DemoCards.jl and PlutoStaticHTML.jl. Rendering a Pluto notebook is time and resource-consuming, especially in a CI environment. So we use the caching functionality provided by PlutoStaticHTML.jl to reduce CI time.","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"If you are contributing a new tutorial or making changes to the existing notebook, generate the docs locally before committing/pushing. For caching to work, the cache environment(your local) and the documenter CI should have the same Julia version (e.g. \"v1.9.1\", also the patch number must match). 
So use the documenter CI Julia version for generating docs locally.","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"julia --version # check julia version before generating docs\njulia --project=docs docs/make.jl","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"Note: Use juliaup for easy switching of Julia versions.","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"During the doc generation process, DemoCards.jl stores the cache notebooks in docs/pluto_output. So add any changes made in this folder in your git commit. Remember that every file in this folder is machine-generated and should not be edited manually.","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"git add docs/pluto_output # add generated cache","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"Check the documenter CI logs to ensure that it used the local cache:","category":"page"},{"location":"dev/","page":"Developer Notes","title":"Developer Notes","text":"(Image: )","category":"page"},{"location":"api/utils/","page":"Utils","title":"Utils","text":"CurrentModule = GraphNeuralNetworks","category":"page"},{"location":"api/utils/#Utility-Functions","page":"Utils","title":"Utility Functions","text":"","category":"section"},{"location":"api/utils/#Index","page":"Utils","title":"Index","text":"","category":"section"},{"location":"api/utils/","page":"Utils","title":"Utils","text":"Order = [:type, :function]\nPages = [\"utils.md\"]","category":"page"},{"location":"api/utils/#Docs","page":"Utils","title":"Docs","text":"","category":"section"},{"location":"api/utils/#Graph-wise-operations","page":"Utils","title":"Graph-wise operations","text":"","category":"section"},{"location":"api/utils/","page":"Utils","title":"Utils","text":"GraphNeuralNetworks.reduce_nodes\nGraphNeuralNetworks.reduce_edges\nGraphNeuralNetworks.softmax_nodes\nGraphNeuralNetworks.softmax_edges\nGraphNeuralNetworks.broadcast_nodes\nGraphNeuralNetworks.broadcast_edges","category":"page"},{"location":"api/utils/#GNNlib.reduce_nodes","page":"Utils","title":"GNNlib.reduce_nodes","text":"reduce_nodes(aggr, g, x)\n\nFor a batched graph g, return the graph-wise aggregation of the node features x. The aggregation operator aggr can be +, mean, max, or min. The returned array will have last dimension g.num_graphs.\n\nSee also: reduce_edges.\n\n\n\n\n\nreduce_nodes(aggr, indicator::AbstractVector, x)\n\nReturn the graph-wise aggregation of the node features x given the graph indicator indicator. The aggregation operator aggr can be +, mean, max, or min.\n\nSee also graph_indicator.\n\n\n\n\n\n","category":"function"},{"location":"api/utils/#GNNlib.reduce_edges","page":"Utils","title":"GNNlib.reduce_edges","text":"reduce_edges(aggr, g, e)\n\nFor a batched graph g, return the graph-wise aggregation of the edge features e. The aggregation operator aggr can be +, mean, max, or min. 
The returned array will have last dimension g.num_graphs.\n\n\n\n\n\n","category":"function"},{"location":"api/utils/#GNNlib.softmax_nodes","page":"Utils","title":"GNNlib.softmax_nodes","text":"softmax_nodes(g, x)\n\nGraph-wise softmax of the node features x.\n\n\n\n\n\n","category":"function"},{"location":"api/utils/#GNNlib.softmax_edges","page":"Utils","title":"GNNlib.softmax_edges","text":"softmax_edges(g, e)\n\nGraph-wise softmax of the edge features e.\n\n\n\n\n\n","category":"function"},{"location":"api/utils/#GNNlib.broadcast_nodes","page":"Utils","title":"GNNlib.broadcast_nodes","text":"broadcast_nodes(g, x)\n\nGraph-wise broadcast array x of size (*, g.num_graphs) to size (*, g.num_nodes).\n\n\n\n\n\n","category":"function"},{"location":"api/utils/#GNNlib.broadcast_edges","page":"Utils","title":"GNNlib.broadcast_edges","text":"broadcast_edges(g, x)\n\nGraph-wise broadcast array x of size (*, g.num_graphs) to size (*, g.num_edges).\n\n\n\n\n\n","category":"function"},{"location":"api/utils/#Neighborhood-operations","page":"Utils","title":"Neighborhood operations","text":"","category":"section"},{"location":"api/utils/","page":"Utils","title":"Utils","text":"GraphNeuralNetworks.softmax_edge_neighbors","category":"page"},{"location":"api/utils/#GNNlib.softmax_edge_neighbors","page":"Utils","title":"GNNlib.softmax_edge_neighbors","text":"softmax_edge_neighbors(g, e)\n\nSoftmax over each node's neighborhood of the edge features e.\n\nmathbfe_jto i = frace^mathbfe_jto i\n sum_jin N(i) e^mathbfe_jto i\n\n\n\n\n\n","category":"function"},{"location":"api/utils/#NNlib","page":"Utils","title":"NNlib","text":"","category":"section"},{"location":"api/utils/","page":"Utils","title":"Utils","text":"Primitive functions implemented in NNlib.jl:","category":"page"},{"location":"api/utils/","page":"Utils","title":"Utils","text":"gather!\ngather\nscatter!\nscatter","category":"page"},{"location":"gnngraph/#Working-with-GNNGraph","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"","category":"section"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"The fundamental graph type in GraphNeuralNetworks.jl is the GNNGraph. A GNNGraph g is a directed graph with nodes labeled from 1 to g.num_nodes. The underlying implementation allows for efficient application of graph neural network operators, gpu movement, and storage of node/edge/graph related feature arrays.","category":"page"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"GNNGraph inherits from Graphs.jl's AbstractGraph, therefore it supports most functionality from that library. 
","category":"page"},{"location":"gnngraph/#Graph-Creation","page":"Working with GNNGraph","title":"Graph Creation","text":"","category":"section"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"A GNNGraph can be created from several different data sources encoding the graph topology:","category":"page"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"using GraphNeuralNetworks, Graphs, SparseArrays\n\n\n# Construct a GNNGraph from from a Graphs.jl's graph\nlg = erdos_renyi(10, 30)\ng = GNNGraph(lg)\n\n# Same as above using convenience method rand_graph\ng = rand_graph(10, 60)\n\n# From an adjacency matrix\nA = sprand(10, 10, 0.3)\ng = GNNGraph(A)\n\n# From an adjacency list\nadjlist = [[2,3], [1,3], [1,2,4], [3]]\ng = GNNGraph(adjlist)\n\n# From COO representation\nsource = [1,1,2,2,3,3,3,4]\ntarget = [2,3,1,3,1,2,4,3]\ng = GNNGraph(source, target)","category":"page"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"See also the related methods Graphs.adjacency_matrix, edge_index, and adjacency_list.","category":"page"},{"location":"gnngraph/#Basic-Queries","page":"Working with GNNGraph","title":"Basic Queries","text":"","category":"section"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"julia> source = [1,1,2,2,3,3,3,4];\n\njulia> target = [2,3,1,3,1,2,4,3];\n\njulia> g = GNNGraph(source, target)\nGNNGraph:\n num_nodes: 4\n num_edges: 8\n\n\njulia> @assert g.num_nodes == 4 # number of nodes\n\njulia> @assert g.num_edges == 8 # number of edges\n\njulia> @assert g.num_graphs == 1 # number of subgraphs (a GNNGraph can batch many graphs together)\n\njulia> is_directed(g) # a GNNGraph is always directed\ntrue\n\njulia> is_bidirected(g) # for each edge, also the reverse edge is present\ntrue\n\njulia> has_self_loops(g)\nfalse\n\njulia> has_multi_edges(g) \nfalse","category":"page"},{"location":"gnngraph/#Data-Features","page":"Working with GNNGraph","title":"Data Features","text":"","category":"section"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"One or more arrays can be associated to nodes, edges, and (sub)graphs of a GNNGraph. They will be stored in the fields g.ndata, g.edata, and g.gdata respectively.","category":"page"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"The data fields are DataStore objects. DataStores conveniently offer an interface similar to both dictionaries and named tuples. 
Similarly to dictionaries, DataStores support addition of new features after creation time.","category":"page"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"The array contained in the datastores have last dimension equal to num_nodes (in ndata), num_edges (in edata), or num_graphs (in gdata) respectively.","category":"page"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"# Create a graph with a single feature array `x` associated to nodes\ng = rand_graph(10, 60, ndata = (; x = rand(Float32, 32, 10)))\n\ng.ndata.x # access the features\n\n# Equivalent definition passing directly the array\ng = rand_graph(10, 60, ndata = rand(Float32, 32, 10))\n\ng.ndata.x # `:x` is the default name for node features\n\ng.ndata.z = rand(Float32, 3, 10) # add new feature array `z`\n\n# For convenience, we can access the features through the shortcut\ng.x \n\n# You can have multiple feature arrays\ng = rand_graph(10, 60, ndata = (; x=rand(Float32, 32, 10), y=rand(Float32, 10)))\n\ng.ndata.y, g.ndata.x # or g.x, g.y\n\n# Attach an array with edge features.\n# Since `GNNGraph`s are directed, the number of edges\n# will be double that of the original Graphs' undirected graph.\ng = GNNGraph(erdos_renyi(10, 30), edata = rand(Float32, 60))\n@assert g.num_edges == 60\n\ng.edata.e # or g.e\n\n# If we pass only half of the edge features, they will be copied\n# on the reversed edges.\ng = GNNGraph(erdos_renyi(10, 30), edata = rand(Float32, 30))\n\n\n# Create a new graph from previous one, inheriting edge data\n# but replacing node data\ng′ = GNNGraph(g, ndata =(; z = ones(Float32, 16, 10)))\n\ng′.z\ng′.e","category":"page"},{"location":"gnngraph/#Edge-weights","page":"Working with GNNGraph","title":"Edge weights","text":"","category":"section"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"It is common to denote scalar edge features as edge weights. The GNNGraph has specific support for edge weights: they can be stored as part of internal representations of the graph (COO or adjacency matrix). 
Some graph convolutional layers, most notably the GCNConv, can use the edge weights to perform weighted sums over the nodes' neighborhoods.","category":"page"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"julia> source = [1, 1, 2, 2, 3, 3];\n\njulia> target = [2, 3, 1, 3, 1, 2];\n\njulia> weight = [1.0, 0.5, 2.1, 2.3, 4, 4.1];\n\njulia> g = GNNGraph(source, target, weight)\nGNNGraph:\n num_nodes: 3\n num_edges: 6\n\njulia> get_edge_weight(g)\n6-element Vector{Float64}:\n 1.0\n 0.5\n 2.1\n 2.3\n 4.0\n 4.1","category":"page"},{"location":"gnngraph/#Batches-and-Subgraphs","page":"Working with GNNGraph","title":"Batches and Subgraphs","text":"","category":"section"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"Multiple GNNGraphs can be batched together into a single graph that contains the total number of the original nodes and where the original graphs are disjoint subgraphs.","category":"page"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"using Flux\nusing Flux: DataLoader\n\ndata = [rand_graph(10, 30, ndata=rand(Float32, 3, 10)) for _ in 1:160]\ngall = Flux.batch(data)\n\n# gall is a GNNGraph containing many graphs\n@assert gall.num_graphs == 160 \n@assert gall.num_nodes == 1600 # 10 nodes x 160 graphs\n@assert gall.num_edges == 4800 # 30 undirected edges x 160 graphs\n\n# Let's create a mini-batch from gall\ng23 = getgraph(gall, 2:3)\n@assert g23.num_graphs == 2\n@assert g23.num_nodes == 20 # 10 nodes x 2 graphs\n@assert g23.num_edges == 60 # 30 undirected edges X 2 graphs\n\n# We can pass a GNNGraph to Flux's DataLoader\ntrain_loader = DataLoader(gall, batchsize=16, shuffle=true)\n\nfor g in train_loader\n @assert g.num_graphs == 16\n @assert g.num_nodes == 160\n @assert size(g.ndata.x) = (3, 160) \n # .....\nend\n\n# Access the nodes' graph memberships \ngraph_indicator(gall)","category":"page"},{"location":"gnngraph/#DataLoader-and-mini-batch-iteration","page":"Working with GNNGraph","title":"DataLoader and mini-batch iteration","text":"","category":"section"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"While constructing a batched graph and passing it to the DataLoader is always an option for mini-batch iteration, the recommended way for better performance is to pass an array of graphs directly and set the collate option to true:","category":"page"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"using Flux: DataLoader\n\ndata = [rand_graph(10, 30, ndata=rand(Float32, 3, 10)) for _ in 1:320]\n\ntrain_loader = DataLoader(data, batchsize=16, shuffle=true, collate=true)\n\nfor g in train_loader\n @assert g.num_graphs == 16\n @assert g.num_nodes == 160\n @assert size(g.ndata.x) = (3, 160) \n # .....\nend","category":"page"},{"location":"gnngraph/#Graph-Manipulation","page":"Working with GNNGraph","title":"Graph Manipulation","text":"","category":"section"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"g′ = add_self_loops(g)\ng′ = remove_self_loops(g)\ng′ = add_edges(g, [1, 2], [2, 3]) # add edges 1->2 and 2->3","category":"page"},{"location":"gnngraph/#GPU-movement","page":"Working with GNNGraph","title":"GPU movement","text":"","category":"section"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"Move a GNNGraph to a CUDA device using Flux.gpu method. 
","category":"page"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"using CUDA, Flux\n\ng_gpu = g |> Flux.gpu","category":"page"},{"location":"gnngraph/#Integration-with-Graphs.jl","page":"Working with GNNGraph","title":"Integration with Graphs.jl","text":"","category":"section"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"Since GNNGraph <: Graphs.AbstractGraph, we can use any functionality from Graphs.jl for querying and analyzing the graph structure. Moreover, a GNNGraph can be easily constructed from a Graphs.Graph or a Graphs.DiGraph:","category":"page"},{"location":"gnngraph/","page":"Working with GNNGraph","title":"Working with GNNGraph","text":"julia> import Graphs\n\njulia> using GraphNeuralNetworks\n\n# A Graphs.jl undirected graph\njulia> gu = Graphs.erdos_renyi(10, 20) \n{10, 20} undirected simple Int64 graph\n\n# Since GNNGraphs are undirected, the edges are doubled when converting \n# to GNNGraph\njulia> GNNGraph(gu)\nGNNGraph:\n num_nodes: 10\n num_edges: 40\n\n# A Graphs.jl directed graph\njulia> gd = Graphs.erdos_renyi(10, 20, is_directed=true)\n{10, 20} directed simple Int64 graph\n\njulia> GNNGraph(gd)\nGNNGraph:\n num_nodes: 10\n num_edges: 20","category":"page"},{"location":"gsoc/#Graph-Neural-Networks-Summer-of-Code","page":"Summer Of Code","title":"Graph Neural Networks - Summer of Code","text":"","category":"section"},{"location":"gsoc/","page":"Summer Of Code","title":"Summer Of Code","text":"Potential candidates to Google Summer of Code's scholarships can find out about the available projects involving GraphNeuralNetworks.jl on the dedicated page in the Julia Language website.","category":"page"},{"location":"models/#Models","page":"Model Building","title":"Models","text":"","category":"section"},{"location":"models/","page":"Model Building","title":"Model Building","text":"GraphNeuralNetworks.jl provides common graph convolutional layers by which you can assemble arbitrarily deep or complex models. GNN layers are compatible with Flux.jl ones, therefore expert Flux users are promptly able to define and train their models. ","category":"page"},{"location":"models/","page":"Model Building","title":"Model Building","text":"In what follows, we discuss two different styles for model creation: the explicit modeling style, more verbose but more flexible, and the implicit modeling style based on GNNChain, more concise but less flexible.","category":"page"},{"location":"models/#Explicit-modeling","page":"Model Building","title":"Explicit modeling","text":"","category":"section"},{"location":"models/","page":"Model Building","title":"Model Building","text":"In the explicit modeling style, the model is created according to the following steps:","category":"page"},{"location":"models/","page":"Model Building","title":"Model Building","text":"Define a new type for your model (GNN in the example below). Layers and submodels are fields.\nApply Flux.@layer to the new type to make it Flux's compatible (parameters' collection, gpu movement, etc...)\nOptionally define a convenience constructor for your model.\nDefine the forward pass by implementing the call method for your type.\nInstantiate the model. 
","category":"page"},{"location":"models/","page":"Model Building","title":"Model Building","text":"Here is an example of this construction:","category":"page"},{"location":"models/","page":"Model Building","title":"Model Building","text":"using Flux, Graphs, GraphNeuralNetworks\n\nstruct GNN # step 1\n conv1\n bn\n conv2\n dropout\n dense\nend\n\nFlux.@layer GNN # step 2\n\nfunction GNN(din::Int, d::Int, dout::Int) # step 3 \n GNN(GCNConv(din => d),\n BatchNorm(d),\n GraphConv(d => d, relu),\n Dropout(0.5),\n Dense(d, dout))\nend\n\nfunction (model::GNN)(g::GNNGraph, x) # step 4\n x = model.conv1(g, x)\n x = relu.(model.bn(x))\n x = model.conv2(g, x)\n x = model.dropout(x)\n x = model.dense(x)\n return x \nend\n\ndin, d, dout = 3, 4, 2 \nmodel = GNN(din, d, dout) # step 5\n\ng = rand_graph(10, 30)\nX = randn(Float32, din, 10) \n\ny = model(g, X) # output size: (dout, g.num_nodes)\ngrad = gradient(model -> sum(model(g, X)), model)","category":"page"},{"location":"models/#Implicit-modeling-with-GNNChains","page":"Model Building","title":"Implicit modeling with GNNChains","text":"","category":"section"},{"location":"models/","page":"Model Building","title":"Model Building","text":"While very flexible, the way in which we defined GNN model definition in last section is a bit verbose. In order to simplify things, we provide the GNNChain type. It is very similar to Flux's well known Chain. It allows to compose layers in a sequential fashion as Chain does, propagating the output of each layer to the next one. In addition, GNNChain handles propagates the input graph as well, providing it as a first argument to layers subtyping the GNNLayer abstract type. ","category":"page"},{"location":"models/","page":"Model Building","title":"Model Building","text":"Using GNNChain, the previous example becomes","category":"page"},{"location":"models/","page":"Model Building","title":"Model Building","text":"using Flux, Graphs, GraphNeuralNetworks\n\ndin, d, dout = 3, 4, 2 \ng = rand_graph(10, 30)\nX = randn(Float32, din, 10)\n\nmodel = GNNChain(GCNConv(din => d),\n BatchNorm(d),\n x -> relu.(x),\n GCNConv(d => d, relu),\n Dropout(0.5),\n Dense(d, dout))","category":"page"},{"location":"models/","page":"Model Building","title":"Model Building","text":"The GNNChain only propagates the graph and the node features. More complex scenarios, e.g. when also edge features are updated, have to be handled using the explicit definition of the forward pass. ","category":"page"},{"location":"models/","page":"Model Building","title":"Model Building","text":"A GNNChain opportunely propagates the graph into the branches created by the Flux.Parallel layer:","category":"page"},{"location":"models/","page":"Model Building","title":"Model Building","text":"AddResidual(l) = Parallel(+, identity, l) # implementing a skip/residual connection\n\nmodel = GNNChain( ResGatedGraphConv(din => d, relu),\n AddResidual(ResGatedGraphConv(d => d, relu)),\n AddResidual(ResGatedGraphConv(d => d, relu)),\n AddResidual(ResGatedGraphConv(d => d, relu)),\n GlobalPooling(mean),\n Dense(d, dout))\n\ny = model(g, X) # output size: (dout, g.num_graphs)","category":"page"},{"location":"models/#Embedding-a-graph-in-the-model","page":"Model Building","title":"Embedding a graph in the model","text":"","category":"section"},{"location":"models/","page":"Model Building","title":"Model Building","text":"Sometimes it is useful to consider a specific graph as a part of a model instead of its input. 
GraphNeuralNetworks.jl provides the WithGraph type to deal with this scenario.","category":"page"},{"location":"models/","page":"Model Building","title":"Model Building","text":"chain = GNNChain(GCNConv(din => d, relu),\n GCNConv(d => d))\n\n\ng = rand_graph(10, 30)\n\nmodel = WithGraph(chain, g)\n\nX = randn(Float32, din, 10)\n\n# Pass only X as input, the model already contains the graph.\ny = model(X) ","category":"page"},{"location":"models/","page":"Model Building","title":"Model Building","text":"An example of WithGraph usage is given in the graph neural ODE script in the examples folder.","category":"page"},{"location":"api/pool/","page":"Pooling Layers","title":"Pooling Layers","text":"CurrentModule = GraphNeuralNetworks","category":"page"},{"location":"api/pool/#Pooling-Layers","page":"Pooling Layers","title":"Pooling Layers","text":"","category":"section"},{"location":"api/pool/#Index","page":"Pooling Layers","title":"Index","text":"","category":"section"},{"location":"api/pool/","page":"Pooling Layers","title":"Pooling Layers","text":"Order = [:type, :function]\nPages = [\"pool.md\"]","category":"page"},{"location":"api/pool/#Docs","page":"Pooling Layers","title":"Docs","text":"","category":"section"},{"location":"api/pool/","page":"Pooling Layers","title":"Pooling Layers","text":"Modules = [GraphNeuralNetworks]\nPages = [\"layers/pool.jl\"]\nPrivate = false","category":"page"},{"location":"api/pool/#GraphNeuralNetworks.GlobalAttentionPool","page":"Pooling Layers","title":"GraphNeuralNetworks.GlobalAttentionPool","text":"GlobalAttentionPool(fgate, ffeat=identity)\n\nGlobal soft attention layer from the Gated Graph Sequence Neural Networks paper\n\nmathbfu_V = sum_iin V alpha_i f_feat(mathbfx_i)\n\nwhere the coefficients alpha_i are given by a softmax_nodes operation:\n\nalpha_i = frace^f_gate(mathbfx_i)\n sum_iin V e^f_gate(mathbfx_i)\n\nArguments\n\nfgate: The function f_gate mathbbR^D_in to mathbbR. It is tipically expressed by a neural network.\nffeat: The function f_feat mathbbR^D_in to mathbbR^D_out. It is tipically expressed by a neural network.\n\nExamples\n\nchin = 6\nchout = 5 \n\nfgate = Dense(chin, 1)\nffeat = Dense(chin, chout)\npool = GlobalAttentionPool(fgate, ffeat)\n\ng = Flux.batch([GNNGraph(random_regular_graph(10, 4), \n ndata=rand(Float32, chin, 10)) \n for i=1:3])\n\nu = pool(g, g.ndata.x)\n\n@assert size(u) == (chout, g.num_graphs)\n\n\n\n\n\n","category":"type"},{"location":"api/pool/#GraphNeuralNetworks.GlobalPool","page":"Pooling Layers","title":"GraphNeuralNetworks.GlobalPool","text":"GlobalPool(aggr)\n\nGlobal pooling layer for graph neural networks. Takes a graph and feature nodes as inputs and performs the operation\n\nmathbfu_V = square_i in V mathbfx_i\n\nwhere V is the set of nodes of the input graph and the type of aggregation represented by square is selected by the aggr argument. 
Commonly used aggregations are mean, max, and +.\n\nSee also reduce_nodes.\n\nExamples\n\nusing Flux, GraphNeuralNetworks, Graphs\n\npool = GlobalPool(mean)\n\ng = GNNGraph(erdos_renyi(10, 4))\nX = rand(32, 10)\npool(g, X) # => 32x1 matrix\n\n\ng = Flux.batch([GNNGraph(erdos_renyi(10, 4)) for _ in 1:5])\nX = rand(32, 50)\npool(g, X) # => 32x5 matrix\n\n\n\n\n\n","category":"type"},{"location":"api/pool/#GraphNeuralNetworks.Set2Set","page":"Pooling Layers","title":"GraphNeuralNetworks.Set2Set","text":"Set2Set(n_in, n_iters, n_layers = 1)\n\nSet2Set layer from the paper Order Matters: Sequence to sequence for sets.\n\nFor each graph in the batch, the layer computes an output vector of size 2*n_in by iterating the following steps n_iters times:\n\nmathbfq = mathrmLSTM(mathbfq_t-1^*)\nalpha_i = fracexp(mathbfq^T mathbfx_i)sum_j=1^N exp(mathbfq^T mathbfx_j) \nmathbfr = sum_i=1^N alpha_i mathbfx_i\nmathbfq^*_t = mathbfq mathbfr\n\nwhere N is the number of nodes in the graph, LSTM is a Long-Short-Term-Memory network with n_layers layers, input size 2*n_in and output size n_in.\n\nGiven a batch of graphs g and node features x, the layer returns a matrix of size (2*n_in, n_graphs). ```\n\n\n\n\n\n","category":"type"},{"location":"api/pool/#GraphNeuralNetworks.TopKPool","page":"Pooling Layers","title":"GraphNeuralNetworks.TopKPool","text":"TopKPool(adj, k, in_channel)\n\nTop-k pooling layer.\n\nArguments\n\nadj: Adjacency matrix of a graph.\nk: Top-k nodes are selected to pool together.\nin_channel: The dimension of input channel.\n\n\n\n\n\n","category":"type"},{"location":"messagepassing/#Message-Passing","page":"Message Passing","title":"Message Passing","text":"","category":"section"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"A generic message passing on graph takes the form","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"beginaligned\nmathbfm_jto i = phi(mathbfx_i mathbfx_j mathbfe_jto i) \nbarmathbfm_i = square_jin N(i) mathbfm_jto i \nmathbfx_i = gamma_x(mathbfx_i barmathbfm_i)\nmathbfe_jto i^prime = gamma_e(mathbfe_j to imathbfm_j to i)\nendaligned","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"where we refer to phi as to the message function, and to gamma_x and gamma_e as to the node update and edge update function respectively. The aggregation square is over the neighborhood N(i) of node i, and it is usually equal either to sum, to max or to a mean operation. ","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"In GraphNeuralNetworks.jl, the message passing mechanism is exposed by the propagate function. propagate takes care of materializing the node features on each edge, applying the message function, performing the aggregation, and returning barmathbfm. It is then left to the user to perform further node and edge updates, manipulating arrays of size D_node times num_nodes and D_edge times num_edges.","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"propagate is composed of two steps, also available as two independent methods:","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"apply_edges materializes node features on edges and applies the message function. 
\naggregate_neighbors applies a reduction operator on the messages coming from the neighborhood of each node.","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"The whole propagation mechanism internally relies on the NNlib.gather and NNlib.scatter methods.","category":"page"},{"location":"messagepassing/#Examples","page":"Message Passing","title":"Examples","text":"","category":"section"},{"location":"messagepassing/#Basic-use-of-apply_edges-and-propagate","page":"Message Passing","title":"Basic use of apply_edges and propagate","text":"","category":"section"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"The function apply_edges can be used to broadcast node data on each edge and produce new edge data.","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"julia> using GraphNeuralNetworks, Graphs, Statistics\n\njulia> g = rand_graph(10, 20)\nGNNGraph:\n num_nodes = 10\n num_edges = 20\n\njulia> x = ones(2,10);\n\njulia> z = 2ones(2,10);\n\n# Return an edge features arrays (D × num_edges)\njulia> apply_edges((xi, xj, e) -> xi .+ xj, g, xi=x, xj=z)\n2×20 Matrix{Float64}:\n 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0\n 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0 3.0\n\n# now returning a named tuple\njulia> apply_edges((xi, xj, e) -> (a=xi .+ xj, b=xi .- xj), g, xi=x, xj=z)\n(a = [3.0 3.0 … 3.0 3.0; 3.0 3.0 … 3.0 3.0], b = [-1.0 -1.0 … -1.0 -1.0; -1.0 -1.0 … -1.0 -1.0])\n\n# Here we provide a named tuple input\njulia> apply_edges((xi, xj, e) -> xi.a + xi.b .* xj, g, xi=(a=x,b=z), xj=z)\n2×20 Matrix{Float64}:\n 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0\n 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0 5.0","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"The function propagate instead performs the apply_edges operation but then also applies a reduction over each node's neighborhood (see aggregate_neighbors).","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"julia> propagate((xi, xj, e) -> xi .+ xj, g, +, xi=x, xj=z)\n2×10 Matrix{Float64}:\n 3.0 6.0 9.0 9.0 0.0 6.0 6.0 3.0 15.0 3.0\n 3.0 6.0 9.0 9.0 0.0 6.0 6.0 3.0 15.0 3.0\n\n# Previous output can be understood by looking at the degree\njulia> degree(g)\n10-element Vector{Int64}:\n 1\n 2\n 3\n 3\n 0\n 2\n 2\n 1\n 5\n 1","category":"page"},{"location":"messagepassing/#Implementing-a-custom-Graph-Convolutional-Layer","page":"Message Passing","title":"Implementing a custom Graph Convolutional Layer","text":"","category":"section"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"Let's implement a simple graph convolutional layer using the message passing framework. 
The convolution reads ","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"mathbfx_i = W cdot sum_j in N(i) mathbfx_j","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"We will also add a bias and an activation function.","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"using Flux, Graphs, GraphNeuralNetworks\n\nstruct GCN{A<:AbstractMatrix, B, F} <: GNNLayer\n weight::A\n bias::B\n σ::F\nend\n\nFlux.@layer GCN # allow gpu movement, select trainable params etc...\n\nfunction GCN(ch::Pair{Int,Int}, σ=identity)\n in, out = ch\n W = Flux.glorot_uniform(out, in)\n b = zeros(Float32, out)\n GCN(W, b, σ)\nend\n\nfunction (l::GCN)(g::GNNGraph, x::AbstractMatrix{T}) where T\n @assert size(x, 2) == g.num_nodes\n\n # Computes messages from source/neighbour nodes (j) to target/root nodes (i).\n # The message function will have to handle matrices of size (*, num_edges).\n # In this simple case we just let the neighbor features go through.\n message(xi, xj, e) = xj \n\n # The + operator gives the sum aggregation.\n # `mean`, `max`, `min`, and `*` are other possibilities.\n x = propagate(message, g, +, xj=x) \n\n return l.σ.(l.weight * x .+ l.bias)\nend","category":"page"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"See the GATConv implementation here for a more complex example.","category":"page"},{"location":"messagepassing/#Built-in-message-functions","page":"Message Passing","title":"Built-in message functions","text":"","category":"section"},{"location":"messagepassing/","page":"Message Passing","title":"Message Passing","text":"In order to exploit optimized specializations of the propagate, it is recommended to use built-in message functions such as copy_xj whenever possible. ","category":"page"},{"location":"api/basic/","page":"Basic Layers","title":"Basic Layers","text":"CurrentModule = GraphNeuralNetworks","category":"page"},{"location":"api/basic/#Basic-Layers","page":"Basic Layers","title":"Basic Layers","text":"","category":"section"},{"location":"api/basic/#Index","page":"Basic Layers","title":"Index","text":"","category":"section"},{"location":"api/basic/","page":"Basic Layers","title":"Basic Layers","text":"Modules = [GraphNeuralNetworks]\nPages = [\"basic.md\"]","category":"page"},{"location":"api/basic/#Docs","page":"Basic Layers","title":"Docs","text":"","category":"section"},{"location":"api/basic/","page":"Basic Layers","title":"Basic Layers","text":"Modules = [GraphNeuralNetworks]\nPages = [\"layers/basic.jl\"]\nPrivate = false","category":"page"},{"location":"api/basic/#GraphNeuralNetworks.DotDecoder","page":"Basic Layers","title":"GraphNeuralNetworks.DotDecoder","text":"DotDecoder()\n\nA graph neural network layer that for given input graph g and node features x, returns the dot product x_i ⋅ xj on each edge. \n\nExamples\n\njulia> g = rand_graph(5, 6)\nGNNGraph:\n num_nodes = 5\n num_edges = 6\n\njulia> dotdec = DotDecoder()\nDotDecoder()\n\njulia> dotdec(g, rand(2, 5))\n1×6 Matrix{Float64}:\n 0.345098 0.458305 0.106353 0.345098 0.458305 0.106353\n\n\n\n\n\n","category":"type"},{"location":"api/basic/#GraphNeuralNetworks.GNNChain","page":"Basic Layers","title":"GraphNeuralNetworks.GNNChain","text":"GNNChain(layers...)\nGNNChain(name = layer, ...)\n\nCollects multiple layers / functions to be called in sequence on given input graph and input node features. 
\n\nIt allows to compose layers in a sequential fashion as Flux.Chain does, propagating the output of each layer to the next one. In addition, GNNChain handles the input graph as well, providing it as a first argument only to layers subtyping the GNNLayer abstract type. \n\nGNNChain supports indexing and slicing, m[2] or m[1:end-1], and if names are given, m[:name] == m[1] etc.\n\nExamples\n\njulia> using Flux, GraphNeuralNetworks\n\njulia> m = GNNChain(GCNConv(2=>5), \n BatchNorm(5), \n x -> relu.(x), \n Dense(5, 4))\nGNNChain(GCNConv(2 => 5), BatchNorm(5), #7, Dense(5 => 4))\n\njulia> x = randn(Float32, 2, 3);\n\njulia> g = rand_graph(3, 6)\nGNNGraph:\n num_nodes = 3\n num_edges = 6\n\njulia> m(g, x)\n4×3 Matrix{Float32}:\n -0.795592 -0.795592 -0.795592\n -0.736409 -0.736409 -0.736409\n 0.994925 0.994925 0.994925\n 0.857549 0.857549 0.857549\n\njulia> m2 = GNNChain(enc = m, \n dec = DotDecoder())\nGNNChain(enc = GNNChain(GCNConv(2 => 5), BatchNorm(5), #7, Dense(5 => 4)), dec = DotDecoder())\n\njulia> m2(g, x)\n1×6 Matrix{Float32}:\n 2.90053 2.90053 2.90053 2.90053 2.90053 2.90053\n\njulia> m2[:enc](g, x) == m(g, x)\ntrue\n\n\n\n\n\n","category":"type"},{"location":"api/basic/#GraphNeuralNetworks.GNNLayer","page":"Basic Layers","title":"GraphNeuralNetworks.GNNLayer","text":"abstract type GNNLayer end\n\nAn abstract type from which graph neural network layers are derived.\n\nSee also GNNChain.\n\n\n\n\n\n","category":"type"},{"location":"api/basic/#GraphNeuralNetworks.WithGraph","page":"Basic Layers","title":"GraphNeuralNetworks.WithGraph","text":"WithGraph(model, g::GNNGraph; traingraph=false)\n\nA type wrapping the model and tying it to the graph g. In the forward pass, can only take feature arrays as inputs, returning model(g, x...; kws...).\n\nIf traingraph=false, the graph's parameters won't be part of the trainable parameters in the gradient updates.\n\nExamples\n\ng = GNNGraph([1,2,3], [2,3,1])\nx = rand(Float32, 2, 3)\nmodel = SAGEConv(2 => 3)\nwg = WithGraph(model, g)\n# No need to feed the graph to `wg`\n@assert wg(x) == model(g, x)\n\ng2 = GNNGraph([1,1,2,3], [2,4,1,1])\nx2 = rand(Float32, 2, 4)\n# WithGraph will ignore the internal graph if fed with a new one. \n@assert wg(g2, x2) == model(g2, x2)\n\n\n\n\n\n","category":"type"},{"location":"api/temporalgraph/#Temporal-Graphs","page":"Temporal Graphs","title":"Temporal Graphs","text":"","category":"section"},{"location":"api/temporalgraph/#TemporalSnapshotsGNNGraph","page":"Temporal Graphs","title":"TemporalSnapshotsGNNGraph","text":"","category":"section"},{"location":"api/temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"Documentation page for the graph type TemporalSnapshotsGNNGraph and related methods, representing time varying graphs with time varying features.","category":"page"},{"location":"api/temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"Modules = [GNNGraphs]\nPages = [\"temporalsnapshotsgnngraph.jl\"]\nPrivate = false","category":"page"},{"location":"api/temporalgraph/#GNNGraphs.TemporalSnapshotsGNNGraph","page":"Temporal Graphs","title":"GNNGraphs.TemporalSnapshotsGNNGraph","text":"TemporalSnapshotsGNNGraph(snapshots::AbstractVector{<:GNNGraph})\n\nA type representing a temporal graph as a sequence of snapshots. In this case a snapshot is a GNNGraph.\n\nTemporalSnapshotsGNNGraph can store the feature array associated to the graph itself as a DataStore object, and it uses the DataStore objects of each snapshot for the node and edge features. 
The features can be passed at construction time or added later.\n\nConstructor Arguments\n\nsnapshot: a vector of snapshots, where each snapshot must have the same number of nodes.\n\nExamples\n\njulia> using GraphNeuralNetworks\n\njulia> snapshots = [rand_graph(10,20) for i in 1:5];\n\njulia> tg = TemporalSnapshotsGNNGraph(snapshots)\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10, 10, 10]\n num_edges: [20, 20, 20, 20, 20]\n num_snapshots: 5\n\njulia> tg.tgdata.x = rand(4); # add temporal graph feature\n\njulia> tg # show temporal graph with new feature\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10, 10, 10]\n num_edges: [20, 20, 20, 20, 20]\n num_snapshots: 5\n tgdata:\n x = 4-element Vector{Float64}\n\n\n\n\n\n","category":"type"},{"location":"api/temporalgraph/#GNNGraphs.add_snapshot-Tuple{TemporalSnapshotsGNNGraph, Int64, GNNGraph}","page":"Temporal Graphs","title":"GNNGraphs.add_snapshot","text":"add_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int, g::GNNGraph)\n\nReturn a TemporalSnapshotsGNNGraph created starting from tg by adding the snapshot g at time index t.\n\nExamples\n\njulia> using GraphNeuralNetworks\n\njulia> snapshots = [rand_graph(10, 20) for i in 1:5];\n\njulia> tg = TemporalSnapshotsGNNGraph(snapshots)\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10, 10, 10]\n num_edges: [20, 20, 20, 20, 20]\n num_snapshots: 5\n\njulia> new_tg = add_snapshot(tg, 3, rand_graph(10, 16)) # add a new snapshot at time 3\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10, 10, 10, 10]\n num_edges: [20, 20, 16, 20, 20, 20]\n num_snapshots: 6\n\n\n\n\n\n","category":"method"},{"location":"api/temporalgraph/#GNNGraphs.remove_snapshot-Tuple{TemporalSnapshotsGNNGraph, Int64}","page":"Temporal Graphs","title":"GNNGraphs.remove_snapshot","text":"remove_snapshot(tg::TemporalSnapshotsGNNGraph, t::Int)\n\nReturn a TemporalSnapshotsGNNGraph created starting from tg by removing the snapshot at time index t.\n\nExamples\n\njulia> using GraphNeuralNetworks\n\njulia> snapshots = [rand_graph(10,20), rand_graph(10,14), rand_graph(10,22)];\n\njulia> tg = TemporalSnapshotsGNNGraph(snapshots)\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10]\n num_edges: [20, 14, 22]\n num_snapshots: 3\n\njulia> new_tg = remove_snapshot(tg, 2) # remove snapshot at time 2\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10]\n num_edges: [20, 22]\n num_snapshots: 2\n\n\n\n\n\n","category":"method"},{"location":"api/temporalgraph/#TemporalSnapshotsGNNGraph-random-generators","page":"Temporal Graphs","title":"TemporalSnapshotsGNNGraph random generators","text":"","category":"section"},{"location":"api/temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"rand_temporal_radius_graph\nrand_temporal_hyperbolic_graph","category":"page"},{"location":"api/temporalgraph/#GNNGraphs.rand_temporal_radius_graph","page":"Temporal Graphs","title":"GNNGraphs.rand_temporal_radius_graph","text":"rand_temporal_radius_graph(number_nodes::Int, \n number_snapshots::Int,\n speed::AbstractFloat,\n r::AbstractFloat;\n self_loops = false,\n dir = :in,\n kws...)\n\nCreate a random temporal graph given number_nodes nodes and number_snapshots snapshots. First, the positions of the nodes are randomly generated in the unit square. Two nodes are connected if their distance is less than a given radius r. Each following snapshot is obtained by applying the same construction to new positions obtained as follows. 
For each snapshot, the new positions of the points are determined by applying random independent displacement vectors to the previous positions. The direction of the displacement is chosen uniformly at random and its length is chosen uniformly in [0, speed]. Then the connections are recomputed. If a point happens to move outside the boundary, its position is updated as if it had bounced off the boundary.\n\nArguments\n\nnumber_nodes: The number of nodes of each snapshot.\nnumber_snapshots: The number of snapshots.\nspeed: The speed to update the nodes.\nr: The radius of connection.\nself_loops: If true, consider the node itself among its neighbors, in which case the graph will contain self-loops. \ndir: The direction of the edges. If dir=:in edges go from the neighbors to the central node. If dir=:out we have the opposite direction.\nkws: Further keyword arguments will be passed to the GNNGraph constructor of each snapshot.\n\nExample\n\njulia> n, snaps, s, r = 10, 5, 0.1, 1.5;\n\njulia> tg = rand_temporal_radius_graph(n,snaps,s,r) # complete graph at each snapshot\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10, 10, 10]\n num_edges: [90, 90, 90, 90, 90]\n num_snapshots: 5\n\n\n\n\n\n","category":"function"},{"location":"api/temporalgraph/#GNNGraphs.rand_temporal_hyperbolic_graph","page":"Temporal Graphs","title":"GNNGraphs.rand_temporal_hyperbolic_graph","text":"rand_temporal_hyperbolic_graph(number_nodes::Int, \n number_snapshots::Int;\n α::Real,\n R::Real,\n speed::Real,\n ζ::Real=1,\n self_loop = false,\n kws...)\n\nCreate a random temporal graph given number_nodes nodes and number_snapshots snapshots. First, the positions of the nodes are generated with a quasi-uniform distribution (depending on the parameter α) in hyperbolic space within a disk of radius R. Two nodes are connected if their hyperbolic distance is less than R. Each following snapshot is created in order to keep the same initial distribution.\n\nArguments\n\nnumber_nodes: The number of nodes of each snapshot.\nnumber_snapshots: The number of snapshots.\nα: The parameter that controls the position of the points. If α=ζ, the points are uniformly distributed on the disk of radius R. If α>ζ, the points are more concentrated in the center of the disk. 
If α<ζ, the points are more concentrated at the boundary of the disk.\nR: The radius of the disk and of connection.\nspeed: The speed to update the nodes.\nζ: The parameter that controls the curvature of the disk.\nself_loops: If true, consider the node itself among its neighbors, in which case the graph will contain self-loops.\nkws: Further keyword arguments will be passed to the GNNGraph constructor of each snapshot.\n\nExample\n\njulia> n, snaps, α, R, speed, ζ = 10, 5, 1.0, 4.0, 0.1, 1.0;\n\njulia> thg = rand_temporal_hyperbolic_graph(n, snaps; α, R, speed, ζ)\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10, 10, 10]\n num_edges: [44, 46, 48, 42, 38]\n num_snapshots: 5\n\nReferences\n\nSection D of the paper Dynamic Hidden-Variable Network Models and the paper Hyperbolic Geometry of Complex Networks\n\n\n\n\n\n","category":"function"},{"location":"api/heterograph/#Hetereogeneous-Graphs","page":"Heterogeneous Graphs","title":"Hetereogeneous Graphs","text":"","category":"section"},{"location":"api/heterograph/#GNNHeteroGraph","page":"Heterogeneous Graphs","title":"GNNHeteroGraph","text":"","category":"section"},{"location":"api/heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"Documentation page for the type GNNHeteroGraph representing heterogeneous graphs, where nodes and edges can have different types.","category":"page"},{"location":"api/heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"Modules = [GNNGraphs]\nPages = [\"gnnheterograph.jl\"]\nPrivate = false","category":"page"},{"location":"api/heterograph/#GNNGraphs.GNNHeteroGraph","page":"Heterogeneous Graphs","title":"GNNGraphs.GNNHeteroGraph","text":"GNNHeteroGraph(data; [ndata, edata, gdata, num_nodes])\nGNNHeteroGraph(pairs...; [ndata, edata, gdata, num_nodes])\n\nA type representing a heterogeneous graph structure. It is similar to GNNGraph but nodes and edges are of different types.\n\nConstructor Arguments\n\ndata: A dictionary or an iterable object that maps (source_type, edge_type, target_type) triples to (source, target) index vectors (or to (source, target, weight) if also edge weights are present).\npairs: Passing multiple relations as pairs is equivalent to passing data=Dict(pairs...).\nndata: Node features. A dictionary of arrays or named tuple of arrays. The size of the last dimension of each array must be given by g.num_nodes.\nedata: Edge features. A dictionary of arrays or named tuple of arrays. Default nothing. The size of the last dimension of each array must be given by g.num_edges. Default nothing.\ngdata: Graph features. An array or named tuple of arrays whose last dimension has size num_graphs. Default nothing.\nnum_nodes: The number of nodes for each type. If not specified, inferred from data. 
Default nothing.\n\nFields\n\ngraph: A dictionary that maps (sourcetype, edgetype, target_type) triples to (source, target) index vectors.\nnum_nodes: The number of nodes for each type.\nnum_edges: The number of edges for each type.\nndata: Node features.\nedata: Edge features.\ngdata: Graph features.\nntypes: The node types.\netypes: The edge types.\n\nExamples\n\njulia> using GraphNeuralNetworks\n\njulia> nA, nB = 10, 20;\n\njulia> num_nodes = Dict(:A => nA, :B => nB);\n\njulia> edges1 = (rand(1:nA, 20), rand(1:nB, 20))\n([4, 8, 6, 3, 4, 7, 2, 7, 3, 2, 3, 4, 9, 4, 2, 9, 10, 1, 3, 9], [6, 4, 20, 8, 16, 7, 12, 16, 5, 4, 6, 20, 11, 19, 17, 9, 12, 2, 18, 12])\n\njulia> edges2 = (rand(1:nB, 30), rand(1:nA, 30))\n([17, 5, 2, 4, 5, 3, 8, 7, 9, 7 … 19, 8, 20, 7, 16, 2, 9, 15, 8, 13], [1, 1, 3, 1, 1, 3, 2, 7, 4, 4 … 7, 10, 6, 3, 4, 9, 1, 5, 8, 5])\n\njulia> data = ((:A, :rel1, :B) => edges1, (:B, :rel2, :A) => edges2);\n\njulia> hg = GNNHeteroGraph(data; num_nodes)\nGNNHeteroGraph:\n num_nodes: (:A => 10, :B => 20)\n num_edges: ((:A, :rel1, :B) => 20, (:B, :rel2, :A) => 30)\n\njulia> hg.num_edges\nDict{Tuple{Symbol, Symbol, Symbol}, Int64} with 2 entries:\n(:A, :rel1, :B) => 20\n(:B, :rel2, :A) => 30\n\n# Let's add some node features\njulia> ndata = Dict(:A => (x = rand(2, nA), y = rand(3, num_nodes[:A])),\n :B => rand(10, nB));\n\njulia> hg = GNNHeteroGraph(data; num_nodes, ndata)\nGNNHeteroGraph:\n num_nodes: (:A => 10, :B => 20)\n num_edges: ((:A, :rel1, :B) => 20, (:B, :rel2, :A) => 30)\n ndata:\n :A => (x = 2×10 Matrix{Float64}, y = 3×10 Matrix{Float64})\n :B => x = 10×20 Matrix{Float64}\n\n# Access features of nodes of type :A\njulia> hg.ndata[:A].x\n2×10 Matrix{Float64}:\n 0.825882 0.0797502 0.245813 0.142281 0.231253 0.685025 0.821457 0.888838 0.571347 0.53165\n 0.631286 0.316292 0.705325 0.239211 0.533007 0.249233 0.473736 0.595475 0.0623298 0.159307\n\nSee also GNNGraph for a homogeneous graph type and rand_heterograph for a function to generate random heterographs.\n\n\n\n\n\n","category":"type"},{"location":"api/heterograph/#GNNGraphs.edge_type_subgraph-Tuple{GNNHeteroGraph, Tuple{Symbol, Symbol, Symbol}}","page":"Heterogeneous Graphs","title":"GNNGraphs.edge_type_subgraph","text":"edge_type_subgraph(g::GNNHeteroGraph, edge_ts)\n\nReturn a subgraph of g that contains only the edges of type edge_ts. Edge types can be specified as a single edge type (i.e. a tuple containing 3 symbols) or a vector of edge types.\n\n\n\n\n\n","category":"method"},{"location":"api/heterograph/#GNNGraphs.num_edge_types-Tuple{GNNGraph}","page":"Heterogeneous Graphs","title":"GNNGraphs.num_edge_types","text":"num_edge_types(g)\n\nReturn the number of edge types in the graph. For GNNGraphs, this is always 1. For GNNHeteroGraphs, this is the number of unique edge types.\n\n\n\n\n\n","category":"method"},{"location":"api/heterograph/#GNNGraphs.num_node_types-Tuple{GNNGraph}","page":"Heterogeneous Graphs","title":"GNNGraphs.num_node_types","text":"num_node_types(g)\n\nReturn the number of node types in the graph. For GNNGraphs, this is always 1. 
For GNNHeteroGraphs, this is the number of unique node types.\n\n\n\n\n\n","category":"method"},{"location":"api/heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"Graphs.has_edge(::GNNHeteroGraph, ::Tuple{Symbol, Symbol, Symbol}, ::Integer, ::Integer)","category":"page"},{"location":"api/heterograph/#Graphs.has_edge-Tuple{GNNHeteroGraph, Tuple{Symbol, Symbol, Symbol}, Integer, Integer}","page":"Heterogeneous Graphs","title":"Graphs.has_edge","text":"has_edge(g::GNNHeteroGraph, edge_t, i, j)\n\nReturn true if there is an edge of type edge_t from node i to node j in g.\n\nExamples\n\njulia> g = rand_bipartite_heterograph((2, 2), (4, 0), bidirected=false)\nGNNHeteroGraph:\n num_nodes: (:A => 2, :B => 2)\n num_edges: ((:A, :to, :B) => 4, (:B, :to, :A) => 0)\n\njulia> has_edge(g, (:A,:to,:B), 1, 1)\ntrue\n\njulia> has_edge(g, (:B,:to,:A), 1, 1)\nfalse\n\n\n\n\n\n","category":"method"},{"location":"api/heterograph/#Heterogeneous-Graph-Convolutions","page":"Heterogeneous Graphs","title":"Heterogeneous Graph Convolutions","text":"","category":"section"},{"location":"api/heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"Heterogeneous graph convolutions are implemented in the type HeteroGraphConv. HeteroGraphConv relies on standard graph convolutional layers to perform message passing on the different relations. See the table at this page for the supported layers.","category":"page"},{"location":"api/heterograph/","page":"Heterogeneous Graphs","title":"Heterogeneous Graphs","text":"HeteroGraphConv","category":"page"},{"location":"api/heterograph/#GraphNeuralNetworks.HeteroGraphConv","page":"Heterogeneous Graphs","title":"GraphNeuralNetworks.HeteroGraphConv","text":"HeteroGraphConv(itr; aggr = +)\nHeteroGraphConv(pairs...; aggr = +)\n\nA convolutional layer for heterogeneous graphs.\n\nThe itr argument is an iterator of pairs of the form edge_t => layer, where edge_t is a 3-tuple of the form (src_node_type, edge_type, dst_node_type), and layer is a convolutional layers for homogeneous graphs. \n\nEach convolution is applied to the corresponding relation. Since a node type can be involved in multiple relations, the single convolution outputs have to be aggregated using the aggr function. The default is to sum the outputs.\n\nForward Arguments\n\ng::GNNHeteroGraph: The input graph.\nx::Union{NamedTuple,Dict}: The input node features. 
The keys are node types and the values are node feature tensors.\n\nExamples\n\njulia> g = rand_bipartite_heterograph((10, 15), 20)\nGNNHeteroGraph:\n num_nodes: Dict(:A => 10, :B => 15)\n num_edges: Dict((:A, :to, :B) => 20, (:B, :to, :A) => 20)\n\njulia> x = (A = rand(Float32, 64, 10), B = rand(Float32, 64, 15));\n\njulia> layer = HeteroGraphConv((:A, :to, :B) => GraphConv(64 => 32, relu),\n (:B, :to, :A) => GraphConv(64 => 32, relu));\n\njulia> y = layer(g, x); # output is a named tuple\n\njulia> size(y.A) == (32, 10) && size(y.B) == (32, 15)\ntrue\n\n\n\n\n\n","category":"type"},{"location":"api/temporalconv/","page":"Temporal Graph-Convolutional Layers","title":"Temporal Graph-Convolutional Layers","text":"CurrentModule = GraphNeuralNetworks","category":"page"},{"location":"api/temporalconv/#Temporal-Graph-Convolutional-Layers","page":"Temporal Graph-Convolutional Layers","title":"Temporal Graph-Convolutional Layers","text":"","category":"section"},{"location":"api/temporalconv/","page":"Temporal Graph-Convolutional Layers","title":"Temporal Graph-Convolutional Layers","text":"Convolutions for time-varying graphs (temporal graphs) such as the TemporalSnapshotsGNNGraph.","category":"page"},{"location":"api/temporalconv/#Docs","page":"Temporal Graph-Convolutional Layers","title":"Docs","text":"","category":"section"},{"location":"api/temporalconv/","page":"Temporal Graph-Convolutional Layers","title":"Temporal Graph-Convolutional Layers","text":"Modules = [GraphNeuralNetworks]\nPages = [\"layers/temporalconv.jl\"]\nPrivate = false","category":"page"},{"location":"api/temporalconv/#GraphNeuralNetworks.A3TGCN","page":"Temporal Graph-Convolutional Layers","title":"GraphNeuralNetworks.A3TGCN","text":"A3TGCN(in => out; [bias, init, init_state, add_self_loops, use_edge_weight])\n\nAttention Temporal Graph Convolutional Network (A3T-GCN) model from the paper A3T-GCN: Attention Temporal Graph Convolutional Network for Traffic Forecasting.\n\nPerforms a TGCN layer, followed by a soft attention layer.\n\nArguments\n\nin: Number of input features.\nout: Number of output features.\nbias: Add learnable bias. Default true.\ninit: Weights' initializer. Default glorot_uniform.\ninit_state: Initial state of the hidden stat of the GRU layer. Default zeros32.\nadd_self_loops: Add self loops to the graph before performing the convolution. Default false.\nuse_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.\n\nExamples\n\njulia> a3tgcn = A3TGCN(2 => 6)\nA3TGCN(2 => 6)\n\njulia> g, x = rand_graph(5, 10), rand(Float32, 2, 5);\n\njulia> y = a3tgcn(g,x);\n\njulia> size(y)\n(6, 5)\n\njulia> Flux.reset!(a3tgcn);\n\njulia> y = a3tgcn(rand_graph(5, 10), rand(Float32, 2, 5, 20));\n\njulia> size(y)\n(6, 5)\n\nwarning: Batch size changes\nFailing to call reset! 
when the input batch size changes can lead to unexpected behavior.\n\n\n\n\n\n","category":"type"},{"location":"api/temporalconv/#GraphNeuralNetworks.EvolveGCNO","page":"Temporal Graph-Convolutional Layers","title":"GraphNeuralNetworks.EvolveGCNO","text":"EvolveGCNO(ch; bias = true, init = glorot_uniform, init_state = Flux.zeros32)\n\nEvolving Graph Convolutional Network (EvolveGCNO) layer from the paper EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs.\n\nPerfoms a Graph Convolutional layer with parameters derived from a Long Short-Term Memory (LSTM) layer across the snapshots of the temporal graph.\n\nArguments\n\nin: Number of input features.\nout: Number of output features.\nbias: Add learnable bias. Default true.\ninit: Weights' initializer. Default glorot_uniform.\ninit_state: Initial state of the hidden stat of the LSTM layer. Default zeros32.\n\nExamples\n\njulia> tg = TemporalSnapshotsGNNGraph([rand_graph(10,20; ndata = rand(4,10)), rand_graph(10,14; ndata = rand(4,10)), rand_graph(10,22; ndata = rand(4,10))])\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10]\n num_edges: [20, 14, 22]\n num_snapshots: 3\n\njulia> ev = EvolveGCNO(4 => 5)\nEvolveGCNO(4 => 5)\n\njulia> size(ev(tg, tg.ndata.x))\n(3,)\n\njulia> size(ev(tg, tg.ndata.x)[1])\n(5, 10)\n\n\n\n\n\n","category":"type"},{"location":"api/temporalconv/#GraphNeuralNetworks.DCGRU-Tuple{Any, Any, Any}","page":"Temporal Graph-Convolutional Layers","title":"GraphNeuralNetworks.DCGRU","text":"DCGRU(in => out, k, n; [bias, init, init_state])\n\nDiffusion Convolutional Recurrent Neural Network (DCGRU) layer from the paper Diffusion Convolutional Recurrent Neural Network: Data-driven Traffic Forecasting.\n\nPerforms a Diffusion Convolutional layer to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.\n\nArguments\n\nin: Number of input features.\nout: Number of output features.\nk: Diffusion step.\nn: Number of nodes in the graph.\nbias: Add learnable bias. Default true.\ninit: Weights' initializer. Default glorot_uniform.\ninit_state: Initial state of the hidden stat of the LSTM layer. Default zeros32.\n\nExamples\n\njulia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);\n\njulia> dcgru = DCGRU(2 => 5, 2, g1.num_nodes);\n\njulia> y = dcgru(g1, x1);\n\njulia> size(y)\n(5, 5)\n\njulia> g2, x2 = rand_graph(5, 10), rand(Float32, 2, 5, 30);\n\njulia> z = dcgru(g2, x2);\n\njulia> size(z)\n(5, 5, 30)\n\n\n\n\n\n","category":"method"},{"location":"api/temporalconv/#GraphNeuralNetworks.GConvGRU-Tuple{Any, Any, Any}","page":"Temporal Graph-Convolutional Layers","title":"GraphNeuralNetworks.GConvGRU","text":"GConvGRU(in => out, k, n; [bias, init, init_state])\n\nGraph Convolutional Gated Recurrent Unit (GConvGRU) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks.\n\nPerforms a layer of ChebConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.\n\nArguments\n\nin: Number of input features.\nout: Number of output features.\nk: Chebyshev polynomial order.\nn: Number of nodes in the graph.\nbias: Add learnable bias. Default true.\ninit: Weights' initializer. Default glorot_uniform.\ninit_state: Initial state of the hidden stat of the GRU layer. 
Default zeros32.\n\nExamples\n\njulia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);\n\njulia> ggru = GConvGRU(2 => 5, 2, g1.num_nodes);\n\njulia> y = ggru(g1, x1);\n\njulia> size(y)\n(5, 5)\n\njulia> g2, x2 = rand_graph(5, 10), rand(Float32, 2, 5, 30);\n\njulia> z = ggru(g2, x2);\n\njulia> size(z)\n(5, 5, 30)\n\n\n\n\n\n","category":"method"},{"location":"api/temporalconv/#GraphNeuralNetworks.GConvLSTM-Tuple{Any, Any, Any}","page":"Temporal Graph-Convolutional Layers","title":"GraphNeuralNetworks.GConvLSTM","text":"GConvLSTM(in => out, k, n; [bias, init, init_state])\n\nGraph Convolutional Long Short-Term Memory (GConvLSTM) recurrent layer from the paper Structured Sequence Modeling with Graph Convolutional Recurrent Networks. \n\nPerforms a layer of ChebConv to model spatial dependencies, followed by a Long Short-Term Memory (LSTM) cell to model temporal dependencies.\n\nArguments\n\nin: Number of input features.\nout: Number of output features.\nk: Chebyshev polynomial order.\nn: Number of nodes in the graph.\nbias: Add learnable bias. Default true.\ninit: Weights' initializer. Default glorot_uniform.\ninit_state: Initial state of the hidden stat of the LSTM layer. Default zeros32.\n\nExamples\n\njulia> g1, x1 = rand_graph(5, 10), rand(Float32, 2, 5);\n\njulia> gclstm = GConvLSTM(2 => 5, 2, g1.num_nodes);\n\njulia> y = gclstm(g1, x1);\n\njulia> size(y)\n(5, 5)\n\njulia> g2, x2 = rand_graph(5, 10), rand(Float32, 2, 5, 30);\n\njulia> z = gclstm(g2, x2);\n\njulia> size(z)\n(5, 5, 30)\n\n\n\n\n\n","category":"method"},{"location":"api/temporalconv/#GraphNeuralNetworks.TGCN-Tuple{Any}","page":"Temporal Graph-Convolutional Layers","title":"GraphNeuralNetworks.TGCN","text":"TGCN(in => out; [bias, init, init_state, add_self_loops, use_edge_weight])\n\nTemporal Graph Convolutional Network (T-GCN) recurrent layer from the paper T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction.\n\nPerforms a layer of GCNConv to model spatial dependencies, followed by a Gated Recurrent Unit (GRU) cell to model temporal dependencies.\n\nArguments\n\nin: Number of input features.\nout: Number of output features.\nbias: Add learnable bias. Default true.\ninit: Weights' initializer. Default glorot_uniform.\ninit_state: Initial state of the hidden stat of the GRU layer. Default zeros32.\nadd_self_loops: Add self loops to the graph before performing the convolution. Default false.\nuse_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. Default false.\n\nExamples\n\njulia> tgcn = TGCN(2 => 6)\nRecur(\n TGCNCell(\n GCNConv(2 => 6, σ), # 18 parameters\n GRUv3Cell(6 => 6), # 240 parameters\n Float32[0.0; 0.0; … ; 0.0; 0.0;;], # 6 parameters (all zero)\n 2,\n 6,\n ),\n) # Total: 8 trainable arrays, 264 parameters,\n # plus 1 non-trainable, 6 parameters, summarysize 1.492 KiB.\n\njulia> g, x = rand_graph(5, 10), rand(Float32, 2, 5);\n\njulia> y = tgcn(g, x);\n\njulia> size(y)\n(6, 5)\n\njulia> Flux.reset!(tgcn);\n\njulia> tgcn(rand_graph(5, 10), rand(Float32, 2, 5, 20)) |> size # batch size of 20\n(6, 5, 20)\n\nwarning: Batch size changes\nFailing to call reset! 
when the input batch size changes can lead to unexpected behavior.\n\n\n\n\n\n","category":"method"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/#Graph-Classification-with-Graph-Neural-Networks","page":"Graph Classification with Graph Neural Networks","title":"Graph Classification with Graph Neural Networks","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/","page":"Graph Classification with Graph Neural Networks","title":"Graph Classification with Graph Neural Networks","text":"(Image: Source code) (Image: Author) (Image: Update time)","category":"page"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/","page":"Graph Classification with Graph Neural Networks","title":"Graph Classification with Graph Neural Networks","text":"\n\n\n\n
begin\n    using Flux\n    using Flux: onecold, onehotbatch, logitcrossentropy\n    using Flux: DataLoader\n    using GraphNeuralNetworks\n    using MLDatasets\n    using MLUtils\n    using LinearAlgebra, Random, Statistics\n\n    ENV[\"DATADEPS_ALWAYS_ACCEPT\"] = \"true\"  # don't ask for dataset download confirmation\n    Random.seed!(17) # for reproducibility\nend;
\n\n\n\n

This Pluto notebook is a Julia adaptation of the PyTorch Geometric tutorials that can be found here.

In this tutorial session we will have a closer look at how to apply Graph Neural Networks (GNNs) to the task of graph classification. Graph classification refers to the problem of classifying entire graphs (in contrast to nodes), given a dataset of graphs, based on some structural graph properties. Here, we want to embed entire graphs in such a way that they are linearly separable for the task at hand.

The most common task for graph classification is molecular property prediction, in which molecules are represented as graphs, and the task may be to infer whether a molecule inhibits HIV virus replication or not.

TU Dortmund University has collected a wide range of different graph classification datasets, known as the TUDatasets, which are also accessible via MLDatasets.jl. Let's load and inspect one of the smaller ones, the MUTAG dataset:

\n\n
dataset = TUDataset(\"MUTAG\")
\n
dataset TUDataset:\n  name        =>    MUTAG\n  metadata    =>    Dict{String, Any} with 1 entry\n  graphs      =>    188-element Vector{MLDatasets.Graph}\n  graph_data  =>    (targets = \"188-element Vector{Int64}\",)\n  num_nodes   =>    3371\n  num_edges   =>    7442\n  num_graphs  =>    188
\n\n
dataset.graph_data.targets |> union
\n
2-element Vector{Int64}:\n  1\n -1
\n\n
g1, y1 = dataset[1] #get the first graph and target
\n
(graphs = Graph(17, 38), targets = 1)
\n\n
reduce(vcat, g.node_data.targets for (g, _) in dataset) |> union
\n
7-element Vector{Int64}:\n 0\n 1\n 2\n 3\n 4\n 5\n 6
\n\n
reduce(vcat, g.edge_data.targets for (g, _) in dataset) |> union
\n
4-element Vector{Int64}:\n 0\n 1\n 2\n 3
\n\n\n

This dataset provides 188 different graphs, and the task is to classify each graph into one out of two classes.

By inspecting the first graph object of the dataset, we can see that it comes with 17 nodes and 38 edges. It also comes with exactly one graph label, and provides additional node labels (7 classes) and edge labels (4 classes). However, for the sake of simplicity, we will not make use of edge labels.

\n\n\n

We now convert the MLDatasets.jl graph types to our GNNGraphs and we also onehot encode both the node labels (which will be used as input features) and the graph labels (what we want to predict):

\n\n
begin\n    graphs = mldataset2gnngraph(dataset)\n    graphs = [GNNGraph(g,\n                       ndata = Float32.(onehotbatch(g.ndata.targets, 0:6)),\n                       edata = nothing)\n              for g in graphs]\n    y = onehotbatch(dataset.graph_data.targets, [-1, 1])\nend
\n
2×188 OneHotMatrix(::Vector{UInt32}) with eltype Bool:\n ⋅  1  1  ⋅  1  ⋅  1  ⋅  1  ⋅  ⋅  ⋅  ⋅  1  …  ⋅  ⋅  ⋅  1  ⋅  1  1  ⋅  ⋅  1  1  ⋅  1\n 1  ⋅  ⋅  1  ⋅  1  ⋅  1  ⋅  1  1  1  1  ⋅     1  1  1  ⋅  1  ⋅  ⋅  1  1  ⋅  ⋅  1  ⋅
\n\n\n

We have some useful utilities for working with graph datasets, e.g., we can shuffle the dataset and use the first 150 graphs as training graphs, while using the remaining ones for testing:

\n\n
train_data, test_data = splitobs((graphs, y), at = 150, shuffle = true) |> getobs
\n
((GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}[GNNGraph(16, 34) with x: 7×16 data, GNNGraph(22, 50) with x: 7×22 data, GNNGraph(23, 54) with x: 7×23 data, GNNGraph(11, 22) with x: 7×11 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(19, 44) with x: 7×19 data, GNNGraph(16, 34) with x: 7×16 data, GNNGraph(14, 30) with x: 7×14 data, GNNGraph(18, 38) with x: 7×18 data  …  GNNGraph(12, 26) with x: 7×12 data, GNNGraph(19, 40) with x: 7×19 data, GNNGraph(19, 44) with x: 7×19 data, GNNGraph(26, 60) with x: 7×26 data, GNNGraph(20, 44) with x: 7×20 data, GNNGraph(20, 44) with x: 7×20 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(19, 44) with x: 7×19 data, GNNGraph(19, 42) with x: 7×19 data, GNNGraph(22, 50) with x: 7×22 data], Bool[0 0 … 0 0; 1 1 … 1 1]), (GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}[GNNGraph(26, 60) with x: 7×26 data, GNNGraph(15, 34) with x: 7×15 data, GNNGraph(11, 22) with x: 7×11 data, GNNGraph(24, 50) with x: 7×24 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(21, 44) with x: 7×21 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(12, 26) with x: 7×12 data, GNNGraph(17, 38) with x: 7×17 data  …  GNNGraph(12, 26) with x: 7×12 data, GNNGraph(23, 52) with x: 7×23 data, GNNGraph(12, 24) with x: 7×12 data, GNNGraph(23, 50) with x: 7×23 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(18, 40) with x: 7×18 data, GNNGraph(16, 36) with x: 7×16 data, GNNGraph(13, 26) with x: 7×13 data, GNNGraph(28, 62) with x: 7×28 data, GNNGraph(11, 22) with x: 7×11 data], Bool[0 0 … 0 1; 1 1 … 1 0]))
\n\n
begin\n    train_loader = DataLoader(train_data, batchsize = 32, shuffle = true)\n    test_loader = DataLoader(test_data, batchsize = 32, shuffle = false)\nend
\n
2-element DataLoader(::Tuple{Vector{GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}}, OneHotArrays.OneHotMatrix{UInt32, Vector{UInt32}}}, batchsize=32)\n  with first element:\n  (32-element Vector{GraphNeuralNetworks.GNNGraphs.GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}}, 2×32 OneHotMatrix(::Vector{UInt32}) with eltype Bool,)
\n\n\n

Here, we opt for a batch_size of 32, leading to 5 (randomly shuffled) mini-batches, containing all \\(4 \\cdot 32+22 = 150\\) graphs.

\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/#Mini-batching-of-graphs","page":"Graph Classification with Graph Neural Networks","title":"Mini-batching of graphs","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/","page":"Graph Classification with Graph Neural Networks","title":"Graph Classification with Graph Neural Networks","text":"
\n

Since graphs in graph classification datasets are usually small, a good idea is to batch the graphs before inputting them into a Graph Neural Network to guarantee full GPU utilization. In the image or language domain, this procedure is typically achieved by rescaling or padding each example into a set of equally-sized shapes, and examples are then grouped in an additional dimension. The length of this dimension is then equal to the number of examples grouped in a mini-batch and is typically referred to as the batchsize.

However, for GNNs the two approaches described above are either not feasible or may result in a lot of unnecessary memory consumption. Therefore, GraphNeuralNetworks.jl opts for another approach to achieve parallelization across a number of examples. Here, adjacency matrices are stacked in a diagonal fashion (creating a giant graph that holds multiple isolated subgraphs), and node and target features are simply concatenated in the node dimension (the last dimension).

This procedure has some crucial advantages over other batching procedures:

  1. GNN operators that rely on a message passing scheme do not need to be modified since messages are not exchanged between two nodes that belong to different graphs.

  2. There is no computational or memory overhead since adjacency matrices are saved in a sparse fashion holding only non-zero entries, i.e., the edges.

GraphNeuralNetworks.jl can batch multiple graphs into a single giant graph:

\n\n
vec_gs, _ = first(train_loader)
\n
(GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}[GNNGraph(19, 44) with x: 7×19 data, GNNGraph(20, 46) with x: 7×20 data, GNNGraph(15, 34) with x: 7×15 data, GNNGraph(25, 56) with x: 7×25 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(20, 44) with x: 7×20 data, GNNGraph(16, 34) with x: 7×16 data, GNNGraph(11, 22) with x: 7×11 data, GNNGraph(19, 44) with x: 7×19 data, GNNGraph(20, 44) with x: 7×20 data  …  GNNGraph(12, 24) with x: 7×12 data, GNNGraph(12, 26) with x: 7×12 data, GNNGraph(16, 36) with x: 7×16 data, GNNGraph(11, 22) with x: 7×11 data, GNNGraph(22, 50) with x: 7×22 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(14, 30) with x: 7×14 data, GNNGraph(16, 34) with x: 7×16 data, GNNGraph(22, 50) with x: 7×22 data, GNNGraph(23, 54) with x: 7×23 data], Bool[0 0 … 0 0; 1 1 … 1 1])
\n\n
MLUtils.batch(vec_gs)
\n
GNNGraph:\n  num_nodes: 575\n  num_edges: 1276\n  num_graphs: 32\n  ndata:\n\tx = 7×575 Matrix{Float32}
\n\n\n

Each batched graph object is equipped with a graph_indicator vector, which maps each node to its respective graph in the batch:

$$\\textrm{graph\\_indicator} = [1, \\ldots, 1, 2, \\ldots, 2, 3, \\ldots ]$$
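For example, we can inspect this vector on the giant graph built above. This is a minimal sketch reusing vec_gs from the previous cells; gbatch is just a name we introduce here:

gbatch = MLUtils.batch(vec_gs)   # one giant graph holding 32 isolated subgraphs
gbatch.graph_indicator           # vector of length gbatch.num_nodes with values in 1:32, mapping each node to its graph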

\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/#Training-a-Graph-Neural-Network-(GNN)","page":"Graph Classification with Graph Neural Networks","title":"Training a Graph Neural Network (GNN)","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/","page":"Graph Classification with Graph Neural Networks","title":"Graph Classification with Graph Neural Networks","text":"
\n

Training a GNN for graph classification usually follows a simple recipe:

  1. Embed each node by performing multiple rounds of message passing

  2. Aggregate node embeddings into a unified graph embedding (readout layer)

  3. Train a final classifier on the graph embedding

There exist multiple readout layers in the literature, but the most common one is to simply take the average of node embeddings:

$$\\mathbf{x}_{\\mathcal{G}} = \\frac{1}{|\\mathcal{V}|} \\sum_{v \\in \\mathcal{V}} \\mathcal{x}^{(L)}_v$$

GraphNeuralNetworks.jl provides this functionality via GlobalPool(mean), which takes in the node embeddings of all nodes in the mini-batch and the assignment vector graph_indicator to compute a graph embedding of size [hidden_channels, batchsize].

The final architecture for applying GNNs to the task of graph classification then looks as follows and allows for complete end-to-end training:

\n\n
function create_model(nin, nh, nout)\n    GNNChain(GCNConv(nin => nh, relu),\n             GCNConv(nh => nh, relu),\n             GCNConv(nh => nh),\n             GlobalPool(mean),\n             Dropout(0.5),\n             Dense(nh, nout))\nend
\n
create_model (generic function with 1 method)
\n\n\n

Here, we again make use of the GCNConv with \\(\\mathrm{ReLU}(x) = \\max(x, 0)\\) activation for obtaining localized node embeddings, before we apply our final classifier on top of a graph readout layer.

Let's train our network for a few epochs to see how well it performs on the training as well as test set:

\n\n
function eval_loss_accuracy(model, data_loader, device)\n    loss = 0.0\n    acc = 0.0\n    ntot = 0\n    for (g, y) in data_loader\n        g, y = MLUtils.batch(g) |> device, y |> device\n        n = length(y)\n        ŷ = model(g, g.ndata.x)\n        loss += logitcrossentropy(ŷ, y) * n\n        acc += mean((ŷ .> 0) .== y) * n\n        ntot += n\n    end\n    return (loss = round(loss / ntot, digits = 4),\n            acc = round(acc * 100 / ntot, digits = 2))\nend
\n
eval_loss_accuracy (generic function with 1 method)
\n\n
function train!(model; epochs = 200, η = 1e-2, infotime = 10)\n    # device = Flux.gpu # uncomment this for GPU training\n    device = Flux.cpu\n    model = model |> device\n    opt = Flux.setup(Adam(1e-3), model)\n\n    function report(epoch)\n        train = eval_loss_accuracy(model, train_loader, device)\n        test = eval_loss_accuracy(model, test_loader, device)\n        @info (; epoch, train, test)\n    end\n\n    report(0)\n    for epoch in 1:epochs\n        for (g, y) in train_loader\n            g, y = MLUtils.batch(g) |> device, y |> device\n            grad = Flux.gradient(model) do model\n                ŷ = model(g, g.ndata.x)\n                logitcrossentropy(ŷ, y)\n            end\n            Flux.update!(opt, model, grad[1])\n        end\n        epoch % infotime == 0 && report(epoch)\n    end\nend
\n
train! (generic function with 1 method)
\n\n
begin\n    nin = 7\n    nh = 64\n    nout = 2\n    model = create_model(nin, nh, nout)\n    train!(model)\nend
\n\n\n\n

As one can see, our model reaches around 74% test accuracy. The fluctuations in accuracy can be explained by the rather small dataset (only 38 test graphs), and they usually disappear once GNNs are applied to larger datasets.

(Optional) Exercise

Can we do better than this? As multiple papers pointed out (Xu et al. (2018), Morris et al. (2018)), applying neighborhood normalization decreases the expressivity of GNNs in distinguishing certain graph structures. An alternative formulation (Morris et al. (2018)) omits neighborhood normalization completely and adds a simple skip-connection to the GNN layer in order to preserve central node information:

$$\\mathbf{x}_i^{(\\ell+1)} = \\mathbf{W}^{(\\ell + 1)}_1 \\mathbf{x}_i^{(\\ell)} + \\mathbf{W}^{(\\ell + 1)}_2 \\sum_{j \\in \\mathcal{N}(i)} \\mathbf{x}_j^{(\\ell)}$$

This layer is implemented under the name GraphConv in GraphNeuralNetworks.jl.

As an exercise, you are invited to modify the model above so that it uses GraphConv rather than GCNConv (a rough sketch is given below). This should bring you close to 82% test accuracy.
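A possible sketch of such a model, assuming the same nin, nh, and nout as before (the name create_graphconv_model is ours, not part of the tutorial code):

function create_graphconv_model(nin, nh, nout)
    GNNChain(GraphConv(nin => nh, relu),   # GraphConv: skip-connection, no neighborhood normalization
             GraphConv(nh => nh, relu),
             GraphConv(nh => nh),
             GlobalPool(mean),             # readout: average node embeddings per graph
             Dropout(0.5),
             Dense(nh, nout))
end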

\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/#Conclusion","page":"Graph Classification with Graph Neural Networks","title":"Conclusion","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/","page":"Graph Classification with Graph Neural Networks","title":"Graph Classification with Graph Neural Networks","text":"
\n

In this chapter, you have learned how to apply GNNs to the task of graph classification. You have learned how graphs can be batched together for better GPU utilization, and how to apply readout layers for obtaining graph embeddings rather than node embeddings.

\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/","page":"Graph Classification with Graph Neural Networks","title":"Graph Classification with Graph Neural Networks","text":"EditURL = \"https://github.com/CarloLucibello/GraphNeuralNetworks.jl/blob/master/docs/src/tutorials/introductory_tutorials/graph_classification_pluto.jl\"","category":"page"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/","page":"Graph Classification with Graph Neural Networks","title":"Graph Classification with Graph Neural Networks","text":"","category":"page"},{"location":"tutorials/introductory_tutorials/graph_classification_pluto/","page":"Graph Classification with Graph Neural Networks","title":"Graph Classification with Graph Neural Networks","text":"This page was generated using DemoCards.jl. and PlutoStaticHTML.jl","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/#Node-Classification-with-Graph-Neural-Networks","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"(Image: Source code) (Image: Author) (Image: Update time)","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"\n\n\n\n\n

In this tutorial, we will be learning how to use Graph Neural Networks (GNNs) for node classification. Given the ground-truth labels of only a small subset of nodes, we want to infer the labels for all the remaining nodes (transductive learning).

\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/#Import","page":"Node Classification with Graph Neural Networks","title":"Import","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"
\n

Let us start off by importing some libraries. We will be using Flux.jl and GraphNeuralNetworks.jl for our tutorial.

\n\n
begin\n    using MLDatasets\n    using GraphNeuralNetworks\n    using Flux\n    using Flux: onecold, onehotbatch, logitcrossentropy\n    using Plots\n    using PlutoUI\n    using TSne\n    using Random\n    using Statistics\n\n    ENV[\"DATADEPS_ALWAYS_ACCEPT\"] = \"true\"\n    Random.seed!(17) # for reproducibility\nend;
\n\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/#Visualize","page":"Node Classification with Graph Neural Networks","title":"Visualize","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"
\n

We want to visualize the outputs of our models using t-distributed stochastic neighbor embedding (t-SNE) to embed our output embeddings onto a 2D plane.

\n\n
function visualize_tsne(out, targets)\n    z = tsne(out, 2)\n    scatter(z[:, 1], z[:, 2], color = Int.(targets[1:size(z, 1)]), leg = false)\nend
\n
visualize_tsne (generic function with 1 method)
\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/#Dataset:-Cora","page":"Node Classification with Graph Neural Networks","title":"Dataset: Cora","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"
\n

For our tutorial, we will be using the Cora dataset. Cora is a citation network of 2708 documents, classified into one of seven classes, with 5429 links. Each node represents an article/document, and two nodes are connected by an edge if one of them cites the other.

Each publication in the dataset is described by a 0/1-valued word vector indicating the absence/presence of the corresponding word from the dictionary. The dictionary consists of 1433 unique words.

This dataset was first introduced by Yang et al. (2016) as one of the datasets of the Planetoid benchmark suite. We will be using MLDatasets.jl for easy access to this dataset.

\n\n
dataset = Cora()
\n
dataset Cora:\n  metadata  =>    Dict{String, Any} with 3 entries\n  graphs    =>    1-element Vector{MLDatasets.Graph}
\n\n\n

Datasets in MLDatasets.jl have metadata containing information about the dataset itself.

\n\n
dataset.metadata
\n
Dict{String, Any} with 3 entries:\n  \"name\"        => \"cora\"\n  \"classes\"     => [1, 2, 3, 4, 5, 6, 7]\n  \"num_classes\" => 7
\n\n\n

The graphs field of the dataset contains the graph data. The Cora dataset contains only one graph.

\n\n
dataset.graphs
\n
1-element Vector{MLDatasets.Graph}:\n Graph(2708, 10556)
\n\n\n

There is only one graph in the dataset. Its node_data contains features indicating whether certain words are present or not, and targets indicating the class of each document. We convert the single-graph dataset to a GNNGraph.

\n\n
g = mldataset2gnngraph(dataset)
\n
GNNGraph:\n  num_nodes: 2708\n  num_edges: 10556\n  ndata:\n\tval_mask = 2708-element BitVector\n\ttargets = 2708-element Vector{Int64}\n\ttest_mask = 2708-element BitVector\n\tfeatures = 1433×2708 Matrix{Float32}\n\ttrain_mask = 2708-element BitVector
\n\n
with_terminal() do\n    # Gather some statistics about the graph.\n    println(\"Number of nodes: $(g.num_nodes)\")\n    println(\"Number of edges: $(g.num_edges)\")\n    println(\"Average node degree: $(g.num_edges / g.num_nodes)\")\n    println(\"Number of training nodes: $(sum(g.ndata.train_mask))\")\n    println(\"Training node label rate: $(mean(g.ndata.train_mask))\")\n    # println(\"Has isolated nodes: $(has_isolated_nodes(g))\")\n    println(\"Has self-loops: $(has_self_loops(g))\")\n    println(\"Is undirected: $(is_bidirected(g))\")\nend
\n
Number of nodes: 2708\nNumber of edges: 10556\nAverage node degree: 3.8980797636632203\nNumber of training nodes: 140\nTraining node label rate: 0.051698670605613\nHas self-loops: false\nIs undirected: true\n
\n\n\n

Overall, this dataset is quite similar to the previously used KarateClub network. We can see that the Cora network holds 2,708 nodes and 10,556 edges, resulting in an average node degree of 3.9. For training this dataset, we are given the ground-truth categories of 140 nodes (20 for each class). This results in a training node label rate of only 5%.

We can further see that this network is undirected, and that there are no isolated nodes (each document has at least one citation).

\n\n
begin\n    x = g.ndata.features\n    # we onehot encode both the node labels (what we want to predict):\n    y = onehotbatch(g.ndata.targets, 1:7)\n    train_mask = g.ndata.train_mask\n    num_features = size(x)[1]\n    hidden_channels = 16\n    num_classes = dataset.metadata[\"num_classes\"]\nend;
\n\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/#Multi-layer-Perception-Network-(MLP)","page":"Node Classification with Graph Neural Networks","title":"Multi-layer Perception Network (MLP)","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"
\n

In theory, we should be able to infer the category of a document solely based on its content, i.e. its bag-of-words feature representation, without taking any relational information into account.

Let's verify that by constructing a simple MLP that solely operates on input node features (using shared weights across all nodes):

\n\n
begin\n    struct MLP\n        layers::NamedTuple\n    end\n\n    Flux.@layer :expand MLP\n\n    function MLP(num_features, num_classes, hidden_channels; drop_rate = 0.5)\n        layers = (hidden = Dense(num_features => hidden_channels),\n                  drop = Dropout(drop_rate),\n                  classifier = Dense(hidden_channels => num_classes))\n        return MLP(layers)\n    end\n\n    function (model::MLP)(x::AbstractMatrix)\n        l = model.layers\n        x = l.hidden(x)\n        x = relu(x)\n        x = l.drop(x)\n        x = l.classifier(x)\n        return x\n    end\nend
\n\n\n\n

Training a Multilayer Perceptron

Our MLP is defined by two linear layers and enhanced by ReLU non-linearity and Dropout. Here, we first reduce the 1433-dimensional feature vector to a low-dimensional embedding (hidden_channels=16), while the second linear layer acts as a classifier that should map each low-dimensional node embedding to one of the 7 classes.

Let's train our simple MLP by following a similar procedure as described in the first part of this tutorial. We again make use of the cross entropy loss and Adam optimizer. This time, we also define an accuracy function to evaluate how well our final model performs on the test node set (whose labels have not been observed during training).

\n\n
function train(model::MLP, data::AbstractMatrix, epochs::Int, opt)\n    Flux.trainmode!(model)\n\n    for epoch in 1:epochs\n        loss, grad = Flux.withgradient(model) do model\n            ŷ = model(data)\n            logitcrossentropy(ŷ[:, train_mask], y[:, train_mask])\n        end\n\n        Flux.update!(opt, model, grad[1])\n        if epoch % 200 == 0\n            @show epoch, loss\n        end\n    end\nend
\n
train (generic function with 1 method)
\n\n
function accuracy(model::MLP, x::AbstractMatrix, y::Flux.OneHotArray, mask::BitVector)\n    Flux.testmode!(model)\n    mean(onecold(model(x))[mask] .== onecold(y)[mask])\nend
\n
accuracy (generic function with 1 method)
\n\n
begin\n    mlp = MLP(num_features, num_classes, hidden_channels)\n    opt_mlp = Flux.setup(Adam(1e-3), mlp)\n    epochs = 2000\n    train(mlp, g.ndata.features, epochs, opt_mlp)\nend
\n\n\n\n

After training the model, we can call the accuracy function to see how well our model performs on unseen labels. Here, we are interested in the accuracy of the model, i.e., the ratio of correctly classified nodes:

\n\n
accuracy(mlp, g.ndata.features, y, .!train_mask)
\n
0.4929906542056075
\n\n\n

As one can see, our MLP performs rather poorly, with only about 49% test accuracy. But why does the MLP not perform better? The main reason is that this model suffers from heavy overfitting, due to only having access to a small number of training nodes, and therefore generalizes poorly to unseen node representations.

It also fails to incorporate an important bias into the model: Cited papers are very likely related to the category of a document. That is exactly where Graph Neural Networks come into play and can help to boost the performance of our model.

\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/#Training-a-Graph-Convolutional-Neural-Network-(GNN)","page":"Node Classification with Graph Neural Networks","title":"Training a Graph Convolutional Neural Network (GNN)","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"
\n

Following up on the first part of this tutorial, we replace the Dense linear layers with the GCNConv module. To recap, the GCN layer (Kipf et al. (2017)) is defined as

$$\\mathbf{x}_v^{(\\ell + 1)} = \\mathbf{W}^{(\\ell + 1)} \\sum_{w \\in \\mathcal{N}(v) \\, \\cup \\, \\{ v \\}} \\frac{1}{c_{w,v}} \\cdot \\mathbf{x}_w^{(\\ell)}$$

where \\(\\mathbf{W}^{(\\ell + 1)}\\) denotes a trainable weight matrix of shape [num_output_features, num_input_features] and \\(c_{w,v}\\) refers to a fixed normalization coefficient for each edge. In contrast, a single Linear layer is defined as

$$\\mathbf{x}_v^{(\\ell + 1)} = \\mathbf{W}^{(\\ell + 1)} \\mathbf{x}_v^{(\\ell)}$$

which does not make use of neighboring node information.

\n\n
begin\n    struct GCN\n        layers::NamedTuple\n    end\n\n    Flux.@layer GCN # provides parameter collection, gpu movement and more\n\n    function GCN(num_features, num_classes, hidden_channels; drop_rate = 0.5)\n        layers = (conv1 = GCNConv(num_features => hidden_channels),\n                  drop = Dropout(drop_rate),\n                  conv2 = GCNConv(hidden_channels => num_classes))\n        return GCN(layers)\n    end\n\n    function (gcn::GCN)(g::GNNGraph, x::AbstractMatrix)\n        l = gcn.layers\n        x = l.conv1(g, x)\n        x = relu.(x)\n        x = l.drop(x)\n        x = l.conv2(g, x)\n        return x\n    end\nend
\n\n\n\n

Now let's visualize the node embeddings of our untrained GCN network.

\n\n
begin\n    gcn = GCN(num_features, num_classes, hidden_channels)\n    h_untrained = gcn(g, x) |> transpose\n    visualize_tsne(h_untrained, g.ndata.targets)\nend
\n\n\n\n

We certainly can do better by training our model. The training and testing procedure is once again the same, but this time we make use of the node features x and the graph g as input to our GCN model.

\n\n
function train(model::GCN, g::GNNGraph, x::AbstractMatrix, epochs::Int, opt)\n    Flux.trainmode!(model)\n\n    for epoch in 1:epochs\n        loss, grad = Flux.withgradient(model) do model\n            ŷ = model(g, x)\n            logitcrossentropy(ŷ[:, train_mask], y[:, train_mask])\n        end\n\n        Flux.update!(opt, model, grad[1])\n        if epoch % 200 == 0\n            @show epoch, loss\n        end\n    end\nend
\n
train (generic function with 2 methods)
\n\n
function accuracy(model::GCN, g::GNNGraph, x::AbstractMatrix, y::Flux.OneHotArray,\n                  mask::BitVector)\n    Flux.testmode!(model)\n    mean(onecold(model(g, x))[mask] .== onecold(y)[mask])\nend
\n
accuracy (generic function with 2 methods)
\n\n
begin\n    opt_gcn = Flux.setup(Adam(1e-2), gcn)\n    train(gcn, g, x, epochs, opt_gcn)\nend
\n\n\n\n

Now let's evaluate the accuracy of our trained GCN on the training and test sets.

\n\n
with_terminal() do\n    train_accuracy = accuracy(gcn, g, g.ndata.features, y, train_mask)\n    test_accuracy = accuracy(gcn, g, g.ndata.features, y, .!train_mask)\n\n    println(\"Train accuracy: $(train_accuracy)\")\n    println(\"Test accuracy: $(test_accuracy)\")\nend
\n
Train accuracy: 1.0\nTest accuracy: 0.7394859813084113\n
\n\n\n

There it is! By simply swapping the linear layers with GNN layers, we reach about 74% test accuracy! This is in stark contrast to the roughly 49% test accuracy obtained by our MLP, indicating that relational information plays a crucial role in obtaining better performance.

We can also verify that once again by looking at the output embeddings of our trained model, which now produces a far better clustering of nodes of the same category.

\n\n
begin\n    Flux.testmode!(gcn) # inference mode\n\n    out_trained = gcn(g, x) |> transpose\n    visualize_tsne(out_trained, g.ndata.targets)\nend
\n\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/#(Optional)-Exercises","page":"Node Classification with Graph Neural Networks","title":"(Optional) Exercises","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"
\n
  1. To achieve better model performance and to avoid overfitting, it is usually a good idea to select the best model based on an additional validation set. The Cora dataset provides a validation node set as g.ndata.val_mask, but we haven't used it yet. Can you modify the code to select and test the model with the highest validation performance? This should bring test performance to 82% accuracy.

  2. How does GCN behave when increasing the hidden feature dimensionality or the number of layers? Does increasing the number of layers help at all?

  3. You can try to use different GNN layers to see how model performance changes. What happens if you swap out all GCNConv instances with GATConv layers that make use of attention? Try to write a 2-layer GAT model that makes use of 8 attention heads in the first layer and 1 attention head in the second layer, uses a dropout ratio of 0.6 inside and outside each GATConv call, and uses a hidden_channels dimension of 8 per head (a rough sketch is given below).
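One possible starting point for exercise 3, sketched under the assumption that the GAT model follows the same pattern as the GCN model above (the struct name and field names are our own choice, not a reference solution):

begin
    struct GAT
        layers::NamedTuple
    end

    Flux.@layer GAT

    function GAT(num_features, num_classes, hidden_channels; heads = 8, drop_rate = 0.6)
        layers = (drop1 = Dropout(drop_rate),
                  conv1 = GATConv(num_features => hidden_channels; heads = heads, dropout = drop_rate),
                  drop2 = Dropout(drop_rate),
                  # with concat=true (the default) the first layer outputs hidden_channels * heads features
                  conv2 = GATConv(hidden_channels * heads => num_classes; heads = 1, dropout = drop_rate))
        return GAT(layers)
    end

    function (gat::GAT)(g::GNNGraph, x::AbstractMatrix)
        l = gat.layers
        x = l.drop1(x)            # dropout outside the GATConv call
        x = relu.(l.conv1(g, x))  # attention with 8 heads, plus dropout on attention coefficients
        x = l.drop2(x)
        x = l.conv2(g, x)
        return x
    end
end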

\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/#Conclusion","page":"Node Classification with Graph Neural Networks","title":"Conclusion","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"
\n

In this tutorial, we have seen how to apply GNNs to real-world problems, and, in particular, how they can effectively be used for boosting a model's performance. In the next tutorial, we will look into how GNNs can be used for the task of graph classification.

\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"EditURL = \"https://github.com/CarloLucibello/GraphNeuralNetworks.jl/blob/master/docs/src/tutorials/introductory_tutorials/node_classification_pluto.jl\"","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"","category":"page"},{"location":"tutorials/introductory_tutorials/node_classification_pluto/","page":"Node Classification with Graph Neural Networks","title":"Node Classification with Graph Neural Networks","text":"This page was generated using DemoCards.jl. and PlutoStaticHTML.jl","category":"page"},{"location":"temporalgraph/#Temporal-Graphs","page":"Temporal Graphs","title":"Temporal Graphs","text":"","category":"section"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"Temporal Graphs are graphs with time varying topologies and node features. In GraphNeuralNetworks.jl temporal graphs with fixed number of nodes over time are supported by the TemporalSnapshotsGNNGraph type.","category":"page"},{"location":"temporalgraph/#Creating-a-TemporalSnapshotsGNNGraph","page":"Temporal Graphs","title":"Creating a TemporalSnapshotsGNNGraph","text":"","category":"section"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"A temporal graph can be created by passing a list of snapshots to the constructor. Each snapshot is a GNNGraph. ","category":"page"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"julia> snapshots = [rand_graph(10,20) for i in 1:5];\n\njulia> tg = TemporalSnapshotsGNNGraph(snapshots)\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10, 10, 10]\n num_edges: [20, 20, 20, 20, 20]\n num_snapshots: 5","category":"page"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"A new temporal graph can be created by adding or removing snapshots to an existing temporal graph. ","category":"page"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"julia> new_tg = add_snapshot(tg, 3, rand_graph(10, 16)) # add a new snapshot at time 3\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10, 10, 10, 10]\n num_edges: [20, 20, 16, 20, 20, 20]\n num_snapshots: 6","category":"page"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"julia> snapshots = [rand_graph(10,20), rand_graph(10,14), rand_graph(10,22)];\n\njulia> tg = TemporalSnapshotsGNNGraph(snapshots)\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10]\n num_edges: [20, 14, 22]\n num_snapshots: 3\n\njulia> new_tg = remove_snapshot(tg, 2) # remove snapshot at time 2\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10]\n num_edges: [20, 22]\n num_snapshots: 2","category":"page"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"See rand_temporal_radius_graph and rand_temporal_hyperbolic_graph for generating random temporal graphs. 
","category":"page"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"julia> tg = rand_temporal_radius_graph(10, 3, 0.1, 0.5)\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10]\n num_edges: [32, 30, 34]\n num_snapshots: 3","category":"page"},{"location":"temporalgraph/#Basic-Queries","page":"Temporal Graphs","title":"Basic Queries","text":"","category":"section"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"Basic queries are similar to those for GNNGraphs:","category":"page"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"julia> snapshots = [rand_graph(10,20), rand_graph(10,14), rand_graph(10,22)];\n\njulia> tg = TemporalSnapshotsGNNGraph(snapshots)\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10]\n num_edges: [20, 14, 22]\n num_snapshots: 3\n\njulia> tg.num_nodes # number of nodes in each snapshot\n3-element Vector{Int64}:\n 10\n 10\n 10\n\njulia> tg.num_edges # number of edges in each snapshot\n3-element Vector{Int64}:\n 20\n 14\n 22\n\njulia> tg.num_snapshots # number of snapshots\n3\n\njulia> tg.snapshots # list of snapshots\n3-element Vector{GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}}:\n GNNGraph(10, 20) with no data\n GNNGraph(10, 14) with no data\n GNNGraph(10, 22) with no data\n\njulia> tg.snapshots[1] # first snapshot, same as tg[1]\nGNNGraph:\n num_nodes: 10\n num_edges: 20","category":"page"},{"location":"temporalgraph/#Data-Features","page":"Temporal Graphs","title":"Data Features","text":"","category":"section"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"Node, edge, and graph features can be added at construction time or later using:","category":"page"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"julia> snapshots = [rand_graph(10,20; ndata = rand(3,10)), rand_graph(10,14; ndata = rand(4,10)), rand_graph(10,22; ndata = rand(5,10))]; # node features at construction time\n\njulia> tg = TemporalSnapshotsGNNGraph(snapshots);\n\njulia> tg.tgdata.y = rand(3,1); # graph features after construction\n\njulia> tg\nTemporalSnapshotsGNNGraph:\n num_nodes: [10, 10, 10]\n num_edges: [20, 14, 22]\n num_snapshots: 3\n tgdata:\n y = 3×1 Matrix{Float64}\n\njulia> tg.ndata # vector of Datastore for node features\n3-element Vector{DataStore}:\n DataStore(10) with 1 element:\n x = 3×10 Matrix{Float64}\n DataStore(10) with 1 element:\n x = 4×10 Matrix{Float64}\n DataStore(10) with 1 element:\n x = 5×10 Matrix{Float64}\n\njulia> typeof(tg.ndata.x) # vector containing the x feature of each snapshot\nVector{Matrix{Float64}}","category":"page"},{"location":"temporalgraph/#Graph-convolutions-on-TemporalSnapshotsGNNGraph","page":"Temporal Graphs","title":"Graph convolutions on TemporalSnapshotsGNNGraph","text":"","category":"section"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"A graph convolutional layer can be applied to each snapshot independently, in the next example we apply a GINConv layer to each snapshot of a TemporalSnapshotsGNNGraph. The list of compatible graph convolution layers can be found here. 
","category":"page"},{"location":"temporalgraph/","page":"Temporal Graphs","title":"Temporal Graphs","text":"julia> using GraphNeuralNetworks, Flux\n\njulia> snapshots = [rand_graph(10, 20; ndata = rand(3, 10)), rand_graph(10, 14; ndata = rand(3, 10))];\n\njulia> tg = TemporalSnapshotsGNNGraph(snapshots);\n\njulia> m = GINConv(Dense(3 => 1), 0.4);\n\njulia> output = m(tg, tg.ndata.x);\n\njulia> size(output[1])\n(1, 10)","category":"page"},{"location":"api/conv/","page":"Convolutional Layers","title":"Convolutional Layers","text":"CurrentModule = GraphNeuralNetworks","category":"page"},{"location":"api/conv/#Convolutional-Layers","page":"Convolutional Layers","title":"Convolutional Layers","text":"","category":"section"},{"location":"api/conv/","page":"Convolutional Layers","title":"Convolutional Layers","text":"Many different types of graphs convolutional layers have been proposed in the literature. Choosing the right layer for your application could involve a lot of exploration. Some of the most commonly used layers are the GCNConv and the GATv2Conv. Multiple graph convolutional layers are typically stacked together to create a graph neural network model (see GNNChain).","category":"page"},{"location":"api/conv/","page":"Convolutional Layers","title":"Convolutional Layers","text":"The table below lists all graph convolutional layers implemented in the GraphNeuralNetworks.jl. It also highlights the presence of some additional capabilities with respect to basic message passing:","category":"page"},{"location":"api/conv/","page":"Convolutional Layers","title":"Convolutional Layers","text":"Sparse Ops: implements message passing as multiplication by sparse adjacency matrix instead of the gather/scatter mechanism. This can lead to better CPU performances but it is not supported on GPU yet. \nEdge Weight: supports scalar weights (or equivalently scalar features) on edges. 
\nEdge Features: supports feature vectors on edges.\nHeterograph: supports heterogeneous graphs (see GNNHeteroGraph).\nTemporalSnapshotsGNNGraphs: supports temporal graphs (see TemporalSnapshotsGNNGraph) by applying the convolution layers to each snapshot independently.","category":"page"},{"location":"api/conv/","page":"Convolutional Layers","title":"Convolutional Layers","text":"Layer Sparse Ops Edge Weight Edge Features Heterograph TemporalSnapshotsGNNGraphs\nAGNNConv ✓ \nCGConv ✓ ✓ ✓\nChebConv ✓\nEGNNConv ✓ \nEdgeConv ✓ \nGATConv ✓ ✓ ✓\nGATv2Conv ✓ ✓ ✓\nGatedGraphConv ✓ ✓\nGCNConv ✓ ✓ ✓ \nGINConv ✓ ✓ ✓\nGMMConv ✓ \nGraphConv ✓ ✓ ✓\nMEGNetConv ✓ \nNNConv ✓ \nResGatedGraphConv ✓ ✓\nSAGEConv ✓ ✓ ✓\nSGConv ✓ ✓\nTransformerConv ✓ ","category":"page"},{"location":"api/conv/#Docs","page":"Convolutional Layers","title":"Docs","text":"","category":"section"},{"location":"api/conv/","page":"Convolutional Layers","title":"Convolutional Layers","text":"Modules = [GraphNeuralNetworks]\nPages = [\"layers/conv.jl\"]\nPrivate = false","category":"page"},{"location":"api/conv/#GraphNeuralNetworks.AGNNConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.AGNNConv","text":"AGNNConv(; init_beta=1.0f0, trainable=true, add_self_loops=true)\n\nAttention-based Graph Neural Network layer from paper Attention-based Graph Neural Network for Semi-Supervised Learning.\n\nThe forward pass is given by\n\nmathbfx_i = sum_j in N(i) alpha_ij mathbfx_j\n\nwhere the attention coefficients alpha_ij are given by\n\nalpha_ij =frace^beta cos(mathbfx_i mathbfx_j)\n sum_je^beta cos(mathbfx_i mathbfx_j)\n\nwith the cosine distance defined by\n\ncos(mathbfx_i mathbfx_j) = \n fracmathbfx_i cdot mathbfx_jlVertmathbfx_irVert lVertmathbfx_jrVert\n\nand beta a trainable parameter if trainable=true.\n\nArguments\n\ninit_beta: The initial value of beta. Default 1.0f0.\ntrainable: If true, beta is trainable. Default true.\nadd_self_loops: Add self loops to the graph before performing the convolution. Default true.\n\nExamples:\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\ng = GNNGraph(s, t)\n\n# create layer\nl = AGNNConv(init_beta=2.0f0)\n\n# forward pass\ny = l(g, x) \n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.CGConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.CGConv","text":"CGConv((in, ein) => out, act=identity; bias=true, init=glorot_uniform, residual=false)\nCGConv(in => out, ...)\n\nThe crystal graph convolutional layer from the paper Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties. Performs the operation\n\nmathbfx_i = mathbfx_i + sum_jin N(i)sigma(W_f mathbfz_ij + mathbfb_f) act(W_s mathbfz_ij + mathbfb_s)\n\nwhere mathbfz_ij is the node and edge features concatenation mathbfx_i mathbfx_j mathbfe_jto i and sigma is the sigmoid function. The residual mathbfx_i is added only if residual=true and the output size is the same as the input size.\n\nArguments\n\nin: The dimension of input node features.\nein: The dimension of input edge features. 
\n\nIf ein is not given, assumes that no edge features are passed as input in the forward pass.\n\nout: The dimension of output node features.\nact: Activation function.\nbias: Add learnable bias.\ninit: Weights' initializer.\nresidual: Add a residual connection.\n\nExamples\n\ng = rand_graph(5, 6)\nx = rand(Float32, 2, g.num_nodes)\ne = rand(Float32, 3, g.num_edges)\n\nl = CGConv((2, 3) => 4, tanh)\ny = l(g, x, e) # size: (4, num_nodes)\n\n# No edge features\nl = CGConv(2 => 4, tanh)\ny = l(g, x) # size: (4, num_nodes)\n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.ChebConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.ChebConv","text":"ChebConv(in => out, k; bias=true, init=glorot_uniform)\n\nChebyshev spectral graph convolutional layer from paper Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering.\n\nImplements\n\nX = sum^K-1_k=0 W^(k) Z^(k)\n\nwhere Z^(k) is the k-th term of Chebyshev polynomials, and can be calculated by the following recursive form:\n\nbeginaligned\nZ^(0) = X \nZ^(1) = hatL X \nZ^(k) = 2 hatL Z^(k-1) - Z^(k-2)\nendaligned\n\nwith hatL the scaled_laplacian.\n\nArguments\n\nin: The dimension of input features.\nout: The dimension of output features.\nk: The order of Chebyshev polynomial.\nbias: Add learnable bias.\ninit: Weights' initializer.\n\nExamples\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\ng = GNNGraph(s, t)\nx = randn(Float32, 3, g.num_nodes)\n\n# create layer\nl = ChebConv(3 => 5, 5) \n\n# forward pass\ny = l(g, x) # size: 5 × num_nodes\n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.DConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.DConv","text":"DConv(ch::Pair{Int, Int}, k::Int; init = glorot_uniform, bias = true)\n\nDiffusion convolution layer from the paper Diffusion Convolutional Recurrent Neural Networks: Data-Driven Traffic Forecasting.\n\nArguments\n\nch: Pair of input and output dimensions.\nk: Number of diffusion steps.\ninit: Weights' initializer. Default glorot_uniform.\nbias: Add learnable bias. Default true.\n\nExamples\n\njulia> g = GNNGraph(rand(10, 10), ndata = rand(Float32, 2, 10));\n\njulia> dconv = DConv(2 => 4, 4)\nDConv(2 => 4, 4)\n\njulia> y = dconv(g, g.ndata.x);\n\njulia> size(y)\n(4, 10)\n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.EGNNConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.EGNNConv","text":"EGNNConv((in, ein) => out; hidden_size=2in, residual=false)\nEGNNConv(in => out; hidden_size=2in, residual=false)\n\nEquivariant Graph Convolutional Layer from E(n) Equivariant Graph Neural Networks.\n\nThe layer performs the following operation:\n\nbeginaligned\nmathbfm_jto i =phi_e(mathbfh_i mathbfh_j lVertmathbfx_i-mathbfx_jrVert^2 mathbfe_jto i)\nmathbfx_i = mathbfx_i + C_isum_jinmathcalN(i)(mathbfx_i-mathbfx_j)phi_x(mathbfm_jto i)\nmathbfm_i = C_isum_jinmathcalN(i) mathbfm_jto i\nmathbfh_i = mathbfh_i + phi_h(mathbfh_i mathbfm_i)\nendaligned\n\nwhere mathbfh_i, mathbfx_i, mathbfe_jto i are invariant node features, equivariant node features, and edge features respectively. phi_e, phi_h, and phi_x are two-layer MLPs. C is a constant for normalization, computed as 1mathcalN(i).\n\nConstructor Arguments\n\nin: Number of input features for h.\nout: Number of output features for h.\nein: Number of input edge features.\nhidden_size: Hidden representation size.\nresidual: If true, add a residual connection. Only possible if in == out. 
Default false.\n\nForward Pass\n\nl(g, x, h, e=nothing)\n\nForward Pass Arguments:\n\ng : The graph.\nx : Matrix of equivariant node coordinates.\nh : Matrix of invariant node features.\ne : Matrix of invariant edge features. Default nothing.\n\nReturns updated h and x.\n\nExamples\n\ng = rand_graph(10, 10)\nh = randn(Float32, 5, g.num_nodes)\nx = randn(Float32, 3, g.num_nodes)\negnn = EGNNConv(5 => 6, 10)\nhnew, xnew = egnn(g, h, x)\n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.EdgeConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.EdgeConv","text":"EdgeConv(nn; aggr=max)\n\nEdge convolutional layer from paper Dynamic Graph CNN for Learning on Point Clouds.\n\nPerforms the operation\n\nmathbfx_i = square_j in N(i) nn(mathbfx_i mathbfx_j - mathbfx_i)\n\nwhere nn generally denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.\n\nArguments\n\nnn: A (possibly learnable) function. \naggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).\n\nExamples:\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\nin_channel = 3\nout_channel = 5\ng = GNNGraph(s, t)\n\n# create layer\nl = EdgeConv(Dense(2 * in_channel, out_channel), aggr = +)\n\n# forward pass\ny = l(g, x)\n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.GATConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.GATConv","text":"GATConv(in => out, [σ; heads, concat, init, bias, negative_slope, add_self_loops])\nGATConv((in, ein) => out, ...)\n\nGraph attentional layer from the paper Graph Attention Networks.\n\nImplements the operation\n\nmathbfx_i = sum_j in N(i) cup i alpha_ij W mathbfx_j\n\nwhere the attention coefficients alpha_ij are given by\n\nalpha_ij = frac1z_i exp(LeakyReLU(mathbfa^T W mathbfx_i W mathbfx_j))\n\nwith z_i a normalization factor. \n\nIn case ein > 0 is given, edge features of dimension ein will be expected in the forward pass and the attention coefficients will be calculated as \n\nalpha_ij = frac1z_i exp(LeakyReLU(mathbfa^T W_e mathbfe_jto i W mathbfx_i W mathbfx_j))\n\nArguments\n\nin: The dimension of input node features.\nein: The dimension of input edge features. Default 0 (i.e. no edge features passed in the forward).\nout: The dimension of output node features.\nσ: Activation function. Default identity.\nbias: Learn the additive bias if true. Default true.\nheads: Number attention heads. Default 1.\nconcat: Concatenate layer output or not. If not, layer output is averaged over the heads. Default true.\nnegative_slope: The parameter of LeakyReLU.Default 0.2.\nadd_self_loops: Add self loops to the graph before performing the convolution. Default true.\ndropout: Dropout probability on the normalized attention coefficient. 
Default 0.0.\n\nExamples\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\nin_channel = 3\nout_channel = 5\ng = GNNGraph(s, t)\nx = randn(Float32, 3, g.num_nodes)\n\n# create layer\nl = GATConv(in_channel => out_channel, add_self_loops = false, bias = false; heads=2, concat=true)\n\n# forward pass\ny = l(g, x) \n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.GATv2Conv","page":"Convolutional Layers","title":"GraphNeuralNetworks.GATv2Conv","text":"GATv2Conv(in => out, [σ; heads, concat, init, bias, negative_slope, add_self_loops])\nGATv2Conv((in, ein) => out, ...)\n\nGATv2 attentional layer from the paper How Attentive are Graph Attention Networks?.\n\nImplements the operation\n\nmathbfx_i = sum_j in N(i) cup i alpha_ij W_1 mathbfx_j\n\nwhere the attention coefficients alpha_ij are given by\n\nalpha_ij = frac1z_i exp(mathbfa^T LeakyReLU(W_2 mathbfx_i + W_1 mathbfx_j))\n\nwith z_i a normalization factor.\n\nIn case ein > 0 is given, edge features of dimension ein will be expected in the forward pass and the attention coefficients will be calculated as \n\nalpha_ij = frac1z_i exp(mathbfa^T LeakyReLU(W_3 mathbfe_jto i + W_2 mathbfx_i + W_1 mathbfx_j))\n\nArguments\n\nin: The dimension of input node features.\nein: The dimension of input edge features. Default 0 (i.e. no edge features passed in the forward).\nout: The dimension of output node features.\nσ: Activation function. Default identity.\nbias: Learn the additive bias if true. Default true.\nheads: Number attention heads. Default 1.\nconcat: Concatenate layer output or not. If not, layer output is averaged over the heads. Default true.\nnegative_slope: The parameter of LeakyReLU.Default 0.2.\nadd_self_loops: Add self loops to the graph before performing the convolution. Default true.\ndropout: Dropout probability on the normalized attention coefficient. Default 0.0.\n\nExamples\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\nin_channel = 3\nout_channel = 5\nein = 3\ng = GNNGraph(s, t)\nx = randn(Float32, 3, g.num_nodes)\n\n# create layer\nl = GATv2Conv((in_channel, ein) => out_channel, add_self_loops = false)\n\n# edge features\ne = randn(Float32, ein, length(s))\n\n# forward pass\ny = l(g, x, e) \n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.GCNConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.GCNConv","text":"GCNConv(in => out, σ=identity; [bias, init, add_self_loops, use_edge_weight])\n\nGraph convolutional layer from paper Semi-supervised Classification with Graph Convolutional Networks.\n\nPerforms the operation\n\nmathbfx_i = sum_jin N(i) a_ij W mathbfx_j\n\nwhere a_ij = 1 sqrtN(i)N(j) is a normalization factor computed from the node degrees. \n\nIf the input graph has weighted edges and use_edge_weight=true, than a_ij will be computed as\n\na_ij = frace_jto isqrtsum_j in N(i) e_jto i sqrtsum_i in N(j) e_ito j\n\nThe input to the layer is a node feature array X of size (num_features, num_nodes) and optionally an edge weight vector.\n\nArguments\n\nin: Number of input features.\nout: Number of output features.\nσ: Activation function. Default identity.\nbias: Add learnable bias. Default true.\ninit: Weights' initializer. Default glorot_uniform.\nadd_self_loops: Add self loops to the graph before performing the convolution. Default false.\nuse_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. This option is ignored if the edge_weight is explicitly provided in the forward pass. 
Default false.\n\nForward\n\n(::GCNConv)(g::GNNGraph, x, edge_weight = nothing; norm_fn = d -> 1 ./ sqrt.(d), conv_weight = nothing) -> AbstractMatrix\n\nTakes as input a graph g, a node feature matrix x of size [in, num_nodes], and optionally an edge weight vector. Returns a node feature matrix of size [out, num_nodes].\n\nThe norm_fn parameter allows for custom normalization of the graph convolution operation by passing a function as argument. By default, it computes frac1sqrtd i.e the inverse square root of the degree (d) of each node in the graph. If conv_weight is an AbstractMatrix of size [out, in], then the convolution is performed using that weight matrix instead of the weights stored in the model.\n\nExamples\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\ng = GNNGraph(s, t)\nx = randn(Float32, 3, g.num_nodes)\n\n# create layer\nl = GCNConv(3 => 5) \n\n# forward pass\ny = l(g, x) # size: 5 × num_nodes\n\n# convolution with edge weights and custom normalization function\nw = [1.1, 0.1, 2.3, 0.5]\ncustom_norm_fn(d) = 1 ./ sqrt.(d + 1) # Custom normalization function\ny = l(g, x, w; norm_fn = custom_norm_fn)\n\n# Edge weights can also be embedded in the graph.\ng = GNNGraph(s, t, w)\nl = GCNConv(3 => 5, use_edge_weight=true) \ny = l(g, x) # same as l(g, x, w) \n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.GINConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.GINConv","text":"GINConv(f, ϵ; aggr=+)\n\nGraph Isomorphism convolutional layer from paper How Powerful are Graph Neural Networks?.\n\nImplements the graph convolution\n\nmathbfx_i = f_Thetaleft((1 + epsilon) mathbfx_i + sum_j in N(i) mathbfx_j right)\n\nwhere f_Theta typically denotes a learnable function, e.g. a linear layer or a multi-layer perceptron.\n\nArguments\n\nf: A (possibly learnable) function acting on node features. \nϵ: Weighting factor.\n\nExamples:\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\nin_channel = 3\nout_channel = 5\ng = GNNGraph(s, t)\n\n# create dense layer\nnn = Dense(in_channel, out_channel)\n\n# create layer\nl = GINConv(nn, 0.01f0, aggr = mean)\n\n# forward pass\ny = l(g, x) \n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.GMMConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.GMMConv","text":"GMMConv((in, ein) => out, σ=identity; K=1, bias=true, init=glorot_uniform, residual=false)\n\nGraph mixture model convolution layer from the paper Geometric deep learning on graphs and manifolds using mixture model CNNs Performs the operation\n\nmathbfx_i = mathbfx_i + frac1N(i) sum_jin N(i)frac1Ksum_k=1^K mathbfw_k(mathbfe_jto i) odot Theta_k mathbfx_j\n\nwhere w^a_k(e^a) for feature a and kernel k is given by\n\nw^a_k(e^a) = exp(-frac12(e^a - mu^a_k)^T (Sigma^-1)^a_k(e^a - mu^a_k))\n\nTheta_k mu^a_k (Sigma^-1)^a_k are learnable parameters.\n\nThe input to the layer is a node feature array x of size (num_features, num_nodes) and edge pseudo-coordinate array e of size (num_features, num_edges) The residual mathbfx_i is added only if residual=true and the output size is the same as the input size.\n\nArguments\n\nin: Number of input node features.\nein: Number of input edge features.\nout: Number of output features.\nσ: Activation function. Default identity.\nK: Number of kernels. Default 1.\nbias: Add learnable bias. Default true.\ninit: Weights' initializer. Default glorot_uniform.\nresidual: Residual conncetion. 
Default false.\n\nExamples\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\ng = GNNGraph(s,t)\nnin, ein, out, K = 4, 10, 7, 8 \nx = randn(Float32, nin, g.num_nodes)\ne = randn(Float32, ein, g.num_edges)\n\n# create layer\nl = GMMConv((nin, ein) => out, K=K)\n\n# forward pass\nl(g, x, e)\n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.GatedGraphConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.GatedGraphConv","text":"GatedGraphConv(out, num_layers; aggr=+, init=glorot_uniform)\n\nGated graph convolution layer from Gated Graph Sequence Neural Networks.\n\nImplements the recursion\n\nbeginaligned\nmathbfh^(0)_i = mathbfx_i mathbf0 \nmathbfh^(l)_i = GRU(mathbfh^(l-1)_i square_j in N(i) W mathbfh^(l-1)_j)\nendaligned\n\nwhere mathbfh^(l)_i denotes the l-th hidden variables passing through GRU. The dimension of input mathbfx_i needs to be less or equal to out.\n\nArguments\n\nout: The dimension of output features.\nnum_layers: The number of recursion steps.\naggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).\ninit: Weight initialization function.\n\nExamples:\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\nout_channel = 5\nnum_layers = 3\ng = GNNGraph(s, t)\n\n# create layer\nl = GatedGraphConv(out_channel, num_layers)\n\n# forward pass\ny = l(g, x) \n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.GraphConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.GraphConv","text":"GraphConv(in => out, σ=identity; aggr=+, bias=true, init=glorot_uniform)\n\nGraph convolution layer from Reference: Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks.\n\nPerforms:\n\nmathbfx_i = W_1 mathbfx_i + square_j in mathcalN(i) W_2 mathbfx_j\n\nwhere the aggregation type is selected by aggr.\n\nArguments\n\nin: The dimension of input features.\nout: The dimension of output features.\nσ: Activation function.\naggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).\nbias: Add learnable bias.\ninit: Weights' initializer.\n\nExamples\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\nin_channel = 3\nout_channel = 5\ng = GNNGraph(s, t)\nx = randn(Float32, 3, g.num_nodes)\n\n# create layer\nl = GraphConv(in_channel => out_channel, relu, bias = false, aggr = mean)\n\n# forward pass\ny = l(g, x) \n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.MEGNetConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.MEGNetConv","text":"MEGNetConv(ϕe, ϕv; aggr=mean)\nMEGNetConv(in => out; aggr=mean)\n\nConvolution from Graph Networks as a Universal Machine Learning Framework for Molecules and Crystals paper. 
In the forward pass, takes as inputs node features x and edge features e and returns updated features x' and e' according to \n\nbeginaligned\nmathbfe_ito j = phi_e(mathbfx_i mathbfx_j mathbfe_ito j)\nmathbfx_i = phi_v(mathbfx_i square_jin mathcalN(i)mathbfe_jto i)\nendaligned\n\naggr defines the aggregation to be performed.\n\nIf the neural networks ϕe and ϕv are not provided, they will be constructed from the in and out arguments instead as multi-layer perceptron with one hidden layer and relu activations.\n\nExamples\n\ng = rand_graph(10, 30)\nx = randn(Float32, 3, 10)\ne = randn(Float32, 3, 30)\nm = MEGNetConv(3 => 3)\nx′, e′ = m(g, x, e)\n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.NNConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.NNConv","text":"NNConv(in => out, f, σ=identity; aggr=+, bias=true, init=glorot_uniform)\n\nThe continuous kernel-based convolutional operator from the Neural Message Passing for Quantum Chemistry paper. This convolution is also known as the edge-conditioned convolution from the Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs paper.\n\nPerforms the operation\n\nmathbfx_i = W mathbfx_i + square_j in N(i) f_Theta(mathbfe_jto i)mathbfx_j\n\nwhere f_Theta denotes a learnable function (e.g. a linear layer or a multi-layer perceptron). Given an input of batched edge features e of size (num_edge_features, num_edges), the function f will return an batched matrices array whose size is (out, in, num_edges). For convenience, also functions returning a single (out*in, num_edges) matrix are allowed.\n\nArguments\n\nin: The dimension of input node features.\nout: The dimension of output node features.\nf: A (possibly learnable) function acting on edge features.\naggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).\nσ: Activation function.\nbias: Add learnable bias.\ninit: Weights' initializer.\n\nExamples:\n\nn_in = 3\nn_in_edge = 10\nn_out = 5\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\ng = GNNGraph(s, t)\n\n# create dense layer\nnn = Dense(n_in_edge => n_out * n_in)\n\n# create layer\nl = NNConv(n_in => n_out, nn, tanh, bias = true, aggr = +)\n\nx = randn(Float32, n_in, g.num_nodes)\ne = randn(Float32, n_in_edge, g.num_edges)\n\n# forward pass\ny = l(g, x, e) \n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.ResGatedGraphConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.ResGatedGraphConv","text":"ResGatedGraphConv(in => out, act=identity; init=glorot_uniform, bias=true)\n\nThe residual gated graph convolutional operator from the Residual Gated Graph ConvNets paper.\n\nThe layer's forward pass is given by\n\nmathbfx_i = actbig(Umathbfx_i + sum_j in N(i) eta_ij V mathbfx_jbig)\n\nwhere the edge gates eta_ij are given by\n\neta_ij = sigmoid(Amathbfx_i + Bmathbfx_j)\n\nArguments\n\nin: The dimension of input features.\nout: The dimension of output features.\nact: Activation function.\ninit: Weight matrices' initializing function. 
\nbias: Learn an additive bias if true.\n\nExamples:\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\nin_channel = 3\nout_channel = 5\ng = GNNGraph(s, t)\n\n# create layer\nl = ResGatedGraphConv(in_channel => out_channel, tanh, bias = true)\n\n# forward pass\ny = l(g, x) \n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.SAGEConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.SAGEConv","text":"SAGEConv(in => out, σ=identity; aggr=mean, bias=true, init=glorot_uniform)\n\nGraphSAGE convolution layer from paper Inductive Representation Learning on Large Graphs.\n\nPerforms:\n\nmathbfx_i = W cdot mathbfx_i square_j in mathcalN(i) mathbfx_j\n\nwhere the aggregation type is selected by aggr.\n\nArguments\n\nin: The dimension of input features.\nout: The dimension of output features.\nσ: Activation function.\naggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).\nbias: Add learnable bias.\ninit: Weights' initializer.\n\nExamples:\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\nin_channel = 3\nout_channel = 5\ng = GNNGraph(s, t)\n\n# create layer\nl = SAGEConv(in_channel => out_channel, tanh, bias = false, aggr = +)\n\n# forward pass\ny = l(g, x) \n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.SGConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.SGConv","text":"SGConv(int => out, k=1; [bias, init, add_self_loops, use_edge_weight])\n\nSGC layer from Simplifying Graph Convolutional Networks Performs operation\n\nH^K = (tildeD^-12 tildeA tildeD^-12)^K X Theta\n\nwhere tildeA is A + I.\n\nArguments\n\nin: Number of input features.\nout: Number of output features.\nk : Number of hops k. Default 1.\nbias: Add learnable bias. Default true.\ninit: Weights' initializer. Default glorot_uniform.\nadd_self_loops: Add self loops to the graph before performing the convolution. Default false.\nuse_edge_weight: If true, consider the edge weights in the input graph (if available). If add_self_loops=true the new weights will be set to 1. Default false.\n\nExamples\n\n# create data\ns = [1,1,2,3]\nt = [2,3,1,1]\ng = GNNGraph(s, t)\nx = randn(Float32, 3, g.num_nodes)\n\n# create layer\nl = SGConv(3 => 5; add_self_loops = true) \n\n# forward pass\ny = l(g, x) # size: 5 × num_nodes\n\n# convolution with edge weights\nw = [1.1, 0.1, 2.3, 0.5]\ny = l(g, x, w)\n\n# Edge weights can also be embedded in the graph.\ng = GNNGraph(s, t, w)\nl = SGConv(3 => 5, add_self_loops = true, use_edge_weight=true) \ny = l(g, x) # same as l(g, x, w) \n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.TAGConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.TAGConv","text":"TAGConv(in => out, k=3; bias=true, init=glorot_uniform, add_self_loops=true, use_edge_weight=false)\n\nTAGConv layer from Topology Adaptive Graph Convolutional Networks. This layer extends the idea of graph convolutions by applying filters that adapt to the topology of the data. It performs the operation:\n\nH^K = sum_k=0^K (D^-12 A D^-12)^k X Theta_k\n\nwhere A is the adjacency matrix of the graph, D is the degree matrix, X is the input feature matrix, and Theta_k is a unique weight matrix for each hop k.\n\nArguments\n\nin: Number of input features.\nout: Number of output features.\nk: Maximum number of hops to consider. Default is 3.\nbias: Whether to include a learnable bias term. Default is true.\ninit: Initialization function for the weights. 
Default is glorot_uniform.\nadd_self_loops: Whether to add self-loops to the adjacency matrix. Default is true.\nuse_edge_weight: If true, edge weights are considered in the computation (if available). Default is false.\n\nExamples\n\n# Example graph data\ns = [1, 1, 2, 3]\nt = [2, 3, 1, 1]\ng = GNNGraph(s, t) # Create a graph\nx = randn(Float32, 3, g.num_nodes) # Random features for each node\n\n# Create a TAGConv layer\nl = TAGConv(3 => 5, k=3; add_self_loops=true)\n\n# Apply the TAGConv layer\ny = l(g, x) # Output size: 5 × num_nodes\n\n\n\n\n\n","category":"type"},{"location":"api/conv/#GraphNeuralNetworks.TransformerConv","page":"Convolutional Layers","title":"GraphNeuralNetworks.TransformerConv","text":"TransformerConv((in, ein) => out; [heads, concat, init, add_self_loops, bias_qkv,\n bias_root, root_weight, gating, skip_connection, batch_norm, ff_channels]))\n\nThe transformer-like multi head attention convolutional operator from the Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification paper, which also considers edge features. It further contains options to also be configured as the transformer-like convolutional operator from the Attention, Learn to Solve Routing Problems! paper, including a successive feed-forward network as well as skip layers and batch normalization.\n\nThe layer's basic forward pass is given by\n\nx_i = W_1x_i + sum_jin N(i) alpha_ij (W_2 x_j + W_6e_ij)\n\nwhere the attention scores are\n\nalpha_ij = mathrmsoftmaxleft(frac(W_3x_i)^T(W_4x_j+\nW_6e_ij)sqrtdright)\n\nOptionally, a combination of the aggregated value with transformed root node features by a gating mechanism via\n\nx_i = beta_i W_1 x_i + (1 - beta_i) underbraceleft(sum_j in mathcalN(i)\nalpha_ij W_2 x_j right)_=m_i\n\nwith\n\nbeta_i = textrmsigmoid(W_5^top W_1 x_i m_i W_1 x_i - m_i )\n\ncan be performed.\n\nArguments\n\nin: Dimension of input features, which also corresponds to the dimension of the output features.\nein: Dimension of the edge features; if 0, no edge features will be used.\nout: Dimension of the output.\nheads: Number of heads in output. Default 1.\nconcat: Concatenate layer output or not. If not, layer output is averaged over the heads. Default true.\ninit: Weight matrices' initializing function. Default glorot_uniform.\nadd_self_loops: Add self loops to the input graph. Default false.\nbias_qkv: If set, bias is used in the key, query and value transformations for nodes. Default true.\nbias_root: If set, the layer will also learn an additive bias for the root when root weight is used. Default true.\nroot_weight: If set, the layer will add the transformed root node features to the output. Default true.\ngating: If set, will combine aggregation and transformed root node features by a gating mechanism. Default false.\nskip_connection: If set, a skip connection will be made from the input and added to the output. Default false.\nbatch_norm: If set, a batch normalization will be applied to the output. Default false.\nff_channels: If positive, a feed-forward NN is appended, with the first having the given number of hidden nodes; this NN also gets a skip connection and batch normalization if the respective parameters are set. 
Default: 0.\n\nExamples\n\nN, in_channel, out_channel = 4, 3, 5\nein, heads = 2, 3\ng = GNNGraph([1,1,2,4], [2,3,1,1])\nl = TransformerConv((in_channel, ein) => in_channel; heads, gating = true, bias_qkv = true)\nx = rand(Float32, in_channel, N)\ne = rand(Float32, ein, g.num_edges)\nl(g, x, e)\n\n\n\n\n\n","category":"type"},{"location":"api/messagepassing/","page":"Message Passing","title":"Message Passing","text":"CurrentModule = GraphNeuralNetworks","category":"page"},{"location":"api/messagepassing/#Message-Passing","page":"Message Passing","title":"Message Passing","text":"","category":"section"},{"location":"api/messagepassing/#Index","page":"Message Passing","title":"Index","text":"","category":"section"},{"location":"api/messagepassing/","page":"Message Passing","title":"Message Passing","text":"Order = [:type, :function]\nPages = [\"messagepassing.md\"]","category":"page"},{"location":"api/messagepassing/#Interface","page":"Message Passing","title":"Interface","text":"","category":"section"},{"location":"api/messagepassing/","page":"Message Passing","title":"Message Passing","text":"GNNlib.apply_edges\nGNNlib.aggregate_neighbors\nGNNlib.propagate","category":"page"},{"location":"api/messagepassing/#GNNlib.apply_edges","page":"Message Passing","title":"GNNlib.apply_edges","text":"apply_edges(fmsg, g; [xi, xj, e])\napply_edges(fmsg, g, xi, xj, e=nothing)\n\nReturns the message from node j to node i applying the message function fmsg on the edges in graph g. In the message-passing scheme, the incoming messages from the neighborhood of i will later be aggregated in order to update the features of node i (see aggregate_neighbors).\n\nThe function fmsg operates on batches of edges, therefore xi, xj, and e are tensors whose last dimension is the batch size, or can be named tuples of such tensors.\n\nArguments\n\ng: An AbstractGNNGraph.\nxi: An array or a named tuple containing arrays whose last dimension's size is g.num_nodes. It will be appropriately materialized on the target node of each edge (see also edge_index).\nxj: As xi, but now to be materialized on each edge's source node. \ne: An array or a named tuple containing arrays whose last dimension's size is g.num_edges.\nfmsg: A function that takes as inputs the edge-materialized xi, xj, and e. These are arrays (or named tuples of arrays) whose last dimension' size is the size of a batch of edges. The output of f has to be an array (or a named tuple of arrays) with the same batch size. If also layer is passed to propagate, the signature of fmsg has to be fmsg(layer, xi, xj, e) instead of fmsg(xi, xj, e).\n\nSee also propagate and aggregate_neighbors.\n\n\n\n\n\n","category":"function"},{"location":"api/messagepassing/#GNNlib.aggregate_neighbors","page":"Message Passing","title":"GNNlib.aggregate_neighbors","text":"aggregate_neighbors(g, aggr, m)\n\nGiven a graph g, edge features m, and an aggregation operator aggr (e.g +, min, max, mean), returns the new node features \n\nmathbfx_i = square_j in mathcalN(i) mathbfm_jto i\n\nNeighborhood aggregation is the second step of propagate, where it comes after apply_edges.\n\n\n\n\n\n","category":"function"},{"location":"api/messagepassing/#GNNlib.propagate","page":"Message Passing","title":"GNNlib.propagate","text":"propagate(fmsg, g, aggr; [xi, xj, e])\npropagate(fmsg, g, aggr xi, xj, e=nothing)\n\nPerforms message passing on graph g. 
Takes care of materializing the node features on each edge, applying the message function fmsg, and returning an aggregated message barmathbfm (depending on the return value of fmsg, an array or a named tuple of arrays with last dimension's size g.num_nodes).\n\nIt can be decomposed in two steps:\n\nm = apply_edges(fmsg, g, xi, xj, e)\nm̄ = aggregate_neighbors(g, aggr, m)\n\nGNN layers typically call propagate in their forward pass, providing as input f a closure. \n\nArguments\n\ng: A GNNGraph.\nxi: An array or a named tuple containing arrays whose last dimension's size is g.num_nodes. It will be appropriately materialized on the target node of each edge (see also edge_index).\nxj: As xj, but to be materialized on edges' sources. \ne: An array or a named tuple containing arrays whose last dimension's size is g.num_edges.\nfmsg: A generic function that will be passed over to apply_edges. Has to take as inputs the edge-materialized xi, xj, and e (arrays or named tuples of arrays whose last dimension' size is the size of a batch of edges). Its output has to be an array or a named tuple of arrays with the same batch size. If also layer is passed to propagate, the signature of fmsg has to be fmsg(layer, xi, xj, e) instead of fmsg(xi, xj, e).\naggr: Neighborhood aggregation operator. Use +, mean, max, or min. \n\nExamples\n\nusing GraphNeuralNetworks, Flux\n\nstruct GNNConv <: GNNLayer\n W\n b\n σ\nend\n\nFlux.@layer GNNConv\n\nfunction GNNConv(ch::Pair{Int,Int}, σ=identity)\n in, out = ch\n W = Flux.glorot_uniform(out, in)\n b = zeros(Float32, out)\n GNNConv(W, b, σ)\nend\n\nfunction (l::GNNConv)(g::GNNGraph, x::AbstractMatrix)\n message(xi, xj, e) = l.W * xj\n m̄ = propagate(message, g, +, xj=x)\n return l.σ.(m̄ .+ l.bias)\nend\n\nl = GNNConv(10 => 20)\nl(g, x)\n\nSee also apply_edges and aggregate_neighbors.\n\n\n\n\n\n","category":"function"},{"location":"api/messagepassing/#Built-in-message-functions","page":"Message Passing","title":"Built-in message functions","text":"","category":"section"},{"location":"api/messagepassing/","page":"Message Passing","title":"Message Passing","text":"GNNlib.copy_xi\nGNNlib.copy_xj\nGNNlib.xi_dot_xj\nGNNlib.xi_sub_xj\nGNNlib.xj_sub_xi\nGNNlib.e_mul_xj\nGNNlib.w_mul_xj","category":"page"},{"location":"api/messagepassing/#GNNlib.copy_xi","page":"Message Passing","title":"GNNlib.copy_xi","text":"copy_xi(xi, xj, e) = xi\n\n\n\n\n\n","category":"function"},{"location":"api/messagepassing/#GNNlib.copy_xj","page":"Message Passing","title":"GNNlib.copy_xj","text":"copy_xj(xi, xj, e) = xj\n\n\n\n\n\n","category":"function"},{"location":"api/messagepassing/#GNNlib.xi_dot_xj","page":"Message Passing","title":"GNNlib.xi_dot_xj","text":"xi_dot_xj(xi, xj, e) = sum(xi .* xj, dims=1)\n\n\n\n\n\n","category":"function"},{"location":"api/messagepassing/#GNNlib.xi_sub_xj","page":"Message Passing","title":"GNNlib.xi_sub_xj","text":"xi_sub_xj(xi, xj, e) = xi .- xj\n\n\n\n\n\n","category":"function"},{"location":"api/messagepassing/#GNNlib.xj_sub_xi","page":"Message Passing","title":"GNNlib.xj_sub_xi","text":"xj_sub_xi(xi, xj, e) = xj .- xi\n\n\n\n\n\n","category":"function"},{"location":"api/messagepassing/#GNNlib.e_mul_xj","page":"Message Passing","title":"GNNlib.e_mul_xj","text":"e_mul_xj(xi, xj, e) = reshape(e, (...)) .* xj\n\nReshape e into broadcast compatible shape with xj (by prepending singleton dimensions) then perform broadcasted multiplication.\n\n\n\n\n\n","category":"function"},{"location":"api/messagepassing/#GNNlib.w_mul_xj","page":"Message 
Passing","title":"GNNlib.w_mul_xj","text":"w_mul_xj(xi, xj, w) = reshape(w, (...)) .* xj\n\nSimilar to e_mul_xj but specialized on scalar edge features (weights).\n\n\n\n\n\n","category":"function"},{"location":"#GraphNeuralNetworks","page":"Home","title":"GraphNeuralNetworks","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This is the documentation page for GraphNeuralNetworks.jl, a graph neural network library written in Julia and based on the deep learning framework Flux.jl. GraphNeuralNetworks.jl is largely inspired by PyTorch Geometric, Deep Graph Library, and GeometricFlux.jl.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Among its features:","category":"page"},{"location":"","page":"Home","title":"Home","text":"Implements common graph convolutional layers.\nSupports computations on batched graphs. \nEasy to define custom layers.\nCUDA support.\nIntegration with Graphs.jl.\nExamples of node, edge, and graph level machine learning tasks. ","category":"page"},{"location":"#Package-overview","page":"Home","title":"Package overview","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Let's give a brief overview of the package by solving a graph regression problem with synthetic data. ","category":"page"},{"location":"","page":"Home","title":"Home","text":"Usage examples on real datasets can be found in the examples folder. ","category":"page"},{"location":"#Data-preparation","page":"Home","title":"Data preparation","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"We create a dataset consisting in multiple random graphs and associated data features. ","category":"page"},{"location":"","page":"Home","title":"Home","text":"using GraphNeuralNetworks, Graphs, Flux, CUDA, Statistics, MLUtils\nusing Flux: DataLoader\n\nall_graphs = GNNGraph[]\n\nfor _ in 1:1000\n g = rand_graph(10, 40, \n ndata=(; x = randn(Float32, 16,10)), # input node features\n gdata=(; y = randn(Float32))) # regression target \n push!(all_graphs, g)\nend","category":"page"},{"location":"#Model-building","page":"Home","title":"Model building","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"We concisely define our model as a GNNChain containing two graph convolutional layers. If CUDA is available, our model will live on the gpu.","category":"page"},{"location":"","page":"Home","title":"Home","text":"device = CUDA.functional() ? Flux.gpu : Flux.cpu;\n\nmodel = GNNChain(GCNConv(16 => 64),\n BatchNorm(64), # Apply batch normalization on node features (nodes dimension is batch dimension)\n x -> relu.(x), \n GCNConv(64 => 64, relu),\n GlobalPool(mean), # aggregate node-wise features into graph-wise features\n Dense(64, 1)) |> device\n\nopt = Flux.setup(Adam(1f-4), model)","category":"page"},{"location":"#Training","page":"Home","title":"Training","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Finally, we use a standard Flux training pipeline to fit our dataset. We use Flux's DataLoader to iterate over mini-batches of graphs that are glued together into a single GNNGraph using the Flux.batch method. This is what happens under the hood when creating a DataLoader with the collate=true option. 
","category":"page"},{"location":"","page":"Home","title":"Home","text":"train_graphs, test_graphs = MLUtils.splitobs(all_graphs, at=0.8)\n\ntrain_loader = DataLoader(train_graphs, \n batchsize=32, shuffle=true, collate=true)\ntest_loader = DataLoader(test_graphs, \n batchsize=32, shuffle=false, collate=true)\n\nloss(model, g::GNNGraph) = mean((vec(model(g, g.x)) - g.y).^2)\n\nloss(model, loader) = mean(loss(model, g |> device) for g in loader)\n\nfor epoch in 1:100\n for g in train_loader\n g = g |> device\n grad = gradient(model -> loss(model, g), model)\n Flux.update!(opt, model, grad[1])\n end\n\n @info (; epoch, train_loss=loss(model, train_loader), test_loss=loss(model, test_loader))\nend","category":"page"},{"location":"tutorials/introductory_tutorials/gnn_intro_pluto/#Hands-on-introduction-to-Graph-Neural-Networks","page":"Hands-on introduction to Graph Neural Networks","title":"Hands-on introduction to Graph Neural Networks","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/gnn_intro_pluto/","page":"Hands-on introduction to Graph Neural Networks","title":"Hands-on introduction to Graph Neural Networks","text":"(Image: Source code) (Image: Author) (Image: Update time)","category":"page"},{"location":"tutorials/introductory_tutorials/gnn_intro_pluto/","page":"Hands-on introduction to Graph Neural Networks","title":"Hands-on introduction to Graph Neural Networks","text":"\n\n\n\n\n

This Pluto notebook is a Julia adaptation of the PyTorch Geometric tutorials that can be found here.

Recently, deep learning on graphs has emerged as one of the hottest research fields in the deep learning community. Here, Graph Neural Networks (GNNs) aim to generalize classical deep learning concepts to irregularly structured data (in contrast to images or texts) and to enable neural networks to reason about objects and their relations.

This is done by following a simple neural message passing scheme, where node features \\(\\mathbf{x}_i^{(\\ell)}\\) of all nodes \\(i \\in \\mathcal{V}\\) in a graph \\(\\mathcal{G} = (\\mathcal{V}, \\mathcal{E})\\) are iteratively updated by aggregating localized information from their neighbors \\(\\mathcal{N}(i)\\):

$$\\mathbf{x}_i^{(\\ell + 1)} = f^{(\\ell + 1)}_{\\theta} \\left( \\mathbf{x}_i^{(\\ell)}, \\left\\{ \\mathbf{x}_j^{(\\ell)} : j \\in \\mathcal{N}(i) \\right\\} \\right)$$
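
In code, a minimal sketch of one such update (assuming a small toy graph, 8-dimensional features, and mean aggregation; these choices are illustrative, not taken from the notebook) can be written with propagate and the built-in copy_xj message:

using GraphNeuralNetworks, Statistics

# toy graph with 3 nodes and 4 directed edges: 1→2, 1→3, 2→1, 3→1
g = GNNGraph([1, 1, 2, 3], [2, 3, 1, 1])
x = randn(Float32, 8, g.num_nodes)            # one 8-dimensional feature vector per node

# one message passing step: the message from j to i is simply x_j (copy_xj),
# and the messages arriving at each node are aggregated with mean
x_new = propagate(copy_xj, g, mean; xj = x)   # 8 × num_nodes matrix of updated node features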

This tutorial will introduce you to some fundamental concepts regarding deep learning on graphs via Graph Neural Networks based on the GraphNeuralNetworks.jl library. GraphNeuralNetworks.jl is an extension library to the popular deep learning framework Flux.jl, and consists of various methods and utilities to ease the implementation of Graph Neural Networks.

Let's first import the packages we need:

\n\n
begin\n    using Flux\n    using Flux: onecold, onehotbatch, logitcrossentropy\n    using MLDatasets\n    using LinearAlgebra, Random, Statistics\n    import GraphMakie\n    import CairoMakie as Makie\n    using Graphs\n    using PlutoUI\n    using GraphNeuralNetworks\nend
\n\n\n
begin\n    ENV[\"DATADEPS_ALWAYS_ACCEPT\"] = \"true\"  # don't ask for dataset download confirmation\n    Random.seed!(17) # for reproducibility\nend;
\n\n\n\n

Following Kipf et al. (2017), let's dive into the world of GNNs by looking at a simple graph-structured example, the well-known Zachary's karate club network. This graph describes a social network of 34 members of a karate club and documents links between members who interacted outside the club. Here, we are interested in detecting the communities that arise from the members' interactions.

GraphNeuralNetworks.jl provides utilities to convert MLDatasets.jl's datasets to its own type:

\n\n
dataset = MLDatasets.KarateClub()
\n
dataset KarateClub:\n  metadata  =>    Dict{String, Any} with 0 entries\n  graphs    =>    1-element Vector{MLDatasets.Graph}
\n\n\n

After initializing the KarateClub dataset, we can first inspect some of its properties. For example, we can see that this dataset holds exactly one graph. Furthermore, the graph holds exactly 4 classes, which represent the community each node belongs to.

\n\n
karate = dataset[1]
\n
Graph:\n  num_nodes   =>    34\n  num_edges   =>    156\n  edge_index  =>    (\"156-element Vector{Int64}\", \"156-element Vector{Int64}\")\n  node_data   =>    (labels_clubs = \"34-element Vector{Int64}\", labels_comm = \"34-element Vector{Int64}\")\n  edge_data   =>    nothing
\n\n
karate.node_data.labels_comm
\n
34-element Vector{Int64}:\n 1\n 1\n 1\n 1\n 3\n 3\n 3\n ⋮\n 2\n 0\n 0\n 2\n 0\n 0
\n\n\n

Now we convert the single-graph dataset to a GNNGraph. Moreover, we add an array of node features, a 34-dimensional feature vector for each node which uniquely describes each member of the karate club. We also add a training mask selecting the nodes to be used for training in our semi-supervised node classification task.

\n\n
begin\n    # convert a MLDataset.jl's dataset to a GNNGraphs (or a collection of graphs)\n    g = mldataset2gnngraph(dataset)\n\n    x = zeros(Float32, g.num_nodes, g.num_nodes)\n    x[diagind(x)] .= 1\n\n    train_mask = [true, false, false, false, true, false, false, false, true,\n        false, false, false, false, false, false, false, false, false, false, false,\n        false, false, false, false, true, false, false, false, false, false,\n        false, false, false, false]\n\n    labels = g.ndata.labels_comm\n    y = onehotbatch(labels, 0:3)\n\n    g = GNNGraph(g, ndata = (; x, y, train_mask))\nend
\n
GNNGraph:\n  num_nodes: 34\n  num_edges: 156\n  ndata:\n\ty = 4×34 OneHotMatrix(::Vector{UInt32}) with eltype Bool\n\ttrain_mask = 34-element Vector{Bool}\n\tx = 34×34 Matrix{Float32}
\n\n\n

Let's now look at the underlying graph in more detail:

\n\n
with_terminal() do\n    # Gather some statistics about the graph.\n    println(\"Number of nodes: $(g.num_nodes)\")\n    println(\"Number of edges: $(g.num_edges)\")\n    println(\"Average node degree: $(g.num_edges / g.num_nodes)\")\n    println(\"Number of training nodes: $(sum(g.ndata.train_mask))\")\n    println(\"Training node label rate: $(mean(g.ndata.train_mask))\")\n    # println(\"Has isolated nodes: $(has_isolated_nodes(g))\")\n    println(\"Has self-loops: $(has_self_loops(g))\")\n    println(\"Is undirected: $(is_bidirected(g))\")\nend
\n
Number of nodes: 34\nNumber of edges: 156\nAverage node degree: 4.588235294117647\nNumber of training nodes: 4\nTraining node label rate: 0.11764705882352941\nHas self-loops: false\nIs undirected: true\n
\n\n\n

Each graph in GraphNeuralNetworks.jl is represented by a GNNGraph object, which holds all the information to describe its graph representation. We can print the data object anytime via print(g) to receive a short summary about its attributes and their shapes.

The g object holds 3 attributes, ndata, edata, and gdata, holding node-level, edge-level, and graph-level features respectively:

These attributes are NamedTuples that can store multiple feature arrays: we can access a specific set of features e.g. x, with g.ndata.x.

In our task, g.ndata.train_mask describes for which nodes we already know their community assignments. In total, we are only aware of the ground-truth labels of 4 nodes (one for each community), and the task is to infer the community assignment for the remaining nodes.

The g object also provides some utility functions to infer some basic properties of the underlying graph. For example, we can easily infer whether there exist isolated nodes in the graph (i.e. there exists no edge to any node), whether the graph contains self-loops (i.e., \\((v, v) \\in \\mathcal{E}\\)), or whether the graph is bidirected (i.e., for each edge \\((v, w) \\in \\mathcal{E}\\) there also exists the edge \\((w, v) \\in \\mathcal{E}\\)).
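
For instance (a small usage sketch added here; the expected return values follow from the graph statistics printed above, and has_isolated_nodes is assumed to return false since every club member has at least one link):

has_isolated_nodes(g)   # expected false: every member interacts with someone
has_self_loops(g)       # false: there are no (v, v) edges
is_bidirected(g)        # true: each edge is also stored in the reverse direction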

Let us now inspect the edge_index method:

\n\n
edge_index(g)
\n
([1, 1, 1, 1, 1, 1, 1, 1, 1, 1  …  34, 34, 34, 34, 34, 34, 34, 34, 34, 34], [2, 3, 4, 5, 6, 7, 8, 9, 11, 12  …  21, 23, 24, 27, 28, 29, 30, 31, 32, 33])
\n\n\n

By printing edge_index(g), we can understand how GraphNeuralNetworks.jl represents graph connectivity internally. We can see that edge_index returns a tuple of two vectors of node indices, where for each edge the first vector holds the index of the source node and the second vector holds the index of the destination node.

This representation is known as the COO format (coordinate format) commonly used for representing sparse matrices. Instead of holding the adjacency information in a dense representation \\(\\mathbf{A} \\in \\{ 0, 1 \\}^{|\\mathcal{V}| \\times |\\mathcal{V}|}\\), GraphNeuralNetworks.jl represents graphs sparsely, which refers to only holding the coordinates/values for which entries in \\(\\mathbf{A}\\) are non-zero.
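
To make the link with the dense representation explicit (a short sketch; adjacency_matrix is the GNNGraphs utility that materializes the sparse adjacency matrix, and the equality below is expected to hold for this unweighted graph without repeated edges):

s, t = edge_index(g)        # vectors of source and target node indices
A = adjacency_matrix(g)     # 34×34 sparse matrix with a non-zero entry for every (s[k], t[k]) pair
sum(A) == g.num_edges       # one non-zero entry per stored edge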

Importantly, GraphNeuralNetworks.jl does not distinguish between directed and undirected graphs, and treats undirected graphs as a special case of directed graphs in which reverse edges exist for every entry in the edge_index.

Since a GNNGraph is an AbstractGraph from the Graphs.jl library, it supports graph algorithms and visualization tools from the wider Julia graph ecosystem:

\n\n
GraphMakie.graphplot(g |> to_unidirected, node_size = 20, node_color = labels,\n                     arrow_show = false)
\n\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/gnn_intro_pluto/#Implementing-Graph-Neural-Networks","page":"Hands-on introduction to Graph Neural Networks","title":"Implementing Graph Neural Networks","text":"","category":"section"},{"location":"tutorials/introductory_tutorials/gnn_intro_pluto/","page":"Hands-on introduction to Graph Neural Networks","title":"Hands-on introduction to Graph Neural Networks","text":"
\n

After learning about GraphNeuralNetworks.jl's data handling, it's time to implement our first Graph Neural Network!

For this, we will use one of the simplest GNN operators, the GCN layer (Kipf et al. (2017)), which is defined as

$$\\mathbf{x}_v^{(\\ell + 1)} = \\mathbf{W}^{(\\ell + 1)} \\sum_{w \\in \\mathcal{N}(v) \\, \\cup \\, \\{ v \\}} \\frac{1}{c_{w,v}} \\cdot \\mathbf{x}_w^{(\\ell)}$$

where \\(\\mathbf{W}^{(\\ell + 1)}\\) denotes a trainable weight matrix of shape [num_output_features, num_input_features] and \\(c_{w,v}\\) refers to a fixed normalization coefficient for each edge.

GraphNeuralNetworks.jl implements this layer via GCNConv, which can be executed by passing in the node feature representation x and the COO graph connectivity representation edge_index.
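
For example (a minimal usage sketch, separate from the GCN model defined below; the output dimension 4 is an arbitrary illustrative choice), a single GCNConv layer can be applied directly to the one-hot node features:

conv = GCNConv(34 => 4)     # learnable weight matrix of size 4 × 34
h1 = conv(g, g.ndata.x)     # 4 × 34 matrix: one 4-dimensional embedding per node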

With this, we are ready to create our first Graph Neural Network by defining our network architecture:

\n\n
begin\n    struct GCN\n        layers::NamedTuple\n    end\n\n    Flux.@layer GCN # provides parameter collection, gpu movement and more\n\n    function GCN(num_features, num_classes)\n        layers = (conv1 = GCNConv(num_features => 4),\n                  conv2 = GCNConv(4 => 4),\n                  conv3 = GCNConv(4 => 2),\n                  classifier = Dense(2, num_classes))\n        return GCN(layers)\n    end\n\n    function (gcn::GCN)(g::GNNGraph, x::AbstractMatrix)\n        l = gcn.layers\n        x = l.conv1(g, x)\n        x = tanh.(x)\n        x = l.conv2(g, x)\n        x = tanh.(x)\n        x = l.conv3(g, x)\n        x = tanh.(x)  # Final GNN embedding space.\n        out = l.classifier(x)\n        # Apply a final (linear) classifier.\n        return out, x\n    end\nend
\n\n\n\n

Here, we first initialize all of our building blocks in the constructor and define the computation flow of our network in the call method. We first define and stack three graph convolution layers, which corresponds to aggregating 3-hop neighborhood information around each node (all nodes up to 3 \"hops\" away). In addition, the GCNConv layers reduce the node feature dimensionality to \\(2\\), i.e., \\(34 \\rightarrow 4 \\rightarrow 4 \\rightarrow 2\\). Each GCNConv layer is enhanced by a tanh non-linearity.

After that, we apply a single linear transformation (Flux.Dense) that acts as a classifier to map our nodes to 1 out of the 4 classes/communities.

We return both the output of the final classifier and the final node embeddings produced by our GNN. We then initialize our final model via GCN(), and printing our model produces a summary of all its sub-modules.

Embedding the Karate Club Network

Let's take a look at the node embeddings produced by our GNN. Here, we pass in the initial node features x and the graph information g to the model, and visualize its 2-dimensional embedding.

\n\n
begin\n    num_features = 34\n    num_classes = 4\n    gcn = GCN(num_features, num_classes)\nend
\n
GCN((conv1 = GCNConv(34 => 4), conv2 = GCNConv(4 => 4), conv3 = GCNConv(4 => 2), classifier = Dense(2 => 4)))  # 182 parameters
\n\n
_, h = gcn(g, g.ndata.x)
\n
(Float32[-0.010103569 -0.0052728415 … -0.0029141798 -0.005299572; 0.07322627 0.026412923 … 0.08953842 0.11160924; -0.10429451 -0.03155778 … -0.16266607 -0.19655727; 0.02678022 0.013143934 … 0.012547926 0.01920776], Float32[0.10564233 0.03596353 … 0.14159243 0.1743016; 0.0875177 0.024502957 … 0.14796856 0.17720973])
\n\n
function visualize_embeddings(h; colors = nothing)\n    xs = h[1, :] |> vec\n    ys = h[2, :] |> vec\n    Makie.scatter(xs, ys, color = labels, markersize = 20)\nend
\n
visualize_embeddings (generic function with 1 method)
\n\n
visualize_embeddings(h, colors = labels)
\n\n\n\n

Remarkably, even before training the weights of our model, the model produces an embedding of nodes that closely resembles the community-structure of the graph. Nodes of the same color (community) are already closely clustered together in the embedding space, although the weights of our model are initialized completely at random and we have not yet performed any training so far! This leads to the conclusion that GNNs introduce a strong inductive bias, leading to similar embeddings for nodes that are close to each other in the input graph.

Training on the Karate Club Network

But can we do better? Let's look at an example on how to train our network parameters based on the knowledge of the community assignments of 4 nodes in the graph (one for each community).

Since everything in our model is differentiable and parameterized, we can add some labels, train the model and observe how the embeddings react. Here, we make use of a semi-supervised or transductive learning procedure: we simply train against one node per class, but are allowed to make use of the complete input graph data.

Training our model is very similar to any other Flux model. In addition to defining our network architecture, we define a loss criterion (here, logitcrossentropy), and initialize a stochastic gradient optimizer (here, Adam). After that, we perform multiple rounds of optimization, where each round consists of a forward and backward pass to compute the gradients of our model parameters w.r.t. the loss derived from the forward pass. If you are not new to Flux, this scheme should appear familiar to you.

Note that our semi-supervised learning scenario is achieved by the following line:

loss = logitcrossentropy(ŷ[:,train_mask], y[:,train_mask])

While we compute node embeddings for all of our nodes, we only make use of the training nodes for computing the loss. Here, this is implemented by filtering the classifier output ŷ and the ground-truth labels y so that they only contain the nodes in the train_mask.
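
A tiny stand-alone illustration of this boolean column masking (the numbers below are made up and unrelated to the karate club data):

ŷ_toy = Float32[0.2 1.5 -0.3; 1.1 -0.7 0.9]   # logits for 2 classes × 3 nodes
mask  = [true, false, true]                    # nodes 1 and 3 are training nodes
ŷ_toy[:, mask]                                 # 2×2 matrix: only the masked columns enter the loss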

Let us now start training and see how our node embeddings evolve over time (best experienced by explicitly running the code):

\n\n
begin\n    model = GCN(num_features, num_classes)\n    opt = Flux.setup(Adam(1e-2), model)\n    epochs = 2000\n\n    emb = h\n    function report(epoch, loss, h)\n        # p = visualize_embeddings(h)\n        @info (; epoch, loss)\n    end\n\n    report(0, 10.0, emb)\n    for epoch in 1:epochs\n        loss, grad = Flux.withgradient(model) do model\n            ŷ, emb = model(g, g.ndata.x)\n            logitcrossentropy(ŷ[:, train_mask], y[:, train_mask])\n        end\n\n        Flux.update!(opt, model, grad[1])\n        if epoch % 200 == 0\n            report(epoch, loss, emb)\n        end\n    end\nend
\n\n\n
ŷ, emb_final = model(g, g.ndata.x)
\n
(Float32[-0.071932115 -0.05543486 … 7.193458 7.121867; 7.4203916 7.468549 … -0.42112586 -0.34268615; 0.25314045 0.23715988 … -7.442696 -7.366837; -7.5500894 -7.6005163 … -0.36091828 -0.43294388], Float32[-0.9909464 -0.9997734 … -0.99991316 -0.9999812; 0.9881081 0.99246484 … -0.9986102 -0.9788761])
\n\n
# train accuracy\nmean(onecold(ŷ[:, train_mask]) .== onecold(y[:, train_mask]))
\n
1.0
\n\n
# test accuracy\nmean(onecold(ŷ[:, .!train_mask]) .== onecold(y[:, .!train_mask]))
\n
0.8666666666666667
\n\n
visualize_embeddings(emb_final, colors = labels)
\n\n\n\n

As one can see, our 3-layer GCN model manages to linearly separate the communities and classifies most of the nodes correctly.

Furthermore, we did this all with a few lines of code, thanks to GraphNeuralNetworks.jl, which helped us out with data handling and GNN implementations.

\n\n","category":"page"},{"location":"tutorials/introductory_tutorials/gnn_intro_pluto/","page":"Hands-on introduction to Graph Neural Networks","title":"Hands-on introduction to Graph Neural Networks","text":"EditURL = \"https://github.com/CarloLucibello/GraphNeuralNetworks.jl/blob/master/docs/src/tutorials/introductory_tutorials/gnn_intro_pluto.jl\"","category":"page"},{"location":"tutorials/introductory_tutorials/gnn_intro_pluto/","page":"Hands-on introduction to Graph Neural Networks","title":"Hands-on introduction to Graph Neural Networks","text":"","category":"page"},{"location":"tutorials/introductory_tutorials/gnn_intro_pluto/","page":"Hands-on introduction to Graph Neural Networks","title":"Hands-on introduction to Graph Neural Networks","text":"This page was generated using DemoCards.jl. and PlutoStaticHTML.jl","category":"page"}] } diff --git a/dev/temporalgraph/index.html b/dev/temporalgraph/index.html index 38165672..42e564a5 100644 --- a/dev/temporalgraph/index.html +++ b/dev/temporalgraph/index.html @@ -92,4 +92,4 @@ julia> output = m(tg, tg.ndata.x); julia> size(output[1]) -(1, 10) +(1, 10) diff --git a/dev/tutorials/index.html b/dev/tutorials/index.html index 1325af51..93f07459 100644 --- a/dev/tutorials/index.html +++ b/dev/tutorials/index.html @@ -11,4 +11,4 @@

Tutorial for Node classification using GraphNeuralNetworks.jl

card-cover-image

Node Classification with Graph Neural Networks

-

Contributions

If you have a suggestion on adding new tutorials, feel free to create a new issue here. Users are invited to contribute demonstrations of their own. If you want to contribute new tutorials and are looking for inspiration, check out these tutorials from PyTorch Geometric. You are expected to use Pluto.jl notebooks with DemoCards.jl. Please check out existing tutorials for more details.

+

Contributions

If you have a suggestion on adding new tutorials, feel free to create a new issue here. Users are invited to contribute demonstrations of their own. If you want to contribute new tutorials and are looking for inspiration, check out these tutorials from PyTorch Geometric. You are expected to use Pluto.jl notebooks with DemoCards.jl. Please check out existing tutorials for more details.

diff --git a/dev/tutorials/introductory_tutorials/gnn_intro_pluto/index.html b/dev/tutorials/introductory_tutorials/gnn_intro_pluto/index.html index 9bf040b1..e1074a2e 100644 --- a/dev/tutorials/introductory_tutorials/gnn_intro_pluto/index.html +++ b/dev/tutorials/introductory_tutorials/gnn_intro_pluto/index.html @@ -194,7 +194,7 @@

GCN((conv1 = GCNConv(34 => 4), conv2 = GCNConv(4 => 4), conv3 = GCNConv(4 => 2), classifier = Dense(2 => 4))) # 182 parameters
_, h = gcn(g, g.ndata.x)
-
(Float32[-0.005740445 -0.01884863 … 0.0049703615 0.004798473; -0.003971542 -0.00664741 … 0.002226899 0.0007848764; 0.00782498 0.03179506 … -0.00793192 -0.008960456; 0.03580918 0.015548051 … -0.011664602 0.010523779], Float32[-0.023297783 -0.03627783 … 0.012548346 0.003526861; 0.036984775 0.008870006 … -0.010684907 0.013719623])
+
(Float32[-0.010103569 -0.0052728415 … -0.0029141798 -0.005299572; 0.07322627 0.026412923 … 0.08953842 0.11160924; -0.10429451 -0.03155778 … -0.16266607 -0.19655727; 0.02678022 0.013143934 … 0.012547926 0.01920776], Float32[0.10564233 0.03596353 … 0.14159243 0.1743016; 0.0875177 0.024502957 … 0.14796856 0.17720973])
function visualize_embeddings(h; colors = nothing)
     xs = h[1, :] |> vec
@@ -204,7 +204,7 @@ 

visualize_embeddings (generic function with 1 method)

visualize_embeddings(h, colors = labels)
- +

Remarkably, even before training the weights of our model, the model produces an embedding of nodes that closely resembles the community-structure of the graph. Nodes of the same color (community) are already closely clustered together in the embedding space, although the weights of our model are initialized completely at random and we have not yet performed any training so far! This leads to the conclusion that GNNs introduce a strong inductive bias, leading to similar embeddings for nodes that are close to each other in the input graph.

Training on the Karate Club Network

But can we do better? Let's look at an example on how to train our network parameters based on the knowledge of the community assignments of 4 nodes in the graph (one for each community).

Since everything in our model is differentiable and parameterized, we can add some labels, train the model and observe how the embeddings react. Here, we make use of a semi-supervised or transductive learning procedure: we simply train against one node per class, but are allowed to make use of the complete input graph data.

Training our model is very similar to any other Flux model. In addition to defining our network architecture, we define a loss criterion (here, logitcrossentropy), and initialize a stochastic gradient optimizer (here, Adam). After that, we perform multiple rounds of optimization, where each round consists of a forward and backward pass to compute the gradients of our model parameters w.r.t. the loss derived from the forward pass. If you are not new to Flux, this scheme should appear familiar to you.

Note that our semi-supervised learning scenario is achieved by the following line:

loss = logitcrossentropy(ŷ[:,train_mask], y[:,train_mask])

While we compute node embeddings for all of our nodes, we only make use of the training nodes for computing the loss. Here, this is implemented by filtering the classifier output ŷ and the ground-truth labels y so that they only contain the nodes in the train_mask.

Let us now start training and see how our node embeddings evolve over time (best experienced by explicitly running the code):

@@ -236,7 +236,7 @@

ŷ, emb_final = model(g, g.ndata.x) -
(Float32[-8.298183 -6.7843485 … 7.992106 7.9739995; 6.4358 5.3154697 … -6.552024 -6.5364995; -0.27719232 1.0525229 … 0.90657854 0.9205904; -0.5221016 -1.515696 … 1.1027651 1.0865755], Float32[0.99810773 0.6356816 … -0.9999997 -0.9999999; -0.9964947 -0.99806273 … 0.999749 0.9951793])
+
(Float32[-0.071932115 -0.05543486 … 7.193458 7.121867; 7.4203916 7.468549 … -0.42112586 -0.34268615; 0.25314045 0.23715988 … -7.442696 -7.366837; -7.5500894 -7.6005163 … -0.36091828 -0.43294388], Float32[-0.9909464 -0.9997734 … -0.99991316 -0.9999812; 0.9881081 0.99246484 … -0.9986102 -0.9788761])
# train accuracy
 mean(onecold(ŷ[:, train_mask]) .== onecold(y[:, train_mask]))
@@ -244,12 +244,12 @@

# test accuracy mean(onecold(ŷ[:, .!train_mask]) .== onecold(y[:, .!train_mask])) -
0.8
+
0.8666666666666667
visualize_embeddings(emb_final, colors = labels)
- +

As one can see, our 3-layer GCN model manages to linearly separate the communities and classifies most of the nodes correctly.

Furthermore, we did this all with a few lines of code, thanks to GraphNeuralNetworks.jl, which helped us out with data handling and GNN implementations.

-

This page was generated using DemoCards.jl. and PlutoStaticHTML.jl

+

This page was generated using DemoCards.jl. and PlutoStaticHTML.jl

diff --git a/dev/tutorials/introductory_tutorials/graph_classification_pluto/index.html b/dev/tutorials/introductory_tutorials/graph_classification_pluto/index.html index 1b641dec..f06e0bf9 100644 --- a/dev/tutorials/introductory_tutorials/graph_classification_pluto/index.html +++ b/dev/tutorials/introductory_tutorials/graph_classification_pluto/index.html @@ -207,4 +207,4 @@

Conclusion

In this chapter, you have learned how to apply GNNs to the task of graph classification. You have learned how graphs can be batched together for better GPU utilization, and how to apply readout layers for obtaining graph embeddings rather than node embeddings.

-

This page was generated using DemoCards.jl. and PlutoStaticHTML.jl

+

This page was generated using DemoCards.jl. and PlutoStaticHTML.jl

diff --git a/dev/tutorials/introductory_tutorials/node_classification_pluto/index.html b/dev/tutorials/introductory_tutorials/node_classification_pluto/index.html index c1cb8a5e..dd200daf 100644 --- a/dev/tutorials/introductory_tutorials/node_classification_pluto/index.html +++ b/dev/tutorials/introductory_tutorials/node_classification_pluto/index.html @@ -197,7 +197,7 @@

After training the model, we can call the accuracy function to see how well our model performs on unseen labels. Here, we are interested in the accuracy of the model, i.e., the ratio of correctly classified nodes:

accuracy(mlp, g.ndata.features, y, .!train_mask)
-
0.4517133956386293
+
0.4929906542056075

As one can see, our MLP performs rather badly, with only about 47% test accuracy. But why doesn't the MLP perform better? The main reason is that the model suffers from heavy overfitting, because it only has access to a small number of training nodes, and it therefore generalizes poorly to unseen node representations.

It also fails to incorporate an important bias into the model: Cited papers are very likely related to the category of a document. That is exactly where Graph Neural Networks come into play and can help to boost the performance of our model.

@@ -238,7 +238,7 @@

+

We certainly can do better by training our model. The training and testing procedure is once again the same, but this time we make use of the node features x and the graph g as input to our GCN model.
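
As a minimal sketch of what such a forward pass looks like (the hidden size 16 is illustrative, and num_features, num_classes, and g are assumed to be the quantities defined earlier in this tutorial; this is not necessarily the exact model used here):

using Flux, GraphNeuralNetworks

gcn_sketch = GNNChain(GCNConv(num_features => 16, relu),
                      GCNConv(16 => num_classes))

ŷ = gcn_sketch(g, g.ndata.features)   # unlike the MLP, the graph g enters every layer's forward pass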

@@ -284,7 +284,7 @@

Train accuracy: 1.0
-Test accuracy: 0.7476635514018691
+Test accuracy: 0.7394859813084113
 
@@ -296,7 +296,7 @@

(Optional) Exercises

  1. To achieve better model performance and to avoid overfitting, it is usually a good idea to select the best model based on an additional validation set. The Cora dataset provides a validation node set as g.ndata.val_mask, but we haven't used it yet. Can you modify the code to select and test the model with the highest validation performance? This should bring test performance to 82% accuracy.

  2. How does GCN behave when increasing the hidden feature dimensionality or the number of layers? Does increasing the number of layers help at all?

  3. You can try to use different GNN layers to see how model performance changes. What happens if you swap out all GCNConv instances with GATConv layers that make use of attention? Try to write a 2-layer GAT model that makes use of 8 attention heads in the first layer and 1 attention head in the second layer, uses a dropout ratio of 0.6 inside and outside each GATConv call, and uses a hidden_channels dimension of 8 per head (one possible skeleton is sketched after this list).
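
One possible skeleton for exercise 3 (a hint under the stated hyperparameters, not a unique solution; the dropout keyword of GATConv acts on the attention coefficients while the Dropout layers act on the features, which is one reasonable reading of "inside and outside each GATConv call"; num_features and num_classes are assumed to be defined as above):

using Flux, GraphNeuralNetworks

gat = GNNChain(Dropout(0.6),
               GATConv(num_features => 8, relu; heads = 8, dropout = 0.6),  # concatenated: 8 heads × 8 channels = 64
               Dropout(0.6),
               GATConv(8 * 8 => num_classes; heads = 1, dropout = 0.6))

ŷ = gat(g, g.ndata.features)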

@@ -304,4 +304,4 @@

Conclusion

In this tutorial, we have seen how to apply GNNs to real-world problems, and, in particular, how they can effectively be used for boosting a model's performance. In the next tutorial, we will look into how GNNs can be used for the task of graph classification.

-

This page was generated using DemoCards.jl. and PlutoStaticHTML.jl

+

This page was generated using DemoCards.jl. and PlutoStaticHTML.jl