23.3 Shallow graph embeddings
Shallow embedding methods are transductive graph embedding methods, where the encoder maps categorical node IDs onto a Euclidean space through an embedding matrix $Z \in \mathbb{R}^{N \times L}$, where $N$ is the number of nodes and $L$ is the embedding dimension.
Each node $i \in V$ has a corresponding embedding $Z_i \in \mathbb{R}^L$, and the shallow encoder function is:
$$Z = \mathrm{ENC}(\Theta^E) = \Theta^E \in \mathbb{R}^{N \times L}$$
That is, the embedding dictionary is directly learned as the model parameters.
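To make the lookup-table view concrete, here is a minimal sketch (using NumPy; the variable names and sizes are illustrative, not from the text) of a shallow encoder whose only parameters are the rows of the embedding matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

N, L = 100, 16                                  # number of nodes, embedding dimension (illustrative)
Theta_E = 0.1 * rng.standard_normal((N, L))     # the embedding matrix is the full set of encoder parameters

def shallow_encode(node_ids, Theta_E):
    """Shallow encoder: a pure lookup of rows of Z = Theta_E, indexed by node ID."""
    return Theta_E[node_ids]                    # shape (len(node_ids), L)

Z_batch = shallow_encode(np.array([0, 5, 42]), Theta_E)
print(Z_batch.shape)                            # (3, 16)
```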
In the unsupervised case, the embeddings are optimized to recover information about the input graph (e.g., the adjacency matrix $W$). This is similar to dimension reduction methods like PCA, but for graphs.
In the supervised case, the embeddings are optimized to predict labels for nodes, edges, and/or the whole graph.
23.3.1 Unsupervised embeddings
In the unsupervised case, we consider two main types of shallow embedding methods: distance-based and outer product-based.
Distance-based methods optimize embeddings such that nodes $i$ and $j$ which are close in the graph (as measured by some graph distance function) are embedded such that the pairwise distance $d_2(Z_i, Z_j)$ is small.
The distance function $d_2$ can be customized, which leads to Euclidean or non-Euclidean embeddings. The decoder outputs:
$$\hat{W} = \mathrm{DEC}(Z; \Theta^D)$$
with $\hat{W}_{ij} = d_2(Z_i, Z_j)$.
Outer product-based methods compute node similarities via pairwise dot products. The decoder can be written as:
$$\hat{W} = \mathrm{DEC}(Z; \Theta^D) = Z Z^{\top}$$
In both cases, the embeddings are learned by minimizing the graph regularization loss:
$$\mathcal{L}_{G,\mathrm{RECON}}(W, \hat{W}; \Theta) = d_1(s(W), \hat{W})$$
where $s(W)$ is an optional transformation of the adjacency matrix and $d_1$ is a pairwise distance function between matrices, which need not have the same form as $d_2$.
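As a concrete (and deliberately simple) instantiation of the two decoders and the regularization loss above, the following sketch takes $d_2$ to be the squared Euclidean distance, $s$ to be the identity, and $d_1$ to be the squared Frobenius norm; these particular choices are ours, for illustration only.

```python
import numpy as np

def decode_distance(Z):
    """Distance-based decoder: What_ij = ||Z_i - Z_j||^2 (one possible choice of d2)."""
    sq_norms = np.sum(Z ** 2, axis=1)
    return sq_norms[:, None] + sq_norms[None, :] - 2.0 * Z @ Z.T

def decode_outer_product(Z):
    """Outer product-based decoder: What = Z Z^T (pairwise dot products)."""
    return Z @ Z.T

def graph_recon_loss(W, What, s=lambda W: W):
    """Graph regularization loss d1(s(W), What), with d1 the squared Frobenius norm."""
    return np.sum((s(W) - What) ** 2)
```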
23.3.2 Distance-based: Euclidean methods
Distance-based methods minimize the Euclidean distance between the embeddings of similar (connected) nodes.
Multi-dimensional scaling (MDS) is equivalent to setting $s(W)$ to some distance matrix measuring the dissimilarity between nodes (e.g., proportional to the pairwise shortest-path distances) and then defining:
$$d_1(s(W), \hat{W}) = \sum_{ij} \left( s(W)_{ij} - \hat{W}_{ij} \right)^2 = \| s(W) - \hat{W} \|_F^2$$
where $\hat{W}_{ij} = d_2(Z_i, Z_j) = \| Z_i - Z_j \|_2$.
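For intuition, here is a small sketch of the MDS objective under the assumptions above: $s(W)$ is the all-pairs shortest-path matrix (so the graph is assumed connected and unweighted), and the loss is the squared mismatch between graph distances and embedding distances.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def mds_stress(Z, W):
    """MDS-style loss: sum_ij (s(W)_ij - ||Z_i - Z_j||)^2 with s(W) = shortest-path distances."""
    S = shortest_path(W, method="D", unweighted=True)   # s(W); assumes a connected graph (no inf entries)
    diff = Z[:, None, :] - Z[None, :, :]
    What = np.linalg.norm(diff, axis=-1)                # What_ij = d2(Z_i, Z_j) = ||Z_i - Z_j||
    return np.sum((S - What) ** 2)
```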
Laplacian eigenmaps learn embeddings by solving the generalized eigenvector problem:
$$\min_{Z}\ \mathrm{tr}(Z^{\top} L Z) \quad \text{subject to} \quad Z^{\top} D Z = I \ \text{ and } \ Z^{\top} D \mathbf{1} = \mathbf{0}$$
where $L = D - W$ is the graph Laplacian, and $D = \mathrm{diag}(W \mathbf{1})$ is the diagonal matrix of the row-wise sums of $W$.
The first constraint removes an arbitrary scaling factor in the embedding, and the second removes trivial solutions corresponding to the constant eigenvector (which has eigenvalue zero for connected graphs).
Further, note that:
$$\mathrm{tr}(Z^{\top} L Z) = \frac{1}{2} \sum_{ij} W_{ij} \| Z_i - Z_j \|_2^2$$
where $Z_i$ is the $i$'th row of $Z$. Therefore the minimization of this objective can be written as a graph reconstruction term:
$$d_1(s(W), \hat{W}) = \sum_{ij} s(W)_{ij}\, \hat{W}_{ij}$$
where $s(W) = W$ and $\hat{W}_{ij} = d_2(Z_i, Z_j) = \| Z_i - Z_j \|_2^2$.
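A minimal sketch of Laplacian eigenmaps follows (the helper name is ours): it solves the generalized eigenproblem $L z = \lambda D z$ with SciPy and keeps the eigenvectors with the smallest nonzero eigenvalues, assuming a connected graph with positive degrees.

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmaps(W, dim):
    """Embed nodes by solving L z = lambda D z, dropping the trivial constant eigenvector."""
    D = np.diag(W.sum(axis=1))        # degree matrix (assumed nonsingular)
    L = D - W                         # unnormalized graph Laplacian
    eigvals, eigvecs = eigh(L, D)     # generalized symmetric eigenproblem, eigenvalues in ascending order
    return eigvecs[:, 1:dim + 1]      # skip eigenvalue 0 (the constant eigenvector) for a connected graph
```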
23.3.3 Distance-based: non-Euclidean methods
So far, we have assumed that the methods create embeddings in Euclidean space.
However, hyperbolic geometry is ideal for embedding trees and graphs exhibiting a hierarchical structure.
Embeddings of hierarchical graphs can be learned using the Poincaré model of hyperbolic space. We only need to change the distance function $d_2$:
$$d_2(Z_i, Z_j) = \operatorname{arcosh}\left( 1 + 2\, \frac{\| Z_i - Z_j \|_2^2}{(1 - \| Z_i \|_2^2)(1 - \| Z_j \|_2^2)} \right)$$
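The Poincaré distance is straightforward to compute directly; here is a small NumPy sketch (the function name is ours), assuming the embeddings lie strictly inside the unit ball.

```python
import numpy as np

def poincare_distance(z_i, z_j, eps=1e-9):
    """Hyperbolic distance in the Poincare ball model (both points must have norm < 1)."""
    sq_diff = np.sum((z_i - z_j) ** 2)
    denom = (1.0 - np.sum(z_i ** 2)) * (1.0 - np.sum(z_j ** 2))
    return np.arccosh(1.0 + 2.0 * sq_diff / (denom + eps))
```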
The optimization then learns embeddings that minimize the distance between connected nodes, and maximize the distance between disconnected nodes, by minimizing:
$$\mathcal{L}_{G,\mathrm{RECON}}(W, \hat{W}; \Theta) = -\sum_{i,j:\, W_{ij}=1} \log \frac{e^{-\hat{W}_{ij}}}{\sum_{k:\, W_{ik}=0} e^{-\hat{W}_{ik}}}$$
where $\hat{W}_{ij} = d_2(Z_i, Z_j)$ and the denominator is approximated by negative sampling.
Note that since hyperbolic space has a manifold structure, we need to make sure that the embeddings remain on the manifold (by using Riemannian optimization techniques).
It has been shown that a variant using the Lorentz model of hyperbolic space provides better numerical stability.
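To illustrate the loss itself (leaving the Riemannian optimization aside), here is a sketch of the per-edge negative log-likelihood with negative sampling; the helper re-implements the Poincaré distance so the snippet is self-contained, and the names are illustrative.

```python
import numpy as np

def poincare_dist(u, v, eps=1e-9):
    """Poincare ball distance, as in the previous sketch."""
    num = np.sum((u - v) ** 2)
    den = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * num / (den + eps))

def edge_nll(Z, i, j, negatives):
    """-log softmax over {j} + sampled non-neighbors of i, with logits = -distance."""
    d_pos = poincare_dist(Z[i], Z[j])
    d_neg = np.array([poincare_dist(Z[i], Z[k]) for k in negatives])
    logits = -np.concatenate(([d_pos], d_neg))           # smaller distance -> larger logit
    return -logits[0] + np.log(np.sum(np.exp(logits)))   # cross-entropy with the true edge as target
```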
23.3.5 Outer product-based: Skip-gram methods
Skip-gram word embeddings are optimized to predict context words given a center word. Given a sequence of words $(w_1, \ldots, w_T)$, skip-gram will optimize:
$$-\log \Pr(w_{t-K}, \ldots, w_{t-1}, w_{t+1}, \ldots, w_{t+K} \mid w_t)$$
for each target word $w_t$, where $K$ is the size of the context window.
This idea has been leveraged for graph embedding in the DeepWalk framework, since its authors showed that the frequency statistics induced by random walks on the graph are similar to those of words in natural language.
DeepWalk trains node embeddings to maximize the probability of the context nodes for each center node, where the context consists of the nodes reached during a single random walk.
Each random walk starts at a node $v_{i_1} \in V$ and repeatedly samples the next node uniformly at random from the current node's neighbors: $v_{i_{j+1}} \in \{ v \in V : (v_{i_j}, v) \in E \}$. The walk length is a hyperparameter.
All generated random walks (the "sentences") can then be encoded by a sequence model; this has been implemented, for example, in node2vec.
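A minimal sketch of the uniform random-walk sampler that generates these "sentences" (the adjacency-list format and parameter names are ours):

```python
import random

def sample_walks(adj_list, walk_length, walks_per_node, seed=0):
    """Generate uniform random walks; each walk is a list of node IDs (a 'sentence')."""
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        for start in adj_list:
            walk = [start]
            for _ in range(walk_length - 1):
                neighbors = adj_list[walk[-1]]
                if not neighbors:                 # dead end: stop this walk early
                    break
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# Example: a path graph 0-1-2-3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(sample_walks(adj, walk_length=5, walks_per_node=1)[0])
```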
It is common for the underlying representation to use two distinct embeddings for each node: one for when the node is the center, and one for when it appears in the context.
To present DeepWalk in the GraphEDM framework, we can set the decoder to the pairwise dot products of the center and context embeddings:
$$\hat{W} = \mathrm{DEC}(Z; \Theta^D) = Z^{\mathrm{ctr}} (Z^{\mathrm{ctx}})^{\top}$$
where $Z^{\mathrm{ctr}}$ and $Z^{\mathrm{ctx}}$ are the center and context embedding matrices.
Training DeepWalk is equivalent to minimizing:
$$\mathcal{L}_{G,\mathrm{RECON}}(W, \hat{W}; \Theta) = \sum_{i \in V}\ \sum_{j \in \mathcal{C}(i)} \left( \log \sum_{k \in V} e^{\hat{W}_{ik}} - \hat{W}_{ij} \right)$$
where $\mathcal{C}(i)$ is the multiset of context nodes observed for center node $i$ in the random walks, and the left term (the log-partition function) can be approximated in $O(\log N)$ time by hierarchical softmax.
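A sketch of this reconstruction loss with separate center and context embedding tables is shown below; it uses the full softmax for clarity rather than hierarchical softmax or negative sampling, and the names are illustrative.

```python
import numpy as np

def deepwalk_loss(Z_ctr, Z_ctx, pairs):
    """Full-softmax skip-gram loss: sum over (center i, context j) of log-sum-exp(What_i) - What_ij."""
    What = Z_ctr @ Z_ctx.T                                   # What_ij = <Z_ctr[i], Z_ctx[j]>
    log_partition = np.log(np.sum(np.exp(What), axis=1))     # the O(N) term that hierarchical softmax avoids
    return sum(log_partition[i] - What[i, j] for i, j in pairs)
```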
Skip-gram methods can also be viewed as implicit matrix factorization, which lets them inherit the benefits of efficient sparse matrix operations.
23.3.6 Supervised embeddings
In many applications, we have labeled data in addition to node features and graph structure.
While we can tackle supervised problems by first learning unsupervised embeddings and then applying them to the supervised task, this is not the recommended workflow: unsupervised node embeddings might not preserve the graph properties (e.g., node neighborhoods) that are most useful for the downstream task.
A number of methods combine these two steps, such as label propagation (LP), a very popular algorithm for semi-supervised node classification. The encoder is a shallow model represented by a lookup table $Z$.
LP uses the label space to represent the node embeddings directly, so the decoder is the identity function:
$$\hat{y}^N = \mathrm{DEC}(Z; \Theta^D) = Z$$
Laplacian eigenmaps are used in the regularization term to enforce smoothness of the labels, which encodes the assumption that neighboring nodes should have similar labels:
$$\mathcal{L}_{G,\mathrm{RECON}}(W, \hat{W}; \Theta) = \sum_{ij} W_{ij} \| Z_i - Z_j \|_2^2$$
LP minimizes this loss over the space of functions that take fixed values on the labeled nodes, using an iterative algorithm that updates each unlabeled node's label distribution via the weighted average of its neighbors' labels.
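Here is a minimal sketch of that iteration (variable names are ours): labeled nodes are clamped to their known one-hot labels, and unlabeled nodes repeatedly take the degree-weighted average of their neighbors' label distributions.

```python
import numpy as np

def label_propagation(W, Y_init, labeled_mask, n_iters=100):
    """Iteratively replace each node's label distribution by the weighted mean of its neighbors'."""
    P = W / W.sum(axis=1, keepdims=True)         # row-normalized transition matrix (assumes no isolated nodes)
    Z = Y_init.astype(float).copy()              # rows: label distributions (one-hot for labeled nodes)
    for _ in range(n_iters):
        Z = P @ Z                                # propagate labels from neighbors
        Z[labeled_mask] = Y_init[labeled_mask]   # clamp labeled nodes to their known labels
    return Z.argmax(axis=1)
```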
Label spreading (LS) is a variant of label propagation which minimizes the following function:
$$\mathcal{L}_{G,\mathrm{RECON}}(W, \hat{W}; \Theta) = \sum_{ij} W_{ij} \left\| \frac{Z_i}{\sqrt{D_i}} - \frac{Z_j}{\sqrt{D_j}} \right\|_2^2$$
where $D_i = \sum_j W_{ij}$ is the degree of node $i$.
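For comparison, here is a sketch of the standard label spreading iteration, which uses the symmetrically normalized adjacency $D^{-1/2} W D^{-1/2}$ and a mixing weight $\alpha$ instead of hard clamping; the notation and defaults are ours.

```python
import numpy as np

def label_spreading(W, Y_init, alpha=0.9, n_iters=100):
    """Iterate Z <- alpha * S @ Z + (1 - alpha) * Y_init, with S = D^{-1/2} W D^{-1/2}."""
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]    # symmetrically normalized adjacency
    Z = Y_init.astype(float).copy()
    for _ in range(n_iters):
        Z = alpha * (S @ Z) + (1.0 - alpha) * Y_init     # spread labels, pull back toward the seed labels
    return Z.argmax(axis=1)
```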
In both methods, the supervised loss is the distance between the predicted labels and the ground-truth labels (one-hot vectors).
These methods are expected to work well on consistent graphs, that is, graphs where node proximity is positively correlated with label similarity.