Introduction
The core assumption of most existing machine learning algorithms is that instances are independent of each other. This assumption does not hold for graph data, where each instance is related to others by links of various types, such as citations, friendships, and interactions.
Taxonomy of GNNs
Recurrent GNNs (RecGNNs) are the pioneering works of GNNs. They assume that nodes in a graph constantly exchange messages with their neighbors until an equilibrium is reached (the information diffusion mechanism).
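A minimal sketch of this fixed-point iteration, assuming a toy NumPy setup (the function names, the 3-node graph, and the small-weight trick that keeps the update contractive are illustrative, not the original RecGNN formulation):

```python
import numpy as np

def recurrent_gnn(adj, x, w_self, w_neigh, tol=1e-6, max_iters=100):
    """Iterate h <- tanh(x @ w_self + adj @ h @ w_neigh) until node
    states stop changing, i.e. an approximate equilibrium."""
    h = np.zeros((x.shape[0], w_neigh.shape[0]))
    for _ in range(max_iters):
        h_new = np.tanh(x @ w_self + adj @ h @ w_neigh)
        if np.linalg.norm(h_new - h) < tol:   # fixed point reached
            return h_new
        h = h_new
    return h

rng = np.random.default_rng(0)
adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # 3-node path graph
x = rng.normal(size=(3, 2))                   # 2 input features per node
w_self = rng.normal(size=(2, 4)) * 0.1
w_neigh = rng.normal(size=(4, 4)) * 0.1       # small weights keep the map contractive
h = recurrent_gnn(adj, x, w_self, w_neigh)    # equilibrium node states
```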
The general idea behind ConvGNNs is to generate a node's representation from its own features and its neighbors' features. They fall into two categories: spectral-based and spatial-based. The former defines graph convolutions by introducing filters from graph signal processing; the latter inherits ideas from RecGNNs to define graph convolutions by information propagation.
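A minimal sketch of the spatial view, following the well-known GCN propagation rule H' = ReLU(ÂHW), where Â is the symmetrically normalized adjacency with self-loops (the toy graph and random weights below are placeholders):

```python
import numpy as np

def normalize_adj(adj):
    """Â = D^{-1/2}(A + I)D^{-1/2}: add self-loops, then normalize."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def graph_conv(adj_norm, h, w):
    """One layer: aggregate neighbor features, then transform (ReLU)."""
    return np.maximum(adj_norm @ h @ w, 0.0)

rng = np.random.default_rng(0)
adj = np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])  # star graph
h = rng.normal(size=(3, 4))                 # 3 nodes, 4 input features
w1, w2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))
a_hat = normalize_adj(adj)
out = graph_conv(a_hat, graph_conv(a_hat, h, w1), w2)  # two-hop receptive field
```

Stacking two layers lets information flow from 2-hop neighbors, which is why depth controls the receptive field of a ConvGNN.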
Graph autoencoders (GAEs) are unsupervised learning frameworks that encode graph information into a latent vector space and reconstruct the graph from the encoded information.
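A minimal sketch of the encode/decode loop, assuming a single graph-convolution encoder and the common inner-product decoder sigmoid(ZZ^T) that turns node embeddings back into edge probabilities (shapes and names below are illustrative):

```python
import numpy as np

def encode(adj_hat, x, w):
    """Encoder: one graph convolution producing node embeddings Z."""
    return np.tanh(adj_hat @ x @ w)

def decode(z):
    """Inner-product decoder: reconstruct edges as sigmoid(Z Z^T)."""
    return 1.0 / (1.0 + np.exp(-(z @ z.T)))

rng = np.random.default_rng(0)
adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
x = rng.normal(size=(3, 4))        # node features
w = rng.normal(size=(4, 2))        # embeds each node into 2 dimensions
z = encode(adj + np.eye(3), x, w)  # self-loops added; normalization omitted
adj_rec = decode(z)                # reconstructed edge probabilities
# Training would minimize a reconstruction loss between adj and adj_rec.
```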
The key idea of spatial-temporal GNNs (STGNNs) is to capture spatial dependency and temporal dependency at the same time. STGNNs follow two directions: RNN-based and CNN-based approaches.
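A minimal sketch of the RNN-based direction, assuming a simplified recurrent cell whose input and hidden state are both mixed by a graph convolution at every time step (the update rule below is an illustrative simplification, not a specific published STGNN):

```python
import numpy as np

def gconv_rnn_step(adj_hat, x_t, h_prev, w_x, w_h):
    """h_t = tanh(Â x_t W_x + Â h_{t-1} W_h): a graph convolution
    inside a simple recurrent update."""
    return np.tanh(adj_hat @ x_t @ w_x + adj_hat @ h_prev @ w_h)

rng = np.random.default_rng(0)
n, f, d, T = 3, 2, 4, 5            # nodes, input features, hidden dim, time steps
adj_hat = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]]) + np.eye(n)
w_x, w_h = rng.normal(size=(f, d)), rng.normal(size=(d, d))
h = np.zeros((n, d))
for t in range(T):                 # unroll over a sequence of graph signals
    x_t = rng.normal(size=(n, f))
    h = gconv_rnn_step(adj_hat, x_t, h, w_x, w_h)  # spatial + temporal mixing
```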
Open Challenges
There are several open challenges in the field of GNNs:
- Model depth: stacking many graph convolution layers pushes the representations of connected nodes toward each other (over-smoothing), so whether going deep is a good strategy for learning on graphs remains an open question.
- Scalability trade-off: the scalability of GNNs is gained at the price of corrupting graph completeness; sampling may miss influential neighbors, and clustering may strip the graph of distinctive structural patterns.
- Heterogeneity: the majority of current GNNs assume homogeneous graphs, while real-world graphs often contain multiple types of nodes and edges.
- Dynamicity: graphs are dynamic in nature, in that nodes or edges can appear or disappear and node/edge inputs can change over time.