Zero-shot Graph Embedding (ZGE) refers to learning discriminative graph embeddings when the labeled data cannot cover all classes (also known as the completely-imbalanced label setting). Here, "zero-shot" means handling nodes that come from unseen classes. This problem has practical significance, especially when the graph is typically large and node labels can take on many values.
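To make the completely-imbalanced setting concrete, here is a minimal sketch of how such a label split can be constructed: the training labels cover only the "seen" classes, while nodes of the held-out "unseen" classes receive no labels at training time. The function name `zero_shot_split` and the toy labels are illustrative, not part of the original papers.

```python
import numpy as np

def zero_shot_split(labels, unseen_classes):
    """Build a completely-imbalanced (zero-shot) label split.

    Returns a boolean mask over nodes: True where the node's label may be
    used for training (seen classes), False for nodes of unseen classes,
    whose labels are hidden at training time.
    """
    labels = np.asarray(labels)
    return ~np.isin(labels, list(unseen_classes))

# Toy example: 3 classes, class 2 is treated as unseen.
labels = [0, 0, 1, 1, 2, 2]
train_mask = zero_shot_split(labels, unseen_classes={2})
print(train_mask.tolist())  # [True, True, True, True, False, False]
```

At evaluation time, embeddings are learned with only the masked labels, and classification quality is then measured on all classes, including the unseen ones.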
We propose a shallow method, RSDNE [1], and a GNN method, RECT [2]. Our methods generally perform best in this zero-shot label setting; in particular, RECT outperforms GCN by 20%–300%. We also note that even in the balanced label setting, our methods still achieve performance comparable to state-of-the-art semi-supervised methods. We therefore recommend our methods whenever label quality cannot be guaranteed.
Why does RECT work? In [3], we show that its core part, RECT-L, actually learns a prototypical model from the labeled data of seen classes, which explains its effectiveness on seen classes. On the other hand, the learned prototypical model maps data from the raw input space into a semantic space, as ZSL methods do. As validated by many ZSL methods, this enables transferring the supervised knowledge of seen classes to unseen classes, which explains its effectiveness on unseen classes.
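The prototypical view above can be sketched schematically: seen-class prototypes are the mean feature vectors of labeled nodes, and any node (including one from an unseen class, once its semantic attributes are available) can be assigned to its nearest prototype in the semantic space. This is a simplified illustration of the idea, not the actual RECT-L implementation; all names and toy vectors here are hypothetical.

```python
import numpy as np

def class_prototypes(feats, labels, train_mask):
    """One prototype per seen class: the mean semantic-space vector
    of that class's labeled nodes."""
    feats, labels = np.asarray(feats), np.asarray(labels)
    return {int(c): feats[train_mask & (labels == c)].mean(axis=0)
            for c in np.unique(labels[train_mask])}

def nearest_prototype(x, protos):
    """Assign x to the class whose prototype is closest (Euclidean)."""
    return min(protos, key=lambda c: np.linalg.norm(np.asarray(x) - protos[c]))

# Toy example: two seen classes in a 2-D semantic space.
feats = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
labels = np.array([0, 0, 1, 1])
mask = np.array([True, True, True, True])
protos = class_prototypes(feats, labels, mask)
print(nearest_prototype([4.5, 5.2], protos))  # 1
```

The zero-shot transfer argument is that, because the mapping into this semantic space is trained only on seen classes yet generalizes as a metric space, unseen-class nodes can still be placed meaningfully relative to the learned prototypes.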
Future work: following the line of the zero-shot setting in the network/graph scenario, we can define many Zero-shot Graph Learning (ZGL) problems, e.g., zero-shot graph/node classification [7] and zero-shot graph completion/recommendation/prediction. We can also explore various applications of ZGL, such as in bioscience [5], chemistry, and materials science.