Embedding Compression with Hashing for Efficient Representation Learning in Large-Scale Graph

Chin-Chia Michael Yeh, Mengting Gu, Yan Zheng, Huiyuan Chen, Javid Ebrahimi, Zhongfang Zhuang, Junpeng Wang, Liang Wang, Wei Zhang. arXiv 2022

[Paper]    
ARXIV Deep Learning Graph Supervised

Graph neural networks (GNNs) are deep learning models designed specifically for graph data, and they typically rely on node features as input to the first layer. When applying such networks to graphs without node features, one can either extract simple graph-based node features (e.g., node degree) or learn the input node representations (i.e., embeddings) while training the network. The latter approach, training the node embeddings, usually leads to better performance, but the number of parameters associated with the embeddings grows linearly with the number of nodes. It is therefore impractical to train the input node embeddings together with GNNs within graphics processing unit (GPU) memory in an end-to-end fashion when dealing with industrial-scale graph data. Inspired by embedding compression methods developed for natural language processing (NLP) tasks, we develop a node embedding compression method in which each node is compactly represented with a bit vector instead of a floating-point vector. The parameters used in the compression method can be trained together with the GNN. We show that the proposed node embedding compression method achieves superior performance compared to the alternatives.
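As a rough illustration of the core idea (not the paper's exact construction), the PyTorch sketch below shows how replacing a trainable floating-point embedding table with fixed per-node bit vectors plus a small trainable decoder decouples the trainable parameter count from the number of nodes. The class name `HashedNodeEmbedding`, the random bit-code assignment, and the single linear decoder are all hypothetical choices made for this sketch.

```python
import torch
import torch.nn as nn

class HashedNodeEmbedding(nn.Module):
    """Sketch: each node is represented by a fixed bit vector; only the
    small decoder that turns codes into dense embeddings is trained with
    the GNN, so trainable parameters no longer grow with node count."""

    def __init__(self, num_nodes: int, code_bits: int, embed_dim: int):
        super().__init__()
        # Fixed random binary code per node (a buffer, not a parameter).
        # Stored as float here for simplicity; a real implementation
        # would bit-pack these to code_bits / 8 bytes per node.
        codes = torch.randint(0, 2, (num_nodes, code_bits)).float()
        self.register_buffer("codes", codes)
        # Trainable decoder: its parameter count depends only on
        # code_bits and embed_dim, not on num_nodes.
        self.decoder = nn.Linear(code_bits, embed_dim)

    def forward(self, node_ids: torch.Tensor) -> torch.Tensor:
        # Look up each node's bit vector and decode it into the dense
        # input feature expected by the first GNN layer.
        return self.decoder(self.codes[node_ids])

# Example: one million nodes, 64-bit codes, 128-dim GNN inputs.
emb = HashedNodeEmbedding(num_nodes=1_000_000, code_bits=64, embed_dim=128)
features = emb(torch.tensor([0, 42, 999_999]))  # shape: (3, 128)
```

Under this sketch, per-node trainable storage drops from 4 × embed_dim bytes of floats to code_bits bits of fixed codes, and gradients flow only through the decoder, whose size is independent of the graph.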

Similar Work