
Graph Pattern Loss Based Diversified Attention Network For Cross-modal Retrieval

Xueying Chen, Rong Zhang, Yibing Zhan. Graph Pattern Loss Based Diversified Attention Network For Cross-Modal Retrieval. 2020 IEEE International Conference on Image Processing (ICIP), IEEE, 2020.

[Paper]
Tags: Datasets, Graph Based ANN, Multimodal Retrieval, Supervised, Unsupervised

Cross-modal retrieval aims to enable a flexible retrieval experience across multimedia data such as image, video, text, and audio. A core task of unsupervised approaches is to mine the correlations among different object representations so as to achieve satisfactory retrieval performance without requiring expensive labels. In this paper, we propose a Graph Pattern Loss based Diversified Attention Network (GPLDAN) for unsupervised cross-modal retrieval that deeply analyzes the correlations among representations. First, we propose a diversified attention feature projector that considers the interaction between different representations to generate multiple representations of an instance. Then, we design a novel graph pattern loss to explore the correlations among different representations; in this graph, all possible distances between different representations are considered. In addition, a modality classifier is added to explicitly declare the corresponding modality of features before fusion and to guide the network toward stronger discrimination ability. We evaluate GPLDAN on four public datasets. The experimental results demonstrate the effectiveness and competitiveness of GPLDAN compared with state-of-the-art cross-modal retrieval methods.
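The graph pattern loss treats every representation produced by the diversified attention heads as a node in a fully connected graph and accounts for all pairwise distances among them. As a minimal PyTorch sketch of that idea (the function name, margin term, and mean-pooling choices below are illustrative assumptions, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def graph_pattern_loss(img_heads, txt_heads, margin=0.2):
    """Hedged sketch of a graph-pattern-style loss.

    img_heads, txt_heads: (batch, heads, dim) tensors holding the
    multiple diversified-attention representations of each instance
    for the image and text modalities.
    """
    b, h, d = img_heads.shape
    # Each instance contributes 2*h nodes to its fully connected graph.
    nodes = torch.cat([img_heads, txt_heads], dim=1)   # (b, 2h, d)
    nodes = F.normalize(nodes, dim=-1)

    # Intra-instance edges: all pairwise distances inside one graph.
    intra = torch.cdist(nodes, nodes, p=2)             # (b, 2h, 2h)
    pull = intra.mean()

    # Inter-instance edges: mean-pooled node per instance, pushed apart.
    centers = nodes.mean(dim=1)                        # (b, d)
    inter = torch.cdist(centers, centers, p=2)         # (b, b)
    off_diag = inter[~torch.eye(b, dtype=torch.bool)]
    push = F.relu(margin - off_diag).mean()

    return pull + push
```

In this sketch, minimizing the mean over all intra-instance edges is what "considering all possible distances between different representations" would look like in code, while the hinge term on inter-instance distances keeps representations of different instances separated.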

Similar Work