
Generalized Multi-view Embedding For Visual Recognition And Cross-modal Retrieval

Guanqun Cao, Alexandros Iosifidis, Ke Chen, Moncef Gabbouj. IEEE Transactions on Cybernetics, 2016 – 107 citations

Tags: Evaluation, Image Retrieval, Multimodal Retrieval, Supervised

In this paper, the problem of multi-view embedding from different visual cues and modalities is considered. We propose a unified solution for subspace learning methods using the Rayleigh quotient, which is extensible to multiple views, supervised learning, and non-linear embeddings. Numerous methods, including Canonical Correlation Analysis, Partial Least Squares regression, and Linear Discriminant Analysis, are studied using specific intrinsic and penalty graphs within the same framework. Non-linear extensions based on kernels and (deep) neural networks are derived, achieving better performance than the linear ones. Moreover, a novel Multi-view Modular Discriminant Analysis (MvMDA) is proposed by taking the view difference into consideration. We demonstrate the effectiveness of the proposed multi-view embedding methods on visual object recognition and cross-modal image retrieval, and obtain superior results in both applications compared to related methods.
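The unifying idea in the abstract is that each of these embeddings maximizes a Rayleigh-quotient objective, which reduces to a generalized eigenvalue problem. Below is a minimal sketch of the simplest instance, two-view linear CCA, written in that form. This is not the authors' implementation: the function name `two_view_cca`, the ridge term `reg`, and the use of NumPy/SciPy are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def two_view_cca(X, Y, dim=2, reg=1e-6):
    """Two-view CCA posed as a generalized (Rayleigh-quotient) eigenproblem.

    X: (n, dx) samples for view 1; Y: (n, dy) samples for view 2.
    Returns projection matrices Wx (dx, dim) and Wy (dy, dim).
    Hypothetical sketch; `reg` is a small ridge for numerical stability.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n, dx = X.shape
    dy = Y.shape[1]

    # Within-view and cross-view covariance blocks.
    Cxx = X.T @ X / n + reg * np.eye(dx)
    Cyy = Y.T @ Y / n + reg * np.eye(dy)
    Cxy = X.T @ Y / n

    # Stack the views: maximize w^T A w subject to w^T B w = 1,
    # the Rayleigh-quotient form that also covers PLS and LDA
    # under different choices of A and B.
    A = np.block([[np.zeros((dx, dx)), Cxy],
                  [Cxy.T, np.zeros((dy, dy))]])
    B = np.block([[Cxx, np.zeros((dx, dy))],
                  [np.zeros((dy, dx)), Cyy]])

    # Generalized eigendecomposition A w = lambda B w;
    # keep the eigenvectors with the largest eigenvalues.
    vals, vecs = eigh(A, B)
    order = np.argsort(vals)[::-1][:dim]
    W = vecs[:, order]
    return W[:dx], W[dx:]
```

Changing the two matrices recovers the other methods discussed in the paper's framework: setting B to the identity yields a PLS-style objective, while building A and B as intrinsic and penalty graph scatter matrices over class labels gives LDA-like discriminant embeddings; kernel and neural-network variants replace the linear projections with non-linear maps.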
