
Descriptor Learning For Omnidirectional Image Matching

Jonathan Masci, Davide Migliore, Michael M. Bronstein, Jürgen Schmidhuber. arXiv 2011

[Paper]    
ARXIV Supervised

Feature matching in omnidirectional vision systems is a challenging problem, mainly because complicated optical systems make the theoretical modelling of invariance and the construction of invariant feature descriptors hard or even impossible. In this paper, we propose learning invariant descriptors from a training set of similar and dissimilar descriptor pairs. We use the similarity-preserving hashing framework, in which descriptors are mapped to the Hamming space in a way that preserves their similarity on the training set. A neural network is used to solve the underlying optimization problem. Our approach outperforms not only straightforward descriptor matching, but also state-of-the-art similarity-preserving hashing methods.
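
The sketch below illustrates the general idea of similarity-preserving hashing with a neural network trained on similar/dissimilar pairs. It is not the authors' architecture or loss; the network sizes, the contrastive-style margin loss, and the synthetic descriptor pairs are all assumptions chosen only to make the example self-contained (PyTorch assumed).

```python
# Minimal sketch (not the paper's exact method): a siamese-style network maps
# descriptors to a relaxed binary code; a contrastive-style loss pulls similar
# pairs together and pushes dissimilar pairs apart. All sizes are illustrative.
import torch
import torch.nn as nn

class HashNet(nn.Module):
    """Maps a real-valued descriptor to a k-dimensional code in [-1, 1]."""
    def __init__(self, in_dim=128, code_bits=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.Tanh(),
            nn.Linear(256, code_bits), nn.Tanh(),  # tanh is a smooth proxy for sign()
        )

    def forward(self, x):
        return self.net(x)

def pair_loss(z1, z2, same, margin=64.0):
    """Squared-L2 proxy for Hamming distance: minimize it for similar pairs,
    enforce at least `margin` separation for dissimilar pairs."""
    d2 = ((z1 - z2) ** 2).sum(dim=1)
    pos = same * d2
    neg = (1.0 - same) * torch.clamp(margin - d2, min=0.0)
    return (pos + neg).mean()

# Toy training loop on random pairs (placeholder for real descriptor pairs).
torch.manual_seed(0)
model = HashNet(in_dim=128, code_bits=32)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    x1 = torch.randn(64, 128)
    x2 = torch.randn(64, 128)
    same = (torch.rand(64) < 0.5).float()   # 1 = similar pair, 0 = dissimilar
    loss = pair_loss(model(x1), model(x2), same)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At query time, binarize the output and compare codes by Hamming distance.
codes = model(torch.randn(10, 128)).sign() > 0
```

The tanh relaxation keeps the objective differentiable during training; thresholding the output at zero afterwards yields the binary codes used for matching in the Hamming space.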

Similar Work