
VXP: Voxel-Cross-Pixel Large-scale Image-LiDAR Place Recognition

Yun-Jin Li, Mariia Gladkova, Yan Xia, Rui Wang, Daniel Cremers. arXiv 2024 – 4 citations

Tags: Evaluation, Multimodal, Retrieval, Self-Supervised, Supervised, Tools & Libraries

Cross-modal place recognition methods are flexible alternatives to GPS that remain usable under varying environmental conditions and sensor setups. The task is non-trivial, however, since extracting consistent and robust global descriptors from different modalities is challenging. To tackle this issue, we propose Voxel-Cross-Pixel (VXP), a novel camera-to-LiDAR place recognition framework that enforces local similarities in a self-supervised manner and effectively brings global context from images and LiDAR scans into a shared feature space. Specifically, VXP is trained in three stages: first, we deploy a visual transformer to compactly represent input images; second, we establish local correspondences between the image-based and point-cloud-based feature spaces using our novel geometric alignment module; finally, we aggregate these local similarities into an expressive shared latent space. Extensive experiments on three benchmarks (Oxford RobotCar, ViViD++, and KITTI) demonstrate that our method surpasses state-of-the-art cross-modal retrieval by a large margin. Our evaluations show that the proposed method is accurate, efficient, and lightweight. Our project page is available at https://yunjinli.github.io/projects-vxp/
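
The abstract's three-stage pipeline can be made concrete with a short sketch. The PyTorch snippet below is a minimal illustration under stated assumptions, not the authors' released implementation: stage 1 is a toy visual transformer over image patches, stage 2 projects voxel centers into the image plane with camera intrinsics `K` and extrinsics `T` to pair 2D and 3D local features, and stage 3 pools each branch into a global descriptor. All names (`VXPSketch`, `GeM`, `project_voxels`), shapes, and the MSE local-similarity objective are assumptions inferred from the abstract.

```python
# Minimal sketch of a VXP-style camera-to-LiDAR place recognition pipeline.
# Hypothetical: module names, shapes, and losses are inferred from the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GeM(nn.Module):
    """Generalized-mean pooling over local features (B, N, C) -> (B, C)."""
    def __init__(self, p: float = 3.0, eps: float = 1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))
        self.eps = eps

    def forward(self, x):
        # GeM assumes non-negative activations; negatives are clamped.
        return x.clamp(min=self.eps).pow(self.p).mean(dim=1).pow(1.0 / self.p)


class VXPSketch(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        # Stage 1: a (toy) visual transformer compactly encodes the image.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.img_encoder = nn.TransformerEncoderLayer(
            d_model=dim, nhead=8, batch_first=True)
        # Point-cloud branch: per-voxel MLP producing local 3D features.
        self.voxel_encoder = nn.Sequential(
            nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        # Stage 3: aggregate local features into one global descriptor.
        self.pool = GeM()

    @staticmethod
    def project_voxels(centers, K, T, hw):
        """Stage 2 (geometric alignment): project voxel centers into the
        image plane via extrinsics T (4x4) and intrinsics K (3x3)."""
        pts = T[:3, :3] @ centers.T + T[:3, 3:4]          # (3, V), camera frame
        uv = (K @ pts).T                                   # (V, 3)
        uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)        # perspective divide
        valid = (pts[2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < hw[1]) \
            & (uv[:, 1] >= 0) & (uv[:, 1] < hw[0])
        return uv, valid

    def forward(self, image, voxel_centers, K, T):
        # image: (1, 3, H, W); voxel_centers: (V, 3) in the LiDAR frame.
        fmap = self.patch_embed(image)                     # (1, C, h, w)
        B, C, h, w = fmap.shape
        tokens = self.img_encoder(fmap.flatten(2).transpose(1, 2))
        fmap = tokens.transpose(1, 2).reshape(B, C, h, w)

        f3d = self.voxel_encoder(voxel_centers)            # (V, C)
        H, W = image.shape[-2:]
        uv, valid = self.project_voxels(voxel_centers, K, T, (H, W))

        # Bilinearly sample 2D features at the projected voxel locations.
        grid = torch.stack([uv[:, 0] / (W - 1), uv[:, 1] / (H - 1)], -1) * 2 - 1
        f2d = F.grid_sample(fmap, grid.view(1, -1, 1, 2), align_corners=True)
        f2d = f2d.view(C, -1).T                            # (V, C)

        # Self-supervised local similarity: pull paired 2D/3D features together
        # (treating the sampled 2D features as detached targets is an assumption).
        local_loss = F.mse_loss(f3d[valid], f2d.detach()[valid])

        # Global descriptors for cross-modal retrieval.
        g_img = F.normalize(self.pool(tokens), dim=-1)
        g_pcd = F.normalize(self.pool(f3d.unsqueeze(0)), dim=-1)
        return g_img, g_pcd, local_loss
```

In this reading, the 2D features sampled at projected voxel locations serve as targets for the 3D branch, so both modalities become locally comparable before they are aggregated into global descriptors and matched by nearest-neighbour search.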

Similar Work