
End-to-end Semantic Object Detection With Cross-modal Alignment

Silvan Ferreira, Allan Martins, Ivanovitch Silva. arXiv 2023 – 0 citations

[Paper]   Search on Google Scholar   Search on Semantic Scholar
Image Retrieval Self-Supervised

Traditional semantic image search methods aim to retrieve images that match the meaning of a text query. However, these methods typically operate on the whole image, without considering where objects are located within it. This paper extends existing object detection models to semantic image search by aligning object proposals with text queries, so that retrieval targets objects within images rather than whole images. The model uses a single feature extractor, a pre-trained Convolutional Neural Network, for the image and a transformer encoder for the text query. A Region Proposal Network (RPN) generates object proposals, and proposal-text alignment is learned with contrastive learning, producing a score for each proposal that reflects its semantic alignment with the text query. The model is trained end-to-end, providing an efficient and effective solution that retrieves images matching the meaning of the text query while also returning semantically relevant object proposals.
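
Below is a minimal, hypothetical sketch (not from the paper) of the kind of proposal-text alignment scoring the abstract describes: ROI-pooled proposal features from an RPN and a pooled transformer text embedding are projected into a shared space and scored by scaled cosine similarity. The module name, feature dimensions, and temperature are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ProposalTextAlignment(nn.Module):
    """Hypothetical sketch: project proposal and text features into a shared
    embedding space and score each proposal against the text query."""

    def __init__(self, proposal_dim=1024, text_dim=512, embed_dim=256, temperature=0.07):
        super().__init__()
        self.proposal_proj = nn.Linear(proposal_dim, embed_dim)  # assumed ROI feature size
        self.text_proj = nn.Linear(text_dim, embed_dim)          # assumed text encoder size
        self.temperature = temperature                           # contrastive scaling factor

    def forward(self, proposal_feats, text_feat):
        # proposal_feats: (num_proposals, proposal_dim) from the RPN + ROI pooling
        # text_feat:      (text_dim,) pooled output of the transformer text encoder
        p = F.normalize(self.proposal_proj(proposal_feats), dim=-1)
        t = F.normalize(self.text_proj(text_feat), dim=-1)
        # One alignment score per proposal; higher means better semantic match.
        return (p @ t) / self.temperature


# Usage example: score 100 candidate proposals against one text query embedding.
scores = ProposalTextAlignment()(torch.randn(100, 1024), torch.randn(512))
print(scores.shape)  # torch.Size([100])
```

In a contrastive training setup, scores of matching proposal-text pairs would be pushed up relative to non-matching pairs (e.g. with a cross-entropy or InfoNCE-style loss over the score vector); the exact loss used in the paper is not specified here.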

Similar Work