Transformer-empowered Multi-modal Item Embedding For Enhanced Image Search In E-commerce

Chang Liu, Peng Hou, Anxiang Zeng, Han Yu. Proceedings of the AAAI Conference on Artificial Intelligence 2024 – 3 citations

[Paper]
Tags: AAAI, Image Retrieval

Over the past decade, significant advances have been made in the field of image search for e-commerce applications. Traditional image-to-image retrieval models, which focus solely on image details such as texture, tend to overlook useful semantic information contained within the images. As a result, the retrieved products might possess similar image details, but fail to fulfil the user’s search goals. Moreover, the use of image-to-image retrieval models for products containing multiple images results in significant online product feature storage overhead and complex mapping implementations. In this paper, we report the design and deployment of the proposed Multi-modal Item Embedding Model (MIEM) to address these limitations. It is capable of utilizing both textual information and multiple images about a product to construct meaningful product features. By leveraging semantic information from images, MIEM effectively supplements the image search process, improving the overall accuracy of retrieval results. MIEM has become an integral part of the Shopee image search platform. Since its deployment in March 2023, it has achieved a remarkable 9.90% increase in terms of clicks per user and a 4.23% boost in terms of orders per user for the image search feature on the Shopee e-commerce platform.
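The abstract describes fusing a product's text and its multiple images into a single item embedding so that only one vector per product needs to be stored online. The sketch below is a minimal illustration of that idea, not the authors' MIEM implementation: it assumes pre-extracted image and text features (e.g. from frozen vision and text backbones), projects them into a shared space, and fuses them with a small Transformer encoder through a learnable item token. All module names, dimensions, and the pooling choice are hypothetical.

```python
# Hedged sketch of a multi-modal item embedder; NOT the paper's MIEM code.
import torch
import torch.nn as nn


class MultiModalItemEmbedder(nn.Module):
    def __init__(self, img_dim=512, txt_dim=768, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        # Project per-image and text features into a shared fusion space.
        self.img_proj = nn.Linear(img_dim, d_model)
        self.txt_proj = nn.Linear(txt_dim, d_model)
        # Learnable [ITEM] token whose output state becomes the item embedding.
        self.item_token = nn.Parameter(torch.randn(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.fusion = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, image_feats, text_feat):
        # image_feats: (batch, n_images, img_dim) pre-extracted image features
        # text_feat:   (batch, txt_dim)           pre-extracted text feature
        b = image_feats.size(0)
        tokens = torch.cat(
            [
                self.item_token.expand(b, -1, -1),      # (b, 1, d_model)
                self.txt_proj(text_feat).unsqueeze(1),  # (b, 1, d_model)
                self.img_proj(image_feats),             # (b, n_images, d_model)
            ],
            dim=1,
        )
        fused = self.fusion(tokens)
        # L2-normalized [ITEM] state -> one vector per product, so the online
        # index stores a single embedding regardless of how many images exist.
        return nn.functional.normalize(fused[:, 0], dim=-1)


if __name__ == "__main__":
    model = MultiModalItemEmbedder()
    imgs = torch.randn(2, 5, 512)   # 2 products, 5 images each
    txt = torch.randn(2, 768)       # 2 product titles/descriptions
    print(model(imgs, txt).shape)   # torch.Size([2, 256])
```

In a retrieval setting, query-image embeddings would be matched against these item embeddings by cosine similarity; collapsing all of a product's images and text into one vector is what reduces the storage and mapping overhead mentioned in the abstract.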

Similar Work