
Implementation Notes For The Soft Cosine Measure

Vít Novotný. Proceedings of the 27th ACM International Conference on Information and Knowledge Management (CIKM 2018) – 31 citations

Tags: CIKM, Text Retrieval, Tools & Libraries

The standard bag-of-words vector space model (VSM) is efficient and ubiquitous in information retrieval, but it underestimates the similarity of documents that share the same meaning yet use different terminology. To overcome this limitation, Sidorov et al. proposed the Soft Cosine Measure (SCM), which incorporates term similarity relations. Charlet and Damnati showed that the SCM is highly effective in question answering (QA) systems. However, the orthonormalization algorithm proposed by Sidorov et al. has an impractical time complexity of \(\mathcal{O}(n^4)\), where n is the size of the vocabulary. In this paper, we prove a tighter lower worst-case time complexity bound of \(\mathcal{O}(n^3)\). We also present an algorithm for computing the similarity between documents and show that its worst-case time complexity is \(\mathcal{O}(1)\) given realistic conditions. Lastly, we describe an implementation of the SCM in general-purpose vector databases such as Annoy and Faiss, and in the inverted indices of text search engines such as Apache Lucene and ElasticSearch. Our results enable the deployment of the SCM in real-world information retrieval systems.
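
The SCM replaces the plain dot product of the standard cosine similarity with a bilinear form defined by a term similarity matrix S, so related terms contribute to the similarity even when the documents share no words. The following is a minimal sketch of that formula using a dense matrix; the toy vocabulary, vectors, and similarity values are illustrative assumptions, not data or code from the paper.

```python
import numpy as np

def soft_cosine_measure(x: np.ndarray, y: np.ndarray, S: np.ndarray) -> float:
    """Soft Cosine Measure: x^T S y / (sqrt(x^T S x) * sqrt(y^T S y))."""
    numerator = x @ S @ y
    denominator = np.sqrt(x @ S @ x) * np.sqrt(y @ S @ y)
    return float(numerator / denominator)

# Toy vocabulary: ["play", "game", "weather"]; "play" and "game" are related terms.
S = np.array([
    [1.0, 0.8, 0.0],
    [0.8, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
doc1 = np.array([1.0, 0.0, 0.0])  # bag-of-words vector for "play"
doc2 = np.array([0.0, 1.0, 0.0])  # bag-of-words vector for "game"

print(soft_cosine_measure(doc1, doc2, S))          # ~0.8 with term similarities
print(soft_cosine_measure(doc1, doc2, np.eye(3)))  # 0.0 under the standard VSM
```

With the identity matrix, the SCM reduces to the ordinary cosine similarity, which is why the standard VSM assigns zero similarity to the two one-word documents above; with a non-trivial S, their relatedness is recovered. The paper's contribution concerns computing this measure efficiently at vocabulary scale, rather than via the dense product shown here.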

Similar Work