Exploiting Twitter As Source Of Large Corpora Of Weakly Similar Pairs For Semantic Sentence Embeddings

Marco di Giovanni, Marco Brambilla. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021 – 8 citations

Tags: Datasets, EMNLP, Evaluation, Supervised, Unsupervised

Semantic sentence embeddings are usually built in a supervised fashion, by minimizing the distance between embeddings of sentence pairs that annotators have labelled as semantically similar. Since large labelled datasets are rare and expensive to produce, particularly for non-English languages, recent studies focus on unsupervised approaches that require only unpaired input sentences. We instead propose a language-independent approach to build large datasets of weakly similar pairs of informal texts, without manual human effort, by exploiting Twitter’s intrinsic signals of relatedness: replies and quotes of tweets. We use the collected pairs to train a Transformer model with triplet-like structures, and we test the generated embeddings on Twitter NLP similarity tasks (PIT and TURL) and on STSb. We also introduce four new sentence-ranking evaluation benchmarks of informal texts, carefully extracted from the initial collections of tweets, showing not only that our best model learns classical Semantic Textual Similarity, but also that it excels on tasks where pairs of sentences are not exact paraphrases. Ablation studies reveal that increasing the corpus size positively influences the results, even at 2M samples, suggesting that larger collections of tweets still do not contain redundant information about semantic similarities.
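The paper itself does not include code on this page; as a rough illustration of the triplet-style training it describes, the sketch below uses the sentence-transformers library with `TripletLoss`, treating a tweet as the anchor, its reply or quote as the weakly similar positive, and a reply sampled from an unrelated pair as the negative. The placeholder pairs, the base model choice, and the random-negative strategy are all assumptions for illustration, not the authors' exact setup.

```python
import random
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Hypothetical corpus of weakly similar pairs (tweet, reply_or_quote).
# In the paper these are harvested from Twitter; here they are placeholders.
pairs = [
    ("the match was incredible tonight", "totally agree, best game this season"),
    ("new phone battery dies in 3 hours", "same here, the update ruined it"),
    # ... millions more pairs in the real corpus
]

# Build triplets: anchor = tweet, positive = its reply/quote,
# negative = a reply drawn from a different pair (assumed strategy).
examples = []
for i, (anchor, positive) in enumerate(pairs):
    j = random.randrange(len(pairs))
    while j == i:
        j = random.randrange(len(pairs))
    examples.append(InputExample(texts=[anchor, positive, pairs[j][1]]))

# Any Transformer encoder can serve as the starting point; a multilingual
# checkpoint fits the language-independent framing of the paper.
model = SentenceTransformer("distilbert-base-multilingual-cased")
loader = DataLoader(examples, shuffle=True, batch_size=16)
loss = losses.TripletLoss(model=model)  # pushes d(a, p) below d(a, n) by a margin

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=100)

# After training, embeddings of semantically related tweets lie close together.
embedding = model.encode("the match was incredible tonight")
```

In practice the negative-sampling scheme matters: random negatives are cheap but weak, while in-batch or hard negatives typically yield stronger embeddings; the sketch keeps the simplest variant for clarity.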

Similar Work