
Metaphor Interpretation Using Word Embeddings

Kfir Bar, Nachum Dershowitz, Lena Dankin. arXiv 2020


We suggest a model for metaphor interpretation using word embeddings trained over a relatively large corpus. Our system handles nominal metaphors, like “time is money”. It generates a ranked list of potential interpretations of given metaphors. Candidate meanings are drawn from collocations of the topic (“time”) and vehicle (“money”) components, automatically extracted from a dependency-parsed corpus. We explore adding candidates derived from word association norms (common human responses to cues). Our ranking procedure considers similarity between candidate interpretations and metaphor components, measured in a semantic vector space. Lastly, a clustering algorithm removes semantically related duplicates, thereby allowing other candidate interpretations to attain higher rank. We evaluate using different sets of annotated metaphors, with encouraging preliminary results.
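As an illustrative sketch only (not the authors' implementation), the ranking step described above can be approximated by scoring each candidate interpretation with its embedding similarity to both metaphor components, then greedily dropping near-duplicates as a crude stand-in for the clustering step. The `embed` table, vocabulary, candidate list, and deduplication threshold below are hypothetical placeholders; in practice the embeddings would come from a model trained on a large corpus and the candidates from collocations of the topic and vehicle.

```python
import numpy as np

# Toy stand-in for embeddings trained over a large corpus; all vectors here
# are random and only serve to make the example runnable.
rng = np.random.default_rng(0)
VOCAB = ["time", "money", "valuable", "scarce", "precious", "limited", "wasted"]
embed = {w: rng.standard_normal(100) for w in VOCAB}

def cos(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_interpretations(topic, vehicle, candidates, dedup_threshold=0.8):
    """Rank candidates for the nominal metaphor "<topic> is <vehicle>" by their
    average similarity to both components, then greedily remove candidates that
    are too similar to an already-kept one (a simple proxy for clustering)."""
    scored = sorted(
        ((0.5 * (cos(embed[c], embed[topic]) + cos(embed[c], embed[vehicle])), c)
         for c in candidates),
        reverse=True,
    )
    kept = []
    for score, cand in scored:
        if all(cos(embed[cand], embed[k]) < dedup_threshold for _, k in kept):
            kept.append((score, cand))
    return kept

# Candidates would be extracted from a dependency-parsed corpus as collocations
# of "time" and "money"; here they are illustrative words only.
print(rank_interpretations("time", "money",
                           ["valuable", "scarce", "precious", "limited", "wasted"]))
```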
