Awesome Papers on Learning to Hash

šŸŒ Check Out Our Sister Site on Large Language Models

Explore our related resource on Large Language Models at LLM Bible.

šŸ· Browse Papers by Tag

Explore the latest research by browsing papers categorized by tags. Select a tag below to dive deeper into specific topics within the field of learning to hash:

AAAI ARXIV CNN COLT Case Study Cross Modal Dataset Deep Learning FOCS GAN Graph Has Code ICIP ICML Image Retrieval Independent LSH NEURIPS Quantisation SIGIR Self Supervised Streaming Data Supervised Survey Paper TMLR Text Retrieval Theory Unsupervised Video Retrieval Weakly Supervised

Understanding Learning to Hash

This website is a resource for researchers looking to explore, share, and discover recent advancements in the field of learning to hash. It serves as a living literature review, allowing readers to navigate models organized by a taxonomy based on key properties. Anyone can contribute to this growing resource by submitting new papers via a simple form. For details, see the Contributing section.

To start, visit the "All Papers" section from the right-hand menu and browse the full list of contributions.

Background: What is Learning to Hash?

At its core, Nearest Neighbour Search is the task of finding the most similar data points to a given query in a large dataset. This operation is fundamental to many fields, from Bioinformatics to Natural Language Processing (NLP) and Computer Vision.

Some notable applications include image retrieval, text retrieval, video retrieval, and cross-modal search.
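
To make the cost of the problem concrete, here is a minimal brute-force nearest neighbour search sketch. The NumPy setup, data sizes, and random vectors are purely illustrative assumptions, not taken from any paper on this site:

```python
import numpy as np

rng = np.random.default_rng(0)
database = rng.normal(size=(10_000, 128))  # 10k points with 128-dim features
query = rng.normal(size=(128,))

# Distance from the query to every single database point: O(N * d) work per query.
distances = np.linalg.norm(database - query, axis=1)

# The five nearest neighbours of the query.
top_k = np.argsort(distances)[:5]
print(top_k, distances[top_k])
```

This exhaustive linear scan over the whole database is exactly the cost that hashing-based methods aim to avoid.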

How Learning to Hash Works

Learning to hash is about learning binary hash codes that preserve the similarity between data points. These hash codes are then used to index the data into hash tables, making it possible to quickly retrieve items similar to a given query.

For example, in the image below, the system generates a hash code for an image of a tiger and compares it only to data points that fall in the same hash table bucket. This dramatically reduces the number of comparisons needed, making search much faster than a brute-force scan. Although there's a small trade-off in accuracy, the speed benefits are substantial in practice.

Locality Sensitive Hashing (LSH). Image from the PhD thesis of Sean Moran.
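
The sketch below illustrates the bucketing idea with a simple random-hyperplane (SimHash-style) LSH scheme. The 8-bit code length, NumPy setup, and toy data are assumptions for illustration rather than the method of any particular paper listed here:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
n_bits, dim = 8, 128
# One random hyperplane per bit of the hash code.
hyperplanes = rng.normal(size=(n_bits, dim))

def hash_code(x):
    # Each bit records which side of a hyperplane the point falls on.
    return tuple((hyperplanes @ x > 0).astype(int))

# Index the database: points sharing a hash code land in the same bucket.
database = rng.normal(size=(10_000, dim))
buckets = defaultdict(list)
for idx, point in enumerate(database):
    buckets[hash_code(point)].append(idx)

# At query time, only the query's bucket is scanned rather than all 10,000 points.
query = rng.normal(size=(dim,))
candidates = buckets[hash_code(query)]
if candidates:
    dists = np.linalg.norm(database[candidates] - query, axis=1)
    print("nearest candidate:", candidates[int(np.argmin(dists))])
```

Classic LSH draws the hyperplanes at random; learning to hash methods instead learn the projections (or a deep model) from data so that the resulting codes better preserve the similarity structure of the dataset.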

For more detailed introductory material, visit our Resources page.

Contribute to the Growing Research

The field of learning to hash is rapidly evolving, and this website aims to stay current by inviting contributions from researchers. If you come across new work in this area, you can easily add it by creating a markdown file and submitting a pull request through our GitHub page. For full instructions, visit the Contributing section.

Copyright Ā© Sean Moran 2024. All opinions are my own.