
Towards Robust And Truly Large-scale Audio-sheet Music Retrieval

Luis Carvalho, Gerhard Widmer. IEEE 6th International Conference on Multimedia Information Processing and Retrieval (MIPR), 2023 – 2 citations

Tags: Neural Hashing, Scalability

A range of applications in multi-modal music information retrieval is centred around the problem of connecting large collections of sheet music (images) to corresponding audio recordings, that is, identifying pairs of audio and score excerpts that refer to the same musical content. A typical recent approach to this task employs cross-modal deep learning architectures to learn joint embedding spaces that link the two distinct modalities - audio and sheet music images. While there has been steady improvement on this front over the past years, a number of open problems still prevent large-scale deployment of this methodology. In this article we examine the current state of audio-sheet music retrieval via deep learning methods. We first identify a set of main challenges on the road towards robust and large-scale cross-modal music retrieval in real scenarios. We then highlight the steps we have taken so far to address some of these challenges, documenting step-by-step improvement along several dimensions. We conclude by analysing the remaining challenges and presenting ideas for solving them, in order to pave the way to a unified and robust methodology for cross-modal music retrieval.
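To make the joint-embedding idea concrete, below is a minimal sketch of cross-modal retrieval with two modality-specific encoders mapping audio spectrogram excerpts and sheet-music image snippets into a shared space, followed by cosine-similarity ranking. The encoder architecture, embedding size, and input shapes are illustrative assumptions, not the networks used in the paper.

```python
# Minimal sketch of cross-modal audio-sheet retrieval in a joint embedding space.
# Architectures, dimensions, and names are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ExcerptEncoder(nn.Module):
    """Small CNN mapping a 1-channel excerpt (spectrogram or score image)
    to an L2-normalised embedding vector."""

    def __init__(self, emb_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, emb_dim),
        )

    def forward(self, x):
        # Normalising embeddings makes the dot product a cosine similarity.
        return F.normalize(self.net(x), dim=-1)


audio_encoder = ExcerptEncoder()   # encodes audio spectrogram excerpts
score_encoder = ExcerptEncoder()   # encodes sheet-music image snippets

# Toy data: 8 audio queries and a database of 100 score snippets (random tensors).
audio_queries = torch.randn(8, 1, 92, 42)
score_database = torch.randn(100, 1, 160, 200)

with torch.no_grad():
    a_emb = audio_encoder(audio_queries)     # shape (8, 32)
    s_emb = score_encoder(score_database)    # shape (100, 32)

# Cross-modal retrieval: rank score snippets by cosine similarity to each query.
similarity = a_emb @ s_emb.T                 # shape (8, 100)
top5 = similarity.topk(5, dim=1).indices     # indices of the 5 best matches
print(top5)
```

In practice the two encoders would be trained jointly, for example with a pairwise ranking or triplet loss on matching audio-score pairs, so that corresponding excerpts end up close in the shared space; the snippet above only illustrates the retrieval step once such embeddings are available.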
