
Learning Deep Representations of Medical Images Using Siamese CNNs with Application to Content-Based Image Retrieval

Yu-An Chung, Wei-Hung Weng. arXiv 2017 – 61 citations

Tags: Datasets, Image Retrieval, Scalability, Supervised

Deep neural networks have been investigated for learning latent representations of medical images, yet most studies limit their approach to a single supervised convolutional neural network (CNN), which usually relies heavily on a large-scale annotated dataset for training. To learn image representations with less supervision, we propose a deep Siamese CNN (SCNN) architecture that can be trained with only binary image pair information. We evaluated the learned image representations on the task of content-based medical image retrieval using a publicly available multiclass diabetic retinopathy fundus image dataset. The experimental results show that our proposed deep SCNN is comparable to the state-of-the-art single supervised CNN while requiring much less supervision for training.
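To illustrate the "binary image pair" training signal described above: Siamese networks are commonly trained with a pairwise contrastive loss that pulls embeddings of same-class pairs together and pushes different-class pairs at least a margin apart. The paper's exact loss function is not specified in this summary, so the formulation and margin value below are assumptions; this is a minimal numpy sketch of the standard contrastive loss, not the authors' implementation.

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, similar, margin=1.0):
    """Pairwise contrastive loss over batches of embeddings.

    emb_a, emb_b : (n, d) arrays, one embedding per image in each pair
    similar      : (n,) binary labels, 1 = same class, 0 = different class
    margin       : minimum desired distance for dissimilar pairs (assumed value)
    """
    d = np.linalg.norm(emb_a - emb_b, axis=1)          # Euclidean distance per pair
    loss_sim = similar * d ** 2                        # similar pairs: shrink distance
    loss_dis = (1 - similar) * np.maximum(margin - d, 0.0) ** 2  # dissimilar: push past margin
    return float(np.mean(loss_sim + loss_dis))

# Two pairs: one similar (identical embeddings, zero loss contribution)
# and one dissimilar pair at distance 0.5, inside the margin of 1.0.
a = np.array([[0.0, 0.0], [0.0, 0.0]])
b = np.array([[0.0, 0.0], [0.5, 0.0]])
labels = np.array([1, 0])
print(contrastive_loss(a, b, labels))  # → 0.125
```

At retrieval time, the trained embedding network is applied to a query image, and database images are ranked by distance in this learned space, which is what makes the representation useful for content-based image retrieval.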

Similar Work