
Backdoor Attack On Hash-based Image Retrieval Via Clean-label Data Poisoning

Kuofeng Gao, Jiawang Bai, Bin Chen, Dongxian Wu, Shu-Tao Xia. arXiv 2021

[Paper] [Code]
Tags: arXiv, Has Code, Image Retrieval, Supervised

A backdoored deep hashing model is expected to behave normally on original query images but to return images with the target label whenever a specific trigger pattern is present. To this end, we propose the confusing perturbations-induced backdoor attack (CIBA). It injects a small number of poisoned images with correct labels into the training data, which makes the attack hard to detect. To craft the poisoned images, we first propose confusing perturbations that disturb the hash-code learning, so the hashing model is forced to rely more on the trigger. The confusing perturbations are imperceptible and are generated by optimizing the intra-class dispersion and inter-class shift in the Hamming space. We then employ a targeted adversarial patch as the backdoor trigger to further improve the attack performance. Extensive experiments verify the effectiveness of the proposed CIBA. Our code is available at https://github.com/KuofengGao/CIBA.
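The abstract's key ingredient, optimizing intra-class dispersion and inter-class shift in the Hamming space under an imperceptibility constraint, can be illustrated with a toy sketch. This is not the authors' implementation: the one-layer `tanh` "hashing model", the specific loss terms (squared distance of a sample's relaxed code from its class-center code and from the batch mean code), and all shapes and hyperparameters are illustrative assumptions; the real attack would run PGD against a trained deep hashing network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a hashing model: relaxed codes h(x) = tanh(W @ x);
# the binary hash is sign(h). (Assumption: a real deep model goes here.)
D_IN, K = 32, 16                       # input dim, hash-code length
W = rng.normal(size=(K, D_IN)) / np.sqrt(D_IN)

def codes(X):
    return np.tanh(X @ W.T)            # (n, K) relaxed codes in (-1, 1)

def confusing_perturbations(X, center, eps=0.1, step=0.01, iters=100):
    """PGD-style sketch: push each sample's relaxed code away from its
    class-center code (inter-class shift) and away from the batch mean
    code (intra-class dispersion), while keeping the perturbation inside
    an L-inf ball of radius eps (imperceptibility)."""
    delta = np.zeros_like(X)
    for _ in range(iters):
        H = np.tanh((X + delta) @ W.T)     # (n, K) perturbed codes
        sech2 = 1.0 - H ** 2               # tanh'(pre-activation)
        mean_code = H.mean(axis=0)
        # Ascend on ||h - center||^2 + ||h - mean||^2 (larger Hamming
        # distance = more confusion in the learned hash space).
        g_code = 2.0 * (H - center) + 2.0 * (H - mean_code)
        g_x = (sech2 * g_code) @ W         # chain rule back to the input
        delta = np.clip(delta + step * np.sign(g_x), -eps, eps)
    return delta

# Clean samples of the target class and their (toy) class-center code.
X = rng.normal(size=(8, D_IN))
center = np.sign(codes(X).mean(axis=0))

delta = confusing_perturbations(X, center)
before = np.sign(codes(X))
after = np.sign(codes(X + delta))
ham_before = (before != center).sum(axis=1).mean()
ham_after = (after != center).sum(axis=1).mean()
```

After the ascent, the perturbed samples' binary codes sit farther from their own class center on average, which is the "confusing" effect the paper exploits so that the model leans on the trigger patch instead of natural class features.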

Similar Work