
Retrieve In Style: Unsupervised Facial Feature Transfer And Retrieval

Min Jin Chong, Wen-Sheng Chu, Abhishek Kumar, David Forsyth. 2021 IEEE/CVF International Conference on Computer Vision (ICCV) – 23 citations

[Paper]   Search on Google Scholar   Search on Semantic Scholar
Tags: ICCV, Unsupervised

We present Retrieve in Style (RIS), an unsupervised framework for facial feature transfer and retrieval on real images. Recent work has shown the capability to transfer local facial features by capitalizing on the disentanglement property of the StyleGAN latent space. RIS improves on existing art in the following ways: 1) Introducing more effective feature disentanglement to allow for challenging transfers (i.e., hair, pose) that were not shown possible in SoTA methods. 2) Eliminating the need for per-image hyperparameter tuning, and for computing a catalog over a large batch of images. 3) Enabling fine-grained face retrieval using disentangled facial features (e.g., eyes). To the best of our knowledge, this is the first work to retrieve face images at this fine level. 4) Demonstrating robust, natural editing on real images. Our qualitative and quantitative analyses show RIS achieves both high-fidelity feature transfers and accurate fine-grained retrievals on real images. We also discuss the responsible applications of RIS.
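To make the core idea more concrete, the following is a minimal, hypothetical sketch (not the authors' code) of what feature transfer and feature-level retrieval in a StyleGAN-like style space could look like. It assumes each facial feature (e.g., hair) corresponds to a subset of style channels; the mask, dimensions, and function names (`transfer_feature`, `retrieve_by_feature`) are illustrative placeholders, not the paper's actual catalog-free disentanglement procedure.

```python
import numpy as np

STYLE_DIM = 512   # per-layer style vector size, illustrative
NUM_LAYERS = 18   # number of style layers, illustrative

def transfer_feature(source_style, reference_style, channel_mask):
    """Copy the masked style channels from a reference face into a source face.

    source_style, reference_style: (NUM_LAYERS, STYLE_DIM) style codes
    channel_mask: boolean array of the same shape selecting the channels
                  assumed to control the target feature (e.g., hair).
    """
    edited = source_style.copy()
    edited[channel_mask] = reference_style[channel_mask]
    return edited

def retrieve_by_feature(query_style, gallery_styles, channel_mask, top_k=5):
    """Rank gallery faces by cosine similarity restricted to one feature's channels."""
    q = query_style[channel_mask].ravel()
    scores = []
    for g in gallery_styles:
        v = g[channel_mask].ravel()
        scores.append(q @ v / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-8))
    return np.argsort(scores)[::-1][:top_k]

# Toy usage: random style codes stand in for GAN-inverted real images.
rng = np.random.default_rng(0)
source = rng.normal(size=(NUM_LAYERS, STYLE_DIM))
reference = rng.normal(size=(NUM_LAYERS, STYLE_DIM))
hair_mask = np.zeros((NUM_LAYERS, STYLE_DIM), dtype=bool)
hair_mask[4:8, :128] = True  # placeholder channels "assigned" to hair

edited = transfer_feature(source, reference, hair_mask)
gallery = rng.normal(size=(10, NUM_LAYERS, STYLE_DIM))
print(retrieve_by_feature(edited, gallery, hair_mask))
```

In this sketch, both editing and retrieval operate on the same per-feature channel subset, which mirrors the abstract's point that disentangled facial features can serve double duty for transfer and fine-grained search.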

Similar Work