arXiv:2202.06358

Do Inpainting Yourself: Generative Facial Inpainting Guided by Exemplars

Published on Feb 13, 2022

AI-generated summary

EXE-GAN, a novel exemplar-guided facial inpainting framework using GANs, enhances image quality and attribute similarity while ensuring natural transitions in inpainted regions.

Abstract

We present EXE-GAN, a novel exemplar-guided facial inpainting framework using generative adversarial networks. Our approach not only preserves the quality of the input facial image but also completes the image with exemplar-like facial attributes. We achieve this by simultaneously leveraging the global style of the input image, the stochastic style generated from a random latent code, and the exemplar style of the exemplar image. We introduce a novel attribute similarity metric to encourage the network to learn the style of facial attributes from the exemplar in a self-supervised way. To guarantee natural transitions across the boundaries of inpainted regions, we introduce a novel spatial variant gradient backpropagation technique that adjusts the loss gradients based on spatial location. Extensive evaluations and practical applications on the public CelebA-HQ and FFHQ datasets validate the superiority of EXE-GAN in terms of visual quality in facial inpainting.
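To make the attribute similarity idea concrete, here is a minimal sketch (not the paper's implementation) of an exemplar attribute-similarity loss: embeddings of the inpainted result and the exemplar are compared with cosine similarity. The `encoder` and the function name are hypothetical stand-ins, e.g. any pretrained face feature extractor.

```python
import torch
import torch.nn.functional as F

def attribute_similarity_loss(encoder, inpainted, exemplar):
    """Hypothetical loss encouraging the completed face to carry
    exemplar-like attributes (a sketch, not the paper's metric)."""
    f_out = F.normalize(encoder(inpainted), dim=1)  # (N, D) embeddings
    f_ex = F.normalize(encoder(exemplar), dim=1)
    # 1 - cosine similarity, averaged over the batch.
    return (1.0 - (f_out * f_ex).sum(dim=1)).mean()
```

The spatial variant gradient backpropagation can likewise be sketched, under the assumption that it amounts to rescaling loss gradients with a per-pixel weight map; the class `SpatialGradScale` and the helper `boundary_weight_map` below are hypothetical names, not the authors' code.

```python
import torch

class SpatialGradScale(torch.autograd.Function):
    """Identity in the forward pass; rescales gradients spatially in backward."""

    @staticmethod
    def forward(ctx, x, weight_map):
        # weight_map: (N, 1, H, W) tensor of per-pixel gradient scales.
        ctx.save_for_backward(weight_map)
        return x

    @staticmethod
    def backward(ctx, grad_output):
        (weight_map,) = ctx.saved_tensors
        # Attenuate or amplify the loss gradient depending on location,
        # e.g. to soften the transition across inpainting boundaries.
        return grad_output * weight_map, None


def boundary_weight_map(mask, blur_kernel=9):
    """Hypothetical helper: blur the binary inpainting mask so pixels near
    the hole boundary receive intermediate gradient weights."""
    pad = blur_kernel // 2
    kernel = torch.ones(1, 1, blur_kernel, blur_kernel) / blur_kernel**2
    return torch.nn.functional.conv2d(mask, kernel, padding=pad)


# Example usage: apply to the generator output before a pixel-wise loss.
# output, target: (N, 3, H, W); mask: (N, 1, H, W) with 1 inside the hole.
# weights = boundary_weight_map(mask)
# scaled = SpatialGradScale.apply(output, weights)
# loss = torch.nn.functional.l1_loss(scaled, target)
```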
