Volume 39, Issue 7 pp. 471-482
Image Restoration

Pixel-wise Dense Detector for Image Inpainting

Ruisong Zhang
National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China

Weize Quan
National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China

Baoyuan Wu
School of Data Science, The Chinese University of Hong Kong, Shenzhen, China
Secure Computing Lab of Big Data, Shenzhen Research Institute of Big Data, China

Zhifeng Li
Tencent AI Lab, Shenzhen, China

Dong-Ming Yan
National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
First published: 24 November 2020

Abstract

Recent GAN-based image inpainting approaches adopt an averaging strategy to discriminate the generated image and output a single scalar, which inevitably loses the position information of visual artifacts. Moreover, the adversarial loss and the reconstruction loss (e.g., the ℓ1 loss) are combined with tradeoff weights that are difficult to tune. In this paper, we propose a novel detection-based generative framework for image inpainting that adopts the min-max strategy in an adversarial process. The generator follows an encoder-decoder architecture to fill the missing regions, while the detector, trained with weakly supervised learning, localizes the artifacts in a pixel-wise manner. This position information makes the generator pay more attention to the artifact regions and further refine them. More importantly, we explicitly insert the output of the detector into the reconstruction loss through a weighting criterion, which balances the adversarial loss and the reconstruction loss automatically rather than by manual tuning. Experiments on multiple public datasets demonstrate the superior performance of the proposed framework. The source code is available at https://github.com/Evergrow/GDN_Inpainting.
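As a rough illustration of the weighting criterion described above, the sketch below folds a pixel-wise artifact map into an ℓ1 reconstruction loss. This is a minimal PyTorch-style sketch, not the paper's exact formulation: the function name `detector_weighted_l1` and the `1 + artifact_map` weighting scheme are assumptions made for illustration; the released code at the GitHub link above defines the actual criterion.

```python
import torch

def detector_weighted_l1(generated, target, artifact_map):
    """Per-pixel L1 reconstruction loss re-weighted by the detector output.

    Hypothetical sketch. `artifact_map` is assumed to be the detector's
    pixel-wise prediction in [0, 1], where larger values mean higher
    confidence that a pixel is a visual artifact. Weighting the L1 term
    by this map makes the generator concentrate on the regions the
    detector flags, instead of relying on a hand-tuned global tradeoff
    weight between the reconstruction and adversarial terms.
    """
    per_pixel_l1 = torch.abs(generated - target)   # shape (N, C, H, W)
    weights = 1.0 + artifact_map                   # emphasize detected artifacts
    return (weights * per_pixel_l1).mean()
```

In this reading, pixels the detector judges realistic keep a baseline weight of 1, while detected artifacts are penalized more heavily, so the balance between the two losses adapts per pixel during training.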
