Which is the Better Inpainted Image? Training Data Generation Without Any Manual Operations

Mariko Isogawa, Dan Mikami, Kosuke Takahashi, Daisuke Iwai, Kosuke Sato, Hideaki Kimata

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)


This paper proposes a learning-based quality evaluation framework for inpainted results that requires no subjectively annotated training data. Image inpainting, which removes unwanted regions from images and restores them, is widely acknowledged as a task whose results are difficult to evaluate objectively. Existing learning-based image quality assessment (IQA) methods for inpainting therefore require subjectively annotated data for training. However, subjective annotation is costly, and judgments can differ from person to person depending on the criteria applied. To overcome these difficulties, the proposed framework generates simulated failure results of inpainting whose subjective quality is controlled, and uses them as training data. We also propose a masking method for generating this data, enabling fully automated training data generation. Together, these approaches make it possible to identify the better inpainted image, even though the task is highly subjective. To demonstrate the effectiveness of our approach, we test our algorithm on various datasets and show that it outperforms existing IQA methods for inpainting.
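
The core idea in the abstract, training a ranking model on automatically generated image pairs in which one image is a simulated failure of controlled quality, can be illustrated with a standard pairwise learning-to-rank objective. The sketch below is a minimal illustration, not the authors' implementation; the network architecture, margin value, and all identifiers are assumptions made for this example.

```python
# Minimal sketch of pairwise learning-to-rank for inpainting quality.
# The "worse" image in each pair is a simulated failure, so the relative
# label is known by construction and no human annotation is needed.
import torch
import torch.nn as nn

class QualityScorer(nn.Module):
    """Tiny CNN mapping an inpainted image to a scalar quality score (illustrative)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train_step(model, optimizer, better, worse, margin=1.0):
    """One pairwise update: the better image should outscore the
    simulated-failure image by at least `margin`."""
    loss_fn = nn.MarginRankingLoss(margin=margin)
    s_better, s_worse = model(better), model(worse)
    # target = 1 means the first score should be ranked higher.
    target = torch.ones_like(s_better)
    loss = loss_fn(s_better, s_worse, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = QualityScorer()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Stand-in tensors; in practice these would be an inpainted image and
    # a degraded, simulated-failure version of the same scene.
    better = torch.rand(4, 3, 64, 64)
    worse = torch.rand(4, 3, 64, 64)
    print(train_step(model, opt, better, worse))
```

At inference time, such a scorer can compare two candidate inpaintings of the same region by ranking their scalar scores, which matches the pairwise "which is better?" framing of the title.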

Original language: English
Pages (from-to): 1751-1766
Number of pages: 16
Journal: International Journal of Computer Vision
Issue number: 11-12
Publication status: Published - 2019 Dec 1
Externally published: Yes

Keywords

  • Image inpainting
  • Image quality assessment (IQA)
  • Learning to rank

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence

