Image and video completion via feature reduction and compensation

Mariko Isogawa, Dan Mikami, Kosuke Takahashi, Akira Kojima

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)


This paper proposes a novel framework for image and video completion that removes unwanted regions and restores the removed areas. Most existing methods fail when no similar regions exist in the undamaged parts of the input. To overcome this, our approach creates similar regions by projecting the original space onto a lower-dimensional feature space. The approach comprises three stages. First, input images/videos are converted into a lower-dimensional feature space. Second, the damaged region is restored in that feature space. Finally, inverse conversion maps the result from the lower-dimensional space back to the original space. This design offers two advantages: (1) it makes it possible to apply patches that would be dissimilar in the original color space, and (2) it allows many existing restoration methods, each with its own strengths, to be reused, because only the feature space in which similar patches are retrieved is changed. The framework's effectiveness was verified in experiments that varied the restoration method used in the second stage, the feature space, and the inverse conversion method.
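The three stages above can be sketched in Python. In this sketch, PCA plays the role of the feature reduction and a nearest-neighbour lookup stands in for the patch-based restoration; both are illustrative assumptions, not the paper's actual conversion or restoration methods.

```python
import numpy as np

# A minimal sketch of the three-stage pipeline described in the abstract.
# The PCA projection and nearest-neighbour patch matching below are
# illustrative stand-ins; the paper's actual feature conversion and
# restoration methods may differ.

def to_feature_space(patches, k):
    """Stage 1: project patch vectors onto a k-dimensional PCA subspace."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                          # (k, d) principal directions
    return centered @ basis.T, basis, mean

def restore_in_feature_space(features, damaged_idx):
    """Stage 2: replace damaged patches with their nearest intact
    neighbour, with similarity measured in the reduced space."""
    intact = np.delete(features, damaged_idx, axis=0)
    restored = features.copy()
    for i in damaged_idx:
        dists = np.linalg.norm(intact - features[i], axis=1)
        restored[i] = intact[np.argmin(dists)]
    return restored

def from_feature_space(features, basis, mean):
    """Stage 3: inverse conversion back to the original patch space."""
    return features @ basis + mean

# Toy demo: 50 random "patches" of 16 pixels each, one of them damaged.
rng = np.random.default_rng(0)
patches = rng.normal(size=(50, 16))
feats, basis, mean = to_feature_space(patches, k=4)
restored = restore_in_feature_space(feats, damaged_idx=[3])
out = from_feature_space(restored, basis, mean)   # shape (50, 16)
```

The key point the sketch captures is that the similarity search in stage 2 happens entirely in the reduced space, so patches that look dissimilar pixel-wise can still match there.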

Original language: English
Pages (from-to): 9443-9462
Number of pages: 20
Journal: Multimedia Tools and Applications
Issue number: 7
Publication status: Published - 2017 Apr 1
Externally published: Yes


Keywords

  • Completion
  • Image transfer
  • Inpainting
  • Low-dimensional feature space
  • Restoration

ASJC Scopus subject areas

  • Software
  • Media Technology
  • Hardware and Architecture
  • Computer Networks and Communications

