Enhanced Unpaired Image-to-Image Translation via Transformation in Saliency Domain

Kei Shibasaki, Masaaki Ikehara

Research output: Article › peer-review

Abstract

Unpaired image-to-image translation is the task of converting images between domains using unpaired datasets. The primary goal is to translate a source image into an image aligned with the target domain while keeping its fundamental content. Existing studies have introduced effective techniques for translating images with unpaired datasets, focusing on preserving this content. However, these techniques struggle with significant shape changes and with preserving backgrounds that should not be transformed. The proposed method addresses these problems by utilizing the saliency domain for translation, learning the translation in the saliency domain and in the image domain simultaneously. The saliency domain represents the shape and position of the main object. Explicitly learning transformations in the saliency domain improves the network's ability to transform shapes while maintaining the background. Experimental results show that the proposed method successfully addresses these problems of unpaired image-to-image translation and achieves metrics competitive with existing methods.
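To illustrate the idea of translating in the saliency domain alongside the image domain, the following is a minimal PyTorch-style sketch, not the authors' implementation. It assumes a generator that consumes an image concatenated with its saliency map and predicts both a translated image and a translated saliency map, plus a background-preservation term that uses the saliency maps as soft masks. All class, function, and parameter names here (`SaliencyAwareGenerator`, `background_preservation_loss`, etc.) are hypothetical and chosen only for this sketch; the actual architecture and losses are described in the paper.

```python
# Minimal sketch (not the authors' code): translate the image and its saliency
# map jointly, and penalise changes outside the salient region so that the
# background is preserved. All names below are hypothetical.
import torch
import torch.nn as nn


class SaliencyAwareGenerator(nn.Module):
    """Maps a source image and its saliency map (4-channel input) to a
    translated image and a translated saliency map (4-channel output)."""

    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, ch, kernel_size=7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 4, kernel_size=7, padding=3),
        )

    def forward(self, image, saliency):
        out = self.net(torch.cat([image, saliency], dim=1))
        fake_image = torch.tanh(out[:, :3])        # translated image
        fake_saliency = torch.sigmoid(out[:, 3:])  # translated saliency map
        return fake_image, fake_saliency


def background_preservation_loss(real_image, fake_image, real_saliency, fake_saliency):
    """Penalise pixel changes outside the union of the source and translated
    salient regions, encouraging the background to stay untouched."""
    background = 1.0 - torch.max(real_saliency, fake_saliency)
    return (background * (fake_image - real_image).abs()).mean()


if __name__ == "__main__":
    G = SaliencyAwareGenerator()
    x = torch.randn(1, 3, 128, 128)   # source image
    s = torch.rand(1, 1, 128, 128)    # source saliency map in [0, 1]
    y, s_y = G(x, s)
    loss_bg = background_preservation_loss(x, y, s, s_y)
    print(y.shape, s_y.shape, loss_bg.item())
```

In a full training setup, this term would be combined with the usual adversarial and content-preservation losses of an unpaired translation framework, with the saliency-domain prediction supervised by its own objective.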

Original language: English
Pages (from-to): 137495-137505
Number of pages: 11
Journal: IEEE Access
Volume: 11
DOI
Publication status: Published - 2023

ASJC Scopus subject areas

  • General Computer Science
  • General Materials Science
  • General Engineering

