Channel Attention GAN Trained with Enhanced Dataset for Single-Image Shadow Removal

Ryo Abiko, Masaaki Ikehara

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)


Even today, when many deep-learning-based methods have been published, achieving high accuracy in single-image shadow removal remains a challenging task. This is because shadows change depending on various conditions, such as the target material or the light source, and it is difficult to estimate all the physical parameters. In this paper, we propose a new single-image shadow removal method (Channel Attention GAN: CANet) that uses two networks, one for detecting shadows and one for removing them. The intensity change in shadowed regions has different characteristics depending on the wavelength of light. In addition, the camera's image acquisition system acquires an image in a state where the RGB values influence each other. Therefore, our method focuses on the physical properties of shadows and on the camera's image acquisition system. The proposed network has a structure that considers the relationship between color channels. When training this network, we modified the colors of the training images and added artifacts to them in order to make the training dataset more complex. These image processing operations are based on the shadow model, considering the camera's image acquisition system. With these new proposals, our method removes shadows on all of the ISTD, ISTD+, SRD, and SRD+ datasets with higher accuracy than state-of-the-art methods. The code is available on GitHub:
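The abstract's "structure considering the relationship between color channels" refers to channel attention, i.e. reweighting feature channels by learned, input-dependent gates. The following NumPy sketch shows a minimal squeeze-and-excitation-style channel attention step; the function name, weight shapes, and reduction ratio are illustrative assumptions, not the authors' CANet implementation.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Minimal SE-style channel attention (illustrative, not CANet itself).

    feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) FC weights.
    """
    z = feat.mean(axis=(1, 2))            # global average pool -> (C,)
    h = np.maximum(w1 @ z, 0.0)           # reduction FC + ReLU -> (C//r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))   # expansion FC + sigmoid gates -> (C,)
    return feat * s[:, None, None]        # rescale each channel by its gate

# Toy example: 8 channels, reduction ratio 4 (values chosen arbitrarily).
rng = np.random.default_rng(0)
C, r = 8, 4
feat = rng.standard_normal((C, 16, 16))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = channel_attention(feat, w1, w2)
print(out.shape)  # (8, 16, 16)
```

Because the sigmoid gates lie in (0, 1), the module can only attenuate channels, letting the network emphasize the color channels most informative for the shadowed region.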

Original language: English
Pages (from-to): 12322-12333
Number of pages: 12
Journal: IEEE Access
Publication status: Published - 2022


Keywords

  • Image restoration
  • deep learning
  • generative adversarial networks
  • shadow removal

ASJC Scopus subject areas

  • General Computer Science
  • General Materials Science
  • General Engineering


