Automatic segmentation of non-perfusion area from fluorescein angiography using deep learning with uncertainty estimation

Kanato Masayoshi, Yusaku Katada, Nobuhiro Ozawa, Mari Ibuki, Kazuno Negishi, Toshihide Kurihara

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

The non-perfusion area (NPA) of the retina is an important indicator of visual prognosis in patients with retinal vein occlusion. Therefore, automatic detection of the NPA would help its management. Deep learning models for NPA segmentation in fluorescein angiography have been reported. However, typical deep learning models do not adequately address the uncertainty of their predictions, which may lead to missed lesions and difficulties in collaboration with medical professionals. In this study, we developed deep segmentation models with uncertainty estimation using Monte Carlo dropout and compared the accuracy of prediction and the reliability of uncertainty across different models (U-Net, PSPNet, and DeepLabv3+) and uncertainty measures (standard deviation and mutual information). The study included 403 fluorescein angiography images of retinal vein occlusion from Japanese patients. The mean Dice scores were 65.6 ± 9.6%, 66.8 ± 12.3%, and 73.6 ± 9.4% for U-Net, PSPNet, and DeepLabv3+, respectively. The uncertainty scores were best for U-Net, which suggests that model complexity may degrade the quality of uncertainty estimation. Overlooked lesions and inconsistent predictions led to high uncertainty values. The results indicate that uncertainty estimation would help decrease the risk of missed lesions.
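As context for the method named in the abstract, the sketch below shows one common way to implement Monte Carlo dropout inference for a binary segmentation network and to derive the two uncertainty measures compared in the study (per-pixel standard deviation and mutual information). It is an illustrative PyTorch example, not the authors' code; the model object, the number of stochastic passes T, and the 0.5 decision threshold are assumptions.

    import torch
    import torch.nn as nn

    T = 20  # number of stochastic forward passes (assumed value, not from the paper)

    def enable_mc_dropout(model: nn.Module) -> None:
        # Put the model in eval mode but keep dropout layers sampling,
        # which is what makes repeated forward passes stochastic.
        model.eval()
        for m in model.modules():
            if isinstance(m, (nn.Dropout, nn.Dropout2d)):
                m.train()

    @torch.no_grad()
    def mc_dropout_predict(model: nn.Module, image: torch.Tensor):
        # image: (1, C, H, W) tensor; model is assumed to output one foreground logit per pixel.
        enable_mc_dropout(model)
        probs = torch.stack([torch.sigmoid(model(image)) for _ in range(T)])  # (T, 1, 1, H, W)

        mean_p = probs.mean(dim=0)   # predictive mean; thresholded below for the NPA mask
        std_map = probs.std(dim=0)   # uncertainty measure 1: per-pixel standard deviation

        eps = 1e-8
        # Entropy of the mean prediction, H[E[p]]
        h_mean = -(mean_p * (mean_p + eps).log() + (1 - mean_p) * (1 - mean_p + eps).log())
        # Mean entropy of the individual samples, E[H[p]]
        h_samples = -(probs * (probs + eps).log() + (1 - probs) * (1 - probs + eps).log())
        mi_map = h_mean - h_samples.mean(dim=0)  # uncertainty measure 2: mutual information

        return mean_p > 0.5, std_map, mi_map

In such a setup, pixels with a high standard deviation or mutual information can be flagged for clinician review, which is the sense in which the abstract argues that uncertainty estimation reduces the risk of missed lesions.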

Original language: English
Article number: 101060
Journal: Informatics in Medicine Unlocked
Volume: 32
DOIs
Publication status: Published - 2022 Jan

Keywords

  • Deep learning
  • Fundus
  • Retinal vein occlusion
  • Uncertainty

ASJC Scopus subject areas

  • Health Informatics
