TY - GEN
T1 - Deneb
T2 - 17th Asian Conference on Computer Vision, ACCV 2024
AU - Matsuda, Kazuki
AU - Wada, Yuiga
AU - Sugiura, Komei
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
PY - 2025
Y1 - 2025
N2 - In this work, we address the challenge of developing automatic evaluation metrics for image captioning, with a particular focus on robustness against hallucinations. Existing metrics are often inadequate for handling hallucinations, primarily due to their limited ability to compare candidate captions with multifaceted reference captions. To address this shortcoming, we propose Deneb, a novel supervised automatic evaluation metric specifically robust against hallucinations. Deneb incorporates the Sim-Vec Transformer, a mechanism that processes multiple references simultaneously, thereby efficiently capturing the similarity between an image, a candidate caption, and reference captions. To train Deneb, we construct the diverse and balanced Nebula dataset comprising 32,978 images, paired with human judgments provided by 805 annotators. We demonstrate that Deneb achieves state-of-the-art performance among existing LLM-free metrics on the FOIL, Composite, Flickr8K-Expert, Flickr8K-CF, Nebula, and PASCAL-50S datasets, validating its effectiveness and robustness against hallucinations. Project page at https://deneb-project-page-nc03k.kinsta.page/.
AB - In this work, we address the challenge of developing automatic evaluation metrics for image captioning, with a particular focus on robustness against hallucinations. Existing metrics are often inadequate for handling hallucinations, primarily due to their limited ability to compare candidate captions with multifaceted reference captions. To address this shortcoming, we propose Deneb, a novel supervised automatic evaluation metric specifically robust against hallucinations. Deneb incorporates the Sim-Vec Transformer, a mechanism that processes multiple references simultaneously, thereby efficiently capturing the similarity between an image, a candidate caption, and reference captions. To train Deneb, we construct the diverse and balanced Nebula dataset comprising 32,978 images, paired with human judgments provided by 805 annotators. We demonstrate that Deneb achieves state-of-the-art performance among existing LLM-free metrics on the FOIL, Composite, Flickr8K-Expert, Flickr8K-CF, Nebula, and PASCAL-50S datasets, validating its effectiveness and robustness against hallucinations. Project page at https://deneb-project-page-nc03k.kinsta.page/.
KW - hallucination
KW - image captioning
KW - metrics
KW - vision and language
UR - https://www.scopus.com/pages/publications/85213020906
U2 - 10.1007/978-981-96-0908-6_10
DO - 10.1007/978-981-96-0908-6_10
M3 - Conference contribution
AN - SCOPUS:85213020906
SN - 9789819609079
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 166
EP - 182
BT - Computer Vision – ACCV 2024 - 17th Asian Conference on Computer Vision, Proceedings
A2 - Cho, Minsu
A2 - Laptev, Ivan
A2 - Tran, Du
A2 - Yao, Angela
A2 - Zha, Hongbin
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 8 December 2024 through 12 December 2024
ER -