Abstract
Active learning refers to label-efficient algorithms that select the most representative samples for labeling when creating training data. In this paper, we propose a model that derives the most informative unlabeled samples from the output of a task model. The target tasks are single-label classification, multi-label classification, and semantic segmentation. The model consists of an uncertainty indicator generator and a task model. After the task model is trained on labeled samples, it makes predictions on the unlabeled samples, and from these predictions the uncertainty indicator generator outputs an uncertainty indicator for each unlabeled sample. Samples with higher uncertainty indicators are considered more informative and are selected for labeling. In experiments on multiple datasets, our model achieved better accuracy than conventional active learning methods and reduced execution time by a factor of 10.
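The abstract does not specify how the uncertainty indicator is computed, so the following is only a minimal sketch of the general selection loop it describes, using predictive entropy from a scikit-learn classifier as a hypothetical stand-in for the paper's uncertainty indicator generator; all data and names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of uncertainty-based sample selection in an active learning loop.
# Entropy of the task model's predicted class probabilities serves as a hypothetical
# stand-in for the paper's uncertainty indicator generator.
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_uncertain_samples(clf, X_unlabeled, n_select):
    """Return indices of the n_select most uncertain unlabeled samples."""
    proba = clf.predict_proba(X_unlabeled)                     # (N, C) class probabilities
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)   # uncertainty indicator per sample
    return np.argsort(entropy)[-n_select:]                     # highest-uncertainty samples

# Hypothetical usage with synthetic data; in practice X_labeled/y_labeled come from
# the annotated pool and X_unlabeled from the pool awaiting annotation.
rng = np.random.default_rng(0)
X_labeled, y_labeled = rng.normal(size=(50, 8)), rng.integers(0, 3, size=50)
X_unlabeled = rng.normal(size=(500, 8))

clf = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)   # train the task model
query_idx = select_uncertain_samples(clf, X_unlabeled, n_select=10)
# query_idx would then be sent for annotation and added to the labeled pool.
```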
| Translated title of the contribution | Improving Annotation Efficiency through Uncertain Sample Selection in Active Learning |
|---|---|
| Original language | Japanese |
| Pages (from-to) | 211-216 |
| Number of pages | 6 |
| Journal | Seimitsu Kogaku Kaishi/Journal of the Japan Society for Precision Engineering |
| Volume | 88 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 2022 |
ASJC Scopus subject areas
- Mechanical Engineering