Gaze estimation using monocular cameras has significant commercial applicability, and many studies have been undertaken on head pose-invariant and calibration-free gaze estimation. However, the head positions in the data sets used in these studies are limited to the vicinity of the camera, so methods trained on such data sets do not generalize to subjects at greater distances from the camera. In this paper, we create a room-scale gaze data set with large variations in head pose to achieve robust gaze estimation across a broader range of widths and depths. The head positions are much farther from the camera, and the resolution of the eye images is lower than in conventional data sets. To address this issue, we propose a likelihood evaluation method based on edge gradients with dense particles for iris tracking, which achieves robust tracking even on low-resolution eye images. Cross-validation experiments show that our proposed method is more accurate than conventional methods for every individual in our data set.
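The abstract names the key ingredients of the tracker: many particle hypotheses for the iris, each scored by a likelihood built from image edge gradients. The sketch below illustrates one plausible form of such a likelihood, not the authors' actual implementation: each particle is assumed to be an iris hypothesis `(cx, cy, r)`, and it is scored by how well the image gradient aligns with the outward normal of the hypothesized iris rim (a dark iris on a brighter sclera produces radially outward gradients along its boundary). The function names, the circular state model, and the softmax resampling temperature are all illustrative assumptions.

```python
import numpy as np

def edge_gradient_likelihood(grad_x, grad_y, particles, n_samples=32):
    """Score each particle hypothesis (cx, cy, r) by projecting the image
    gradient, sampled at points on the hypothesized circle, onto the
    circle's outward normal. Higher scores mean better rim alignment."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    cos_a, sin_a = np.cos(angles), np.sin(angles)
    h, w = grad_x.shape
    scores = np.empty(len(particles))
    for i, (cx, cy, r) in enumerate(particles):
        xs = np.clip(np.round(cx + r * cos_a).astype(int), 0, w - 1)
        ys = np.clip(np.round(cy + r * sin_a).astype(int), 0, h - 1)
        # Project the gradient onto the outward radial direction; the sum
        # is large only when edges consistently follow the circle rim.
        radial = grad_x[ys, xs] * cos_a + grad_y[ys, xs] * sin_a
        scores[i] = radial.mean()
    return scores

def resample(particles, scores, temperature=0.1, rng=None):
    """Importance-resample particles with weights exp(score / T)."""
    rng = np.random.default_rng() if rng is None else rng
    weights = np.exp((scores - scores.max()) / temperature)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]
```

Because the likelihood averages gradient projections over many rim samples, it degrades gracefully as eye-image resolution drops, which is consistent with the robustness claim in the abstract; a dense particle set then keeps multiple plausible iris hypotheses alive between frames.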