Evaluation of Spatial Directional Guidance Using Cheek Haptic Stimulation in a Virtual Environment

Fumihiko Nakamura, Adrien Verhulst, Kuniharu Sakurada, Masaaki Fukuoka, Maki Sugimoto

Research output: Article, peer-reviewed


Spatial cues play an important role in guiding people through both physical and virtual spaces. In spatial navigation, supplementing visual information with additional cues, such as haptic cues, enables effective guidance. Most haptic devices apply mechanical stimuli to various body parts, while few stimulate the head despite its excellent sensitivity. This article presents Virtual Whiskers, a spatial directional guidance technique that delivers cheek haptic stimulation via small robot arms attached to a Head-Mounted Display (HMD). The tip of each robot arm carries photo-reflective sensors that detect the distance between the tip and the cheek surface. Using the robot arms, we stimulate the point on the cheek obtained by computing the intersection between the cheek surface and the target direction. In a directional guidance experiment, we investigated how accurately participants could identify the target direction indicated by our guidance method, evaluating the error between the actual target direction and the direction the participant pointed. The results show that our method achieves an average absolute directional error of 2.54° in the azimuthal plane and 6.54° in the elevation plane. We also conducted a spatial guidance experiment to evaluate task performance in a target search task, comparing visual information alone, visual plus audio information, and visual information plus cheek haptics in terms of task completion time, System Usability Scale (SUS) score, and NASA-TLX score. Average task completion times were M = 6.39 s (SD = 3.34 s), M = 5.62 s (SD = 3.12 s), and M = 4.35 s (SD = 2.26 s) in the visual-only, visual+audio, and visual+haptic conditions, respectively. SUS scores were M = 55.83 (SD = 20.40), M = 47.78 (SD = 20.09), and M = 80.42 (SD = 10.99), respectively. NASA-TLX scores were M = 75.81 (SD = 16.89), M = 67.57 (SD = 14.96), and M = 38.83 (SD = 18.52), respectively. Statistical tests revealed significant differences in task completion time, SUS score, and NASA-TLX score between the visual-only and visual+haptic conditions and between the visual+audio and visual+haptic conditions.
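As a rough illustration of the guidance geometry described above (not the authors' implementation), the stimulation point can be found by intersecting a ray along the target direction with a simplified cheek surface. The sketch below assumes the cheek is locally approximated by a sphere, which is an assumption made here purely for illustration:

```python
import math

def ray_sphere_intersection(origin, direction, center, radius):
    """Return the nearest intersection of a ray with a sphere, or None.

    origin, direction, center: 3-tuples of floats; direction need not
    be normalized. A sphere is used here only as a stand-in for the
    cheek surface.
    """
    # Normalize the ray direction.
    norm = math.sqrt(sum(d * d for d in direction))
    d = tuple(di / norm for di in direction)
    # Vector from the sphere center to the ray origin.
    oc = tuple(o - c for o, c in zip(origin, center))
    # Quadratic coefficients for |origin + t*d - center|^2 = radius^2.
    b = 2.0 * sum(di * oci for di, oci in zip(d, oc))
    c = sum(oci * oci for oci in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # The ray misses the sphere.
    t = (-b - math.sqrt(disc)) / 2.0
    if t < 0.0:
        # The ray origin lies inside the sphere; take the far hit.
        t = (-b + math.sqrt(disc)) / 2.0
    if t < 0.0:
        return None
    return tuple(o + t * di for o, di in zip(origin, d))

# Example: a head-centered sphere of radius 0.08 m as the stand-in
# cheek surface, with the ray cast from the head center along a
# hypothetical target direction.
point = ray_sphere_intersection((0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                                (0.0, 0.0, 0.0), 0.08)
```

In a real system the cheek surface would come from the paper's photo-reflective distance sensing rather than an analytic sphere, but the same ray-surface intersection idea yields the point to stimulate.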

Journal: Frontiers in Computer Science
Publication status: Published - 12 May 2022

ASJC Scopus subject areas

  • Computer Science Applications
  • Computer Vision and Pattern Recognition
  • Human-Computer Interaction
  • Computer Science (miscellaneous)

