GazeSim: Simulating foveated rendering using depth in eye gaze for VR

Yun Suen Pai, Benjamin Tag, Benjamin Outram, Noriyasu Vontin, Kazunori Sugiura, Kai Kunze

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

20 Citations (Scopus)

Abstract

We present a novel technique that implements customized hardware using eye gaze focus depth as an input modality for virtual reality applications. By utilizing eye tracking technology, our system can detect the depth at which the viewer focuses and therefore promises more natural eye responses to stimuli, which will help overcome VR sickness and nausea. The obtained focus-depth information allows the use of foveated rendering to keep the computing workload low and to create a more natural image that is sharp in the focused field but blurred outside it. Copyright is held by the owner/author(s).
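The abstract gives no implementation details, but the pipeline it describes (estimating the focused depth from the vergence of the two gaze rays, then attenuating sharpness away from that depth) can be sketched as follows. This is a minimal illustration in Python, not the authors' implementation: the vergence_depth and blur_amount functions, their parameters, and the falloff constants are assumptions chosen for clarity.

```python
import numpy as np

def vergence_depth(o_l, d_l, o_r, d_r):
    """Estimate the focus depth as the distance to the point where the two
    gaze rays pass closest to each other (the vergence point).

    o_l, o_r: 3D origins of the left/right eye rays (e.g. pupil centers).
    d_l, d_r: unit gaze direction vectors reported by the eye tracker.
    """
    # Closest points between the two (generally skew) lines
    # o_l + t_l*d_l and o_r + t_r*d_r.
    w0 = o_l - o_r
    a, b, c = np.dot(d_l, d_l), np.dot(d_l, d_r), np.dot(d_r, d_r)
    d, e = np.dot(d_l, w0), np.dot(d_r, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-9:          # nearly parallel rays: gaze at infinity
        return np.inf
    t_l = (b * e - c * d) / denom
    t_r = (a * e - b * d) / denom
    midpoint = 0.5 * ((o_l + t_l * d_l) + (o_r + t_r * d_r))
    eye_center = 0.5 * (o_l + o_r)
    return float(np.linalg.norm(midpoint - eye_center))

def blur_amount(scene_depth, focus_depth, max_blur=8.0, sharpness=1.0):
    """Map the gap between a fragment's depth and the focus depth to a blur
    radius in pixels: zero at the focused depth, ramping toward max_blur
    away from it (a simple depth-of-field style falloff)."""
    if not np.isfinite(focus_depth):
        return 0.0
    # Difference of inverse depths, so near misses blur faster than
    # equally sized far misses, loosely mimicking optical defocus.
    coc = abs(1.0 / scene_depth - 1.0 / focus_depth) * sharpness
    return float(min(max_blur, max_blur * coc))

if __name__ == "__main__":
    # Eyes ~6.4 cm apart, both converging on a point 1.5 m ahead.
    target = np.array([0.0, 0.0, 1.5])
    o_l, o_r = np.array([-0.032, 0.0, 0.0]), np.array([0.032, 0.0, 0.0])
    d_l = (target - o_l) / np.linalg.norm(target - o_l)
    d_r = (target - o_r) / np.linalg.norm(target - o_r)
    focus = vergence_depth(o_l, d_l, o_r, d_r)
    print(f"estimated focus depth: {focus:.3f} m")
    for z in (0.5, 1.5, 5.0):
        print(f"depth {z:4.1f} m -> blur {blur_amount(z, focus):.2f} px")
```

In a real renderer the per-fragment blur would be applied in a post-process shader driven by the depth buffer; the script above only shows the geometric step of turning two tracked gaze rays into a focus depth and a blur weight.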

Original language: English
Title of host publication: SIGGRAPH 2016 - ACM SIGGRAPH 2016 Posters
Publisher: Association for Computing Machinery, Inc
ISBN (Electronic): 9781450343718
DOIs
Publication status: Published - 2016 Jul 24
Event: ACM International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2016 - Anaheim, United States
Duration: 2016 Jul 24 → 2016 Jul 28

Publication series

Name: SIGGRAPH 2016 - ACM SIGGRAPH 2016 Posters

Other

Other: ACM International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2016
Country/Territory: United States
City: Anaheim
Period: 16/7/24 → 16/7/28

Keywords

  • Depth of field
  • Eye gaze
  • Foveated rendering
  • Virtual reality

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Human-Computer Interaction
  • Software
