Reading activity recognition using an off-the-shelf EEG - Detecting reading activities and distinguishing genres of documents

Kai Kunze, Yuki Shiga, Shoya Ishimaru, Koichi Kise

Research output: Contribution to journal › Conference article › peer-review

18 Citations (Scopus)

Abstract

The document analysis community devotes substantial resources to the computer recognition of all types of text (e.g., characters, handwriting, document structure). In this paper, we introduce a new paradigm focused on recognizing the activities and habits of users while they are reading, and we describe how it differs from traditional approaches to document analysis. We present initial work towards recognizing reading activities and report our first findings using a commercial, dry-electrode electroencephalography (EEG) system. We show that reading tasks on three different document genres can be distinguished for a single user with near-perfect accuracy; distinguishing reading tasks on three different document types, we achieve 97% accuracy with user-specific training. We also present evidence that reading and non-reading activities can be separated across three users in a six-class setup, with reading perfectly separated from non-reading. A simple EEG system therefore seems promising for distinguishing the reading of different document genres.
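The abstract does not specify which EEG features or classifier the authors used, so the following is only a minimal sketch, in Python, of the kind of pipeline such a study implies: band-power features computed per signal window, fed to a classifier trained per user that labels each window with one of three document-genre classes. Every concrete choice here (the 128 Hz sampling rate, the frequency bands, the five-second windows, the linear SVM, the synthetic data, and helper names such as `band_powers` and `featurize`) is an illustrative assumption, not the paper's method.

```python
# Illustrative sketch only: genre classification from windowed EEG using
# band-power features and a linear SVM. The paper does not disclose its
# features or classifier; all choices below are assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 128  # assumed sampling rate (Hz) of a consumer dry-electrode headset
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # assumed bands

def band_powers(window: np.ndarray, fs: int = FS) -> np.ndarray:
    """Mean spectral power per EEG band for one 1-D signal window."""
    freqs, psd = welch(window, fs=fs, nperseg=min(len(window), fs * 2))
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in BANDS.values()])

def featurize(windows: np.ndarray) -> np.ndarray:
    """Stack band-power feature vectors for an array of shape (n, samples)."""
    return np.vstack([band_powers(w) for w in windows])

# Hypothetical data standing in for real recordings: 60 five-second windows
# of single-channel EEG, each labeled with one of three genre classes.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((60, FS * 5))
y = rng.integers(0, 3, size=60)

# User-specific training: fit on one user's windows, test on held-out ones.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(featurize(X_raw[:40]), y[:40])
print("held-out accuracy:", clf.score(featurize(X_raw[40:]), y[40:]))
```

With real recordings, `X_raw` would hold windows of raw EEG from the dry-electrode headset and `y` the genre annotations gathered during the user-specific training session; the held-out score is the quantity for which the paper reports 97% with its own, unspecified pipeline.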

Original language: English
Article number: 6628592
Pages (from-to): 96-100
Number of pages: 5
Journal: Proceedings of the International Conference on Document Analysis and Recognition, ICDAR
DOIs
Publication status: Published - 2013
Externally published: Yes
Event: 12th International Conference on Document Analysis and Recognition, ICDAR 2013 - Washington, DC, United States
Duration: 2013 Aug 25 - 2013 Aug 28

Keywords

  • EEG
  • activity recognition
  • cognitive
  • document analysis
  • pervasive
  • reading

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
