Determining the onset of driver's preparatory action for take-over in automated driving using multimodal data

Research output: Contribution to journal › Article › peer-review

Abstract

Automated driving technology has the potential to substantially reduce traffic accidents, a considerable portion of which are caused by human error. Nonetheless, until automated driving systems reach Level 5, at which the vehicle can drive itself under all road conditions, there will be situations requiring driver intervention. In these situations, drivers perform actions to prepare for take-over, including shifting their visual attention to the road, placing their hands on the steering wheel, and placing their feet on the pedals. Proper execution of these preparatory actions is critical for a safe take-over, so it is important to analyze and verify that they are initiated appropriately during take-over situations. However, analyzing or verifying preparatory actions currently requires manual observation of video footage, which is laborious. Therefore, we propose a method to automatically determine the onset of a driver's preparatory action for a take-over. The method provides a binary signal that indicates the onset of the action, and this signal can serve as an informative marker. For example, its timing can be used to verify whether a Human Machine Interface (HMI) under development effectively prompts the driver to initiate a preparatory action within the expected time frame. The method uses a multimodal fusion model to classify preparatory actions based on the driver's upper-body video, seat pressure, and eye potential at the temples. The onset of the preparatory action is then determined by applying a change-point detection technique to the time series of predicted probabilities produced by the classifier. We created a dataset of 300 take-over events collected from 30 subjects and evaluated the method using 5-fold cross-validation. The results demonstrate that the method classifies preparatory actions with an accuracy of 93.9% and determines the actions' onset with a time error of 0.15 s.
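
To illustrate the onset-determination step described above (change-point detection applied to the classifier's predicted-probability time series), the sketch below finds a single change point in a 1-D probability series using a simple two-segment least-squares cost. This is a minimal illustration under stated assumptions: the function name detect_onset, the 30 fps frame rate, the least-squares cost, and the synthetic data are not taken from the paper, whose actual change-point technique is not specified in this abstract.

```python
import numpy as np

def detect_onset(probabilities, fps=30.0):
    """Estimate the onset of a preparatory action from frame-level
    predicted probabilities by finding the single change point that
    minimises the squared deviation of a two-segment constant fit.
    Returns the onset time in seconds. Illustrative sketch only."""
    p = np.asarray(probabilities, dtype=float)
    n = len(p)
    best_cost, best_k = np.inf, 0
    for k in range(1, n):  # candidate change points between frames
        left, right = p[:k], p[k:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_cost, best_k = cost, k
    return best_k / fps  # convert frame index to seconds

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic probabilities: low before a true onset at 2.0 s, high after.
    before = rng.normal(0.1, 0.05, size=60)   # 2 s at 30 fps
    after = rng.normal(0.9, 0.05, size=90)    # 3 s at 30 fps
    probs = np.clip(np.concatenate([before, after]), 0.0, 1.0)
    print(f"Estimated onset: {detect_onset(probs):.2f} s")  # close to 2.00 s
```

In practice the probabilities would come from the multimodal fusion classifier rather than synthetic data, and a more robust change-point detector could be substituted without changing the overall pipeline.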

Original language: English
Article number: 123153
Journal: Expert Systems With Applications
Volume: 246
DOIs
Publication status: Published - 2024 Jul 15

Keywords

  • Automated driving
  • Change-point detection
  • Multimodal data fusion
  • Preparatory action
  • Take-over

ASJC Scopus subject areas

  • General Engineering
  • Computer Science Applications
  • Artificial Intelligence

