TY - GEN
T1 - Hierarchical human action recognition around sleeping using obscured posture information
AU - Kudo, Yuta
AU - Sashida, Takehiko
AU - Aoki, Yoshimitsu
N1 - Publisher Copyright:
© 2015 SPIE.
PY - 2015
Y1 - 2015
AB - This paper presents a new approach to human action recognition around sleeping that uses the locations of human body parts and the positional relationship between the person and the sleeping environment. Body parts are estimated from depth images obtained by a time-of-flight (TOF) sensor using oriented 3D normal vectors. The main challenges in action recognition for sleeping situations are the need to operate in darkness and the occlusion of the body by duvets, which make image feature extraction difficult because color and edge features are obscured by the covers. Our method therefore first estimates the positions of four body parts (head, torso, thigh, and lower leg) using a shape model of the body surface constructed from oriented 3D normal vectors. This shape model represents the rough surface shape of the body and enables robust posture estimation even when the body is hidden under a duvet. An action descriptor is then extracted from the position of each body part; it comprises the temporal variation of each part and the spatial vectors between the parts and the bed. Furthermore, this paper proposes hierarchical action classes and classifiers to improve the classification of ambiguous actions. The classifier consists of two layers that recognize human actions using the action descriptor: the first layer focuses on the spatial descriptor and classifies actions coarsely, while the second layer focuses on the temporal descriptor and classifies actions finely. This approach achieves robust recognition of an obscured person by combining posture information with hierarchical action recognition.
KW - Hierarchical action classes and classifiers
KW - Human action recognition
KW - Obscured 3D posture information
KW - Sleeping situation
UR - http://www.scopus.com/inward/record.url?scp=84931308192&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84931308192&partnerID=8YFLogxK
U2 - 10.1117/12.2182870
DO - 10.1117/12.2182870
M3 - Conference contribution
AN - SCOPUS:84931308192
T3 - Proceedings of SPIE - The International Society for Optical Engineering
BT - Twelfth International Conference on Quality Control by Artificial Vision
A2 - Meriaudeau, Fabrice
A2 - Aubreton, Olivier
PB - SPIE
T2 - 12th International Conference on Quality Control by Artificial Vision
Y2 - 3 June 2015 through 5 June 2015
ER -