TY - GEN
T1 - Emotion identification system for musical tunes based on characteristics of acoustic signal data
AU - Endrjukaite, Tatiana
AU - Kiyoki, Yasushi
PY - 2014/1/1
Y1 - 2014/1/1
N2 - We design and implement a music-tune analysis system that performs automatic emotion identification and prediction from acoustic signal data. To compute the physical elements of music pieces, we define three significant tune parameters: repeated parts (repetitions) inside a tune, the thumbnail of a music piece, and the homogeneity pattern of a tune. These parameters are significant because they relate to how people perceive music pieces, and by means of them we can express the essential emotional features of each piece. Our system consists of a music-tune features database and a computational mechanism for comparing different tunes. Based on Hevner's groups of emotion adjectives, we created a new way of representing emotions on a plane with two axes: activity and happiness. This makes it possible to determine the emotions perceived when listening to a tune and to calculate adjacent emotions on the plane. Finally, we performed a set of experiments on Western classical and popular music pieces, which showed that our proposed approach reached a 72% precision ratio and exhibited a positive trend in the system's efficiency as the database size increased.
KW - emotions
KW - music
KW - repetitions
KW - tune's internal homogeneity
KW - tune's thumbnail
UR - http://www.scopus.com/inward/record.url?scp=84922572014&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84922572014&partnerID=8YFLogxK
U2 - 10.3233/978-1-61499-472-5-88
DO - 10.3233/978-1-61499-472-5-88
M3 - Conference contribution
AN - SCOPUS:84922572014
T3 - Frontiers in Artificial Intelligence and Applications
SP - 88
EP - 107
BT - Information Modelling and Knowledge Bases XXVI
A2 - Thalheim, Bernhard
A2 - Jaakkola, Hannu
A2 - Yoshida, Naofumi
A2 - Kiyoki, Yasushi
PB - IOS Press
T2 - 24th International Conference on Information Modelling and Knowledge Bases, EJC 2014
Y2 - 3 June 2014 through 6 June 2014
ER -