Emotions recognition system for acoustic music data based on human perception features

Tatiana Endrjukaite, Yasushi Kiyoki

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)


Music plays an important role in human life. It is not merely a set of sounds: music evokes emotions that listeners perceive subjectively. The growing amount of audio data creates a need for content-based searching. Traditionally, tune information has been retrieved from reference metadata such as the title of a tune, the name of an artist, or the genre. When users want to find music pieces in a specific mood, such standard reference information is not sufficiently effective, so new methods and approaches are needed to realize emotion-based search and tune-content analysis. This paper proposes a new music-tune analysis approach that realizes automatic emotion recognition by means of essential musical features. The novelty of this research is that it uses new musical features for tune analysis that are based on human perception of music. The most important distinction of the proposed approach is that it covers a broader range of tune genres, which is significant for a music emotion recognition system. Describing emotions on a continuous plane instead of in discrete categories also supports a richer set of adjectives for emotion description, which is a further advantage.
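To make the continuous-plane idea concrete, here is a minimal illustrative sketch (not the authors' published code): adjectives are placed as points on a valence-arousal plane, and a tune's estimated (valence, arousal) coordinates are labelled with its nearest adjectives. The adjective set and coordinates below are hypothetical placeholders chosen only to show why a continuous plane admits more descriptive labels than a fixed category scheme.

```python
import math

# Hypothetical adjective placements on a valence-arousal plane
# (valence: negative..positive, arousal: low..high), both in [-1, 1].
# These values are illustrative assumptions, not from the paper.
ADJECTIVES = {
    "happy":   (0.8, 0.6),
    "excited": (0.6, 0.9),
    "calm":    (0.6, -0.6),
    "tender":  (0.4, -0.3),
    "sad":     (-0.7, -0.5),
    "angry":   (-0.6, 0.8),
}

def nearest_adjectives(valence, arousal, k=2):
    """Return the k adjectives closest to a point on the valence-arousal plane."""
    def dist(item):
        v, a = item[1]
        return math.hypot(v - valence, a - arousal)
    return [name for name, _ in sorted(ADJECTIVES.items(), key=dist)[:k]]

# A bright, energetic tune maps to nearby positive, high-arousal adjectives.
print(nearest_adjectives(0.7, 0.7))
```

Because any point on the plane can be labelled, new adjectives can be added simply by placing more points, without retraining a categorical classifier.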

Original language: English
Title of host publication: Information Modelling and Knowledge Bases XXVIII
Publisher: IOS Press
Number of pages: 20
ISBN (Electronic): 9781614997191
Publication status: Published - 2017

Publication series

Name: Frontiers in Artificial Intelligence and Applications
ISSN (Print): 0922-6389


Keywords

  • emotion recognition
  • instantaneous frequency spectrum
  • music analysis
  • music emotions
  • music repetitions
  • music similarity
  • tune internal homogeneity

ASJC Scopus subject areas

  • Artificial Intelligence


