TY - GEN
T1 - Music compositional intelligence with an affective flavor
AU - Legaspi, Roberto
AU - Hashimoto, Yuya
AU - Moriyama, Koichi
AU - Kurihara, Satoshi
AU - Numao, Masayuki
PY - 2007
Y1 - 2007
AB - The consideration of human feelings in automated music generation by intelligent music systems, albeit a compelling theme, has received very little attention. This work aims to computationally specify a system's music compositional intelligence that is tightly coupled with the listener's affective perceptions. First, the system induces a model that describes the relationship between feelings and musical structures. The model is learned by applying the inductive logic programming paradigm of FOIL, coupled with the Diverse Density weighting metric, over a dataset constructed from musical score fragments hand-labeled by the listener according to a semantic differential scale that uses bipolar affective descriptor pairs. A genetic algorithm, whose fitness function is based on the acquired model and follows basic music theory, is then used to generate variants of the original musical structures. Lastly, the system creates chordal and non-chordal tones out of the GA-obtained variants. Empirical results show that the system is 80.6% accurate on average in classifying the affective labels of the musical structures and that it is able to automatically generate musical pieces that stimulate four kinds of impressions, namely, favorable-unfavorable, bright-dark, happy-sad, and heartrending-not heartrending.
KW - Adaptive user interface
KW - Affective computing
KW - Automated reasoning
KW - User modeling
UR - http://www.scopus.com/inward/record.url?scp=34648837228&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=34648837228&partnerID=8YFLogxK
U2 - 10.1145/1216295.1216335
DO - 10.1145/1216295.1216335
M3 - Conference contribution
AN - SCOPUS:34648837228
SN - 1595934812
SN - 9781595934819
T3 - International Conference on Intelligent User Interfaces, Proceedings IUI
SP - 216
EP - 224
BT - IUI 2007
T2 - 12th International Conference on Intelligent User Interfaces, IUI 2007
Y2 - 28 January 2007 through 31 January 2007
ER -