DocumentCode :
730152
Title :
The AMG1608 dataset for music emotion recognition
Author :
Yu-An Chen ; Yi-Hsuan Yang ; Ju-Chiang Wang ; Homer Chen
Author_Institution :
Nat. Taiwan Univ., Taipei, Taiwan
fYear :
2015
fDate :
19-24 April 2015
Firstpage :
693
Lastpage :
697
Abstract :
Automated recognition of musical emotion from audio signals has received considerable attention recently. To construct an accurate model for music emotion prediction, the emotion-annotated music corpus has to be of high quality. It is desirable to have a large number of songs annotated by numerous subjects in order to characterize the general emotional response to a song. Because emotion perception is subjective, the prediction model also needs to be personalized, which in turn requires a large number of annotations per subject for training and evaluating a personalization method. In this paper, we discuss the deficiencies of existing datasets and present a new one. The new dataset, which is publicly available to the research community, is composed of 1608 30-second music clips annotated by 665 subjects. Furthermore, 46 subjects annotated more than 150 songs each, making this dataset the largest of its kind to date.
Keywords :
audio signal processing; emotion recognition; music; AMG1608 dataset; automated recognition; emotion perception; existing dataset deficiency; music emotion recognition; computational modeling; data models; mood; predictive models; speech; training data; crowdsourcing; personalization
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on
Conference_Location :
South Brisbane, QLD, Australia
Type :
conf
DOI :
10.1109/ICASSP.2015.7178058
Filename :
7178058