What makes listening to music an emotional experience?
Ryerson University and SMART Lab Analyze How We Listen to Music
What makes listening to music an emotional experience? How much of this emotion is shared across listeners with different musical interests and backgrounds, and how much is subjective to each individual listener? WaveDNA is addressing these questions through a collaborative research project with the SMART Lab at Ryerson University. A better understanding of this fluid relationship between the music and the listener, in the context of emotion, matters greatly for music composition, generation, and recommendation systems: it would allow these systems to be tailored to individual listeners, providing a richer listening experience.
Current approaches to this problem typically gather emotion-based tags or labels from thousands of listeners for large datasets of music samples. Machine learning algorithms are then applied to these datasets, and the results are used to recommend different types of music. While this approach works reasonably well for a general audience, it has limited power to model any one specific listener.
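To make the tag-based approach concrete, here is a minimal sketch of one common variant: each track is represented by the emotion tags listeners applied to it, and similar tracks are recommended by comparing those tag profiles. All track names and tag counts below are invented for illustration; real systems use far larger datasets and more sophisticated learning algorithms.

```python
import math

# Hypothetical emotion-tag counts aggregated from many listeners:
# each track maps an emotion label to how often listeners applied it.
TAGS = {
    "track_a": {"happy": 40, "energetic": 35, "sad": 2},
    "track_b": {"happy": 38, "energetic": 30, "sad": 5},
    "track_c": {"sad": 45, "calm": 30, "happy": 3},
}

def cosine(u, v):
    """Cosine similarity between two sparse tag-count vectors."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(seed, k=1):
    """Return the k tracks whose crowd tag profiles best match the seed's."""
    scores = [(cosine(TAGS[seed], vec), name)
              for name, vec in TAGS.items() if name != seed]
    return [name for _, name in sorted(scores, reverse=True)[:k]]

print(recommend("track_a"))  # ['track_b'] — it shares the happy/energetic profile
```

Because the tag counts are crowd averages, any two listeners asking for recommendations from the same seed track get the same answer, which is exactly the limitation noted above.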
WaveDNA and the SMART Lab are approaching this problem in a unique way. First, they extract relevant features from within the music itself, since the music has specific characteristics that convey emotion. Using WaveDNA's rich, proprietary representation, salient aspects of the music can be extracted at multiple levels, ranging from low-level audio features to mid- and high-level symbolic features pertaining to pitch, rhythm, and timbre. Second, each listener has his or her own subjective experience shaped by background, culture, and personality. Features pertaining to the listener are captured through a combination of surveys and physiological response recordings. Once a reasonably large dataset of these features has been collected, algorithms will be applied to generate emotion-based listening profiles for each type of listener.
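The last step, grouping listeners into emotion-based profiles, can be sketched with a simple clustering pass over per-listener feature vectors. The feature values and the two-dimensional representation below are invented for illustration (the actual features, representation, and algorithms are WaveDNA's and the SMART Lab's own); the sketch uses plain two-cluster k-means with a deterministic initialization.

```python
# Hypothetical listener features: each pair is (survey-reported preference
# for high-arousal music, normalized physiological response), both on a
# 0-1 scale. Real profiles would use many more dimensions.
LISTENERS = [
    (0.90, 0.80), (0.85, 0.90), (0.80, 0.85),  # strong responders
    (0.10, 0.20), (0.15, 0.10), (0.20, 0.15),  # weak responders
]

def sq_dist(p, q):
    """Squared Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def two_profiles(points, iters=10):
    """Two-cluster k-means: split listeners into two listening profiles.
    Initial centers are simply the first and last point, for determinism."""
    centers = [points[0], points[-1]]
    for _ in range(iters):
        clusters = ([], [])
        for p in points:
            nearer = 0 if sq_dist(p, centers[0]) <= sq_dist(p, centers[1]) else 1
            clusters[nearer].append(p)
        # Move each center to the mean of its assigned listeners.
        centers = [tuple(sum(d) / len(c) for d in zip(*c)) for c in clusters]
    return centers, clusters

centers, clusters = two_profiles(LISTENERS)
print([len(c) for c in clusters])  # [3, 3] — each profile captures one group
```

Each resulting center is a prototype "listening profile"; a new listener's survey and physiological features can then be matched against the nearest profile to personalize recommendations.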
Dr. Naresh N. Vempala
Postdoctoral Research Fellow (NSERC-CRD)
SMART Lab, Department of Psychology, Ryerson University