Ryerson University and SMART Lab Analyze How We Listen to Music

What makes listening to music an emotional experience? How much of this emotion is shared across listeners with different musical interests and backgrounds, and how much is subjective to each individual listener? WaveDNA is addressing these questions through a collaborative research project with the SMART Lab at Ryerson University. A better understanding of this fluid relationship between the music and the listener, in the context of emotion, is an important problem for music composition, generation, and recommendation systems: it would allow these systems to be tailored to individual listeners, providing a richer listening experience.

Some current approaches to this problem involve gathering emotion-based tags or labels from thousands of listeners for large datasets of music samples. Machine learning algorithms are then applied to these datasets, and the results are used to recommend different types of music to listeners. While this approach is valuable for a generic, average listener, it has its limits: it cannot capture what is specific to each individual listener.
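
As a rough illustration of this tag-based pipeline, the hedged sketch below trains a generic classifier on crowdsourced emotion labels and ranks unseen tracks by predicted emotion. The feature set, labels, and model choice are illustrative placeholders, not the actual systems described here.

```python
# A minimal sketch of the tag-based approach, assuming emotion labels
# crowdsourced from many listeners and generic audio features; everything
# here is a stand-in, not WaveDNA's or anyone's production pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical dataset: each row is a music sample described by
# low-level audio features (e.g., tempo, spectral centroid, RMS energy).
n_samples, n_features = 1000, 8
X = rng.normal(size=(n_samples, n_features))

# Crowdsourced emotion tags, aggregated to one majority label per sample.
emotions = np.array(["happy", "sad", "tense", "calm"])
y = rng.choice(emotions, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A generic classifier stands in for whatever model a real system uses.
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# Recommendation then reduces to ranking unseen tracks by the predicted
# probability of the emotion a listener asks for.
probs = clf.predict_proba(X_test)
happy_idx = list(clf.classes_).index("happy")
ranked = np.argsort(probs[:, happy_idx])[::-1]
print("top 5 'happy' candidates:", ranked[:5])
```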

WaveDNA and the SMART Lab are approaching this problem in a unique way. First, they are extracting relevant features from within the music itself, since the music has specific characteristics that convey emotion. Salient aspects of the music, ranging from low-level audio features to mid- and high-level symbolic features pertaining to pitch, rhythm, and timbre, can be extracted using WaveDNA's rich, proprietary representation. Second, each listener brings a subjective experience shaped by background, culture, and personality. Features pertaining to the listener are being captured through a combination of surveys and physiological response recordings. Once a reasonably large dataset of these features has been collected, algorithms will be applied to generate emotion-based listening profiles for each type of listener.
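
The profile-generation step is not specified beyond "algorithms will be applied", so the sketch below assumes a simple clustering approach over combined survey and physiological features. Every feature name and the choice of k-means are assumptions made for illustration, not the project's published method.

```python
# A minimal sketch of the profile-building step, assuming each listener
# is described by survey responses plus physiological summaries; the
# features and the clustering method are illustrative assumptions.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Hypothetical per-listener features: survey scores (e.g., musical
# training, genre preferences on a 1-7 Likert scale) and physiological
# responses (e.g., mean heart-rate change, skin-conductance peaks)
# recorded while listening.
n_listeners = 200
survey = rng.integers(1, 8, size=(n_listeners, 5)).astype(float)
physio = rng.normal(size=(n_listeners, 3))
features = np.hstack([survey, physio])

# Standardize so survey and physiological scales are comparable.
scaled = StandardScaler().fit_transform(features)

# Group listeners into a handful of emotion-based listening profiles.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaled)
print("listeners per profile:", np.bincount(kmeans.labels_))

# A new listener can then be assigned to the nearest profile, and music
# recommendations tuned to that profile rather than a global average.
new_listener = scaled[:1]
print("assigned profile:", kmeans.predict(new_listener)[0])
```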

Written By:

Dr. Naresh N. Vempala
Postdoctoral Research Fellow (NSERC-CRD)
SMART Lab, Department of Psychology, Ryerson University
http://www.ryerson.ca/~nvempala/
http://www.ryerson.ca/smart/people/vempala/
http://www.cogmir.org/
nvempala@psych.ryerson.ca
