Explainable Models and Their Application in Music Emotion Recognition
ARI guest talk by Verena Haunschmid and Shreyan Chowdhury at the Acoustics Research Institute of the Austrian Academy of Sciences.
Current machine learning models used for music processing generally perform well, but they do not give meaningful or intuitive explanations for their predictions. In this talk we will give an overview of existing tools for interpretable machine learning and discuss how these tools, mostly developed for the image domain, can or cannot be applied in the music domain. In particular, we will talk about emotion recognition in music and present a few approaches for making emotion predictions more useful by explaining them in a perceptually meaningful way.
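As one concrete illustration of the kind of image-domain interpretability tool the abstract refers to, the sketch below implements occlusion sensitivity on a toy spectrogram: a patch of the input is masked and the resulting drop in the model's output is recorded as a heatmap. The `toy_model` (a fixed frequency-band energy) and all parameter choices here are purely illustrative assumptions, not part of the talk's actual methods.

```python
import numpy as np

def occlusion_map(model, spectrogram, patch=(8, 8), fill=0.0):
    """Occlusion sensitivity: slide a masking patch over the input and
    record how much the model's prediction drops for each patch position."""
    base = model(spectrogram)
    h, w = spectrogram.shape
    ph, pw = patch
    heat = np.zeros((h // ph, w // pw))
    for i in range(0, h - ph + 1, ph):
        for j in range(0, w - pw + 1, pw):
            occluded = spectrogram.copy()
            occluded[i:i + ph, j:j + pw] = fill
            # Large positive values mark regions the prediction depends on.
            heat[i // ph, j // pw] = base - model(occluded)
    return heat

# Stand-in "model": mean energy in one frequency band (hypothetical).
def toy_model(spec):
    return spec[16:24, :].mean()

spec = np.random.default_rng(0).random((32, 64))  # toy spectrogram
heat = occlusion_map(toy_model, spec)
print(heat.shape)  # → (4, 8)
```

For a spectrogram input, such a heatmap can be read in perceptual terms (which time-frequency regions drive the prediction), which is one reason occlusion-style methods transfer from images to audio more directly than some other techniques.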
This research is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No 670035 ("Con Espressione").