Automatic Detection of Users’ Emotional States Using Music Listening Data
Abstract
Introduction: Music is a natural medium for emotional expression and regulation in everyday life. Recent studies highlight its potential for non-intrusive emotion detection.
Objective: To develop a system that detects users' emotional states from their behavior while listening to music, with the goal of accurate, automated emotion classification.
Method: Music listening data were processed using normalization, feature extraction, and feature selection. Both supervised and unsupervised machine learning algorithms were applied and evaluated.
Results: The proposed system achieved an average classification accuracy of 82.4%, with a precision of 80.9% and a recall of 81.6% across all evaluation scenarios. Feature selection methods, such as Chi-Square and Relief, reduced computation time by approximately 25% while improving model generalization.
Conclusions: Music-listening behavior is an effective source for emotion detection that requires no invasive measurements. The proposed system is well suited to future intelligent, emotion-aware applications.
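The Results mention Chi-Square as one of the feature-selection filters. A minimal sketch of that idea is shown below; the function names, the binarized listening-behavior features, and the toy labels are all illustrative assumptions, not the authors' implementation or data.

```python
def chi2_score(feature, labels):
    """Chi-square statistic between one binary feature and the class labels.
    Higher scores indicate a stronger feature/label association."""
    n = len(labels)
    score = 0.0
    for f_val in (0, 1):
        for c in set(labels):
            observed = sum(1 for x, y in zip(feature, labels)
                           if x == f_val and y == c)
            p_f = sum(1 for x in feature if x == f_val) / n
            p_c = sum(1 for y in labels if y == c) / n
            expected = n * p_f * p_c  # count expected under independence
            if expected > 0:
                score += (observed - expected) ** 2 / expected
    return score

def select_top_k(features, labels, k):
    """Rank feature columns by chi-square score and keep the k best indices."""
    scores = [(i, chi2_score(col, labels)) for i, col in enumerate(features)]
    scores.sort(key=lambda t: t[1], reverse=True)
    return [i for i, _ in scores[:k]]

# Hypothetical binarized listening features (e.g. "skipped track", "repeat on"):
features = [
    [1, 1, 1, 0, 0, 0],  # tracks the labels closely -> high score
    [1, 0, 1, 0, 1, 0],  # uninformative -> low score
]
labels = ["sad", "sad", "sad", "happy", "happy", "happy"]
print(select_top_k(features, labels, k=1))  # → [0]
```

Filters of this kind score each feature independently of the classifier, which is why they can cut training time (the paper reports roughly 25%) without retraining models during selection.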

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The published articles are the sole responsibility of their authors and do not necessarily reflect the opinions of the editorial committee.
Revista CESTA respects the moral rights of its authors, who assign the economic rights to the published material to the editorial committee. In turn, the authors declare that the present work is original and has not been published previously.
