
Integration of nonparametric fuzzy classification with an evolutionary-developmental framework to perform music sentiment-based analysis and composition

LAUR Repository


dc.contributor.author Abboud, Ralph
dc.contributor.author Tekli, Joe
dc.date.accessioned 2024-08-14T09:43:15Z
dc.date.available 2024-08-14T09:43:15Z
dc.date.copyright 2020 en_US
dc.date.issued 2019-11-12
dc.identifier.issn 1432-7643 en_US
dc.identifier.uri http://hdl.handle.net/10725/15980
dc.description.abstract In recent years, several approaches have been developed to create algorithmic music composers. Most existing solutions focus on composing music that appears theoretically correct or interesting to the listener. However, few methods have targeted sentiment-based music composition: generating music that expresses human emotions. The few existing methods are restricted in the spectrum of emotions they can express (usually two dimensions: valence and arousal) as well as in the sophistication of the music they compose (usually monophonic, following translation-based, predefined templates or heuristic textures). In this paper, we introduce a new algorithmic framework for autonomous sentiment-based music expression and composition, titled MUSEC, which perceives an extensible set of six primary human emotions (anger, fear, joy, love, sadness, and surprise) expressed by a MIDI musical file and then composes (creates) new polyphonic, (pseudo-)thematic, and diversified musical pieces that express these emotions.
Unlike existing solutions, MUSEC: (i) is a hybrid crossover between supervised learning (SL, to learn sentiments from music) and evolutionary computation (for music composition, MC), where SL serves as the fitness function of MC to compose music that expresses target sentiments; (ii) is extensible in the panel of emotions it can convey, producing pieces that reflect a target crisp sentiment (e.g., love) or a collection of fuzzy sentiments (e.g., 65% happy, 20% sad, and 15% angry), compared with the crisp-only or two-dimensional (valence/arousal) sentiment models used in existing solutions; and (iii) adopts the evolutionary-developmental model, using an extensive set of specially designed music-theoretic mutation operators (trill, staccato, repeat, compress, etc.), stochastically orchestrated to add atomic (individual chord-level) and thematic (chord pattern-level) variability to the composed polyphonic pieces, compared with traditional evolutionary solutions that produce monophonic, non-thematic music. We conducted a large battery of tests to evaluate MUSEC’s effectiveness and efficiency in both sentiment analysis and composition. It was trained on a specially constructed set of 120 MIDI pieces, including 70 sentiment-annotated pieces: the first significant dataset of sentiment-labeled MIDI music made available online as a benchmark for future research in this area. Results are encouraging and highlight the potential of our approach in application domains ranging over music information retrieval, music composition, assistive music therapy, and emotional intelligence. en_US
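To make the abstract's "SL as the fitness function of MC" idea concrete, here is a hypothetical Python sketch, not the authors' implementation: a stub fuzzy classifier stands in for MUSEC's learned sentiment model, and a toy evolutionary loop mutates pitch sequences toward a target fuzzy sentiment vector (e.g., 65% joy, 20% sadness, 15% anger). All function names, the pitch-based heuristic, and the mutation operators are invented for illustration.

```python
# Hypothetical sketch of an evolutionary loop whose fitness function is a
# learned sentiment classifier (stubbed here); not the authors' code.
import random

EMOTIONS = ["anger", "fear", "joy", "love", "sadness", "surprise"]

def classify(piece):
    """Stand-in for a supervised fuzzy classifier: maps a piece (here, a
    list of MIDI pitch numbers) to a fuzzy sentiment vector summing to 1."""
    avg = sum(piece) / len(piece)
    joy = max(0.0, min(1.0, (avg - 48) / 24))  # toy: higher pitch => "joy"
    scores = {e: (1 - joy) / 5 for e in EMOTIONS}
    scores["joy"] = joy
    scores["sadness"] = (1 - joy) * 0.5
    total = sum(scores.values())
    return {e: s / total for e, s in scores.items()}

def fitness(piece, target):
    """Negative L1 distance between predicted and target fuzzy vectors."""
    pred = classify(piece)
    return -sum(abs(pred[e] - target[e]) for e in EMOTIONS)

def mutate(piece):
    """Toy chord-level mutation operators (repeat, transpose, drop)."""
    op = random.choice(["repeat", "transpose", "drop"])
    p = list(piece)
    i = random.randrange(len(p))
    if op == "repeat":
        p.insert(i, p[i])
    elif op == "transpose":
        p[i] = min(96, max(24, p[i] + random.choice([-2, 2])))
    elif op == "drop" and len(p) > 4:
        del p[i]
    return p

def evolve(target, generations=200, pop_size=20, seed=0):
    """Keep the fittest half each generation; refill with mutated copies."""
    random.seed(seed)
    pop = [[random.randint(36, 84) for _ in range(8)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, target), reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=lambda p: fitness(p, target))

target = {e: 0.0 for e in EMOTIONS}
target.update({"joy": 0.65, "sadness": 0.20, "anger": 0.15})
best = evolve(target)
```

In MUSEC itself, the classifier is trained on sentiment-annotated MIDI pieces and the mutation operators are music-theoretic (trill, staccato, repeat, compress, etc.); this sketch only shows how a classifier's output can steer the evolutionary search toward a fuzzy sentiment target.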
dc.language.iso en en_US
dc.title Integration of nonparametric fuzzy classification with an evolutionary-developmental framework to perform music sentiment-based analysis and composition en_US
dc.type Article en_US
dc.description.version Published en_US
dc.author.school SOE en_US
dc.author.idnumber 201306321 en_US
dc.author.department Electrical And Computer Engineering en_US
dc.relation.journal Soft Computing en_US
dc.journal.volume 24 en_US
dc.journal.issue 13 en_US
dc.article.pages 9875-9925 en_US
dc.keywords Music sentiment analysis en_US
dc.keywords MIDI en_US
dc.keywords Evolutionary algorithms en_US
dc.keywords Algorithmic composition en_US
dc.keywords Supervised learning en_US
dc.keywords Fuzzy classification en_US
dc.identifier.doi https://doi.org/10.1007/s00500-019-04503-4 en_US
dc.identifier.ctation Abboud, R., & Tekli, J. (2020). Integration of nonparametric fuzzy classification with an evolutionary-developmental framework to perform music sentiment-based analysis and composition. Soft Computing, 24(13), 9875-9925. en_US
dc.author.email joe.tekli@lau.edu.lb en_US
dc.identifier.tou http://libraries.lau.edu.lb/research/laur/terms-of-use/articles.php en_US
dc.identifier.url https://link.springer.com/article/10.1007/s00500-019-04503-4 en_US
dc.orcid.id https://orcid.org/0000-0003-3441-7974 en_US
dc.author.affiliation Lebanese American University en_US

