
Expressiveness in music performance: analysis, models, mapping, encoding

Sergio Canazza Targon; Giovanni De Poli; Antonio Rodà
2012

Abstract

Over the last decade, in both systematic and cultural musicology, considerable research effort (using methods borrowed from music informatics, psychology, and the neurosciences) has been devoted to connecting two worlds that seemed very distant or even antithetical: machines and emotions. Within the Sound and Music Computing framework of human-computer interaction in particular, interest has grown in finding ways to allow machines to communicate expressive, emotional content through a nonverbal channel. This interest is justified by the goal of enhanced interaction between humans and machines that exploits communication channels typical of human-human communication, which can therefore be easier and less frustrating for users, particularly non-technically skilled ones (e.g. musicians, teachers, students, the general public). While research on emotional communication has found its way into more traditional fields of computer science such as Artificial Intelligence, novel fields are also focusing on these issues: examples include studies on Affective Computing in the United States, KANSEI Information Processing in Japan, and Expressive Information Processing in Europe. This chapter presents the state of the art in the computational study of music performance. In addition, analysis methods and synthesis models of expressive content in music performance, developed by the authors, are presented. Finally, an XML-based system for encoding the expressiveness of a music performance is detailed.
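To give a concrete sense of what an XML-based encoding of performance expressiveness might look like, the sketch below builds a small XML document that attaches expressive deviations (e.g. timing and intensity) to score notes. This is a minimal illustration only: the element and attribute names (`performance`, `note`, `deviation`, `ref`, `type`) are assumptions for this example and do not reproduce the authors' actual schema.

```python
import xml.etree.ElementTree as ET

def encode_performance(events):
    """Build a hypothetical XML tree from (note_id, deviations) pairs.

    Each deviation maps a parameter name (e.g. "timing") to a numeric
    offset relative to the nominal score value.
    """
    root = ET.Element("performance", attrib={"piece": "example"})
    for note_id, deviations in events:
        note = ET.SubElement(root, "note", attrib={"ref": note_id})
        for name, value in deviations.items():
            dev = ET.SubElement(note, "deviation", attrib={"type": name})
            dev.text = f"{value:.3f}"
    return root

# Two notes with illustrative expressive deviations:
events = [
    ("n1", {"timing": 0.042, "intensity": -0.10}),   # slightly late, softer
    ("n2", {"timing": -0.015, "intensity": 0.25}),   # anticipated, accented
]
xml_string = ET.tostring(encode_performance(events), encoding="unicode")
print(xml_string)
```

Keeping the expressive layer separate from the score itself, as here, lets the same piece carry multiple alternative performances, which is one motivation for markup-based approaches.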
Published in: Structuring Music through Markup Language (2012), ISBN 9781466624979

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/2534723
Citations: Scopus 18