Thesis reference: Dynamiques temporelles des émotions exprimées par la musique (Temporal dynamics of the emotions expressed by music). THIBAULT DE BEAUREGARD, Kim Julie


Download "Thesis. Reference. Dynamiques temporelles des émotions exprimées par la musique. THIBAULT DE BEAUREGARD, Kim Julie"


Thesis: Dynamiques temporelles des émotions exprimées par la musique. THIBAULT DE BEAUREGARD, Kim Julie.

Abstract: This thesis investigates the perception of emotional characteristics in music from a dynamic perspective. Building on the GEMS model and using a method of dynamic judgments, that is, one that takes the temporal dimension of music into account, this thesis established the existence of differences in the emotional temporal structures of the GEMS dimensions, the importance of acoustic and musical factors in the attribution of emotion to music, the modulation of emotional intensity across three modes of musical expressivity (academic, emphatic, natural), and finally the importance of the listening context in the attribution of emotional characteristics to music.

Reference: THIBAULT DE BEAUREGARD, Kim Julie. Dynamiques temporelles des émotions exprimées par la musique. PhD thesis: Univ. Genève, 2017, no. FPSE 646. URN: urn:nbn:ch:unige. DOI: /archive-ouverte/unige:93823. Available at:

Disclaimer: the layout of this document may differ from the published version.

Section de Psychologie. Under the supervision of Professor Didier Grandjean.

DYNAMIQUES TEMPORELLES DES EMOTIONS EXPRIMEES PAR LA MUSIQUE

THESIS presented to the Faculty of Psychology and Educational Sciences of the University of Geneva to obtain the degree of Doctor of Psychology, by Kim THIBAULT de BEAUREGARD of Geneva. Thesis No. 646. Geneva, 3 March 2017. Student number:

Thibault de Beauregard, K., Ott, T., Labbé, C., & Grandjean, D. Dynamic approach to the study of emotions expressed by music. Under review.


Acknowledgements

I would first like to thank the members of my jury, Prof. Stéphanie Khalfa, Prof. Marcel Zentner, Prof. Marc-André Rappaz and Prof. David Sander, for travelling here and for agreeing to devote their time and knowledge to this work. A special mention goes to David and his Bachelor course "Psychologie de l'émotion", which made me want to pursue this path. I warmly thank all my colleagues at the CISA and the members of the NEAD laboratory, in particular Tamara Ott, Carole Varone, Blandine Mouron, Daniela Sauge, Donato Cereghetti, Donald Glowinski, Sylvain Delplanque, Carolina Labbé, Valérie Milesi, Simon Schaerlaeken, Wiebke Trost, Ben Meuleman, Olivier Rosset, Julien Savary, Eric Werner Federau, Sylvain Tailamee, Lara Lordier, and my Master's students, Stéphanie Baudoux, Adeline Tinguely and Christina Bisiritsa. I thank all the members of the SIEMPRE project, in particular Antonio Camurri and Luciano Fadiga, for three years of travel and intellectual growth. Thanks to Renaud Capuçon for his kindness and availability. "Grazie mille" to the members of the Quartetto di Cremona, and especially Cristiano Gualco, for this splendid collaboration within the SIEMPRE project! My very warm thanks to Olivier Lartillot, quite a character hiding behind the great engineer of the MIRtoolbox. Timothée, thank you so much for your loyalty, your friendship and your availability at any hour of the day or night! A huge thank-you to the whole Etude de Pfyffer, which accompanied, supported and encouraged me throughout this epic journey. I would particularly like to thank Maître Charles de Bavier for his understanding and flexibility regarding my somewhat baroque working hours, and my colleagues Carmen Valzino and Claude Delecraz, without whom I could not have finished this thesis. Thank you to all three of you, from the bottom of my heart.

I thank my husband, Alexis, for his unconditional love and precious support. Thanks to my parents and parents-in-law for their encouragement and kindness. All five of you have been pillars for me during this adventure. Sophie, Sergio, Astrid, Cédric and Marie, a big thank-you to you as well, for your joie de vivre and your smiles. I thank all my friends and family members who surrounded me, in particular my Oiselet, Alba, Emma, Sophie, Bianca, Louis M., Heidi, Christine, Cindy, Catherine, Jessica, Alexia, Garance, Marie-Hélène, Louis, Chrystelle, Sonia, Radisha, Yannick, Gaspard, Vanessa, Adrian, Flore, Julie and Olivia. My final thanks go to Didier Grandjean, who not only taught me to appreciate Wagner but also managed to make me passionate about subjects I would never have dared to tackle on my own. Thank you, Didier, for these six years of collaboration, for the "scientific" fits of laughter, the travels, the moments of intellectual lightning, and above all thank you for your friendship.


SUMMARY

This thesis investigates the perception of emotional characteristics in music from a dynamic perspective. We set out to understand how individuals are able to represent what music expresses in emotional terms, and we studied this from a dynamic perspective, based on the defining characteristic of the musical phenomenon: its temporal dimension. Using a new dynamic measurement method, in the form of a computer interface, we were able to capture listeners' emotional responses over time as they listened to pieces of classical music. This thesis draws on Western "classical" music, ranging from the late Renaissance to the contemporary period, and therefore does not claim to generalize its results to all types of music. To better understand the processes of perceiving and attributing emotional characteristics to music, the GEMS model (Geneva Emotional Music Scale) proposed by Zentner, Grandjean & Scherer (2008) offers the most relevant theoretical framework. The GEMS model is the first theoretical framework specifically dedicated to music-related emotions and is presented in detail in this thesis. Building on the GEMS model and adopting a dynamic perspective, the results of this thesis are as follows: i) a first study (Study 1: pilot study, experiment 1 and experiment 2) tested the GEMS model by examining the emotions expressed by music, rather than the listener's subjective feeling, together with a measurement method based on "dynamic judgments". To verify whether this method of dynamic judgments is reliable, a pilot study demonstrated the importance and usefulness, in terms of the information gained, of a continuous approach to the study of music and emotions.

This first study also revealed a strong consensus among individuals as to what music expresses in emotional terms over time, as shown by very high inter-rater agreement, whatever the type of music and across all GEMS dimensions, attesting to the usefulness and reliability of the dynamic judgment interface. This first study also revealed significant differences in the temporal structure of the GEMS dimensions (Study 1, experiment 1), and further addressed the specificity and complexity associated with this temporality of the GEMS dimensions (Study 1, experiment 2); ii) a second study (Study 2: experiment 1 and experiment 2) investigated, by means of dynamic judgments, the dynamic relations between acoustic parameters at different levels, the parameters on which listeners rely to perceive emotional characteristics in music. Based on a set of dynamic descriptors extracted with the MIRtoolbox developed by Lartillot, Toiviainen & Eerola (2008), the statistical analyses of this study brought out three factors relevant to explaining the variance of the dynamic emotional judgments, demonstrating a specific contribution of the acoustic parameters for each GEMS dimension (Study 2, experiment 1). In a second step, we examined two target GEMS dimensions, Power and Tenderness (Study 2, experiment 2), which allowed us to confirm the relevance of these three factors; iii) a third study (Study 3, experiments 1a/1b and experiments 2a/2b) addressed the duality of music-related affect in terms of emotions expressed by the music and emotions felt by the listener. With the help of a professional violinist with whom we created the stimuli, we also examined the concept of the emotional expressivity of music by studying three different modes of expressivity: an academic mode, a natural mode and an emphatic mode. This third study showed that musical expressivity has a significant but distinct effect on the dynamic evaluations of the emotions perceived vs. expressed by the music, and revealed significant differences in emotional intensity in the dynamic judgments of these two types of emotional processes;

iv) finally, a fourth and last study (Study 4) investigated the importance of the listening context in the attribution of emotional characteristics to music, by comparing dynamic judgments made in an ecologically valid context, i.e. during a concert, with dynamic judgments made in a laboratory context, i.e. in a computer room with headphones, and showed that the emotional intensity perceived by listeners was higher in a non-experimental listening context.

Table of contents

I. Theoretical introduction
1.1. Emotion and music
  Definition of utilitarian vs. aesthetic emotions, presentation of the GEMS model
  Distinction between emotions expressed by music vs. emotions felt by the listener
  The dynamic aspect of music
1.2. Decoding the emotions expressed by music
  The Brunswikian approach to the process of attributing an emotion to music
  Acoustic parameters, auditory percepts, musical structure and emotional judgments
1.3. Musical performance, emotional expressivity and listening context
  Performance and interpretation
  Listening context and modes of expressivity
II. Method
III. Experimental part
  Study 1: Pilot Study; Study 1; Study 2
  Study 2: Study 1; Study 2
  Study 3
  Study 4
IV. General conclusion
V. Appendix
VI. Bibliography

I. Theoretical introduction

1.1. Emotion and music

Definition of utilitarian vs. aesthetic emotions, presentation of the GEMS model

"We shall call emotion an abrupt fall of consciousness into the magical." When Jean-Paul Sartre defined the concept of emotion in his "Sketch for a Theory of the Emotions" (1939), he underlined the link between the conscious reality of the emotional phenomenon and the almost linguistically inexpressible character of such a reality. Indeed, we are all able to represent to ourselves what an emotion is, yet it is very difficult to find the words that best capture what we feel. Take the example of joy: every individual can represent, through lived experience, the psychophysiological mechanisms underlying this emotion, and we in fact often resort to physiological terminology to explain an emotional state (for example, heart rate, changes in body temperature, shivers). The concept of emotion appears polysemous, and a plethora of theories have set out to study emotion. We will present three of them here: the theory of so-called "basic" emotions, the bidimensional model of emotion, which we will only touch on, and finally a model stemming from cognitive appraisal theories, the Component Process Model, which we will detail. The theory of basic emotions (Ekman, 1992) proposes that, from a list of six basic emotions, namely joy, sadness, disgust, fear, anger and surprise, it is possible to study and understand all other, so-called secondary, types of emotion. According to this model these emotions are universal, and the more complex secondary emotions, such as shame or guilt, are held to arise from a blend of these basic emotions.

The dimensional model (Russell, 2003) proposes instead to define and explain all affective states by means of two dimensions: valence and arousal. For example, according to this theory, sadness is an emotion of negative valence accompanied by a low level of arousal. These two models have undeniable scientific qualities in the ease with which they explain the emotional process, notably the basic emotions theory with respect to facial expressions, but they also show major shortcomings owing to the simplicity with which they propose to study the emotional phenomenon. The Component Process Model (Scherer, 1984; 2001; 2005), stemming from cognitive appraisal theories, offers a more refined and thorough theoretical framework for studying emotion. According to this model, an emotion is defined as "[...] a sequence of state changes occurring in five organismic systems in an interdependent and synchronized manner, in response to the evaluation of an external or internal stimulus with respect to a central concern of the individual." The five components are: the cognitive component, the richest and most complex, comprising the stimulus evaluation checks (SECs), which cover the analysis of relevance, the implications for the individual's goals, coping potential and normative significance; the peripheral efferent component, whose function is to regulate the organism; the motivational component, with action tendencies; the motor expression component, which consists in communicating the individual's reactions and behavioral intentions; and finally the subjective feeling component. This last component is a key component of the emotional process, because it underlines the individual and unique character of the cognitive evaluation of an emotional stimulus. According to the Component Process Model, emotion is a dynamic, recursive process with a precise, adaptive purpose for the individual, in constant interaction with their lived experience, memory, attention, cultural environment, motivation and reasoning.

In this approach, subjective feeling reflects the changes occurring in the other components of emotion, serving as the basis for the conscious representation of the emotional process (Grandjean, Sander & Scherer, 2008). The Component Process Model takes into account the evolving character of emotion, as well as the individual's biological and cultural background. In 2004, Scherer stressed the diversity of affective states, in terms of intensity, duration and synchronization between the key components, and proposed, for the sake of semantic clarity, to distinguish "utilitarian" emotions from so-called "aesthetic" emotions. The former are those we experience every day in reaction to a particular event or stimulus, such as anger, joy or sadness; they have an adaptive function as well as relevance for the individual's goals. These emotions are utilitarian in the sense that they serve major functions in the adaptation and adjustment of individuals to events that may have consequences for their immediate physical and psychological well-being (Scherer & Coutinho, 2013). The latter are linked to aesthetic aspects, being less immediate and lacking relevance for the individual's direct, adaptive goals. These aesthetic emotions echo the appreciation of the intrinsic qualities of visual or auditory art, highlighting the individual's disinterest in any utilitarian consideration, while nevertheless being essential to personal development (Scherer, 2004). The case of music, on which this thesis focuses, is without doubt the most eminent example of aesthetic emotions. Music, according to the anthropocentric definition of the Oxford dictionary, is: "The art or science of combining vocal or instrumental sounds to produce beauty or coherence of form and the expression of emotion." It is thus a combinatorial art, a science of arranging and ordering sounds, and also silences, over time: rhythm is the vehicle of temporal combination; pitch and timbre, the vehicles of frequency combination; melody, the succession of sounds of different pitches; and harmony, the superposition of simultaneous sounds (Lemarquis, 2009, p. 64).

Music is of great importance when groups of individuals gather, with respect to identification with the social group (North & Hargreaves, 1999; Cross, 2001), and it also represents an important means of communication between individuals (Peretz, 2010). A large body of research has highlighted the emotional power of music: stress reduction (Jiang, Zhou, Rickson & Jiang, 2013; Labbé, Schmidt, Babin & Pharr, 2007; Elliott, Polman & McGregor, 2011), mood induction (Thomson, Reece & Di Benedetto, 2014; Juslin & Sloboda, 2001; 2010), the reinforcement by music of the emotional character of an image or a film (Baumgartner, Lutz, Schmidt & Jäncke, 2006), the use of music as a political and cultural tool (Deliège, Vitouch & Ladinig, 2010), and physiological reactions in response to music (Adolphs, 2006; Baltes, Avram, Mircea & Miu, 2011; Juslin & Sloboda, 2001, 2010). The distinction between utilitarian and aesthetic emotions is interesting insofar as music allows us to step outside everyday life, to mirror and sometimes exaggerate the emotions and feelings we may encounter in daily life. Schubert (2010) describes this principle of dissociation, i.e. the mind's capacity to withdraw from reality, as the fundamental function of music. For example, the fact that negatively valenced music can elicit preference reactions in individuals (Schubert, 1996; 2007, 2010; Hunter, Schellenberg & Schimmack, 2010) supports the idea that music has this capacity to take us out of reality, since in everyday life individuals are generally not drawn to negatively valenced stimuli. The stakes and goals of utilitarian and aesthetic emotions are therefore not the same, which is why it is important to differentiate the processes engaged in these two concepts. Although the emotional power of music is well established, there remains a patent lack of definitions, operationalizations, paradigms and consensus in this field of research, and many debates are still ongoing.

To fill the lack of concepts and operationalizations specific to the domain of music, Zentner, Grandjean and Scherer (2008) carried out a series of studies culminating in a model dedicated to music-related emotions, the GEMS: Geneva Emotional Music Scale. Indeed, the traditional concepts, definitions and tools used in the psychology of emotion do not seem best suited to the study of emotions as fine-grained and complex as those induced by music (Scherer & Zentner, 2001). Following two studies that allowed them to group the affective terms most relevant to music-related emotions, Zentner, Grandjean and Scherer (2008) showed that these terms cluster into nine dimensions: Wonder, Transcendence, Tenderness, Nostalgia and Peacefulness (grouped under the label "Sublimity"), Power and Joy (grouped under the label "Vitality"), and Tension and Sadness (grouped under the label "Unease") (Figure 1).

Figure 1. The GEMS model proposed by Zentner, Grandjean & Scherer (2008).

This model represents the first attempt to study music as a specific domain of emotion and therefore appears best suited to understanding music-related emotions, which is not the case for the traditionally used basic emotions model and dimensional model (Ekman, 1992; Russell, 2003, cited in Zentner, Grandjean & Scherer, 2008). Indeed, in a fourth study, this nine-dimensional model proved more effective at predicting music-related emotions than the two models of emotion cited above. On this point, Baltes, Avram, Mircea and Miu (2011) recently tested aspects of peripheral responses (cardiovascular, electrodermal and respiratory responses) in relation to the GEMS dimensions. The authors showed, for example, a positive correlation between the Wonder dimension and respiratory rate.

The nine-dimensional model proposed by Zentner, Grandjean and Scherer (2008) captures well this nuance between utilitarian and aesthetic emotions. Emotions related to music therefore do not seem directly comparable to those we may experience in everyday life. As noted above, the authors also point out that even emotions that might seem "negative", such as tension or sadness, are not experienced in the same way with music as in everyday life. Individuals sometimes deliberately listen to an agitated piece to energize themselves, or to a sad piece to meditate (Labbé, Schmidt, Babin & Pharr, 2007; Evans & Schubert, 2008; Taruffi & Koelsch, 2014). The study of music-related emotions therefore requires a theoretical framework of its own, treating music as a specific domain of emotion (Trost, Ethofer, Zentner, & Vuilleumier, 2011; Miu & Baltes, 2012). The subtlety of so-called "aesthetic" emotions, as well as the complexity of emotional communication in the musical phenomenon, stems in part from the fact that it is an implicit, non-verbal relation. The GEMS model represents the most convincing attempt to date to study emotions in relation to music, which is why we will rely on this model in this thesis.

Distinction between emotions expressed by music vs. emotions felt by the listener

There is a distinction to be drawn between the listener's subjective feeling, in other words the emotion felt by the listener, and the attribution of emotional characteristics to the music, i.e. the emotion expressed by the music and perceived/judged by the listener (Scherer & Zentner, 2001).

Although closely intertwined, one does not necessarily imply the other (Evans & Schubert, 2008; Scherer & Coutinho, 2013): it is entirely possible to recognize the sadness expressed by a piece of music without feeling that sadness, just as it is possible to feel nostalgia or melancholy while listening to a joyful piece. Emotion perception is the process of recognizing the emotion expressed or represented by the music. It is easy, for example, to recognize the triumphant and heroic character of Richard Wagner's Ride of the Valkyries. The perception or recognition of the emotion expressed by music thus calls on more "objective" mechanisms of emotion attribution (Gabrielsson & Juslin, 2003). By contrast, the induction of an emotion by music refers to the process of the listener's subjective feeling of the emotion (Scherer & Zentner, 2001). Given the large share of subjectivity in evaluations of the emotions felt by the listener, studying the emotions expressed by music offers the advantage of a certain objectivity of measurement, because it is easier to agree on the emotions expressed by music than on the emotions actually felt. It has indeed been shown that inter-rater agreement is higher when listeners evaluate the emotions expressed by music than when they are questioned about their personal impressions and feelings (Gabrielsson & Juslin, 2003). All individuals, with rare exceptions, are able to attribute the valence of a minor chord and of a major chord. As Johann Joachim Quantz, the famous musician of the Prussian court in the eighteenth century, observed: "The hard (major) key is generally used to express joy, boldness, seriousness, the majestic; the soft (minor) key to express flattery, sadness, tenderness." (Droz, 2001, p. 6). There is indeed a substantial number of studies showing that individuals are able to recognize or attribute emotions expressed by music.

Fritz and colleagues (2009), for example, demonstrated the existence of universality in the recognition of basic emotions (joy, sadness, fear) expressed by music in a population native to Africa (the Mafa), and thus unfamiliar with the repertoire of Western classical music. Curtis and Bharucha (2010) demonstrated the importance of the pitch interval (pitch: the perceived height of a sound) for decoding the emotion expressed by music. Vieillard and colleagues (2008) demonstrated individuals' ability to evaluate the joyful, sad, frightening and soothing character of musical excerpts, and accordingly proposed a list of 56 pieces representative of these four types of emotional state. By asking three professional guitarists to perform three short melodies ("When the Saints", "Nobody Knows" and "Greensleeves") while modulating their playing to express joy, sadness, anger and fear, Juslin (1997) brought out three main results: first, performers are able to express emotions by modulating different aspects of their playing; second, the cues used by performers to express a particular emotion are the same as those used by listeners to determine which type of emotion is expressed by the music; finally, cue use is more consistent and stable across the different melodies than across the different interpretations. Music can therefore express different emotions, through the combination of several elements at the level of the score, the style and the interpretation, and individuals are able to recognize these emotions. Studying the listener's felt emotion involves controlling several closely interacting variables. Juslin and Västfjäll (2008) deplore the neglect, in current research, of the mechanisms underlying the notion of musical emotion. According to the authors, this lack of attention to the detail of the emotional process of the listener's feeling, to "how" these emotions are evoked in the individual, has led to inconsistent and uninterpretable results. They therefore proposed a new theoretical framework highlighting six mechanisms through which listening to music can induce emotions in the listener.

They added a seventh mechanism, rhythmic entrainment, in a 2010 revision of their model (Juslin, Liljeström, Västfjäll & Lundqvist, 2010). This model, BRECVEM, proposes the following emotion-induction mechanisms: brain stem reflexes, rhythmic entrainment, evaluative conditioning, emotional contagion, visual imagery, episodic memory and, finally, musical expectancy. Each mechanism represents a more or less sophisticated route through which music and emotion come into resonance: i) brain stem reflexes: this mechanism brings music back to its most primal meaning, and results from one or more fundamental acoustic characteristics of the music having been registered by the brain stem to signal a potentially important event; ii) rhythmic entrainment is linked to the "locking" of an internal bodily rhythm of the listener (for example the heart rate or a cortical rhythm) onto the rhythm of the music toward a common periodicity; iii) evaluative conditioning depends on unconscious, unintentional processes whereby an emotion is induced by a piece of music simply because that stimulus has been repeatedly paired with other stimuli; iv) emotional contagion is the induction mechanism whereby the listener perceives the emotional expression of the music and "mimics" that expression internally, by means ranging from peripheral feedback to the muscles to a more direct activation of the relevant emotional representations in the brain; v) visual imagery is the mechanism whereby an emotion is induced in a listener because they conjure up a mental image while listening to the music, the emotions arising from this mechanism resulting from a strong interaction between the music and the images; vi) episodic memory is the emotion-induction mechanism whereby the music evokes a memory of a particular event in the listener's life;

vii) finally, musical expectancy concerns the violation of a specific feature of the music, deviating from or confirming the listener's expectations about how the music will unfold. Juslin, Liljeström, Västfjäll and Lundqvist (2008; 2010) conclude by insisting on the need to define the theoretical concepts better, and they underline their interest in the interaction between these seven mechanisms in the induction of emotion in a listener, and in the role of cognitive evaluation/appraisal relative to the different induction mechanisms. On this point, Scherer and Coutinho (2013) stress that a common appraisal process operates both in utilitarian and in aesthetic emotions, but that the major difference between the two lies in the fact that the cognitive evaluations concerning goal relevance and the capacity to cope with the stimulus involve different criteria (p. 8). The existence of an appraisal process in aesthetic emotions is explained in particular by the presence of novelty evaluations and by the concept of expectancy inherent to music. When an individual hears a piece of music for the first time, a cognitive evaluation processing the novelty of this stimulus is triggered. It is often in this context that chills appear. Huron and Margulis (2011) describe the musical chill as "a musically induced affect that shows close links with musical surprise" (p. 591). Gabrielsson (2001) calls these transcendental experiences in relation to music (including chills, tears and goosebumps) "Strong Experiences with Music (SEM)". These psychophysiological reactions can also appear in the context of musical expectancy. The concept of musical expectancy holds that the characteristics of the music, at the level of both structure and expressivity, either violate or confirm the listener's expectations (Huron, 2006). The violation or confirmation of musical rules can therefore create an emotion in the listener. A study by Kraemer, Macrae, Green and Kelley (2005) showed spectacular results on the cerebral processing of music and the concept of expectancy.

The authors demonstrated that, as in the visual domain when information is fragmented or partial, the brain is able to reconstruct familiar music when the music is no longer playing. There is thus indeed appraisal at different levels in the musical experience, but the stimulus that triggers cognitive evaluations in the context of aesthetic emotions does not have the same relevance as a stimulus that triggers cognitive evaluations in the context of utilitarian emotions; in other words, the musical stimulus does not entail vital consequences for the individual's goals and does not call for the same capacities for adaptation and regulation. Nevertheless, an individual could, for example, use aesthetic emotions to regulate utilitarian emotional reactions. Scherer and Zentner (2001) attempted to identify the production rules of the process by which music induces emotion, and listed a multitude of closely interacting factors: the listener's personality (i.e. musical expertise, musical preferences, current mood), contextual factors (i.e. place, event, listening conditions), the performance (i.e. technical quality, interpretation, the musician's physical appearance), and acoustic and structural characteristics (i.e. intervals, timbre, melody, mode). In this thesis, we will focus on the emotions expressed by music, in other words the process by which the listener perceives, attributes and recognizes the emotional characteristics of music. We will nevertheless not leave the emotions felt by the listener entirely aside, since in one of our studies we will compare emotions felt by the listener vs. emotions expressed by the music, as well as musical expressivity across these two types of emotion.

The dynamic aspect of music

An essential characteristic of music is its unfolding in time. The article by Verduyn, Van Mechelen, Tuerlinckx, Meers and Van Coillie (2009) explains that the same holds for emotion, and that it is surprising how little research has addressed such an important aspect as the intensity and temporal course of emotion. The richness of the emotional states that pass through music cannot, then, be reduced to a global judgment, to a single emotional label. The use of rhythm, flats, sharps, silences and pauses, as well as the differing prominence given to each instrument, means that a whole discourse is built around the overall character of the piece. From the very first notes we recognize the general climate of a musical piece, but this varies over time, and there is great heterogeneity in the variability profiles of the expressed intensity. Verduyn et al. (2009) point out differences in intensity not only between emotions but also between individuals. The majority of studies on music and emotions, whether felt or perceived, use global judgments (asking individuals to label an entire musical excerpt or piece with a single term), adjective scales, or dimensional judgments (see Gabrielsson & Juslin, 2003, for a review), thus relying on a "static", deferred judgment of emotion and missing the richness of the emotional changes specific to music. The first attempts to study music over time using continuous measures date back only to the 1980s, and the effectiveness of Emery Schubert's pioneering research testifies to the importance of studying the musical phenomenon in time (Schubert, 2001). Technological advances have made possible various studies using a wide range of means for continuous measurement: for example, a continuous measure of the tension perceived in the music through the pressure applied to a pair of tongs (the more tension perceived in the music, the harder participants squeezed the tongs) (Nielsen, 1983, cited in Gabrielsson & Juslin, 2003), the "Continuous Response Digital Interface" (Madsen, 1990), or the 2DES (Schubert, 1996; Chapin, Jantzen, Kelso, Steinberg & Large, 2010). The 2DES instrument, for example, offers an interface representing a two-dimensional emotional space, using the dimensions of valence (horizontal axis) and arousal (vertical axis), to evaluate the emotional character of the music. This tool does not seem optimal, however, because it places individuals in a dual task and thus makes them prone to cognitive overload.
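To give a concrete sense of what such continuous measurement involves, the minimal sketch below samples a single on-screen slider at a fixed rate while a participant listens. This is an illustrative mock-up, not the interface developed for this thesis: the 100 ms sampling period, the 0-100 scale and the dimension label are all assumptions of the example.

```python
import time
import tkinter as tk

SAMPLE_MS = 100  # sampling period in ms (assumed; the thesis interface's rate is not stated here)

class DynamicRatingLogger:
    """Record (elapsed time, slider value) pairs while the participant listens."""

    def __init__(self, dimension="Tension"):
        self.root = tk.Tk()
        self.root.title(f"Dynamic judgment: {dimension}")
        self.slider = tk.Scale(self.root, from_=0, to=100, orient=tk.HORIZONTAL,
                               length=400, label=f"Perceived {dimension} (0 = none, 100 = maximal)")
        self.slider.pack(padx=10, pady=10)
        self.samples = []
        self.t0 = time.monotonic()
        self.root.after(SAMPLE_MS, self._sample)

    def _sample(self):
        # Store the current slider position with a timestamp, then reschedule.
        self.samples.append((time.monotonic() - self.t0, self.slider.get()))
        self.root.after(SAMPLE_MS, self._sample)

    def run(self):
        self.root.mainloop()  # closing the window stops the recording
        return self.samples

if __name__ == "__main__":
    samples = DynamicRatingLogger("Tension").run()
    print(f"Captured {len(samples)} samples")
```

Sampling the slider on a timer, rather than only on movement events, yields a regularly spaced time series that can later be aligned with acoustic descriptors.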

Music is an art of time. Time governs music as it governs the perception of sound: from "micro-time", the scale of sound vibration (sound being air set in vibration), up to musical form, a construction within the time of listening (Emery, 1975, p. 696). Since musical form is revealed to us only as time unfolds, every instant is potentially a moment of the future, a projection into the unknown. Such is the meaning of the title of a work by Henri Dutilleux that invites us to plunge into the "mystery of the instant". Within this temporal component, music can unfold along four fundamental dimensions:
- rhythm, which concerns the duration of sounds and their level of intensity (dynamics);
- melody, which is the impression produced by the succession of sounds of different pitches;
- polyphony, or harmony, which concerns the deliberate superposition of simultaneous sounds;
- timbre, which allows a polyphony blending several instruments.
We will look more closely at these aspects of musical and acoustic structure later in this introduction. A key concept in this dynamic approach to music is the concept of "tension" (Madsen & Fredrickson, 1993; Bigand, Parncutt & Lerdahl, 1996; Bigand & Parncutt, 1999).

Tension plays out between the complexity and the emotional specificity of a musical excerpt. It has rarely been clearly defined, except by Bigand and Parncutt (1999), cited by Farbood (2012), as "the feeling that there must be a continuation of the sequence" (Farbood, 2012, p. 387). The concept of musical tension is therefore inseparable from the concept of "tension resolution". If tension is created by the expectation of a continuation at time T, that continuation must resolve the tension at time T1. Farbood (2012) describes tension in music as a high-level concept that is difficult to define because of its complex, multidimensional nature. Indeed, we will see in a later section how the interaction of a multitude of acoustic and musical factors creates tension, and how that same multitude of factors then manages to resolve it. The notion of tension is also inseparable from the concept of "expectation", of musical expectancy. As we saw earlier, this concept has been proposed as one of the mechanisms by which music induces emotion in the listener (Juslin & Västfjäll, 2008; Juslin, Liljeström, Västfjäll & Lundqvist, 2010). Most studies of musical emotions use the psychometric tests traditionally employed in the study of emotion (Likert scales, questionnaires, self-reports, choices from lists of emotional terms; see Zentner & Eerola, 2010, for a review). Nevertheless, as we have seen, the study of aesthetic emotions, and of the wide variety of affective states we may associate with art in general and music in particular, demands to be treated as a specific domain of emotion (Zentner, Grandjean & Scherer, 2008; Scherer & Coutinho, 2013). Moreover, the measurement methods cited above are global, static and deferred, that is, they usually ask the listener to make one overall choice about a piece in its entirety after listening, thereby missing the temporal aspect of both emotion and music. A growing body of research highlights this shortcoming of traditional measurement methods and proposes the use of dynamic measures (Schubert, 2004; Chapin, Jantzen, Kelso, Steinberg & Large, 2010; Coutinho & Cangelosi, 2011; Lehne, Rohrmeier & Koelsch, 2014).

Farbood (2012) proposed a quantitative model for predicting, in real time, tension judgments for complex musical stimuli, based on the interaction of a number of musical parameters. Following two studies taking into account harmony, pitch, melodic expectation, tempo, dynamics, onset frequency, rhythmic regularity and meter, the statistical results showed that this temporal parametric model accurately predicts tension judgments for complex musical stimuli. In 2006, Vines, Krumhansl, Wanderley and Levitin had already used adjustable linear sliders on a MIDI controller to collect continuous data in the study of musical tension. Coutinho and Cangelosi (2011) likewise insist that the emotions induced by music depend largely on the temporal and dynamic patterns of low-level parameters of musical structure. In a previous study, the authors demonstrated that the spatiotemporal dynamics of psychoacoustic features resonate with two psychological dimensions of affect: valence and arousal (Coutinho & Cangelosi, 2009). In 2011, Coutinho and Cangelosi identified six psychoacoustic features (loudness, pitch, pitch contour, tempo, texture and sharpness) involved in the emergence of individuals' subjective emotional feeling in relation to music. Using the EMuJoy software (Nagel, Kopiez, Grewe & Altenmüller, 2007), a computational implementation of the 2DES (a two-dimensional emotional space representing valence on the horizontal axis and arousal on the vertical axis), they asked individuals to judge nine excerpts of Western classical music, and they related the subjective responses captured in real time by the software to psychophysiological measures (skin conductance and heart rate) and to the six psychoacoustic parameters.
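The sketch below illustrates this family of analyses, predicting a continuous rating from a handful of feature time series via ordinary least squares. All data here are synthetic, and the feature names, sampling rate and linear form are assumptions of the example, not the actual Farbood or Coutinho and Cangelosi pipelines.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic time series at 10 Hz over 60 s: two "psychoacoustic" features
# and a continuous arousal rating loosely driven by them (illustrative only).
t = np.arange(0, 60, 0.1)
loudness = np.cumsum(rng.normal(size=t.size)) * 0.1
tempo = np.cumsum(rng.normal(size=t.size)) * 0.1
arousal = 0.8 * loudness + 0.3 * tempo + rng.normal(scale=0.5, size=t.size)

# Ordinary least squares: arousal ~ intercept + loudness + tempo.
X = np.column_stack([np.ones_like(t), loudness, tempo])
beta, *_ = np.linalg.lstsq(X, arousal, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((arousal - pred) ** 2) / np.sum((arousal - arousal.mean()) ** 2)
print(f"intercept={beta[0]:.2f}, b_loudness={beta[1]:.2f}, b_tempo={beta[2]:.2f}, R^2={r2:.2f}")
```

With real continuous ratings, the strong serial dependence of both features and judgments inflates the apparent fit, so lagged models or corrected degrees of freedom are usually needed rather than a naive regression.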

They found a positive correlation between an increase in heart rate and high arousal, corroborating results already established in earlier studies (Krumhansl, 1997; Iwanaga & Moroky, 1999). As for the subjective judgments captured in real time, the authors confirmed that the six psychoacoustic parameters (loudness, tempo, pitch, pitch contour, texture and sharpness) provide a solid basis for predicting the emotions felt by listeners in response to music, in terms of valence and arousal. As explained above, in this thesis we will rely on the GEMS model (Zentner, Grandjean & Scherer, 2008), which will serve as our theoretical framework for studying the emotions expressed by music, and we will use a method of dynamic judgments developed within our research group, which will help us better understand the genesis of an emotional episode in the temporal course of music, in particular by allowing us to differentiate more specifically between emotions that can be very close (e.g. Tenderness and Nostalgia). This method of dynamic judgments is presented in detail in the "Method" section of this manuscript.

1.2. Decoding the emotions expressed by music

The Brunswikian approach to the process of attributing an emotion to music

From a psychological point of view, the aesthetic experience consists of two types of processes: perceptual processes, fast and partly unconscious, involving the sensory organs; and central processes, general and intermodal, based on mental representations, with or without the contribution of the sensory organs (Deliège, Vitouch & Ladinig, 2010).

As individuals evolving in a world made of movements, sounds and colors, our system is programmed to perceive the elements around us and to construct a reality from what we perceive. Our sensory equipment and our neural networks enable us to construct and represent basic dynamic auditory and visual percepts, dynamic auditory and visual objects, and explicit and implicit contents. We are "perceiving and predicting machines", and most of our perceptions are automatic and unconscious (i.e. we do not have to think in order to perceive a tree, its shape, its color, its movements). In other contexts, however, the mechanism of perception needs to be analyzed in a more fine-grained way, by delineating levels of perception. Indeed, there are different stages, from the primary level to the cognitive level, in the encoding and decoding of a stimulus. In 1955, Egon Brunswik took an interest in these different stages of perception and proposed a model of social interactions, examining the processes of production, perception, recognition and attribution of an emotion to others within social exchange. He called this theoretical proposal the "Lens Model". From a macro-analytic point of view, the model proposes two levels: a functional level and a level of accuracy coefficients in the encoding and decoding of a stimulus. From a micro-analytic point of view, it proposes that from a state or trait of an individual (the sender), for example a state of anger, a set of characteristics can be described (distal cues) that correspond more or less to perceptual phenomena (proximal percepts), leading to an attribution (e.g. anger) on the part of the perceiver. The notion of externalization covers both the intentional communication of internal states and the behavioral and physiological reactions produced involuntarily. Operationally, internal states are represented by criterion values and distal cues by indicator values. Distal cues are represented proximally by percepts, which are the result of the perceptual processing carried out by the observer.

Operationally, percepts can be evaluated through judgments expressed as scores on psychophysical scales/dimensions. The correlations between indicator values and perceptual judgments are referred to as representation coefficients. They indicate how accurately the distal cues are projected into the individual's perceptual space. The attribution of a state is the result of inference processes based on the perception of the distal cues. The correlations between perceptual judgments and attribution are represented in the model by utilization coefficients, which measure the use of each perceived cue in the inference of a state. This model enjoyed great success in social psychology, and many other branches of psychology have drawn on it, notably to explain emotional interactions between individuals. The structure of Brunswik's model has, for example, been adapted to the study of the vocal communication of emotions, in order to understand how our system is organized to integrate an acoustic object (Grandjean & Baenziger, 2009). In this adaptation of Brunswik's model, internal states are externalized in the form of distal cues which, in the context of vocal communication (emotional prosody), correspond to the acoustic characteristics of the voice. Emotional prosody relates to humans' capacity to infer the emotional states of others and to adapt their behavior according to the emotional content present in the voice, just as animals rely on non-linguistic vocalizations to understand the messages of their conspecifics (Sander & Scherer, 2009). In the context of the vocal communication of emotions, the distal indicators (representing an objective measure) are acoustic cues such as the fundamental frequency, the quantification of energy over time, and the quantification of energy in different frequency bands; the proximal percepts (representing a subjective measure) are pitch intonation, loudness and voice quality, in short, everything that is perceived by the listener.
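Since both coefficients are defined above as simple correlations, they are straightforward to compute once cue values, perceptual judgments and attributions are available for a set of stimuli. The sketch below uses made-up numbers for eight hypothetical excerpts; the variable names and the linear (Pearson) form of the correlation are assumptions of this illustration.

```python
import numpy as np

def lens_coefficients(indicator, percept, attribution):
    """Correlation-based coefficients of a Brunswikian lens model, for one cue.

    indicator:   objective distal cue values across stimuli (e.g., mean F0 in Hz)
    percept:     perceptual judgments of that cue (e.g., rated pitch height)
    attribution: emotional attribution scores (e.g., rated anger intensity)
    """
    corr = lambda a, b: float(np.corrcoef(a, b)[0, 1])
    return {
        # How accurately the distal cue is projected into perceptual space.
        "representation": corr(indicator, percept),
        # How strongly the perceived cue is used when inferring the state.
        "utilization": corr(percept, attribution),
    }

rng = np.random.default_rng(7)
f0 = rng.uniform(120, 280, size=8)                  # distal cue: mean F0 per excerpt
pitch_judgment = f0 + rng.normal(0, 15, size=8)     # percept: noisy perception of F0
anger = 0.02 * pitch_judgment + rng.normal(0, 0.8, size=8)  # attribution driven by the percept
print(lens_coefficients(f0, pitch_judgment, anger))
```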

This adaptation of Brunswik's model by Grandjean and Baenziger (2009) links the physical properties of a stimulus to the percepts constructed by the areas of the primary and secondary auditory cortices, with the aim of attributing an emotional state to another person. Juslin (1997, 2000) also revisited Brunswik's (1955) Lens Model, adapting it to the study of emotional communication in musical performance (Figure 2). In this version, the performer and the listener use the same cues to attribute an emotion to the music: the performer modulates, for example, the tempo or the dynamics to express a target emotion, and the listener relies on these modulations of tempo or dynamics to make a coherent and reliable judgment of the emotion expressed by the music. In this revisited version of the Lens Model, ecological validity corresponds to the relation between the performer's expressive musical intentions and a given cue used in the performance (e.g. tempo).

Figure 2. Brunswik's model adapted to music by Juslin (1997).

The functional validity of the cue in question is represented by the relation between the cue and the listener's judgment, namely the validity of the cue in predicting the judgment. Achievement refers to the accuracy of the communication, measured by the relation linking the performer's intention and the listener's judgment. Finally, "matching" reflects the extent to which the ecological and functional validities correspond to each other, that is, whether the performer and the listener use the same cues, the one to encode and the other to decode the emotion expressed by the music. The levels of perception, recognition and attribution of emotional characteristics to music can be summarized as follows. A basic, physical level concerns the acoustic parameters of the signal, such as the distribution of energy in the signal or the fundamental frequency parameter (Juslin & Laukka, 2003). Low-level processing, linked to sensory processing, refers to the perceptual level of the acoustic parameters (Banse & Scherer, 1996); it is possible, for example, to perceive the intensity of the sound, in other words its loudness, the pitch, which is the perceptual correlate of F0, the timbre, and so on. Musical structure, a representation even more elaborate and complex than the percept, relates to all the relevant elements present in the score, such as rhythm, key, articulation and mode, and refers to the musical notation, which is invariant across different types of performance. Finally, a last level of interest concerns emotional attribution by the listener via a judgment. This last level encompasses the attribution of emotion(s) expressed by the music, which the listener can infer using different cues based on percepts, percepts themselves correlated with acoustic parameters, and cues derived from aspects of perceptual organization at a more elaborate level, which can be related to the notion of musical structure. In this thesis, we propose a dynamic adaptation of Brunswik's lens model.

Figure 3 presents an example of the approach of decomposing a dynamic judgment over time, with the determinants that allow the attribution of emotional characteristics to music (e.g. articulation, mode, energy decomposed into frequency bands).

Figure 3. Emotional perspective on the lens model during a dynamic judgment, proposed by Glowinski et al. (2015), adapted from Juslin & Lindström. M: music score(s); X, Y, Z: emotional cues present in the score, represented by acoustic parameters and aspects of musical structure; XY, YZ, XZ: interactions between the emotional cues; PC: perceived cues by listener (L), on which the attribution of emotional characteristics is based.

We present in detail in the following section these elements of the process of encoding, perceiving, recognizing and attributing emotional characteristics to music.

Acoustic parameters, auditory percepts, musical structure and emotional judgments

An objective method for analyzing the expression of emotion by music or by the voice consists in performing acoustic analyses (Grandjean & Baenziger, 2009). The acoustic signal contains several relevant parameters that allow individuals to decode an emotional expression, at the vocal level as well as at the musical level (Juslin & Laukka, 2003). The acoustic analyses traditionally used for the study of emotional prosody are analyses of the fundamental frequency contour, the intensity contour and the duration of the expressions. The fundamental frequency (F0) is expressed in hertz (Hz) and corresponds to the number of repetitions per second of the fundamental period of the acoustic signal. The F0 contour represents the evolution of this fundamental period over the course of an emotional expression. Acoustic intensity is expressed in decibels (dB) and is derived from the amplitude of the acoustic signal. The intensity contour corresponds to the evolution of acoustic intensity over the course of an emotional expression (Grandjean & Baenziger, 2009). It is also possible to analyze the relative energy in different frequency bands of the acoustic signal (Banse & Scherer, 1996). Thanks to these various acoustic analyses, a review by Scherer (2003, cited in Grandjean & Baenziger, 2009) identified certain patterns specific to the production of emotional vocal expressions. Anger, for example, shows high energy in the high frequencies, a fast speech and articulation rate, and an increase in F0 range, whereas sadness shows the opposite pattern. In their literature review, Juslin and Laukka (2003) demonstrate an important parallel between vocal expression and music. Both belong to non-verbal communication (vocal expression/emotional prosody being the non-verbal aspect of speech) and can communicate specific emotions through the modulation of certain acoustic parameters.
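As an illustration, the sketch below extracts the three kinds of measures just described, an F0 contour, an intensity contour in dB and relative band energies, from an audio file using the librosa library in Python. The file name, the pYIN pitch range and the 1 kHz band cutoff are arbitrary choices for the example; this is not the analysis pipeline of the thesis itself.

```python
import numpy as np
import librosa

# "excerpt.wav" is a placeholder path, not a stimulus from the thesis corpus.
y, sr = librosa.load("excerpt.wav", sr=None, mono=True)

# F0 contour in Hz, estimated with pYIN (unvoiced frames are returned as NaN).
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)

# Intensity contour: frame-wise RMS amplitude converted to decibels.
rms = librosa.feature.rms(y=y)[0]
intensity_db = librosa.amplitude_to_db(rms, ref=np.max)

# Relative energy above vs. below an arbitrary 1 kHz cutoff, per frame.
power = np.abs(librosa.stft(y)) ** 2
freqs = librosa.fft_frequencies(sr=sr)
high_ratio = power[freqs >= 1000].sum(axis=0) / (power.sum(axis=0) + 1e-12)

print(f"median F0: {np.nanmedian(f0):.1f} Hz | "
      f"mean intensity: {intensity_db.mean():.1f} dB | "
      f"mean share of energy above 1 kHz: {high_ratio.mean():.2%}")
```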

At the musical level, one may assume strong interactions between acoustic parameters, percepts and musical structure in the decoding of the emotions expressed by music. For example, the fact that instrumentalists follow the marking "fortissimo" in a score will have an impact on the intensity perceived by listeners. The production of a musical piece usually proceeds from a score presenting the framework of the work. Musical notation appeared late in the West, with Guido of Arezzo around the year 1000. Before that period, musical transmission therefore took place essentially viva voce, relying on auditory memory and on the emotions that could be associated with it (Lemarquis, 2009). Marc-Antoine Charpentier, a seventeenth-century composer, catalogued in his "Règles de composition" the effects of each key: C major refers to a gay and warlike character, C minor to a dark and sad one; D major conveys a joyful and very warlike character, D minor is grave and devout; E-flat major refers to a cruel and harsh character, E-flat minor is horrible and dreadful; E major is quarrelsome and shrill, E minor is effeminate, amorous and plaintive; F major conveys fury and impetuosity, while F minor is dark and plaintive; G major is sweetly joyful, G minor is serious and magnificent; A major represents joy and is pastoral, A minor is tender and plaintive; B-flat major is magnificent and joyful, while B-flat minor is dark and terrible; finally, B major is harsh and plaintive, and B minor is solitary and melancholic. Musical structure is unquestionably linked to the musical sounds produced in performance and indicates on paper four fundamental characteristics of sound: the pitch of the notes, the duration of the notes, the intensity, and the timbre of the instruments (Gabrielsson & Lindström, 2001; 2010). The composer uses various means, present in the musical structure, to convey the target emotion of a musical piece.

(Hevner, 1935), tempo (the speed at which the piece is executed) (Peretz, Gagnon & Bouchard, 1998), articulation (legato/staccato) (Juslin, 1997), intensity (loud/soft) (Juslin, 2000), melodic contour (Schubert, 2004), pitch (Curtis & Bharucha, 2010), rhythm (Thompson & Robitaille, 1992), and harmony (consonant/dissonant) (Hevner, 1936). This enumeration is naturally not exhaustive. Focusing on precise elements of the musical structure, it has been shown, for example, that a fast rhythm is rated as happier than a slow one; interval analyses have revealed that large intervals are judged "more powerful" than small ones, the minor second being traditionally considered the most melancholic interval, whereas the octave, the fourth, the fifth, and the major sixth are heard as "happy/carefree" intervals (Gabrielsson & Lindström, 2001, for a review). Strong interactions also exist among the elements of the musical structure themselves, for example between rhythm and pitch: a rhythm of four sixteenth notes combined with a G natural on the violin may be associated with the expression of tension, whereas a slow rhythm of dotted half notes on the same G natural may be associated with the expression of sadness. Musical syntax is comparable to language: tonic (subject), subdominant (verb), dominant (object). Musical structure should be thought of as sections, elements that can become more complex, be juxtaposed, added together, or reduced (Lemarquis, 2009), imposing a degree of granularity on the analysis. On the basis of these acoustic and structural elements, a listener is able to attribute emotional characteristics to a musical excerpt; a toy encoding of such cue-emotion mappings is sketched after this paragraph. In this regard, Lartillot, Toiviainen, and Eerola (2008) developed a computational and statistical tool for Matlab, the MIRtoolbox, which allows the analysis of the acoustic and structural objects of music. This tool offers a series of key determinants that, alone or in combination, facilitate the understanding of how music is modeled within the emotional process.
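For illustration only, the cue-emotion associations cited above can be collected into a small lookup table; the labels paraphrase the reviewed findings and are not data taken from those studies.

    # Toy, hand-made summary of structural cue -> typical emotional
    # attribution, paraphrasing the studies cited above (illustrative only)
    cues <- data.frame(
      cue   = c("mode", "tempo/rhythm", "articulation", "intensity",
                "interval size", "harmony"),
      level = c("major vs. minor", "fast vs. slow", "staccato vs. legato",
                "loud vs. soft", "octave/4th/5th/major 6th vs. minor 2nd",
                "consonant vs. dissonant"),
      typical_attribution = c("happy vs. sad", "happier vs. sadder",
                              "more joyful vs. more tender",
                              "more powerful vs. calmer",
                              "happy/carefree vs. melancholic",
                              "restful vs. tense")
    )
    print(cues)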

One of the challenges posed by this thesis, building on the GEMS model, adopting a dynamic judgment method, and using the MIRtoolbox as an analysis tool, is to understand which acoustic, perceptual, and musical determinants allow individuals to attribute emotional content to music. Our main objective is to understand which combination(s) of these elements allow(s) music to be labeled "nostalgic", "joyful", "wonderful", "tense", "sad", and so on. The subtlety of the musical emotional phenomenon lies in music's capacity to pass from one emotion to another in a fraction of a second, reflecting the complexity and the specificity of the attribution of emotional characteristics to music. It is important to underline the role of each actor responsible for the emotional content of music. At a more general level of analysis, the first such actor is the composer; we have seen in this section which tools the musical structure places at his disposal to convey emotion through music. In describing the two types of emotions (emotions felt vs. emotions expressed by music), we considered the recipient of the composer's work, namely the listener. A key actor stands between these two entities: the performer. It is through the performer that the composer's message is transmitted, and it is through his playing that the listener can give emotional meaning to the acoustic and musical content received. We examine the role of this third actor in the next section.

1.3 Musical performance, emotional expressivity, and listening context

Performance and interpretation

Musical performance is each musician's own way of interpreting a musical piece, and it varies considerably from one performer to another (Gabrielsson & Juslin, 2003; Juslin, 2001). In 2003, Juslin proposed that musical performance be studied as a multidimensional phenomenon with five facets, the GERMS: (i) G, the "generative rules", whose function is to convey and clarify the musical structure. One function of performance is indeed to convey the musical structure as clearly as possible. Expression is viewed here as the nominal values of the score arising from the performer's cognitive representations of the hierarchical structure. Through variations in acoustic variables such as timing (tempo), dynamics, and articulation, a performer can clarify the boundaries of note groups, the metrical accents, and the harmonic structure. The rules for transforming the generative structure into expressive patterns depend strongly on the conventions of specific musical styles (Baroque, Classical, Romantic, etc.); (ii) E, "emotional expression", which can be defined by this quotation: "[...] a performer can be faithful to the musical structure and at the same time have the freedom to shape its moods" (Schaffer, 1992, cited in Juslin, 2003). It takes shape through the different variables the performer has at his disposal and serves to convey emotions intended for the listeners; (iii) R, "random variability", which reflects the presence of random variations in internal timing, mirroring human limits on the precision of execution. A performance always contains some random fluctuations, although they are quite small. From an aesthetic point of view, these random fluctuations

contribute to giving a musical piece a "living" character, and this slight unpredictability makes every performance absolutely unique (e.g., the magnitude of the random fluctuations increases with the duration of the inter-onset interval, so that larger/longer intervals tend to yield greater deviations). Although these random variations are subtle, they contribute to the sound of a human musical performance; (iv) M, the "motion principles", i.e., the biological principles of movement, which refer to the dynamic patterns of motion characteristic of humans. Such patterns in musical performance can be of two types: first, performers may intentionally try to recreate these patterns; second, there is biological motion proper, comprising the non-intentional patterns of variability that reflect the anatomical constraints of the body in connection with the motor demands of specific musical instruments; (v) S, "stylistic unexpectedness", which describes the constraints on violating the listener's stylistic expectations, in other words an unexpected stylistic change involving local deviations from performance conventions. Musical emotions often arise when musical expectations are violated, which creates psychological tension. This can happen when a performer deviates from the stylistic expectations attached to the performance conventions for a given part of the musical structure, momentarily creating a psychological tension that is resolved when the performer resumes the "expected playing". These five facets proposed by Juslin (2003) offer a fairly complete perspective on the performer's playing. Indeed, musical performance is not only a matter of technical motor skills; it also requires the capacity to generate, expressively, different performances of the same piece of music according to the nature of the musical structure and of the emotional communication. According to Sloboda (2000), the musical performances of professional performers have two major elements:

the technical element and the expressive/emotional element. The technical component relates to the mechanism of fluid production of coordinated gestures. For example, a technically competent piano performance may involve the execution of 20 notes per second, where each note's duration and the overall flow are controlled, and where absolute synchronization is required between the notes played by the different fingers of the two hands. The expressive component of musical performance derives from intentional variations in performance parameters, chosen by the performer to influence the cognitive and emotional responses of the listeners. The main expressive parameters available to the performer concern timing (both the onset and the offset of a note), intensity, pitch, and timbre (in the sense of sound quality); these parameters vary with the instrument. This concept of expressivity is also tied to knowledge of the musical genre (what may be considered appropriate for Chopin may seem completely aberrant for playing Bach). Technical and expressive skills are separate components, although they interact and partly depend on each other. Technical skill is, at least in theory, unrelated to the musical or artistic content of the music: it is entirely possible to perform a piece of music with absolute technical mastery but without any expressive skill. Expressive skills require knowledge of the underlying structure and of the stylistic constraints of the piece or of the musical genre/style. Because an effective expressive performance often requires very fine and subtle variations in the parameters of interpretation, expressive intentions frequently cannot be communicated effectively without a high level of technical mastery on the performer's part. Emotionally powerful performances can be created through the performer's use of unexpected or unconventional material. This phenomenon was investigated by asking professional pianists to record interpretations of their choice of a

simple Chopin Prelude on a MIDI piano platform, which allows the extraction, note by note, of durations and intensities (Sloboda & Lehmann, 2001). These performances were then played to a panel of musicians who were asked to adjust a movable slider, dynamically, according to the degree of emotionality present in the performance, yielding an "emotionality graph" for each performance. It appears that it is precisely the unexpected character of the musician's playing that gives the piece its emotional and aesthetic power. Studies with everyday listeners corroborate these results, demonstrating that the violation of the structural expectations of the music elicits emotion (Juslin & Västfjäll, 2008; Juslin & Sloboda, 2010; Juslin, Liljeström, Västfjäll & Lundqvist, 2010).

Listening context and modes of expressivity

According to Charles Darwin, "As neither the enjoyment nor the capacity of producing musical notes are faculties of the least use to man in reference to his daily habits of life, they must be ranked amongst the most mysterious with which he is endowed" (1871, p. 878; French translation by Edmond Barbier, 1981, Brussels: Complexe, vol. II, p. 623). Yet music fosters the establishment of bonds between humans, including the maternal bond, and contributes to the development of motor skills (Panksepp & Bernatzky, 2002; Wallin, Merker & Brown, 2000; Huron, 2001; Vitouch, 2006). Anthropological studies and behavioral observations in young children have also shown that music strengthens social cohesion, coordination, and cooperation within social groups (Geissmann, 2000). All this evidence supports a social-bonding theory of music. Huron (2001) takes the example of Williams syndrome (a childhood syndrome combining congenital heart disease, intellectual disability, an "elfin" face, and behavioral characteristics close to certain

autistic traits), in which there is a very strong link between sociability and musicality. As Brattico, Brattico, and Jacobsen (2010) point out, "For the members of a group, the simultaneous experience of the expressive properties of music is likely to constitute a fundamental aspect of the capacity to act empathically in society." Indeed, empathic social manifestations while listening to music in groups are numerous, as attested by the success of nightclubs, music festivals, and concerts. The question of the listening context therefore arises in the study of emotions related to music. The majority of studies have used experimental protocols in the laboratory, thereby setting ecological validity aside. As presented in the first section of this introduction, in order to better understand the emotional musical experience, Zentner and Scherer (2001) propose taking the following criteria into account: the structural characteristics of the music, the performer's performance, the listener, and the context. As underlined earlier, these criteria can, individually or in interaction, produce different affective states (Juslin & Sloboda, 2010). According to Zentner and Scherer (2001), the structural characteristics comprise two categories: the first groups the so-called "segmental" characteristics and concerns the acoustic characteristics of the musical structure; the second concerns the so-called "suprasegmental" characteristics and represents the symbolic coding, the basic emotional information. The individual characteristics of the performer and of the listener also have an impact: socio-cultural identity, personality, musical expertise, and mood are all factors that come into play in the emotional musical experience, whether it concerns the perception of an emotion or the feeling of an emotion in relation to music. Finally, the contextual characteristics concern the listening venue (concert hall, church, living room), the type of event (wedding, funeral, everyday events), and the listening medium (headphones, stereo system, live listening). Music stimulates our imagination and

has a very strong social dimension (Liljeström, Juslin & Västfjäll, 2013), yet our century is witnessing the development of an egocentric mode of music listening, fostered by the growth of technologies created for that purpose. The advent of headphones and of all derived means of musical listening now gives permanent access to music. In the past, in the days of Bach, Salieri, or Beethoven, one had to attend concerts or salons to enjoy music. With time and the development of new technologies, it is now possible to listen to music on a telephone, a hi-fi system, or a computer, alone or in a group, in nightclubs or at home. It has even become possible to experience auditory saturation, given the excessive use of music in public places such as supermarkets, shopping centers, shops, and hotels. Pascal Quignard is one of the rare authors to have addressed this theme of musical overabundance, speaking of auditory suffering in his book "La Haine de la Musique" (1997). The use of music also varies enormously from one culture to another (Fritz et al., 2009). By around the age of 8, a child's aesthetic judgment is formed: the child has acquired a clear perception of consonance, tonality, and rhythm. For a child accustomed to Western music, consonant chords will symbolize order and balance; dissonant chords, anxiety, desire, and torment; a fast tempo and a major mode will evoke joy; a slow tempo and a minor mode, sadness. This is acculturation (Lemarquis, 2009). In this thesis, we focus on Western classical music, encompassing works dating from the era of Vivaldi (17th century) to the present day (21st century), and we examine the effect of the listening context on individuals' dynamic judgments by manipulating the context (concert vs. laboratory). Lamont (2011), for example, showed that a greater percentage of intense experiences with music occur at concerts.

Numerous studies have also examined the different types of expression possible within musical expressivity itself (Timmers, Marolt, Camurri & Volpe, 2006; Chaplin et al., 2010). Two types of musical expressivity are particularly relevant for studying the influence of performance on the emotional processes of decoding and recognition: an "academic" type, also called "metronomic", characterized by a "cold", highly technical, schoolbook reading of the score, without any modulation of the parameters usually used to convey expressivity; and an "emphatic" type, characterized by an exaggeration of the emotional expression already present in the naturally played music. This last facet of expressivity in music (academic vs. emphatic mode) receives particular attention in this thesis. The thesis comprises four studies whose common thread is the emotions expressed by music, represented by the nine dimensions of the GEMS model, in a dynamic perspective. The first study addresses the development, and tests the effectiveness, of a new computerized measurement method for studying the complexity and the specificity of the temporal unfolding of the GEMS dimensions expressed by music. The second study examines the acoustic parameters and the aspects of musical structure on which individuals rely to attribute emotional characteristics to music. The third study investigates the link between the emotions felt through music and the emotions expressed by music, as modulated by musical expressivity. The fourth and final study examines the process of attributing emotional characteristics to music in a laboratory context vs. a concert context, underlining the importance of the ecological validity of experimental paradigms.

II. Method

The dynamic judgment method was developed with a Flash interface that records individuals' evaluations in real time. This graphic interface allows a dynamic judgment to be made on a target GEMS dimension (for example, Peacefulness). The graph is 1,000 pixels wide (corresponding to a duration of 4 min 16 s) and 300 pixels high (screen resolution: 1,280 × 1,024 pixels, 17 in.). Participants receive direct visual feedback in the graphic interface on the judgments they are making by moving the cursor either upward (to indicate that the music strongly expresses the target GEMS dimension) or downward (to indicate that the music does not express the target GEMS dimension) as time elapses. If the piece being judged lasts longer than 4 min 16 s, the graphics window scrolls automatically. Judgments are sampled every 250 milliseconds. The x-axis represents time and the y-axis the intensity of the emotion expressed by the music (the target GEMS dimension), on a continuous scale marked by three intensity levels: low, medium, and high. The main instruction displayed on screen is "Rate to what extent the music expresses [target dimension]", together with the main items describing the target GEMS dimension (Figure 4). The full descriptions of the GEMS dimensions are given in Appendix A of this manuscript and the general instructions in Appendix B.
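As a minimal sketch of the geometry just described, the following R code converts the interface's pixel coordinates into a time base and an intensity scale; the random-walk trace and all variable names are invented for the demonstration.

    # Interface geometry from the text: 1,000 px wide ~ 4 min 16 s (256 s),
    # 300 px high, judgments sampled every 250 ms
    px_width   <- 1000
    window_dur <- 256                      # seconds in the visible window
    sec_per_px <- window_dur / px_width    # 0.256 s of music per pixel
    dt         <- 0.25                     # judgment sampling period (s)

    # Hypothetical cursor trace (height in pixels) for a 2 min 36 s excerpt
    set.seed(42)
    n    <- ceiling(156 / dt)
    y_px <- pmin(pmax(150 + cumsum(rnorm(n, 0, 3)), 0), 300)  # random walk

    # Attach the time axis and rescale the cursor height to a 0-1 intensity
    judgment <- data.frame(
      time      = seq(0, by = dt, length.out = n),
      intensity = y_px / 300
    )
    head(judgment)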

Figure 4. Example screen of the Flash interface for the dynamic judgment task, here with the target dimension Peacefulness ("Sérénité") and the instruction: "Rate to what extent the music expresses peacefulness: a meditative, relaxed, soothed, and serene style".

Before the experiment began, participants completed a training phase in order to become familiar with the procedure. Each session took place in the presence of an experimenter, who remained in the computer room to assist participants and answer their questions. All the experiments in this thesis were approved by the ethics committee of the University of Geneva, and before each experiment the participants signed a consent form describing the experiment, the data analysis, and the use of the data for publication.

III. Experimental Part

3.1. Study 1

Dynamic Approach to the Study of Emotions Expressed by Music

Kim Thibault de Beauregard, Tamara Ott, Carolina Labbé, and Didier Grandjean

Faculty of Psychology and Educational Sciences and Swiss Center for Affective Sciences, Campus Biotech, University of Geneva, Geneva, Switzerland.

Abstract

Using the theoretical framework of the Geneva Emotional Music Scale (GEMS) model, we developed a dynamic measurement method for the study of emotions expressed by music. This dynamic method allows the investigation of the attribution of emotional characteristics to music in real time. We first performed a pilot study in which we compared static and dynamic emotional judgments, demonstrating that the dynamic method is more effective than a classic global judgment method for several GEMS dimensions. Two further studies allowed a deeper exploration of the structural characteristics of emotional attribution in response to classical music. Study 1 (n = 71) showed that individuals agreed strongly on the GEMS dimensions expressed by music through time. The temporal fluctuations of these GEMS dimensions can be explained by four time-frequency components, based on wavelet decomposition, showing significant differences in the unfolded structure of different emotions. Study 2 (n = 25) demonstrated the complexity and specificity of the GEMS dimensions expressed by music by comparing different emotional judgments on positively or negatively correlated GEMS dimensions. The development of such dynamic emotional judgments is important for a better understanding of the complex relationships between acoustic characteristics, musical structure, and emotion expressed through music.

Keywords: music, dynamic measurement, GEMS dimensions, expression, emotion

The Emotional Power of Music

Nowadays, music is omnipresent in our lives and its emotional power is undeniable, as evidenced by the numerous empirical studies on this topic (Juslin & Sloboda, 2001; 2010, for reviews). However, as pointed out by Juslin and Sloboda (2001), music has mostly been studied in relation to cognition, and scientific interest in the emotional side of music is only recent. Despite this belated interest, the work accomplished on the emotional power of music is impressive (Juslin & Sloboda, 2001). Scherer and Zentner (2001) observe that the research domain of music and emotion suffers from a lack of well-supported concepts, definitions, and measures. The main difficulty in the study of music and emotion relates to the definition of the term emotion: there is currently no real consensus on this definition, making comparisons and an understanding of this phenomenon even more difficult. Most studies on music and emotion propose to investigate this phenomenon in terms of valence and arousal (Chapin, Jantzen, Kelso, Steinberg, & Large, 2010; Vieillard et al., 2008) or in terms of basic emotions (Fritz et al., 2009; Juslin, 2000). However, one might suppose that musical emotions are more complex or subtle and therefore that these approaches might not be the best suited to understanding emotions and feelings related to music. As noted by Scherer (2004), a major problem in studying music and emotion is the tendency to confuse the terms emotion and feeling. By adopting a componential approach to emotional processes and by using the framework of the component process model (Scherer, 2001), we define the concept of emotion as brief episodes, triggered by events or stimuli that are important and relevant for the adaptation and well-being of individuals. An emotional episode is a complex phenomenon characterized by the modification and synchronization of the components of emotion, that is, the cognitive, expressive, autonomic, motivational, and feeling components (Grandjean, Sander, & Scherer, 2008). In this context, Scherer (2004) proposed a distinction between utilitarian emotions and aesthetic emotions. The former are emotions that humans can experience in everyday life,

such as feelings of anger, fear, joy, and sadness. These emotions and feelings have been characterized as utilitarian because this type of emotion has major functions in the adaptation and adjustment of individuals to events that have important consequences for their well-being (Scherer, 2004, p. 241). Regarding aesthetic emotions, the author suggested that the events that directly elicit them involve no appraisals of goal relevance or coping potential. Aesthetic emotions might therefore not be governed by the same vital functions as utilitarian emotions, such as bodily needs or current goals, even if one can argue that the social aspects of musical experiences, for example the social sharing of feelings that increases the social coherence of a group, might be an essential element and an important implicit goal for individuals during the experience of emotions in musical contexts. An important point to stress here is that the emotional processes induced by music are complex; for example, Juslin, Liljeström, Västfjäll, and Lundqvist (2010) proposed a series of possibly relevant mechanisms involved in emotion induced by music that might be studied systematically. It is also necessary to distinguish the emotional process from its representation in an explicit and verbally reported feeling, which can also be subject to systematic investigation. In this context, Zentner, Grandjean, and Scherer (2008) performed a set of experiments that enabled them, using exploratory and confirmatory factor analyses, to propose a model of the most relevant emotional terms organized in feeling dimensions related to music. These studies gave rise to a nine-factor model of feelings induced by music: the Geneva Emotional Music Scale (GEMS). The GEMS includes the dimensions of Wonder, Transcendence, Tenderness, Nostalgia, Peacefulness, Power, Joyful Activation, Tension, and Sadness. This model currently represents the most effective attempt to study the specific categories of feelings related to music and therefore seems to constitute the best framework for understanding the emotional power of music.

Emotions Felt and Attribution of Emotional Characteristics

A major distinction in the research domain of music, emotion, and feelings is the difference between the perception of emotion and the induction of a feeling related to music (Scherer & Zentner, 2001). The attribution of emotional qualities to music is a complex process that allows humans to represent and explicitly report emotions expressed through music, whereas the induction of emotions is the process of experiencing emotions, that is, feelings, as a result of listening to music. In the present series of studies, we use the term emotion for the perception of emotional tone or its relation to other characteristics of the acoustic and musical structure expressed through music, whereas we use the term feeling for the representation of the emotional reactions induced by music listening. This important distinction between emotion expressed through music and feelings induced by music leads to different questions and topics. For instance, the BRECVEM model (Juslin et al., 2010) proposes a theoretical framework to better understand the emotional response to music and suggests that music can induce emotions and related feelings via seven mechanisms that are not unique to music: (1) brain stem reflexes, (2) rhythmic entrainment, (3) evaluative conditioning, (4) emotional contagion, (5) visual imagery, (6) episodic memory, and (7) musical expectancy. These mechanisms are not mutually exclusive and can be understood as complementary in inducing emotional responses to music. Juslin et al. (2010) observed that some of the appraisals that have been suggested to be related to emotions in the utilitarian view of emotions (see Grandjean, Sander & Scherer, 2008; Scherer, 2001) are also thought to be important for musical emotions, especially evaluative conditioning and musical expectancies. Regarding the emotions expressed by music, several studies have shown that individuals are able to attribute different kinds of emotions to music. For instance, Fritz and colleagues (2009) demonstrated the universal recognition of joy, sadness, and

fear expressed by different musical excerpts from Western classical music in an African population unfamiliar with this type of music. Curtis and Bharucha (2010) showed the importance of the perception of pitch in decoding emotions expressed by music. Moreover, Juslin (1997) applied the Lens model proposed by Brunswik (1955) to investigate emotional communication between performer and listener in the context of musical performance, and provided evidence that specific emotions are attributed to music on the basis of certain acoustic cues. Indeed, if listeners agree about which emotions are expressed by music, their judgments are thought to be influenced by the information inherent in the music and by the perception and representations of acoustic signals or musical structure (Juslin & Lindström, 2010). Regarding the distinction between felt emotions, that is, feelings, and emotions expressed by music, it has also been argued that the correlation between the two is not necessarily positive (Evans & Schubert, 2008). Indeed, a piece of music that expresses sadness or melancholy could be listened to in order to provide a feeling of nostalgia and help calm the listener down (Labbé, Schmidt, Babin & Pharr, 2007). The recognition of the emotion conveyed through music is more closely related to objective processes, such as the perception and recognition of emotion in the voice, than are the feelings induced by music. Indeed, it seems easier to agree on the emotions expressed by music than on felt emotions (Campbell, 1942, cited in Schubert, 2004); there is effectively high reliability among people regarding the emotions expressed by music (Fabian & Schubert, 2003; Fritz et al., 2009; Gabrielsson & Juslin, 2003; Hevner, 1935, 1936), whereas felt emotions relate to subtle, subjective, and intimate processes, as demonstrated by Gabrielsson's (2001) study of strong experiences with music and by studies of music preferences (Rentfrow & Gosling, 2003; Rentfrow & McDonald, 2010). Like speech, music possesses a structure and is governed by rules (Scherer & Zentner, 2001). Rousseau observed a link between voice and music, particularly in terms of melodic aspects: "The

melody, imitating the voice inflections, expresses the complaints, the cries of pain or joy, the threats, the wailings; all vocal signs of passion are possible with the melody" (Rousseau). A meta-analysis by Schirmer, Fox, and Grandjean (2012), covering 297 brain-imaging studies, compared the brain processes related to the perception of the voice and of music. It showed the existence of common brain processes in the temporal lobes, with a large overlap of brain activations for music and voice; only three small clusters survived the statistical analysis in the left hemisphere, including Brodmann areas 41 and 42 and the middle temporal gyrus. A study by Escoffier, Zhong, Schirmer, and Qiu (2013) also highlights the links between the perception of music and the perception of the voice at the brain level, demonstrating, for instance, that regions of the superior temporal gyri are activated in the brain processing of both music and voice. Several assumptions have been made to explain such links, for example the probable phylogenetic origins of vocal emotions, or the idea that the melodic and spectral aspects of the emotional voice could be at the basis of the notion of musical emotionality (Grandjean & Baenziger, 2009). However, unlike language, music has no clear segmental elements such as words, which have an almost fixed meaning. Musical elements are essentially ambiguous and can have different meanings in different contexts (Gabrielsson & Juslin, 2003). Music appears far more complex, incorporating disparate elements organized in different layers, which themselves entertain complex relationships that change over time. Therefore, within the same musical excerpt, it is possible to distinguish both specificity and complexity in the emotional message conveyed by the musical structure and its expressivity. Gabrielsson and Lindström (2010) highlight the role of the musical structure in emotional expression and in the process of recognition and attribution; for example, variations of tonality/mode, rhythm, and melody lead to different emotional attributions. Indeed, evidence regarding the diversity of emotions present in the same

piece of music is plentiful. Livingstone, Muhlberger, Brown, and Thompson (2010) demonstrated that by varying some basic musical elements, such as pitch, loudness, or articulation, it is possible to change the emotion initially conveyed as written by the composer. By changing the tempo (slow/fast) and the mode (major/minor) of a piece of music, Hunter, Schellenberg, and Schimmack (2010) demonstrated the specificity and complexity of emotion in music for two emotions: sadness and happiness. In their experiment, some musical excerpts had consistent happy cues (i.e., fast tempo and major mode), some had consistent sad cues (i.e., slow tempo and minor mode), and others had conflicting affective cues (i.e., fast/minor and slow/major). An important finding of this experiment was that mixed happy and sad responses increased after participants listened to music with conflicting as opposed to consistent cues. This finding illustrates how complex the expression of emotion in music can be. Although individuals are capable of recognizing emotions expressed by music and can agree on a specific emotion, there is also confusion in the precise recognition of emotions in music, especially for dimensions such as pain, peacefulness, tenderness, and nostalgia (Gabrielsson & Juslin, 2003). If one observes the correlation matrix of the GEMS dimensions (Appendix A in Supplemental Materials), the factorial organization of the emotions linked to music across different scales (three dimensions and nine dimensions) illustrates both the distinction and the overlap between musical emotions, namely, the complexity and the specificity of these emotions. For instance, an important overlap occurs for certain dimensions (e.g., Nostalgia and Tenderness, Transcendence and Wonder), along with clear distinctions (Tension and Peacefulness, Power and Sadness). The specificity and complexity might be even more obvious with the use of continuous measurements, because different emotions are expressed in the temporal unfolding of a piece of music.

Dynamic Aspect of Music

The measures traditionally used to investigate emotional responses to music are Likert scales, adjective checklists, and free reports (Gabrielsson & Juslin, 2003; Zentner & Eerola, 2010). Although these methods are the most frequently used, the answers given by listeners are often delayed, meaning that the listeners judge emotions after listening to the musical excerpts. More importantly, these methods are static and therefore unable to account for the dynamic aspects of music and emotion. Such judgments are thought to be the result of a process that integrates the emotions judged or felt while listening to the musical excerpt. However, a key feature of music is that it unfolds over time, as does emotional processing (Verduyn, Van Mechelen, Tuerlinckx, Meers & Van Coillie, 2009). Taking into account the dynamic aspect of emotional experience in music is crucial, because individuals often refer to a particular moment in a piece of music to describe the presence of emotional content (Vines, Nuzzo, & Levitin, 2005). In order to effectively apprehend the emotions expressed by music, it is preferable, and probably essential, to base judgments on continuous measurements in time. The works of Nielsen (Madsen & Fredrickson, 1993) and Schubert (2001, 2004) were among the first to take this characteristic of time into account and to use continuous measurements. This method allows experimenters to record judgments of the emotions expressed by music in real time and then to follow the changes of perception and attribution over time. On the basis of the concept of tension and release in music, Vines et al. (2005) proposed an interesting parallel between musical and physical dynamics. The authors describe music as a series of changes over time with three main characteristics: energy, velocity, and acceleration. They insist on the fact that a musical excerpt contains different moments, with increases, decreases, and peaks. Asking individuals to rate their own perception of musical tension in a piece of music by using a computer interface, Vines and collaborators (2005) described the

structure of the piece of music in terms of periodicity. Thanks to this kind of continuous measurement, it is possible to observe the temporal unfolding of emotion in music by cutting out different time windows. More recently, focusing on the emotion felt by listeners, Coutinho and Cangelosi (2011) argued that the structure of the affect elicited by music depends on dynamic temporal patterns in low-level structural parameters of the music. Using the dimensions of valence and arousal over time, these researchers identified six psychoacoustic features that predict listeners' reported emotions in time (loudness, pitch level, pitch contour, tempo, texture, and sharpness). In this context, we propose an approach called dynamic judgment in order to capture the time course of the emotions expressed by music. As noted earlier, some emotions related to music are thought to be quite specific, and an appropriate set of categories is needed to better understand how humans are able to attribute emotions to music. Because the concepts of tension and release, or of valence and arousal, are restrictive, we based our approach on a more sophisticated model of emotion elicited by music. The GEMS model proposed by Zentner, Grandjean, and Scherer (2008) concerns emotions induced by music and currently represents the most effective attempt to study specific feelings related to music. For this reason, we used it to investigate the emotions expressed by music. Our main interest is in understanding the dynamic aspect of the emotions expressed by music using this model. In the current GEMS model, only static and global aspects differentiate the emotions. However, the dynamic aspect is fundamental, and its measurement allows investigation of the complexity and specificity of the emotions expressed by music, which is not the case with global and delayed judgments. Indeed, thanks to this method of dynamic judgment, it is possible to explore the temporal pattern of specific emotions. Regarding specificity, one might expect the dimensions Peacefulness and Tension to show completely different patterns of dynamic judgments for the same musical excerpt, or even opposing emotional

56 patterns. In contrast, one might expect the dimensions Nostalgia and Tenderness, or Power and Transcendence, to present similar emotional patterns through time for the same piece of music because they overlap (see Appendix A in Supplemental Materials). However, one can argue that although some musical emotions are highly correlated, some parts of specific musical pieces would be recognized as being more intense when judging, for example, Transcendence than when judging Power, indicating subtle differences at specific times for these emotions expressed through music. The three main purposes of the studies reported here are therefore (a) to capture dynamic emotional judgments while participants listen to musical excerpts by using continuous measurement to study the reliability of such a method, (b) to investigate the temporal structure of the GEMS dimensions on the basis of dynamic emotional judgments, and (c) to investigate the specificity and complexity of emotions expressed by music by using dynamic emotional judgments. This new dynamic approach allows one to understand the unfolding of musical emotions by asking for an immediate response from the listeners and therefore provides important additional information about the enigmatic emotional phenomenon related to music. To achieve these goals, we first performed a pilot study to test how dynamic judgments might be different from static and delayed judgments for the same musical pieces. We then performed two studies to investigate the dynamic pattern of emotion by investigating the specificity and complexity of emotion expressed through music Pilot Study As discussed earlier, we decided to use a new method of measurement, namely, dynamic emotional judgment. Indeed, we argue that this type of measurement is more effective than global and delayed judgments because it offers more information about emotional decoding. 55

Method

The method of dynamic judgment that we used was developed with a Flash interface, allowing us to record the dynamic judgments in real time. During the judgments, participants used a graphic interface to judge the intensity of one specific emotion through time (e.g., Nostalgia). The width of the graph was 1,000 pixels (corresponding to a duration of 4 min 16 s) and the height was 300 pixels (screen: 1,280 × 1,024 pixels, 17 in.). Participants had direct visual feedback of the judgments they were making in the graphic interface by moving a computer mouse up and down as time advanced automatically (if necessary, the graphics window scrolled). Measurements were made every 250 ms. The x-axis represented time, while the y-axis represented the intensity of the emotion expressed by music (e.g., Peacefulness) on a continuous scale marked by three levels of intensity: low, medium, and high. The main instruction was "Rate to what extent the music expresses [dimension of interest]", including the main items describing the dimensions (Figure 1).

Figure 1. Screenshot of the dynamic Flash interface (in French) during the dynamic judgment task, here with the dimension Peacefulness and the instruction: "Rate to what extent the music is expressing: peacefulness, a calm, relaxed, serene, soothed, and meditative style."

Before beginning the experiment, the participants completed a training trial to become familiar with the procedure. To test whether this type of dynamic measurement is more efficient and provides more information than global measurement, we conducted a pilot study. We collaborated with the famous French violinist Renaud Capuçon, with whom we established a list of nine musical excerpts corresponding to the nine GEMS dimensions. We asked him to play these nine excerpts in three different musical styles: deadpan, concert-like (his natural way of playing), and emphatic. We then created two experiments. In the first experiment, a group of 44 people evaluated the nine musical excerpts dynamically on one GEMS dimension, and then evaluated the general expressivity of the same excerpts globally, using a slider from 0 (not expressive) to 100 (very expressive). In the second experiment, a group of 45 people dynamically evaluated the musical expressivity of the excerpts, and then evaluated the nine GEMS dimensions expressed by the music globally, using a slider from 0 (not at all) to 100 (entirely). The details of the musical excerpts are given in Appendix B (Supplemental Materials). To compare these two types of measurement, we applied a generalized linear mixed model (GLMM) for the statistical analyses of the excerpts played as in a concert (concert-like). As highlighted by McCulloch (2000): "The whole idea behind GLMMs is the development of a strategy and philosophy for approaching statistical problems, especially those involving non-normally distributed data, in a way that retains much of the simplicity of linear models" (p. 1324). Indeed, the GLMM allows a flexible generalization of linear regression in which the response variables may have error distributions other than the normal distribution. Furthermore, it allows one to specify random factors (e.g., participants and trials), thereby avoiding the problem of averaging the data within participants (averaging is often based on non-normal distributions).
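A hedged sketch of such a model in R's lme4 is given below. The long-format data frame d and its column names (rating, judgment_type, emotion, participant, excerpt) are assumptions for illustration, not the authors' actual code.

    library(lme4)

    # Crossed random intercepts for participants and excerpts keep all trials
    # in the model instead of averaging within participants (see above)
    m_main <- lmer(rating ~ judgment_type + emotion +
                     (1 | participant) + (1 | excerpt),
                   data = d, REML = FALSE)   # ML fit, needed for comparison
    m_int  <- lmer(rating ~ judgment_type * emotion +
                     (1 | participant) + (1 | excerpt),
                   data = d, REML = FALSE)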

Results

We used GLMMs (using the lmer function in R) to test the main effects and the interaction effect, specified as follows: type of judgment (averaged dynamic or slider) and emotion (nine GEMS dimensions and one dimension of expressivity), with two random factors, participants and musical excerpts (a total of 3,120 trials). The comparison of the different models (log likelihood, deviance, analysis of variance [ANOVA]), including main effects and the interaction effect, revealed a main effect of type of judgment, χ²(1) = 27.79, p < .0001, R²m = .14, R²c = .30, a main effect of emotion, χ²(9), p < .0001, R²m = .18, R²c = .34, and a significant interaction effect, χ²(9), p < .0001, R²m = .20, R²c = .35. The contrast analysis for each emotion as a function of type of judgment revealed significant differences, with higher values for dynamic than for static judgments, for the dimensions Joyful Activation, χ²(1) = 16.34, p < .0001; Nostalgia, χ²(1) = 19.79, p < .0001; Peacefulness, χ²(1) = 15.75, p < .0001; Tenderness, χ²(1) = 6.71, p = .0096; Tension, χ²(1) = 4.49, p = .034; and Wonder, χ²(1) = 4.5, p = .034. The other contrasts were not significant (ps > .1; Appendix C, Figure C1, in Supplemental Materials). Note that the residuals were normally distributed. One can argue that the participants involved in the dynamic judgments had to evaluate only one dimension, whereas during the static judgments participants also had to judge the other GEMS dimensions, which could introduce a potential bias into our first analysis. To control for this bias, we selected, for the slider judgments, only the same dimension for each excerpt (and not all the other, potentially non-relevant sliders). For this second analysis, we also used a classic GLMM with the same factors as in the first analysis (1,888 trials).
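Continuing the previous sketch, the model comparisons and contrasts reported above could be run as follows; r.squaredGLMM from the MuMIn package returns the marginal and conditional R² (the R²m and R²c in the text). This remains an illustration under the same assumed data frame.

    library(MuMIn)

    anova(m_main, m_int)   # likelihood-ratio chi-square for the interaction
    r.squaredGLMM(m_int)   # R2m (fixed effects) and R2c (fixed + random)

    # Example contrast: type of judgment within the Nostalgia trials only
    d_nost <- subset(d, emotion == "Nostalgia")
    m0 <- lmer(rating ~ 1 + (1 | participant) + (1 | excerpt),
               data = d_nost, REML = FALSE)
    m1 <- lmer(rating ~ judgment_type + (1 | participant) + (1 | excerpt),
               data = d_nost, REML = FALSE)
    anova(m0, m1)          # chi-square reported per emotion in the text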

Using the same method of comparison between models (log likelihood, deviance, ANOVA), this analysis indicated a significant main effect of emotion, χ²(9) = 175.3, p < .0001, R²m = .07, R²c = .43, no main effect of type of judgment, χ²(1) = 2.04, p = .15, and a significant interaction effect, χ²(9) = 48.98, p < .0001, R²m = .09, R²c = .44. Note that the residuals were normally distributed. The contrast analysis for each emotion as a function of type of judgment revealed significantly higher values, corrected for multiple comparisons, for dynamic compared with static judgments for Nostalgia, χ²(1) = 5.96, p = .015; Tenderness, χ²(1) = 10.58, p < .005; and Wonder, χ²(1) = 8.24, p < .005, and marginally higher values for Transcendence, χ²(1) = 3.47, p = .06. Power, χ²(1) = 7.19, p = .0073, and Sadness, χ²(1) = 5.32, p = .021, showed higher values for the static than for the dynamic judgments (see Appendix C, Figure C2, in Supplemental Materials).

Discussion

We conducted this pilot study to test whether the method of dynamic emotional judgment is more efficient and provides more information than the traditional measure of global and delayed assessment. Our first aim was therefore to investigate the time effect during the attribution of emotional characteristics to the music, for which we performed a series of statistical analyses (GLMM) on our data. Two important findings can be underlined: (a) we were able to measure different kinds of expressed emotions (i.e., the GEMS dimensions), as demonstrated by the significant effect of the emotion factor; and (b) we found a significant interaction effect between emotion and type of judgment. These findings raise a question about the mechanism of temporal integration for the different GEMS dimensions judged by participants. Static judgments ask participants to integrate information through the unfolding of the music and then to focus their attention on a built representation of the entire piece by using their short-term memory, setting aside their

sustained attention to the allocation of emotional characteristics expressed by the music. The reflective process is thus delayed and global, and the participants have to make a judgment on several dimensions. This type of judgment draws on short-term memory skills and leans on the general musical character of the excerpt as extracted and integrated by the listeners. A significant amount of information about the specificity and complexity of the emotional attribution process is therefore lost. In contrast, dynamic judgments are immediate and do not rely on short-term memory and integrative processing; they may therefore be more efficient in capturing the emotion expressed through music. Indeed, we observed higher emotional judgments, consistent with our predictions, for Joyful Activation, Nostalgia, Peacefulness, Tenderness, Tension, and Wonder. However, the fact that participants had to judge several dimensions in the static condition might have introduced a potential bias into our measures; for example, music expressing sadness would be judged very low on the Joyful Activation scale, which would then affect the averaged judgment of this dimension for the static judgments. To test for such a bias, we selected only the observations performed on the relevant scales in the static condition, that is, ignoring the values judged on non-relevant scales for the different GEMS dimensions. This second analysis confirmed that, compared with static judgments, dynamic judgments were higher for Nostalgia, Tenderness, and Wonder. Surprisingly, and in contrast to our predictions, static emotional judgments were higher for the Sadness and Power dimensions. This finding may be explained by the fact that, for example, the Power dimension is characterized globally by clear and dominant parameters of the musical structure. As pointed out by Vines et al. (2005), a piece of music contains different moments, with increases, decreases, and peaks. One can imagine that the Power dimension is characterized globally by well-defined acoustic parameters and elements of musical structure: high volume, fast tempo, important contrasts in the

use of pitch, and high intensity. The sound and musical combinations that express Power stand out precisely against the quieter musical moments. Nevertheless, participants retain the powerful character of the piece and are biased by it when they have to evaluate the musical excerpt globally, weighting the bursts related to Power as more important, whereas in dynamic judgments the quieter moments have more impact on the emotional judgment. Judgments of Sadness might be exposed to similar biases. The results of this pilot study highlight that the method of dynamic judgment provides additional information in the study of the emotional phenomena expressed through music, especially for the Nostalgia, Tenderness, and Wonder dimensions. Our results demonstrate that these two ways of judging emotion in music interact significantly with the GEMS dimensions, suggesting that the phenomenon is probably not restricted to these dimensions; it might be the same for arousal and valence, for example. Taken together, these preliminary findings highlight the importance of taking the temporal dimension into account to understand how people attribute emotional characteristics to music. In the following sections, we investigate the temporal specificities and complexities of emotions in music by using dynamic judgments on the GEMS dimensions.

Study 1

We conducted the first study to investigate the dynamics of emotional judgments of the emotions expressed by music through time. More specifically, we wanted to test, on a given dimension based on the GEMS model, (a) the extent to which participants agreed on the emotion expressed by music through time and (b) the dynamic differences in the temporal structure of the GEMS dimensions.

Method

Participants. Seventy-one undergraduate students (eight men) from the University of Geneva took part in this experiment for course credit (none of these individuals were involved in the pilot study). Participants were aged between 18 and 36 years. This study was accepted by the local ethics committee of the University of Geneva, and before beginning the experiment all participants filled out a written consent form describing the experiment, the data processing, and the use of the data for publication.

Materials and procedure. On the basis of our musical expertise and our knowledge of the GEMS dimensions, we chose a series of musical excerpts that corresponded more or less to the nine GEMS dimensions. We had 36 musical excerpts in total, that is, four excerpts per dimension (see Appendix D, Supplemental Materials, for details). The mean duration of the excerpts was 2 min 36 s (range 2 min 21 s to 3 min 18 s). We used the same Flash interface for the dynamic measurement as in the pilot study (see the Pilot Study section and Figure 1 for a complete description). A description of the GEMS dimensions was provided before the beginning of the judgments (see Appendix E, Supplemental Materials). Eight orders of the musical excerpts were chosen (pseudo-randomization, i.e., never the same dimension or composer twice in a row; a sketch of this constraint follows below) in order to avoid biases due to the order of the pieces or composers. Each participant judged nine excerpts, one per GEMS dimension, and performed the task in groups of 2 to 10 people across the different sessions. The sessions took place in a computer room at the University of Geneva, and headphones (Sennheiser model HD 201) were used for the listening part of the task.
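The order constraint can be implemented, for example, by rejection sampling; the sketch below uses invented excerpt metadata and is not the procedure actually used to build the eight orders.

    # Shuffle excerpts so that two consecutive excerpts never share a GEMS
    # dimension or a composer (metadata invented for the demo)
    set.seed(1)
    excerpts <- data.frame(
      id        = 1:9,
      dimension = c("Wonder", "Power", "Sadness", "Tension",
                    "Joyful Activation", "Nostalgia", "Peacefulness",
                    "Tenderness", "Transcendence"),
      composer  = c("Dvorak", "Mahler", "Tchaikovsky", "Dvorak", "Mozart",
                    "Brahms", "Tchaikovsky", "Faure", "Mahler")
    )

    valid_order <- function(df) {
      all(head(df$dimension, -1) != tail(df$dimension, -1)) &&
        all(head(df$composer, -1) != tail(df$composer, -1))
    }

    repeat {                              # redraw until the constraint holds
      cand <- excerpts[sample(nrow(excerpts)), ]
      if (valid_order(cand)) break
    }
    cand$id                               # one admissible presentation order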

Results

To estimate the reliability of the measure, we computed Cronbach's alpha for all excerpts and GEMS dimensions across participants. Cronbach's alpha ranged from .84 to .98 (Figure 2).

Figure 2. Cronbach's alpha for the 36 musical excerpts of the first study and the Geneva Emotional Music Scale (GEMS) dimensions.

The highest Cronbach's alpha lies on the dimension Power for the 4th movement of the New World Symphony by A. Dvorak (.98), and the lowest on the dimension Tenderness for the 2nd movement of the Symphony No. 6, Pathétique, by P. I. Tchaikovsky (.84). Overall, these results mean that people agree strongly on the emotion expressed by music, because even the lowest Cronbach's alpha (.84) remains satisfactory. Figure 3 illustrates this strong agreement between participants regarding the emotions expressed by music. The lines represent the normalized participants' judgments; the red line represents the average. The y-axis represents the intensity of the emotion expressed by music (here, Power) and the x-axis, time. We observe that the emotional judgment of this musical excerpt rises rapidly in Power, then drops slightly, rises again, and finally drops completely. The expression of emotion in this musical excerpt thus does not follow a straight line through time: the temporal unfolding of emotion is complex.
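The reliability computation can be sketched as follows: for one excerpt and one GEMS dimension, participants are treated as the "items" of the psych package's alpha() function. The toy data stand in for real judgment traces.

    library(psych)

    # Toy stand-in for real data: 563 time frames x 18 participants, all
    # following a shared trend plus individual noise
    set.seed(2)
    trend     <- sin(seq(0, 6 * pi, length.out = 563))
    judgments <- replicate(18, trend + rnorm(563, sd = 0.4))

    # Cronbach's alpha across participants for this excerpt/dimension
    alpha(as.data.frame(judgments))$total$raw_alpha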

All averaged values of the dynamic judgments for each musical excerpt are reported in Appendix F (Supplemental Materials).

Figure 3. Individual and averaged (thick red line) z-scores (N = 18) for the dynamic emotional judgment of Power for the 4th movement of the New World Symphony by Dvorak (duration: 2 min, 80 s).

To evaluate the specificities of the GEMS temporal patterns, that is, the specificities of the temporal structure of the GEMS dimensions, we analyzed the data averaged across participants (normalized to 563 time frames, i.e., the minimum number of time frames across all 36 musical excerpts) using two methods. First, we fitted a generalized additive mixed model (GAMM; McKeown & Sneddon, 2014), and second, we performed a wavelet analysis of the dynamic judgments to investigate the specific temporal fluctuations across the GEMS dimensions. The purpose of the GAMM analysis (using the gamm function in R, with an autoregressive model, corAR1; see McKeown & Sneddon, 2014) was to test (a) how time explains the unfolding of emotional judgments and a significant part of the variance across the GEMS dimensions, and (b) how emotion and time interact for the different GEMS dimensions. In other words, this kind of model allowed us to test how time and emotional

66 judgments are differentially related for the different GEMS dimensions, with musical excerpts (N = 36, four per GEMS dimension) as a random factor. This analysis revealed a significant effect of smoothness terms for time (edf = 8.94, F = 345.5, p <.0001, R 2 =.58) and a significant effect of time in the linear mixed-effects model (number of observations = 20,268; t(20231) = 36.71, p <.0001). The comparison of the two GAMM models, the first with only time and the second with time and emotion (nine GEMS dimensions), revealed a significant interaction effect for time and emotion (L.ratio = 62.39, p <.0001). The GAMM analysis revealed that all smooth terms for the nine GEMS dimensions are significant (all ps <.0001, R 2 =.68; see an example in Figure 4). This analysis revealed that the unfolding of emotional judgments is significantly different for the different GEMS dimensions (see Appendix G, Supplemental Materials, for all results). Averaged intensity of emotional judgment for Joyful Activation (z-scores) Time Figure 4. Dynamic pattern of Joyful Activation dimension from the generalized additive mixed model analysis. Black line: averaged judgment; grey zone: 95% confidence interval. 65
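The GAMM just described can be sketched with the mgcv and nlme packages. This is a minimal reconstruction from the information given above, not the authors' script; the data frame dat and its column names (judgment, time, emotion, excerpt) are assumptions.

    library(mgcv)   # gamm()
    library(nlme)   # corAR1()

    # Model with time only: one smooth of time, excerpts as a random factor,
    # AR(1) errors within excerpt (time = integer frame index, 1..563).
    m_time <- gamm(judgment ~ s(time),
                   random      = list(excerpt = ~ 1),
                   correlation = corAR1(form = ~ time | excerpt),
                   data        = dat)

    # Model with time and emotion: one smooth of time per GEMS dimension,
    # testing whether the unfolding differs across the nine dimensions.
    m_emo <- gamm(judgment ~ emotion + s(time, by = emotion),
                  random      = list(excerpt = ~ 1),
                  correlation = corAR1(form = ~ time | excerpt),
                  data        = dat)

    # Likelihood-ratio comparison of the two models (cf. the L.ratio above);
    # for a strict fixed-effects comparison, both fits should use ML.
    anova(m_time$lme, m_emo$lme)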

The main goal of the second analysis, that is, the time-frequency analysis, was to investigate in more detail the organization of the unfolding of the GEMS dimensions. To achieve this, we used continuous wavelet decomposition to characterize the contribution of the main frequencies of the judgment fluctuations to the GEMS dimensions. After the wavelet decomposition, we tested the variance explained by the different selected frequency components for the different dynamic GEMS judgments by using a GLMM. The wavelet analysis (Morlet wavelet) was performed using a custom-made procedure under Matlab (version 2011B), with a high-pass filter fixed at .001 Hz, a low-pass filter at .2 Hz, and a frequency step fixed at .0005 Hz, given the observation of the temporal fluctuations in our time series samples (from 5 s/cycle [s/c] to 1,000 s/c). For each trial and each participant, we performed this wavelet analysis and extracted the modulus of each coefficient as a time series. To reduce the dimensionality of the extracted coefficients, we averaged them over time and in steps of 50 coefficients, resulting in eight coefficient classes that were submitted to an exploratory principal component analysis (PCA) in order to select the most informative and least correlated coefficients. The PCA revealed, through the scree plot, that four dimensions explained 97.9% of the variance (first component: 76.05%; second: 11.21%; third: 7.56%; fourth: 3.05%). The coefficient classes loading most strongly on each PCA dimension were then selected for the GLMM analysis, from low to high frequencies: the first series of coefficients from 9.95 to s/c, the second series from 7.97 to 9.90 s/c, the third series from 5.70 to 6.62 s/c, and the fourth from 5.00 to 5.68 s/c. We also removed the first 50 time frames of each trial to avoid high-frequency artefacts related to the beginning of the dynamic judgments, and we normalized the distribution by using a log transformation. On the basis of our hypothesis, we first tested a model with the main effects of the fixed factors emotion and coefficient against a model with the addition of the interaction between the two fixed factors. This analysis revealed, as predicted, a significant difference in the variance explained between these two models, χ²(24) = , p < .00001; for the model with the interaction, R²m = .81, R²c = .82. The contrast analyses for each coefficient were conducted separately and corrected for multiple comparisons by using a Bonferroni correction (corrected p value = .05/4 = .0125; Figure 5, Table 1; see also the time-frequency maps for each GEMS dimension in Appendix H, Supplemental Materials).

Figure 5. Averaged modulus values (log) obtained by wavelet decomposition for each Geneva Emotional Music Scale (GEMS) judged dimension. (A) For the slower component, from 9.95 to seconds per cycle. (B) From 7.97 to 9.90 seconds per cycle. (C) From 5.70 to 6.62 seconds per cycle. (D) For the higher-frequency component, from 5.00 to 5.68 seconds per cycle. Tender. = Tenderness; Transc. = Transcendence.
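The Morlet decomposition can be sketched compactly in R from the parameters reported above (.001 to .2 Hz in steps of .0005 Hz). This is not the Matlab routine used in the study, only a compact illustration; the sampling rate and the input series in the usage lines are assumptions.

    # Minimal Morlet continuous wavelet transform; returns the modulus
    # (amplitude) of the coefficients, rows = frequencies, columns = time.
    morlet_modulus <- function(x, fs, freqs, w0 = 6) {
      n <- length(x)
      t(sapply(freqs, function(f) {
        s   <- w0 / (2 * pi * f)                  # scale for centre frequency f
        tau <- seq(-4 * s, 4 * s, by = 1 / fs)    # wavelet support
        psi <- exp(1i * w0 * tau / s) * exp(-tau^2 / (2 * s^2)) / sqrt(s)
        co  <- convolve(x, psi, type = "open")    # FFT-based correlation with psi
        Mod(co[floor(length(psi) / 2) + seq_len(n)])  # keep the central n samples
      }))
    }

    freqs <- seq(0.001, 0.2, by = 0.0005)   # 1,000 s/c down to 5 s/c
    # W <- morlet_modulus(judgment_ts, fs = 4, freqs)  # fs is hypothetical here
    # Averaging the moduli over time and over blocks of 50 frequency rows
    # yields the eight coefficient classes, which can then be screened with
    # prcomp() (the exploratory PCA reported above).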

Table 1
Summary of the Corrected and Uncorrected Significant Differences (p < .0125) Across the GEMS Dimensions Within Each Class of the Four Wavelet Coefficient Series

Note. Coefficients are ordered from low to high frequencies. Coefficient class 1: 9.95 to s/cycle (s/c); Coefficient class 2: 7.97 to 9.90 s/c; Coefficient class 3: 5.70 to 6.62 s/c; Coefficient class 4: 5.00 to 5.68 s/c. χ² values are reported (all dfs = 1). GEMS = Geneva Emotional Music Scale.

The unfolding structure of the emotional judgment of Joyful Activation was significantly different from that of all other GEMS emotions, with higher modulus amplitude, especially for the low frequencies (from 8 to 13 s/c), whereas the amount of energy was lowest in the highest frequency component. Nostalgia was significantly different from Sadness, Peacefulness, and Transcendence on the first two frequency components, revealing that even for dimensions saturating on the same second-order level, that is, Sublimity (Zentner et al., 2008), the temporal structure differed. The same was true for Power and Joyful Activation, which saturate on the Vitality second-order level, for three components (the first, second, and highest frequencies), as well as for Tension and Sadness, which saturate on Unease, for the first three frequency components. Power was well characterized by the highest frequency component, being significantly different from all GEMS dimensions except Wonder and Transcendence. Overall, this frequency analysis revealed many significant differences among the GEMS dimensions in terms of the unfolding structure of emotional judgments.
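The GLMM comparison reported above can be sketched with lme4; again, this is a reconstruction under assumptions (the data frame wav and its columns log_modulus, emotion, coefclass, participant, and excerpt are hypothetical), not the original analysis script.

    library(lme4)

    # Main-effects model versus model adding the emotion x coefficient-class
    # interaction; ML fits (REML = FALSE) so the chi-square comparison is valid.
    m_main <- lmer(log_modulus ~ emotion + coefclass +
                     (1 | participant) + (1 | excerpt),
                   data = wav, REML = FALSE)
    m_int  <- lmer(log_modulus ~ emotion * coefclass +
                     (1 | participant) + (1 | excerpt),
                   data = wav, REML = FALSE)
    anova(m_main, m_int)  # (9 - 1) * (4 - 1) = 24 df, as in the reported test

    # MuMIn::r.squaredGLMM(m_int) would give marginal/conditional R2 (R2m, R2c).

    alpha_corr <- 0.05 / 4  # Bonferroni-corrected threshold = .0125, as above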

Discussion

The two aims of our first study were (a) to observe, using the method of dynamic judgments, whether individuals were able to agree on the emotional characteristics expressed through music; and (b) to investigate the differences in the temporal structure of the GEMS dimensions. The results for our first objective were striking: regardless of the GEMS dimension, Cronbach's alpha indices were extremely high, ranging from .84 for the Tenderness dimension to .98 for the Power dimension. The strength of these interrater agreements demonstrates that individuals are able to agree on the emotional characteristics expressed through music. For our second objective, we relied on two statistical analyses to investigate the temporal structure of the GEMS dimensions. First, the GAMM analyses showed that the time course of the dynamic emotional judgments was different for each GEMS dimension, highlighting the importance of the time factor in understanding the attribution of emotional characteristics to music. A piece of music can be evaluated globally, but the method of dynamic judgments allows us to highlight the temporal subtlety of aesthetic emotions. Second, we applied a wavelet analysis to investigate in more detail the organization of the temporal fluctuations of the GEMS dimensions. These analyses showed that the temporal structure of the GEMS dimensions, summarized by four classes of coefficients according to the distribution of energy across frequencies, differs for several GEMS emotions. The third class of coefficients, characterized by the presence of high frequencies, clearly distinguished the Power dimension, whereas the first two classes of coefficients clearly distinguished the Joyful Activation dimension from all other GEMS dimensions. Taken together, these results support the usefulness and effectiveness of our method of dynamic judgments for better understanding the temporal structure of the attribution of emotions expressed through music, demonstrating that the process is indeed complex and is organized differently for the different GEMS dimensions. We conducted a second study to investigate the relationships, complexity, and specificity between different GEMS dimensions for the same musical excerpt.

Study 2

As shown in Study 1, the method of dynamic judgment allowed more specific emotions that present a high overlap to be differentiated. The main questions of interest in Study 2 were as follows: Will a musical excerpt characterized by a specific emotional GEMS dimension (in Study 1) be judged in a similar way on another dimension known to be positively correlated with it (in Study 2)? In the same vein, will a musical excerpt characterized by a specific emotional GEMS dimension (in Study 1) be evaluated differently on an opposite, negatively correlated emotional GEMS dimension (in Study 2)? More specifically, and following the correlation matrix of the GEMS dimensions (Appendix A, Supplemental Materials; Zentner et al., 2008), will the dynamic judgment of a given musical excerpt on the Power dimension present the same dynamic pattern when this same excerpt is judged on the Transcendence dimension (i.e., a dimension close to Power)? In contrast, will the dynamic judgment of a given musical excerpt on the Wonder dimension present an opposite dynamic pattern when this same excerpt is evaluated on the Tension dimension (i.e., a dimension distant from Wonder)? It can be expected that when two GEMS dimensions overlap with one another (e.g., Nostalgia and Tenderness), a similar pattern will appear for the same musical excerpt, whereas when two GEMS dimensions are very distant (e.g., Power and Peacefulness), a completely opposite dynamic pattern will occur for the same musical excerpt.

As noted previously, the degree of correlation between the GEMS dimensions varies depending on the dimensions compared. For instance, the Power dimension presents a strong and positive correlation with the Transcendence dimension (.42) and a positive but weak correlation with the Peacefulness dimension (.06). The Wonder dimension presents a strong and positive correlation with the Transcendence dimension (.44) and a positive but weak correlation with the Tension dimension. The Joyful Activation dimension presents a strong and positive correlation with the Wonder dimension (.41) and a positive but weak correlation with the Sadness dimension (.08). On the basis of Cronbach's alpha and the degree of variance in the dynamic judgments, we chose to focus on the GEMS dimensions that showed interesting results in Study 1. Ten musical excerpts from Study 1 were selected to investigate the specificity and complexity of the temporal unfolding of emotions expressed by music: five to be judged on a dimension similar to the one on which they were evaluated in Study 1, and five to be judged on a dimension opposite to the one on which they were evaluated in Study 1 (Table 2).

Table 2
Summary of the GEMS Evaluation of the Musical Excerpts for Study 2

Excerpt        GEMS dimension judged in Study 1    GEMS dimension judged in Study 2
Dvorak         Power                               Transcendence
Beethoven      Power                               Peacefulness
C.P.E. Bach    Joyful Activation                   Wonder
Bach           Joyful Activation                   Sadness
Sarasate       Nostalgia                           Tenderness
Schubert       Nostalgia                           Tension
Vivaldi        Wonder                              Transcendence
Bruch          Wonder                              Tension
Bazzini        Tension                             Power
Prokofiev      Tension                             Peacefulness

Note. The musical excerpts evaluated on GEMS dimensions that are highly and positively correlated are in italics, whereas those in bold are uncorrelated or negatively correlated. GEMS = Geneva Emotional Music Scale.

Method

Participants. Twenty-five students (three men and 22 women) from the University of Geneva took part in this experiment for course credits (these individuals were involved neither in the pilot study nor in the first study). The participants were aged between 18 and 33 years (mean = 21.67).

Materials and procedure. The same Flash interface as that used in Study 1 and the pilot study for the task of dynamic judgments was used in this study (see the Pilot Study section and Figure 1 for a complete description). The procedure was also the same as in Study 1 (see the Study 1 section for more details). This study was accepted by the local ethical committee of the University of Geneva, and before the beginning of the experiment, all participants filled out a written consent form in which the experiment, the data processing, and the use of the data for publication were described. To test for significant differences between the emotional judgments performed by the participants on two different GEMS dimensions for a given musical excerpt, we used a permutation method with a cluster-based correction (illustrated in the sketch below). The permutation analysis was based on 1,000 iterations each time, allowing us to build up the distribution of differences emerging by chance. These distributions, resulting from the permutations, were used to test the significant differences between the participants' emotional judgments on the two GEMS dimensions that we investigated. To take the multiple-comparison issue into account, we used a modified cluster-based correction implemented in FieldTrip under Matlab, usually used for comparisons of time series of event-related electrophysiological components (Oostenveld, Fries, Maris, & Schoffelen, 2011).

Results

As in Study 1, the reliability of the measure and the degree of agreement between participants were assessed by using Cronbach's alpha. Table 3 presents Cronbach's alphas for the musical excerpts judged in Study 1 and Study 2 on a similar or different GEMS dimension, showing similar agreement for the different emotions judged.
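The cluster-based permutation logic can be illustrated with a bare-bones R sketch. The study used the FieldTrip implementation under Matlab; the code below is only a schematic version written for this text, with invented function and variable names. Here a and b are participants x time matrices of judgments from the two groups for the same excerpt.

    # Mass of the largest cluster of contiguous supra-threshold time points.
    cluster_mass <- function(a, b,
                             thresh = qt(0.975, df = nrow(a) + nrow(b) - 2)) {
      tvals <- sapply(seq_len(ncol(a)),
                      function(j) t.test(a[, j], b[, j])$statistic)
      runs  <- rle(abs(tvals) > thresh)     # contiguous runs above threshold
      ends  <- cumsum(runs$lengths)
      masses <- mapply(function(len, end, sig)
                         if (sig) sum(abs(tvals[(end - len + 1):end])) else 0,
                       runs$lengths, ends, runs$values)
      max(c(masses, 0))
    }

    # Permutation distribution of the maximal cluster mass: reshuffle group
    # membership, recompute, and compare with the observed value.
    perm_cluster_test <- function(a, b, n_perm = 1000) {
      obs     <- cluster_mass(a, b)
      all_dat <- rbind(a, b)
      n_a     <- nrow(a)
      null <- replicate(n_perm, {
        idx <- sample(nrow(all_dat))
        cluster_mass(all_dat[idx[seq_len(n_a)], , drop = FALSE],
                     all_dat[idx[-seq_len(n_a)], , drop = FALSE])
      })
      mean(null >= obs)   # corrected p-value for the largest cluster
    }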

Table 3
Cronbach's Alpha in Study 1 and Study 2

Note. The musical excerpts evaluated on the Geneva Emotional Music Scale dimensions that are highly and positively correlated are in italics, whereas those in bold are uncorrelated or negatively correlated.

To test the significant differences between different GEMS judgments of the same musical excerpts, we performed systematic statistical analyses for the 10 musical excerpts. Each musical excerpt was judged by two different groups of participants on two different GEMS dimensions (time series obtained from the participants' dynamic judgments). The two GEMS dimensions either presented positive correlations (e.g., Joyful Activation vs. Wonder) or negative correlations (e.g., Joyful Activation vs. Sadness). Figure 6 shows examples for the musical excerpts by C.P.E. Bach (positively correlated GEMS dimensions) and Beethoven (negatively correlated GEMS dimensions; see Appendix I, Supplemental Materials, for all results).


Figure 6. Averaged judgments of the Geneva Emotional Music Scale (GEMS) dimensions and significant differences based on permutation tests. (A) The upper panel depicts the averaged z-scores for the C.P.E. Bach dynamic judgment (in black) for Study 1, evaluated on the Joyful Activation dimension, and the averaged z-scores for the C.P.E. Bach dynamic judgment for Study 2 (red), evaluated on the Wonder dimension. Blue lines correspond to significant differences (cluster corrected) between the two dynamic judgments, here on GEMS dimensions that present a high positive correlation. The lower panel depicts the statistical comparisons, based on permutations, without correction (blue line) and with cluster correction (red line), as well as the threshold-corrected p-value at .05 (green). (B) The upper panel depicts the averaged z-scores for the Beethoven dynamic judgment (in black) for Study 1, evaluated on the Power dimension, and the averaged z-scores of the Beethoven dynamic judgment for Study 2 (red), evaluated on the Peacefulness dimension. Blue lines correspond to significant differences (cluster corrected) between the two dynamic judgments, here on GEMS dimensions that present a high negative correlation. The lower panel depicts the statistical comparisons, based on permutations, without correction (blue line) and with cluster correction (red line), as well as the threshold-corrected p-value at .05 (green).

Discussion

Regarding the results of the dynamic judgments for this second study, the first observation that emerges is that all the averaged judgments evaluated on overlapping GEMS dimensions (i.e., with high positive correlations) shared a similar temporal pattern throughout the musical excerpt (Appendix I, Supplemental Materials). Indeed, Dvorak's excerpt evaluated on the Power dimension in Study 1 presents a similar dynamic emotional pattern when evaluated on the Transcendence dimension, that is, a dimension highly correlated with Power in the GEMS correlation matrix. In this example, no significant differences reached the threshold based on the cluster correction. The emotional expression rises in Power and Transcendence at the beginning of the excerpt, decreases in the middle, and increases again at the end. This observation speaks both to the specificity and to the complexity of the emotions expressed by music, meaning that it is possible to obtain exactly the same dynamic emotional pattern for two different, highly correlated emotions on the same musical excerpt. Similarly, the dynamic emotional patterns of the musical excerpt by Bazzini are almost the same for the dimension evaluated in Study 1 (Tension) and the dimension evaluated in Study 2 (Power); that is, when the dimensions were highly and positively correlated, no significant differences reached the threshold fixed by the cluster-based permutation method. Regarding the musical excerpts by C.P.E. Bach, Sarasate, and Vivaldi, even though the global pattern of emotional judgments is similar, several time periods reached significance. Indeed, the dynamic emotional patterns are essentially the same throughout the musical excerpts, but a certain complexity emerges from these dynamic judgments. For instance, within the Sarasate excerpt there is a gap at the beginning between the dimensions of Tenderness and Nostalgia, showing a decrease in the expression of Tenderness. Similarly, in the Vivaldi excerpt, the two dynamic patterns are similar, but two main moments in the piece diverged, as demonstrated by the two fluctuations in which the dimension of Wonder is dominant while the expression of Transcendence decreased. For the C.P.E. Bach excerpt, the patterns of the two emotion dimensions (Joyful Activation and Wonder) are similar but differed significantly during two periods: at the beginning of the excerpt, participants judged Joyful Activation as higher than Wonder, whereas the opposite held later on. The fact that these five musical excerpts, evaluated in Study 1 and Study 2 on highly correlated GEMS dimensions, presented almost the same dynamic pattern while showing significant differences at specific moments of the pieces argues for the specificity of the emotion expressed by music through time. Further studies should investigate the characteristics of acoustical and musical structures in order to explain these significant differences in the emotions expressed by music for highly correlated GEMS dimensions.

Regarding the five musical excerpts evaluated on GEMS dimensions that are uncorrelated or negatively correlated, the results revealed several pieces of information. First, the same observation can be made for four excerpts, namely, Prokofiev, Bach, Bruch, and Schubert: All of these excerpts show a clear distinction at the beginning, demonstrating the specificity of the emotions expressed by music, and then present a pattern that is less clearly differentiated. The patterns almost overlap for the musical excerpt by Prokofiev, evaluated on the Tension dimension in Study 1 and on the Peacefulness dimension in Study 2, meaning that the expression of emotion in music is complex once its unfolding is taken into account. The musical excerpt by Bruch shows the distinction between the two opposite dimensions more clearly: A clear distinction appears at the beginning, after which the piece alternates between the expression of Wonder and the expression of Tension. The perfect exemplification of the complexity of the emotions expressed by music is illustrated by the Beethoven excerpt. From the beginning to the end of the excerpt, exactly opposite patterns can be observed for the expressions of Peacefulness and Power: When one rises, the other decreases, each mirroring the other.

Concluding Discussion

The main goal of this series of experiments was to investigate the emotions expressed by music through time. Using a dynamic method of emotion measurement, allowing individuals' responses to be captured as they unfold, we conducted two experiments to refine the study of such aesthetic emotions. More specifically, the following three goals were pursued: (a) the capture and characterization of dynamic emotional judgments using a continuous measurement while participants listened to musical excerpts, (b) the investigation of the temporal structure of the GEMS dimensions on the basis of these dynamic emotional judgments, and (c) the investigation of the specificity and complexity of emotions expressed by music using this method.

The first aim of this exploratory work was to investigate a new method of dynamic judgment and to test the reliability of continuous judgments for the GEMS dimensions. This objective was achieved in the pilot study, demonstrating that this new approach is an effective method that opens up important perspectives for our understanding of emotion and music. The method of dynamic judgment allows us to characterize the temporal structure of the emotions expressed by music and therefore to carve out different temporal windows. Indeed, two types of analyses were performed: first, a macro-analysis to view the temporal structure of a certain type of emotion (the general aim of the present study), and second, a micro-analysis cutting temporal windows in order to focus on a given moment in which we can observe, for instance, an increase or decrease in the emotion expressed by music. According to this second objective and the results obtained, the GEMS dimensions do not have the same emotional

time course. The main limitation one could highlight is that the results depend on the choice of musical excerpts; however, with four musical excerpts per GEMS dimension in Study 1, this limitation seems sufficiently counteracted. Considering all the results obtained in this series of experiments, one can conclude that the GEMS model, coupled with the method of dynamic judgments, offers a finer granularity in the investigation of the emotions expressed by music.

Our third and last objective was the investigation of the specificity and complexity of the GEMS dimensions in Study 2. The results reveal, first, that the method of dynamic judgment is effective for investigating this topic: Global judgments cannot account for the specificity and complexity of emotions expressed by music through time. Second, by observing the dynamic emotional patterns of the musical excerpts evaluated on both similar and opposite GEMS dimensions, a wealth of results emerged. We obtained clearly contrasting dynamic emotional patterns, illustrating a certain specificity of the emotion expressed by music (e.g., Beethoven's musical excerpt evaluated on the Power and Peacefulness dimensions), but we also obtained similar dynamic emotional patterns for the musical excerpts that were evaluated on correlated GEMS dimensions (e.g., Sarasate's musical excerpt evaluated on the Nostalgia and Tenderness dimensions). An analysis of the musical structure and of acoustic parameters will provide more refined information regarding the specificity of an emotion expressed by music at a specific time. Being able to focus on a specific moment in the musical excerpt allows us to identify the specific musical elements responsible for the attribution of emotional characteristics to the music. In other words, this method of dynamic judgments will enable researchers to refine the links between musical analysis and emotional judgments thanks to real-time measurements. From a Brunswikian view (1955), one of the main perspectives of these experiments was to build a foundation for the creation of a typology of musical structure in order to list all the elements likely to have an impact on the attribution of emotional characteristics to the music. The novelty this foundation offers future research is the ability to understand the influence of these musical elements over time. Indeed, many studies have focused on understanding the weight of a particular musical element (Gabrielsson & Lindström, 2010, for a review), but only in a static context and based on global and delayed responses. This ambitious perspective gives rise to a collaboration with professionals of the Geneva University of Music and with engineers for the computational implementation of the typology. Thanks to this dynamic approach, it will be possible in the near future to apply time series analyses with two levels of predictors (acoustic parameters and musical structure) to the dynamic judgments of the expression of the GEMS dimensions, in order to better understand the complexity and specificity of the emotions expressed when people listen to music.

References

Brunswik, E. (1955). Representative design and probabilistic theory in functional psychology. Psychological Review, 62, doi: /h
Chapin, H., Jantzen, K., Kelso, J. A., Steinberg, F., & Large, E. (2010). Dynamic emotional and neural responses to music depend on performance expression and listener experience. PLoS ONE, 5(12). doi: /journal.pone
Coutinho, E., & Cangelosi, A. (2011). Musical emotions: Predicting second-by-second subjective feelings of emotion from low-level psychoacoustic features and physiological measurements. Emotion, 11, doi: /a
Curtis, M. E., & Bharucha, J. J. (2010). The minor third communicates sadness in speech, mirroring its use in music. Emotion, 10, doi: /a
Escoffier, N., Zhong, J., Schirmer, A., & Qiu, A. (2013). Emotional expressions in voice and music: Same code, same effect? Human Brain Mapping, 34, doi: /hbm
Evans, P., & Schubert, E. (2008). Relationships between expressed and felt emotions in music. Musicae Scientiae, 12, doi: /
Fabian, D., & Schubert, E. (2003). Expressive devices and perceived musical character in 34 performances of Variation 7 from Bach's Goldberg Variations. Musicae Scientiae, 7(Suppl. 1), doi: / S103
Fritz, T., Jentschke, S., Gosselin, N., Sammler, D., Peretz, I., Turner, R., Friederici, A. D., & Koelsch, S. (2009). Universal recognition of three basic emotions in music. Current Biology, 19, doi: /j.cub
Gabrielsson, A. (2001). Emotions in strong experiences with music. In P. Juslin & J. Sloboda (Eds.), Music and emotion: Theory and research (pp ). Oxford, England: Oxford University Press.
Gabrielsson, A., & Juslin, P. (2003). Emotional expression in music. In R. J. Davidson, H. H. Goldsmith, & K. R. Scherer (Eds.), Handbook of affective sciences (pp ). New York, NY: Oxford University Press.
Gabrielsson, A., & Lindström, E. (2010). The role of structure in the musical expression of emotions. In P. Juslin & J. Sloboda (Eds.), Handbook of music and emotion: Theory, research, applications (pp ). Oxford, England: Oxford University Press.
Grandjean, D., & Baenziger, T. (2009). L'expression vocale des émotions [Vocal expression of emotions]. In D. Sander & K. Scherer (Eds.), Traité de psychologie des émotions (pp ). Paris: Dunod.
Grandjean, D., Sander, D., & Scherer, K. (2008). Conscious emotional experience emerges as a function of multilevel, appraisal-driven response synchronization. Consciousness and Cognition, 17, doi: /j.concog
Hevner, K. (1935). The affective character of the major and minor modes in music. American Journal of Psychology, 47, doi: /
Hevner, K. (1936). Experimental studies of the elements of expression in music. American Journal of Psychology, 48, doi: /
Hunter, P. G., Schellenberg, E. G., & Schimmack, U. (2010). Feelings and perceptions of happiness and sadness induced by music: Similarities, differences, and mixed emotions. Psychology of Aesthetics, Creativity and the Arts, 4, doi: /a
Juslin, P. (1997). Emotional communication in music performance: A functionalist perspective and some data. Music Perception, 14, doi: /
Juslin, P. (2000). Cue utilization in communication of emotion in music performance: Relating performance to perception. Journal of Experimental Psychology: Human Perception and Performance, 26, doi: /
Juslin, P., Liljeström, S., Västfjäll, D., & Lundqvist, L. (2010). How does music evoke emotions? Exploring the underlying mechanisms. In P. Juslin & J. Sloboda (Eds.), Handbook of music and emotion: Theory, research, applications (pp ). Oxford, England: Oxford University Press.
Juslin, P., & Lindström, E. (2010). Musical expression of emotions: Modeling listeners' judgments of composed and performed features. Music Analysis, 29, doi: /j x
Juslin, P., & Sloboda, J. (Eds.). (2001). Music and emotion: Theory and research. Oxford, England: Oxford University Press.
Juslin, P., & Sloboda, J. (2010). Handbook of music and emotion: Theory, research, applications. Oxford, England: Oxford University Press.
Labbé, E., Schmidt, N., Babin, J., & Pharr, M. (2007). Coping with stress: The effectiveness of different types of music. Applied Psychophysiology and Biofeedback, 32, doi: /s
Livingstone, S., Muhlberger, R., Brown, A., & Thompson, W. (2010). Changing musical emotion: A computational rule system for modifying score and performance. Computer Music Journal, 34, doi: /comj
Madsen, C. K., & Fredrickson, W. E. (1993). The experience of musical tension: A replication of Nielsen's research using the Continuous Response Digital Interface. Journal of Music Therapy, 30,
McCulloch, C. E. (2000). Generalized linear models. Journal of the American Statistical Association, 95(452),
McKeown, G. J., & Sneddon, I. (2014). Modeling continuous self-report measures of perceived emotion using generalized additive mixed models. Psychological Methods, 19, doi: /a
Oostenveld, R., Fries, P., Maris, E., & Schoffelen, J. M. (2011). FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Computational Intelligence and Neuroscience. doi: /2011/
Rentfrow, P. J., & Gosling, S. D. (2003). The do re mi's of everyday life: The structure and personality correlates of music preferences. Journal of Personality and Social Psychology, 84, doi: /
Rentfrow, P. J., & McDonald, J. A. (2010). Preference, personality, and emotion. In P. Juslin & J. Sloboda (Eds.), Handbook of music and emotion: Theory, research, applications (pp ). Oxford, England: Oxford University Press.
Rousseau, J.-J. ( ). Dictionnaire de musique. In Collection complète des oeuvres (Vol. 9, in-4°). Genève.
Scherer, K. R. (2001). Appraisal considered as a process of multi-level sequential checking. In K. R. Scherer, A. Schorr, & T. Johnstone (Eds.), Appraisal processes in emotion: Theory, methods, research (pp ). New York, NY: Oxford University Press.
Scherer, K. R. (2004). Which emotions can be induced by music? What are the underlying mechanisms? And how can we measure them? Journal of New Music Research, 33, doi: /
Scherer, K. R., & Zentner, M. (2001). Emotion effects of music: Production rules. In P. Juslin & J. Sloboda (Eds.), Music and emotion: Theory and research (pp ). Oxford, England: Oxford University Press.
Schirmer, A., Fox, P. M., & Grandjean, D. (2012). On the spatial organization of sound processing in the human temporal lobe: A meta-analysis. Neuroimage, 63, doi: /j.neuroimage
Schubert, E. (2001). Continuous measurement of self-report emotional response to music. In P. Juslin & J. Sloboda (Eds.), Music and emotion: Theory and research (pp ). Oxford, England: Oxford University Press.
Schubert, E. (2004). Modeling perceived emotion with continuous musical features. Music Perception, 21, doi: /mp
Verduyn, P., Van Mechelen, I., Tuerlinckx, F., Meers, K., & Van Coillie, H. (2009). Intensity profiles of emotional experience over time. Cognition & Emotion, 23,
Vieillard, S., Peretz, I., Gosselin, N., Khalfa, S., Gagnon, L., & Bouchard, B. (2008). Happy, sad, scary and peaceful musical excerpts for research on emotions. Cognition & Emotion, 22, doi: /
Vines, B. W., Nuzzo, R. L., & Levitin, D. J. (2005). Analyzing temporal dynamics in music. Music Perception: An Interdisciplinary Journal, 23, doi: /mp
Zentner, M., & Eerola, T. (2010). Self-report measures and models. In P. Juslin & J. Sloboda (Eds.), Handbook of music and emotion: Theory, research, applications (pp ). Oxford, England: Oxford University Press.
Zentner, M., Grandjean, D., & Scherer, K. R. (2008). Emotions evoked by the sound of music: Characterization, classification and measurement. Emotion, 8, doi: /

Supplemental Materials

Appendix A
Intercorrelations of the Geneva Emotion Music Scale Dimensions (Zentner, Grandjean, & Scherer, 2008)

Appendix B
Details of the Musical Excerpts for the Pilot Study, in Collaboration With the French Violinist Renaud Capuçon

1. Mozart, Wolfgang Amadeus. Violin concerto No. 3 in G major, K216, Allegro (I), for the Joyful Activation GEMS dimension. (Bars 38-94)
2. Franck, César. Piano and violin sonata in A major, FWV 8, Allegro (II), for the Sadness GEMS dimension. (Bars 14-23, 29-79)
3. Bach, Jean-Sébastien. Partita No. 2 in D minor, BWV 1004, Allemande (I), for the Nostalgia GEMS dimension. (Full movement)
4. Gluck, Christoph Willibald. Orfeo ed Euridice, Melody - Largo, for the Tenderness GEMS dimension. (Full movement)
5. Beethoven, Ludwig van. Violin concerto in D major, op. 61, Larghetto (II), for the Peacefulness GEMS dimension. (Bars 40-79)
6. Sibelius, Jean. Violin concerto in D minor, op. 47, Allegro moderato (I), for the Wonder GEMS dimension. (Bars 4-59)
7. Mendelssohn, Felix. Violin concerto No. 2 in E minor, op. 64, Allegro molto appassionato (I), for the Tension GEMS dimension. (Bars 2-47)
8. Schumann, Robert. Violin concerto in D minor, Op. Posth.: In kräftigem, nicht zu schnellem Tempo (I), for the Power GEMS dimension. (Bars )
9. Massenet, Jules. Meditation from Thaïs, for the Transcendence GEMS dimension. (Bars 3-40)

Appendix C
Averaged Judgments for Dynamic Versus Static Emotional Judgments for the Pilot Study

Figure C1. Averaged judgments and confidence interval (95%) for the Expressivity scale and the GEMS dimensions as a function of type of judgment (dynamic vs. static). All judgments included (relevant and nonrelevant scales for the static condition; see the main text).

Figure C2. Averaged judgments and confidence interval (95%) for each emotion as a function of type of judgment for the matched dimension between the two conditions of judgment (dynamic vs. static). Only relevant scales have been included for the static condition (the nonrelevant scales have been excluded; see the main text).

Appendix D
Details of the Musical Excerpts for the First Study

Musical excerpt (ME) 1_Joyful Activation: Bach, Jean-Sébastien. Brandenburg concerto No. 2 in F major, BWV First movement, Bars 1-59 [Recorded by The Berlin Philharmonic Orchestra, Herbert von Karajan], [CD]. Hamburg, Germany: Deutsche Grammophon. (1992)
ME 2_Peacefulness: Bach, Jean-Sébastien. Aria, Orchestral suite No. 3 in D major, BWV Full piece [Recorded by The Berlin Philharmonic Orchestra, Herbert von Karajan], [CD]. Hamburg, Germany: Deutsche Grammophon. (1992)
ME 3_Joyful Activation: Bach, Carl Philipp Emanuel. Cello concerto in A major, WQ 172. Third movement, Bars [Recorded by The Café Zimmermann, Pablo Valetti], [CD]. Alpha. (2006)
ME 4_Tension: Bazzini, Antonio. The Dance of the Goblins, op. 25. Bars [Recorded by Maxim Vengerov & Itamar Golan], [CD]. Les Incontournables du Classique. Teldec Classics. (2002)
ME 5_Power: Beethoven, Ludwig van. Symphony No. 9 in D minor, op. 125. Scherzo: Molto vivace - Presto, Bars [Recorded by The Berlin Philharmonic Orchestra, Herbert von Karajan], [CD]. Hamburg, Germany: Deutsche Grammophon. (2001)
ME 6_Sadness: Beethoven, Ludwig van. Symphony No. 7 in A major, op. 92. Allegretto, Bars 1-82 [Recorded by The Berlin Philharmonic Orchestra, Herbert von Karajan], [CD]. Hamburg, Germany: Deutsche Grammophon. (2001)
ME 7_Wonder: Beethoven, Ludwig van. Violin Concerto in D major, op. 61. Rondo. Allegro, Bars [Recorded by The Berlin Philharmonic Orchestra, Herbert von Karajan, Anne-Sophie Mutter], [CD]. Hamburg, Germany: Deutsche Grammophon. (1999)
ME 8_Nostalgia: Brahms, Johannes. Symphony No. 3 in F major, op. 90. Poco Allegretto, Bars 1-78 [Recorded by The Vienna Philharmonic Orchestra, Karl Böhm], [CD]. Deutsche Grammophon. (2002)
ME 9_Wonder: Bruch, Max. Violin Concerto No. 1 in G minor, op. 26. Finale: Allegro energico, Bars [Recorded by The Gewandhausorchester Leipzig, Kurt Masur, Maxim Vengerov], [CD]. Teldec Classics. (1994)
ME 10_Power: Bruckner, Anton. Symphony No. 9 in D minor. First movement, Bars [Recorded by The Berlin Philharmonic Orchestra, Eugen Jochum], [CD]. Deutsche Grammophon. (2002)
ME 11_Tenderness: Carulli, Ferdinando. Concerto for flute, guitar and orchestra. Second movement, Bars 1-64 [Recorded by the Franz Liszt Chamber Orchestra, Jànos Rolla], [CD]. CBS Records Masterworks. (1988)
ME 12_Power: Dvorak, Antonin. New World Symphony, Symphony No. 9 in E minor, op. 95, B. 178. Allegro con fuoco, Bars 1-92 [Recorded by The Berlin Philharmonic Orchestra, Herbert von Karajan], [CD]. Deutsche Grammophon. (1999)
ME 13_Tenderness: Dvorak, Antonin. Symphony No. 8 in G major, op. 88, B . Allegretto grazioso - Molto vivace, Bars [Recorded by The Vienna Philharmonic Orchestra, Herbert von Karajan], [CD]. Deutsche Grammophon. (1990)
ME 14_Tenderness: Elgar, Edward. Salut d'amour, op. 12. Bars 1-68 [Recorded by The Saint Louis Symphony Orchestra, Pinchas Zukerman, Leonard Slatkin], [CD]. Pinchas Zukerman. (1993)
ME 15_Peacefulness: Fauré, Gabriel. Pavane in F sharp minor, op. 50. Bars 1-47 [Recorded by the Boston Symphony Orchestra, Carlo Maria Giulini], [CD]. Deutsche Grammophon. (2009)
ME 16_Joyful Activation: Hummel, Johann Nepomuk. Trumpet concerto in E-flat major. Third movement, Bars [Recorded by The Berlin Philharmonic Orchestra, Herbert von Karajan, Maurice André], [CD]. EMI Classics. (1999)
ME 17_Transcendence: Mahler, Gustav. Symphony No. 1 in D major. First movement, Bars 1-34 [Recorded by The Concertgebouw Orchestra of Amsterdam, Leonard Bernstein], [CD]. Deutsche Grammophon. (1999)
ME 18_Power: Mendelssohn, Felix. String Symphony No. 12 in G minor. Allegro molto, Bars [Recorded by The Gewandhausorchester Leipzig, Kurt Masur], [CD]. Brilliant Classics.
ME 19_Tension: Mendelssohn, Felix. String octet in E-flat major, op. 20. Scherzo, Bars [Recorded by The Wiener Oktett], [CD]. Decca. (1988)
ME 20_Peacefulness: Molino, Francesco. Guitar concerto in E minor, op. 56. Maestoso molto. [Recorded by The Academy of St Martin in the Fields, Iona Brown, Pepe Romero], [CD]. Philips Digital Classics. (1990)
ME 21_Joyful Activation: Mozart, Wolfgang Amadeus. Divertimento in D major. Allegro, Bars 1-51 [Recorded by The Amsterdam Baroque Orchestra, Ton Koopman], [CD]. Erato. (1990)
ME 22_Sadness: Mozart, Wolfgang Amadeus. Piano Concerto No. 23 in A major, KV 488. Adagio, Bars 1-42 [Recorded by Derek Han], [CD]. Brilliant Classics. (2005)
ME 23_Sadness: Porpora, Nicola. Alto Giove, Polifemo, instrumental version (oboe and orchestra), Bars 1-23 [Recorded by Derek-Lee Ragin], [CD]. Auvidis Travelling. (1994)
ME 24_Tension: Prokofiev, Sergei. Violin concerto in D major, op. 19. Scherzo - Vivacissimo, Bars [Recorded by The London Symphony Orchestra, Maxim Vengerov], [CD]. Teldec Classics. (1994)
ME 25_Transcendence: Ravel, Maurice. Boléro in C major, ballet music for orchestra, Bars [Recorded by The Berlin Philharmonic Orchestra, Herbert von Karajan], [CD]. Deutsche Grammophon. (1999)
ME 26_Nostalgia: Sarasate, Pablo. Zigeunerweisen (Gypsy Airs), op. 20 No. 1, Bars [Recorded by The Vienna Philharmonic Orchestra, James Levine, Anne-Sophie Mutter], [CD]. Deutsche Grammophon. (1993)
ME 27_Transcendence: Schonberg, Arnold. Verklärte Nacht, op. 4. Sehr ruhig, Bars [Recorded by The Berlin Philharmonic Orchestra, Herbert von Karajan], [CD]. Deutsche Grammophon. (1999)
ME 28_Nostalgia: Schubert, Franz. Piano Trio No. 2 in E-flat major, op. 100, D 929. Andante con moto, Bars 1-34 [Recorded by Renaud Capuçon, Gautier Capuçon, Frank Braley], [CD]. Parlophone. (2007)
ME 29_Tension: Schubert, Franz. Impromptu No. 4 in F minor, D 935. Allegro scherzando, Bars [Recorded by Wilhelm Kempff], [CD]. Deutsche Grammophon. (2009)
ME 30_Tenderness: Tchaïkovski, Piotr Ilitch. Symphony No. 6 in B minor, op. 74, Pathétique. Allegro con grazia, Bars 1-56 [Recorded by The Berlin Philharmonic Orchestra, Herbert von Karajan], [CD]. Deutsche Grammophon. (2008)
ME 31_Transcendence: Tchaïkovski, Piotr Ilitch. Swan Lake, ballet, op. 20. Act Two, Bars 1-51 [Recorded by The Boston Symphony Orchestra, Seiji Ozawa], [CD]. Deutsche Grammophon. (1997)
ME 32_Nostalgia: Tiersen, Yann. Comptine d'un autre été: L'après-midi. Film: Amélie Poulain. Full piece [Recorded by Yann Tiersen], [CD]. EMI Virgin. (2001)
ME 33_Wonder: Vivaldi, Antonio. Double trumpet concerto in C major, RV 537. Allegro, Bars 1-68 [Recorded by The Munich Chamber Orchestra, Maurice André], [CD]. Deutsche Grammophon. (1997)
ME 34_Peacefulness: Vivaldi, Antonio. Violin Concerto in D major, RV 190. Andante, Bars [Recorded by The Venice Baroque Orchestra, Giuliano Carmignola], [CD]. Sony Classical. (2001)
ME 35_Sadness: Vivaldi, Antonio. Violin Sonata No. 8 in D minor. Preludio [Recorded by The Akademie für Alte Musik Berlin, Clemens-Maria Nuszbaumer], [CD]. Harmonia Mundi. (2011)
ME 36_Wonder: Vivaldi, Antonio. Violin Concerto in C major, RV 190. Allegro, Bars [Recorded by The Venice Baroque Orchestra, Giuliano Carmignola], [CD]. Sony Classical. (2001)

Appendix E
Description of the Geneva Emotion Music Scale Dimensions

For the next piece, we ask you to judge the emotion of Wonder expressed by music, i.e., a music style that is happy, amazed, dazzled, and allured.
For the next piece, we ask you to judge the emotion of Transcendence expressed by music, i.e., a music style that is inspired, spiritual, and transcendent.
For the next piece, we ask you to judge the emotion of Tenderness expressed by music, i.e., a music style that is affectionate, sensual, tender, and softened up.
For the next piece, we ask you to judge the emotion of Nostalgia expressed by music, i.e., a music style that is dreamy, melancholic, nostalgic, and sentimental.
For the next piece, we ask you to judge the emotion of Peacefulness expressed by music, i.e., a music style that is calm, relaxed, serene, soothed, and meditative.
For the next piece, we ask you to judge the emotion of Power expressed by music, i.e., a music style that is energetic, triumphant, fiery, strong, and heroic.
For the next piece, we ask you to judge the emotion of Joyful Activation expressed by music, i.e., a music style that is animated, stimulated, dancing, amused, and joyful.
For the next piece, we ask you to judge the emotion of Tension expressed by music, i.e., a music style that is nervous, agitated, tense, impatient, and irritated.
For the next piece, we ask you to judge the emotion of Sadness expressed by music, i.e., a music style that is sad and sorrowful.

Appendix F
Averages of All Dynamic Judgments for Each Musical Excerpt of the First Study
(One panel per GEMS dimension: Wonder, Transcendence, Tenderness, Nostalgia, Peacefulness, Power, Joyful Activation, Tension, Sadness.)

Appendix G
Results of the Generalized Additive Mixed Model Analysis With Time and Emotion

Appendix H
Time-Frequency Maps of the Dynamic Judgments for the Different GEMS Dimensions
(Maps grouped by second-order factor. Vitality: Joyful Activation, Power; Sublimity: Wonder, Nostalgia, Peacefulness, Tenderness, Transcendence; Unease: Sadness, Tension.)

Appendix I
Summary of the Dynamic Average Z-Score Results for the 10 Musical Excerpts Evaluated in the Third Study on Two GEMS Dimensions That Were Positively or Negatively Correlated or Uncorrelated

Bach CPE: Joyful Activation vs. Wonder
Bach: Joyful Activation vs. Sadness
Bazzini: Power vs. Tension
Beethoven: Power vs. Peacefulness
Bruch: Tension vs. Wonder
Dvorak: Power vs. Transcendence
Prokofiev: Peacefulness vs. Tension
Sarasate: Nostalgia vs. Tenderness
Schubert: Nostalgia vs. Tension
Vivaldi: Wonder vs. Transcendence

3.2. Etude 2

How do the dynamics of musical structure and acoustical features contribute to emotion recognition in music?

Kim Thibault de Beauregard, Olivier Lartillot, Donato Cereghetti, Simon Schaerlaeken, Marc-André Rappaz, Donald Glowinski, Didier Grandjean.
Faculty of Psychology and Educational Sciences and Swiss Center for Affective Sciences, Campus Biotech, University of Geneva, Geneva, Switzerland.

Abstract

This experiment focuses on the impact of several factors on the process of perceiving emotional characteristics in music, based on the GEMS model and the method of dynamic judgments. Using the MIRtoolbox and favoring an interdisciplinary approach, this article proposes a new framework to study and refine the GEMS model. From a list of 36 predictors, and using factor and principal component analyses, two studies (Study 1, n = 71, and Study 2, n = 111) highlight the relevance of three major factors (Factor 1, characterized by spectral aspects; Factor 2, characterized by energy and dynamic aspects; and Factor 3, characterized by novelty aspects) in explaining the GEMS dimensions expressed by music. We then performed GLMM and GAMM analyses. These statistical models allowed us to test whether the temporal structures of the acoustical/structural factors are significant predictors of the unfolding of the GEMS dimensions. Our models demonstrate that these three factors, with or without time, explained between 50.1% and 70.1% of the variance.

Keywords: musical structure, emotion, perception, dynamics

Acoustic Parameters and Auditory Percepts

Humans appreciate and aesthetically evaluate a wide range of musical entities (Cross, 2003). Music can express different emotions, depending on a combination of several elements in the acoustic signal, the musical score, the style, and the interpretation/performance. For example, all individuals, with rare exceptions, are able to attribute a valence to a major or minor musical chord. As pointed out by Johann Joachim Quantz, a famous musician of the Prussian Court during the 18th century: the harsh tone (major) is generally used to express joy, insolence, seriousness, majesty, whereas the soft tone (minor) is used to express flattery, sadness, tenderness (Droz, 2001, p. 6). There are indeed numerous studies demonstrating that individuals are able to recognize and attribute emotional characteristics to music. Fritz et al. (2009) demonstrated the universality of recognizing basic emotions (joy, sadness, fear) expressed by music in a native African population (the Mafas), unfamiliar with the repertory of Western classical music. Curtis and Bharucha (2010) demonstrated the importance of pitch for decoding an emotion expressed by music. Vieillard et al. (2008) also showed the ability of individuals to evaluate the happy, sad, scary, and peaceful characters of musical excerpts and proposed a list of 56 musical excerpts representative of these four emotional states.

The fundamental and distinctive characteristics of music are constituted by a temporal ordering of action (pulsation/rhythm) and an organization based on frequencies (melody/harmony) (Bispham, 2010). An objective method for analysing the expression of emotion through music or voice is to perform acoustic analyses (Grandjean & Baenziger, 2009). Several parameters in the acoustic signal allow individuals to decode emotional expression, both vocally and musically (Juslin & Laukka, 2003). The acoustic analyses traditionally used for the study of emotional prosody concern the contour of the fundamental frequency, the contour of intensity, and the duration of expressions. The fundamental frequency (F0) is expressed in hertz (Hz) and corresponds to the number of repetitions of the fundamental period of the acoustic signal per second. The contour of F0 represents the evolution of this fundamental period during an emotional expression. The acoustic intensity is expressed in decibels (dB) and is derived from the amplitude of the acoustic signal. The contour of the acoustic intensity corresponds to the evolution of the sound intensity during an emotional expression (Grandjean & Baenziger, 2009). It is also possible to analyse the relative energy in different frequency bands of the acoustic signal (Banse & Scherer, 1996). Drawing on these different acoustic analyses, a review by Scherer (2003, in Grandjean & Baenziger, 2009) highlighted specific patterns in the production of vocal emotional expressions. Anger, for instance, is characterized by high energy in the high frequencies, a fast rate of speech and articulation, and an increase in F0 range, whereas sadness is characterized by the inverse pattern.

In their literature review, Juslin and Laukka (2003) highlighted an important parallel between vocal expression and music, as Rousseau had already done in 1780. Both are part of nonverbal communication (vocal expression being the nonverbal aspect of speech) and can communicate specific emotions through changes in certain acoustic parameters. Following this meta-analysis, Juslin and Laukka (2003) confirmed that musical and vocal emotional expressions involve similar structural cues, such as fundamental frequency, pitch, rhythm, and loudness. Moreover, Zatorre, Belin, and Penhune (2002) demonstrated the involvement of common brain structures in the processing of prosodic and musical signals. Similarly, Escoffier, Zhong, Schirmer, and Qiu (2013) directly compared the emotions of Joy and Sadness expressed through music and voice, and they demonstrated that regions of the superior temporal gyrus are activated for both music and voice (for a review of experiments comparing music and voice, see Schirmer, Fox, & Grandjean, 2012). The objects (sounds) that make up music are not inherently musical. To obtain an emotional content, it is necessary to

combine these objects. Having reviewed the physical characteristics of the acoustic signal, the next section deals with the musical structure, a more sophisticated level in the process of attributing emotional characteristics to music.

Musical Structure

The production of a musical piece most of the time stems from a music score representing the frame of the work. The musical structure is closely linked to the musical sounds and notates four fundamental characteristics of the sound on paper: the pitch, the duration, and the intensity of the notes, and the timbre of the instruments (Gabrielsson & Lindström, 2001). The composer uses various means, present in the musical structure (i.e., in the music score), to convey the target emotion of a musical piece. Generally, composers seek out and manipulate spectral and temporal complexity in order to increase the interest of their works. Studies have highlighted the importance of several musical cues in the attribution of emotional characteristics to music: the mode (major/minor) (Hevner, 1935); the tempo (speed of the musical piece) (Peretz, Gagnon, & Bouchard, 1998); the articulation (legato/staccato) (Juslin, 1997); the intensity/loudness (strong/weak) (Juslin, 2000); the melodic contour (Schubert, 2004); the pitch (Curtis & Bharucha, 2010); the rhythm (Thompson & Robitaille, 1992); the harmony (consonance/dissonance) (Hevner, 1936) (for a review, see Juslin & Sloboda, 2001, 2010). Furthermore, dissonant sounds activate brain regions involved in emotion. Blood, Zatorre, Bermudez, and Evans (1999) found a significant correlation between brain activations in the left parahippocampal gyrus and the right precuneus and increasing dissonance and displeasure. Dissonant sounds are extremely unpleasant, and even infants aged two to four months show avoidance behaviour when they hear dissonant sounds (Trainor, Tsang, & Cheung, 2002). Focusing on specific elements of the music score, it has been shown, for example, that a fast rhythm is rated as happier than a slow rhythm; interval analyses revealed that wide intervals are considered more powerful than small intervals, the minor second being traditionally considered the most melancholic interval, whereas the octave, the fourth, the fifth, and the major sixth are considered «happy/careless» intervals (Gabrielsson & Lindström, 2001, for a review). There are strong interactions among the elements of the musical structure, for instance, between rhythm and pitch: the combination of a semiquaver rhythm and the note G on a violin may be related to the expression of tension/fear, whereas a slow rhythm of half notes may be related to the expression of sadness.

A key concept in musical structure is musical expectancy and the tension/release resulting from it (Farbood, 2012). As pointed out by Pearce and Wiggins (2012, p. 625): Expectations play a role in a multitude of cognitive processes from sensory perception, through learning and memory, to motor responses and emotion generation. In their attempt to identify the mechanisms behind music-evoked emotions, Juslin and Västfjäll (2008) highlighted the importance of musical expectancy (see also Juslin, Liljeström, Västfjäll, & Lundqvist, 2010). Similarly, and more recently, Koelsch (2015) listed the principles underlying the evocation of emotion with music. Among them, he described the structural factors that give rise to musical tension: the acoustical features (see above), the stability of the musical structure, the structural breach, the resolution of the breach, and finally the resolution of a sequence. The concept of musical expectancy is linked to sound imagery, i.e., the capacity of the auditory system to generate an inner sound. Kraemer, Macrae, Green, and Kelley (2005) demonstrated that when individuals know a musical excerpt, the primary auditory cortex is activated in the absence of sensory stimulation. This phenomenon is due to the fact that even in the absence of sound, the brain generates a series of core expectations in the construction of the auditory representation, underlying the capacity

of the system to form expectations about what will happen in the tens or hundreds of milliseconds that follow. The concepts of musical expectancy and tension/release demonstrate how complex the time course of music is in generating emotion in a musical piece (Vines, Levitin, & Nuzzo, 2005).

Models of Emotional Perception

As pointed out above, music is a composite phenomenon (Cross, 2003). From a perceptual point of view, it is possible to split music into several dimensions, each of which has its own level of complexity and its own structure. Their sensory equipment and the sophistication of their neural networks enable individuals to construct the world and perceive their own reality. The aesthetic experience consists of fast and partially unconscious receptive processes, i.e., processes that involve the sensory organs, and of domain-general processes based on mental representations built from the contribution of the sensory organs. The main feature of musical rhythmic behaviour is that the temporal structure of actions is based on a pulsation containing regular elements, allowing individuals, within the time limits of a psychological present, to interact in real time through a synchronization process (entrainment; Clayton et al., 2003, in Deliège, Vitouch, & Ladinig, 2010). Reacting to a musical beat involves an activation of the motor system that enables individuals to exert temporal control. As mentioned previously, from a macro-perspective, there are the basic dimensions of music (e.g., loudness, duration, stream segregation, consonance/dissonance), which concern the general perceptual characteristics of the auditory system. Then, individuals have the ability to define the musical contour and the metric organization of musical sequences, allowing the recognition and attribution of emotional characteristics to music through diverse cognitive functions such as selective attention, memory, and pattern organization.

In the context of social psychology, Brunswik (1955) created the Lens model. This model sets out a number of theoretical and methodological principles for the study of perception, especially the attribution of psychological traits from the observation of the external appearance of an individual. This framework provides a comprehensive account of emotion communication, considering the entire path from sender to receiver/observer. On the sender side, transient states are expressed in behaviour by means of cues, which represent an objective measure. These cues (called distal because they are remote from the observer) correlate with the sender's state, and the degree of correlation constitutes the ecological validity of the cues. On the observer side, distal cues are the object of the perception process. Perceived cues (called proximal because they are close to the receiver/perceiver) represent the basis on which the observer makes inferences to attribute states to the sender. The Lens model has inspired several research domains and has been adapted several times. For instance, Scherer (2003) adapted this model to investigate judgments of personality traits based on vocal expression. In their modified version of the Lens model, Grandjean, Baenziger and Scherer (2006) linked the physical properties of the acoustic signal to the percepts built by the auditory cortex areas in order to attribute an emotional state. In this version, the distal indicators represent the objective measure (e.g. fundamental frequency, quantification of energy in different frequency bands) whereas the proximal percepts represent the subjective measure (e.g. loudness, pitch). The more cues the perceiver has, the more accurate the evaluation (for a complete view of this adaptation, see Grandjean & Baenziger, 2009, p. 135). In the field of musical research, Juslin (1997) asked three professional guitarists to interpret three short melodies («When the Saints», «Nobody Knows» and «Greensleeves»), modulating their playing in order to express joy, sadness, anger and fear. This study yielded three major findings: firstly, performers are able to express emotions by modulating

different aspects of their playing; secondly, the cues used by the performers to express a specific emotion are the same as the cues used by the listeners to attribute a specific emotion to the music; finally, the use of the cues is more consistent and stable across the different melodies than across the different interpretations. The process of perception, recognition and attribution of emotional characteristics to music may be summarized as follows: i) a basic, physical level concerns the acoustic parameters of the signal, e.g. the fundamental frequency or the distribution of energy in the signal (Juslin & Laukka, 2003); ii) a low-level processing stage, i.e. related to sensory processing, refers to the perceptual level of acoustic parameters (Banse & Scherer, 1996): individuals perceive the loudness (volume), the pitch (perceptual correlate of the fundamental frequency), and the timbre; iii) individuals then have the ability to define the musical outline and the metric organization of musical sequences, a more elaborated level linked to the notion of musical structure, which is invariant across different types of musical performances. Some perceptual features are specific to music, e.g. pitch perception on a musical scale, and this ability of pitch perception is present in all cultures (Brattico, Brattico & Jacobsen, in Deliège, Vitouch & Ladinig, 2010); iv) finally, a last level of interest concerns the emotional attribution made by listeners through a judgement. This last level includes the attribution of the emotion(s) expressed by music, which the listener can infer using several cues based on the percepts, themselves correlated with the acoustic parameters, as well as cues ranging from aspects of perceptual organization to the more elaborated level related to the notion of musical structure. The attribution of emotional characteristics to music involves cognitive functions and corresponds to the most elaborated level of interest. Musically, there are strong interactions between the acoustic parameters, the percepts and the musical structure in the decoding of emotions expressed by music. Therefore, a major question is how the percept mediates the relationship between acoustic parameters and emotional judgments.

In the present study, we will focus on the GEMS model proposed by Zentner, Grandjean and Scherer (2008) in order to better understand the key cues that individuals use to attribute emotional characteristics to music. One can expect, for example, that individuals will rely on spectral aspects for GEMS dimensions such as Power or Joyful Activation. Some aspects of novelty should also be relevant across all GEMS dimensions, and one can imagine that specific cues will not have the same impact or relevance across GEMS dimensions, especially depending on whether we add the time factor or not. The two following experiments are therefore concerned with the emotion expressed by music and not the subjective feeling (for a review, see Gabrielsson and Lindström, 2001).

Methods

There are different types of scales in music, e.g. râgas in Indian classical music or maqam in Arab classical music, but we will focus on Western classical music for the present study. Using a dynamic method of measurement, Thibault de Beauregard, Ott, Labbé and Grandjean (submitted) have underlined the unfolding complexity and the specificity of the GEMS dimensions. The GEMS dimensions are not mutually exclusive: each of them can be combined with one or more others within the same music score, while retaining a dominant trend. In order to refine the investigation of the process of attribution of emotional characteristics to music, the present study will use the acoustic parameters as first-level predictors and the aspects of musical structure as second-level predictors. Favouring an interdisciplinary perspective, we collaborated with the Geneva University of Music to build up a musical typology and with one of the engineers who created the MIRtoolbox. The MIRtoolbox (Music Information Retrieval toolbox) is an integrated set of functions written in Matlab, dedicated to the extraction of musical features from audio files (Lartillot, Toiviainen & Eerola, 2008). The design is based on a modular framework: the different algorithms are decomposed into stages, formalized using a minimal set of elementary mechanisms, and integrating different variants proposed by alternative approaches, which users can select and parameterize.
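Since the statistical analyses reported below were run in R, the Matlab-extracted features have to be brought into that environment at some point. The following is only a minimal sketch of that bridging step, assuming the MIRtoolbox output was exported as one CSV file per excerpt (one row per analysis frame, one column per feature); the file layout and column names are hypothetical.

```r
# Minimal sketch, assuming MIRtoolbox features were exported from Matlab
# as one CSV per excerpt (one row per analysis frame, one column per
# feature). File layout and column names are hypothetical.
library(dplyr)

feature_files <- list.files("features", pattern = "\\.csv$", full.names = TRUE)

features <- bind_rows(lapply(feature_files, function(f) {
  df <- read.csv(f)
  df$excerpt <- sub("\\.csv$", "", basename(f))  # tag rows with excerpt name
  df
}))

# Standardize each of the 36 predictors so that excerpts are comparable
feature_cols <- setdiff(names(features), c("excerpt", "time"))
features[feature_cols] <- scale(features[feature_cols])
```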

The development of the MIRtoolbox facilitates the investigation of the relation between musical features and musical emotions. The feature extractors of the MIRtoolbox can be organized according to the main musical dimensions: timbre, tone, rhythm/metre, and dynamics/structure. The list of the acoustic and musical predictors defined from the MIRtoolbox, in collaboration with the professor from the Geneva University of Music, is presented in Table 1. A detailed description of each feature is available in Appendix D of this manuscript.

Table 1. List of the 36 predictors for the acoustical and musical analyses of the GEMS dimensions.

Figure 1 illustrates the main question of the present study concerning the complexity of the combination of the cues.

Figure 1. Adaptation of an emotional perspective of the Lens model during a dynamic judgement, proposed by Glowinski et al. (2015) and adapted from Juslin and Lindström (2010). M: musical excerpt(s); X, Y, Z: emotional cues present in the musical excerpt, represented by acoustic parameters and aspects of the musical structure; XY, YZ, XZ: interactions between emotional cues; PC: cues perceived by the listener (L), on which the attribution of emotional characteristics is based. In the initial version of this model (Glowinski et al., 2015), M corresponds to «musician», but we changed the object of interest to adapt this model to our point of interest in this article; here, M corresponds to «musical excerpts». The general idea is that there are several acoustical and musical cues in the musical signal on which the listener relies, and that these cues interact in time.

In order to characterize the relevant cues in the process of emotional attribution to music, the present experiment consists of two studies. The main aim of Study 1, focusing on 36 musical excerpts used in a previous study (see Thibault de Beauregard, Ott, Labbé & Grandjean, submitted), was to investigate the acoustic parameters and the aspects of musical structure as predictors in the attribution of GEMS dimensions.

The purpose of Study 2 was to compare two uncorrelated specific GEMS dimensions, Power and Tenderness (see the correlation matrix of the GEMS dimensions in Zentner, Grandjean & Scherer, 2008, p. 506).

Study 1

Participants

71 undergraduate students (8 men) from the University of Geneva took part in this experiment for course credits. The average age was … (range = 18-36). This study was accepted by the local ethics committee of the University of Geneva and, before the beginning of the experiment, all participants filled out a consent form in which the experiment, the data processing, and the use of the data for publications were described.

Materials and procedure

Based on our musical expertise and our knowledge of the GEMS dimensions, we chose a series of musical excerpts corresponding more or less to the 9 dimensions of the GEMS. We had 36 musical excerpts in total, i.e. 4 musical excerpts per dimension (see Thibault de Beauregard, Ott, Labbé & Grandjean, submitted). The mean duration of the excerpts was 2′36″ (range from 2′21″ to 3′18″). The detailed description of the 36 musical excerpts is available in Appendix A of the supplemental materials. The method of dynamic judgment that we used was implemented in a Flash interface, allowing us to record the dynamic judgments in real time. During the task, participants used a graphic interface to judge the intensity of one specific GEMS dimension through time (e.g., Nostalgia). The width of the graph was 1,000 pixels (corresponding to a duration of 4 min 16 s) and the height was 300 pixels (1,280 × 1,024-pixel screen, 17 in.). Participants had direct visual feedback of the judgments they were making in the graphic interface by moving a computer mouse up and down as time advanced automatically (if necessary, the graphics window could

be scrolled). Measurements were made every 250 ms. The x-axis represented time, while the y-axis represented the intensity of the GEMS dimension expressed by the music (e.g., Peacefulness) on a continuous scale marked by three levels of intensity: low, medium, and high. The main instruction was "Rate to what extent the music expresses [dimension of interest]", including the main items describing the dimension (Figure 2).

Figure 2. Screenshot of the dynamic Flash interface (in French) for the task of dynamic judgments, here with the dimension of Peacefulness and the instruction: "Rate to what extent the music expresses peacefulness: calm, relaxed, serene, soothed, and meditative."

Before beginning the experiment, the participants performed a training trial to become familiar with the procedure. These 36 musical excerpts were studied in a first set of experiments (Thibault de Beauregard, Ott, Labbé & Grandjean, submitted); taken together, that first set of analyses pursued three goals: (a) the capture and characterization of dynamic emotional judgments using a continuous measurement while participants listened to musical excerpts; (b) the investigation of the temporal structure of the GEMS dimensions on the basis of these dynamic emotional judgments; and (c) the investigation of the specificity and complexity of emotions expressed by music using this method. Following these findings, the next step is to characterize the acoustical and musical dynamic predictors of the attribution of emotion to music with these musical excerpts.
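As a minimal sketch of the preprocessing implied by this recording method, the raw traces (vertical mouse positions sampled every 250 ms) can be rescaled and standardized before analysis; the object and column names below are hypothetical.

```r
# Minimal preprocessing sketch: raw judgments are vertical mouse positions
# (0-300 px) sampled every 250 ms. Object and column names are hypothetical.
library(dplyr)

judgments <- read.csv("dynamic_judgments.csv")  # participant, excerpt, frame, y_px

judgments <- judgments %>%
  mutate(
    time_s    = frame * 0.250,  # 250-ms sampling -> seconds
    intensity = y_px / 300      # rescale pixel height to [0, 1]
  ) %>%
  group_by(participant) %>%
  mutate(intensity_z = as.numeric(scale(intensity))) %>%  # z-score per rater
  ungroup()
```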

Results

In order to control for the multicollinearity of the 36 predictors (see Table 1) and to select the most informative and least correlated group of potential predictors of perceived emotion, we first performed a Principal Component Analysis (PCA) on the acoustical/structural features of all musical excerpts in order to define the number of relevant factors. This PCA revealed, through the scree plot, that three factors explained 42.94% of the variance (first component: 24.85%; second: 9.62%; third: 8.47%). In a second step, we performed a factorial analysis with three factors based on the PCA (using normalized varimax rotation). We selected the most saturated predictors for each factor (loadings higher than .7). The first factor consisted of spectral aspects, with the following nine variables: spectral centroid, spectral skewness, spectral kurtosis, spectral entropy, rolloff, brightness_1000, brightness_1500, brightness_3000, and zerocross. The second factor consisted of energy and dynamics spectral aspects, with the following three variables: RMS energy, roughness, and spectral flux. The third and last factor consisted of novelty dynamic aspects, with the following three variables: spectral novelty, novelty waveform, and novelty chromagram. In order to investigate the relationships between these factors and the emotional judgments, we extracted the factor scores for all musical excerpts in order to perform the General Linear Mixed Model (GLMM) analyses; this feature-reduction step is sketched below.
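The thesis does not specify which R routines were used for this step, so the following is only a plausible reconstruction using the psych package, with the .70 loading threshold mentioned above and the `features` object from the earlier sketch.

```r
# Plausible reconstruction of the feature-reduction step (the original
# scripts are not available); `features` and `feature_cols` as above.
library(psych)

X <- features[feature_cols]

scree(X)  # inspect the scree plot to choose the number of components

# Three-component solution with (Kaiser-normalized) varimax rotation
fa3 <- principal(X, nfactors = 3, rotate = "varimax", scores = TRUE)

# Keep, per factor, the predictors loading above .70
L <- unclass(fa3$loadings)
lapply(1:3, function(k) rownames(L)[abs(L[, k]) > .70])

# Factor scores used as predictors in the mixed models below
factor_scores <- setNames(as.data.frame(fa3$scores), c("F1", "F2", "F3"))
```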

We removed the first 50 time frames of each trial to avoid high-frequency artefacts related to the beginning of the dynamic judgments, representing 2.7% of the observations. A first classical GLMM on the acoustic variables according to the emotions was performed in order to test how the three acoustical/structural factors can differentially explain the variance of the emotional judgments (in this case without dynamic aspects). We used GLMMs (using the lmer function, R software, version …) to test the main effects and the interaction effects, specified as follows: Factor (three levels: Factor 1, Factor 2, Factor 3) and emotion (the nine GEMS dimensions), with one random factor, the musical excerpts. The comparisons of the different models (log likelihood, deviance, analysis of variance [ANOVA]) revealed a main effect of the GEMS dimensions (χ²(8) = 21.24, p < .01), such that Power, Joyful Activation and Transcendence were significantly different from the six other GEMS dimensions (Figure 3; see Appendix B for all averaged judgments and 95% confidence intervals for each emotion as a function of each factor).

Figure 3. Averaged values of the GEMS dimensions.

All other main effects were significant: Factor 1 (χ²(1) = …, p < .001), Factor 2 (χ²(1) = …, p < .001), Factor 3 (χ²(1) = …, p < .001). All the interaction effects between the factors and the GEMS dimensions were also significant: Factor 1 × GEMS dimensions (χ²(8) = …, p < .001), Factor 2 × GEMS dimensions (χ²(8) = …, p < .001), Factor 3 × GEMS dimensions (χ²(8) = …, p < .001). In order to investigate the specific impact of each factor on each GEMS dimension, we performed a systematic contrast analysis (Table 2). The p-values were corrected for this contrast analysis (.05/9, i.e. p < .0056), and we report only the significant values here.
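A minimal sketch of this model-comparison logic in lme4 syntax (the data frame `dat`, combining the z-scored judgments with the factor scores and excerpt identifiers, is an assumed object):

```r
# Minimal sketch of the GLMM comparisons, assuming a data frame `dat` with
# the judgment, the emotion label, the factor scores F1-F3 and the excerpt.
library(lme4)

m0 <- lmer(judgment ~ 1 + (1 | excerpt), data = dat, REML = FALSE)
m1 <- lmer(judgment ~ emotion + (1 | excerpt), data = dat, REML = FALSE)
m2 <- lmer(judgment ~ emotion + F1 + F2 + F3 + (1 | excerpt),
           data = dat, REML = FALSE)
m3 <- lmer(judgment ~ emotion * (F1 + F2 + F3) + (1 | excerpt),
           data = dat, REML = FALSE)

anova(m0, m1, m2, m3)  # likelihood-ratio tests (chi-square statistics)
```

Comparing nested models fitted by maximum likelihood in this way yields the chi-square statistics reported above for the main and interaction effects.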

Table 2. T-values for the significant differences (corrected p-values < .0056) for each GEMS dimension with each factor and their interactions in the GLMM analysis.

These GLMM analyses demonstrate that there is time-independent information in our data. In order to investigate the impact of the factors in time, we performed Generalized Additive Mixed Model (GAMM) analyses (using the gam function with bs = "cs"¹ and between 30 and 50 knots, R software, version …). This kind of statistical model allowed us to test whether the temporal structure of the acoustical/structural factors is a significant predictor of the unfolding GEMS dimensions, with musical excerpts (N = 36, four per GEMS dimension) as a random factor. The approximate significance of the smooth terms is reported in Table 3.

Table 3. Approximate significance of smooth terms for the GAMM analysis (F-values) (p-values corrected: .05/48, i.e. p < .001).

¹ bs = "cs" specifies a penalized cubic regression spline whose penalty has been modified so that it shrinks towards zero at high enough smoothing parameters.
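The exact smooth structure is not fully specified in the text, so the following mgcv sketch is only one plausible specification of such a GAMM, with time-varying smooths per dimension and the excerpt as a random effect:

```r
# One plausible GAMM specification (mgcv); `dat` as above, with `emotion`
# and `excerpt` coded as factors. The basis dimension k was tuned between
# 30 and 50 per model using gam.check, as described in the text.
library(mgcv)

g_general <- gam(judgment ~ emotion +
                   s(time_s, by = emotion, bs = "cs", k = 40) +
                   s(F1, bs = "cs", k = 40) +
                   s(F2, bs = "cs", k = 40) +
                   s(F3, bs = "cs", k = 40) +
                   s(excerpt, bs = "re"),   # random effect of excerpt
                 data = dat, method = "REML")

summary(g_general)   # approximate significance of smooth terms (F-values)
gam.check(g_general) # diagnoses whether the basis dimension k is sufficient
```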

This analysis reveals that 70.1% of the variance of the GEMS judgments can be explained by our model, composed of the unfolding of the emotional judgements, the unfolding of the three factors and their unfolding interactions. In order to check the model, we used the gam.check function: the numbers of knots were adapted according to the related p-value and k-index (Wood, 2006). Figure 4 presents an example of the GAMM result for Joyful Activation (see Appendix C in the supplemental materials for the other GEMS dimensions; Appendix D presents the fitted values of the GAMM model through the four musical excerpts by GEMS dimension).

Figure 4. Example of results of the GAMM analysis for Joyful Activation (averaged judgments, z-scores, as a function of time in seconds).

In order to test whether our general GAMM model is effective and powerful, we performed the same procedure (PCA and factorial analysis) on each GEMS dimension and tested a specific GAMM for each GEMS dimension. The results are quite satisfactory: the analysis reveals the same three factors as in the general PCA and factorial analysis, meaning that the general GAMM model is a good approximation of the specific GAMM models for each GEMS dimension. In the comparison between these models, i.e. general vs. specific GAMMs (sketched below), we note the following relevant differences (Table 4).
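As a sketch of how such a comparison can be run, the specific models can be refitted within each dimension and their explained deviance compared with that of the general model. Note that, for brevity, this sketch reuses the general factor scores, whereas the analysis reported here recomputed the PCA and factorial analysis within each dimension:

```r
# Sketch of the general-vs-specific comparison. For brevity this reuses the
# general factor scores; the reported analysis recomputed the PCA/factorial
# analysis within each GEMS dimension before refitting.
library(mgcv)

fit_specific <- function(dim_name) {
  d <- droplevels(subset(dat, emotion == dim_name))
  gam(judgment ~ s(time_s, bs = "cs", k = 40) +
        s(F1, bs = "cs", k = 40) + s(F2, bs = "cs", k = 40) +
        s(F3, bs = "cs", k = 40) + s(excerpt, bs = "re"),
      data = d, method = "REML")
}

specific_models <- lapply(levels(dat$emotion), fit_specific)
names(specific_models) <- levels(dat$emotion)

# Percentage of deviance explained, per dimension and for the general model
sapply(specific_models, function(m) summary(m)$dev.expl) * 100
summary(g_general)$dev.expl * 100
```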

Table 4. Comparison of the acoustical and structural music aspects of the three factors between the general and the specific emotion models.

Discussion

The predictor «cepstre mean» appears among the predictors of Factor 1 for Peacefulness, Sadness, Tension, Wonder and Transcendence in the specific but not in the general factorial analysis. Factor 1 is in fact quite stable between the general GAMM and the specific GAMMs (except for «cepstre mean») and always concerns the spectral aspects. In the general GAMM, Factor 1 was significant for all GEMS dimensions except Tenderness and Tension; Factor 2 (energy-dynamics aspects) was significant for all GEMS dimensions, with a very pronounced effect for Power; and Factor 3 (novelty aspects) was significant for all GEMS dimensions except Nostalgia and Tenderness.

An interesting difference between the two models is the inversion between Factor 2 and Factor 3 for Joyful Activation, Nostalgia, Peacefulness, Sadness and Transcendence: in the general GAMM, Factor 2 concerned dynamics aspects but novelty aspects in the specific GAMM, and conversely for Factor 3; the part of variance explained by these two factors is thus a function of the emotion. Moreover, for Tenderness, Factor 1 and Factor 3 were not significant in the general GAMM whereas all the factors are significant in the specific GAMM. Factor 3 was not significant in the general GAMM for Nostalgia but is significant in the specific GAMM. Regarding the Power dimension, the specific GAMM fits better than the general one, and there are only two factors in the specific GAMM, with 41.1% of explained variance. In the specific GAMM, one can observe the appearance of the variable event density in Factor 2 (dynamics aspects) for Wonder. Figure 5 presents two examples: the first one (Figure 5a) shows that the values are similar between the two types of GAMM (specific vs. general), here for Sadness (through the four musical excerpts); the second one (Figure 5b) shows that the values are different, here for Tenderness (through the four musical excerpts), and that the specific GAMM is better for this GEMS dimension.

Figure 5a. The values of the two types of GAMM models are similar for the Sadness dimension (four excerpts).

Figure 5b. The values of the two types of GAMM models are different for the Tenderness dimension (four excerpts).

Figure 5. Examples of differences between GEMS dimensions in the accuracy of prediction with the two types of GAMM (specific vs. general). Green: dynamic judgement; red: values predicted by the general GAMM; blue: values predicted by the specific GAMM.

Taken together, these analyses showed that the results issued from our first general GAMM were satisfactory and, therefore, that the three factors, as well as time, were stable predictors of the dynamic judgements, whatever the GEMS dimension. However, one can argue that the emergence of these three factors is too dependent on the number of musical excerpts (4 per dimension in the present study). In order to control for this bias, we conducted a second experiment comparing two weakly correlated GEMS dimensions, i.e. Power vs. Tenderness, using more musical excerpts.

Study 2

Participants

… undergraduate students (45 men) from the University of Geneva took part in this experiment for course credits. The average age was … (range = 18-51). This study was accepted by the local ethics committee of the University of Geneva and, before the beginning of the experiment, all participants filled out a consent form in which the experiment, the data processing, and the use of the data for publications were described.

Materials and procedure

Based on our expertise and with the help of a student from the Geneva University of Music, we selected 56 musical excerpts: 28 to be judged on the Power dimension vs. 28 to be judged on the Tenderness dimension. The details of the musical excerpts are available in Appendix E. The procedure and the task were exactly the same as in Study 1 (see the Study 1 section for complete details).

Results

In order to test whether the same three factors as in Study 1 were relevant to explain the musical excerpts of the present study, we performed a PCA on all the musical excerpts of the two GEMS dimensions. The PCA revealed, through the scree plot, that three factors explained 44.04% of the variance (first component: 26.89%; second: 8.89%; third: 8.25%). The first factor of the factorial analysis (with three factors based on the PCA, using normalized varimax rotation) consists of spectral aspects, with the following eleven variables: spectral centroid, spectral spread, spectral skewness, spectral kurtosis, spectral flatness, spectral entropy, rolloff, brightness_1000, brightness_1500, brightness_3000, and zerocross. Two variables were added to this first factor compared to Study 1: spectral spread and spectral flatness. The second factor consists of energy and dynamics aspects, with the following four variables: RMS energy, roughness, spectral flux and event density. Note that event density was added to this factor compared to Study 1.

The third and last factor consists of the novelty dynamic aspects, with the following three variables: spectral novelty, novelty waveform, and novelty chromagram (the same as in Study 1). As in Study 1, we extracted the factor scores in order to perform the GLMM and GAMM analyses on the three factors, using the same procedure as described in Study 1. The GLMM analysis revealed an Emotion effect (χ²(1) = 4.27, p < .05). All other main effects were significant: Factor 1 (χ²(1) = …, p < .001), Factor 2 (χ²(1) = …, p < .001), Factor 3 (χ²(1) = …, p < .001). All the interaction effects between the factors and the GEMS dimensions were also significant: Factor 1 × GEMS dimensions (χ²(1) = …, p < .001), Factor 2 × GEMS dimensions (χ²(1) = …, p < .001), Factor 3 × GEMS dimensions (χ²(1) = …, p < .001). In order to investigate the specific impact of each factor on each of the two GEMS dimensions, we performed a systematic contrast analysis revealing that the three factors are significantly related to Power judgments (t(1)Factor1 = 56.09; t(1)Factor2 = …; t(1)Factor3 = …; all p values < .001), while only the third factor is significantly related to Tenderness judgements (t(1)Factor1 = 0.36, n.s.; t(1)Factor2 = 1.28, n.s.; t(1)Factor3 = 25.80, p < .001). The p-values were corrected for this contrast analysis (.05/2, i.e. p < .025).
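A minimal sketch of such a per-dimension contrast analysis, fitting each factor's slope within each dimension (`dat2`, a Study 2 analogue of the earlier data frame, is an assumed object; lmerTest is used here simply to obtain t- and p-values):

```r
# Minimal sketch of the per-dimension contrasts; `dat2` (Study 2 data with
# judgment, emotion, F1-F3 and excerpt) is an assumed object.
library(lmerTest)  # lmer() with t-tests and p-values for fixed effects

for (dim_name in c("Power", "Tenderness")) {
  m <- lmer(judgment ~ F1 + F2 + F3 + (1 | excerpt),
            data = subset(dat2, emotion == dim_name))
  cat("\n==", dim_name, "==\n")
  print(summary(m)$coefficients)  # compare p-values to .05/2 = .025
}
```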

The GAMM analysis, performed with the same procedure as for Study 1 (knot values were fixed from 40 to 120 based on gam.check), reveals that all the main effects and the interaction effects were significant (see Table 5).

Table 5. Approximate significance of smooth terms for the GAMM analysis (F-values) (p-values corrected: .05/8, i.e. p < .00625).

This analysis reveals that our GAMM model explained 57% of the judgment variance. Figures 6 and 7 present the results of the GAMM analysis for the two dimensions.

Figure 6. Example of results of the GAMM analysis for Power (averaged judgments, z-scores, as a function of time in seconds).

Figure 7. Example of results of the GAMM analysis for Tenderness (averaged judgments, z-scores, as a function of time in seconds).

We also performed a specific factorial analysis for each dimension (Power and Tenderness); the three extracted factors are composed of exactly the same acoustical/structural musical features in these specific factorial analyses as in the general one. The GAMM analyses with these specific factorial scores explained 50.1% of the Power judgments and 57.5% of the Tenderness judgments, respectively (see Figures 8 and 9 for the comparison of the models on the explained variance of the emotional judgments).

Figure 8. Examples of differences for the Power dimension (with seven musical excerpts) in the accuracy of prediction with the two types of GAMM (specific vs. general). Green: dynamic judgement; red: values predicted by the general GAMM; blue: values predicted by the specific GAMM.

Figure 9. Examples of differences for the Tenderness dimension (with seven musical excerpts) in the accuracy of prediction with the two types of GAMM (specific vs. general). Green: dynamic judgement; red: values predicted by the general GAMM; blue: values predicted by the specific GAMM.

The GLMM analysis revealed a difference in the relevance of the three factors for the Power and Tenderness dimensions. The novelty dimensions seem to be of considerable importance for Tenderness, as shown by the significant result, whereas Power is well distinguished by spectral dimensions, novelty aspects and energy-dynamics dimensions. Another interesting finding of the above analyses concerns the difference between the GLMM and GAMM analyses, showing that Factor 2 and Factor 3 are largely linked to timing aspects: in the GLMM analysis, i.e. when the temporal aspect is not taken into account, these two factors are not significant. Taken together, these results demonstrate that the three factors that emerged in Study 1 are stable and relevant, as revealed by the GLMM and GAMM analyses of Study 2, whatever the number of musical excerpts per GEMS dimension. However, we observed differences in the structure of the factors between Study 1 and Study 2 (inversion

of Factors 2 and 3, addition of relevant descriptors within each factor). Despite these differences in variables, the factors concern the same aspects (i.e. spectral dimensions for Factor 1, energy-dynamics dimensions for Factor 2 and novelty dimensions for Factor 3). The GAMM model proposed in Study 1 has the most satisfactory percentage of explained variance (70.1%). Nevertheless, the percentages of explained variance in Study 2 are quite satisfactory, with 57% for the general GAMM model, and 50.1% and 57.5% for the Power and Tenderness dimensions, respectively, with the specific GAMM analyses.

Concluding Discussion

The main aim of the present two studies was to identify the key cues that allow individuals to attribute emotional characteristics to music through the GEMS dimensions. Based on the literature, the MIRtoolbox and a collaboration with music professionals, we listed 36 musical and acoustical cues expected to be relevant. Through the two studies reported above, statistical analyses revealed the emergence of three main factors as relevant predictors of the GEMS dimensions: the first concerns the spectral aspects, the second the energy and spectral dynamics, and the third the novelty dynamic aspects. More specifically, we noticed a very strong effect of Factor 2 for the Power dimension in the first study. We demonstrated that, whatever the GEMS dimension and the type of statistical model applied, these three factors were stable, with minimal differences (e.g. event density added in the Study 2 model for Power and Tenderness), and explained between 50.1% and 70.1% of the unfolding variance of the judgments. In our first experiment, the three factors explained 70% of the variance across the 36 musical excerpts. In our second experiment, the percentage of variance explained was 57%, which is less satisfactory but still high. However, even if the spectral aspects, the energy and spectral dynamics, and the dynamic novelty aspects explain high percentages of variance, part of the variance remains unexplained by these three

factors. These three factors are therefore not fully able to account for the complexity of emotional judgments of music. The unexplained percentage of variance could be due to several factors. First, the extracted acoustical/structural features might not cover the entire complexity of the perceptual aspects related to music listening. Further research is needed to better characterize the relevant acoustical/structural features able to explain the different levels involved in emotional judgment, including i) unfolding perceptual aspects (i.e. how the mind structures music in time at different scales and granularities) and ii) unfolding structuration at higher cognitive levels (e.g. the organization of the musical excerpt into different phrases or movements). Second, a large part of music that the MIRtoolbox and the current state of knowledge cannot yet explain is the field of metaphors. Metaphors were highlighted by Juslin et al. (2010) as a key factor, characterized by their fifth mechanism in the study of music-induced emotion, mental imagery. When individuals listen to music, they produce a certain number of mental images containing colours, landscapes, and movement. One can argue that the percentage of variance not explained by the three factors in our dynamic judgements could be explained, at least partly, by metaphors. Finally, our GAMM models are not able, at this stage, to handle different time scales in order to take into account the different levels of musical expectancies and their impact on emotional judgment (at different levels, e.g. perceptual or related to higher structuration, or even metaphors). One can argue that the emotion judged at a certain point in time depends not only on the immediately preceding aspects but may also depend strongly on distal perceptual aspects from previous phrases or movements (Koelsch, Fritz, Schulze, Alsop & Schlaug, 2005). However, even if the amplitude of the response is sometimes poorly represented, generating a large part of the unexplained variance, we can highlight that the proximal temporal patterns are well captured by the statistical models used in this study. The limitations of this study

might pave the way for future research in this fascinating domain of the unfolding structuration of music and its relationships not only with emotional judgments but also with feelings related to music.

References

Banse, R., & Scherer, K.R. (1996). Acoustic profiles in vocal emotion expression. Journal of Personality and Social Psychology, 70.

Bispham, J. (2010). Le modèle musical et ses caractéristiques : motivation, pulsation et hauteur. In I. Deliège, O. Vitouch, & O. Ladinig (Eds.), Musique et évolution : théorie, débats, synthèses. Editions Mardaga.

Blood, A.J., Zatorre, R.J., Bermudez, P., & Evans, A.C. (1999). Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic regions. Nature Neuroscience, 2. doi: 10.1038/7299

Brattico, E., Brattico, P., & Jacobsen, T. (2010). Les origines du plaisir esthétique de la musique : examen de la littérature existante. In I. Deliège, O. Vitouch, & O. Ladinig (Eds.), Musique et évolution : théorie, débats, synthèses. Editions Mardaga.

Brunswik, E. (1955). Representative design and probabilistic theory in functional psychology. Psychological Review, 62.

Cross, I. (2003). Music, cognition, culture and evolution. In I. Peretz & R.J. Zatorre (Eds.), The cognitive neuroscience of music. Oxford University Press.

Curtis, M.E., & Bharucha, J.J. (2010). The minor third communicates sadness in speech, mirroring its use in music. Emotion, 10.

Deliège, I., Vitouch, O., & Ladinig, O. (2010). Musique et évolution : théorie, débats, synthèses. Editions Mardaga.

Droz, R. (2001). Musique et émotions. Actualités psychologiques, 11.

Escoffier, N., Zhong, J., Schirmer, A., & Qiu, A. (2013). Emotional expressions in voice and music: same code, same effect? Human Brain Mapping, 34.

Farbood, M. (2012). A parametric, temporal model of musical tension. Music Perception, 29.

Fritz, T., Jentschke, S., Gosselin, N., Sammler, D., Peretz, I., Turner, R., Friederici, A.D., & Koelsch, S. (2009). Universal recognition of three basic emotions in music. Current Biology, 19.

Gabrielsson, A., & Lindström, E. (2001). The influence of musical structure on emotional expression. In P. Juslin & J. Sloboda (Eds.), Music and emotion: Theory and research. Oxford, England: Oxford University Press.

Glowinski, D., Riolfo, A., Shirole, K., Torres-Eliard, K., Chiorri, C., & Grandjean, D. (2015). Is he playing solo or within an ensemble? How the context, visual information, and expertise may impact upon the perception of musical expressivity. Perception, 43(8).

Grandjean, D., Baenziger, T., & Scherer, K.R. (2006). Intonation as an interface between language and affect. Progress in Brain Research, 156.

Grandjean, D., & Baenziger, T. (2009). Expression vocale des émotions. In D. Sander & K.R. Scherer (Eds.), Traité de psychologie des émotions. Paris : Dunod.

Hevner, K. (1935). The affective character of the major and minor modes in music. American Journal of Psychology, 47.

Hevner, K. (1936). Experimental studies of the elements of expression in music. American Journal of Psychology, 48.

Juslin, P. (1997). Emotional communication in music performance: a functionalist perspective and some data. Music Perception, 14.

Juslin, P. (2000). Cue utilization in communication of emotion in music performance: relating performance to perception. Journal of Experimental Psychology: Human Perception and Performance, 26.

Juslin, P., & Laukka, P. (2003). Communication of emotions in vocal expression and music performance: different channels, same code? Psychological Bulletin, 129.

Juslin, P., Liljeström, S., Västfjäll, D., & Lundqvist, L. (2010). How does music evoke emotions? Exploring the underlying mechanisms. In P. Juslin & J. Sloboda (Eds.), Handbook of music and emotion: theory, research, applications. Oxford, England: Oxford University Press.

Juslin, P., & Lindström, E. (2010). Musical expression of emotions: modeling listeners' judgments of composed and performed features. Music Analysis, 29.

Juslin, P., & Sloboda, J. (Eds.). (2001). Music and emotion: Theory and research. Oxford, England: Oxford University Press.

Juslin, P., & Sloboda, J. (2010). Handbook of music and emotion: theory, research, applications. Oxford, England: Oxford University Press.

Juslin, P., & Västfjäll, D. (2008). Emotional responses to music: the need to consider underlying mechanisms. Behavioral and Brain Sciences, 31.

Koelsch, S. (2015). Music-evoked emotions: principles, brain correlates, and implications for therapy. Annals of the New York Academy of Sciences, 1337.

Koelsch, S., Fritz, T., Schulze, K., Alsop, D., & Schlaug, G. (2005). Adults and children processing music: an fMRI study. NeuroImage, 25(4).

Kraemer, D., Macrae, C., Green, A., & Kelley, W. (2005). Musical imagery: sound of silence activates auditory cortex. Nature, 434, 158. doi: 10.1038/434158a

Lartillot, O., Toiviainen, P., & Eerola, T. (2008). A Matlab toolbox for music information retrieval. In Data Analysis, Machine Learning and Applications (Studies in Classification, Data Analysis and Knowledge Organization). Springer.

Pearce, M.T., & Wiggins, G.A. (2012). Auditory expectation: the information dynamics of music perception and cognition. Topics in Cognitive Science, 4.

Peretz, I., Gagnon, L., & Bouchard, B. (1998). Music and emotion: perceptual determinants, immediacy, and isolation after brain damage. Cognition, 68.

Scherer, K.R., Johnstone, T., & Klasmeyer, G. (2003). Vocal expression of emotion. In R.J. Davidson, H.H. Goldsmith, & K.R. Scherer (Eds.), Handbook of affective sciences. New York, NY: Oxford University Press.

Schirmer, A., Fox, P.M., & Grandjean, D. (2012). On the spatial organization of sound processing in the human temporal lobe: a meta-analysis. NeuroImage, 63.

Schubert, E. (2004). Modeling perceived emotion with continuous musical features. Music Perception, 21.

Thibault de Beauregard, K., Ott, T., Labbé, C., & Grandjean, D. (submitted). The dynamics of emotion expressed by music.

Thompson, W.F., & Robitaille, T. (1992). Can composers express emotion through music? Empirical Studies of the Arts, 10.

Trainor, L.J., Tsang, C.D., & Cheung, V.H.M. (2002). Preference for sensory consonance in 2- and 4-month-old infants. Music Perception, 20.

Vieillard, S., Peretz, I., Gosselin, N., Khalfa, S., Gagnon, L., & Bouchard, B. (2008). Happy, sad, scary and peaceful musical excerpts for research on emotions. Cognition & Emotion, 22.

Wood, S.N. (2006). Generalized Additive Models: An Introduction with R. Chapman & Hall/CRC.

Zatorre, R.J., Belin, P., & Penhune, V.B. (2002). Structure and function of auditory cortex: music and speech. Trends in Cognitive Sciences, 6.

Zentner, M., Grandjean, D., & Scherer, K.R. (2008). Emotions evoked by the sound of music: characterization, classification, and measurement. Emotion, 8(4), 494-521.

Supplemental Materials

Appendix A

Details of the musical excerpts for Study 1

ME 1_Joyful Activation: Bach, Jean-Sebastian. Brandenburg Concerto No. 2 in F major, BWV 1047. First movement, Bars 1-59 [Recorded by the Berlin Philharmonic Orchestra, Herbert von Karajan] [CD]. Hamburg, Germany: Deutsche Grammophon. (1992)
ME 2_Peacefulness: Bach, Jean-Sebastian. Air, Orchestral Suite No. 3 in D major, BWV 1068. Full piece [Recorded by the Berlin Philharmonic Orchestra, Herbert von Karajan] [CD]. Hamburg, Germany: Deutsche Grammophon. (1992)
ME 3_Joyful Activation: Bach, Carl Philipp Emanuel. Cello Concerto in A major, WQ 172. Third movement, Bars … [Recorded by Café Zimmermann, Pablo Valetti] [CD]. Alpha. (2006)
ME 4_Tension: Bazzini, Antonio. The Dance of the Goblins, op. 25. Bars … [Recorded by Maxim Vengerov & Itamar Golan] [CD]. Les Incontournables du Classique. Teldec Classics. (2002)
ME 5_Power: Beethoven, Ludwig van. Symphony No. 9 in D minor, op. 125. Scherzo: Molto vivace, Presto, Bars … [Recorded by the Berlin Philharmonic Orchestra, Herbert von Karajan] [CD]. Hamburg, Germany: Deutsche Grammophon. (2001)
ME 6_Sadness: Beethoven, Ludwig van. Symphony No. 7 in A major, op. 92. Allegretto, Bars 1-82 [Recorded by the Berlin Philharmonic Orchestra, Herbert von Karajan] [CD]. Hamburg, Germany: Deutsche Grammophon. (2001)
ME 7_Wonder: Beethoven, Ludwig van. Violin Concerto in D major, op. 61. Rondo. Allegro, Bars … [Recorded by the Berlin Philharmonic Orchestra, Herbert von Karajan, Anne-Sophie Mutter] [CD]. Hamburg, Germany: Deutsche Grammophon. (1999)
ME 8_Nostalgia: Brahms, Johannes. Symphony No. 3 in F major, op. 90. Poco Allegretto, Bars 1-78 [Recorded by the Vienna Philharmonic Orchestra, Karl Böhm] [CD]. Deutsche Grammophon. (2002)
ME 9_Wonder: Bruch, Max. Violin Concerto No. 1 in G minor, op. 26. Finale: Allegro energico, Bars … [Recorded by the Gewandhausorchester Leipzig, Kurt Masur, Maxim Vengerov] [CD]. Teldec Classics. (1994)
ME 10_Power: Bruckner, Anton. Symphony No. 9 in D minor. First movement, Bars … [Recorded by the Berlin Philharmonic Orchestra, Eugen Jochum] [CD]. Deutsche Grammophon. (2002)
ME 11_Tenderness: Carulli, Ferdinando. Concerto for flute, guitar and orchestra. Second movement, Bars 1-64 [Recorded by the Franz Liszt Chamber Orchestra, Jànos Rolla] [CD]. CBS Records Masterworks. (1988)
ME 12_Power: Dvorak, Antonin. New World Symphony, Symphony No. 9 in E minor, op. 95, B. 178. Allegro con fuoco, Bars 1-92 [Recorded by the Berlin Philharmonic Orchestra, Herbert von Karajan] [CD]. Deutsche Grammophon. (1999)
ME 13_Tenderness: Dvorak, Antonin. Symphony No. 8 in G major, op. 88, B. … Allegretto grazioso, Molto vivace, Bars … [Recorded by the Vienna Philharmonic Orchestra, Herbert von Karajan] [CD]. Deutsche Grammophon. (1990)
ME 14_Tenderness: Elgar, Edward. Salut d'amour, op. 12. Bars 1-68 [Recorded by the Saint Louis Symphony Orchestra, Pinchas Zukerman, Leonard Slatkin] [CD]. Pinchas Zukerman. (1993)
ME 15_Peacefulness: Fauré, Gabriel. Pavane in F-sharp minor, op. 50. Bars 1-47 [Recorded by the Boston Symphony Orchestra, Carlo Maria Giulini] [CD]. Deutsche Grammophon. (2009)
ME 16_Joyful Activation: Hummel, Johann Nepomuk. Trumpet Concerto in E-flat major. Third movement, Bars … [Recorded by the Berlin Philharmonic Orchestra, Herbert von Karajan, Maurice André] [CD]. EMI Classics. (1999)
ME 17_Transcendence: Mahler, Gustav. Symphony No. 1 in D major. First movement, Bars 1-34 [Recorded by the Concertgebouw Orchestra of Amsterdam, Leonard Bernstein] [CD]. Deutsche Grammophon. (1999)
ME 18_Power: Mendelssohn, Felix. String Symphony No. 12 in G minor. Allegro molto, Bars … [Recorded by the Gewandhausorchester Leipzig, Kurt Masur] [CD]. Brilliant Classics.
ME 19_Tension: Mendelssohn, Felix. String Octet in E-flat major, op. 20. Scherzo, Bars … [Recorded by the Wiener Oktett] [CD]. Decca. (1988)
ME 20_Peacefulness: Molino, Francesco. Guitar Concerto in E minor, op. 56. Maestoso molto. [Recorded by the Academy of St Martin in the Fields, Iona Brown, Pepe Romero] [CD]. Philips Digital Classics. (1990)
ME 21_Joyful Activation: Mozart, Wolfgang Amadeus. Divertimento in D major. Allegro, Bars 1-51 [Recorded by the Amsterdam Baroque Orchestra, Ton Koopman] [CD]. Erato. (1990)
ME 22_Sadness: Mozart, Wolfgang Amadeus. Piano Concerto No. 23 in A major, KV 488. Adagio, Bars 1-42 [Recorded by Derek Han] [CD]. Brilliant Classics. (2005)
ME 23_Sadness: Porpora, Nicola. Alto Giove, Polifemo, instrumental version (oboe and orchestra), Bars 1-23 [Recorded by Derek-Lee Ragin] [CD]. Auvidis Travelling. (1994)
ME 24_Tension: Prokofiev, Sergei. Violin Concerto in D major, op. 19. Scherzo: Vivacissimo, Bars … [Recorded by the London Symphony Orchestra, Maxim Vengerov] [CD]. Teldec Classics. (1994)
ME 25_Transcendence: Ravel, Maurice. Boléro in C major, ballet music for orchestra, Bars … [Recorded by the Berlin Philharmonic Orchestra, Herbert von Karajan] [CD]. Deutsche Grammophon. (1999)
ME 26_Nostalgia: Sarasate, Pablo. Zigeunerweisen (Gypsy Airs), op. 20 No. 1, Bars … [Recorded by the Vienna Philharmonic Orchestra, James Levine, Anne-Sophie Mutter] [CD]. Deutsche Grammophon. (1993)
ME 27_Transcendence: Schonberg, Arnold. Verklärte Nacht, op. 4. Sehr ruhig, Bars … [Recorded by the Berlin Philharmonic Orchestra, Herbert von Karajan] [CD]. Deutsche Grammophon. (1999)
ME 28_Nostalgia: Schubert, Franz. Piano Trio No. 2 in E-flat major, op. 100, D 929. Andante con moto, Bars 1-34 [Recorded by Renaud Capuçon, Gautier Capuçon, Frank Braley] [CD]. Parlophone. (2007)
ME 29_Tension: Schubert, Franz. Impromptu No. 4 in F minor, D 935. Allegro scherzando, Bars … [Recorded by Wilhelm Kempff] [CD]. Deutsche Grammophon. (2009)
ME 30_Tenderness: Tchaïkovski, Piotr Ilitch. Symphony No. 6 in B minor, op. 74, Pathétique. Allegro con grazia, Bars 1-56 [Recorded by the Berlin Philharmonic Orchestra, Herbert von Karajan] [CD]. Deutsche Grammophon. (2008)
ME 31_Transcendence: Tchaïkovski, Piotr Ilitch. Swan Lake, ballet, op. 20. Act I, scene two, Bars 1-51 [Recorded by the Boston Symphony Orchestra, Seiji Ozawa] [CD]. Deutsche Grammophon. (1997)
ME 32_Nostalgia: Tiersen, Yann. Comptine d'un autre été : l'après-midi. Film: Amélie Poulain. Full piece [Recorded by Yann Tiersen] [CD]. EMI Virgin. (2001)
ME 33_Wonder: Vivaldi, Antonio. Double Trumpet Concerto in C major, RV 537. Allegro, Bars 1-68 [Recorded by the Munich Chamber Orchestra, Maurice André] [CD]. Deutsche Grammophon. (1997)
ME 34_Peacefulness: Vivaldi, Antonio. Violin Concerto in D major, RV 190. Andante, Bars … [Recorded by the Venice Baroque Orchestra, Giuliano Carmignola] [CD]. Sony Classical. (2001)
ME 35_Sadness: Vivaldi, Antonio. Violin Sonata No. 8 in D minor. Preludio [Recorded by the Akademie für Alte Musik Berlin, Clemens-Maria Nuszbaumer] [CD]. Harmonia Mundi. (2011)
ME 36_Wonder: Vivaldi, Antonio. Violin Concerto in C major, RV 190. Allegro, Bars … [Recorded by the Venice Baroque Orchestra, Giuliano Carmignola] [CD]. Sony Classical. (2001)

Appendix B

Averaged judgments and confidence intervals (95%) for each emotion as a function of each factor.

Figure A1. Effect of Factor 1 on each GEMS dimension. JA: Joyful Activation; N: Nostalgia; Pea: Peacefulness; P: Power; S: Sadness; Tend: Tenderness; T: Tension; Tr: Transcendence; W: Wonder.

Figure A2. Effect of Factor 2 on each GEMS dimension (same abbreviations as in Figure A1).

Figure A3. Effect of Factor 3 on each GEMS dimension (same abbreviations as in Figure A1).

Appendix C

GAMM results for the other GEMS dimensions: Nostalgia, Peacefulness, Power, Sadness, Tenderness, Tension, Transcendence, and Wonder.

Appendix D

GAMM results through the four musical excerpts by GEMS dimension. The black dots represent the predictions of the model and the green line represents the average normalized judgments. Panels: Joyful Activation, Peacefulness, Tension, Power, Sadness, Wonder, Nostalgia, Tenderness, and Transcendence.

Appendix E

Details of the 56 musical excerpts for Study 2

Power

ME 1: Bartok, Bela. Concerto for Orchestra, SZ 116, Finale, Bars … [Recorded by the Chicago Symphony Orchestra, Fritz Reiner] [CD]. RCA Living Stereo. (2004)
ME 2: Bernstein, Leonard. Candide, Ouverture, Bars … [Recorded by the London Symphony Orchestra, Leonard Bernstein] [CD]. Deutsche Grammophon. (2004)
ME 3: Moussorgski, Modeste. The Hut on Fowl's Legs, Baba-Jaga, full movement [Recorded by the Czech Philharmonic Orchestra, Karel Ancerl] [CD]. Supraphon. (2002)
ME 4: Bruch, Max. Concerto for violin & orchestra No. 1 in G minor, op. 26: Prelude, Allegro moderato, Bars … [Recorded by the Gewandhausorchester Leipzig, Kurt Masur, Maxim Vengerov] [CD]. Teldec Classics. (1994)
ME 5: Franck, César. Symphony in D minor, Allegro non troppo, 5 bars after figure L to 3 bars before figure Q [Recorded by the Paris Symphony Orchestra, Daniel Barenboïm] [CD]. Deutsche Grammophon. (2005)
ME 6: Gershwin, George. Concerto in F major for piano and orchestra, Allegro agitato, 5 bars before figure 5 to end of the movement [Recorded by the San Francisco Symphony Orchestra, Michael Tilson Thomas] [CD]. Sony Classical. (2001)
ME 7: Liszt, Franz. Piano Concerto No. 1 in E-flat major, Allegro maestoso, Bars 1-69 [Recorded by Arthur Rubinstein] [CD]. RCA Red Seal. (2013)
ME 8: Mahler, Gustav. Symphony No. 5 in C-sharp minor, Funeral March. Bars … [Recorded by the Berlin Philharmonic Orchestra, Claudio Abbado] [CD]. Deutsche Grammophon. (1999)
ME 9: Mahler, Gustav. Symphony No. 1 in D major, Titan, Stürmisch bewegt, 8 bars before figure 22 to 3 bars after figure 34 [Recorded by the Concertgebouw Orchestra of Amsterdam, Leonard Bernstein] [CD]. Deutsche Grammophon. (1999)
ME 10: Mendelssohn, Felix. Violin Concerto in E minor, op. 64: Allegro molto appassionato, Bars … [Recorded by the Berlin Philharmonic Orchestra, Yehudi Menuhin] [CD]. EMI Classics.
ME 11: Mozart, Wolfgang Amadeus. Piano Concerto No. 23, KV 488, Allegro assai, Bars … [Recorded by Derek Han] [CD]. Brilliant Classics. (2005)
ME 12: Moussorgski, Modeste. Great Gate of Kiev, Bars … [Recorded by the Czech Philharmonic Orchestra, Karel Ancerl] [CD]. Supraphon. (2002)
ME 13: Rachmaninov, Sergueï. Rhapsody on a Theme of Paganini, op. 43, variation XII, Bars … [Recorded by the Royal Philharmonic Orchestra, Rafael Orozco] [CD]. Philips Classics. (1999)
ME 14: Rachmaninov, Sergueï. Piano Concerto No. 3 in D minor, op. 30, Finale: Alla breve, figure 67 to end of the movement [Recorded by the New York Philharmonic Orchestra, Vladimir Horowitz] [CD]. High Performance. (2000)
ME 15: Rachmaninov, Sergueï. Prelude No. 5 in G minor, op. 23/5, full piece [Recorded by Nicolaï Lugansky] [CD]. Erato. (2001)
ME 16: Rachmaninov, Sergueï. Piano Sonata No. 2 in B-flat minor, op. 36, Allegro molto, Bars … [Recorded by Hélène Grimaud] [CD]. Deutsche Grammophon. (2009)
ME 17: Rachmaninov, Sergueï. Symphony No. 2 in E minor, op. 27: Allegro molto, Bars: beginning of the movement to 4 bars before figure 65 [Recorded by the Concertgebouw Orchestra, Vladimir Ashkenazy] [CD]. Decca. (2011)
ME 18: Rachmaninov, Sergueï. Symphony No. 2 in E minor, op. 27: Allegro vivace, Bars: beginning of the movement to 13 bars after figure 32 [Recorded by the Concertgebouw Orchestra, Vladimir Ashkenazy] [CD]. Decca. (2011)
ME 19: Rachmaninov, Sergueï. Rhapsody on a Theme of Paganini, op. 43, whole variation XXII [Recorded by the Royal Philharmonic Orchestra, Rafael Orozco] [CD]. Philips Classics. (1999)
ME 20: Saint-Saëns, Camille. Symphony No. 3 in C minor, op. 78, Organ: 2B. Maestoso, Allegro molto, 5 bars after figure S to 6 bars before figure BB [Recorded by the Chicago Symphony Orchestra, Daniel Barenboïm] [CD]. Deutsche Grammophon. (2005)
ME 21: Saint-Saëns, Camille. Symphony No. 3 in C minor, op. 78, Organ: 2B. Maestoso, Allegro molto, 10 bars after figure BB to end of the movement [Recorded by the Chicago Symphony Orchestra, Daniel Barenboïm] [CD]. Deutsche Grammophon. (2005)
ME 22: Schumann, Robert. Piano Concerto, op. 54, Bars 377 to end of the movement [Recorded by the London Symphony Orchestra, Sir Colin Davis] [CD]. RCA Red Seal. (2005)
ME 23: Smetana, Bedrich. Má vlast: Vltava (Die Moldau), Bars … [Recorded by the Berlin Philharmonic Orchestra, Herbert von Karajan] [CD]. Deutsche Grammophon. (1999)
ME 24: Strauss, Richard. Don Juan, op. 20, 11 bars after figure N to 17 bars before figure W [Recorded by the Tonhalle Orchestra Zurich, David Zinman] [CD]. Phantom Sound & Vision.
ME 25: Strauss, Richard. Don Juan, op. 20, beginning of the movement to 1 bar before figure F [Recorded by the Tonhalle Orchestra Zurich, David Zinman] [CD]. Phantom Sound & Vision.
ME 26: Tchaikovsky, Piotr Ilitch. Violin Concerto in D major, op. 35, Allegro moderato, Bars … [Recorded by the Boston Symphony Orchestra, Seiji Ozawa, Viktoria Mullova] [CD]. Philips Classics. (2001)
ME 27: Vivaldi, Antonio. Violin concerto, for violin, strings & continuo in C major, RV 177, third movement, full movement [Recorded by the Venice Baroque Orchestra, Giuliano Carmignola] [CD]. Sony Classical. (2001)
ME 28: Vivaldi, Antonio. Violin Concerto in G, RV …, Allegro, full movement [Recorded by the Venice Baroque Orchestra, Giuliano Carmignola] [CD]. Sony Classical. (2001)

Tenderness

ME 29: Bach, Jean-Sebastian. Concerto for 2 violins, strings and b.c. in D minor, BWV 1043, Largo, Bars 1-23 [Recorded by the Netherlands Bach Ensemble] [CD]. Brilliant Classics.
ME 30: Bach, Jean-Sebastian. Keyboard Concerto in F minor, BWV 1056: II. Largo, full movement [Recorded by Les Violons du Roy, Alexandre Tharaud] [CD]. Harmonia Mundi. (2005)
ME 31: Bach, Jean-Sebastian. Keyboard Concerto in G minor, BWV 1058: II. Andante, Bars 1-25 [Recorded by Les Violons du Roy, Alexandre Tharaud] [CD]. Harmonia Mundi. (2005)
ME 32: Beethoven, Ludwig van. Sonata for violin & piano No. 5 in F major, The Spring, op. 24: Allegro, Bars 1-38 (second repeat) [Recorded by Renaud Capuçon & Frank Braley] [CD]. Erato. (2011)
ME 33: Chopin, Frédéric. Waltz for piano No. 9 in A-flat major (Farewell), op. 69/1 (posth.), B. 95, full movement [Recorded by Georges Cziffra] [CD]. EMI Classics. (2002)
ME 34: Chopin, Frédéric. Prelude No. 15 in D-flat, op. 28/15, Raindrop, Bars 1-75 [Recorded by Martha Argerich] [CD]. Deutsche Grammophon. (1999)
ME 35: Couperin, François. Le dodo ou l'amour au berceau (15e ordre), full movement [Recorded by Alexandre Tharaud] [CD]. Harmonia Mundi. (2007)
ME 36: Couperin, François. Le Carillon de Cythère (14e ordre), full movement [Recorded by Alexandre Tharaud] [CD]. Harmonia Mundi. (2007)
ME 37: Couperin, François. Les Juméles (12e ordre), full movement [Recorded by Alexandre Tharaud] [CD]. Harmonia Mundi. (2007)
ME 38: Haydn, Joseph. Symphony No. 38 in C major: II. Andante molto, full movement [Recorded by the Cologne Chamber Orchestra] [CD]. Naxos. (2005)
ME 39: Kreisler, Fritz. Liebesleid, full movement [Recorded by Anne-Sophie Mutter] [CD]. Deutsche Grammophon. (2003)
ME 40: Mendelssohn, Felix. Songs without words 1, op. 62, full movement [Recorded by Axel Strauss & Cord Garben] [CD]. Naxos. (2007)
ME 41: Mendelssohn, Felix. Songs without words 10, op. 67.1, full movement [Recorded by Axel Strauss & Cord Garben] [CD]. Naxos. (2007)
ME 42: Mozart, Wolfgang Amadeus. Piano Sonata No. 12 in F, K 332: 2. Adagio, Bars 1-33 [Recorded by Andreas Staier] [CD]. Harmonia Mundi. (2011)
ME 43: Mozart, Wolfgang Amadeus. Piano Concerto No. 11, KV 413: 2. Larghetto, Bars 1-41 [Recorded by Derek Han] [CD]. Brilliant Classics. (2005)
ME 44: Mozart, Wolfgang Amadeus. Piano Concerto No. 15, KV 450: Allegro, Bars … [Recorded by Derek Han] [CD]. Brilliant Classics. (2005)
ME 45: Pleyel, Ignace Joseph. Cello Concerto in C, Ben. 106: 2. Adagio [Recorded by the Akademie für Alte Musik Berlin, Ivan Monighetti] [CD]. Harmonia Mundi. (2011)
ME 46: Rachmaninov, Sergueï. Rhapsody on a Theme of Paganini, op. 43, 2 bars before variation XVIII to 7 bars before variation XIX [Recorded by the Royal Philharmonic Orchestra, Rafael Orozco] [CD]. Philips Classics. (1999)
ME 47: Schobert, Johann. Sonata for fortepiano, violin & cello in F, op. 16/4: 1. Andante [Recorded by Chiara Banchini, Véronique Méjean, Philipp Bosbach & Luciano Sgrizzi] [CD]. Harmonia Mundi. (2011)
ME 48: Schubert, Franz. Impromptus, op. 90 (No. 3 in G-flat major), Bars 1-68 [Recorded by Alfred Brendel] [CD]. Philips Classics. (2006)
ME 49: Tchaikovsky, Piotr Ilitch. Souvenir d'un lieu cher, op. 42, Melody, full movement [Recorded by Maxim Vengerov & Itamar Golan] [CD]. Les Incontournables du Classique. Teldec Classics. (2002)
ME 50: Stamitz, Carl. Oboe Quartet in D, op. 8: 2. Andante amoroso, Bars 1-43 [Recorded by Alessandro Baccini, Nuovo Quartetto Italiano & Luca Stevanato] [CD]. Naxos. (2006)
ME 51: Tchaikovsky, Piotr Ilitch. Valse sentimentale in F minor, op. 51 No. 6, full movement [Recorded by Renaud Capuçon & Jérôme Ducros] [CD]. Virgin Classics. (2006)
ME 52: Tchaikovsky, Piotr Ilitch. Violin Concerto in D major, op. 35, 2nd movement, Bars 1-58 [Recorded by the Boston Symphony Orchestra, Seiji Ozawa, Viktoria Mullova] [CD]. Philips Classics. (2001)
ME 53: Telemann, Georg Philipp. Sonata Prima No. 4 for flute, violin, cello & basso continuo in A: 1. Soave, full movement [Recorded by the Freiburger Barock Consort] [CD]. Harmonia Mundi. (2011)
ME 54: Vivaldi, Antonio. Violin Concerto in C, RV …, Largo, full movement [Recorded by the Venice Baroque Orchestra, Giuliano Carmignola] [CD]. Sony Classical. (2001)
ME 55: Vivaldi, Antonio. Concerto for violin, strings and continuo in D major, RV 222: 2nd movement, full movement [Recorded by the Venice Baroque Orchestra, Giuliano Carmignola] [CD]. Archiv Produktion. (2001)
ME 56: Bach, Wilhelm Friedemann. Sinfonie F-Dur, Falck 67: 4. Menuetto I & II, full movement [Recorded by the Akademie für Alte Musik Berlin] [CD]. Harmonia Mundi. (2011)

3.3. Etude 3

Do you hear what I feel? A study of the dynamic unfolding of perceived and felt emotions in music listening

Kim Thibault de Beauregard, Carolina Labbé, and Didier Grandjean

Faculty of Psychology and Educational Sciences, and Swiss Center for Affective Sciences, Campus Biotech, University of Geneva, Geneva, Switzerland.

Abstract

In the following study we tested the effects of expressive style in musical performances on felt and perceived emotion ratings, using both dynamic ratings during music listening and overall static ratings after listening, for nine pieces for solo violin. To this end we conducted two experiments where participants either rated the intensity of their own experiences during music listening (feeling, n = 119) or the intensity of the emotion being expressed by the music (perception, n = 89). Our results show that the expressive style in which the pieces were played had significant but different effects on ratings of perceived and felt emotion. In the perception experiment, listeners rated the intensity of expressed emotions more strongly during emphatic (emotionally over-expressive) than during natural/concert-like and deadpan (emotionally unexpressive) performances, while in the feeling experiment listeners felt sadder during natural/concert-like performances than during emphatic and deadpan performances of the same piece. We also found significant differences in the intensity of emotion ratings between experiments (feeling < perception) and rating types (static < dynamic), as well as an interaction between experiment and rating type, such that the difference between felt and perceived emotion ratings is less strong when using dynamic ratings.

Keywords: music, emotion, perception, feeling, performance

Theoretical Framework

Undoubtedly, one of the main reasons we invest so much time and energy in music is the affective component of the music-listening experience, though determining why this is the case remains somewhat elusive despite years of research on the topic. In a series of gratification studies, Lonsdale and North (2011) found that among the top reasons for listening to music, positive mood management and negative mood management were in the first and third position respectively, which is in line with research concerning the pleasant aspect of music listening and its ability to activate the reward system (Blood & Zatorre, 2001; Blood, Zatorre, Bermudez, & Evans, 1999; Salimpoor, Benovoy, Larcher, Dagher, & Zatorre, 2011; Salimpoor, Benovoy, Longo, Cooperstock, & Zatorre, 2009). Nevertheless, the relationship between listeners' subjective experiences of emotion and the emotions expressed in music as perceived by the listener is not so clear. Indeed, though the question of whether music can induce emotions or merely express them is marginally less controversial now than it once was (cf. the emotivist vs. cognitivist perspective; Juslin & Västfjäll, 2008; Krumhansl, 1997; Scherer, 2004), the difficulty remains in identifying the links between felt and perceived emotions if and when they happen (for a recent review see Schubert, 2013). Are they the same? Are they different? How are they induced? According to Gabrielsson (2002), the most common relationship between emotions perceived and felt, which he carefully distinguishes, is a positive one, with listeners tending to feel the same thing they hear expressed in the music most of the time. Yet while Juslin (2013) advocates the use of basic emotions such as happiness, sadness, anger, fear, and love (or tenderness) for the study of expression and perception of emotion in music, for the study of induced emotions he proposes a wider range of possible dimensions (for a detailed list see Juslin, Liljeström, Västfjäll, & Lundqvist, 2010). Not only that, but one can presume that while there might be great agreement amongst listeners concerning what a given piece

expresses, less agreement can occur concerning what that same piece induces, leading to Gabrielsson's (2002) negative relationship. This negative relationship can lead to reports of mixed emotions of very different categories, such as sadness and joy in the case of basic emotions, or nostalgia and peacefulness in the case of more similar and music-specific dimensions (see Trost, Ethofer, Zentner, & Vuilleumier, 2011). Finally, it is also possible to find no systematic relationship, such as when no emotion is aroused while an emotion is still perceived, or no relationship at all, which goes back to the issue that there is not necessarily a correspondence between the kinds of emotions one can perceive in music and the ones it can arouse (Gabrielsson, 2002). The recognition of emotions conveyed through music relies on more objective processes than the feelings induced by music: it seems easier to agree on the emotions expressed by music than on felt emotions (Campbell, 1942, cited in Schubert, 2004). Indeed, there is high reliability among people's judgments concerning the emotions expressed by music (Fabian & Schubert, 2003; Fritz et al., 2009; Gabrielsson & Juslin, 2003), whereas felt emotions are more related to subtle subjective and personal processes. Moreover, the perception of emotions expressed by music is a quick process (Bigand, Vieillard, Madurell, Marozeau, & Dacquet, 2005), whereas it seems to take more time for an emotional feeling to emerge during music listening, as evidenced by mood-induction models (for a discussion see Gabrielsson, 2001; Scherer & Zentner, 2001). Thus far there seems to be only one study, by Kallinen and Ravaja (2006), that has systematically looked at the relationship between perceived and felt basic emotions in the same listeners. They found that while these tended to be qualitatively similar, felt emotions were nevertheless rated much more intensely in terms of arousal than perceived emotions for positive valence. This is however not necessarily surprising because, unlike everyday objects of emotion, music tends to induce more positively valenced experiences regardless of the

emotional tone conveyed. In a more recent study (Kawakami, Furukawa, Katahira, & Okanoya, 2013), the pleasant aspect of sad music was explored, which challenges the idea that there is always a one-to-one correspondence between what one hears and what one feels, lending support to the idea that these two phenomena are qualitatively different components of the affective experience (Gabrielsson & Lindström, 2010). Indeed, Evans and Schubert (2006) found that when mapping listeners' ratings of felt emotion and perceived emotion onto a two-dimensional valence and arousal space, their judgments did not match 30% of the time. Nevertheless, however expedient this approach might be, we believe it is important to go beyond two-dimensional models when studying aesthetic experiences. Therefore, in order to fine-tune our investigation of the relationship between felt and expressed emotion, we decided to use nine musical pieces intended to represent the nine dimensions of the Geneva Emotional Music Scale (GEMS; Zentner, Grandjean, & Scherer, 2008), which was specifically developed for the study of musical emotions, a subcategory of aesthetic emotions conceptualized as being distinct from other everyday-life affective states (Scherer, 2004). This scale was used in two separate studies: in one we asked participants to continuously rate the amount of perceived wonder, transcendence, tenderness, nostalgia, peacefulness, power, joyful activation, tension, sadness, and overall emotional expression of the music (perception experiment); in the other, participants rated the intensity of their own emotional experience along these same nine dimensions, as well as how moved they felt, or how much affect, in general (feeling experiment). In both experiments the pieces were repeated three times, each presentation conveying a different level of intensity in terms of the expression in the performance. Beauty in music is largely based on artistic deviations from an exact and rigid interpretation of the score. In music performances, listeners discriminate the expressive intentions of the musician by using auditory cues (Gabrielsson & Juslin, 1996; Juslin, 2000,

2001). There is important intra- and inter-individual variability in music performance. Musical performance is not only a question of technical motor skills; it also requires the ability to generate different expressive performances of the same piece of music according to the nature of the musical structure and of the emotional communication. Sloboda (2000) proposes that music performances consist of two major elements: i) the technical component, which is connected to the fluid production mechanism of coordinated actions; and ii) the expressive component, which derives from the intentional variations in performance parameters chosen by the performer to influence the cognitive and emotional responses of listeners. Technical and expressive skills are separate components, although they interact with each other and depend in part on one another. Moreover, Sloboda (2000) stresses that technical skill is, in theory at least, not connected to the musical or artistic content of the music. Therefore, it is quite possible to interpret a piece of music with absolute technical mastery but no expressive skill whatsoever. Expressive skills require knowledge of the underlying structure and of the stylistic constraints of the musical style. Indeed, because an effective expressive performance often requires very fine and subtle variations in the parameters of interpretation, expressive intentions often cannot be effectively communicated without a high level of technical skill. Finally, Sloboda (2000) insists on the necessity of conceiving of expressive variation as a means to highlight the structural features of the music and as a way to convey information about the character of the music, especially its emotional content. Juslin (2003) defines performance expression as a multi-dimensional phenomenon and proposes five components in his GERMS model: (a) Generative rules that serve to clarify the musical structure; (b) Emotional expression that serves to convey intended emotions to listeners; (c) Random variations that reflect human limitations with regard to internal time-keeper variance and motor delays; (d) Motion principles that prescribe that some aspects of the performance should be shaped in accordance with patterns of biological

motion; and finally (e) Stylistic unexpectedness that involves local deviations from performance conventions. In reviewing potential routes of emotion elicitation through music, Scherer and Zentner (2001) already mention the importance of (emotional) expression in theatrical and musical performances. They argue that a kind of empathy with the emotion presumed to be felt by the performer could take place via a mechanism of emotional contagion, and speculate that the musical structure alone, as interpreted by the listener, might actually achieve this as well. This is in line with Juslin's (2013) assertion that the power of music to express, and our ability to attribute expressive meaning, especially emotional meaning, to music, lies in the iconic similarity between music and the human voice and movement, a link already discussed by Rousseau in his Dictionnaire de musique. It follows then that the more expressive the performance, the more intense the judgments of perceived emotion should be and, presumably, judgments of felt emotion as well, though this remains to be seen. Brain imaging studies comparing expressive and so-called mechanical performances have found expressive piano performances to increase activity in areas related to emotion processing, and even more so in musicians (Chapin, Jantzen, Kelso, Steinberg, & Large, 2010). In another study, investigating the role of expectancy with expressive and non-expressive versions of the same piano sonatas, the electrodermal response to certain chords was found to be stronger in the expressive compared to the non-expressive version (Koelsch, Kilches, Steinbeis, & Schelinski, 2008), and there is evidence elsewhere for a greater number of chills and skin conductance responses to emotionally powerful music compared to simply arousing or relaxing music (Rickard, 2004). Even for violin phrases played in a technical, expressive, or emotional (via mood induction) manner, listeners' ratings of preference and perceived expertise reveal a significant bias toward expressive performances (Van Zijl & Luck, 2013). Discontinuity in perception, i.e. the fact that musical pieces are played by humans and not

by robots, is useful for making sense of the music; playing the violin is not a continuous action. Another important aspect of musical performance is the intentionality of the musician as inferred by the listener during musical exposure. If individuals are moved by an expressive or emotional performance, one can argue that the authenticity of the performer, as perceived by the listener, plays a relevant role both in the feeling and in the perception of emotion (Bänziger, Mortillaro & Scherer, 2012). Indeed, it is likely that if a performer plays in an exaggerated manner, individuals will feel duped and will not be moved by the performance. Conversely, the perceived intensity of the emotion expressed through music should increase even if the performer plays in an over-emotional manner. With the following two studies, experiment 1 (perception) and experiment 2 (feeling), we sought to test the following hypotheses: i) we predicted higher agreement (as measured with Cronbach's alpha) and higher intensities for perceived emotions compared to felt emotions; ii) we predicted an interaction between felt or perceived emotions and the level of expressivity. More specifically, we predicted higher values of felt emotions for music performed in a natural, i.e. concert-like, expressive style compared to emphatic or deadpan performances, while for perceived emotions we predicted higher values for emphatic compared to concert-like and deadpan performances; iii) finally, we also predicted an interaction between the type of rating (static vs. dynamic), experiment (perception vs. feeling), the level of expressivity (deadpan, natural/concert-like, emphatic), and emotion (GEMS dimensions plus expression/affect).

Method

Participants

Classical music lovers were recruited separately for both experiments via advertisements posted at the University of Geneva and announcements made at introductory classes in affective neuropsychology and affective neuroscience given in the Faculty of Psychology. Due to the underrepresentation of male students, a total of 119 female participants were included in the feeling study (M = 23.5 years, SD = 5.2 years), and 89 participants (27 males and 62 females) in the perception study (M = years, SD = 10.5 years). They were either paid 15 Swiss francs or given course credit upon successfully completing the task. The local ethical committee of the University of Geneva approved this study.

Materials

Musical stimuli

The musical stimuli were created with the help of the famous French violinist Renaud Capuçon. With him, we selected nine musical excerpts, one for each of the nine GEMS dimensions (Table 1).

Table 1
Musical pieces used with the targeted emotional dimension and duration

Composer | Piece | Target | D. | E. | N.
Bach | Partita no. 2 in D minor, BWV 1004, I. Allemanda, Full movement | Nostalgia | 2'08'' | 2'01'' | 2'22''
Beethoven | Violin concerto in D major, Op. 61, II. Larghetto, Bars | Peacefulness | 3'46'' | 3'55'' | 4'04''
*Brahms | Violin sonata no. 1 in G major, Op. 78, I. Vivace ma non troppo | Peacefulness | - | - | 1'06''
Franck | Sonata for piano & violin in A major, FWV 8, II. Allegro, Bars 14-23, | Sadness | 2'11'' | 2'15'' | 2'22''
Gluck | Melody from Orpheus & Eurydice | Tenderness | 3'34'' | 3'37'' | 3'35''
Massenet | Méditation from Thaïs, Bars 3-40 | Transcendence | 2'51'' | 2'48'' | 2'58''
Mendelssohn | Violin concerto no. 2 in E minor, Op. 64, I. Allegro molto appassionato, Bars 2-47 | Tension | 1'04'' | 59'' | 1'
Mozart | Violin concerto no. 3 in G major, K.216, I. Allegro, Bars | Joyful Activation | 1'47'' | 1'41'' | 1'42''
Schumann | Violin concerto in D minor, Op. Posth., I. In Kräftigem, Nicht Zu Schnellem Tempo, Bars | Power | 2'18'' | 2'13'' | 2'21''
Sibelius | Violin concerto in D minor, Op. 47, I. Allegro moderato, Bars 4-59 | Wonder | 2'17'' | 2'12'' | 2'16''

Note. Participants in groups 1 and 4 heard the Bach, Mozart, and Sibelius pieces; participants in groups 2 and 5 heard the Beethoven, Franck, and Mendelssohn pieces; and participants in groups 3 and 6 heard the Gluck, Massenet, and Schumann pieces. All participants heard the Schumann piece in N style. * = Training piece; D. = Deadpan; E. = Emphatic; N. = Natural/concert-like.

After this selection, we recorded the musical excerpts in a specialized room at the Brain and Behavior Laboratory of the University of Geneva with the help of a sound engineer (Lucas Tamarit). The recordings took place over a whole day and a ten-minute break was scheduled every hour. We used a Neumann BCM104 microphone for the audio recordings. We also filmed the sessions with a Panasonic AG-HPX171E camera, but the films were not used in this study. For all musical samples, we asked Renaud Capuçon to modulate the musical expressivity of his performances according to three expressive styles: deadpan, emphatic, and natural/concert-like. The deadpan style corresponds to an academic manner of playing that is emotionally unexpressive and does not follow the dynamics indicated in the score. The emphatic style corresponds to an exaggerated manner of playing with too much emotional expression. The natural/concert-like style corresponds to the violinist's usual manner of playing. In order to avoid biases linked to the order in which the musical excerpts were performed, we recorded them in a pseudorandom order, varying the expressive styles and musical excerpts.

Design and Procedure

Before the beginning of the experiment, all participants filled out a consent form in which the experiment, the data processing, and the use of the data for publication purposes were described. Experiments 1 (perception) and 2 (feeling) were identical in the sense that in both, listeners were asked to rate 10 recordings: 3 pieces performed in the 3 expressive styles (deadpan, natural/concert-like, and emphatic) plus 1 additional piece in the natural/concert-like style. Not all listeners could rate all 27 recordings (9 pieces x 3 styles), because the experiment would have taken too long. Instead, all participants listened to 10 recordings (3

pieces x 3 styles + 1 control piece in natural/concert-like style) by being assigned to one of 6 groups. In each group, a selection of 10 recordings (3 pieces x 3 styles + 1 control) was presented in such a way that neither the same piece nor the same expressive style was ever heard twice in a row. The effect of order was dealt with by presenting each participant with a different order of the excerpts (one way to generate such orders is sketched below).
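One simple way to generate such presentation orders is rejection sampling: reshuffle the ten recordings until the no-repetition constraints hold. The sketch below, in R, is only illustrative; the piece and style labels are placeholders, not the actual stimulus names.

```r
# Build the 10 recordings of one group: 3 pieces x 3 styles + 1 control piece in N style.
recordings <- expand.grid(piece = c("P1", "P2", "P3"),
                          style = c("D", "E", "N"),
                          stringsAsFactors = FALSE)
recordings <- rbind(recordings, data.frame(piece = "Control", style = "N"))

# Reshuffle until neither the same piece nor the same style occurs twice in a row.
repeat {
  ord <- recordings[sample(nrow(recordings)), ]
  if (all(head(ord$piece, -1) != tail(ord$piece, -1)) &&
      all(head(ord$style, -1) != tail(ord$style, -1))) break
}
ord  # one admissible presentation order
```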

Figure 1. Experimental paradigm.
Note. Left: flow of experiments 1a and 2a, which start with dynamic ratings of the intensity of perceived expression or felt affect, and are followed by nine static ratings of perceived or felt emotion, respectively. Right: flow of experiments 1b and 2b, which start with dynamic ratings of the intensity of a specific perceived or felt emotion, e.g. power, and are followed by an overall static rating of perceived expression or felt affect, respectively.

A description of the GEMS dimensions was provided before the beginning of the judgment task (Appendix A). The sessions took place in a computer room at the University of Geneva and headphones (Sennheiser model HD 201) were used for the listening part of the task. For a complete description of the dynamic judgment task, see Appendix B in the supplemental materials.

Experiment 1: Perception

As can be seen in Figure 1, in experiment 1a participants (n = 44, M = years, SD = 9.46) continuously rated the amount of musical expression as they listened to the piece and then globally rated to what extent the piece expressed each of the GEMS dimensions on nine visual analog scales (sliders). In experiment 1b, participants (n = 45, M = years, SD = 10.53) continuously rated to what extent the piece expressed a specific GEMS dimension, e.g. power, as they listened, and then proceeded to rate how expressive (expression) the piece had been in general on a single slider (regardless of the emotion being expressed).

Experiment 2: Feeling

Similarly, in experiment 2a, participants (n = 61, M = 23.6 years, SD = 4.8) continuously rated how moved they felt, i.e. the amount of affect, as they listened to the piece and then globally assessed to what extent they had experienced each of the nine GEMS dimensions on nine visual analog scales (sliders). Experiment 2b had participants (n = 58, M = 23.4 years, SD = 5.6 years) continuously rate how strongly they felt a specific GEMS dimension, e.g. power, as they listened to the piece, and then rate how moved (affect) they had felt in general throughout the piece (regardless of the specific emotion being felt).

Data processing

Dynamic and static (i.e. slider) ratings were normalized across pieces and expressive styles for each participant and then rescaled from 0 to 100 to allow for direct comparisons between dynamic and static ratings. After discarding the first 15 s, we averaged the dynamic judgments to obtain a single value for each participant and each trial. Statistical analyses were performed using R in RStudio for Windows.
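As a rough illustration, the normalization and averaging steps described above might look as follows in R; the data frame ratings and its column names are hypothetical, not the actual variables used in the analyses.

```r
library(dplyr)

# Hypothetical long format: one row per sample (dynamic) or per judgment (static),
# with columns participant, piece, style, emotion, rating_type, time_s, value.
preprocessed <- ratings %>%
  group_by(participant) %>%
  # z-normalize within each participant, across pieces and expressive styles
  mutate(z = (value - mean(value, na.rm = TRUE)) / sd(value, na.rm = TRUE)) %>%
  # rescale to 0-100 so dynamic and static ratings are directly comparable
  mutate(scaled = 100 * (z - min(z, na.rm = TRUE)) /
                        (max(z, na.rm = TRUE) - min(z, na.rm = TRUE))) %>%
  ungroup()

# Dynamic judgments: drop the first 15 s, then average per participant and trial.
dynamic_means <- preprocessed %>%
  filter(rating_type == "dynamic", time_s > 15) %>%
  group_by(participant, piece, style, emotion) %>%
  summarise(rating = mean(scaled, na.rm = TRUE), .groups = "drop")
```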

We chose linear mixed effects models because they allow for the definition of both fixed and random effects, which was necessary with our data. Furthermore, this kind of statistical analysis does not require averaging across trials for each participant; note that such an averaging procedure is often used in classical ANOVA without checking for a normal distribution across trials for a given participant. Since lmer does not produce outputs that directly report the main effects of predictors, the main effects reported here are the results of a χ2 test between models with and without the fixed effects of interest. Post-hoc interaction analyses were performed using the phia package with Bonferroni correction.
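To make this procedure concrete, here is a minimal sketch of the model-comparison workflow using the lme4 and phia packages. The formula mirrors the fixed and random factors described in the Results section below; the data frame d and its column names are illustrative assumptions, not the actual analysis script.

```r
library(lme4)
library(phia)

# Fixed effects of interest plus random intercepts for Participant and Piece;
# REML = FALSE so that nested models can be compared by likelihood ratio.
m_full <- lmer(rating ~ experiment * rating_type * style * emotion +
                 (1 | participant) + (1 | piece),
               data = d, REML = FALSE)

# Main effect of, e.g., expressive style: chi-square (likelihood-ratio) test
# between the models with and without the fixed effect of interest.
m_reduced <- update(m_full, . ~ . - style)
anova(m_reduced, m_full)

# Post-hoc interaction contrasts with Bonferroni correction.
testInteractions(m_full, pairwise = "style", fixed = "experiment",
                 adjustment = "bonferroni")
```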

Results

Cronbach's alpha

To estimate the reliability of the measures, we computed Cronbach's alpha across participants. Cronbach's alpha ranged from 0.06 to 0.99 across the two experiments. Figures 2 and 3 present the results for the perception and feeling experiments, for both rating types (static vs. dynamic).

Figure 2. Cronbach's Alphas for dynamic ratings in the perception and feeling experiments.
Note. Top panel: Cronbach's Alphas of the dynamic judgments of the perceived intensity of emotional expression (blue) or intensity of felt affect (red). Bottom panel: Cronbach's Alphas of the dynamic judgments of the intensity of the GEMS dimensions expressed by the music (blue) or felt by listeners (red). D. = Deadpan; E. = Emphatic; N. = Natural/concert-like; Trans = Transcendence.
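Such per-excerpt reliabilities can be computed, for instance, with the psych package, treating participants as "items": for one recording and one rated dimension, rows are the successive time points of the dynamic ratings and columns are participants. The data frame preprocessed and its columns below are carried over from the hypothetical preprocessing sketch above.

```r
library(psych)
library(tidyr)

# One recording x rated dimension: rows = 250 ms time points, columns = participants.
one_excerpt <- subset(preprocessed,
                      piece == "Sibelius" & style == "D" & emotion == "Wonder")
wide <- pivot_wider(one_excerpt, id_cols = time_s,
                    names_from = participant, values_from = scaled)

# Cronbach's alpha across participants for this excerpt.
psych::alpha(as.data.frame(wide[, -1]))$total$raw_alpha
```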

Figure 3. Cronbach's Alphas for the static ratings in the feeling and perception experiments.
Note. Top panel: Cronbach's Alphas of the static judgments of the intensity of emotional expression perceived in the music (blue) / intensity of felt affect (red). Bottom panel: Cronbach's Alphas of the static judgments of the intensity of the GEMS dimensions expressed by the music (blue) / felt by listeners (red).

The measure is commonly considered good when the value is above .70. Based on our data, the results for Cronbach's Alpha are quite satisfactory for the majority of the excerpts. However, it is important to stress that this index is very sensitive to the number of participants, which could explain the very low values for certain musical excerpts (e.g. Sibelius_D_Wonder in feeling experiment 2b).

Figure 4 shows that, as predicted, the values are significantly higher (based on 1000 permutations; p < .016) in the perception experiment (M = 0.84, SD = 0.15) compared to the feeling experiment (M = 0.71, SD = 0.25), whatever the type of rating (dynamic vs. static), meaning that individuals agree more on the emotions expressed by music than when they judge their own feelings.

Figure 4. Permutation results for the comparison of the Cronbach's Alphas between the perception and feeling experiments.
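The logic of this permutation test can be sketched as follows: pool the per-excerpt alphas of the two experiments, repeatedly shuffle the experiment labels, and locate the observed difference of means in the resulting null distribution. The data frame alpha_df (one alpha per excerpt, with an experiment label) is hypothetical.

```r
set.seed(123)  # for reproducibility of the sketch
obs_diff <- with(alpha_df,
                 mean(alpha[experiment == "perception"]) -
                 mean(alpha[experiment == "feeling"]))

perm_diffs <- replicate(1000, {
  shuffled <- sample(alpha_df$experiment)  # permute experiment labels
  mean(alpha_df$alpha[shuffled == "perception"]) -
    mean(alpha_df$alpha[shuffled == "feeling"])
})

mean(perm_diffs >= obs_diff)  # one-tailed permutation p-value
```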

Linear mixed effects model

To test the effect of our manipulations we fit all the rating values to a linear mixed effects model where we defined Rating Type (dynamic, static), expressive Style (deadpan, emphatic, natural/concert-like), Experiment (feeling, perception), and Emotion (wonder, transcendence, tenderness, nostalgia, peacefulness, power, joyful activation, tension, sadness, expression/affect), as well as, in line with our predictions, the interaction between Experiment and Emotion and the interactions between Experiment, Rating Type, Style, and Emotion, as fixed effect factors, and Participant and Piece as random factors. Post-hoc interaction analyses were performed using the phia package.

All main effects were significant. Firstly, as predicted, we found a main effect of Experiment (χ2 = 23.35, d.f. = 1, p < .001), such that values were significantly higher in the perception experiment than in the feeling experiment. There was also a main effect of expressive Style (χ2 = 58.08, d.f. = 2, p < .001), such that ratings were significantly lower in the deadpan condition compared to the emphatic and natural/concert-like conditions. We also found a main effect of Rating Type (χ2 = 69.69, d.f. = 1, p < .001), such that ratings were significantly higher when participants used dynamic ratings. Finally, there was a main effect of Emotion (χ2 = , d.f. = 9, p < .001), with higher values in the expression/affect category.

The analyses also revealed significant interaction effects. As predicted, the results revealed a significant interaction between Experiment and expressive Style (χ2 = 6.84, d.f. = 2, p = .032), showing higher values in the natural/concert-like style compared to the deadpan and emphatic styles in the feeling experiment, and higher values for the emphatic style in the perception experiment compared to the natural/concert-like and deadpan performances (Figure 5). The analyses also revealed a significant interaction between Experiment and Emotion (χ2 = 19.7, d.f. = 9, p = .02) (Figure 6).

Figure 5. Averaged emotional judgment values for the significant interaction between Experiment and Expressive Style.

Figure 6. Averaged emotional judgment values for the significant interaction between Experiment and Emotion. Exp/Affect: Expressive value/affect value; JA.: Joyful Activation; N.: Nostalgia; Pea.: Peacefulness; P.: Power; S.: Sadness; Tend.: Tenderness; T.: Tension; Tr.: Transcendence; W.: Wonder.

We found a significant interaction between Experiment and Rating Type (χ2 = 17.94, d.f. = 1, p < .002), showing that the difference between perception and feeling is less strong when using dynamic ratings compared to static ratings (Figure 7), as well as an interaction between Style and Emotion (χ2 = 40.46, d.f. = 18, p < .002) (Figure 8), where Style had a differential effect on each GEMS dimension, notably higher values for Peacefulness in the natural/concert-like style and higher values for Power in the emphatic style. We also found a significant interaction between Rating Type and Emotion (χ2 = 46.02, d.f. = 9, p < .002) (Figure 9), showing higher values for all GEMS dimensions with the dynamic rating type compared to the static one.

Figure 7. Average values for the significant interaction between Experiment and Rating Type.

Figure 8. Average values for the significant interaction between Style and Emotion. Exp/Affect: Expressive value/affect value; JA.: Joyful Activation; N.: Nostalgia; Pea.: Peacefulness; P.: Power; S.: Sadness; Tend.: Tenderness; T.: Tension; Tr.: Transcendence; W.: Wonder.

Figure 9. Average values for the significant interaction between Rating Type and Emotion. Exp/Affect: Expressive value/affect value; JA.: Joyful Activation; N.: Nostalgia; Pea.: Peacefulness; P.: Power; S.: Sadness; Tend.: Tenderness; T.: Tension; Tr.: Transcendence; W.: Wonder.

We also found a triple interaction effect between Emotion, Experiment, and Rating Type (χ2 = 18.13, d.f. = 9, p = .034) (Figure 10), but the quadruple interaction was not significant.

Figure 10. Average values for the significant triple interaction between Emotion, Experiment, and Rating Type. Exp/Affect: Expressive value/affect value; JA.: Joyful Activation; N.: Nostalgia; Pea.: Peacefulness; P.: Power; S.: Sadness; Tend.: Tenderness; T.: Tension; Tr.: Transcendence; W.: Wonder.

As predicted, we observed an impact of Style on the GEMS dimensions in the feeling experiment: for sadness, values were higher for performances in the natural than in the emphatic style (χ2 = 2.86, d.f. = 1, p = .09; marginal effect). Conversely, and also as predicted, values were always higher for the emphatic style in the perception experiment.

Discussion

This study highlighted several points of interest in the field of music and emotion research, bringing together the emotions felt by listeners and the emotions expressed by music, the concept of levels of musical expressivity, and emotion as characterized by the GEMS dimensions. In our hypotheses, we first assumed that there would be higher agreement and

higher intensities for ratings of perceived emotions compared to ratings of felt emotions. Indeed, the Cronbach's alphas demonstrate higher agreement between individuals in the perception experiment, supporting traditional findings in the literature (Fabian & Schubert, 2003; Schubert, 2004; Fritz et al., 2009; Gabrielsson & Juslin, 2003). In addition, the statistical analyses show a main effect of experiment, highlighting higher values in the perception experiment than in the feeling experiment, as predicted. As pointed out by Juslin and Laukka (2004): "Emotion perception is relatively easy to measure and is a cognitive process in the sense that it may well proceed without any emotional involvement on the part of the listener" (p. 218). However, it is important to clarify that, from our point of view, there is an emotional involvement of the listener in the perception process, because the recognition of emotions in music implies knowledge of the emotional cues shared by all individuals in a given culture (Balkwill & Thompson, 1999). In fact, according to Molnar-Szakacs and Overy (2006), it is partly thanks to the interaction between the limbic system and the neural circuits that respond to both the production and the perception of sounds that we can grasp emotion and meaning in music at all (see Overy & Molnar-Szakacs' Shared Affective Motion Experience model, 2009). Regarding the emotions felt by listeners, the process of induction can also be longer than that of perception, and the pieces may sometimes have been too short to induce an emotion. This is not a surprising result because, as predicted by Scherer and Zentner (2001) in their original production rules model, the emotions a listener feels are likely the result of many factors beyond the structural or performance features of a piece of music. Other factors that would not necessarily come into play as strongly in perception, but that can have important effects in induction studies, are contextual features, such as the location of the listening experience (e.g. a laboratory vs. a concert hall), and listener features, such as musical expertise and personal preferences, or even fatigue (e.g. first vs. tenth stimulus). Furthermore, while participants may have been able to rate even low levels of emotional expression for all

expressive styles of performance in the perception experiment, this was not the case in the feeling experiment, leading to overall lower ratings of felt emotion (on average) and so to our next hypothesis. As predicted in our second hypothesis, the level of expressivity plays a differential role depending on the type of experiment, as demonstrated by the interaction effect between experiment (i.e. expressed emotions compared to felt emotions) and the level of expressivity (i.e. expressive style). More specifically, the analyses revealed higher values of felt emotions for natural/concert-like compared to emphatic and deadpan performance styles, while for expressed emotions the emphatic condition led to higher ratings than the natural/concert-like and deadpan performances. The first finding mirrors and confirms the proposition made by Bänziger, Mortillaro & Scherer (2012) according to which individuals have the innate capacity to feel when they are being manipulated, meaning that when something (here a musical stimulus) is purposefully exaggerated or over-emotional, the feeling vanishes; this is probably related to the authenticity of the musician's emotional expression as inferred by the listener. This is an important finding in the sense that it shows that while there is a bias for emotionally expressive performances (Van Zijl & Luck, 2013), there is a sweet spot between too much and too little expression for the induction of emotion. This is similar to what has been proposed by other authors such as Berlyne (1971), who suggests that listeners prefer music that induces an optimal level of arousal, as could be conveyed by the music's loudness or speed. Since both these variables varied according to the expressive style the pieces were recorded in, it is possible that the variability of these acoustic parameters predisposed our participants to experience more or fewer emotions in the feeling experiment. The second finding shows that our musical stimuli were effective, because there is a gradation in the attribution of musical expressivity between the three levels, corroborating findings according to which performers are able to transmit

a specific message in the musical communication context and individuals are able to recognize the content of the message precisely (Juslin & Lindström, 2010; Thompson & Robitaille, 1992; Gabrielsson & Juslin, 2003). Our final hypothesis predicted a quadruple interaction between the type of rating (static vs. dynamic), type of experiment (perceived vs. felt), the level of expressivity in terms of expressive style (deadpan, natural/concert-like, emphatic), and emotion (GEMS dimensions and expression/affect). The results demonstrate a non-significant effect for this quadruple interaction. As pointed out by Zentner & Eerola (2010), self-report instruments are derived from a theory or model of emotion, and there are many measuring instruments, such as Likert scales, adjective checklists, visual analogue scales, and real-time measurements. In this study, our theoretical framework rests on the GEMS model, which currently represents the most effective attempt to account for the emotions related to music, compared to the discrete (or basic) emotion model or the dimensional model (Zentner, Grandjean & Scherer, 2008). We decided to use two types of rating: a continuous one, with the task of dynamic judgments, and a global one, with visual analogue scales (VAS; sliders without intermediate scales). These two types of rating imply different cognitive appraisals, all the more so depending on the experiment (perception vs. feeling). As we have demonstrated in a previous study (Thibault de Beauregard, Ott, Labbé, & Grandjean, under review), these two types of rating are complementary and both useful in terms of the information they provide: the VAS have the advantage of being fast to administer and practical for participants, who just need to use their short-term memory and metacognitive processes, whereas dynamic judgments, thanks to their moment-to-moment measurement, make it possible to capture the complexity and specificity of emotion through music. We can point out several limitations of, and perspectives for, this experiment. Firstly, in both experiments, participants listened to the same musical pieces several times. This,

coupled with the total duration of the study and therefore the fatigue felt, could have biased the results between the two experiments. Indeed, from a cognitive and emotional point of view, it seems easier and less tiring to evaluate what the music expresses, or to attribute emotional characteristics to the music, than to continuously monitor one's subjective feeling. Moreover, whatever the specific task given to participants, i.e. the perception or the feeling experiment, both types of rating are self-report instruments and carry the traditional drawbacks of this type of approach, characterized by the bias of subjectivity (Zentner & Eerola, 2010). Indeed, it would always be better to combine subjective/individual measures with indirect measures of emotional activation, such as skin conductance or heart rate (Baltes, Avram, Mircea & Miu, 2011). Finally, a last limitation one could highlight is that the musical pieces were played by a solo violin whereas, usually, these pieces are composed for, and therefore played by, several instruments, bringing polyphonic aspects that are lacking in the present study. Regarding perspectives, we could have explored in more detail the impact of the musical expertise of the listeners in the feeling experiment. In a future study, we could use the video recordings of the violinist in order to test the interaction between auditory and visual cues in the modulation of musical expressivity, both in a perception and in a feeling experiment. Finally, as we suggest below, it would be interesting in future research to systematically study the relationships between dynamic and delayed or global judgments.

Acknowledgments

We would like to thank Renaud Capuçon for performing the pieces used in this study, as well as Lucas Tamarit and Christophe Mermoud for their help in recording them.

References

Balkwill, L.-L., & Thompson, W. F. (1999). A cross-cultural investigation of the perception of emotion in music: Psychophysical and cultural cues. Music Perception: An Interdisciplinary Journal, 17(1). doi: /
Baltes, F. R., Avram, J., Mircea, M., & Miu, A. C. (2011). Emotions induced by operatic music: Psychophysiological effects of music, plot, and acting. A scientist's tribute to Maria Callas. Brain and Cognition, 76(1). doi: /j.bandc
Bänziger, T., Mortillaro, M., & Scherer, K. (2012). Introducing the Geneva Multimodal Expression Corpus for experimental research on emotion perception. Emotion, 12(5). doi: /a
Berlyne, D. E. (1971). Aesthetics and psychobiology. New York: Meredith Corporation.
Bigand, E., Vieillard, S., Madurell, F., Marozeau, J., & Dacquet, A. (2005). Multidimensional scaling of emotional responses to music: The effect of musical expertise and of the duration of the excerpts. Cognition and Emotion, 19(8). doi: /
Blood, A. J., & Zatorre, R. J. (2001). Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proceedings of the National Academy of Sciences of the USA, 98(20). doi: /pnas
Blood, A. J., Zatorre, R. J., Bermudez, P., & Evans, A. C. (1999). Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions. Nature Neuroscience, 2(4). doi: /7299
Chapin, H., Jantzen, K., Kelso, J. A. S., Steinberg, F., & Large, E. (2010). Dynamic emotional and neural responses to music depend on performance expression and listener experience. PLoS ONE, 5(12), e13812. doi: /journal.pone
Evans, P., & Schubert, E. (2006). Quantification of Gabrielsson's relationships between felt and expressed emotions in music. Paper presented at the 9th International Conference on Music Perception & Cognition, Bologna, Italy.
Fabian, D., & Schubert, E. (2003). Expressive devices and perceived musical character in 34 performances of Variation 7 from Bach's Goldberg Variations. Musicae Scientiae, 7(Suppl. 1). doi: / S103
Fritz, T., Jentschke, S., Gosselin, N., Sammler, D., Peretz, I., Turner, R., Friederici, A. D., & Koelsch, S. (2009). Universal recognition of three basic emotions in music. Current Biology, 19. doi: /j.cub
Gabrielsson, A., & Juslin, P. (1996). Emotional expression in music performance: Between the performer's intention and the listener's experience. Psychology of Music, 24.
Gabrielsson, A. (2001). Emotions in strong experiences with music. In P. Juslin & J. Sloboda (Eds.), Music and emotion: Theory and research. Oxford, England: Oxford University Press.
Gabrielsson, A. (2002). Emotion perceived and emotion felt: Same or different? Musicae Scientiae, 5(1 suppl). doi: / s105
Gabrielsson, A., & Juslin, P. (2003). Emotional expression in music. In R. J. Davidson, H. H. Goldsmith, & K. R. Scherer (Eds.), Handbook of affective sciences. New York, NY: Oxford University Press.
Gabrielsson, A., & Lindström, E. (2010). The role of structure in the musical expression of emotions. In P. Juslin & J. Sloboda (Eds.), Handbook of music and emotion: Theory, research, applications. New York, NY: Oxford University Press.

Juslin, P. (2000). Cue utilization in communication of emotion in music performance: Relating performance to perception. Journal of Experimental Psychology: Human Perception and Performance, 26.
Juslin, P. (2001). Communicating emotion in music performance: A review and theoretical framework. In P. Juslin & J. Sloboda (Eds.), Music and emotion: Theory and research. Oxford, England: Oxford University Press.
Juslin, P. (2013). What does music express? Basic emotions and beyond. Frontiers in Psychology, 4, 596. doi: /fpsyg
Juslin, P., & Laukka, P. (2003). Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin, 129.
Juslin, P., & Laukka, P. (2004). Expression, perception, and induction of musical emotions: A review and a questionnaire study of everyday listening. Journal of New Music Research, 33(3). doi: /
Juslin, P., & Lindström, E. (2010). Musical expression of emotions: Modeling listeners' judgments of composed and performed features. Music Analysis, 29. doi: /j x
Juslin, P., & Västfjäll, D. (2008). Emotional responses to music: The need to consider underlying mechanisms. Behavioral and Brain Sciences, 31(5). doi: /S X
Juslin, P., Liljeström, S., Västfjäll, D., & Lundqvist, L.-O. (2010). How does music evoke emotions? Exploring the underlying mechanisms. In P. Juslin & J. Sloboda (Eds.), Handbook of music and emotion: Theory, research, applications. New York, NY: Oxford University Press.
Kallinen, K., & Ravaja, N. (2006). Emotion perceived and emotion felt: Same and different. Musicae Scientiae, 10(2). doi: /
Kawakami, A., Furukawa, K., Katahira, K., & Okanoya, K. (2013). Sad music induces pleasant emotion. Frontiers in Psychology, 4, 311. doi: /fpsyg
Koelsch, S., Kilches, S., Steinbeis, N., & Schelinski, S. (2008). Effects of unexpected chords and of performer's expression on brain responses and electrodermal activity. PLoS ONE, 3(7), e2631. doi: /journal.pone
Krumhansl, C. (1997). An exploratory study of musical emotions and psychophysiology. Canadian Journal of Experimental Psychology, 51(4).
Lonsdale, A. J., & North, A. C. (2011). Why do we listen to music? A uses and gratifications analysis. British Journal of Psychology, 102(1). doi: / X
Molnar-Szakacs, I., & Overy, K. (2006). Music and mirror neurons: From motion to 'e'motion. Social Cognitive and Affective Neuroscience, 1(3). doi: /scan/nsl029
Overy, K., & Molnar-Szakacs, I. (2009). Being together in time: Musical experience and the mirror neuron system. Music Perception, 26(5). doi: /mp
Rickard, N. S. (2004). Intense emotional responses to music: A test of the physiological arousal hypothesis. Psychology of Music, 32(4). doi: /
Rousseau, J.-J. ( ). Dictionnaire de musique. In Collection complète des oeuvres (vol. 9, in-4). Genève.
Salimpoor, V. N., Benovoy, M., Larcher, K., Dagher, A., & Zatorre, R. J. (2011). Anatomically distinct dopamine release during anticipation and experience of peak emotion to music. Nature Neuroscience, advance online publication.

Salimpoor, V. N., Benovoy, M., Longo, G., Cooperstock, J. R., & Zatorre, R. J. (2009). The rewarding aspects of music listening are related to degree of emotional arousal. PLoS ONE, 4(10), e7487.
Scherer, K. R. (2004). Which emotions can be induced by music? What are the underlying mechanisms? And how can we measure them? Journal of New Music Research, 33(3). doi: /
Scherer, K. R., & Zentner, M. (2001). Emotional effects of music: Production rules. In P. N. Juslin & J. A. Sloboda (Eds.), Music and emotion: Theory and research. New York: Oxford University Press.
Schubert, E. (2004). Modeling perceived emotion with continuous musical features. Music Perception, 21. doi: /mp
Schubert, E. (2013). Emotion felt by the listener and expressed by the music: A literature review and theoretical investigation. Frontiers in Psychology, 4. doi: /fpsyg
Sloboda, J. A. (2000). Individual differences in music performance. Trends in Cognitive Sciences, 4.
Thompson, W. F., & Robitaille, T. (1992). Can composers express emotion through music? Empirical Studies of the Arts, 10. doi: /NBNY-AKDK-GW58-MTEL
Trost, W., Ethofer, T., Zentner, M., & Vuilleumier, P. (2011). Mapping aesthetic musical emotions in the brain. Cerebral Cortex, 22(12). doi: /cercor/bhr353
Van Zijl, A. G. W., & Luck, G. (2013). The sound of sadness: The effect of performers' emotions on audience ratings. Paper presented at the 3rd International Conference on Music & Emotion (ICME3), Jyväskylä, Finland.
Zentner, M., Grandjean, D., & Scherer, K. R. (2008). Emotions evoked by the sound of music: Characterization, classification, and measurement. Emotion, 8(4). doi: /
Zentner, M., & Eerola, T. (2010). Self-report measures and models. In P. Juslin & J. Sloboda (Eds.), Handbook of music and emotion: Theory, research, applications. New York, NY: Oxford University Press.

Supplemental Materials

Appendix A

Translation of the description of the GEMS dimensions

As used in study 1b:

For the next piece, we will ask you to judge to what extent the emotion of Wonder is being expressed by the music, i.e., to what extent the music expresses or sounds happy, amazed, dazzled, and allured.
For the next piece, we will ask you to judge to what extent the emotion of Transcendence is being expressed by the music, i.e., to what extent the music expresses or sounds inspired, spiritual, and transcendent.
For the next piece, we will ask you to judge to what extent the emotion of Tenderness is being expressed by the music, i.e., to what extent the music expresses or sounds affectionate, sensual, tender, and softened up.
For the next piece, we will ask you to judge to what extent the emotion of Nostalgia is being expressed by the music, i.e., to what extent the music expresses or sounds dreamy, melancholic, nostalgic, and sentimental.
For the next piece, we will ask you to judge to what extent the emotion of Peacefulness is being expressed by the music, i.e., to what extent the music expresses or sounds calm, relaxed, serene, soothed, and meditative.
For the next piece, we will ask you to judge to what extent the emotion of Power is being expressed by the music, i.e., to what extent the music expresses or sounds energetic, triumphant, fiery, strong, and heroic.
For the next piece, we will ask you to judge to what extent the emotion of Joyful Activation is being expressed by the music, i.e., to what extent the music expresses or sounds animated, stimulated, dancing, amused, and joyful.
For the next piece, we will ask you to judge to what extent the emotion of Tension is being expressed by the music, i.e., to what extent the music expresses or sounds nervous, agitated, tense, impatient, and irritated.
For the next piece, we will ask you to judge to what extent the emotion of Sadness is being expressed by the music, i.e., to what extent the music expresses or sounds sad and sorrowful.

As used in study 2b:

For the next piece, we will ask you to judge the emotion of Wonder, i.e., to what extent the music makes you feel happy, amazed, dazzled, and allured.
For the next piece, we will ask you to judge the emotion of Transcendence, i.e., to what extent the music makes you feel inspired, spiritual, and transcendent.
For the next piece, we will ask you to judge the emotion of Tenderness, i.e., to what extent the music makes you feel affectionate, sensual, tender, and softened up.
For the next piece, we will ask you to judge the emotion of Nostalgia, i.e., to what extent the music makes you feel dreamy, melancholic, nostalgic, and sentimental.
For the next piece, we will ask you to judge the emotion of Peacefulness, i.e., to what extent the music makes you feel calm, relaxed, serene, soothed, and meditative.
For the next piece, we will ask you to judge the emotion of Power, i.e., to what extent the music makes you feel energetic, triumphant, fiery, strong, and heroic.
For the next piece, we will ask you to judge the emotion of Joyful Activation, i.e., to what extent the music makes you feel animated, stimulated, dancing, amused, and joyful.
For the next piece, we will ask you to judge the emotion of Tension, i.e., to what extent the music makes you feel nervous, agitated, tense, impatient, and irritated.
For the next piece, we will ask you to judge the emotion of Sadness, i.e., to what extent the music makes you feel sad and sorrowful.

Appendix B

The method of dynamic judgments was developed using a Flash interface, allowing for the recording of dynamic judgments in real time. During the judgments, participants used a graphic interface to judge the intensity of one specific emotion through time (e.g., Nostalgia). The width of the graph was 1000 pixels (corresponding to a duration of 4 min 16 s) and its height was 300 pixels ( pixels, 17 in. screen). Participants had direct visual feedback of the judgments they were making in the graphic interface by moving a computer mouse up and down as time advanced automatically (if necessary, they could scroll). Measurements were made every 250 ms. The x-axis represented time, while the y-axis represented the intensity of the emotion expressed by the music/felt by the listener (e.g., Peacefulness) on a continuous scale marked by three levels of intensity: low, medium, and high. The main instruction was "Rate to what extent the music expresses/you feel [dimension of interest]", including the main items describing the dimension. Before beginning the experiment, participants completed a training trial to become familiar with the procedure.

Figure 1. Screenshot of the dynamic Flash interface (in French) for the dynamic judgment task in the perception experiment, here with the dimension of Peacefulness and the instruction: "Rate to what extent the music expresses: peacefulness, a calm, relaxed, serene, soothing, and meditative style."
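To illustrate what these samples yield, a cursor trace recorded every 250 ms along the 300-pixel vertical axis can be converted into a rating time series as follows; the vector y and the rescaling to a 0-100 range are assumptions for the sketch, not the actual interface code.

```r
# y: cursor height in pixels (0 = bottom, 300 = top), one sample every 250 ms
time_s    <- (seq_along(y) - 1) * 0.25  # seconds since trial onset
intensity <- 100 * y / 300              # pixel height rescaled to a 0-100 scale
trace     <- data.frame(time_s, intensity)
```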

Figure 2. Screenshot of the dynamic Flash interface (in French) for the dynamic judgment task in the feeling experiment, here during the training trial, with the instruction: "Rate to what extent you feel moved/affected by the music."

3.4. Etude 4

Concert Hall vs. Laboratory Room: Dynamic Judgments of Musical Emotions Influenced by the Listening Context

Kim Thibault de Beauregard and Didier Grandjean

Faculty of Psychology and Educational Sciences and Swiss Center for Affective Sciences, Campus Biotech, University of Geneva, Geneva, Switzerland.

Abstract

In this experiment, based on the Geneva Emotional Music Scale (GEMS) model, we investigated the impact of the listening context on the evaluation of emotions expressed by music. In collaboration with a professional Italian quartet, the Quartetto di Cremona, we compared the dynamic judgments made during a live concert vs. in a laboratory room. The results show, firstly, that the method of dynamic judgments is efficient in an ecological context, which therefore supports its use compared to static measurements; secondly, they demonstrate that the attribution of emotions is more intense during a live performance, with interesting differences across musical excerpts.

Keywords: live performance, dynamic judgments, listening context

Theoretical framework

Music is evanescent and exists only in the moment of its perception, which has to reconstruct its unity in time together with all the expectations related to individuals' cultural musical background. But music is also, and above all, an active and interactive social behaviour. Anthropological studies and behavioural observations in young children have highlighted that music enhances social cohesion, coordination, and cooperation within social groups (Geissmann, 2000). For instance, in some mental disorders, e.g. Williams syndrome, there is a strong connection between musicality and sociability (Huron, 2003). Children with this syndrome are not able to interact in a normal way with their environment, suffering from a lack of basic social norms, but are often very talented in music and use it to interact with others. For the members of a group, the simultaneous experience of the expressive properties of music is likely to be a fundamental aspect of the ability to act in society empathetically (Brattico, Brattico & Jacobsen, 2010). As pointed out by Freeman (2000), music appreciation could be the positive emotional result of a process of social bond creation. Likewise, Grewe et al. (2010) note that the fascination for music may be related to an emotional communication system, because music seems able to initiate or strengthen social ties between individuals through emotional resonance and shared emotional experiences. With the growth and progress of technology, an egocentric way of listening to music has developed. The appearance of headphones and of all kinds of devices for music listening leads today to permanent access to music (Thompson, Graham & Russo, 2005). It is now possible to listen to music from a phone, a stereo, or a computer, alone or in groups, in nightclubs or at home. The majority of studies and articles deal with the positive aspects of music, but it is also conceivable to experience auditory saturation given the exaggerated use of music in public places such as supermarkets, malls, stores, and hotels (Quignard, 1997).

Livingstone and Thompson (2010) evoke the connection between the emotions that can be expressed by the gestures of a musical performance and the emotions expressed by the auditory content that accompanies it. Indeed, one must not neglect the emotional importance of the multimodal manifestation of music: accompanying the acoustic signal, there are body gestures, facial signals, and kinesthetic elements. Vines et al. (2004) conducted an experiment in which thirty music lovers saw, heard, or saw and heard a musical performance. According to the results, the auditory and visual streams conveyed the same structural and musical information. Furthermore, the gestural aspects of the performance reflected the musical phrasing and strengthened the musical experience of tension, a key concept of musical emotionality (Koelsch, 2015; Juslin, Liljeström, Västfjäll & Lundqvist, 2010; Lerdahl & Krumhansl, 2007). In both vocal and instrumental music, the facial expressions of the performers convey a variety of emotional and structural cues for listeners (Thompson, Graham & Russo, 2005). The fact that the emotional experience is enhanced when multiple receptor channels are solicited has also been demonstrated in the cinematic context, with the close relationship between the soundtrack, the script, and the image of a film (Vitouch, 2001). Livingstone and Thompson (2010) also underline that the listener's observation of multiple channels through the system of audio-visual mirror neurons (Rizzolatti, Fadiga, Gallese & Fogassi, 1996; Gallese, 2006) results in an enlargement of the emotional experience, because there is a greater activation of the empathic channels. Indeed, a key point in the study of musical emotions is that there is appraisal at different levels during the musical experience (Scherer & Coutinho, 2013). According to Scherer (2001), the evaluative processes can occur at three different levels: a sensory-motor reflex level, a schematic level (learned preferences/aversions), and a conceptual level (estimates derived from personal experience). In the music field, Scherer and Zentner (2001) have proposed a certain number of evaluative processes, named production rules. According to these authors, the musical experience is the result of the

According to these authors, the musical experience results from the interaction between the musical structure, the quality of the performance, the expertise of the musicians, the emotional state of the listeners and the contextual features. This last aspect is especially interesting for the present study because it covers all the aspects linked to the performance and/or listening situation. The contextual features highlight the importance of the location of a performance (street, home, party, church, concert hall), the materials surrounding the listener and performer (wood, metal, stone, glass), and the mode of music transmission (live performance without technical support, loudspeakers, headphones). All these features may influence the subjective perception of the listeners. Music gives an artistic dimension to time, and the mystery surrounding aesthetic emotions is revealed by the paradox of drama: individuals feel very strong emotions when they see, listen to or read fictitious scenarios; they can feel empathy while listening to music and can cry at the theatre. More recently, Koelsch (2015) proposed a list of seven evaluative processes at work in the musical experience: i) the perceptual features (e.g. loudness, timbre, dissonance); ii) the contextual features (e.g. situation, type of event, location); iii) the performance and symbolic features (individuals' memories linked to the music); iv) the musical score and the musical structure; v) the quality of the performance; vi) the affective functions (feelings, emotional regulation); and vii) the social functions (e.g. a national anthem). Koelsch (2015) insists on the fact that these different evaluative processes are, at least to a certain degree, orthogonal and independent of one another. Taken together, these proposals and experiments lead to a common conclusion: the importance of the listening context. A challenge in the study of emotions in general, and of aesthetic emotions in particular, concerns ecological validity. Does attending a live performance have an impact on the attribution of emotional characteristics to music? Lamont (2011) has, for instance, demonstrated that a greater percentage of intense experiences with music occurs during a concert. Another important aspect of live performance concerns musical expressivity (Timmers, Marolt, Camurri & Volpe, 2006).

Two types of musical expressivity are relevant for studying the influence of the performance on the emotional processes of decoding and recognition: on the one hand, the "academic" mode, also called "metronomic", characterized by a "cold" reading of the score, very technical and scholarly, without any of the modulation parameters usually used to convey expressiveness; on the other hand, the "emphatic" mode, characterized by an exaggeration of the emotional expression already present in the basic course of the music. It follows that the more expressive the performance, the more intense the judgments of perceived emotion should be. Rickard (2004) demonstrated a greater number of chills and skin conductance responses to emotionally powerful music compared with merely arousing or relaxing music. In a study investigating the role of musical expectancy in the same piano sonatas, the electrodermal response to certain chords was found to be stronger in the expressive than in the non-expressive version (Koelsch, Kilches, Steinbeis & Schelinski, 2008). An fMRI study comparing "expressive" and so-called "mechanical" performances demonstrated that expressive piano performances increase activity in brain areas related to emotional processing, and even more so in musicians (Chapin, Jantzen, Kelso, Steinberg & Large, 2010). As pointed out by Nagel, Kopiez, Grewe and Altenmüller (2007), there are different ways of investigating the emotions related to music, such as self-reports, questionnaires and adjective scales, but all of these approaches are static and therefore unable to capture the complexity of the unfolding of musical emotions. The work of Emery Schubert (2001; 2004) was among the first to take this temporal characteristic into account and to use continuous measurements. This method allows experimenters to record judgments of the emotions expressed by music in real time and thus to follow the changes in perception and attribution over time.

The majority of studies on music and emotion ask listeners to judge musical excerpts in terms of valence and arousal (Vieillard et al., 2008; Chapin, Jantzen, Kelso, Steinberg & Large, 2010) or in terms of basic emotions (Fritz et al., 2009; Juslin, 2000). However, one might suppose that musical emotions are more complex or subtle, in which case these approaches might not be the best suited to understanding emotions related to music. In this context, Zentner, Grandjean and Scherer (2008) proposed a new approach to the study of emotions in music. They conducted a set of experiments enabling them to propose a factorial model of the emotional terms most relevant to the understanding of emotions related to music. These studies gave rise to a nine-factor model of emotions induced by music: the GEMS (Geneva Emotional Music Scale). In a fourth study, the authors confirmed the nine-dimensional structure of the model and demonstrated that this new framework is more appropriate than the two traditional models of emotion (namely the basic emotion model and the dimensional model). The GEMS currently represents the most effective framework for studying the emotions related to music. The main aim of the present study is to test whether the listening context (here, live performances versus the same recorded performances presented in the laboratory through headphones) affects the attributed intensity of the emotions expressed by music along the GEMS dimensions.

Methods

Materials and participants

As part of the SIEMPRE European project, we collaborated with the renowned Italian quartet Il Quartetto di Cremona for this experiment. We took advantage of the quartet's presence for a concert at the Saint-Germain Church in Geneva to organize an ecological setting. Classical music lovers were recruited via advertisements posted on a database of the University of Geneva and were paid 50 Swiss francs for their participation.

Twelve participants (9 women and 3 men; M = … years, SD = … years) took part in this study during the concert, and 28 participants (24 women and 4 men; M = … years, SD = 1.82 years) took part in the laboratory condition at the University of Geneva. Because of the ecological nature of the experiment, we could not increase the number of participants in the live performance context. At the back of the church, we installed 12 laptops (HP 260-a101nf) with mice as cursors on a table for the dynamic judgment task during the live performance. The participants faced the Quartetto di Cremona during the entire task. The dynamic judgment method we used was developed with a Flash interface allowing us to record the dynamic judgments in real time. During the emotional judgment, the participants used a graphic interface to judge the intensity of one specific emotion over time (e.g. Nostalgia). The width of the graph was 1000 pixels (corresponding to a duration of 4′16″) and its height 300 pixels (screen resolution 1280 × 1024 pixels, 17-inch display). Participants had direct visual feedback of the judgments they were making in the graphic interface by moving a computer mouse up and down while time advanced automatically (if necessary, the graph window scrolled). Measurements were made every 250 milliseconds. The x-axis represented time, while the y-axis represented the intensity of the emotion expressed by the music (e.g. Peacefulness) on a continuous scale marked by three levels of intensity: low, medium and high. The main instruction was: "Rate to what extent the music expresses [dimension of interest]", announced by the experimenter together with the main items describing the dimension (Figure 1).
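As an illustration of this sampling logic (poll the vertical mouse position every 250 milliseconds and map it to an intensity value), the following minimal Python/tkinter sketch records a comparable judgment time series; the window geometry, the intensity mapping and the variable names are assumptions made for the sketch, not the original Flash implementation.

    import tkinter as tk

    SAMPLE_MS = 250   # sampling period used in the study (4 Hz)
    HEIGHT = 300      # height of the judgment area in pixels, as in the study

    samples = []      # (time in s, intensity in [0, 1]) pairs

    root = tk.Tk()
    canvas = tk.Canvas(root, width=1000, height=HEIGHT, bg="white")
    canvas.pack()

    def poll(t_ms=0):
        # Vertical mouse position relative to the canvas; top of the area = high intensity.
        y = canvas.winfo_pointery() - canvas.winfo_rooty()
        y = min(max(y, 0), HEIGHT)
        samples.append((t_ms / 1000.0, 1.0 - y / HEIGHT))
        root.after(SAMPLE_MS, poll, t_ms + SAMPLE_MS)

    poll()
    root.mainloop()
    # After the window is closed, `samples` holds the 4 Hz judgment time series.

Sampling at 4 Hz, a 4′16″ movement yields 1024 samples per participant and per judged dimension, which is the kind of time series analysed below.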

Figure 1. Example of the screen of the Flash interface (in French) for the dynamic judgment task, here with the dimension of Peacefulness and the instruction: "Rate to what extent the music expresses peacefulness: calm, relaxed, serene, soothed and meditative music."

Before the beginning of the concert, the participants completed a training trial in order to become familiar with the procedure. We recorded the entire concert; the programme consisted of two string quartets: the String Quartet No. 4 in C major, Sz 91, by Bela Bartok, and the String Quartet No. 3 in A major, op. 41, by Robert Schumann (Figure 2).

B. Bartok, String Quartet No. 4 in C major, Sz 91
  Musical excerpt         GEMS dimension
  Allegro                 Power
  Prestissimo             Wonder
  Non troppo lento        Sadness
  Allegretto pizzicato    Tension
  Allegro molto           Tension

R. Schumann, String Quartet No. 3 in A major, op. 41
  Musical excerpt                                GEMS dimension
  Andante espressivo - Allegro molto moderato    Wonder
  Assai agitato                                  Power
  Adagio molto                                   Peacefulness
  Finale - Allegro molto vivace                  Joyful activation

Figure 2. Musical pieces played during the concert and the GEMS dimensions on which they were judged.

For the laboratory condition, we kept 7 of the musical movements played during the concert at the Saint-Germain Church, i.e. the pieces that showed the best Cronbach's alpha between the participants who attended the concert (Figure 3).
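The selection criterion was thus the consistency of the dynamic judgments across concert participants. A common way of applying Cronbach's alpha to continuous ratings, assumed here rather than specified in the text, is to treat each participant's time series as an "item" and each time frame as an observation:

    import numpy as np

    def cronbach_alpha(judgments):
        # judgments: (n_participants, n_timeframes); each participant is treated
        # as an "item", each time frame as an observation.
        k = judgments.shape[0]
        item_vars = judgments.var(axis=1, ddof=1)      # per-participant variance
        total_var = judgments.sum(axis=0).var(ddof=1)  # variance of the summed series
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    # Example: 12 simulated raters tracking a common emotional profile with noise.
    rng = np.random.default_rng(0)
    profile = np.sin(np.linspace(0.0, 6.0, 1024))              # a 4'16" excerpt at 4 Hz
    ratings = profile + 0.3 * rng.standard_normal((12, 1024))
    print(round(cronbach_alpha(ratings), 2))                   # close to 1 here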

Figure 3. Cronbach's alpha of the dynamic judgments made during the live performance (Saint-Germain Church).

Consequently, the Bartok movements evaluated on the Sadness and Wonder GEMS dimensions during the concert were not evaluated in the laboratory condition. The sessions took place in a computer room at the University of Geneva, and headphones (Sennheiser HD 201) were used for the listening part of the task. The two studies were approved by the local ethics committee of the University of Geneva, and before the beginning of the experiments all participants filled out a consent form describing the experiment, the data processing, and the use of the data for publications.

Results

In order to investigate whether the emotions were indeed evaluated as more intense during the live performance than in the laboratory condition, we analysed the averages of the dynamic judgments. Figures 4 and 5 illustrate the strong agreement between participants regarding the emotion expressed by the music. The thin lines represent the normalized judgments of individual participants; the red line represents their average. The y-axis represents the intensity of the emotion expressed by the music (e.g. Peacefulness) and the x-axis represents time.
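Assuming that "normalized" refers to per-participant z-scoring, which is consistent with the averaged z-scores reported in Figure 6, the overlays of Figures 4 and 5 can be computed from the raw traces as follows (the function name is ours):

    import numpy as np

    def normalize_and_average(judgments):
        # judgments: (n_participants, n_timeframes) raw intensity traces.
        # Returns the per-participant z-scored traces (thin lines of Figures 4-5)
        # and their group mean (the red average line).
        mean = judgments.mean(axis=1, keepdims=True)
        std = judgments.std(axis=1, ddof=1, keepdims=True)
        z = (judgments - mean) / std
        return z, z.mean(axis=0)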

Figure 4. Evolution profile of the judgments of the emotion expressed by the music in terms of Power during the live performance: Schumann, String Quartet No. 3 in A major, op. 41, Assai agitato (II).

Figure 5. Evolution profile of the judgments of the emotion expressed by the music in terms of Peacefulness during the live performance: Schumann, String Quartet No. 3 in A major, op. 41, Adagio molto (III).

In order to test for significant differences between the two conditions, we performed systematic cluster-corrected statistical analyses using Fieldtrip functions under Matlab, as commonly used to compare time series of event-related electrophysiological components (Oostenveld, Fries, Maris, & Schoffelen, 2011). For this analysis we applied a baseline correction over the first 100 time frames (25 seconds) in order to take into account the large variance at the beginning of the judgment.

For each musical excerpt, all judgments performed during the live performance (12 participants) were compared with the judgments performed on the recordings in the laboratory using headphones (28 participants). We ran 500 permutations in order to estimate significant clusters for each musical excerpt. Figure 6 depicts the results of the comparison of the two contexts for two musical excerpts. The average judgments made during the live performance/concert are represented in red, and those of the laboratory condition in black. The results clearly show that the attribution of emotion is more intense during the concert than in the laboratory condition.
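Fieldtrip implements this procedure as a cluster-based permutation test. The sketch below is a minimal numpy re-implementation of the same logic for two independent groups of judgment time series, not the Fieldtrip code used in the study, and the cluster-forming threshold is an assumed default: baseline-correct each trace, compute a pointwise t-statistic, form suprathreshold clusters, and compare each observed cluster mass with the distribution of maximal cluster masses obtained by permuting the group labels.

    import numpy as np
    from scipy import stats

    BASELINE = 100  # first 100 time frames (25 s at 4 Hz), as in the analysis above

    def baseline_correct(x):
        # Subtract each trace's mean over the baseline window.
        return x - x[:, :BASELINE].mean(axis=1, keepdims=True)

    def pointwise_t(a, b):
        # Independent-samples t-statistic at every time frame.
        return stats.ttest_ind(a, b, axis=0).statistic

    def cluster_masses(tvals, thresh):
        # Contiguous runs where |t| exceeds the cluster-forming threshold,
        # returned as (start, end, summed |t| mass) triples.
        above = np.abs(tvals) > thresh
        clusters, start = [], None
        for i, flag in enumerate(above):
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                clusters.append((start, i, float(np.abs(tvals[start:i]).sum())))
                start = None
        if start is not None:
            clusters.append((start, len(tvals), float(np.abs(tvals[start:]).sum())))
        return clusters

    def cluster_permutation_test(live, lab, n_perm=500, thresh=2.0, seed=0):
        # live: (12, n_frames) and lab: (28, n_frames) judgment time series.
        rng = np.random.default_rng(seed)
        live, lab = baseline_correct(live), baseline_correct(lab)
        observed = cluster_masses(pointwise_t(live, lab), thresh)
        pooled = np.vstack([live, lab])
        n_live = live.shape[0]
        null_max = np.zeros(n_perm)
        for p in range(n_perm):
            idx = rng.permutation(pooled.shape[0])  # shuffle group labels
            t_perm = pointwise_t(pooled[idx[:n_live]], pooled[idx[n_live:]])
            masses = [m for _, _, m in cluster_masses(t_perm, thresh)]
            null_max[p] = max(masses, default=0.0)
        # Cluster-corrected p: share of permutations whose largest cluster
        # is at least as massive as the observed cluster.
        return [(s, e, float((null_max >= m).mean())) for s, e, m in observed]

Applied to the 12 live and 28 laboratory traces of a given excerpt, the returned (start, end, p) triples would correspond to the blue segments of Figure 6, retained when p < .05 after cluster correction.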

Figure 6. Averaged judgments for two musical excerpts and significant differences based on permutation tests. (A) The upper panel depicts the averaged z-scores for the Bartok String Quartet No. 4 in C major, Sz 91, Allegro (I), judged dynamically during the live performance (in red) vs. the laboratory condition (in black) on the Power dimension. Blue lines correspond to significant differences (cluster corrected) between the two dynamic judgments. The lower panel depicts the statistical comparisons with the threshold-corrected p-value at .05. (B) The upper panel depicts the averaged z-scores for the Schumann String Quartet No. 3 in A major, op. 41, Adagio molto (III), judged dynamically during the live performance (in red) vs. the laboratory condition (in black) on the Peacefulness dimension. Blue lines correspond to significant differences (cluster corrected) between the two dynamic judgments. The lower panel depicts the statistical comparisons with the threshold-corrected p-value at .05.

The other results are available in Appendix A.

Conclusion

The aims of the present study were i) to test the method of dynamic judgment in an ecological context, i.e. a live concert, and ii) to compare the average dynamic judgments made during the live performance with those made in the laboratory condition. The first objective was achieved despite the imposing technical set-up. The participants showed strong agreement in their dynamic emotional judgments. Indeed, Cronbach's alpha revealed rather satisfactory scores except for two movements of the Bartok quartet (Prestissimo and Non troppo lento), evaluated on the Wonder and Sadness GEMS dimensions respectively. The attribution of these two GEMS dimensions is debatable, but it was quite obvious that it would not have been possible to ask the participants to evaluate the musical pieces of this particular quartet on the same GEMS dimensions again. The second objective of this study was the comparison between the dynamic judgments made during a live performance and those made in a laboratory condition. As pointed out by Scherer and Zentner (2001), the emotional experience related to music results from the combination of structural, performance, listener and contextual features. Regarding this last factor, the authors insist more specifically on the importance of the place (church, hall, outdoors), the type of event (wedding, funeral, party), whether listening is uninterrupted or disturbed, and the listening medium (television, radio, headphones). Our results showed that, in general, the intensity of the emotion perceived in the music was higher during the live performance, supporting the proposition made by Scherer and Zentner (2001).

These results are quite dependent on the musical excerpts, and in the present case we had to adapt to the programme of the concert. Moreover, a multitude of factors that we could not control may have had an impact, such as the complicity between the musicians, their gestures (related to the concept of mimicry), the lights, the materials, and the aesthetic features of the venue. Regarding the limits and perspectives of this study, we can again highlight the large and imposing computer set-up, which made it impossible to have a sample of more than 12 participants in the concert context. For future research, the use of tablets would be more convenient and perhaps less disturbing for the participants. A second limitation concerns participant fatigue, due to the length of the concert. As for perspectives, these results should encourage future research to focus on live experiments, all the more so for the study of subjective feeling, because this kind of experiment allows us to better understand the complexity of the emotional phenomena related to music. Finally, one of the main advantages of this study is that the musical pieces were evaluated in full; this will allow scientists to better understand, for example, novelty and global expectancies, which are key features in the attribution of emotional characteristics to music but are often difficult to capture in the laboratory because of the limited duration of the excerpts used there.

REFERENCES

Brattico, E., Brattico, P., & Jacobsen, T. (2010). Les origines du plaisir esthétique de la musique : examen de la littérature existante. In I. Deliège, O. Vitouch, & O. Ladinig (Eds.), Musique et évolution : théorie, débats, synthèses. Editions Mardaga.

Chapin, H., Jantzen, K., Kelso, J. A. S., Steinberg, F., & Large, E. (2010). Dynamic emotional and neural responses to music depend on performance expression and listener experience. PLoS ONE, 5(12).

Freeman, W. J. (2000). A neurobiological role of music in social bonding. In N. L. Wallin, B. Merker, & S. Brown (Eds.), The origins of music. Cambridge, MA: MIT Press.

Fritz, T., Jentschke, S., Gosselin, N., Sammler, D., Peretz, I., Turner, R., Friederici, A. D., & Koelsch, S. (2009). Universal recognition of three basic emotions in music. Current Biology, 19.

Gallese, V. (2006). Intentional attunement: A neurophysiological perspective on social cognition and its disruption in autism. Brain Research, 1079.

Geissmann, T. (2000). Gibbon songs and human music from an evolutionary perspective. In N. L. Wallin, B. Merker, & S. Brown (Eds.), The origins of music. Cambridge, MA: MIT Press.

Grewe, O., Katzur, B., Kopiez, R., & Altenmüller, E. (2010). Chills in different sensory domains: Frisson elicited by acoustical, visual, tactile and gustatory stimuli. Psychology of Music, 39(2).

Huron, D. (2003). Is music an evolutionary adaptation? In I. Peretz & R. Zatorre (Eds.), The cognitive neuroscience of music. Oxford: Oxford University Press.

Juslin, P. (2000). Cue utilization in communication of emotion in music performance: Relating performance to perception. Journal of Experimental Psychology: Human Perception and Performance, 26.

Juslin, P., Liljeström, S., Västfjäll, D., & Lundqvist, L. (2010). How does music evoke emotions? Exploring the underlying mechanisms. In P. Juslin & J. Sloboda (Eds.), Handbook of music and emotion: Theory, research, applications. Oxford, England: Oxford University Press.

Koelsch, S., Kilches, S., Steinbeis, N., & Schelinski, S. (2008). Effects of unexpected chords and of performer's expression on brain responses and electrodermal activity. PLoS ONE, 3(7), e2631.

Koelsch, S. (2015). Music-evoked emotions: Principles, brain correlates, and implications for therapy. Annals of the New York Academy of Sciences, 1337.

Lerdahl, F., & Krumhansl, C. (2007). Modelling tonal tension. Music Perception, 24.

Livingstone, S., & Thompson, W. (2010). Emergence de la musique et théorie de l'esprit. In I. Deliège, O. Vitouch, & O. Ladinig (Eds.), Musique et évolution : théorie, débats, synthèses. Editions Mardaga.

Nagel, F., Kopiez, R., Grewe, O., & Altenmüller, E. (2007). EMuJoy: Software for continuous measurement of perceived emotions in music. Behavior Research Methods, 39.

Quignard, P. (1997). La haine de la musique. Paris: Gallimard.

Rickard, N. S. (2004). Intense emotional responses to music: A test of the physiological arousal hypothesis. Psychology of Music, 32(4).

Rizzolatti, G., Fadiga, L., Gallese, V., & Fogassi, L. (1996). Premotor cortex and the recognition of motor actions. Cognitive Brain Research, 3.

Scherer, K. (2001). Appraisal considered as a process of multilevel sequential checking. In K. Scherer, A. Schorr, & T. Johnstone (Eds.), Appraisal processes in emotion: Theory, methods, research. New York: Oxford University Press.

Scherer, K. R., & Zentner, M. (2001). Emotional effects of music: Production rules. In P. Juslin & J. Sloboda (Eds.), Music and emotion: Theory and research. Oxford: Oxford University Press.

Scherer, K., & Coutinho, E. (2013). How music creates emotion: A multifactorial process approach. In T. Cochrane, B. Fantini, & K. Scherer (Eds.), The emotional power of music. Oxford: Oxford University Press.

Schubert, E. (2001). Continuous measurement of self-report emotional response to music. In P. Juslin & J. Sloboda (Eds.), Music and emotion: Theory and research. Oxford, England: Oxford University Press.

Schubert, E. (2004). Modeling perceived emotion with continuous musical features. Music Perception, 21.

Thompson, W., Graham, P., & Russo, F. (2005). Seeing music performance: Visual influences on perception and experience. Semiotica, 156.

Vines, B., Krumhansl, C., Wanderley, M., Nuzzo, R., & Levitin, D. (2004). Performance gestures of musicians: What structural and emotional information do they convey? In A. Camurri & G. Volpe (Eds.), Gesture-based communication in human-computer interaction (vol. 2915). Berlin/Heidelberg: Springer.

Vieillard, S., Peretz, I., Gosselin, N., Khalfa, S., Gagnon, L., & Bouchard, B. (2008). Happy, sad, scary and peaceful musical excerpts for research on emotions. Cognition & Emotion, 22.

Vitouch, O. (2001). When your ear sets the stage: Musical context effects in film perception. Psychology of Music, 29.

Zentner, M., Grandjean, D., & Scherer, K. R. (2008). Emotions evoked by the sound of music: Characterization, classification, and measurement. Emotion, 8(4).

Appendix A

Results of the averaged judgments of the remaining musical excerpts (live performance vs. laboratory condition) and significant differences based on permutation tests. Panel titles:
Bartok, Allegretto pizzicato - Tension
Bartok, Allegro molto - Tension
Schumann, Andante espressivo - Allegro molto - Wonder
Schumann, Assai agitato - Power
Schumann, Finale - Allegro molto vivace - Joyful activation
