A prevalent conceptual metaphor is the association of the concepts of good and evil with brightness and darkness, respectively. Music cognition, like metaphor, is possibly embodied, yet no study has addressed whether musical emotion can modulate brightness judgment in a metaphor-consistent fashion. In three separate experiments, participants judged the brightness of a grey square that was presented after a short excerpt of emotional music. The results of Experiment 1 showed that short musical excerpts are effective emotional primes that cross-modally influence brightness judgment of visual stimuli. Grey squares were consistently judged as brighter after listening to music with a positive valence, as compared to music with a negative valence. The results of Experiment 2 revealed that the bias in brightness judgment does not require an active evaluation of the emotional content of the music. By applying a different experimental procedure in Experiment 3, we showed that this brightness judgment bias is indeed a robust effect. Altogether, our findings demonstrate a powerful role of musical emotion in biasing brightness judgment, and that this bias is aligned with the metaphor viewpoint.
Citation: Bhattacharya J, Lindsen JP (2016) Music for a Brighter World: Brightness Judgment Bias by Musical Emotion. PLoS ONE 11(2): e0148959. doi:10.1371/journal.pone.0148959
Editor: Alessio Avenanti, University of Bologna, ITALY
Received: April 21, 2014; Accepted: January 26, 2016; Published: February 10, 2016
Copyright: © 2016 Bhattacharya, Lindsen. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The data are available at: https://figshare.com/s/e877c76e9acb17710cf2.
Funding: The research was supported by the EPSRC, UK (Research Grant EP/H01294X) and by the European Commission (Grant Agreement No. 612022). This publication reflects the views only of the authors, and the funders cannot be held responsible for any use that may be made of the information contained therein. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Throughout history and across cultures, the concepts of good and evil are associated with brightness and darkness, respectively. Examples of such associations can be found in everyday phrases like ‘look on the bright side’ and ‘the forces of darkness’, depictions of heaven and hell, literary descriptions of happiness and depression, and rituals and attire related to weddings and funerals. This linking of one idea to another is known as a conceptual metaphor, which is thought to aid in understanding and communicating abstract concepts (e.g. good or evil) by applying knowledge of more concrete or physical concepts (e.g. brightness or darkness).
Persistent metaphorical associations such as these have been shown to affect cognition and sensory experience (see for a review). For example, smiling faces appear brighter than frowning faces, grey squares are perceived as brighter after evaluating positive words than after negative words, and remembering an ethical deed makes the immediate surroundings appear brighter than remembering an unethical deed (but see also). These findings suggest that metaphors link brightness judgment to emotion.
A stimulus that has been successfully and reliably used to induce emotions and affective states is music. More than 2500 years ago, the celebrated Chinese philosopher Confucius remarked, "Music produces a kind of pleasure which human nature cannot do without." Human civilizations have undergone a sea change since then, yet the statement stands as bold and true as ever. We spend billions of dollars annually on music, and in every society, music is experienced and consumed by most people in their everyday lives and in diverse situations [8, 9]. A considerable amount of music experience is deliberate, and emotions appear prominently in people’s reported motives for listening to music. The experience of music listening is mediated by many factors, including musical and cultural background, musical preferences, familiarity, personality, and the social context. Yet there is recent evidence that certain basic emotions, such as happiness and sadness, expressed by musical stimuli are universally recognized. There may also be a systematic relationship between musical profiles (e.g., crescendos) and psychophysical autonomic responses, which are not necessarily under the listener’s cognitive control.
Therefore, it is not surprising that recent research shows that musical emotion can alter our decision-making processes (see for a review). For example, ratings of the taste of wine reflected the emotional connotation of the background music. Emotional judgments of facial stimuli were biased towards the emotional valence of the musical primes, and emotional judgments of complex visual stimuli were biased towards the emotional arousal of the musical primes. Further, Riener and colleagues showed that participants led into a music-induced sad mood while standing at the bottom of a steep hill tended to overestimate its steepness, much like participants under physical stress. Furthermore, in an affective priming paradigm using consonant and dissonant chords as priming stimuli and emotional words as target stimuli, Sollberger and colleagues showed that the target words were evaluated faster and more accurately for affectively congruent prime-target pairs than for incongruent ones.
However, it is not yet known whether musical emotion can modulate brightness judgment in a metaphor-consistent fashion. Hence, we investigated the effect of musical emotion on brightness judgment. In three separate experiments, participants judged the brightness of a grey square that was presented after an excerpt of emotional music. Although in previous research music was often used as a background stimulus with relatively long exposure (on the order of minutes), we assumed that even relatively short (on the order of seconds) musical excerpts can serve as effective emotional primes that influence brightness judgment. Based on the prevalent metaphorical mapping (positive = bright; negative = dark), we predicted that the emotional valence of the musical excerpt systematically affects brightness judgment, i.e. a positive music excerpt biases judgment towards brighter while a negative music excerpt biases judgment towards darker.
Materials and Methods
All participants gave written informed consent before the start of the experiment. The experimental protocol followed the guidelines of the Declaration of Helsinki and was approved by the Ethics Committee of the Department of Psychology at Goldsmiths.
For all three experiments, we used 56 short musical excerpts as primes, previously validated by Vieillard and colleagues as conveying four target emotions: happy, sad, scary, and peaceful. These four types of musical primes could be discriminated along the two principal dimensions of Russell's circumplex model of affect: arousal and valence. Musical excerpts with a high rating on both valence and arousal were labelled as happy, a low rating on both valence and arousal was taken to indicate sadness, a high rating on valence combined with a low rating on arousal was considered to indicate peacefulness, and a low rating on valence combined with a high arousal rating was thought to indicate scariness. There were 14 musical excerpts in each emotional category. The length of the segments was between 9 s and 17 s and was matched across categories. The primes were generated as MIDI and synthesized using a piano timbre.
Analysis of Musical Stimuli
As two auditory attributes, pitch and acoustic brightness, are conceptually related to visual brightness, we performed two types of analysis of the musical primes. For pitch, we performed a MIDI-based analysis using the function readmidi.m available at kenschutte.com/midi. For acoustic brightness, we performed an acoustic analysis using the function mirbrightness.m available in the MIR Toolbox.
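As an illustration of the acoustic brightness measure: mirbrightness in the MIR Toolbox computes the proportion of spectral energy above a cutoff frequency (1500 Hz by default). The following is a minimal Python sketch of such a measure, not the toolbox code itself; the function name and signal handling are our own illustration, assuming a mono signal array and its sampling rate.

```python
import numpy as np

def spectral_brightness(signal, sr, cutoff_hz=1500.0):
    """Proportion of spectral energy above cutoff_hz (a brightness
    proxy in the spirit of MIR Toolbox's mirbrightness; the 1500 Hz
    default matches that toolbox's documented default cutoff)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    energy = spectrum ** 2
    total = energy.sum()
    if total == 0:
        return 0.0
    return energy[freqs >= cutoff_hz].sum() / total

# Sanity check: a 3000 Hz sine has nearly all of its energy above
# the cutoff, while a 440 Hz sine has nearly none.
sr = 44100
t = np.arange(sr) / sr
high = np.sin(2 * np.pi * 3000 * t)
low = np.sin(2 * np.pi * 440 * t)
print(spectral_brightness(high, sr) > 0.9)  # True
print(spectral_brightness(low, sr) < 0.1)   # True
```

On a real excerpt, brightness would typically be computed frame by frame and averaged, which is what the toolbox does internally.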
Twenty adults (11 females, mean age 27 years) with normal or corrected vision and normal hearing volunteered to take part in Experiment 1. None of the participants were professional musicians or music students.
On each of the 56 trials, participants were presented with two grey squares with identical dimensions, background and locations, and a musical prime. The first square was presented for 1 s, after which it was replaced by a fixation cross and the musical prime was played via headphones. After the musical excerpt was finished, participants were asked to rate the valence and arousal of the musical prime on two 7-point Likert scales (valence scale from 1: very unpleasant to 7: very pleasant; arousal scale from 1: very relaxing to 7: very stimulating) presented sequentially. Immediately afterwards, a second grey square was presented for 1 s. The participants were asked to judge whether the second square was brighter or darker than the first by a left or right key press. In the instructions, the participants were told that the change in brightness between the two squares was small but detectable (similar instruction as in ), while in reality the same square was presented twice. The order of the musical primes was randomized over participants.
A brightness judgment bias was calculated as the deviation of the average proportion of brighter/darker judgments from 0.5. Positive (negative) deviation from 0.5 would indicate that the second square was judged as brighter (darker) than the first one. The brightness judgment bias was analysed with a 2 (valence: positive vs. negative) x 2 (arousal: high vs. low) within-subjects ANOVA. This ANOVA was performed twice: once based on the original classification of the primes by Vieillard and colleagues, and once based on the subjective ratings of valence and arousal of each participant in this experiment. The latter classification was done by using a median split on the individual ratings. Effects were considered statistically significant at p < 0.05. Further, all reported effect sizes (r) were calculated after Rosenthal [29, 30] as follows: r = √(F/(F + df_error)). The r values were interpreted after Cohen as follows: small effect, r = 0.10; medium effect, r = 0.30; large effect, r = 0.50.
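The Rosenthal effect-size formula, r = √(F/(F + df_error)), can be sketched in a few lines of Python. This is an illustration using the F values reported in the Results sections, not analysis code from the study; the function names are our own.

```python
import math

def effect_size_r(F, df_error):
    """Effect size r from a one-df F statistic, after Rosenthal:
    r = sqrt(F / (F + df_error))."""
    return math.sqrt(F / (F + df_error))

def cohen_label(r):
    """Interpretation thresholds after Cohen (1988)."""
    if r >= 0.50:
        return "large"
    if r >= 0.30:
        return "medium"
    if r >= 0.10:
        return "small"
    return "negligible"

# The reported valence effects reproduce the paper's r values:
print(round(effect_size_r(5.65, 19), 2))   # Experiment 1: 0.48
print(round(effect_size_r(7.67, 19), 2))   # Experiment 2: 0.54
print(round(effect_size_r(12.12, 19), 2))  # Experiment 3: 0.62
```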
Results and Discussion.
The second grey square was judged to be brighter more often following musical primes with positive valence, as compared to primes with negative valence, F(1,19) = 5.65, p < .03. The effect size was r = 0.48. This effect was mostly driven by the happy musical primes (Fig 1). There was no significant effect of arousal, F(1,19) = 0.97, p = .34, and no significant interaction between valence and arousal, F(1,19) = 2.71, p = .12.
Brightness judgment biases, separately for happy, peaceful, scary, and sad excerpts. Excerpts were categorized based on the objective intended emotion or based on the subjective ratings of each participant. Error bars indicate SE of the mean.
When the musical primes were categorized based on the subjective ratings of our participants, we observed a similar pattern (Fig 1). The grey squares were judged as brighter after musical primes subjectively rated as having positive valence, F(1,19) = 4.67, p < .05. The effect size was r = 0.44. No significant effect of subjectively rated arousal was observed, F(1,19) = 1.02, p = .33, and the interaction between valence and arousal was marginal, F(1,19) = 3.38, p = .08.
To further investigate the relation between subjective ratings of valence and arousal, and the bias in judged brightness, the average brightness judgment bias was calculated separately for every level on the 7-point Likert scales (Fig 2). Across participants, we observed a linear relationship: the higher the ratings on arousal and valence of musical primes, the higher the bias in brightness judgment. Further, this bias was most conspicuous at the extreme positive end of the scales, i.e. high values of arousal and valence.
Average brightness judgment bias separately for every level on the 7-point Likert scales of Valence and Arousal.
It should be noted here that, on average, the ratings of valence and arousal from our participants are in good agreement with the original classification into the four emotional categories (see Table 1).
The table lists the average valence and arousal ratings for the four different types of musical excerpts (happy, sad, scary and peaceful) used as primes. (SE of the mean).
The results of Experiment 1 show that happy music leads a subsequently presented grey square to be judged as brighter than the identical grey square presented earlier. Further, the brightness judgment bias remained robust irrespective of whether the musical excerpts were pre-categorized as happy or subjectively perceived to be happy. Our results are in accordance with previous studies suggesting that metaphors link brightness judgment to emotion [3–5].
In Experiment 1, musical primes were affectively evaluated before participants provided a brightness judgment. In order to investigate whether such explicit affective evaluation of the musical primes was necessary for the brightness judgment bias to occur, we performed a second experiment which was similar to Experiment 1 but without any emotional evaluation of musical primes.
Twenty human adults (16 females, mean age 27 years) with normal or corrected vision and normal hearing volunteered to take part in Experiment 2. None of the participants had taken part in Experiment 1 and none were professional musicians or music students.
The same procedure as in Experiment 1 was used, but without the affective evaluation of musical primes, i.e. the ratings of valence and arousal. Participants were shown the second grey square immediately after the music finished and there was no explicit affective evaluation of the musical primes. It should be stressed here that our participants were naive to the experimental aim(s) and the word 'emotion' did not appear on the information sheet given to them prior to the experiment in order to minimize any explicit affective evaluation of the musical excerpts.
Results and Discussion.
The second grey square was judged to be brighter more often following musical primes pre-categorized as having positive valence, as compared to primes having negative valence, F(1,19) = 7.67, p < .025. The effect size was r = .54. Similar to the results of Experiment 1, happy musical primes biased judgment towards increased brightness. In contrast to the previous results, sad primes clearly biased judgment in the opposite direction, towards decreased brightness (Fig 3). The effect of arousal was marginally significant, F(1,19) = 4.04, p = .059, and the effect size was medium, r = 0.42. There was no significant interaction between valence and arousal, F(1,19) = 0.01, p = .94.
Brightness judgment biases, separately for happy, peaceful, scary, and sad excerpts. Error bars indicate SE of the mean.
The results of Experiment 2 mostly replicate those of Experiment 1: the valence of a musical prime affects brightness judgment. Listening to short segments of happy music leads a grey square to be judged as brighter, while listening to sad music leads it to be judged as darker. Explicit evaluation of the emotional content of the primes is not necessary for this brightness judgment bias to occur, suggesting that the music-induced brightness judgment bias may be an automatic effect. Merely listening to a short excerpt of happy or sad music affects subsequent brightness judgment.
In both Experiments 1 and 2, participants were presented with two grey squares, one before and one after the presentation of the musical prime, and they judged the brightness of the second grey square relative to that of the first. In order to investigate whether the brightness judgment bias is related more to the memory representation of the first grey square or to the brightness judgment of the second square, we performed a third experiment in which participants rated a grey square presented after a musical prime on an absolute scale.
Twenty adults (13 females, mean age 21 years) with normal or corrected vision and normal hearing volunteered to take part in Experiment 3. None of the participants had taken part in the previous two experiments.
There were two phases in this experiment. In the first phase, participants had to learn a grey scale with brightness varying from 1 (completely black) to 100 (completely white). In this learning phase, they were presented with squares of varying shades and were asked to give each square a brightness rating between 1 and 100 on the grey scale they had previously seen. Participants received immediate feedback on their performance. Learning was considered adequate when performance in the last 10 trials was reasonably close (within ±10 points) to the true luminance level, with a minimum of 30 trials. Once the learning criterion was met, in the second phase participants were presented with musical primes, and a single square (with shades varying across trials) was presented on each of the 56 trials immediately after the musical prime finished. Participants were asked to judge the brightness of the current square on the grey scale they had learnt earlier. Across trials, the brightness of the squares varied between 20 and 80 to allow for a bias towards both extremes of the spectrum. The order of the musical primes was randomized over participants.
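The learning criterion described above can be sketched as follows. This is an illustrative Python check with function and variable names of our own choosing, not code from the study.

```python
def learning_criterion_met(judged, true, window=10, tolerance=10, min_trials=30):
    """True once at least min_trials have been completed and the last
    `window` judgments all fall within ±tolerance of the true luminance
    level (the paper's criterion: last 10 trials within ±10 points,
    minimum 30 trials)."""
    if len(judged) < min_trials:
        return False
    recent = zip(judged[-window:], true[-window:])
    return all(abs(j - t) <= tolerance for j, t in recent)

# Hypothetical learner with a constant +5 error over 30 trials:
true_vals = list(range(20, 80, 2))        # 30 shades between 20 and 78
judged = [v + 5 for v in true_vals]       # always within tolerance
print(learning_criterion_met(judged, true_vals))            # True
print(learning_criterion_met(judged[:20], true_vals[:20]))  # False (< 30 trials)
```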
A brightness judgment bias was calculated as the difference between the real level of brightness on the grey scale and the judged level of brightness reported by the participants. At the participant level, this bias value was normalized with respect to the overall mean bias score across all four musical prime categories. Positive scores indicate a bias towards brighter judgments, while negative scores indicate a bias towards darker judgments.
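This bias computation can be sketched in Python. The sketch below is our own illustration with hypothetical trial data; it takes bias as judged minus true brightness so that positive values mean a brighter judgment, and centres the per-category means on the grand mean across the four prime categories, as in the normalization described above.

```python
from statistics import mean

def category_biases(trials):
    """trials: list of (category, judged, true) tuples.
    Returns mean bias (judged - true) per prime category, centred on
    the grand mean across categories, so positive = brighter."""
    by_cat = {}
    for cat, judged, true in trials:
        by_cat.setdefault(cat, []).append(judged - true)
    means = {c: mean(v) for c, v in by_cat.items()}
    grand = mean(means.values())
    return {c: m - grand for c, m in means.items()}

# Hypothetical data: squares after happy/peaceful primes judged
# brighter, after sad/scary primes darker:
trials = [
    ("happy", 66, 60), ("happy", 46, 40),
    ("sad", 54, 60), ("sad", 34, 40),
    ("peaceful", 62, 60), ("peaceful", 42, 40),
    ("scary", 58, 60), ("scary", 38, 40),
]
biases = category_biases(trials)
print(biases["happy"] > 0)  # True (brighter)
print(biases["sad"] < 0)    # True (darker)
```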
Results and Discussion.
The grey square was judged as brighter following a musical prime with positive valence and darker following a musical prime with negative valence (Fig 4), as indicated by the highly significant effect of valence, F(1,19) = 12.12, p < 0.005. The effect size was large, r = 0.62. There was no significant main effect of arousal, F(1,19) = 1.29, p = .27, and no significant interaction between valence and arousal, F(1,19) = 2.19, p = .155.
Brightness judgment biases, separately for happy, peaceful, scary, and sad excerpts. Error bars indicate SE of the mean.
These results confirm our earlier findings from Experiments 1 and 2, using a substantially different paradigm. Again, the valence of a short musical prime affects the perceived brightness of a subsequently presented grey square, with positive valence biasing judgment towards brighter responses and negative valence towards darker responses. Importantly, the results of Experiment 3 rule out any effect of memory-related processes on the brightness judgment, and show that it is the judgment of the grey square physically present at the time of judgment that is biased. Therefore, the reported brightness judgment bias may indeed be considered a robust effect.
Across three experiments, we showed that relatively short musical excerpts can be used as effective emotional primes that cross-modally influence brightness judgment of visual stimuli. Grey squares were consistently judged as brighter after listening to music with a positive valence, as compared to music with a negative valence. This influence is consistent with the prevalent metaphorical mapping (positive = bright; negative = dark). The results of Experiment 1 confirmed our prediction that even relatively short (~10–15 s) excerpts of music can communicate emotions strongly enough to bias visual judgment in a systematic fashion. The results of Experiment 2 revealed that the bias in brightness judgment does not require an active evaluation of the emotional content of the music. By applying a different experimental procedure in Experiment 3, we showed that this judgment bias is indeed a robust effect. Because the judgments of visual brightness were acquired explicitly, we cannot establish whether the reported bias is a high-level conceptual effect or a low-level perceptual effect. However, the bias is consistent with the metaphor viewpoint, and there are suggestions that metaphors are not arbitrarily formed but rather grounded in low-level embodied processes [2, 32]; confirming the perceptual account will require future electrophysiological studies examining early neural components of visual processing in relation to the reported brightness bias.
Note that of the four emotion categories (happy, peaceful, scary, and sad) studied here, musical excerpts associated with happy emotion produced the most consistent and robust brightness bias across the three experiments. Recently, Fritz and colleagues showed that happy or joyful music is more easily recognized than scary and sad music, and that happy emotions are universally recognized in music. If happy music indeed induces corresponding emotions in listeners more robustly and consistently than other emotions, this might explain our finding that the happy excerpts were most effective in biasing brightness judgment.
Unlike happy excerpts, the scary and sad musical excerpts were associated with the most variable brightness judgment bias. This might be because scary and sad emotions are more easily confused in music. Although the brightness bias varied over experiments, post-hoc analyses showed that the differences for scary and sad primes between Experiments 1 and 2, which allow for a straightforward comparison, were not statistically significant (sad: t(38) = 1.27, p = .211; scary: t(38) = 0, p = 1). This indicates that the observed variation is most likely due to chance, and does not reflect a systematic difference due to, for example, the differences in experimental procedure between Experiments 1 and 2.
Although the current results suggest a clear link between musical emotion and brightness judgment, they do not establish that felt emotion is the crucial link in this effect; there might be a direct, unmediated association between certain acoustical or musical features and brightness perception. In general, we believe that the link between musical emotion and brightness judgment is mediated by emotion, since it is specific combinations of acoustic and musical features that typically make emotions in music recognizable, not the presence of any single acoustic feature. For example, high pitch has been associated with brighter shades by both humans [30, 34, 35] and chimpanzees. However, in our pool of musical excerpts, happy musical primes have an average pitch (MIDI pitch encoding, 65.60; see Methods) comparable to that of peaceful primes (62.76), and scary primes have a pitch range (53) comparable to that of sad primes (49). Further, acoustic brightness might conceptually prime visual brightness, and the acoustic brightness of a major chord is usually higher than that of a minor chord. Though our happy musical primes have significantly higher acoustic brightness than the sad primes (t(13) = 8.71, p < .01), they do not differ significantly from the peaceful primes (t(13) = 1.18, p = .26). What makes some music happy and other music convey a different emotion is a certain combination of pitch and pitch range with other parameters such as tempo, pitch variability, speed of tone attack, direction of contour, and acoustic brightness. Therefore, simple cross-modal correspondence is unlikely to be the principal mechanism of the reported brightness judgment bias, though we believe that future experiments with closely controlled parameters for each emotional category could offer novel insight into the musical-emotion-induced visual judgment bias.
It is noteworthy that, in contrast to how music can influence brightness judgment, a recent study investigated how brightness can influence the music listening experience: greater physiological changes were observed during music listening in a dim lighting condition than in a standard lighting condition.
Finally, our results need to be discussed in the wider framework of musical semantics, i.e. how music can give rise to meaning. Although it has long been disputed whether music has meaning [37, 38], a growing body of recent research provides evidence that musical information is perceived by humans as meaningful and engages neural mechanisms underlying the processing of semantic information in the brain (see for a review). One possible mechanism by which meaning is communicated by music is its emotional content [41, 42]. Musical emotion is recognized quickly, and there are suggestions that this recognition almost automatically activates concepts semantically related to the perceived emotion, akin to the spreading-activation mechanism underlying priming [45, 46]. Cross-modal priming paradigms using music as primes have demonstrated that the N400, an event-related potential component indexing semantic incongruity, is robustly elicited by mismatches in emotional meaning between music and language stimuli [44, 48–50]. We propose that the meaning arising from the emotional content of a musical prime interacts with the concepts elicited metaphorically by the visual stimuli, and that this interaction is most robust for happy music.
Altogether, our findings demonstrate a powerful role of musical emotion in modulating visual brightness judgment in a metaphor-consistent fashion, the essence of which was eloquently captured by Leonard Bernstein, who concluded, "… music is a totally metaphorical language."
The authors are thankful to O. Williamson and D. Ellery for their assistance with data collection. The results of this study were presented at the ICMPC-ESCOM 2012, Thessaloniki, Greece.
Conceived and designed the experiments: JB. Performed the experiments: JPL. Analyzed the data: JPL JB. Contributed reagents/materials/analysis tools: JPL JB. Wrote the paper: JB JPL.
- 1. Lakoff G, Johnson M. Metaphors we live by. Chicago: University of Chicago Press; 1980. xiii, 242 p.
- 2. Landau MJ, Meier BP, Keefer LA. A metaphor-enriched social cognition. Psychological bulletin. 2010;136(6):1045–67. Epub 2010/09/09. doi: 10.1037/a0020970 pmid:20822208.
- 3. Song H, Vonasch AJ, Meier BP, Bargh JA. Brighten up: Smiles facilitate perceptual judgment of facial lightness. Journal of Experimental Social Psychology. 2012;48(1):450–2. doi: 10.1016/j.jesp.2011.10.003
- 4. Meier BP, Robinson MD, Crawford LE, Ahlvers WJ. When "light" and "dark" thoughts become light and dark responses: affect biases brightness judgments. Emotion. 2007;7(2):366–76. Epub 2007/05/23. doi: 10.1037/1528-3542.7.2.366 pmid:17516814.
- 5. Banerjee P, Chatterjee P, Sinha J. Is it light or dark? Recalling moral behavior changes perception of brightness. Psychol Sci. 2012;23(4):407–9. Epub 2012/03/08. doi: 10.1177/0956797611432497 pmid:22395128.
- 6. Brandt MJ, IJzerman H, Blanken I. Does Recalling Moral Behavior Change the Perception of Brightness? Social Psychology. 2014;45:246–52. doi: 10.1027/1864-9335/a000191
- 7. Juslin PN, Sloboda JA. Handbook of music and emotion: theory, research, applications. Oxford: Oxford University Press; 2011. xiv, 975 p.
- 8. Hargreaves DJ, North AC. The social psychology of music. Oxford; New York: Oxford University Press; 1997. xv, 319 p.
- 9. DeNora T. Music in everyday life. Cambridge: Cambridge University Press; 2000.
- 10. Sloboda J, Lamont A, Greasley A. Choosing to hear music: Motivation, process, and effect. In: Hallam S, Cross I, Thaut M, editors. The Oxford Handbook of Music Psychology. Oxford: Oxford University Press; 2009. p. 431–40.
- 11. Moore KS. A systematic review on the neural effects of music on emotion regulation: implications for music therapy practice. Journal of music therapy. 2013;50(3):198–242. Epub 2014/02/27. pmid:24568004. doi: 10.1093/jmt/50.3.198
- 12. Kemal Arikan M, Devrim M, Oran Ö, Inan S, Elhih M, Demiralp T. Music effects on event-related potentials of humans on the basis of cultural environment. Neurosci Lett. 1999;268(1):21–4. pmid:10400068 doi: 10.1016/s0304-3940(99)00372-9
- 13. Kreutz G, Ott U, Teichmann D, Osawa P, Vaitl D. Using music to induce emotions: Influences of musical preference and absorption. Psychology of Music. 2008;36(1):101–26. doi: 10.1177/0305735607082623.
- 14. Orr MG, Ohlsson S. Relationship between complexity and liking as a function of expertise. Music Perception. 2005;22(4):583–611. doi: 10.1525/mp.2005.22.4.583
- 15. Rentfrow PJ, Gosling SD. The Do Re Mi's of Everyday Life: The Structure and Personality Correlates of Music Preferences. Journal of Personality and Social Psychology. 2003;84(6):1236–56. pmid:12793587 doi: 10.1037/0022-3514.84.6.1236
- 16. Baraldi FB. All the Pain and Joy of the World in a Single Melody: A Transylvanian Case Study on Musical Emotion. Music Perception. 2009;26(3):257–61. doi: 10.1525/mp.2009.26.3.257.
- 17. Fritz T, Jentschke S, Gosselin N, Sammler D, Peretz I, Turner R, et al. Universal Recognition of Three Basic Emotions in Music. Current Biology. 2009;19(7):573–6. doi: 10.1016/j.cub.2009.02.058. pmid:19303300
- 18. Bernardi L, Porta C, Casucci G, Balsamo R, Bernardi NF, Fogari R, et al. Dynamic interactions between musical, cardiovascular, and cerebral rhythms in humans. Circulation. 2009;119(25):3171–80. Epub 2009/07/02. pmid:19569263. doi: 10.1161/circulationaha.108.806174
- 19. Marin MM, Bhattacharya J. Music induced emotions: some current issues and cross-modal comparisons. In: Hermida J, Ferrero M, editors. Music Education: Nova Science Publishers, Inc.; 2010. p. 1–38.
- 20. North AC. The effect of background music on the taste of wine. British Journal of Psychology. 2012;103(3):293–301. doi: 10.1111/j.2044-8295.2011.02072.x. pmid:22804697
- 21. Logeswaran N, Bhattacharya J. Crossmodal transfer of emotion by music. Neuroscience letters. 2009;455(2):129–33. Epub 2009/04/17. doi: 10.1016/j.neulet.2009.03.044 pmid:19368861.
- 22. Marin MM, Gingras B, Bhattacharya J. Crossmodal transfer of arousal, but not pleasantness, from the musical to the visual domain. Emotion. 2012;12(3):618–31. Epub 2011/08/24. doi: 10.1037/a0025020 pmid:21859191.
- 23. Riener C, Stefanucci JK, Proffitt D, Clore GL. Mood and the perception of spatial layout. 44th Annual Meeting of the Psychonomic Society; Vancouver, BC, Canada; 2003.
- 24. Proffitt D. Embodied perception and the economy of action. Perspect Psychol Sci. 2006;1:110–22. doi: 10.1111/j.1745-6916.2006.00008.x. pmid:26151466
- 25. Sollberger B, Reber R, Eckstein D. Musical chords as affective priming context in a word-evaluation task. Music Perception. 2003;20(3):263–82. Peer Reviewed Journal: 2003-03056-003. doi: 10.1525/mp.2003.20.3.263
- 26. Vieillard S, Peretz I, Gosselin N, Khalfa S, Gagnon L, Bouchard B. Happy, sad, scary and peaceful musical excerpts for research on emotions. Cognition & Emotion. 2008;22(4):720–52. doi: 10.1080/02699930701503567
- 27. Russell JA. A circumplex model of affect. Journal of Personality and Social Psychology. 1980;39(6):1161–78. doi: 10.1037/h0077714
- 28. Lartillot O, Toiviainen P, editors. A Matlab toolbox for musical feature extraction from audio. International Conference on Digital Audio Effects; 2007.
- 29. Rosenthal R. Meta-analytic procedures for social research. Rev. ed. Newbury Park, Calif.; London: Sage; 1991.
- 30. Ludwig VU, Adachi I, Matsuzawa T. Visuoauditory mappings between high luminance and high pitch are shared by chimpanzees (Pan troglodytes) and humans. Proceedings of the National Academy of Sciences of the United States of America. 2011;108(51):20661–5. Epub 2011/12/07. doi: 10.1073/pnas.1112605108 pmid:22143791; PubMed Central PMCID: PMC3251154.
- 31. Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale, NJ: L. Erlbaum Associates; 1988.
- 32. Gibbs RW, Lima PLC, Francozo E. Metaphor is grounded in embodied experience. Journal of Pragmatics. 2004;36(7):1189–210. doi: 10.1016/j.pragma.2003.10.009
- 33. Juslin PN, Laukka P. Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin. 2003;129(5):770–814. doi: 10.1037/0033-2909.129.5.770. pmid:12956543
- 34. Eitan Z, Timmers R. Beethoven's last piano sonata and those who follow crocodiles: cross-domain mappings of auditory pitch in a musical context. Cognition. 2010;114(3):405–22. Epub 2009/12/29. doi: 10.1016/j.cognition.2009.10.013 pmid:20036356.
- 35. Marks LE. On associations of light and sound: the mediation of brightness, pitch, and loudness. The American Journal of Psychology. 1974;87(1–2):173–88. Epub 1974/03/01. pmid:4451203. doi: 10.2307/1422011
- 36. Chong HJ. The effect of brightness on listener's perception of physiological state and mood of the music during listening. Music and Medicine. 2013;5(1):39–43. doi: 10.1177/1943862112470144
- 37. Stravinsky I. An autobiography. London: Calder and Boyars; 1975.
- 38. Kivy P. Music alone: philosophical reflections on the purely musical experience. Ithaca: Cornell University Press; 1990. xii, 226 p.
- 39. Koelsch S, Siebel WA. Towards a neural basis of music perception. Trends in Cognitive Sciences. 2005;9(12):578–84. Epub 2005/11/08. doi: 10.1016/j.tics.2005.10.001 pmid:16271503.
- 40. Koelsch S. Towards a neural basis of processing musical semantics. Physics of Life Reviews. 2011;8(2):89–105. Epub 2011/05/24. doi: 10.1016/j.plrev.2011.04.004 pmid:21601541.
- 41. Meyer LB. Emotion and meaning in music. Chicago; London: University of Chicago Press; 1956.
- 42. Arbib MA, editor. Language, music, and the brain: a mysterious relationship. Cambridge, MA: MIT Press; 2013.
- 43. Bigand E, Filipic S, Lalitte P. The time course of emotional responses to music. Annals of the New York Academy of Sciences. 2005;1060:429–37. Epub 2006/04/07. doi: 10.1196/annals.1360.036 pmid:16597797.
- 44. Steinbeis N, Koelsch S. Affective priming effects of musical sounds on the processing of word meaning. Journal of Cognitive Neuroscience. 2011;23(3):604–21. Epub 2009/11/21. doi: 10.1162/jocn.2009.21383 pmid:19925192.
- 45. Collins AM, Loftus EF. A spreading-activation theory of semantic processing. Psychological Review. 1975;82(6):407–28. doi: 10.1037/0033-295x.82.6.407
- 46. Spruyt A, Hermans D, Houwer JD, Eelen P. On the nature of the affective priming effect: Affective priming of naming responses. Social Cognition. 2002;20(3):227–56.
- 47. Kutas M, Hillyard SA. Reading senseless sentences: brain potentials reflect semantic incongruity. Science. 1980;207(4427):203–5. Epub 1980/01/11. pmid:7350657. doi: 10.1126/science.7350657
- 48. Daltrozzo J, Schon D. Is conceptual processing in music automatic? An electrophysiological approach. Brain Research. 2009;1270:88–94. Epub 2009/03/25. doi: 10.1016/j.brainres.2009.03.019 pmid:19306846.
- 49. Goerlich KS, Witteman J, Schiller NO, Van Heuven VJ, Aleman A, Martens S. The nature of affective priming in music and speech. Journal of Cognitive Neuroscience. 2012;24(8):1725–41. Epub 2012/03/01. doi: 10.1162/jocn_a_00213 pmid:22360592.
- 50. Goerlich KS, Witteman J, Aleman A, Martens S. Hearing feelings: affective categorization of music and speech in alexithymia, an ERP study. PLoS ONE. 2011;6(5):e19501. Epub 2011/05/17. doi: 10.1371/journal.pone.0019501 pmid:21573026; PubMed Central PMCID: PMC3090419.
- 51. Bernstein L. The unanswered question: six talks at Harvard. Cambridge, Mass.; London: Harvard University Press; 1976.