How Live Performance Moves the Human Heart

  • Haruka Shoda,

    ¤ Current address: Ritsumeikan Global Innovation Research Organization, Ritsumeikan University, Nojihigashi 1-1-1, Kusatsu, Shiga 525-8577, Japan

    Affiliation Department of Psychology, Hokkaido University, Sapporo, Hokkaido, Japan

  • Mayumi Adachi,

    Affiliation Department of Psychology, Hokkaido University, Sapporo, Hokkaido, Japan

  • Tomohiro Umeda

    Affiliation Nara Medical University, Kashihara, Nara, Japan


Abstract

We investigated how audience members’ physiological reactions differ as a function of listening context (i.e., live versus recorded music). Thirty-seven audience members were assigned to one of seven pianists’ performances and listened to his or her live performance of six pieces (fast and slow pieces by Bach, Schumann, and Debussy). Approximately 10 weeks after the live performance, each audience member returned to the same room and listened to recordings of the same pianist’s performances via speakers. We recorded the audience members’ electrocardiograms while they listened to the performances in both conditions, and analyzed their heart rates and the spectral features of heart-rate variability (i.e., HF/TF, LF/HF). Results showed that the audience’s heart rate was higher for the faster than for the slower piece only in the live condition. Compared with the recorded condition, the audience’s sympathovagal balance (LF/HF) was lower and their vagal nervous system (HF/TF) more activated in the live condition, which appears to suggest that sharing the ongoing musical moments with the pianist reduces the audience’s physiological stress. The results are discussed in terms of the audience’s superior attention and temporal entrainment to live performance.

Introduction

Live music performance offers a special experience that cannot be reproduced through speakers or headphones. This unique experience, often described as “communication” or “interaction”, has been studied empirically. For example, the “visual” aspects of live performance, even when presented as a video without sound, help the audience differentiate the performer’s intended levels of expressivity [1] and emotions [2, 3], enhancing the observer’s physiological reactions [4]. We investigated the effect of live performance on the audience’s physiology, not through a video but in a live context. By doing so, we tapped into a biological aspect of performer-to-audience communication.

Since the pioneering study by Krumhansl [5], researchers have investigated psychophysiological responses to recorded music, particularly with regard to the listener’s emotional experiences [6]. Previous studies have used a number of parameters such as heart rate (electrocardiogram), sweat (electrodermal activity), skin temperature, muscle tension (electromyogram), and salivary cortisol, which increase in accordance with the listener’s experience of emotional arousal [6, 7]. Such responses in the autonomic nervous system can be explained by brain activation. Blood and Zatorre [8] explored the listener’s heart rate, muscle tension, and respiration rate, as well as cerebral activity measured by positron emission tomography (PET), during the experience of “chills” (i.e., “shivers-down-the-spine”). They showed that experiencing chills increases cerebral blood flow in regions involved in reward/motivation (e.g., ventral striatum, midbrain, amygdala, orbitofrontal cortex, ventral medial prefrontal cortex), resulting in activation of the autonomic (particularly sympathetic) nervous system.

The listener’s physiological reactions while watching a video recording of a performance have also been explored. Chapados and Levitin [4] measured electrodermal activity (EDA) while listeners (or observers) were exposed to a video of a clarinet player in three modalities: audition-only, vision-only, and their combination. Higher EDAs were evident in the bimodal than in the unimodal conditions, suggesting that listening to music while watching a performer may facilitate the “rewarding” functions in humans. According to social psychologists, however, the presence of others generally decreases physiological arousal. For example, the spontaneous presence of others attenuates physiological reactivity (i.e., heart rate, blood pressure) during a cognitive task, suggesting that social context can reduce stress during a particular task [9].

Would such stress-reduction effects be observable in an audience during a live music performance, in which the audience and the performer physically share time and space? Or would the audience’s physiology be aroused during live performance, as found during exposure to a performer through a recorded video? The purpose of our study was to test these hypotheses by comparing the audience’s physiological experiences in two listening contexts: live (i.e., with the performer) and recorded (i.e., through speakers without the performer). More specifically, we assessed the audience’s heart rate and its variability across two ecologically valid listening contexts. It is common for music lovers to go to a concert and later buy a live recording of the same concert that they enjoyed so much; we tried to simulate this realistic situation in the study. State-of-the-art technology enabled us to measure the electrocardiograms of multiple audience members simultaneously without any cable connection to computers. Heart-rate measures, used most frequently of all physiological measures in music perception studies [6], enable us to assess both the sympathetic and the vagal nerve activities by computing spectral features of the heart-rate time series.

Method

Participants

Seven pianists (2 men, 5 women, 24–40 years old, M = 30.57, SD = 6.46), who held a music degree at the undergraduate and/or graduate level, participated in this study. A total of 118 undergraduate and graduate students (53 men, 65 women) participated as audience members, each attending one of the live performances. Due to the limited number of heart-rate sensors (8–10 sensors per session), electrocardiograms were obtained from a total of 58 audience members, selected randomly from the participants who consented to the physiological measurement. Of those, the data of 21 audience members were excluded because they failed to participate in both conditions (i.e., live and recorded contexts; see Procedure) or because insecure attachment of the sensor resulted in unreliable data. In the present study, we analyzed a sample of 37 audience members (16 men, 21 women, 18–26 years old, M = 20.59, SD = 2.06) who provided reliable data in all the conditions (i.e., six pieces in two listening contexts). Their years of musical training (N = 37) outside of mandatory music education (i.e., 9 years of weekly classroom instruction including music appreciation) ranged from 0 to 19 years (M = 9.21, SD = 6.45); 25 had experience in piano performance for 1–19 years (M = 9.48, SD = 5.24). Written informed consent was obtained from every participant.

Musical Pieces

We chose six pieces: the B minor Prelude (Well-Tempered Clavier, Book I, No. 24, BWV 869) and the G major Prelude (Well-Tempered Clavier, Book II, No. 15, BWV 884) by J. S. Bach, Träumerei (Kinderszenen, Op. 15-7) and Aufschwung (Phantasiestücke, Op. 12-2) by R. Schumann, and La fille aux cheveux de lin (Préludes, Book 1, L. 117-8) and Arabesque No. 1 (Two Arabesques for Piano, L. 66-1) by C. Debussy. We shall call these pieces “B24”, “B15”, “Dreaming”, “Soaring”, “Girl”, and “Arabesque”, respectively. We selected these three composers (i.e., Bach, Schumann, Debussy) based on our previous study [10], in which the majority of pianists chose these composers’ pieces as representatives of each historical period (i.e., Baroque, Romanticism, French Modernism). Based on the tempo instruction on the score, faster (B15, Soaring, and Arabesque) and slower (B24, Dreaming, and Girl) pieces were selected for each composer. B24 and Soaring are written in minor keys, and the rest in major.

Apparatus

Experiments took place in a small auditorium with a maximum capacity of 114, equipped with a grand piano (GP-193, Boston). The piano was tuned professionally right before the live performance. The performances were recorded onto a multi-track recorder (R24, Zoom) using a microphone (NT4, Rode). A stereo speaker (WS-AT30, Panasonic), an amplifier (RX-V603, Victor), and a computer (MC505J/A, Apple) were used to play the recorded sound to the audience in the recorded condition and the pink noise during the resting phases of both conditions (see Procedure).

The audience’s electrocardiogram was measured by a heart-rate sensor (HRS-I, Win Human Recorder) attached to his/her left chest. The sensor is small and light enough (40 × 39 × 8 mm, 14 g including a battery) not to disturb the audience’s listening experience. The sensors were synchronized with the timing of live performances by recording the onset time of the electrocardiogram and the beginning of the first piece. The sampling frequency of the electrocardiogram was 128 Hz, and the data were recorded onto the sensor’s internal memory. We should acknowledge that this sampling frequency (i.e., 128 Hz) is lower than the standard (> 250 Hz) [11]. However, recent literature reports that a sampling frequency of 125 Hz is as valid as one of 1000 Hz [12], at least for healthy participants [13]. We also recorded the audience member’s bodily movement, using the three-axis accelerometer equipped within the mobile sensor, to examine whether motion affected the electrocardiogram.

Procedure

The experimental procedure in the present study was approved by the Committee for Research Ethics at the Graduate School of Letters, Hokkaido University, and the experiments conformed to the principles outlined in the Declaration of Helsinki. We adopted a within-subject design, as recommended by Potter and Bolls [14]: 2 (context) × 3 (composer) × 2 (tempo). All participants listened to six pieces (faster and slower pieces by Bach, Schumann, and Debussy) both live (“live”) and through speakers (“recorded”). Because we wanted to simulate a realistic situation in which one goes to a concert first and then encounters a recording of the exact same live concert later, the order of the contexts was fixed: The audience participated in the recorded condition approximately 10 weeks after the live condition. We considered a 10-week interval long enough to eliminate a possible mere-exposure effect (i.e., an effect by which multiple exposures increase one’s preference), which can disappear within one month [15, 16].

In the live condition, we assigned 12 to 20 audience members to each of the live performances (of which 3–7 members were the targets for the present study), so that they had a good view of the pianist. First, in order for the audience to relax, they listened to six minutes of ocean-wave-like pink noise from the speakers, which repeated a five-second crescendo and a five-second decrescendo (“resting phase”). The maximum sound level was 55.00 dB(A), measured at the center position of the audience with a sound-level meter (DT-8852, Mk Scientific). After the resting phase, the audience listened to the pianist’s live performances of the six pieces. A two-minute break was inserted between the pieces. The order of the six pieces was determined by a block design: Two pieces by one composer were performed first, followed by two pieces by another composer, and so on. In addition, the order of each composer’s two pieces was consistent with respect to tempo: If the first piece was the faster one, the third and the fifth pieces were also the faster ones, and vice versa. Both the order of the composers and that of the tempi were counterbalanced among the pianists.
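For illustration, a stimulus of this kind can be approximated in code. The sketch below is our reconstruction under stated assumptions (the sampling rate, the spectral synthesis method, and the normalization are ours; only the 5-s crescendo/decrescendo cycle and the overall duration come from the text), not the authors’ stimulus-generation code:

```python
import numpy as np

def wave_like_pink_noise(duration_s=360.0, fs=8000, seed=0):
    """Ocean-wave-like pink noise: 1/f-shaped noise with a repeating
    5 s crescendo / 5 s decrescendo amplitude envelope."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    # Pink noise: shape a complex white spectrum by 1/sqrt(f), then invert.
    spec = rng.normal(size=n // 2 + 1) + 1j * rng.normal(size=n // 2 + 1)
    f = np.fft.rfftfreq(n, 1.0 / fs)
    f[0] = f[1]                               # avoid division by zero at DC
    pink = np.fft.irfft(spec / np.sqrt(f), n=n)
    # 10 s triangular envelope: 5 s rise (crescendo), 5 s fall (decrescendo).
    t = np.arange(n) / fs
    env = 1.0 - np.abs((t % 10.0) - 5.0) / 5.0
    out = pink * env
    # Normalize to full scale; the actual 55 dB(A) playback level was set
    # at the amplifier and verified with a sound-level meter, not in software.
    return out / np.max(np.abs(out))
```
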

We instructed the audience to attend to both the sound and the pianist during the live performance. At the end of the live condition, the pianist and the audience provided demographic information by responding to a questionnaire, including the years of musical training and daily experiences in listening to and/or performing music.

Approximately 10 weeks after the live condition, participants came back to the same auditorium for the recorded condition. Following the resting phase, they listened to the same six performances, audio-recorded during the live condition, from the speakers. The sound level was checked with the aforementioned sound-level meter and adjusted if necessary to be consistent with the corresponding live performance. The presentation order was the same as in the live condition, and a two-minute break was inserted between the pieces.

The live and the recorded conditions lasted approximately 70 and 50 minutes, respectively. The live condition took longer because of the detailed explanation of the electrocardiogram sensor and a questionnaire for the participant’s background information.

Data Analysis

First, we extracted the peak-to-peak intervals (RR intervals) with the accessory software of the HRS-I (Win Human Recorder), from which we calculated the mean heart rate (“HR”, 60/(mean RR interval)). Then, the RR intervals were transformed into a continuous time series with a sampling frequency of 4 Hz using cubic-spline interpolation. We estimated the vagal and the sympathetic nerve activities by computing Welch’s power spectral densities in the high-frequency (HF, 0.15–0.40 Hz), low-frequency (LF, 0.04–0.15 Hz), and total bands (TF, 0.04–0.40 Hz), respectively. A segment length of 512 samples (i.e., 128 s) with 50% overlap was used for the estimation because an approximately 2-minute recording is needed to address the LF component [11]. The very-low-frequency (VLF) domain (< 0.04 Hz) was not assessed in the present study because VLF computed from short-term recordings (≤ 5 minutes) is unreliable [11]. The HF/TF and the LF/HF ratios were obtained from these measures as indices of vagal nerve activity and sympathovagal balance, respectively [11, 17]. The LF/HF ratio was transformed into its natural logarithm in order for the data to be normally distributed. The vagal nerve activity functions as a defense reaction against an organism’s stress, or an index of stress reduction, represented by decreased heart rate and blood pressure. The sympathetic nervous system, often called the “fight-or-flight” nervous system, causes anxiety, manifested as shortness of breath and increased heart rate. The balance between these opposing neural mechanisms (sympathovagal balance) is often used as an index of mental stress [17, 18]. The parameters were calculated with Matlab R2015b (MathWorks).
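The pipeline described above (RR extraction, 4-Hz cubic-spline resampling, Welch spectra, band ratios) can be sketched in Python. This is a minimal illustration using the paper’s parameters; the authors used the HRS-I accessory software and Matlab R2015b, and all function and variable names here are ours:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import welch

def hrv_features(rr_s):
    """rr_s: successive RR intervals in seconds for one listening segment.
    Returns (HR, HF/TF, ln(LF/HF)) as defined in the text."""
    hr = 60.0 / np.mean(rr_s)                    # mean heart rate (bpm)
    # Resample the irregularly spaced RR series at 4 Hz with a cubic spline.
    t_beats = np.cumsum(rr_s)                    # beat times (s)
    fs = 4.0
    t_even = np.arange(t_beats[0], t_beats[-1], 1.0 / fs)
    rr_even = CubicSpline(t_beats, rr_s)(t_even)
    # Welch PSD: 512-sample (128 s) segments with 50% overlap.
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=512, noverlap=256)
    df = f[1] - f[0]
    def band_power(lo, hi):
        m = (f >= lo) & (f < hi)
        return pxx[m].sum() * df                 # integrated power in band
    hf = band_power(0.15, 0.40)                  # vagal index band
    lf = band_power(0.04, 0.15)
    tf = band_power(0.04, 0.40)                  # total band
    return hr, hf / tf, np.log(lf / hf)
```

A series of RR intervals with a 0.25-Hz modulation (a typical respiratory rhythm), for example, would yield a high HF/TF ratio and a negative ln(LF/HF).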

Results

Preliminary Analyses

There are a few variables that could influence the audience’s heart-rate activities during two listening contexts. We conducted preliminary analyses for such variables.

The heart-rate sensor used in the present study was equipped with a function measuring the three-dimensional acceleration of one’s bodily movement. If the listening contexts influenced the audience’s bodily movement, this aspect could interfere with the audience’s physiological reactions. To explore such a possibility, we calculated each audience member’s mean acceleration for each piece. We performed a paired t-test for each piece using Bonferroni’s correction, which showed no significant differences between the live and the recorded conditions (Table 1). This indicates that the acceleration of the audience’s bodily movement did not differ between the listening contexts.
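This movement check can be sketched as follows (a hypothetical re-implementation, not the authors’ code; the arrays stand in for the 37 × 6 acceleration measurements, and the default overall alpha is an assumption, since the text specifies only that Bonferroni’s correction was applied):

```python
import numpy as np
from scipy.stats import ttest_rel

def movement_check(live, recorded, overall_alpha=0.05):
    """live, recorded: (n_participants, n_pieces) mean accelerations (mG).
    Runs one paired t-test per piece at a Bonferroni-adjusted level."""
    n_pieces = live.shape[1]
    alpha = overall_alpha / n_pieces             # Bonferroni correction
    results = []
    for piece in range(n_pieces):
        t, p = ttest_rel(live[:, piece], recorded[:, piece])
        results.append((t, p, p < alpha))        # (t, p, significant?)
    return results
```

With the study’s layout, `live` and `recorded` would each be a 37 × 6 matrix, yielding six paired tests, each evaluated at α/6.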

Table 1. The mean values, the standard deviations, and 95% confidence intervals of audience’s acceleration (mG) for each piece in each condition (N = 37).

Results of paired t-tests between the listening conditions are shown in the right columns.

Next, we examined the effects of the audience member’s age and extracurricular musical training by computing Pearson’s correlation coefficients with each of the electrocardiogram parameters (i.e., HR, HF/TF, ln(LF/HF)) (see S1 Table). Because the analyses were performed 14 times for each parameter (i.e., (6 pieces + the resting phase) × 2 listening contexts), the significance level was adjusted by Bonferroni’s correction (overall α = 0.10, subset α = 0.10/14 ≈ 0.007). Results showed no significant correlations. We also examined the effects of the audience’s and the performer’s sex, which showed neither significant main effects nor an interaction (α = 0.007, see S2 Table). Finally, we examined the effects of the individual performer on the audience’s heart-rate activities, but no significant effects were found (α = 0.007, see S3 Table). These analyses indicate that age, degree of musical training, the sex of either the audience or the performer, and the performer’s individuality did not affect the heart-rate activities in the present study.

Heart-rate activities during the resting phase did not differ significantly between the live and the recorded conditions (see the first row for each parameter in Table 2): t(36) = 0.99, p = 0.83 (HR); t(36) = 0.40, p = 0.34 (HF/TF); t(36) = 0.44, p = 0.34 (ln(LF/HF)). In the following analyses, therefore, we used pooled raw data rather than difference values from the measurements during the resting phase, though both approaches generated similar tendencies according to our inspection.

Table 2. The mean values, the standard deviations, and 95% confidence intervals of HR (a), the HF/TF ratio (b), and the natural logarithm of the LF/HF ratio (c) for each piece in each condition (N = 37).

Effects of listening context, composer, and tempo on the audience’s heart rate and heart-rate variability

Table 2 shows the mean, the standard deviation, and the 95% confidence interval for each parameter per piece in the two listening conditions. Because each participant’s heart-rate data generally correlate between experimental conditions, the sphericity assumption in within-subject analysis of variance is not satisfied [14]. In the analyses reported below, we corrected the degrees of freedom by Greenhouse and Geisser’s method, as recommended by Keselman and Rogan [19].
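For readers unfamiliar with this correction, the Greenhouse-Geisser epsilon for a single within-subject factor can be computed from the participants-by-conditions data matrix as below. This is our illustrative sketch of the standard formula, not the authors’ analysis code (a full 2 × 3 × 2 analysis applies such a correction separately per effect):

```python
import numpy as np

def gg_epsilon(data):
    """data: (n_participants, k_conditions) matrix for one within-subject
    factor. Returns Greenhouse-Geisser epsilon, bounded by 1/(k-1) and 1."""
    k = data.shape[1]
    s = np.cov(data, rowvar=False)               # k x k sample covariance
    # Double-center the covariance matrix (project out condition means).
    s_dc = s - s.mean(axis=0) - s.mean(axis=1)[:, None] + s.mean()
    # epsilon = (sum of eigenvalues)^2 / ((k-1) * sum of squared eigenvalues);
    # trace gives the eigenvalue sum, the squared Frobenius norm their squares.
    return np.trace(s_dc) ** 2 / ((k - 1) * np.sum(s_dc ** 2))
```

Corrected degrees of freedom are ε(k − 1) and ε(k − 1)(n − 1); under perfect sphericity ε = 1 and no correction is applied.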

We conducted a 2 (context) × 3 (composer) × 2 (tempo) within-subject analysis of variance for each parameter (Table 3). For HR, only the two-way interaction between listening context and tempo was significant. Fig 1 shows the mean values for the two-way interaction. Post-hoc t-tests (using Shaffer’s modified sequentially rejective Bonferroni procedure) showed that HR was significantly greater for the faster (M = 77.46, SD = 12.69) than for the slower pieces (M = 76.17, SD = 12.45) in the live condition (p = 0.005), but not in the recorded condition (p = 0.25). This suggests that the audience’s heartbeat changes along with the tempo of music only during live performance.

Table 3. The results of 2 (context) × 3 (composer) × 2 (tempo) within-subject analyses of variance for HR (a), the HF/TF ratio (b), and the natural logarithm of the LF/HF ratio (c).

The degrees of freedom were adjusted by the Greenhouse-Geisser method.

Fig 1. The mean values of the audience’s heart rate (beats per minute) for the two-way interaction between the listening context and the tempo.

Error bars indicate standard errors. The p-value indicates a significant difference confirmed by the post-hoc t-test.

The HF/TF ratio showed a significant main effect of context (Table 3). HF/TF in the live condition was significantly greater than that in the recorded condition (see Table 2). This suggests that the audience’s vagal nerve is activated more in the live than the recorded condition.

The LF/HF ratio was lower in the live than in the recorded condition (see Table 2); the main effect of context approached significance (Table 3). This appears to imply that the contribution of the sympathetic relative to the vagal nerve activity tended to decrease in the live condition.

Discussion

The present results showed that during live performance, the vagal nerve activity (i.e., HF/TF) increased and the sympathovagal balance (i.e., LF/HF) tended to decrease regardless of the piece. Thus, the pianist’s live performance appears to have led the audience’s nerve activities toward the induction of relaxation or the reduction of anxiety. This finding implies that sharing time and place with a performer is not awkward but normal (or spontaneous) for the audience, supporting the view that such a social context facilitates stress reduction during a cognitive task [9] and music listening [20]. From another point of view, high vagal activity is associated with stable visual attention to objects (e.g., [21]). The higher vagal activity in the live than in the recorded condition may also imply that the audience was more attentive to the live performance.

In contrast with the present study, Chapados and Levitin [4] reported an increase in autonomic nerve activity while participants viewed a video of a clarinetist performing a Stravinsky piece. This discrepancy between the two studies may derive from the availability of a performer’s body movements. More specifically, the body movements of pianists (centered at the piano chair) are more limited than the whole-body movements of clarinetists, so the audience’s visual attention to the performance can be more stable for the piano than for the clarinet performance. If the vagal nerve is activated by “stable” visual attention (e.g., [21]), it is understandable that the piano and the clarinet live performances generate opposite heart-rate responses.

We also showed that the audience’s heart rate changed in accordance with the tempo of music only in the live condition. One of the features of biological oscillators such as the heartbeat and respiration is to synchronize with, or to be entrained by, external inputs [22, 23]. Some researchers have shown that the listener’s heartbeat tends to be entrained by the tempo of music (e.g., [24]), while others have reported no specific relations between them (e.g., [25]). A recent review [26] shows that there is no evidence for entrainment of the heart rate to musical beats in listening to music, at least via sound without visual information. The present result indicates that the listening context contributes another factor to be considered in this controversy: Live listening facilitates entrainment between tempo and heart rate, while listening to recorded music does not.

There are at least two possible reasons for the present contextual effect. First, the audience’s heart rate may be influenced by the performer’s body movement that tends to reflect the tempo of music [27]. Second, the live context may have generated a social interaction allowing the performer and the audience to share the same musical moments, something similar to a conversation in which the heart rates of speakers tend to be synchronized [28]. This “sharing the musical moments” may have resulted in the audience’s heart rates varying in accordance with the tempo of music, which, in turn, could explain why live performances sound more artistic and expressive, as well as why their affective nuances perceived by the audience are closer to those interpreted by the performer than recorded ones [29].

In sum, we have revealed effects of live performance on the audience’s physiological reactions. The audience’s vagal nerve is activated in the live context, suggesting that live performance reduces stress and induces attention in the audience as compared with the recorded performance. The physiological entrainment by musical tempo can be facilitated only during live performance. These contextual effects, however, need to be interpreted with caution, for we sacrificed a fully controlled design by prioritizing ecological validity. We cannot deny that all the contextual effects on the audience’s heart-rate activities in the present study are confounded with a few variables such as the order of the listening contexts, the repeated presentation of the same performance, and the sound quality (live sound versus loudspeakers). In order to verify the causal function of “live” performance, we need to sort them out through a series of controlled experiments. For example, replicating the present study with a between-subjects design is one way to tap into the issue of the order effect. Adding two control groups (one listening to live performances twice, the other to recorded performances twice) may be another way to resolve the aforementioned confounding variables. Using both an acoustic and an electronic instrument in live performance may be effective in examining an effect of the sound quality.

Other aspects of music listening that were missing from the present study were the audience’s subjective impressions about their own stress levels as well as valence and arousal of performances, which might be related to heart-rate activities [26]. Moreover, studying the audience’s actual movement such as their head movement (rather than its accelerations) may help us understand the nature of visual attention during live performance in more depth [30]. Relations among performance parameters (e.g., tempo, dynamics, the pianist’s body movement), the audience’s subjective and behavioral reactions, and the audience’s physiological responses need to be mapped together in order to capture the entire picture of the performer-and-audience communication in live performance.

Supporting Information

S1 Table. Pearson’s correlation coefficients (r) between each parameter and each of the participant’s age and the years of musical training.

All the correlations were insignificant (df = 35, α = 0.007 with Bonferroni’s correction).



S2 Table. The results of 2 (audience sex) × 2 (performer sex) between-subjects analyses of variance for each parameter (α = .007).



S3 Table. The results of one-way between-subjects analyses of variance that examined effects of the individual performer on the electrocardiogram parameters for each piece (α = .007).



Acknowledgments

We are grateful to all the pianists and audience members for their participation in the present study. We thank Kenji Watanabe (Tokyo National University of Fine Arts and Music), Kenichiro Takahashi (Sapporo University), Shoko Fukai (Hokkaido University of Education), and Ayumi Inoue (Hokkaido University) for their support in recruiting participants. We also thank Nami Koyama, Noriko Ito, Yosuke Tani, Kazuma Takiuchi, Chihiro Suzuki, Ayumi Sasaki, Ding Xingxing, Huo Xinyang, and Chen Lingjing for their assistance in the live experiment. We are also grateful to Gary Vasseur for proofreading this manuscript and to two anonymous reviewers who provided valuable comments for its improvement.

Author Contributions

Conceived and designed the experiments: HS MA. Performed the experiments: HS. Analyzed the data: HS TU. Contributed reagents/materials/analysis tools: HS MA TU. Wrote the paper: HS MA TU.

References

  1. Davidson JW. Visual perception of performance manner in the movements of solo musicians. Psychology of Music. 1993;21(2):103–113. doi: 10.1177/030573569302100201.
  2. Ohgushi K, Hattori M. Emotional communication in performance of vocal music. In: Pennycook B, Costa-Giomi E, editors. Proceedings of the Fourth International Conference on Music Perception and Cognition. Montreal, Canada: McGill University; 1996. pp. 269–274.
  3. Vines BW, Krumhansl CL, Wanderley MM, Levitin DJ. Cross-modal interactions in the perception of musical performance. Cognition. 2006;101(1):80–113. doi: 10.1016/j.cognition.2005.09.003. pmid:16289067
  4. Chapados C, Levitin DJ. Cross-modal interactions in the experience of musical performances: Physiological correlates. Cognition. 2008;108(3):639–651. doi: 10.1016/j.cognition.2008.05.008. pmid:18603233
  5. Krumhansl CL. An exploratory study of musical emotions and psychophysiology. Canadian Journal of Experimental Psychology. 1997;51(4):336–353.
  6. Hodges DA. Psycho-physiological measures. In: Juslin PN, Sloboda J, editors. Handbook of music and emotion: theory, research, applications. Oxford University Press; 2011. pp. 279–312.
  7. Rickard NS. Intense emotional responses to music: a test of the physiological arousal hypothesis. Psychology of Music. 2004;32(4):371–388. doi: 10.1177/0305735604046096.
  8. Blood AJ, Zatorre RJ. Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proceedings of the National Academy of Sciences. 2001;98(20):11818–11823. doi: 10.1073/pnas.191355898.
  9. Phillips AC, Carroll D, Hunt K, Der G. The effects of the spontaneous presence of a spouse/partner and others on cardiovascular reactions to an acute psychological challenge. Psychophysiology. 2006;43(6):633–640. doi: 10.1111/j.1469-8986.2006.00462.x. pmid:17076820
  10. Shoda H, Adachi M. Effects of the musical period on the pianist’s body movement: its time-series relationships with temporal expressions. In: Demorest SM, Morrison SJ, Campbell PS, editors. Proceedings of the 11th International Conference on Music Perception and Cognition. Seattle, WA: ICMPC11; 2010. pp. 843–848.
  11. Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology. Heart rate variability: standards of measurement, physiological interpretation and clinical use. Circulation. 1996;93(5):1043–1065.
  12. Ellis RJ, Zhu B, Koenig J, Thayer JF, Wang Y. A careful look at ECG sampling frequency and R-peak interpolation on short-term measures of heart rate variability. Physiological Measurement. 2015;36(9):1827–1852. doi: 10.1088/0967-3334/36/9/1827. pmid:26234196
  13. Abboud S, Barnea O. Errors due to sampling frequency of the electrocardiogram in spectral analysis of heart rate signals with low variability. In: Computers in Cardiology. IEEE; 1995. pp. 461–463.
  14. Potter RF, Bolls P. Psychophysiological measurement and meaning: cognitive and emotional processing of media. New York, NY: Routledge; 2012.
  15. Ito N, Adachi M. Melody no senzai kioku: chuibunkatu-kadai o mochiita kentou [Exploration of implicit memory for melody by means of a divided-attention task]. In: Proceedings of the Fall Meeting of the Japanese Society for Music Perception and Cognition; 2012. pp. 15–20.
  16. Peretz I, Gaudreau D, Bonnel AM. Exposure effects on music preference and recognition. Memory & Cognition. 1998;26(5):884–902. doi: 10.3758/BF03201171.
  17. Nakahara H, Furuya S, Obata S, Masuko T, Kinoshita H. Emotion-related changes in heart rate and its variability during performance and perception of music. Annals of the New York Academy of Sciences. 2009;1169(1):359–362. doi: 10.1111/j.1749-6632.2009.04788.x. pmid:19673808
  18. Kristal-Boneh E, Raifel M, Froom P, Ribak J. Heart rate variability in health and disease. Scandinavian Journal of Work, Environment & Health. 1995;21(2):85–95. doi: 10.5271/sjweh.15.
  19. Keselman HJ, Rogan JC. Repeated measures F tests and psychophysiological research: controlling the number of false positives. Psychophysiology. 1980;17(5):499–503. doi: 10.1111/j.1469-8986.1980.tb00190.x. pmid:7465719
  20. Garunkstiene R, Buinauskiene J, Uloziene I, Markuniene E. Controlled trial of live versus recorded lullabies in preterm infants. Nordic Journal of Music Therapy. 2014;23(1):71–88. doi: 10.1080/08098131.2013.809783.
  21. Porges SW. Autonomic regulation and attention. In: Campbell BA, Hayne H, Richardson R, editors. Attention and information processing in infants and adults: Perspectives from human and animal research. Hillsdale, NJ: Lawrence Erlbaum Associates; 1992. pp. 201–223.
  22. Glass L. Synchronization and rhythmic processes in physiology. Nature. 2001;410(6825):277–284. doi: 10.1038/35065745. pmid:11258383
  23. Larsen PD, Galletly DC. The sound of silence is music to the heart. Heart. 2006;92(4):433–434. doi: 10.1136/hrt.2005.071902. pmid:16339810
  24. Bernardi L, Porta C, Sleight P. Cardiovascular, cerebrovascular, and respiratory changes induced by different types of music in musicians and non-musicians: the importance of silence. Heart. 2006;92(4):445–452. doi: 10.1136/hrt.2005.064600. pmid:16199412
  25. Iwanaga M, Kobayashi A, Kawasaki C. Heart rate variability with repetitive exposure to music. Biological Psychology. 2005;70(1):61–66. doi: 10.1016/j.biopsycho.2004.11.015. pmid:16038775
  26. Koelsch S, Jäncke L. Music and the heart. European Heart Journal. 2015;36(44):3043–3049. doi: 10.1093/eurheartj/ehv430. pmid:26354957
  27. Shoda H, Adachi M. The role of a pianist’s affective and structural interpretations in his expressive body movement: a single case study. Music Perception. 2012;29(3):237–254.
  28. Watanabe T, Okubo M, Kuroda T. Analysis of entrainment in face-to-face interaction using heart-rate variability. In: Proceedings of the Fifth IEEE International Workshop on Robot and Human Communication; 1996. pp. 141–145.
  29. Shoda H, Adachi M. Effects of the listening context on the audience’s perceptions of artistry, expressiveness, and affective qualities in the piano performance. In: Cambouropoulos E, Tsougras C, Mavromatis P, Pastiadis K, editors. Proceedings of the 12th International Conference on Music Perception and Cognition and the 8th Triennial Conference of the European Society for Cognitive Sciences of Music; 2012. pp. 925–929.
  30. Zangemeister WH, Stark L. Types of gaze movement: variable interactions of eye and head movements. Experimental Neurology. 1982;77(3):563–577. doi: 10.1016/0014-4886(82)90228-X. pmid:7117463