
Building bridges to emotion: Developing a standardized film-based emotion elicitation tool for Iranian culture

  • Milad Yousefi,

    Roles Data curation, Formal analysis, Writing – original draft

    Affiliation Institute for Cognitive and Brain Sciences, Shahid Beheshti University, Tehran, Iran

  • Jamal Amani Rad

    Roles Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    j.amanirad@leeds.ac.uk

    Affiliation Choice Modelling Centre, University of Leeds, Leeds, United Kingdom

Abstract

Emotion elicitation through culturally relevant stimuli is crucial for psychological research that seeks to explore affective processes within specific populations. This study aimed to develop and validate a film-based emotion elicitation tool for Iranian culture. A comprehensive database of short video clips was selected to evoke distinct emotional states, including happiness, tenderness, fear, anger, sadness, and disgust, alongside neutral clips as controls. To validate the database, the emotional responses of 300 Iranian participants were assessed using key dimensions of arousal and valence, positive and negative affective states, gender differences, and mixed emotions. The results indicated that all emotional stimuli elicited significantly higher arousal levels compared to neutral clips, with fear-inducing clips generating the highest arousal levels. In terms of valence, positive emotional films, such as those inducing happiness and tenderness, were significantly associated with higher pleasantness, while anger elicited the lowest valence scores, indicating its strong negative impact. Additionally, the video clips effectively differentiated between positive and negative affective states, with clear statistical significance observed across all comparisons, showing that videos designed to evoke positive emotions (e.g., happiness) and negative emotions (e.g., fear) successfully achieved these outcomes across the participant group. Gender differences were also examined, with women generally showing higher levels of emotional arousal than men, particularly in response to happiness, tenderness, sadness, and disgust, though the overall effect sizes were small. Finally, the study delved into the complexity of mixed emotions, where participants often experienced simultaneous conflicting emotions, such as happiness and sadness, challenging traditional discrete emotion frameworks.
The findings affirm the cultural relevance and efficacy of the developed video clip database in eliciting a wide range of emotional responses, making it a valuable tool for future psychological studies in Iranian contexts. This study underscores the importance of culturally specific stimuli in emotion research and provides a robust resource for exploring the emotional landscape within Iranian culture.

Introduction

Over the past several decades, the field of emotion research has experienced a significant surge, particularly in its exploration of the intricate relationships between emotion, cognition, behavior, personality, and physiology. The importance of emotion-regulation strategies for various cognitive processes is increasingly recognized. This recognition has led to greater reliance on laboratory paradigms that use emotion-eliciting stimuli to modify, mimic, or induce emotional contexts for comprehensive research. Such a methodology is used extensively across the social sciences, with a particular concentration in psychology, to investigate topics such as emotional mimicry and contagion, mood dynamics, socio-cultural and intrapersonal processes, emotion regulation, and prosocial behavior [1–6]. To artificially induce emotional changes for these studies, a variety of techniques have been developed, including the use of emotional words [7,8], texts [9–12], emotional images [13–17], faces [18–20], video clips [21–37], music [38–42], personal recollection [39,43–46], imagination [47–50], and virtual reality [51,52].

Despite the efficacy of all these methods in eliciting discrete/continuous mood states, there is a growing trend towards the use of emotional audiovisual materials or film clips. This trend reflects their effectiveness in eliciting the desired subjective emotional states for research purposes, making them one of the most user-friendly techniques in the laboratory setting. These clips, serving as an optimal artificial model of reality, are dynamic, multi-modal, and replete with contextual information that facilitates the understanding of characters’ emotional states [29,53–55]. Their complexity, coupled with a blend of explicit and implicit affective and cognitive features, effectively engages the audience’s auditory and visual senses [56–58], leading to an emotional experience akin to real life, where emotions evolve over time [53,59]. This method effectively addresses potential ethical and practical concerns related to the manipulation of emotions [60]. Moreover, given the ubiquity of television and film viewing and the global target audience of most contemporary films, it is assumed that the video presentation induction procedure is not likely to be largely influenced by gender and other differences. Consequently, film clips provide a more ecologically valid alternative for emotion elicitation compared to other affect-induction techniques, such as static images and recalling previous emotional events [56,61–64]. Furthermore, emotional film clips have demonstrated their capacity to induce robust and quantifiable subjective and physiological changes [64–66]. Finally, meta-analyses of emotion induction further underscore the potency of film as one of the most effective ways of eliciting emotions [55,67–69]. Together, these studies have furnished several databases of film clips that are expected to consistently evoke specific responses from participants.
However, the validity and relevance of the normative data associated with some clips are being re-evaluated due to societal shifts in preferences and norms over the past two decades (e.g., [70]), highlighting the need for a regularly updated archive for research utilization.

Recognizing the profound influence of personal, gender-specific, cultural, and linguistic factors on emotions (e.g., [53,71–76]), particularly complex emotions (e.g., [31,32,77–79]), it becomes imperative to validate emotional assessment tools across diverse cultures [80,81]. In other words, despite the development of a robust and reliable collection of video clips over several decades, their validity in different cultural contexts remains uncertain due to the aforementioned factors. Hence, it is both theoretically and practically significant to investigate the effectiveness of these video clips in evoking corresponding emotional responses in diverse cultures. This study aims to fill a significant gap in the literature by validating a database within the Iranian community, thereby addressing a wide spectrum of potential research inquiries – an endeavor yet to be accomplished. This initiative has led to the development of culture-sensitive emotional databases catering to non-English speaking audiences, a feat already achieved in other cultures and languages. For instance, Schaefer et al. [23] developed a database for French-speaking audiences, encapsulating emotions such as amusement, tenderness, anger, contentment, fear, sadness, and neutrality. Similarly, Xu [82] introduced several Chinese music videos expressing happiness, anger, contentment, fear, sadness, surprise, and neutrality. Hewig et al. [63] not only validated previous movie sets but also enriched the German sample with four neutral clips, encompassing emotions like happiness, amusement, contentment, fear, sadness, anger, and neutrality. This process of validation has been investigated in other cultures and languages as well, with further examples available in [24,60,83–87].
Importantly, the Iranian specificity of the present database extends beyond language adaptation; candidate excerpts were screened against an explicit cultural/religious compatibility criterion applied to both visual content and dialogue, prior to final inclusion. This study, therefore, stands as a pioneering effort in expanding the horizons of emotional research within the Iranian community. Consequently, we have developed and evaluated what is, to the best of our knowledge, the most comprehensive normative database of emotional film scenes within Iranian culture. This paper details the development of this database, including the tests that demonstrated its efficacy in inducing emotions in a laboratory environment.

The creation and validation of a database for video clips that elicit emotions should be firmly rooted in distinct theories of emotion. A vibrant theoretical debate has emerged concerning the intrinsic nature of human emotional responses. Theorists of discrete or basic emotion theories, such as Ekman [88,89] and Tooby & Cosmides [90], argue that emotions are short-lived, organized into a limited set of fundamental categories, and consist of distinct episodes that include various loosely connected responses—autonomic, behavioral, and experiential—that have evolved to facilitate our adaptation to specific environmental challenges. For instance, anger involves a set of responses developed to address goal obstruction [91], while sadness involves responses tailored to cope with loss [92]. These viewpoints have facilitated the development of sets of emotional film stimuli capable of eliciting distinct experiential states corresponding to basic emotions (e.g., happiness, surprise, fear, sadness, anger, and disgust). However, beyond emotional discreteness, research often requires stimuli validated on broader criteria, including films that assess the impact of varied emotional arousal and valence intensities on cognitive functions. Furthermore, the elicitation of mixed emotions – where multiple basic emotions are experienced simultaneously – is crucial for comprehensive studies [93–95]. Dimensional or psychological constructionist theorists, such as Barrett [96], propose that emotional responses are more accurately understood through two underlying neurobiological continuous dimensions [97–99] that respond to environmental stimuli, rather than through discrete basic emotions: hedonic valence (i.e., pleasantness or unpleasantness) and arousal level (i.e., low or high) of stimuli that shape our multi-dimensional responses (behavioral, autonomic, etc.).
Hence, specific emotions such as anger, fear, or sadness are perceived as social constructs that originate from how individuals appraise or conceptualize the provoking event [100].

Thus, to construct a comprehensive collection of emotional stimuli, researchers should manipulate both these dimensions as well as basic emotions. This supports the selection and arrangement of a diverse assortment of film clips, which are instrumental for researchers with varying theoretical orientations or research objectives. For instance, researchers aiming to manipulate mood to study its impact on executive attention may find clips validated within a dimensional framework particularly useful. In contrast, those investigating the effects of fear on visual search and memory might prefer clips validated under a discrete emotions framework. Accordingly, we have examined the movie clip stimuli through a dual-dimensional lens. The first dimension involves the analysis of stimuli known to evoke specific or discrete emotional reactions, while the second dimension pertains to stimuli that align with dimensional or psychological constructionist perspectives. Moreover, our analysis categorizes stimuli that predominantly utilize a discrete emotions framework, yet acknowledges the intricacies involved in the simultaneous elicitation of related emotions.

Overview

As indicated, culture and language are pivotal elements that significantly influence the effectiveness of emotion induction techniques. Over recent decades, there has been a substantial upsurge in the attention given by Iranian researchers to the study of emotions and their profound impact on various facets of human behavior and its underlying processes, including individual, social, cognitive, and neural functions. This has led to a multitude of studies in this domain (e.g., see [101–105]). However, it is noteworthy that mood induction methods such as images [106–108], words [109,110], and music [111–114] have been predominantly used, overshadowing the use of video clips. Although a variety of video clips have been utilized in a handful of studies, their use has been rather limited and confined to specific research contexts. Notably, recent work by [26] has contributed to this area by not only collecting self-report data but also integrating neuropsychological approaches and collecting various neural data. However, our study differs significantly in several key aspects. We assessed a substantially larger sample size of 300 participants, which enhances the statistical power and validity of our findings. Additionally, we have strived to capture the unique cultural diversity of Iran, a multicultural society comprising Persian, Kurdish, Turkish, and other ethnic groups, to ensure a more representative and inclusive dataset. Furthermore, while [26] focused on locally Iranian-produced films, our study adopts a different approach by utilizing internationally recognized films with validated Persian subtitles, following methodologies similar to those used in cross-cultural emotion studies (e.g., [23]). This approach is based on the hypothesis that basic emotions are likely to be consistent across cultures, thus enabling a more reliable induction of emotions within the Iranian context.
The specific differences and advantages of our approach, including the inclusion of the emotion “tenderness,” mixed emotions, gender differences, and the validation of the Discrete Emotions Scale (DES) for the first time in the Iranian community, will be further discussed in the Discussion section.

With this in mind, the primary objective of the present study is to develop and validate a reliable and comprehensive database of emotion-inducing video clips as a methodological tool for emotion research within the Iranian culture and community. This objective encompasses the following goals: 1. Selection from a fairly large collection of diverse video clips that span a broad spectrum of emotional dimensions. 2. Evaluation of the effectiveness of video clips utilizing various tools from both dimensional and discrete approaches. 3. Creation and validation of a video clip database encompassing seven emotions: neutral, sadness, happiness, fear, anger, tenderness, and disgust. 4. Identification of the most emotionally effective video clips. 5. Translation and validation of a discrete emotion questionnaire for the first time within the Iranian community. 6. Lastly, provision of open access to the data, analyses, and code via the Open Science Framework, facilitating flexible selection and usage by researchers worldwide.

The remainder of this paper is structured as follows: We first delve into the ’Methods’ section, where we illuminate the methodologies that anchor the foundation of this study. This includes an exhaustive explanation of the development of our video clip database, detailing the process of clip selection and collection, the method of use in a laboratory setting, and a thorough description of the process to evaluate the database’s effectiveness as emotional stimuli. This involves the use of tools to assess emotions in both discrete and continuous dimensions. The discrete dimension employs an extended version of the Differential Emotions Scale [23,115–117], validated for the first time in Iranian culture in this paper, which aligns with a basic emotions approach. The continuous dimension uses the Self-Assessment Manikin [102,118,119], validated by Nabizadeh et al. [120] for the Iranian community, to evaluate the video clip database aligned with a dimensional approach to emotions – subjective arousal and pleasantness. This section also covers the experimental design, participant involvement, and the analytical procedures used. Next, we present the ’Results’ section, where we reveal the findings of our data analysis, addressing a wide array of research questions. This provides insights into the process of selecting and ranking the most effective video clips from the database, supplemented by results from the highest-scoring subgroups. This leads into the ’Discussion’ section, where we undertake a meticulous analysis of our findings. Finally, we conclude with the ’Conclusion and Future Works’ section, summarizing the key findings of the research and suggesting potential directions for future exploration.

Methods

Preliminary review of films and final selection by experts

The first step involved the careful compilation and examination of a diverse set of film excerpts, each corresponding to one of seven distinct emotional categories. These categories were chosen in alignment with the principles of the “basic emotion theory” and their established use in prior studies. The emotions under investigation in this study included neutral, sadness, happiness, fear, anger, tenderness, and disgust. The selection of these video excerpts was governed by a set of six stringent criteria:

  • The visual and verbal components of the films should align with the cultural and religious structures of Iranian society.
  • The films should generally fall into the categories of emotion delineated in the “basic emotion theory.”
  • The verbal cues used in the films should be easily comprehensible and commonly found in everyday language.
  • A majority of the selected films should have been utilized in prior emotion-related research.
  • The elicited emotion should remain consistent throughout the duration of the film, thereby excluding films that evoke a mix of positive and negative emotions.
  • The film clips should be relatively brief, devoid of visible watermarks, logos, or mosaics, and should not be of the animated genre.

Although tenderness is not typically categorized as a basic emotion, its inclusion in this study was justified due to its acknowledgment as a distinct category of positive attachment-related emotions in prior studies [23,121123]. Furthermore, this emotion is effectively elicited by films. We therefore included tenderness to complement happiness with a conceptually distinct positive category that has also been considered in prior film-clip databases (e.g., [23]). By contrast, we did not designate surprise as a target category because it is often brief and valence-ambiguous and can rapidly transition into other emotions, making sustained induction within multi-second excerpts less compatible with our selection criterion that the elicited emotion should remain relatively consistent throughout a clip; nevertheless, surprise-related experience was still assessed via the DES item cluster (“surprised, amazed, astonished”).

In our study, we drew upon prior research on the development and validation of mood-induction films to compile an extensive collection of film excerpts. Initially, we extracted 130 movie excerpts from a vast pool of potential scenes from previous studies. Subsequently, a collaborative survey involving the study’s researchers and eight research assistants (four female and four male) led to the selection of four video clips for each emotional category. The final selection included films that were commonly chosen by the individuals involved. Our selection methodology was subjective, aligning with the approach adopted in previous studies [22,29,86]. The research assistants were instructed to select movie excerpts that could potentially induce specific emotions, including neutrality, sadness, happiness, fear, anger, tenderness, and disgust. To ensure a clear understanding of these subjective emotions, the eight research assistants were trained in various emotional categories, drawing upon the theoretical psychological knowledge presented in the previous section. Their training was validated by assessing their understanding of several common materials from current databases. In total, 27 selected video clips were incorporated into the study, with four movies representing each emotion, except for the neutral emotion, which was represented by three finalized movies. However, the research group unanimously agreed that the movies associated with the “happiness” emotion from previous studies were not suitable for inducing the desired mood within the Iranian community. Consequently, four “happiness” movies were selected and introduced into the study for the first time by the researchers. The finalized video clips ranged in length from 16 to 354 seconds, averaging 162.67 s (SD = 97.80 s). All movies were in English, with Persian subtitles added. These subtitles were meticulously reviewed and approved by three cognitive linguistics experts.
We opted for subtitles over dubbed versions for two reasons: firstly, to preserve the original texture of the movies, particularly the soundtrack, which plays a crucial role in mood induction; and secondly, because in Iranian society, watching foreign movies with subtitles is more prevalent than watching dubbed versions. The details of each emotional film are shown in Table 1.

Participants and ethics

A diverse cohort of three hundred native Iranians, comprising an equal distribution of males and females aged 18 to 30 years (mean age = 24.44, SD = 3.66), voluntarily participated in the experiment through face-to-face sessions conducted at a dedicated behavioral laboratory. Comprehensive metadata, encompassing age, gender, and other demographic details of the participants, are readily accessible in the project’s open-source framework. All participants had either normal or corrected-to-normal visual and auditory functions. Prior to their participation, they were provided with exhaustive briefings pertaining to the research objectives and the experimental procedures. Following these briefings, they provided their consent to participate in the study by signing the requisite forms. The participants were assured of the confidentiality of their responses and were informed that they could withdraw from the study at any time. All participants completed the full experimental session and provided complete responses. Thus, no participants were excluded after enrollment and no data were discarded due to missing or incomplete responses. The task was programmed such that participants could proceed only after completing the required ratings, ensuring complete response records for all participants. This study received ethical approval from the Research Ethics Committees of Shahid Beheshti University (Approval ID: IR.SBU.REC.1399.066, Date: 2020-01-10). No participants under the age of 18 were included. Participant recruitment occurred between May 15, 2021 and August 20, 2021. Data collection (i.e., in-lab testing sessions) was completed within this period, and the full sample of 300 participants was tested over approximately 14 weeks, with sessions scheduled continuously across the recruitment window until completion. Demographic metadata (e.g., age and gender) are accessible via the project’s open-source framework.
All data and programming codes supporting the findings of this study are available on the Open Science Framework (https://osf.io/2td5n/).

In our study, participants were carefully screened to ensure eligibility. Specifically, they were not receiving psychotropic treatment or using drugs, and they reported no history of psychological, psychiatric, or neurological disorders, consistent with DSM-5 criteria. To minimize potential bias due to depressive symptomatology in affective responding, participants completed the Beck Depression Inventory-II (BDI-II) [124] prior to the experiment. The BDI-II is a widely used self-report instrument assessing depressive symptoms over the past two weeks; following commonly used thresholds [125], individuals scoring above the recommended cutoff were not enrolled in the study. The BDI-II has well-established psychometric properties [126] and has also been validated in Iranian samples, demonstrating good internal consistency and test-retest reliability [127].

Procedure

The experiment was conducted in a dedicated behavioral laboratory where participants worked independently (see Figs 1 and 2). Each participant was tested in the same controlled environment. Prior to the commencement of the experiment, the room’s lighting was dimmed and participants were given pre-recorded relaxation instructions. These instructions required participants to close their eyes, achieve full-body relaxation, including facial muscles, and engage in deep, regular breathing for a duration of approximately 120 seconds. Upon completion of the relaxation process, the participants promptly commenced the assigned task. Initially, detailed instructions about the test process and how to complete the instrument were displayed on the screen. Participants were encouraged to ask questions if they found any ambiguity in the instructions. A fixation dot was then centrally displayed for 500 ms before each movie was shown. Two neutral movies were consistently shown at the start of the task to familiarize the participants with the task and establish a baseline. The experimental stimuli, in the form of movies, were presented on a 14-inch screen with a resolution of 1366 × 768 pixels. All clips were presented in full (i.e., they could not be skipped), and therefore each participant watched the complete set of stimuli before proceeding to the post-clip ratings. Each participant was positioned at a 90-degree arc, facing the screen, and provided with individual headphones to ensure an immersive experience. To minimize distractions, participants were instructed to remove any potential devices such as smartphones or smartwatches and to maintain their focus on the screen throughout the experiment. Following each film excerpt, participants were asked to complete computerized questionnaires to assess their emotional state.
Based on previous studies’ recommendations [21,23], participants were instructed to (a) express their authentic emotions, rather than what they believed others might expect them to feel in response to the movies, (b) report the immediate feelings they experienced while viewing the movie, instead of their overall mood throughout the day, (c) disclose if they recognized the movie from the clip, to exclude any clips from our database that had been previously viewed by a minimum of 5% of the participants. In the subsequent stage, participants completed a series of distraction trials. They were instructed to press key 1 or key 2 upon seeing a circle or square, respectively. Participants were then asked to take a deep breath, follow the initial relaxation instructions, and press the “space” button when they were ready to watch the next movie. This procedure, inclusive of the relaxation instructions, was repeated for each film excerpt. The main film excerpts themselves were presented in randomized order. Finally, a neutral movie was shown to the participants for emotional recovery, a process that was consistent across all subjects. The entire experiment was designed and executed using the Python-based Psychopy library on a Windows PC. The experiment was conducted in a controlled setting, free from external interruptions, with only the participant present in the laboratory. This approach ensured a consistent and controlled experimental environment.
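The 5% recognition criterion in point (c) amounts to a simple filter over recognition counts. A minimal sketch of that rule (the function and variable names are ours, not taken from the study's code):

```python
def filter_recognized_clips(recognition_counts, n_participants, threshold=0.05):
    """Keep only clips recognized by fewer than `threshold` of participants.

    recognition_counts: dict mapping clip ID -> number of participants
    who reported having previously seen the source film.
    """
    return [clip for clip, count in recognition_counts.items()
            if count / n_participants < threshold]
```

With 300 participants, a clip recognized by 15 people (exactly 5%) would be excluded, while one recognized by 14 would be retained.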

Fig 1. Schematic representation of the experimental setup.

https://doi.org/10.1371/journal.pone.0343598.g001

Fig 2. Experimental procedure overview.

The procedure consisted of a preparation phase (personal information, Beck Depression Inventory, and relaxation instructions), followed by a computerized task. Each trial included (A) a fixation period (500 ms), (B) an emotional movie clip (up to 354 s), (C) self-report emotion ratings, and (D) a brief distraction task.

https://doi.org/10.1371/journal.pone.0343598.g002

It should be noted that the sequence of film presentation was meticulously counterbalanced, adhering to the following criteria: (a) Films sharing identical target emotions were not sequenced back-to-back. (b) Participants were prevented from viewing two films with analogous valence/arousal successively. (c) The order of video clip presentations was randomized. (d) To familiarize participants with the experimental process and establish a baseline, two neutral films were screened at the commencement of the experiment. (e) A neutral film was presented at the conclusion of the experiment to facilitate emotional recovery and the processing of negative stimuli. This approach ensured a balanced and unbiased exposure to the different emotional stimuli throughout the experiment.
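Criteria (a)-(c) together amount to a constrained shuffle. One way to generate such an order, shown here only for criterion (a) (no identical target emotions back-to-back), is a greedy pass with random restarts; this is our own illustrative scheme, not the study's actual implementation:

```python
import random

def counterbalanced_order(clips, seed=None):
    """Shuffle (emotion, clip_id) pairs so that no two consecutive clips
    share the same target emotion.

    Greedy approach: repeatedly pick a random clip whose emotion differs
    from the previously placed one; restart from a fresh shuffle if the
    greedy pass gets stuck with only same-emotion clips left.
    """
    rng = random.Random(seed)
    for _ in range(1000):  # retry bound
        pool = list(clips)
        rng.shuffle(pool)
        order, prev = [], None
        while pool:
            candidates = [c for c in pool if c[0] != prev]
            if not candidates:
                break  # stuck: restart with a new shuffle
            choice = rng.choice(candidates)
            pool.remove(choice)
            order.append(choice)
            prev = choice[0]
        if not pool:
            return order
    raise RuntimeError("no valid ordering found within retry bound")
```

The same skeleton extends to criterion (b) by also tracking the valence/arousal class of the previously placed clip.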

Measures

Self-assessment Manikin.

The Self-Assessment Manikin (SAM), a non-verbal pictorial assessment technique that measures pleasure, arousal, and dominance, was introduced by [119]. In this study, we implemented the dimensions of valence and arousal, based on the dimensional structure of emotion as proposed by [129]. These dimensions are frequently used in research contexts [23,68,130,131]. Given its intercultural applicability, SAM is suitable for use across various cultures and countries [132–135]. It serves as an alternative to verbal reporting scales. In our research, we used the computerized version of SAM, developed by [119], which employs a nine-degree scale to evaluate the dimensions of valence and arousal. Participants rated their level of pleasantness/happiness/amusement (coded as 9) or unpleasantness/sadness (coded as 1), as well as arousal (coded as 9) or calmness (coded as 1), after watching each movie during the experiment using a nine-point Likert scale [119]. The responses were based on the participants’ feelings during the movies, rather than what they perceived to be the correct response. The research tool, which uses graphic images to express different emotional states, was easy to use regardless of the participant’s educational level. The Persian version of the experiment was validated by [120] for the Iranian community. Reliability was assessed using a two-week-interval test-retest method and by calculating Cronbach’s alpha, which yielded values of 0.89 and 0.83 for the pleasantness and arousal dimensions, respectively. These results indicate an acceptable level of internal consistency.
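For reference, the Cronbach's alpha statistic reported above can be computed from raw item ratings as follows; this is a stdlib-only sketch of the standard formula, not the study's analysis code:

```python
def cronbach_alpha(ratings):
    """Cronbach's alpha for internal consistency.

    ratings: list of rows, one per participant, each row a list of item
    scores. alpha = k/(k-1) * (1 - sum(item variances) / variance of totals),
    using population variances throughout (the ratio is the same either way).
    """
    k = len(ratings[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[i] for row in ratings]) for i in range(k)]
    total_var = var([sum(row) for row in ratings])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

Perfectly consistent items (every participant giving identical scores on all items) yield alpha = 1.0.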

Differential emotions scale.

In our endeavor to evaluate discrete emotional dimensions following the viewing of each video clip, we have, for the first time to our knowledge, developed and validated a Differential Emotions Scale in Persian within Iranian society, drawing upon the insights from previous studies [21,30,46,115,116]. Our choice of the DES was motivated by its widespread use as a self-report scale for capturing discrete emotional feelings [30,83,136,137]. Specifically, we adapted and validated this Persian version of the DES, which had previously been successfully employed in validating emotional film stimuli in various languages, including English [116,138] and French [30,83]. The DES comprises groups of emotional adjectives, including descriptors such as ‘interested, concentrated, alert,’ ‘joyful, happy, amused,’ ‘sad, downhearted, blue,’ ‘angry, irritated, mad,’ ‘fearful, scared, afraid,’ ‘anxious, tense, nervous,’ ‘disgusted, turned off, repulsed,’ ‘disdainful, scornful, contemptuous,’ ‘surprised, amazed, astonished,’ ‘warm-hearted, gleeful, elated,’ ‘loving, affectionate, friendly,’ ‘guilty, remorseful,’ ‘moved,’ ‘satisfied, pleased,’ ‘calm, serene, relaxed,’ and ‘ashamed, embarrassed.’ Participants were asked to rate the intensity of their emotions for each item on a 7-point scale (ranging from ‘not at all’ to ‘very intense’) after viewing each film clip. To maintain the flow of this paper, we have detailed the methods and analyses used for the validation and reliability of this Persian version of the DES in S1 Appendix. Additionally, the Persian version of the DES is freely accessible to researchers via the Open Science Framework at https://osf.io/2td5n/.
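A common way to summarize such ratings is to average each DES item across participants and check whether a clip's target item received the highest mean. The sketch below is a generic illustration of that idea (our own helper names and synthetic data structure, not the validation analysis detailed in S1 Appendix):

```python
def des_item_means(responses):
    """Mean 7-point rating per DES item across participants.

    responses: list of dicts, one per participant, mapping a DES item
    label (e.g., 'joyful, happy, amused') -> rating from 1 to 7.
    """
    return {item: sum(r[item] for r in responses) / len(responses)
            for item in responses[0]}

def hits_target(responses, target_item):
    """True if the clip's target DES item received the highest mean rating."""
    means = des_item_means(responses)
    return max(means, key=means.get) == target_item
```

A happiness clip would be considered "on target" if, say, 'joyful, happy, amused' outscored every other item on average.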

Data analysis

The data analysis involved several statistical procedures to evaluate the effectiveness of the video clips in eliciting specific emotional responses across multiple dimensions. To assess differences in emotional responses elicited by the different categories of video clips, within-subjects repeated-measures analyses of variance (ANOVA) were performed. Following each ANOVA, post-hoc multiple pairwise comparisons were carried out with Bonferroni correction to identify specific differences between the emotional categories. The results of these post-hoc tests indicated which categories differed significantly from one another on the emotional dimensions under study. Significance is reported at the conventional thresholds used in the figures (from p < 0.05 to p < 0.0001). All analyses were performed in R, with effect sizes calculated to provide additional insight. Exploratory analyses also examined potential gender differences. This approach validated our database’s effectiveness in an Iranian cultural context.
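Although the paper's analyses were run in R, the post-hoc logic is easy to sketch in a few lines of Python (an illustrative example with simulated, hypothetical ratings, not data from the study): Bonferroni correction simply multiplies each pairwise p-value by the number of comparisons.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-subject mean arousal ratings (1-9 scale) for three categories
ratings = {
    "neutral": rng.normal(2.5, 1.0, 300).clip(1, 9),
    "fear":    rng.normal(7.0, 1.2, 300).clip(1, 9),
    "sadness": rng.normal(5.0, 1.3, 300).clip(1, 9),
}

pairs = [("neutral", "fear"), ("neutral", "sadness"), ("fear", "sadness")]
m = len(pairs)  # number of pairwise comparisons
for a, b in pairs:
    t, p = stats.ttest_rel(ratings[a], ratings[b])  # paired (within-subjects) test
    p_bonf = min(p * m, 1.0)                        # Bonferroni-corrected p-value
    print(f"{a} vs {b}: t = {t:.2f}, corrected p = {p_bonf:.3g}")
```

A full repeated-measures ANOVA, as used in the paper, would precede these comparisons; in Python this is available through, e.g., statsmodels' `AnovaRM`.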

Transparency and openness

We report how we determined our sample size, all data exclusions, all manipulations, and all measures in the study, in accordance with the Journal Article Reporting Standards (JARS; Appelbaum et al., 2018). All data, analysis code, and materials for this study are publicly available on the Open Science Framework (OSF) and can be accessed at https://osf.io/2td5n/. Data were analyzed using R, version 4.0.0 (R Core Team, 2020), and the package ggplot2, version 3.2.1 (Wickham, 2016). This study’s design and hypotheses were not preregistered.

Inclusivity in global research

Additional information regarding the ethical, cultural, and scientific considerations specific to inclusivity in global research is included in the Supporting information (S1 Checklist).

Results

The findings are focused on these key questions: Which categories of video clips elicited the most potent levels of emotional arousal? Which categories of video clips are most successful in inducing significant levels of emotional valence? Are the video clips capable of inducing distinct positive and negative affective states? Are we able to elicit differentiated emotional feeling states using the video clips? Are there any gender-based differences in emotional responses? What criteria should be used to select the most effective video clips from the database? In the following, we thoroughly investigate each of these questions within the Iranian community and report the findings for each in turn.

Which categories of video clips elicited the most potent levels of emotional arousal?

We first examined the self-reported arousal scale for each video clip across the emotional categories (see Fig 3). The statistical results for the impact of these categories on arousal are shown in Fig 4, which visually encapsulates the Bonferroni-corrected p-values and effect sizes in a heatmap. The data revealed a strong main effect of emotional category. Upon closer examination of Fig 4, the Bonferroni post hoc tests highlighted a profound level of significance in all pairwise comparisons involving the neutral category (additional details and corresponding effect sizes can be found in Fig 4). Furthermore, a significant difference was observed in all pairwise comparisons among emotional categories, except for the pairs HAPPINESS–SADNESS and SADNESS–TENDERNESS (see Fig 4 for more information). These findings suggest that all emotional movies elicited higher self-reported arousal than neutral movies, with fear-inducing movies generating the most intense arousal levels, while clips in the sadness category produced lower arousal than the other negative emotional films, though still higher than neutral films. This underscores the differential impact of the various emotional categories on arousal levels.

thumbnail
Fig 3. Subject-level analysis of continuous emotional dimensions.

This figure illustrates the analysis of emotional dimensions at the subject level. The horizontal axis denotes the valence dimension, reflecting the spectrum of positive to negative emotions. The vertical axis measures the arousal dimension, indicating the level of emotional activation or intensity. The mean and standard error are shown for each dimension.

https://doi.org/10.1371/journal.pone.0343598.g003

thumbnail
Fig 4. Self-reported emotional arousal levels across various emotional categories at the individual level, with a heatmap displaying Bonferroni pairwise comparison p-values and their corresponding effect sizes.

Significance codes: NS (non-significant), ‘****’ (<0.0001), ‘***’ (<0.001), ‘**’ (<0.01), ‘*’ (<0.05).

https://doi.org/10.1371/journal.pone.0343598.g004

Which categories of video clips are most successful in inducing significant levels of emotional valence?

Here, as our second question, we explored how the different emotional video clips impact valence, the other emotional dimension recognized through continuum theory, where a high valence rating signifies the pleasantness and a low valence rating the unpleasantness of a particular category of movies. The measurement results for each target emotion category are graphically represented in Figs 3 and 5; the analysis revealed a strong main effect of emotional category. As shown in the heatmap in Fig 5, in addition to all Bonferroni pairwise comparisons involving the neutral genre being highly significant, the pairwise comparisons of all emotional genres (except the FEAR–DISGUST and HAPPINESS–TENDERNESS pairs) were statistically significant (see Fig 5 for details and corresponding effect sizes). The findings suggest that both positive emotional film categories (i.e., HAPPINESS, which produced the higher levels of valence, and TENDERNESS) were successful in creating stronger levels of self-reported valence than neutral and negative films. Furthermore, the ANGER category of video clips resulted in the lowest level of valence. Therefore, it can be concluded that emotional films in the positive and negative genres succeeded in eliciting positive and negative emotions, respectively.

thumbnail
Fig 5. Subject-level self-reported emotional valence across various emotional categories, with a heatmap displaying Bonferroni pairwise comparison p-values and their corresponding effect sizes.

Significance codes: NS (non-significant), ‘****’ (<0.0001), ‘***’ (<0.001), ‘**’ (<0.01), ‘*’ (<0.05).

https://doi.org/10.1371/journal.pone.0343598.g005

Are the video clips capable of inducing distinct positive and negative affective states?

To test the discriminant induction validity of positive-negative affective states across our categories of emotional video clips, we established two reliable composite measures. These measures, termed positive and negative composite scores, represent Positive Affect (PA) and Negative Affect (NA) for each emotional condition. They were derived by averaging classes of DES items. The positive composite score incorporated six items: “joyful, happy, amused”; “warm-hearted, gleeful, elated”; “loving, affectionate, friendly”; “interested, concentrated, alert”; “satisfied, pleased”; and “calm, serene, relaxed”. The negative composite score encompassed ten items: “fearful, scared, afraid”; “anxious, tense, nervous”; “moved”; “angry, irritated, mad”; “ashamed, embarrassed”; “sad, downhearted, blue”; “surprised, amazed, astonished”; “guilty, remorseful”; “disgusted, turned off, repulsed”; and “disdainful, scornful, contemptuous”. Our results validated these two composite measures in Iranian culture, mirroring findings from previous studies in other cultures (e.g., [23]). In particular, all Cronbach’s alphas in our study exceeded 0.60, signifying acceptable internal consistency for our scales in line with Schmitt’s criterion [139], which deems a composite measure satisfactory at a value of 0.50, thereby highlighting the cross-cultural applicability of these composite measures. Given these findings, our data analysis here addresses the following question: to what degree were the films categorized under positive and negative emotional states successful in eliciting the corresponding emotions? More precisely, we seek to determine whether the films, previously classified under positive and negative emotional categories by preceding studies, receive analogous ratings from Iranian participants.
To this end, we conducted two separate statistical tests to examine the effect of video clip category on PA and NA scores across all 27 video clips viewed by the participants.
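The composite scores and their internal consistency described above reduce to simple matrix operations. A minimal Python sketch on simulated data (the ratings below are hypothetical; the paper's analyses were run in R) shows Cronbach's alpha for a subjects × items rating matrix and the PA composite as the mean of its items:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a subjects x items rating matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
# Simulated 1-7 ratings on the six positive-affect DES items for 300 subjects;
# a shared latent factor makes the items correlate, as in a coherent scale
latent = rng.normal(4.0, 1.0, (300, 1))
pa_items = (latent + rng.normal(0.0, 0.8, (300, 6))).clip(1, 7)

alpha = cronbach_alpha(pa_items)      # should clear the 0.50-0.60 criterion
pa_composite = pa_items.mean(axis=1)  # PA score = mean of the item ratings
```

The NA composite follows identically over its ten items.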

The analysis of the data, focusing on the positive and negative composite scores, revealed a significant main effect of video clip category on PA and on NA, independently. We executed the corresponding post-hoc tests using Bonferroni pairwise comparisons to evaluate whether the ratings for negative and positive affect differed between positive and negative video clips. Regarding positive affect, all positive video clips received significantly higher ratings than the negative video clips (additional details and corresponding effect sizes can be found in Fig 6). Furthermore, the Bonferroni pairwise comparisons between all pairs of emotional categories, excluding ANGER–DISGUST and HAPPINESS–TENDERNESS, were statistically significant according to the post-hoc tests (see Fig 6 for more information). As depicted on the left side of Fig 6, the video clips from the HAPPINESS and TENDERNESS categories achieved the highest scores on positive emotions. Conversely, all negative video clips scored significantly higher than the positive video clips on negative affect (see Fig 6). As illustrated on the right side of Fig 6, video clips in the ANGER, FEAR, and DISGUST categories received the highest scores on the negative affect items. Additionally, the Bonferroni post-hoc pairwise comparisons revealed a significant difference between each pair of emotions and all pairs within the negative genre (see Fig 6 for detailed information and corresponding effect sizes).

thumbnail
Fig 6. Subject-level self-reported positive (displayed on the left) and negative (shown on the right side) composite scores (DES-based) across various emotional categories, with a heatmap displaying Bonferroni pairwise comparison p-values and their corresponding effect sizes.

Significance codes: NS (non-significant), ‘****’ (<0.0001), ‘***’ (<0.001), ‘**’ (<0.01), ‘*’ (<0.05).

https://doi.org/10.1371/journal.pone.0343598.g006

Are we able to elicit differentiated emotional feeling states using the video clips?

Arguably the most important property to examine in emotion validation studies of this kind is whether the video clips elicit distinctly different emotional feeling states. Our large sample allows this to be investigated through a detailed analysis of the DES; specifically, we can test the fundamental hypothesis that self-reported emotional profiles are modulated by video clip categorization. Statistically, a 7 × 16 (video clip category × DES item) repeated-measures analysis of variance revealed a strong and significant interaction between video clip categorization and DES elements, clearly supporting the proposed hypothesis. Detailed statistical results on discrete emotional modulation by each induced video clip category are presented in Figs 7 and 8, including Bonferroni-corrected p-values and effect sizes, visualized in heatmaps. A closer examination of the heatmaps in Figs 7 and 8 reveals that most Bonferroni pairwise comparisons between the DES items were highly significant (see Figs 7 and 8 for more details and corresponding effect sizes). These findings confirm the anticipated differentiation between emotional states. To provide further insight, the results can be analyzed by first identifying the specific DES item targeted by each discrete emotional state.
For instance, ANGER corresponded to DES item 5: “angry, irritated, mad,” FEAR to item 2: “fearful, scared, afraid,” TENDERNESS to item 12: “loving, affectionate, friendly,” SADNESS to item 9: “sad, downhearted, blue,” HAPPINESS to item 8: “joyful, amused, happy,” DISGUST to item 14: “disgusted, turned off, repulsed,” and NEUTRAL to item 16: “calm, serene, relaxed.” Subsequently, we conducted a set of six Bonferroni pairwise comparisons for each emotional category of video clips to examine the differences between the target state (associated with the chosen DES item) and each non-target state, as detailed in Figs 7 and 8. For example, for the SADNESS video clips, we examine the Bonferroni pairwise comparisons of the specific DES item (“sad, downhearted, blue”) with the six non-target items, which are the targets of the other six emotional categories (e.g., ANGER, TENDERNESS, etc.). The statistical analyses in Figs 7 and 8 clearly show that all such comparisons are highly significant, thereby supporting the hypothesis and the expected differences between the target states.

thumbnail
Fig 7. Subject-level self-reported emotional profiles based on DES elements across various discrete emotional categories of video clips, with a heatmap displaying Bonferroni pairwise comparison p-values.

Significance codes: NS (non-significant), ‘****’ (<0.0001), ‘***’ (<0.001), ‘**’ (<0.01), ‘*’ (<0.05). Emotion abbreviations: A = anger, D = disgust, F = fear, H = happy, S = sadness, T = tenderness, N = neutral.

https://doi.org/10.1371/journal.pone.0343598.g007

thumbnail
Fig 8. Subject-level self-reported emotional profiles based on DES elements across various discrete emotional categories of video clips, with a heatmap displaying Bonferroni pairwise comparison p-values.

Significance codes: NS (non-significant), ‘****’ (<0.0001), ‘***’ (<0.001), ‘**’ (<0.01), ‘*’ (<0.05).

https://doi.org/10.1371/journal.pone.0343598.g008

Are there any gender-based differences in emotional responses?

Here, we gave particular attention to the role of gender in influencing the continuous emotional dimensions, which include arousal, valence, positive and negative affect, and the discrete emotions based on the DES elements. These were initially evaluated in Subsections 1 and 2 without gender as an additional factor. Upon incorporating gender into our statistical analyses, we found that while it did not significantly influence valence, it did have a significant main effect on the emotional arousal scale within the Iranian population (see Fig 9 for more details). To further investigate this effect, we conducted post-hoc tests using Bonferroni pairwise comparisons. These revealed a significant difference between women and men in emotional arousal during the presentation of video clips designed to induce feelings of HAPPINESS, TENDERNESS, SADNESS, and DISGUST (see Fig 9 for more details and corresponding effect sizes). Specifically, women exhibited significantly higher levels of emotional arousal than men. However, it is crucial to highlight that despite these significant differences, the corresponding effect sizes were relatively small, suggesting limited practical impact (see Fig 9).

thumbnail
Fig 9. Gender-based differences in continuous emotional dimensions, which include arousal, valence, and positive and negative affects across various emotional categories of video clips, with Bonferroni pairwise comparison p-values and their corresponding effect sizes.

Left panel: The effect of gender on arousal and valence. Right panel: Gender differences in positive & negative emotions. Significance codes: NS (non-significant), ‘****’ (<0.0001), ‘***’ (<0.001), ‘**’ (<0.01), ‘*’ (<0.05).

https://doi.org/10.1371/journal.pone.0343598.g009

On the other hand, the investigation of the effect of gender on positive and negative emotions based on the composite DES scores showed no significant main effect on positive emotions, but identified a significant main effect on negative emotions (see Fig 9 for more details). More specifically, Bonferroni pairwise comparisons showed that gender produced a significant difference on the negative-emotion scale during the viewing of video clips inducing feelings of SADNESS, FEAR, ANGER, and DISGUST (see Fig 9 for more details). However, given the extremely small effect sizes, we consider this effect negligible in practical terms.

In the final gender analysis, a significant main effect of gender on the DES items was found. Based on Bonferroni pairwise comparisons, gender significantly impacted various DES items under the different emotional conditions induced by the video clips (see Fig 10 for details). Specifically, gender significantly influenced:

thumbnail
Fig 10. Gender-based differences in discrete emotions based on the DES elements across various emotional categories of video clips, with Bonferroni pairwise comparison p-values.

Significance codes: NS (non-significant), ‘****’ (<0.0001), ‘***’ (<0.001), ‘**’ (<0.01), ‘*’ (<0.05).

https://doi.org/10.1371/journal.pone.0343598.g010

  • DES 1 during the viewing of clips eliciting feelings of DISGUST and FEAR,
  • DES 2 during clips inducing feelings of ANGER, DISGUST, FEAR, HAPPINESS, SADNESS, and TENDERNESS,
  • DES 3 during the viewing of video clips inducing feelings of ANGER, DISGUST, HAPPINESS, and SADNESS,
  • DES 5 during the viewing of video clips inducing feelings of DISGUST, FEAR, HAPPINESS, and SADNESS,
  • DES 6 during clips inducing feelings of SADNESS,
  • DES 7 during clips eliciting feelings of ANGER, DISGUST, and HAPPINESS,
  • DES 8 during clips inducing feelings of HAPPINESS,
  • DES 9 during the viewing of video clips inducing feelings of ANGER, DISGUST, and FEAR,
  • DES 10 during the viewing of NEUTRAL clips,
  • DES 11 during clips eliciting feelings of TENDERNESS,
  • DES 12 and DES 13 during clips inducing feelings of HAPPINESS,
  • DES 14 and DES 15 during the viewing of video clips inducing feelings of ANGER and FEAR.

Based on the results presented here, around 30% of all possible cases reached statistical significance (see Fig 10), indicating an impact of gender on the DES items. However, it is essential to note that our investigation of the question in Subsection 4 focused primarily on six of the 16 DES items, which were considered the main indicators of discrete emotions. None of these six items reached statistical significance at a reasonable level. Consequently, the gender differences observed in this study introduce no bias into the main analyses, which were conducted without gender as a factor.

What criteria should be used to select the most effective video clips from the database?

In mood induction studies, selecting the most effective mood-inducing stimuli is of paramount importance. Researchers often formulate their questions and hypotheses around specific emotional variables, recognizing that not all variables are equally relevant. To address this, we classified and rated the extensive collection of mood-induction video clips examined here on a range of distinct discrete and dimensional emotional variables. Beyond ranking the clips by individuals’ self-reported experience of a particular emotional state, we also developed a rating system for these clips that takes into account valence, arousal, the discrete emotions, and the balance of negative/positive affects. This methodological approach allows researchers to efficiently identify the most impactful video clips for a specific emotional dimension within our database.

The emotional rating system was grounded in a comprehensive set of 25 fundamental criteria, computed at the level of individual video clips. The procedure was as follows. First, a measurement matrix for all video clips in our database was computed from the mean and standard deviation of individuals’ scores on arousal, valence, PA, and NA for each video clip. For the PA and NA scores, each video clip was first rated on all 16 DES items; then, following the method described in Subsection 3, two clip-level coefficients for the positive and negative composite scores were derived from the averaged DES item scores. In the second step, we extended this four-criterion system with a six-criterion class of discreteness coefficients, one for each emotional category (ANGER, FEAR, HAPPINESS, DISGUST, SADNESS, and TENDERNESS), with the explicit aim of classifying video clips by the distinctness of the emotional states they elicit. Each discreteness coefficient was calculated as the difference between the mean DES score of the scale targeting the specific emotion and the averaged mean scores of the scales targeting the other five emotions. Finally, we completed the ten-criterion system by adding a fifteen-criterion class of mixed-feelings (MF) coefficients, aiming to classify video clips by a quantitative estimate of a more complex structure of human emotion, i.e., the mixed/composite emotions elicited pairwise by each video clip.
Mathematically, there are various approaches to quantifying mixed feelings, ranging from simple additive, multiplicative, and minimum methods to more complex approaches such as dimensional scaling and machine-learning techniques like Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Multidimensional Scaling (MDS). For simplicity, and to keep the focus on the primary objective of this paper, we employed a multiplicative approach in which the MF score is the product of the two elicited emotional scores (e.g., FEAR and DISGUST) associated with the same video stimulus. Higher scores thus indicate stronger mixed feelings: if either emotion is low, the product, and consequently the MF coefficient, is low; if both emotions are high, the product is high, indicating stronger mixed emotions.
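Both coefficient classes reduce to simple arithmetic on clip-level DES means. A toy Python sketch (the numbers below are hypothetical, and the sign convention for discreteness is an assumption: higher = more distinct elicitation):

```python
import numpy as np

# Hypothetical clip-level mean DES scores (1-7) for one fear-inducing clip
des_means = {"anger": 1.4, "fear": 5.8, "happiness": 1.2,
             "disgust": 4.9, "sadness": 2.1, "tenderness": 1.3}

def discreteness(target, scores):
    # Difference between the target scale's mean and the averaged means of
    # the other five target scales (sign chosen so higher = more discrete)
    others = [v for k, v in scores.items() if k != target]
    return scores[target] - np.mean(others)

def mixed_feelings(emo_a, emo_b, scores):
    # Multiplicative MF coefficient: high only when BOTH emotions are high;
    # if either emotion is low, the product (hence the MF score) is low
    return scores[emo_a] * scores[emo_b]

print(discreteness("fear", des_means))                 # strongly positive
print(mixed_feelings("fear", "disgust", des_means))    # high: both elicited
print(mixed_feelings("fear", "happiness", des_means))  # low: happiness absent
```

This clip would rank high on both the FEAR discreteness criterion and the FEAR–DISGUST MF criterion, but low on any MF pair involving happiness.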

Thus, our emotional rating system comprises the 25 fundamental criteria discussed above, providing a comprehensive framework for classifying and analyzing the emotional responses elicited by video clips. The measurement matrix covering all video clips in our database on the 25 criteria is available as a table on the Open Science Framework (https://osf.io/2td5n/). As the final step in our clip-level analysis, we identified the highest-ranking video clips for each criterion separately, to isolate the clips that induce the highest level of the targeted emotional variable. To ensure the robustness of these shortlists, we conducted several additional statistical analyses, including testing whether each video clip in the shortlist for a specific criterion differs significantly from the average of the entire video clip database. All results were significant, indicating a very high level of efficiency for each shortlisted clip on its criterion. Additionally, within each shortlist we performed an outlier analysis based on the conventional criterion of 1.5 times the interquartile range (IQR) beyond the first (Q1) and third (Q3) quartiles, to flag any clips that were substantially less or more effective than the others in the shortlist; no such cases were found, and all results were consistent (see Fig 11). Finally, the shortlists obtained for each of the 25 criteria can be seen in detail in Figs 12 and 13.
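The shortlist checks are straightforward to reproduce. A minimal Python sketch with random placeholder scores (the real criterion values live in the OSF measurement matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
# Placeholder criterion scores (e.g., a discreteness coefficient) for 27 clips
db_scores = rng.normal(0.0, 1.0, 27)
shortlist = np.sort(db_scores)[-5:]   # top-ranked clips on this criterion

# Outlier screen inside the shortlist: 1.5 x IQR beyond Q1/Q3
q1, q3 = np.percentile(shortlist, [25, 75])
iqr = q3 - q1
outliers = shortlist[(shortlist < q1 - 1.5 * iqr) | (shortlist > q3 + 1.5 * iqr)]

# Standardized (z) scores of shortlisted clips against the whole database,
# mirroring the z-score axis of Fig 11
z = (shortlist - db_scores.mean()) / db_scores.std(ddof=1)
```

The shortlist size (5) is illustrative; the same screen applies to each of the 25 criterion-specific shortlists.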

thumbnail
Fig 11. Shortlist distribution across criteria.

This figure illustrates the distribution of shortlisted scores according to various criteria. The horizontal axis denotes the different classification criteria, while the vertical axis indicates the standardized scores, or Z-scores, applicable to all criteria.

https://doi.org/10.1371/journal.pone.0343598.g011

thumbnail
Fig 12. Top-ranked video clips by criterion.

This figure displays the video clips that rank highest, known as the shortlists, for each specific criterion, each labeled with the emotional criterion’s score and its rank within the category. On the left, clips are aligned with continuous dimensional emotional variables, while on the right, they correspond to discrete, distinct emotional variables. These clips are selected to elicit the strongest response in the targeted emotional variable.

https://doi.org/10.1371/journal.pone.0343598.g012

thumbnail
Fig 13. Top-ranked video clips by mixed feelings criteria.

This figure displays the video clips that rank highest, known as the shortlists, for each specific MF coefficient, each labeled with the MF’s score and its rank within the category. These clips are selected to elicit the strongest response in the mixed/composite emotional variable.

https://doi.org/10.1371/journal.pone.0343598.g013

Discussion

The current study aimed to develop and validate a comprehensive, newly developed emotional video clip database for emotion induction in Iranian culture and society according to different validity criteria. This initiative fills a significant gap in emotion research, as previous efforts have predominantly focused on Western populations, often overlooking the intricate cultural factors that shape emotional experience. By evaluating the effectiveness of 27 carefully selected video clips, this study provides a robust tool for emotion researchers within and beyond Iran, contributing to the growing body of literature on culturally sensitive emotion induction techniques [22–24,60,84–87]. To that end, several questions were analyzed, and the validation process involved a multifaceted analysis based on several validity criteria: general arousal, valence, seven criteria for emotional discreteness (ANGER, FEAR, HAPPINESS, DISGUST, SADNESS, TENDERNESS, and NEUTRAL), two dimensions of positive and negative affect based on DES scores, mixed-feelings scores (as measured by the DES), and gender differences across these dimensions.

The validation of emotional stimuli in this study was meticulously designed to address both the breadth and depth of emotional experiences. In particular, our approach to validating emotional stimuli was grounded in a robust theoretical framework, incorporating both basic emotion theories and dimensional models. By drawing on two major theoretical approaches to emotion—the basic emotion approach [89,115,140] and the dimensional approach [16,130]—we ensured that our assessment captured the multifaceted nature of emotional responses. The basic emotion approach, which categorizes emotions into distinct types such as happiness, sadness, anger, and fear, has been a widely accepted framework in emotion research [89,115]. In contrast, the dimensional approach, which emphasizes the continuous nature of emotional experiences along axes such as arousal and valence, offers a more nuanced understanding of how emotions are experienced and reported [16,130]. This dual approach ensured a comprehensive evaluation of the emotional efficacy of the video clips. Given the importance of culture and language in emotion induction, the study’s focus on an Iranian sample addresses a gap in existing research, where such databases are often developed with little consideration for cultural specificity. Previous studies, such as those by [22–24,60,84–87], have emphasized the role of culturally relevant stimuli in emotion induction, yet few have systematically developed a database within a specific cultural context. Our findings are consistent with previous studies that have demonstrated the utility of combining these approaches to provide a comprehensive assessment of emotional states [23,24,60,63,84–87].

One of the key contributions of this study is its focus on the cultural specificity of emotion induction techniques. Culture and language play crucial roles in shaping how individuals experience and express emotions, as evidenced by cross-cultural research in emotion psychology [141,142]. These factors are often underrepresented in emotion research, which tends to prioritize universality over cultural variability [142,143]. In the Iranian context, where cultural and linguistic diversity is profound, it was essential to create a database that could reflect and accommodate this diversity. In this context, the use of video clips, unlike static images, words, or music, offers a rich, dynamic medium where emotions can be elicited through the intricate interplay of audiovisual elements, narrative content, and social cues. This makes them particularly effective for capturing the complexity of emotional experiences within a specific cultural framework, as demonstrated by our findings. Previous research, including studies like those by [22–24,60,84–87], has shown that emotional stimuli can have different effects depending on cultural context. Our findings align with these studies, suggesting that while some emotions, such as fear, may be universally experienced, others, such as tenderness, may be more culturally specific [23,142].

The ethnic, cultural, and linguistic diversity of Iran presented both a challenge and a unique opportunity for this research. With a sample of 300 individuals representing Persian, Kurdish, and Turkish speakers, we were able to assess how these diverse groups respond to emotional stimuli. It is important to note that while these participants come from different linguistic backgrounds, all are fluent in Persian, the official language of Iran, which is widely used in schools, official media, and other formal settings. This approach not only enhances the ecological validity of our findings but also provides insights into the potential universality or specificity of emotional responses across different cultural groups within Iran. Similar findings have been reported in studies examining emotion induction in culturally diverse populations [143,144]. The inclusion of non-Persian clips allowed us to explore cross-cultural similarities and differences in how foreign stimuli are perceived in the Iranian context. Our findings align with previous cross-cultural studies [23,24,60,84–87,145], indicating that the selected clips effectively induced the intended emotional states among Persian participants.

The results demonstrated that the clips were effective in eliciting the intended emotional states during testing, as detailed in the previous section. The video clips successfully triggered distinct emotional responses, showing significant distinctions between positive and negative affect while maintaining a high degree of discreteness in evoking the seven target emotional states within each clip category. Our results indicated that all emotional video clips, particularly those designed to elicit fear, generated higher levels of self-reported arousal than neutral films. Fear-inducing video clips produced the most intense arousal levels, underscoring the potency of fear as an emotion that can transcend cultural boundaries. This is consistent with the findings of [22–24,86], who noted that fear is one of the most reliably elicited emotions across different populations. Positive emotions such as happiness and tenderness also resulted in significantly higher valence scores than neutral and negative emotions, with anger-inducing films yielding the lowest valence levels. The majority of comparisons grounded in these criteria were significant and validated the anticipated differentiation between target states. However, the distinction between fear and anger was less pronounced during the anger induction phase, suggesting a potential overlap in the emotional experience of these two states. This finding highlights the complexity of emotional experience and the potential for emotions to coexist or interact in ways not fully captured by traditional models [129,146]. Further analysis of the anger-inducing films revealed that only ’Once Were Warriors’ significantly differentiated between fear and anger, suggesting that some films may evoke mixed emotional responses, a phenomenon also reported in other studies [23,62].
This finding highlights the need for further research into the nuances of emotion induction, particularly in culturally diverse settings.
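The arousal and valence contrasts summarized above rest on pairwise comparisons between emotional and neutral clips. As a minimal illustration of one such comparison, and not the study's actual analysis code, the sketch below computes Welch's t statistic (which does not assume equal group variances) for two independent groups of self-reported arousal ratings; the rating values are hypothetical placeholders.

```python
import numpy as np

def welch_t(a: np.ndarray, b: np.ndarray) -> float:
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = a.var(ddof=1), b.var(ddof=1)
    se = np.sqrt(va / len(a) + vb / len(b))  # standard error of the mean difference
    return float((a.mean() - b.mean()) / se)

# Hypothetical 9-point SAM arousal ratings for a fear clip vs. a neutral clip
fear_arousal = np.array([7.0, 8.0, 6.0, 9.0, 7.0])
neutral_arousal = np.array([2.0, 3.0, 2.0, 4.0, 3.0])
t_stat = welch_t(fear_arousal, neutral_arousal)  # positive t: higher arousal for fear
```

In practice the resulting statistic would be referred to a t distribution with Welch–Satterthwaite degrees of freedom to obtain a p value.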

Gender differences in emotional responses were another focal point of this study. Consistent with a substantial body of research, including studies by [99,147], our findings indicated that women reported higher levels of emotional arousal than men, particularly in response to negative emotions. However, it is important to emphasize that these gender-related effects were very small in magnitude () and should therefore be interpreted cautiously as exploratory differences with limited practical significance; their statistical detectability was likely facilitated by the large sample size. This gender difference in emotional arousal is a well-documented phenomenon and has been attributed to a combination of neurobiological and sociocultural factors [99,147–149]. Neurobiologically, it has been suggested that hormonal differences, particularly in the functioning of the amygdala, may underlie these gender differences in emotional reactivity [147,150,151]. Socioculturally, gender roles and expectations may influence how men and women experience and express emotions [99,148,149]. Moreover, our study revealed significant gender differences in the experience of specific negative emotions, particularly fear, sadness, anger, and disgust. Women reported higher levels of these discrete emotions than men, consistent with several previous studies [22,23,99] that also reported heightened emotional responses in women, especially in response to fear and sadness. However, this finding contrasts with some research, such as the work by [62], which reported minimal gender differences in discrete negative emotions. The discrepancy between our results and those of other studies might be attributable to cultural differences, sample characteristics, or methodological variations in emotion elicitation and measurement.
Furthermore, our study found no significant gender differences in emotional valence, suggesting that while women may experience certain emotions more intensely, the overall positivity or negativity of these emotions is not markedly different between men and women. This aligns with previous research by [22,24,62,86], who found similar patterns in emotional valence across genders. This pattern of findings highlights the complexity of gender differences in emotional responses and underscores the need for further research to explore these dynamics in diverse cultural and demographic contexts.
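Because the gender effects discussed above are described as small in magnitude, a standardized effect size is the natural way to express them. The sketch below is a minimal illustration, not the study's analysis code, of Cohen's d with a pooled standard deviation; the ratings are hypothetical placeholders.

```python
import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return float((a.mean() - b.mean()) / np.sqrt(pooled_var))

# Hypothetical arousal ratings from women vs. men for the same clip
d = cohens_d(np.array([6.0, 7.0, 5.0, 8.0]), np.array([5.0, 6.0, 5.0, 7.0]))
```

By the usual conventions, |d| around 0.2 is considered small, 0.5 medium, and 0.8 large.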

Another significant contribution of our study was the introduction of four novel video clips specifically designed to elicit happiness. These clips were carefully selected and rigorously tested to ensure they met the criteria for evoking a strong positive emotional response, outperforming traditional stimuli such as those used by [22,23] (note that in one of our early pilots, we tested these earlier clips within an Iranian context, but they failed to elicit the desired level of happiness, leading us to develop our own). Unlike the often-used but less ecologically valid happy stimuli, our video clips featured dynamic and relatable human interactions that resonate more naturally with viewers. This not only resulted in more consistent happiness ratings but also avoided the potential confounds of other emotional stimuli that may inadvertently induce mixed emotions. While our happiness clips were not directly compared with other established happiness stimuli from databases such as those by [22,23,29], they demonstrated superior ecological validity and compatibility with other emotional video clips used in this and similar studies. Future research should aim to further validate these clips across different cultural contexts to establish their robustness and generalizability. Nonetheless, the successful implementation of these happiness-inducing clips marks a valuable advancement in the toolkit available for emotion research.

A further noteworthy contribution of this study is its in-depth examination of mixed feelings, that is, the simultaneous experience of contrasting emotions such as happiness and sadness. Our findings demonstrate that several video clips, initially categorized under a single emotion, also induced significant mixed feelings in participants, particularly those aligned with complex emotional experiences like nostalgia or bittersweetness. This finding is critical as it underscores the complexity of human emotional responses, challenging the discrete emotions framework that has dominated emotion research for decades. From a theoretical perspective, such co-activation is compatible with contemporary cognitive-affective accounts that treat emotional experience as graded and multi-component rather than strictly categorical, such that a single episode can recruit partially overlapping affective and appraisal-related components (e.g., high arousal together with ambivalent valence) [96,152–154]. In this sense, mixed feelings are not merely noise in validation data, but an informative signature of complex affective episodes that normative databases should quantify and report [155,156]. Previous studies have either overlooked mixed feelings or treated them as anomalies (e.g., [94,95]), yet our results align with more recent research (e.g., [23,155,156]) that advocates for the inclusion of mixed emotions as a core component of emotional experience. The alignment of our findings with these contemporary studies emphasizes the necessity for emotion-eliciting tools to accommodate and accurately measure such complex emotional states. However, it is noteworthy that some clips failed to evoke the intended mixed feelings in the Iranian context, possibly due to cultural differences in emotional expression, a factor that has been underexplored in prior research (e.g., [157]).
This highlights the need for continued refinement of emotion-eliciting databases to ensure their cross-cultural applicability.

A detailed comparison of our study with that of [26] reveals several critical distinctions and methodological advantages. One of the primary strengths of their work lies in its incorporation of neuropsychological measures, which provide valuable insights into the neural correlates of emotional processing. However, their study's limited sample size poses constraints on the generalizability of their findings. In contrast, our study, with its larger sample size of 300 participants, offers a more robust and generalizable dataset, which is particularly important in the context of validation studies. Another critical difference is in the cultural representation within the study samples. While their research did not fully account for the cultural diversity within Iran, our study specifically aimed to include participants from various ethnic backgrounds, reflecting the multicultural nature of Iranian society. This approach ensures that our findings are more representative and applicable across the broader Iranian population. Additionally, while their study utilized locally produced Iranian films, we selected internationally recognized films with Persian subtitles. This choice was motivated by concerns that Iranian films might not effectively evoke basic emotions due to cultural and religious constraints, a concern widely shared within Iranian society. Our hypothesis is that basic emotions are universally experienced and can be reliably induced using culturally neutral stimuli, as supported by prior research (e.g., [23]). Hence, using international films with appropriate Persian subtitles ensures that the emotional content is accurately conveyed, thereby maximizing the effectiveness of emotion induction. Moreover, our study introduces the exploration of “tenderness” as a discrete emotion, an aspect that was not covered in [26].
The inclusion of this emotion aligns with emerging trends in emotion research, which recognize the importance of a broader range of emotional experiences. Furthermore, we employed both the SAM and DES questionnaires to assess emotional responses, offering a comprehensive evaluation that includes both dimensional and discrete emotional measures. In contrast, [26] relied on the PANAS questionnaire, which has been critiqued in the literature for its potential ambiguity arising from overlapping items (see [23]). Our study also took an additional step by validating the DES questionnaire for the first time within the Iranian community, contributing a valuable tool for future research in this field. Additionally, we examined mixed emotions and gender differences with a level of detail not fully addressed in the previous study, providing valuable insights into the complexity of emotional experiences. Lastly, while [26] collected data via a web-based approach, our study utilized a controlled laboratory environment, which allowed for greater control over external variables and thus increased the internal validity of our findings. In conclusion, while both studies make significant contributions to the field of emotion research, our work is distinguished by its larger and more representative sample, methodological comprehensiveness, and the introduction of novel emotional dimensions and validated tools.

Another contribution of this study is the development and validation of the DES in Persian, specifically within the Iranian community. To our knowledge, this is the first time that the DES has been adapted and rigorously tested for reliability and validity in this cultural context. S1 Appendix provides detailed insights into this process, where we conducted extensive statistical analyses, including reliability analysis, principal component analysis (PCA), and exploratory factor analysis (EFA). These analyses confirmed the DES as a robust tool for measuring emotional experiences in Iran, with a Cronbach’s alpha of 0.88 indicating strong internal consistency. The successful adaptation of the DES not only enhances the methodological rigor of emotion research within Iran but also offers a validated instrument for future studies exploring emotional dynamics in Persian-speaking populations.
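The internal-consistency value reported here is Cronbach's alpha, which relates the sum of item variances to the variance of respondents' total scores. As a minimal sketch of that standard formula, assuming only a respondents-by-items rating matrix (the example data below are hypothetical, not DES responses):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) rating matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item across respondents
    total_var = items.sum(axis=1).var(ddof=1)  # variance of each respondent's total score
    return float((k / (k - 1)) * (1 - item_vars.sum() / total_var))

# Hypothetical 5-point ratings: 4 respondents x 3 items
ratings = np.array([[4, 5, 4], [2, 3, 2], [5, 5, 4], [1, 2, 1]], dtype=float)
alpha = cronbach_alpha(ratings)
```

Values above roughly 0.7 are conventionally taken to indicate acceptable internal consistency, so the reported 0.88 falls comfortably in the good range.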

While our study offers valuable insights into emotional responses elicited by video clips, several limitations should be acknowledged. The first is the reliance on self-report measures. While self-reports provide valuable insights into subjective emotional experiences, they may not fully capture the complexity of these experiences, particularly when it comes to the temporal dynamics of emotions [158,159]. Future studies should consider integrating psychophysiological indices (e.g., electrodermal activity and heart rate variability) and neurophysiological recordings (e.g., EEG/ERP), alongside neuroimaging and oculomotor methods (e.g., fMRI, MEG, and eye-tracking), to obtain a more comprehensive and multimodal characterization of emotional responses and to strengthen convergent validity beyond self-report [159,160]. Moreover, the participant sample was relatively homogeneous, consisting primarily of young adults from a specific cultural background. This limits the generalizability of our findings to more diverse populations, including different age groups and cultural backgrounds. Future research should aim to include a more representative sample to validate the findings across a broader demographic. Additionally, the use of video clips as stimuli, while effective for eliciting emotional responses, may not fully capture the range and complexity of emotions experienced in real-world scenarios. The standardized nature of these clips, while advantageous for experimental control, could affect the ecological validity of our results. Further research could explore the duration of emotional states induced by film clips, examining how long these emotions persist and what factors influence their retention or decay [145,161].
This line of inquiry is particularly relevant to distinguishing between short-lived emotional reactions and more prolonged mood states, a distinction that is critical for understanding the nature of affective experiences [159,161]. In addition, although we applied an explicit cultural/religious compatibility criterion to screen both the visual content and dialogue of candidate excerpts and presented all clips with professionally reviewed Persian subtitles, most source materials were drawn from internationally produced (predominantly Western) films. Therefore, even when the elicited affective profiles are robust, the depicted contexts, social scripts, and emotion display norms may not fully mirror everyday Iranian cultural settings, potentially limiting cultural-ecological generalizability. Importantly, related work in Persian stimulus development has begun to juxtapose locally produced Persian clips with non-Persian sets (e.g., English/French clips) primarily at the level of dimensional affect (valence/arousal) within a physiological-signal database [26]. Building on this direction, future work should more directly quantify stimulus-origin effects by systematically comparing locally produced Iranian clips against international clips within the same validation protocol, thereby isolating the contribution of culturally indigenous contexts beyond language adaptation alone. Another limitation is the lack of consideration for individual differences in emotional processing, such as personality traits, previous emotional experiences, or current mood states, which could significantly impact how participants respond to the film clips. Future studies could benefit from assessing these variables to better understand their influence on emotional responses. 
In addition, because all raw ratings and the derived clip-level measurement matrix are openly shared, future work can leverage these data as a benchmark for supervised machine learning/deep learning models that predict emotion categories (and mixed-emotion profiles), and can extend such predictive modeling by combining self-report with multimodal physiological recordings for automated emotion recognition. Finally, the cross-sectional design of this study, which captures emotional responses at a single point in time, may not reflect the dynamic nature of emotional experiences. Longitudinal studies could provide more comprehensive insights into how emotional responses evolve over time or with repeated exposure, offering a deeper understanding of the stability and variability of these responses across different contexts and over extended periods.
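As a concrete, deliberately simple instance of the predictive-modeling direction sketched above, the code below fits a nearest-centroid classifier over clip-level features. The feature values (mean valence and arousal per clip) and labels are hypothetical placeholders, not data from the released matrix.

```python
import numpy as np

def fit_centroids(X: np.ndarray, y: list) -> dict:
    """Compute the mean feature vector (centroid) for each emotion label."""
    y = np.asarray(y)
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids: dict, x: np.ndarray) -> str:
    """Assign x to the emotion whose centroid is nearest (Euclidean distance)."""
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

# Hypothetical clip-level features: [mean valence, mean arousal] on a 1-9 scale
X = np.array([[8.0, 5.0], [7.0, 4.0], [2.0, 8.0], [3.0, 9.0]])
y = ["happiness", "happiness", "fear", "fear"]
centroids = fit_centroids(X, y)
label = predict(centroids, np.array([7.5, 4.5]))
```

A serious benchmark would of course use stronger models, cross-validation, and the full rating matrix; this sketch only illustrates the input and output such models would share.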

The establishment and validation of this database represent a significant contribution to the field of emotion research, particularly within the context of Iranian society. By including a diverse sample, considering gender differences, and creating a culturally sensitive tool that accommodates the linguistic and cultural diversity of Iran, this study offers a valuable resource for researchers interested in the cultural dimensions of emotion. The database is made accessible online, providing researchers with the opportunity to further validate and expand upon this work, thus contributing to a more comprehensive understanding of how emotions are experienced across different cultures.

Conclusion

This study successfully developed a culturally sensitive tool for emotional research in Iran, addressing a critical need in the field. Future studies should continue to explore the cultural dimensions of emotion, utilizing a variety of methodological approaches to deepen our understanding of affective processes across different societies.

Supporting information

S1 Appendix. Reliability and validity of the Persian translation of the Differential Emotions Scale (DES) in the Iranian community.

https://doi.org/10.1371/journal.pone.0343598.s001

(PDF)

Acknowledgments

We would like to express our special thanks to Azad Hemmati for his invaluable input on the study design, as well as his expert advice on the translation and validation of the Differential Emotions Scale. Our gratitude also goes to Alexandre Schaefer and Lisanne Michelle Jenkins for their constructive comments during a critical phase of the project. Additionally, we are grateful to Zeinab Majidifard for her innovative suggestions regarding the visualization of the analysis results.

References

  1. Bargh JA, Chen M, Burrows L. Automaticity of social behavior: direct effects of trait construct and stereotype-activation on action. J Pers Soc Psychol. 1996;71(2):230–44. pmid:8765481
  2. Buck R. Nonverbal behavior and the theory of emotion: the facial feedback hypothesis. J Pers Soc Psychol. 1980;38(5):811–24. pmid:7381683
  3. Petrides KV, Furnham A. Trait emotional intelligence: behavioural validation in two studies of emotion recognition and reactivity to mood induction. Eur J Pers. 2003;17(1):39–57.
  4. Droit-Volet S, Fayolle SL, Gil S. Emotion and time perception: effects of film-induced mood. Front Integr Neurosci. 2011;5:33. pmid:21886610
  5. Layous K, Nelson SK, Oberle E, Schonert-Reichl KA, Lyubomirsky S. Kindness counts: prompting prosocial behavior in preadolescents boosts peer acceptance and well-being. PLoS One. 2012;7(12):e51380. pmid:23300546
  6. Cowen AS, Keltner D. What the face displays: Mapping 28 emotions conveyed by naturalistic expression. Am Psychol. 2020;75(3):349–64. pmid:31204816
  7. Ortony A, Clore GL, Foss MA. The Referential Structure of the Affective Lexicon. Cognitive Science. 1987;11(3):341–64.
  8. Bradley MM, Lang PJ. Affective norms for English words (ANEW): Instruction manual and affective ratings. The Center for Research in Psychophysiology, University of Florida; 1999.
  9. Bradley MM, Lang PJ. Affective norms for English text (ANET): Affective ratings of text and instruction manual. Gainesville, FL: University of Florida; 2007.
  10. Velten E Jr. A laboratory task for induction of mood states. Behav Res Ther. 1968;6(4):473–82. pmid:5714990
  11. Verheyen C, Göritz AS. Plain Texts as an Online Mood-Induction Procedure. Social Psychology. 2009;40(1):6–15.
  12. Slavova V. Towards emotion recognition in texts: A sound-symbolic experiment. IJCRSEE. 2019;7(2):41–51.
  13. Lang PJ, Bradley MM, Cuthbert BN. International affective picture system (IAPS): Instruction manual and affective ratings. Technical Report A-4. The Center for Research in Psychophysiology, University of Florida; 1999.
  14. Marchewka A, Zurawski Ł, Jednoróg K, Grabowska A. The Nencki Affective Picture System (NAPS): introduction to a novel, standardized, wide-range, high-quality, realistic picture database. Behav Res Methods. 2014;46(2):596–610. pmid:23996831
  15. Dan-Glauser ES, Scherer KR. The Geneva affective picture database (GAPED): a new 730-picture database focusing on valence and normative significance. Behav Res Methods. 2011;43(2):468–77. pmid:21431997
  16. Lang PJ, Greenwald MK, Bradley MM, Hamm AO. Looking at pictures: affective, facial, visceral, and behavioral reactions. Psychophysiology. 1993;30(3):261–73. pmid:8497555
  17. Schneider F, Gur RC, Gur RE, Muenz LR. Standardized mood induction with happy and sad facial expressions. Psychiatry Res. 1994;51(1):19–31. pmid:8197269
  18. Minear M, Park DC. A lifespan database of adult facial stimuli. Behav Res Methods Instrum Comput. 2004;36(4):630–3. pmid:15641408
  19. Ekman P, Friesen WV. Measuring facial movement. J Nonverbal Behav. 1976;1(1):56–75.
  20. Ebner NC, Riediger M, Lindenberger U. FACES—a database of facial expressions in young, middle-aged, and older women and men: development and validation. Behav Res Methods. 2010;42(1):351–62. pmid:20160315
  21. Philippot P. Inducing and assessing differentiated emotion-feeling states in the laboratory. Cogn Emot. 1993;7(2):171–93. pmid:27102736
  22. Gross JJ, Levenson RW. Emotion elicitation using films. Cognition & Emotion. 1995;9(1):87–108.
  23. Schaefer A, Nils F, Sanchez X, Philippot P. Assessing the effectiveness of a large database of emotion-eliciting films: A new tool for emotion researchers. Cognition & Emotion. 2010;24(7):1153–72.
  24. Ge Y, Zhao G, Zhang Y, Houston RJ, Song J. A standardised database of Chinese emotional film clips. Cogn Emot. 2019;33(5):976–90. pmid:30293475
  25. Ismail S, Aziz SNMAA, Ibrahim NA, Khan SZ, Rahman MT, M A. Human-Centric Computing and Information Sciences. 2021;11(36):1–18.
  26. Shalchizadeh F, Shamekhi S, Naghdi Sadeh R, Darvish A. Persian emotion elicitation film set and signal database. Biomedical Signal Processing and Control. 2022;72:103290.
  27. Ekman P, Levenson RW, Friesen WV. Autonomic nervous system activity distinguishes among emotions. Science. 1983;221(4616):1208–10. pmid:6612338
  28. Cabral JCC, Tavares P de S, Weydmann GJ, das Neves VT, de Almeida RMM. Eliciting Negative Affects Using Film Clips and Real-Life Methods. Psychol Rep. 2018;121(3):527–47. pmid:29298555
  29. Jenkins LM, Andrewes DG. A New Set of Standardised Verbal and Non-verbal Contemporary Film Stimuli for the Elicitation of Emotions. Brain Impairment. 2012;13(2):212–27.
  30. McHugo GJ, Smith CA, Lanzetta JT. The structure of self-reports of emotional responses to film segments. Motiv Emot. 1982;6(4):365–85.
  31. Sato W, Noguchi M, Yoshikawa S. Emotion elicitation effect of films in a Japanese sample. Soc Behav Pers. 2007;35(7):863–74.
  32. von Leupoldt A, Rohde J, Beregova A, Thordsen-Sörensen I, zur Nieden J, Dahme B. Films for eliciting emotional states in children. Behav Res Methods. 2007;39(3):606–9. pmid:17958174
  33. Soleymani M, Lichtenauer J, Pun T, Pantic M. A Multimodal Database for Affect Recognition and Implicit Tagging. IEEE Trans Affective Comput. 2012;3(1):42–55.
  34. Koelstra S, Muhl C, Soleymani M, Lee J-S, Yazdani A, Ebrahimi T, et al. DEAP: A Database for Emotion Analysis; Using Physiological Signals. IEEE Trans Affective Comput. 2012;3(1):18–31.
  35. Carvalho S, Leite J, Galdo-Álvarez S, Gonçalves OF. The Emotional Movie Database (EMDB): a self-report and psychophysiological study. Appl Psychophysiol Biofeedback. 2012;37(4):279–94. pmid:22767079
  36. Abadi MK, Subramanian R, Kia SM, Avesani P, Patras I, Sebe N. DECAF: MEG-Based Multimodal Database for Decoding Affective Physiological Responses. IEEE Trans Affective Comput. 2015;6(3):209–22.
  37. Samson AC, Kreibig SD, Soderstrom B, Wade AA, Gross JJ. Eliciting positive, negative and mixed emotional states: A film library for affective scientists. Cogn Emot. 2016;30(5):827–56. pmid:25929696
  38. Sutherland G, Newman B, Rachman S. Experimental investigations of the relations between mood and intrusive unwanted cognitions. Br J Med Psychol. 1982;55(Pt 2):127–38. pmid:7104244
  39. Jallais C, Gilet A-L. Inducing changes in arousal and valence: comparison of two mood induction procedures. Behav Res Methods. 2010;42(1):318–25. pmid:20160311
  40. Mongrain M, Trambakoulos J. A Musical Mood Induction Alleviates Dysfunctional Attitudes in Needy and Self-Critical Individuals. J Cogn Psychother. 2007;21(4):295–309.
  41. Koelsch S. Towards a neural basis of music-evoked emotions. Trends Cogn Sci. 2010;14(3):131–7. pmid:20153242
  42. Logeswaran N, Bhattacharya J. Crossmodal transfer of emotion by music. Neurosci Lett. 2009;455(2):129–33. pmid:19368861
  43. Lench H, Levine L. Effects of fear on risk and control judgements and memory: Implications for health promotion messages. Cognition & Emotion. 2005;19(7):1049–69.
  44. Papa A, Bonanno GA. Smiling in the face of adversity: the interpersonal and intrapersonal functions of smiling. Emotion. 2008;8(1):1–12. pmid:18266511
  45. Chentsova-Dutton YE, Tsai JL. Gender differences in emotional response among European Americans and Hmong Americans. Cognition & Emotion. 2007;21(1):162–81.
  46. Schaefer A, Philippot P. Selective effects of emotion on the phenomenal characteristics of autobiographical memories. Memory. 2005;13(2):148–60. pmid:15847227
  47. Schaefer A, Collette F, Philippot P, van der Linden M, Laureys S, Delfiore G, et al. Neural correlates of “hot” and “cold” emotional processing: a multilevel approach to the functional anatomy of emotion. Neuroimage. 2003;18(4):938–49. pmid:12725769
  48. Vrana SR, Cuthbert BN, Lang PJ. Fear imagery and text processing. Psychophysiology. 1986;23(3):247–53. pmid:3749404
  49. Lang PJ. Presidential address, 1978. A bio-informational theory of emotional imagery. Psychophysiology. 1979;16(6):495–512. pmid:515293
  50. Wood JV, Saltzberg JA, Goldsamt LA. Does affect induce self-focused attention? J Pers Soc Psychol. 1990;58(5):899–908. pmid:2348375
  51. Diniz Bernardo P, Bains A, Westwood S, Mograbi DC. Mood Induction Using Virtual Reality: a Systematic Review of Recent Findings. J Technol Behav Sci. 2020;6(1):3–24.
  52. Somarathna R, Bednarz T, Mohammadi G. Virtual Reality for Emotion Elicitation – A Review. IEEE Trans Affective Comput. 2023;14(4):2626–45.
  53. Kring AM, Gordon AH. Sex differences in emotion: expression, experience, and physiology. J Pers Soc Psychol. 1998;74(3):686–703. pmid:9523412
  54. Fernández-Aguilar L, Navarro-Bravo B, Ricarte J, Ros L, Latorre JM. How effective are films in inducing positive and negative emotional states? A meta-analysis. PLoS One. 2019;14(11):e0225040. pmid:31751361
  55. Westermann R, Spies K, Stahl G, Hesse FW. Relative effectiveness and validity of mood induction procedures: a meta-analysis. Eur J Soc Psychol. 1996;26(4):557–80.
  56. Rottenberg J, Ray RD, Gross JJ. Emotion elicitation using films. In: Coan JA, Allen JJB, editors. Handbook of emotion elicitation and assessment. New York: Oxford University Press; 2007. p. 9–28.
  57. Simons RF, Detenber BH, Cuthbert BN, Schwartz DD, Reiss JE. Attention to Television: Alpha Power and Its Relationship to Image Motion and Emotional Content. Media Psychology. 2003;5(3):283–301.
  58. Uhrig MK, Trautmann N, Baumgärtner U, Treede R-D, Henrich F, Hiller W, et al. Emotion Elicitation: A Comparison of Pictures and Films. Front Psychol. 2016;7:180. pmid:26925007
  59. Mifflin KA, Hackmann T, Chorney JM. Streamed video clips to reduce anxiety in children during inhaled induction of anesthesia. Anesth Analg. 2012;115(5):1162–7. pmid:23051880
  60. Fernández Megías C, Pascual Mateos JC, Soler Ribaudi J, Fernández-Abascal EG. Validación española de una batería de películas para inducir emociones [Spanish validation of a battery of films for inducing emotions]. Psicothema. 2011;23(4):778–85.
  61. Gabert-Quillen CA, Bartolini EE, Abravanel BT, Sanislow CA. Ratings for emotion film clips. Behav Res Methods. 2015;47(3):773–87. pmid:24984981
  62. Hagemann D, Naumann E, Maier S, Becker G, Lürken A, Bartussek D. The assessment of affective reactivity using films: Validity, reliability and sex differences. Personality and Individual Differences. 1999;26(4):627–39.
  63. Hewig J, Hagemann D, Seifert J, Gollwitzer M, Naumann E, Bartussek D. A revised film set for the induction of basic emotions [brief report]. Cognition & Emotion. 2005;19(7):1095–109.
  64. Zupan B, Babbage DR. Film clips and narrative text as subjective emotion elicitation techniques. J Soc Psychol. 2017;157(2):194–210. pmid:27385591
  65. Diener E, Napa Scollon C, Lucas RE. The Evolving Concept of Subjective Well-Being: The Multifaceted Nature of Happiness. Social Indicators Research Series. Springer Netherlands. 2009. p. 67–100. https://doi.org/10.1007/978-90-481-2354-4_4
  66. Palomba D, Sarlo M, Angrilli A, Mini A, Stegagno L. Cardiac responses associated with affective processing of unpleasant film stimuli. Int J Psychophysiol. 2000;36(1):45–57. pmid:10700622
  67. Devilly GJ, O’Donohue RP. A video is worth a thousand thoughts: comparing a video mood induction procedure to an autobiographical recall technique. Australian Journal of Psychology. 2021;73(4):438–51.
  68. Fernández-Aguilar L, Ricarte J, Ros L, Latorre JM. Emotional Differences in Young and Older Adults: Films as Mood Induction Procedure. Front Psychol. 2018;9:1110. pmid:30018584
  69. Gerrards-Hesse A, Spies K, Hesse FW. Experimental inductions of emotional states and their effectiveness: A review. British J of Psychology. 1994;85(1):55–78.
  70. Zupan B, Eskritt M. Eliciting emotion ratings for a set of film clips: A preliminary archive for research in emotion. J Soc Psychol. 2020;160(6):768–89. pmid:32419668
  71. Scherer KR, Brosch T. Culture-specific appraisal biases contribute to emotion dispositions. Eur J Pers. 2009;23(3):265–88.
  72. Bagozzi RP, Wong N, Yi Y. The Role of Culture and Gender in the Relationship between Positive and Negative Affect. Cognition & Emotion. 1999;13(6):641–72.
  73. Bianchin M, Angrilli A. Gender differences in emotional responses: a psychophysiological study. Physiol Behav. 2012;105(4):925–32. pmid:22108508
  74. Fernández C, Pascual JC, Soler J, Elices M, Portella MJ, Fernández-Abascal E. Physiological responses induced by emotion-eliciting films. Appl Psychophysiol Biofeedback. 2012;37(2):73–9. pmid:22311202
  75. Fischer AH, Rodriguez Mosquera PM, van Vianen AEM, Manstead ASR. Gender and culture differences in emotion. Emotion. 2004;4(1):87–94. pmid:15053728
  76. Bradley MM, Codispoti M, Cuthbert BN, Lang PJ. Emotion and motivation I: Defensive and appetitive reactions in picture processing. Emotion. 2001;1(3):276–98.
  77. Ekman P. Universals and cultural differences in facial expressions of emotion. In: Nebraska Symposium on Motivation, 1971. p. 207–83.
  78. Ekman P, Friesen WV, O’Sullivan M, Chan A, Diacoyanni-Tarlatzis I, Heider K, et al. Universals and cultural differences in the judgments of facial expressions of emotion. J Pers Soc Psychol. 1987;53(4):712–7. pmid:3681648
  79. Jack RE, Garrod OGB, Yu H, Caldara R, Schyns PG. Facial expressions of emotion are not culturally universal. Proc Natl Acad Sci U S A. 2012;109(19):7241–4. pmid:22509011
  80. Liang YC, Hsieh S, Weng CY, Sun CR. Taiwan corpora of Chinese emotions and relevant psychophysiological data – standard Chinese emotional film clips database and subjective evaluation normative data. Chinese Journal of Psychology. 2013;55:601–21.
  81. Mesquita B, Haire A. Emotion and Culture. Encyclopedia of Applied Psychology. Elsevier. 2004. p. 731–7. https://doi.org/10.1016/b0-12-657410-3/00393-7
  82. Xu P, Huang Y, Luo Y. Establishment and assessment of native Chinese affective video system. Chinese Mental Health Journal. 2010;24:551–61.
  83. Fernández-Aguilar L, Latorre JM, Martínez-Rodrigo A, Moncho-Bogani JV, Ros L, Latorre P, et al. Differences between young and older adults in physiological and subjective responses to emotion induction using films. Sci Rep. 2020;10(1):14548. pmid:32883988
  84. İyilikci EA, Boğa M, Yüvrük E, Özkılıç Y, İyilikci O, Amado S. An extended emotion-eliciting film clips set (EGEFILM): assessment of emotion ratings for 104 film clips in a Turkish sample. Behav Res Methods. 2024;56(2):529–62. pmid:36737582
  85. Alghowinem S, Alghuwinem S, Alshehri M, Al-Wabil A, Goecke R, Wagner M. Design of an Emotion Elicitation Framework for Arabic Speakers. Lecture Notes in Computer Science. Springer International Publishing. 2014. p. 717–28. https://doi.org/10.1007/978-3-319-07230-2_68
  86. Deng Y, Yang M, Zhou R. A New Standardized Emotional Film Database for Asian Culture. Front Psychol. 2017;8:1941. pmid:29163312
  87. Khanh TLB, Kim S-H, Lee G, Yang H-J, Baek E-T. Korean video dataset for emotion recognition in the wild. Multimed Tools Appl. 2020;80(6):9479–92.
  88. Ekman P. Expression and the nature of emotion. Approaches to Emotion. 1984;3(19):319–44.
  89. Ekman P. Are there basic emotions? Psychol Rev. 1992;99(3):550–3. pmid:1344638
  90. Tooby J, Cosmides L. The evolutionary psychology of emotions and their relationship to internal regulatory variables. In: Lewis M, Haviland-Jones JM, Barrett LF, editors. Handbook of emotions. 3rd ed. New York: Guilford Press; 2008.
  91. Carver CS, Harmon-Jones E. Anger is an approach-related affect: evidence and implications. Psychol Bull. 2009;135(2):183–204. pmid:19254075
  92. Bonanno GA, Goorin L, Coifman KG. Sadness and grief. In: Lewis M, Haviland-Jones JM, Feldman Barrett L, editors. The Handbook of Emotion. 3rd ed. New York: Guilford Press; 2008.
  93. Hemenover SH, Schimmack U. That’s disgusting! …, but very amusing: Mixed feelings of amusement and disgust. Cognition and Emotion. 2007;21(5):1102–13.
  94. Ersner-Hershfield H, Mikels JA, Sullivan SJ, Carstensen LL. Poignancy: mixed emotional experience in the face of meaningful endings. J Pers Soc Psychol. 2008;94(1):158–67. pmid:18179325
  95. Schimmack U. Pleasure, displeasure, and mixed feelings: Are semantic opposites mutually exclusive? Cognition and Emotion. 2001;15(1):81–97.
  96. Barrett LF. Are emotions natural kinds? Perspectives on Psychological Science. 2006;1(1):28–58.
  97. Davidson RJ. Prolegomenon to the structure of emotion: Gleanings from neuropsychology. Cognition and Emotion. 1992;6(3–4):245–68.
  98. Davidson RJ. Parsing affective space: Perspectives from neuropsychology and psychophysiology. Neuropsychology. 1993;7(4):464–75.
  99. Brody LR, Hall JA. Gender and emotion in context. In: Lewis M, Haviland-Jones JM, editors. Handbook of Emotions. 3rd ed. New York, NY: Guilford Publications; 2000.
  100. Barrett LF, Gendron M, Huang Y-M. Do discrete emotions exist? Philosophical Psychology. 2009;22(4):427–37.
  101. Namaziandost E, Rezai A, Heydarnejad T, Kruk M. Emotion and cognition are two wings of the same bird: Insights into academic emotion regulation, critical thinking, self-efficacy beliefs, academic resilience, and academic engagement in Iranian EFL context. Thinking Skills and Creativity. 2023;50:101409.
  102. Bitaneh M, Zare H, Alizadehfard S, Erfani N. Effect of social cognition and interaction training on facial emotion recognition in the elderly with mild cognitive impairment. Salmand: Iranian Journal of Ageing. 2022;17(3):446–59.
  103. 103. Keshtiari N, Kuhlmann M. The effects of culture and gender on the recognition of emotional speech: Evidence from Persian speakers living in a collectivist society. International Journal of Society, Culture & Language. 2016;4(2):71–86.
  104. 104. Jandaghian M, Setayeshi S, Razzazi F, Sharifi A. Music emotion recognition based on a modified brain emotional learning model. Multimed Tools Appl. 2023;82(17):26037–61.
  105. 105. Moradi A, Mehrinejad SA, Ghadiri M, Rezaei F. Event-Related Potentials of Bottom-Up and Top-Down Processing of Emotional Faces. Basic Clin Neurosci. 2017;8(1):27–36. pmid:28446947
  106. 106. Riegel M, Moslehi A, Michałowski JM, Żurawski Ł, Horvat M, Wypych M, et al. Nencki Affective Picture System: Cross-Cultural Study in Europe and Iran. Front Psychol. 2017;8:274. pmid:28316576
  107. 107. Zamani N. Is international affective picture system (IAPS) appropriate for using in Iranian culture, comparing to the original normative rating based on a North American sample. Eur psychiatr. 2017;41(S1):S520–S520.
  108. 108. Hosseini FA, Shaygan M, Jahandideh Z. Positive imagery in depressive suicidal patients: A randomized controlled trial of the effect of viewing loved ones’ photos on mood states and suicidal ideation. Heliyon. 2023;9(11):e22312. pmid:38058624
  109. 109. Keshtiari N, Kuhlmann M, Eslami M, Klann-Delius G. Recognizing emotional speech in Persian: a validated database of Persian emotional speech (Persian ESD). Behav Res Methods. 2015;47(1):275–94. pmid:24853832
  110. 110. Mohamad Nezami O, Jamshid Lou P, Karami M. ShEMO: a large-scale validated database for Persian speech emotion detection. Lang Resources & Evaluation. 2018;53(1):1–16.
  111. 111. Fathabadi M, Alipour A, Aghayousefi A. The assessment of the effect of positive and negative mood induction using Iranian music on cardiovascular reactions. Journal of Psychology New Ideas. 2021;8(12):1–13.
  112. 112. Hadavi S, Kuroda J, Shimozono T, Leongómez JD, Savage PE. Cross-cultural relationships between music, emotion, and visual imagery: A comparative study of Iran, Canada, and Japan [Stage 1 Registered Report]. Center for Open Science. 2023. http://dx.doi.org/10.31234/osf.io/26yg5
  113. 113. Naseri P, Alavi Majd H, Tabatabaei SM, Khadembashi N, Najibi SM, Nazari A. Functional Brain Response to Emotional Musical Stimuli in Depression, Using INLA Approach for Approximate Bayesian Inference. Basic Clin Neurosci. 2021;12(1):95–104. pmid:33995932
  114. 114. Nazari M, Puladi F, Shekari L, Soleimani F, Seddigh Z, Shamli R. Emotional reactions to persian classical music excerpts. Journal of Modern Psychological Researches. 2011;6(21):165–89.
  115. 115. Izard CE, Dougherty FE, Bloxom BM, Kotsch NE. The differential emotions scale: A method of measuring the meaning of subjective experience of discrete emotions. Nashville: Vanderbilt University, Department of Psychology. 1974.
  116. 116. Youngstrom EA, Green KW. Reliability Generalization Of Self-Report Of Emotions When Using The Differential Emotions Scale. Educational and Psychological Measurement. 2003;63(2):279–95.
  117. 117. Boyle GJ. Reliability and validity of Izard’s differential emotions scale. Personality and Individual Differences. 1984;5(6):747–50.
  118. 118. Fairfield B, Ambrosini E, Mammarella N, Montefinese M. Affective Norms for Italian Words in Older Adults: Age Differences in Ratings of Valence, Arousal and Dominance. PLoS One. 2017;12(1):e0169472. pmid:28046070
  119. 119. Bradley MM, Lang PJ. Measuring emotion: the Self-Assessment Manikin and the Semantic Differential. J Behav Ther Exp Psychiatry. 1994;25(1):49–59. pmid:7962581
  120. 120. Chianeh GN, Vahedi S, Rostami M, Nazari MA. Validity and reliability of self-assessment manikin. Journal of Research in Psychological Health. 2012;6:52–61.
  121. 121. Frijda NH. The Emotions. Cambridge, UK: Cambridge University Press. 1986.
  122. 122. Depue RA, Morrone-Strupinsky JV. A neurobehavioral model of affiliative bonding: implications for conceptualizing a human trait of affiliation. Behav Brain Sci. 2005;28(3):313–50; discussion 350-95. pmid:16209725
  123. 123. Hatfield E, Rapson RL. Love and attachment processes. In: Lewis M, Haviland-Jones JM, editors. Handbook of Emotions. 2nd ed. New York: Guilford Publications. 2000.
  124. 124. Beck AT, Ward CH, Mendelson M, Mock J, Erbaugh J. An inventory for measuring depression. Arch Gen Psychiatry. 1961;4:561–71. pmid:13688369
  125. 125. Kendall PC, Hollon SD, Beck AT, Hammen CL, Ingram RE. Issues and recommendations regarding use of the Beck Depression Inventory. Cogn Ther Res. 1987;11(3):289–99.
  126. 126. Beck AT, Steer RA, Ball R, Ranieri W. Comparison of Beck Depression Inventories -IA and -II in psychiatric outpatients. J Pers Assess. 1996;67(3):588–97. pmid:8991972
  127. 127. Rahimi C. Application of the Beck depression inventory-II in iranian university students. Clinical Psychology and Personality. 2014;10:173–88.
  128. 128. Erisman SM, Roemer L. A preliminary investigation of the effects of experimentally induced mindfulness on emotional responding to film clips. Emotion. 2010;10(1):72–82. pmid:20141304
  129. 129. Russell JA, Barrett LF. Core affect, prototypical emotional episodes, and other things called emotion: dissecting the elephant. J Pers Soc Psychol. 1999;76(5):805–19. pmid:10353204
  130. 130. Russell JA. A circumplex model of affect. Journal of Personality and Social Psychology. 1980;39(6):1161–78.
  131. 131. Fujimura T, Umemura H. Development and validation of a facial expression database based on the dimensional and categorical model of emotions. Cogn Emot. 2018;32(8):1663–70. pmid:29334821
  132. 132. Morris JD. Observations  : SAM: The Self-Assessment Manikin—An Efficient Cross-Cultural Measurement of Emotional Response. Journal of Advertising Research. 1995;35(6):63–8.
  133. 133. Sainz-de-Baranda Andujar C, Gutiérrez-Martín L, Miranda-Calero JÁ, Blanco-Ruiz M, López-Ongil C. Gender biases in the training methods of affective computing: Redesign and validation of the Self-Assessment Manikin in measuring emotions via audiovisual clips. Front Psychol. 2022;13:955530. pmid:36337482
  134. 134. Handayani D, Wahab A, Yaacob H. Recognition of Emotions in Video Clips: The Self-Assessment Manikin Validation. TELKOMNIKA. 2015;13(4):1343.
  135. 135. Romeo Z, Fusina F, Semenzato L, Bonato M, Angrilli A, Spironelli C. Comparison of Slides and Video Clips as Different Methods for Inducing Emotions: An Electroencephalographic Alpha Modulation Study. Front Hum Neurosci. 2022;16:901422. pmid:35734350
  136. 136. Fajula C, Bonin-Guillaume S, Jouve E, Blin O. Emotional reactivity assessment of healthy elderly with an emotion-induction procedure. Exp Aging Res. 2013;39(1):109–24. pmid:23316739
  137. 137. Fredrickson BL, Tugade MM, Waugh CE, Larkin GR. What good are positive emotions in crises? A prospective study of resilience and emotions following the terrorist attacks on the United States on September 11th, 2001. J Pers Soc Psychol. 2003;84(2):365–76. pmid:12585810
  138. 138. Song T, Zheng W, Lu C, Zong Y, Zhang X, Cui Z. MPED: A Multi-Modal Physiological Emotion Database for Discrete Emotion Recognition. IEEE Access. 2019;7:12177–91.
  139. 139. Schmitt N. Uses and abuses of coefficient alpha. Psychological Assessment. 1996;8(4):350–3.
  140. 140. Ekman P. An argument for basic emotions. Cognition and Emotion. 1992;6(3–4):169–200.
  141. 141. Mesquita B, Frijda NH. Cultural variations in emotions: a review. Psychol Bull. 1992;112(2):179–204. pmid:1454891
  142. 142. Matsumoto D. Ethnic differences in affect intensity, emotion judgments, display rule attitudes, and self-reported emotional expression in an American sample. Motiv Emot. 1993;17(2):107–23.
  143. 143. Mesquita B. Culture and emotion: Different approaches to the question. In: Mayne TJ, Bonanno GA, editors. Emotions: current issues and future directions. The Guilford Press; 2001. p. 214–50.
  144. 144. Matsumoto D, Ekman P. American-Japanese cultural differences in intensity ratings of facial expressions of emotion. Motiv Emot. 1989;13(2):143–57.
  145. 145. Ekman P, Davidson RJ. The nature of emotion: Fundamental questions. Oxford, UK: Oxford University Press. 1994.
  146. 146. Scherer KR. What are emotions? And how can they be measured?. Social Science Information. 2005;44(4):695–729.
  147. 147. Cahill L, Uncapher M, Kilpatrick L, Alkire MT, Turner J. Sex-related hemispheric lateralization of amygdala function in emotionally influenced memory: an FMRI investigation. Learn Mem. 2004;11(3):261–6. pmid:15169855
  148. 148. Barrett LF, Robin L, Pietromonaco PR, Eyssell KM. Are Women the “More Emotional” Sex? Evidence From Emotional Experiences in Social Context. Cognition & Emotion. 1998;12(4):555–78.
  149. 149. Kelly JR, Hutson-Comeaux SL. Gender-Emotion Stereotypes Are Context Specific. Sex Roles. 1999;40(1–2):107–20.
  150. 150. Goldstein JM, Seidman LJ, Horton NJ, Makris N, Kennedy DN, Caviness VS Jr, et al. Normal sexual dimorphism of the adult human brain assessed by in vivo magnetic resonance imaging. Cereb Cortex. 2001;11(6):490–7. pmid:11375910
  151. 151. Mahaldar O, Aditya S. Gender Differences in Brain Activity during Exposure to Emotional Film Clips: an EEG Study. CBB. 2017;21(1):29–53.
  152. 152. Russell JA. Core affect and the psychological construction of emotion. Psychol Rev. 2003;110(1):145–72. pmid:12529060
  153. 153. Scherer KR. The dynamic architecture of emotion: Evidence for the component process model. Cognition and Emotion. 2009;23(7):1307–51.
  154. 154. Moors A. Theories of emotion causation: A review. Cognition & Emotion. 2009;23(4):625–62.
  155. 155. Larsen JT, McGraw AP, Cacioppo JT. Can people feel happy and sad at the same time?. Journal of Personality and Social Psychology. 2001;81(4):684–96.
  156. 156. Rafaeli E, Rogers GM, Revelle W. Affective synchrony: individual differences in mixed emotions. Pers Soc Psychol Bull. 2007;33(7):915–32. pmid:17551163
  157. 157. Mesquita B, Walker R. Cultural differences in emotions: a context for interpreting emotional experiences. Behav Res Ther. 2003;41(7):777–93. pmid:12781245
  158. 158. Mauss IB, Robinson MD. Measures of emotion: A review. Cogn Emot. 2009;23(2):209–37. pmid:19809584
  159. 159. Davidson RJ. Affective neuroscience and psychophysiology: toward a synthesis. Psychophysiology. 2003;40(5):655–65. pmid:14696720
  160. 160. Phelps EA. The neuroscience of emotion: A new synthesis. Current Directions in Psychological Science. 2006;150(4):168–72.
  161. 161. Frijda NH. Moods, emotions, and personality. European Review of Social Psychology. 1993;4:167–97.