
Measuring Psychobiosocial States in Sport: Initial Validation of a Trait Measure

  • Claudio Robazza,

    c.robazza@unich.it

    Affiliation BIND-Behavioral Imaging and Neural Dynamics Center, Department of Medicine and Aging Sciences, “G. d’Annunzio” University of Chieti-Pescara, Chieti, Italy

  • Maurizio Bertollo,

    Affiliation BIND-Behavioral Imaging and Neural Dynamics Center, Department of Medicine and Aging Sciences, “G. d’Annunzio” University of Chieti-Pescara, Chieti, Italy

  • Montse C. Ruiz,

    Affiliation Department of Sport Sciences, University of Jyväskylä, Jyväskylä, Finland

  • Laura Bortoli

    Affiliation BIND-Behavioral Imaging and Neural Dynamics Center, Department of Medicine and Aging Sciences, “G. d’Annunzio” University of Chieti-Pescara, Chieti, Italy

Abstract

We examined the item characteristics, the factor structure, and the concurrent validity of a trait measure of psychobiosocial states. In Study 1, Italian athletes (N = 342, 228 men, 114 women, Mage = 23.93, SD = 6.64) rated the intensity, frequency, and perceived impact dimensions of the trait version of a psychobiosocial states scale (PBS-ST), composed of 20 items (10 functional and 10 dysfunctional) referring to how they usually felt before an important competition. In Study 2, the scale was cross-validated in an independent sample (N = 251, 181 men, 70 women, Mage = 24.35, SD = 7.25). The concurrent validity of the PBS-ST scale scores was also examined in comparison with two sport-specific emotion-related measures and a general measure of affect. Exploratory structural equation modeling and confirmatory factor analysis of the Study 1 data showed that a 2-factor, 15-item solution of the PBS-ST scale (8 functional items and 7 dysfunctional items) reached satisfactory fit indices for the three dimensions (i.e., intensity, frequency, and perceived impact). Results of Study 2 provided evidence of substantial measurement and structural invariance of all dimensions across samples. The low association of the PBS-ST scale with the other measures suggests that the scale taps unique constructs. Findings of the two studies offer initial validity evidence for a sport-specific tool to measure psychobiosocial states.

Introduction

The beneficial and detrimental effects of emotions on performance have been widely investigated in sport psychology [1–4]. As a result of this interest, researchers have developed a number of instruments to assess competitive anxiety [5, 6] or other emotions [7]. Ruiz, Hanin, and Robazza [8] have recently proposed an individualized profiling procedure to assess a large array of athletes’ performance-related experiences termed psychobiosocial states. Grounded in the individual zones of optimal functioning (IZOF) model [2, 9, 10], the profiling procedure enables researchers and practitioners to identify and assess athletes’ idiosyncratic descriptors of the experiences surrounding successful and unsuccessful performances [11]. This comprehensive assessment of the athlete’s functional and dysfunctional psychobiosocial states includes affective, cognitive, motivational, volitional (psychological), bodily-somatic, motor-behavioral (biological), operational, and communicative (social) modalities. In addition to in-depth individual profiles, researchers can benefit from a standardized tool for the assessment of athletes’ states, particularly with large samples of participants. Therefore, the purpose of our study was to examine the item characteristics and the factor structure of the items included in the individualized profiling of psychobiosocial states [8]. Another objective was to investigate the concurrent validity of the tool in comparison with other instruments.

The conceptualization of psychobiosocial states in the IZOF model is to some extent akin to other theoretical perspectives used to investigate emotions in achievement settings. For example, the control-value theory [12] provides an integrative approach for studying a variety of emotions experienced in different contexts, including academic settings, sport, and professions. Emotions are construed as sets of interrelated psychological processes, in which affective, cognitive, motivational, and physiological components are fundamental. Anxiety, for instance, can involve affective (feeling uncomfortable), cognitive (worry), motivational (withdrawal tendencies), and physiological (peripheral activation) components [13, 14]. Another leading perspective in achievement contexts is the biopsychosocial model of challenge and threat, so named because it integrates biological, psychological, and social levels of analysis to explain motivational processes of human performance [15, 16]. The biopsychosocial model accounts for the autonomic and endocrine influences on the cardiovascular system (the biological level), the cognitive and affective influences on evaluative processes (the psychological level), and the interplay among intraindividual, interindividual, and environmental aspects (the social psychological level). Challenge and threat motivations are embedded in these levels of analysis and their interplay. In this view, challenge and threat are motivational states evoked in the interaction between the person and the situation, with affective, cognitive, and physiological antecedents and consequences.

Rooted in the IZOF model [9, 17], psychobiosocial states have been assessed in a number of studies conducted in the sport context [11, 18–21] and in the physical education setting [22–27]. Previous assessments were typically conducted using a list containing an earlier version of functional and dysfunctional descriptors targeting the psychobiosocial components (modalities) of the individual’s achievement experience. Each item represented a discrete state and was composed of two or three descriptors to convey a clear representation of one’s experience related to sport or physical education. In some studies, for example, functional and dysfunctional items for each modality were as follows: “happy” and “depressed” (affective modality); “certain” and “doubtful” (cognitive); “committed” and “disengaged” (motivational); “physically fresh” and “stiff muscles” (bodily-somatic); “active” and “clumsy” (motor-behavioral); “proficient” and “ineffective” (operational); “collaborative” and “lonely” (communicative). In a trait-like assessment, participants rated each item on a 5-point intensity scale thinking of how they usually felt within their sport or physical education context. In two previous studies on elite basketball players [28] and carom billiards players [29], a state-like assessment was implemented within 1 hr prior to competition.

Taken together, findings of the studies mentioned above show that the assessment of psychobiosocial states contributes to our understanding of relevant research questions in sport and physical education. For example, biological markers of precompetitive anxiety and activation (i.e., testosterone, cortisol, α-amylase, and chromogranin A) correlated with functional states of basketball players [28]. Furthermore, functional states mediated the effect of technical and cognitive self-efficacy on performance of carom billiards players [29]. In the physical education context [24], results showed that the achievement motivational climates induced by teachers tend to elicit in students states consistent with that motivational atmosphere, thus supporting the advantages of a task-involving climate in enhancing pleasant emotional experiences. The research benefits deriving from assessing psychobiosocial states highlight the need for a reliable and valid measure. Indeed, Bortoli and Robazza [22] examined the factor structure of the initial version of the psychobiosocial states scale in a large sample of 11- to 14-year-old physical education students. They showed a 2-factor solution (i.e., functional and dysfunctional dimensions) to be acceptable and reliable. The same 2-factor solution was found in an Italian sample of young athletes aged 13–14 years [19]. However, the volitional modality of psychobiosocial states, which was conceptualized later [10], was not included in these initial measures. Moreover, data on the factor structure and the concurrent validity of the scale for athletes have been missing.

Another limitation of earlier studies, with the exception of Robazza et al.’s [28] investigation, lies in the assessment of athletes’ descriptors in terms of intensity only. Several anxiety scholars have claimed that the intensity of anxiety symptoms is only one aspect of the anxiety response. Another main aspect to consider is the individual’s perception of facilitative or debilitative effects on performance [30, 31]. Thus, researchers have recommended including a functional impact (also called direction) scale to take into account the perceived impact of anxiety symptoms on performance [30, 31]. Together with the intensity and perceived impact dimensions, researchers have also advocated the inclusion of a frequency scale in anxiety measures, because of the distinct patterns of intensity and frequency of anxiety symptoms found in the precompetition period [32–34]. In response to this call, intensity, frequency, and perceived impact dimensions were incorporated in the Competitive State Anxiety Inventory-2 [5] for the assessment of cognitive and somatic symptoms of anxiety and self-confidence [35, 36].

Based on the above review of previous research, the purposes of the present investigation were twofold. In Study 1, we evaluated the construct validity and reliability of the intensity, frequency, and perceived impact dimension scores of a trait-like version of the psychobiosocial states scale (from now on referred to as the PBS-ST scale). In Study 2, we examined the invariance of the PBS-ST scale across samples, gender, and type of sport (individual vs. team). We also assessed the concurrent validity of the PBS-ST scale in comparison with other emotion- and affect-related measures. According to Messick [37] and Martinent, Guillet-Descas, and Moiret [38], construct validation involves at least substantive, structural, and external aspects. The substantive aspect refers to the theoretical rationale that delineates the construct under investigation. The structural aspect relates to providing evidence of factorial validity and reliability of the construct of interest. The external aspect refers to whether the investigated construct is related to other variables consistent with a theoretical foundation. In Study 1 and Study 2, we examined the structural and external aspects, respectively, given that the theoretical rationale of the scale has been addressed in previous research [8].

Method

Study 1

Study 1 was conducted to examine the item characteristics, the factor structure, and the reliability of the PBS-ST scale.

Participants.

Participants were 342 athletes (228 men, 114 women), aged 16 to 50 years (M = 23.93, SD = 6.64), drawn from major sport clubs located in central Italy. The study took place from February to April 2014. Two researchers approached the athletes on their sporting field or in their gym before regular practice sessions. The athletes’ drop-out rate was about 5%. The athletes who voluntarily agreed to take part in the study had between 1 and 30 years of competitive experience (M = 11.50, SD = 5.85) and represented a range of individual sports (n = 146; e.g., tennis, swimming, track & field, martial arts, gymnastics, cycling, fencing) and team sports (n = 196; e.g., soccer, volleyball, basketball, rugby, water polo, baseball, softball, futsal). Athletes competed at the regional (83%), national (12%), and international (5%) levels. No significant differences were found for age and sport experience between men and women or between individual and team sports (p > .05).

Measure.

The PBS-ST scale consists of 20 rows comprising 80 adjectives in total (3 to 6 per row) to assess eight state modalities (i.e., affective, cognitive, motivational, volitional, bodily-somatic, motor-behavioral, operational, and communicative). The Italian version of the PBS-ST was developed from the original English version of the Individualized Profiling of Psychobiosocial States [8] (see S1 Appendix). The English version contains 20 items comprising 74 adjectives (3–4 per row forming an item) targeting the 8 functional (+) and dysfunctional (-) modalities of a psychobiosocial state. In particular, the Affective modality is assessed by six rows of adjectives for pleasant/functional affect (+), pleasant/dysfunctional affect (-), unpleasant/functional anxiety (+), unpleasant/dysfunctional anxiety (-), functional anger (+), and dysfunctional anger (-). For each of the other seven modalities, two rows of synonymous adjectives assess functional or dysfunctional states. The intensity response dimension of each item was rated on a 5-point Likert scale ranging from 0 (not at all) to 4 (very, very much). The frequency dimension was also rated on a 5-point Likert scale anchored by 0 (never) and 4 (almost always), referring to the hour prior to competition. Finally, the perceived impact dimension was rated on a bipolar 7-point Likert scale ranging from −3 (very harmful) to +3 (very helpful), according to the individual’s perception of functional or dysfunctional effects on performance, with 0 indicating no effect. The scale was translated into Italian using the backward translation technique. Three Italian researchers fluent in English independently translated the scale from English into Italian. The researchers discussed their translations until they reached agreement on all adjectives. The translated scale was then retranslated into English by a native English speaker. The researchers checked the translated and retranslated texts to make sure they reflected the original meaning. Finally, consensus was reached on the Italian version of the scale.
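To make the rating structure concrete, the sketch below shows how one hypothetical athlete’s ratings on the three response dimensions could be organized and aggregated into functional and dysfunctional subscale means. This is an illustration only: the item labels, the choice of Python, and the aggregation by simple means are our assumptions, not the authors’ scoring procedure.

```python
# Illustrative sketch (not the authors' scoring code): one athlete's ratings on the three
# response dimensions (intensity 0-4, frequency 0-4, perceived impact -3..+3), aggregated
# into functional and dysfunctional subscale means. Item labels are invented examples.

from statistics import mean

ratings = {
    "confident (Cognitive+)":    {"functional": True,  "intensity": 3, "frequency": 3, "impact": 2},
    "motivated (Motivational+)": {"functional": True,  "intensity": 4, "frequency": 4, "impact": 3},
    "worried (Anxiety-)":        {"functional": False, "intensity": 2, "frequency": 3, "impact": -2},
    "sluggish (Motor-behav.-)":  {"functional": False, "intensity": 1, "frequency": 1, "impact": -1},
}

def subscale_mean(ratings, functional, dimension):
    """Mean rating on one dimension across the items of one subscale."""
    values = [r[dimension] for r in ratings.values() if r["functional"] == functional]
    return mean(values)

for dim in ("intensity", "frequency", "impact"):
    print(dim,
          "functional:", subscale_mean(ratings, True, dim),
          "dysfunctional:", subscale_mean(ratings, False, dim))
```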

Procedure.

Following approval from the ethics committee for biomedical research of the University of Chieti-Pescara, and in accordance with the Declaration of Helsinki, we contacted sport managers and coaches and explained the general purpose of the investigation to obtain authorization to approach the athletes. Prior to scale administration, participants were informed about the general purpose of the study and presented with instructions indicating that there were no right or wrong answers. They also received instructions designed to minimize social desirability bias, and emphasis was placed on the confidentiality of responses. Written informed consent approved by the ethics committee was obtained from participants or from parents in the case of participants under 18 years of age. Athletes completed the PBS-ST scale individually in a quiet room before regular practice sessions. An investigator administered the scale to groups of up to five participants who voluntarily took part in the study. Athletes were instructed to respond to the PBS-ST scale items referring to how they usually feel within the hour before an important competition. In particular, athletes were requested to choose one descriptor from each row that best reflected their experiences. Then, they were asked to score each descriptor with regard to its intensity, frequency, and perceived impact on performance.

Data Analysis.

We calculated the frequency of descriptors the athletes chose across each modality to identify the most- and least-often selected adjectives. Descriptive statistics, Pearson product-moment correlation coefficients, reliability alpha values, and composite reliability values of the latent variables were also computed.
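As a rough illustration of the two reliability indices just mentioned, the following sketch computes Cronbach’s alpha from an item-response matrix and composite reliability from standardized factor loadings. The formulas are the standard textbook ones; the data and loadings below are invented, not values from the study.

```python
# Minimal sketch of the reliability indices reported for the latent variables.
# Cronbach's alpha from raw item scores; composite reliability from standardized
# loadings assuming uncorrelated errors. Example data are simulated.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of ratings."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def composite_reliability(std_loadings) -> float:
    """Composite reliability from standardized factor loadings."""
    lam = np.asarray(std_loadings, dtype=float)
    error_var = 1 - lam ** 2
    return lam.sum() ** 2 / (lam.sum() ** 2 + error_var.sum())

rng = np.random.default_rng(0)
fake_items = rng.integers(0, 5, size=(100, 8))   # e.g., 8 functional items rated 0-4
print(round(cronbach_alpha(fake_items), 3))
print(round(composite_reliability([.72, .65, .58, .70, .61, .55, .68, .63]), 3))
```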

We examined the factorial validity of the PBS-ST scale, with items rated on the intensity, frequency, and perceived impact dimensions, using Exploratory Structural Equation Modeling (ESEM) [39] and Confirmatory Factor Analysis (CFA). ESEM is a strategy that allows for the integration of exploratory and confirmatory factor analysis within the same solution [40]. Unlike standard CFA, where cross-loadings are constrained to zero, in ESEM all factor loadings and cross-loadings are estimated, while a number of factors can be specified within a given measurement model. In addition, factor loading matrices can be rotated (we used a Bi-Geomin rotation in our analysis). ESEM models were estimated using the robust maximum likelihood estimator (MLR), while CFA models were estimated using maximum likelihood parameter estimation with standard errors and a mean-adjusted chi-square test statistic that are robust to non-normality (MLM) [41]. According to Byrne [42], the MLM estimator is most appropriate for continuous data that are non-normally distributed and complete. All data analyses were performed in Mplus version 7.31 [41].

Following the suggestions of several researchers [43, 44], different indices were chosen to assess model fit: chi-square (χ2), normed chi-square (χ2/df), comparative fit index (CFI), Tucker Lewis fit index (TLI), root mean square error of approximation (RMSEA), and standardized root mean square residual (SRMR). Values for CFI and TLI greater than .90, and RMSEA and SRMR lower than .08, are considered evidence of acceptable fit [45], while values for CFI and TLI close to .95, and RMSEA and SRMR lower than .05, are evidence of good fit [43]. Furthermore, a χ2/df value less than 5 indicates an acceptable model fit [46]. Akaike’s Information Criterion (AIC) values were included as a measure to compare the fit of alternative models. Improvements in model fit are reflected in higher values of CFI and TLI, and lower values of AIC, χ2, χ2/df, RMSEA, and SRMR.
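A small helper illustrating these cutoffs, together with the standard RMSEA point estimate computed from the model chi-square, might look as follows. It only evaluates reported indices against the criteria named above and is not part of the authors’ Mplus analysis; the numeric inputs are hypothetical.

```python
# Sketch: standard RMSEA point estimate and a check of the acceptable-fit cutoffs
# listed above (CFI/TLI > .90, RMSEA/SRMR < .08, chi-square/df < 5). Values are made up.

from math import sqrt

def rmsea(chi2: float, df: int, n: int) -> float:
    """RMSEA point estimate from the model chi-square, degrees of freedom, and sample size."""
    return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def fit_is_acceptable(chi2, df, cfi, tli, rmsea_value, srmr) -> bool:
    return (chi2 / df < 5 and cfi > .90 and tli > .90
            and rmsea_value < .08 and srmr < .08)

chi2, df, n = 210.4, 89, 340          # hypothetical model fitted to 340 athletes
print(round(rmsea(chi2, df, n), 3))   # about .063
print(fit_is_acceptable(chi2, df, cfi=.94, tli=.92,
                        rmsea_value=rmsea(chi2, df, n), srmr=.041))
```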

Results.

Descriptive statistics showed that all adjectives were selected by participants. The 10 most frequently chosen descriptors were: focused [Cognitive(+), 64.62%], motivated [Motivational(+), 64.62%], physically-charged [Bodily-somatic(+), 64.04%], worried [Anxiety(-), 61.40%], unmotivated [Motivational(-), 57.31%], nervous [Anxiety(+), 57.31%], uncommunicative [Communicative(-), 52.34%], sluggish movement [Motor-behavioral(-), 48.54%], overjoyed [Pleasant affective(-), 47.08%], and fighting spirit [Anger(+), 44.74%]. The 10 least frequently selected descriptors were: persistent [Volitional(+), 8.77%], purposeful [Volitional(+), 7.89%], apprehensive [Anxiety(-), 7.89%], undetermined [Volitional(-), 7.31%], carefree [Affective(+), 6.73%], discontented [Affective(+), 6.43%], annoyed [Anger(-), 6.14%], physically exhausted [Bodily-somatic(-), 5.26%], resentful [Anger(-), 5.26%], and joyful [Affective(+), 3.22%]. Descriptive statistics of items are reported in Table 1.

Table 1. Descriptive statistics of intensity, frequency, and perceived impact dimensions of psychobiosocial states from Study 1 (N = 340).

Note: (+) = item categorized as functional; (-) = item categorized as dysfunctional. M = mean, SD = standard deviation, SK = skewness, K = kurtosis.

https://doi.org/10.1371/journal.pone.0167448.t001

Before factor analysis, intensity, frequency, and perceived impact scores were screened for missing values, skewness, kurtosis, and multivariate outliers. No missing values were found. Two multivariate outliers were detected using Mahalanobis’ distance (p < .001) and were removed from the data set and subsequent analyses. Because the assumption of multivariate normal data distributions was violated, ESEM and CFA models were estimated using MLR and MLM, respectively.
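The screening step just described can be sketched as follows: squared Mahalanobis distances are compared against a chi-square criterion at p < .001 with degrees of freedom equal to the number of items. The data here are simulated and the snippet is only an illustration of the general procedure, not the authors’ code.

```python
# Sketch of the multivariate outlier screen: flag cases whose squared Mahalanobis
# distance exceeds the chi-square cutoff at p < .001 (df = number of variables).

import numpy as np
from scipy import stats

def mahalanobis_outliers(X: np.ndarray, alpha: float = .001) -> np.ndarray:
    """Return a boolean mask of rows flagged as multivariate outliers."""
    centered = X - X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", centered, inv_cov, centered)   # squared distances
    cutoff = stats.chi2.ppf(1 - alpha, df=X.shape[1])
    return d2 > cutoff

rng = np.random.default_rng(1)
data = rng.normal(size=(340, 20))       # e.g., 340 athletes x 20 intensity ratings
print(mahalanobis_outliers(data).sum(), "cases flagged")
```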

Three 2-factor models were tested through ESEM to assess independently the tenability of the intensity, frequency, and perceived impact dimensions. The three models included 20, 16, and 15 items, with problematic items progressively discarded. The decision to remove an item was based on a high modification index value (> 15) or a low factor loading (< .40) on the hypothesized latent factor. The final 15-item solution was also assessed using CFA, which is a more restrictive analysis than ESEM. This 2-factor, 15-item solution was also compared to an alternative 1-factor model in which all items loaded on a single factor.
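The item-removal rule stated above can be expressed as a simple filter. The loadings and modification indices in this toy example are invented for illustration; in the actual analyses these values came from the ESEM output.

```python
# Toy illustration of the stepwise item-removal rule: retain an item only if its loading
# on the hypothesized factor is at least .40 and no modification index exceeds 15.

items = {
    "Anxiety(+)":       {"loading": .21, "max_mod_index": 22.3},
    "Communicative(+)": {"loading": .34, "max_mod_index": 6.1},
    "Motivational(+)":  {"loading": .71, "max_mod_index": 4.8},
}

def keep_item(info, loading_cut=.40, mi_cut=15.0) -> bool:
    return info["loading"] >= loading_cut and info["max_mod_index"] <= mi_cut

retained = [name for name, info in items.items() if keep_item(info)]
print(retained)   # ['Motivational(+)']
```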

Factor analysis results are summarized in Table 2. The 20-item scale assessing the intensity, frequency, and perceived impact dimensions did not provide acceptable fit. Items of the Anxiety(+) and Pleasant affective(-) modalities did not load onto the expected factor. As can be seen from the mean scores of perceived impact (Table 1), the Anxiety(+) modality, measured by the “nervous, restless, discontented, dissatisfied” descriptors, was actually perceived as dysfunctional rather than functional. Similarly, the Pleasant affective(-) modality, represented by “overjoyed, complacent, pleased, satisfied”, was experienced as functional. Therefore, these two modalities were discarded from subsequent analyses. Furthermore, loadings of both the Communicative(+) and Communicative(-) modalities were below the cut-off criterion of .40, and these modalities were thus removed from further analysis. Although the fit of the 16-item scale improved (see Table 2), inspection of modification indices suggested that adding a path between the functional latent factor and the Anger(-) modality would substantially enhance the model fit. Rather than adding this path, we decided to exclude the Anger(-) modality in order to attain a clear distinction between functional and dysfunctional factors.

Table 2. Fit indices for the 2-factor models of the intensity, frequency, and perceived impact dimensions of the PBS-ST scale from Study 1.

Note: ESEM = Exploratory Structural Equation Modeling, CFA = Confirmatory Factor Analysis, χ2(df) = chi-square (degrees of freedom), CFI = comparative fit index, TLI = Tucker Lewis fit index, RMSEA = root mean square error of approximation, SRMR = standardized root mean square residual, AIC = Akaike’s Information Criterion. CFA on the 15 items unidimensional model of the perceived impact dimension did not converge.

https://doi.org/10.1371/journal.pone.0167448.t002

The final 15-item scale reached acceptable fit indices in both ESEM and CFA. The Satorra-Bentler scaled chi-square difference test (intensity, ΔS-B χ2 = 33.01, Δdf = 13, p = .002; frequency, ΔS-B χ2 = 26.76, Δdf = 13, p = .013; direction, ΔS-B χ2 = 24.48, Δdf = 13, p = .027) and the AIC values provided evidence of the superior fit of the 15-item scale compared to the 16-item scale. The 15-item model was compared to an alternative model in which all items loaded on a single factor (i.e., 15 items, unidimensional model). As expected, the unidimensional model showed a poor fit to the data. Fig 1 displays standardized factor loadings, error variances, and correlations between latent constructs (i.e., functional and dysfunctional psychobiosocial states) of the 15-item PBS-ST scale. All factor loadings were significant at p < .001 (two-tailed). Table 3 contains the means, standard deviations, Pearson product-moment correlation matrix, reliability alpha values, and composite reliability values for the latent variables.
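For readers unfamiliar with the scaled difference test, the sketch below follows the standard two-step Satorra-Bentler procedure as described in the Mplus documentation: a difference-test scaling correction is computed from the two models’ degrees of freedom and scaling correction factors, and the scaled chi-squares are then combined into the difference statistic. The inputs here are placeholders, not the values reported above, and the snippet is an assumed reconstruction rather than the authors’ code.

```python
# Sketch of the Satorra-Bentler scaled chi-square difference test for two nested models.
# t0, df0, c0: scaled chi-square, df, and scaling correction factor of the more restricted
# (nested) model; t1, df1, c1: the same for the less restricted comparison model.

from scipy import stats

def sb_scaled_chi2_diff(t0, df0, c0, t1, df1, c1):
    cd = (df0 * c0 - df1 * c1) / (df0 - df1)   # difference-test scaling correction
    trd = (t0 * c0 - t1 * c1) / cd             # scaled difference statistic
    ddf = df0 - df1
    return trd, ddf, stats.chi2.sf(trd, ddf)   # statistic, df difference, p value

# Placeholder values for illustration only
print(sb_scaled_chi2_diff(t0=310.2, df0=103, c0=1.21, t1=270.5, df1=89, c1=1.18))
```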

Fig 1. Standardized factor loadings, error variances, and correlations between latent constructs of the 15-item PBS-ST scale from Study 1 (bold; N = 340) and Study 2 (italic; N = 249) derived from confirmatory factor analysis.

All factor loadings are significant at p < .001 (two-tailed).

https://doi.org/10.1371/journal.pone.0167448.g001

Table 3. Means, standard deviations, and Pearson product-moment correlation matrix for the latent variables from Study 1.

Note: Cronbach’s alphas (left side) and composite reliability values (right side) appear on the diagonal.

https://doi.org/10.1371/journal.pone.0167448.t003

Study 2

The purpose of Study 2 was to cross-validate, in a second independent sample, the 2-factor, 15-item solution of the PBS-ST scale found in Study 1, and to examine possible gender and sport type (individual vs. team) differences in the responses. An additional objective was to explore concurrent validity through correlations with two other sport-specific emotion-related measures and a general measure of affect often used in sport psychology research.

Participants.

We recruited a voluntary sample of 251 athletes (181 men, 70 women), aged 16 to 52 years (M = 24.35, SD = 7.25), from sport clubs in central Italy. The study took place from February to April 2015. The drop-out rate was around 4%. The athletes had between 1 and 32 years of sport experience (M = 10.61, SD = 6.04) and practiced individual sports (n = 125) and team sports (n = 135) at the regional (80%), national (15%), and international (5%) levels. No significant differences were found for age and sport experience between men and women or between individual and team sports (p > .05).

Measures

The PBS-ST scale.

We used the 2-factor, 15-item solution of the PBS-ST scale found in Study 1 (see S1 Appendix) to assess the intensity, frequency, and perceived impact of functional and dysfunctional states.

The Sport Emotion Questionnaire (SEQ).

The SEQ [7] is a 22-item, sport-specific measure of precompetitive emotion derived from the experiences of athletes. Based on a categorical perspective of emotions, the SEQ assesses the intensity of anger (e.g., annoyed, irritated), anxiety (e.g., nervous, apprehensive), dejection (e.g., unhappy, disappointed), excitement (e.g., enthusiastic, energetic), and happiness (e.g., joyful, cheerful). Jones et al. [7] reported CFA values indicating that the 5-factor structure provided acceptable model fit. In a recent study [47], sport performers were required to rate how they had felt in their organizational environment during the past month. The results supported the validity and reliability of the SEQ, suggesting that the measure is conceptually well designed and relevant to sport performers. The scale was translated into Italian using the backward translation technique previously described (see the Measure section in Study 1). In our study, the instruction “how you feel right now, at this moment, in relation to the upcoming competition” [7] was modified to ask the athletes to refer to how they usually feel before an important competition. This change was made to align the directions with the PBS-ST scale instructions. As with the PBS-ST scale, and in line with previous research using the SEQ [48], we also included the Likert scales for intensity, frequency, and perceived impact dimension ratings.

The Positive and Negative Affect Schedule (PANAS).

Grounded in a dimensional approach to affective states, Watson, Clark, and Tellegen [49] developed the PANAS. The scale is a general measure of affect comprising two 10-item adjective checklist subscales named positive affect (e.g., enthusiastic, determined, active) and negative affect (e.g., afraid, distressed, hostile). The PANAS has often been applied in sport psychology to assess affect intensity [50–52]. Nicolas et al. [51] provided validity and reliability evidence for the PANAS measuring the intensity of affect and the perceived impact of affect on performance. This scale was also translated into Italian using the backward translation technique. Consistent with the assessment procedure adopted for the PBS-ST scale and the SEQ, we used the trait-like instructions and the same rating scales for the intensity, frequency, and perceived impact dimensions.

The Sport Performance Psychological Inventory (IPPS-48).

A 48-item inventory [53] was developed in Italian (Inventario Psicologico della Prestazione Sportiva; IPPS-48) to measure a range of mental skills and psychological strategies used by athletes in competition and during practice. The items pertain to eight factors, which are further grouped into cognitive and emotion higher-order factors. For the purposes of the current study, we administered only the items included in the emotion higher-order factor. The emotion higher-order factor comprises self-confidence (e.g., I am confident in my competitive abilities), emotional arousal control (e.g., I am able to relax and control tension when needed), worry (e.g., I feel panicked before competition), and concentration disruption (e.g., Attention wanders while competing). Athletes are asked to think about each item in terms of how frequently they have experienced the situations and feelings described. Items are rated on a 6-point Likert-type frequency scale ranging from 1 (never) to 6 (always). CFA showed that the IPPS-48 possesses a clear factorial structure, good reliability, and the ability to distinguish among athletes of different competitive levels [53].

Procedure.

All measures were administered following the procedure set out in Study 1 (i.e., institutional approval and administration of questionnaires). Athletes were asked to rate intensity, frequency, and perceived impact of items of the PBS-ST scale, SEQ, and PANAS referring to how they usually feel in the hour before an important competition. The items of the IPPS-48 were rated in the frequency dimension only, because of the structure of the item content (e.g., “feeling panicked” entails a high intensity of the experience) and the implicit impact on performance.

Data Analysis.

Following Study 1, ESEM and CFA were performed to assess the factorial validity of the intensity, frequency, and perceived impact dimensions of the 15-item PBS-ST scale. We also derived descriptive statistics, Pearson product-moment correlation coefficients, reliability alpha values, and composite reliability values of the latent variables.

To analyze the invariance of the scale across the two study samples, we conducted multi-group CFAs, adding parameter constraints one at a time. According to Brown [54], in CFA it is preferable to have groups balanced in size to attain reliable and readily interpretable results. Therefore, we selected a number of participants from Study 1, based on age, gender, sport type, and sport experience, who approximately matched the demographic features of the sample in Study 2. The final sample included 229 participants from Study 1 (161 men and 68 women, 94 involved in individual and 135 in team sports), and 249 participants from Study 2 (181 men and 68 women, 87 involved in individual and 162 in team sports). After establishing a separate baseline model for each group, several increasingly stringent models were assessed to test measurement and structural invariance [42, 55]. The sequence of models tested for measurement invariance involved four levels: configural (i.e., same number of factors and factor loading pattern across groups), weak metric (i.e., equality of factor loadings), strong metric (i.e., equality of factor loadings and intercepts), and strict metric (i.e., equality of error variances and covariances). Testing structural invariance entailed three steps: factor variance (i.e., equality of the variance of factor scores), factor covariance (i.e., equality of the covariance of factor scores), and factor mean (i.e., equality of latent means). The configural model provided a baseline value against which the subsequently specified models were compared. At each testing step, we used the likelihood ratio test for model comparison based on the Satorra-Bentler scaled chi-square difference (ΔS-B χ2) between models. We also computed the difference in CFI between models. A difference less than or equal to .01 between nested models was considered as a criterion of invariance [56].
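The ΔCFI decision rule can be illustrated with a minimal loop over the sequence of increasingly constrained models. The CFI values below are placeholders rather than the values reported in Table 6, and each model is compared with the preceding, less constrained one as a simplified stand-in for the nested-model comparisons described above.

```python
# Minimal sketch of the invariance decision rule: each successively constrained model is
# retained as invariant if the CFI drops by no more than .01 relative to the previous model.

models = [
    ("configural",         .955),
    ("weak metric",        .952),
    ("strong metric",      .948),
    ("strict metric",      .945),
    ("factor variances",   .944),
    ("factor covariances", .944),
    ("latent means",       .943),
]

for (prev_name, prev_cfi), (name, cfi) in zip(models, models[1:]):
    delta = prev_cfi - cfi
    verdict = "invariance supported" if delta <= .01 else "invariance questioned"
    print(f"{name:18s} dCFI = {delta:.3f} -> {verdict}")
```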

Invariance across gender and sport type (individual vs. team) was also examined for each dimension (i.e., intensity, frequency, and perceived impact) using multiple indicator, multiple cause (MIMIC) models, also called CFA with covariates [54]. We used MIMIC modeling instead of multi-group CFA because of the relatively unbalanced sample sizes by gender and sport (i.e., fewer women and individual-sport athletes in the sample than men and team-sport athletes). The gender and sport type covariates were dummy coded to represent group membership (i.e., woman = 0, man = 1; and individual sport = 0, team sport = 1, respectively). The latent variables and indicators were then regressed onto the covariates. Unlike multi-group CFA, MIMIC modeling can only test measurement invariance (indicator intercepts) and population heterogeneity (factor means). Although less flexible, MIMIC models allow more robust and parsimonious comparisons because fewer parameters are freely estimated in the analysis of a single covariance matrix and the sample size requirements are less restrictive.
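The dummy coding described above is straightforward to reproduce. In the sketch below, a simple observed-score regression on the coded covariates stands in for the latent regressions of the MIMIC model, which were estimated within the CFA in Mplus; the data, variable names, and simplification are our assumptions.

```python
# Illustration of the MIMIC covariate coding (woman = 0, man = 1; individual = 0, team = 1).
# As a simplified stand-in for regressing latent factors on the covariates, an observed
# subscale mean is regressed on the dummy codes here. Data are simulated.

import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "gender": rng.choice(["woman", "man"], size=200),
    "sport": rng.choice(["individual", "team"], size=200),
    "functional_intensity": rng.normal(2.5, 0.6, size=200),   # invented subscale means
})

df["gender_code"] = (df["gender"] == "man").astype(int)
df["sport_code"] = (df["sport"] == "team").astype(int)

X = np.column_stack([np.ones(len(df)), df["gender_code"], df["sport_code"]])
coefs, *_ = np.linalg.lstsq(X, df["functional_intensity"].to_numpy(), rcond=None)
print("intercept, gender effect, sport effect:", np.round(coefs, 3))
```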

After establishing the invariance of the scale across the two study samples, we conducted ESEM and CFA on the whole data set from both samples (Study 1 and 2) to assess the overall measurement model, which includes the intensity, frequency, and perceived impact dimensions. Finally, we ascertained the factorial validity of the intensity, frequency, and perceived impact dimensions of the SEQ and the PANAS, and of the frequency dimension of the IPPS-48, before examining their associations with the PBS-ST scale to determine concurrent validity.

Results.

In the data screening procedure, two multivariate outliers were identified using Mahalanobis’ distance (p < .001) and removed. ESEM and CFA results for the PBS-ST scale are reported in Table 4, while descriptive statistics, correlation coefficients, alpha values, and composite reliability values are contained in Table 5. Fig 1 shows standardized factor loadings, error variances, and correlations between latent constructs. Both ESEM and CFA yielded satisfactory fit indices for the 15-item PBS-ST scale, thereby confirming the tenability of the 2-factor, 15-item solution reached in Study 1.

Table 4. Fit indices for the 15-item, 2-factor models of the intensity, frequency, and perceived impact dimensions of the PBS-ST scale from Study 2.

Note: ESEM = Exploratory Structural Equation Modeling, CFA = Confirmatory Factor Analysis, χ2(df) = chi-square (degrees of freedom), CFI = comparative fit index, TLI = Tucker Lewis fit index, RMSEA = root mean square error of approximation, SRMR = standardized root mean square residual, AIC = Akaike’s Information Criterion.

https://doi.org/10.1371/journal.pone.0167448.t004

Table 5. Means, standard deviations, and Pearson product-moment correlation matrix for the latent variables from Study 2.

Note: Reliability alphas (left side) and composite reliability values (right side) appear on the diagonal.

https://doi.org/10.1371/journal.pone.0167448.t005

Multi-group CFAs and MIMIC analyses were then conducted to assess the invariance of the scale across the two study samples. Results are shown in Table 6. The configural multiple-sample CFA model fitted the data adequately, confirming that the PBS-ST scale had the same factor structure in both study groups. The ΔS-B χ2 tests between the configural and all other nested models were not significant. In addition, the difference in CFI values between the configural and the other models was equal to or less than .01. These findings provided evidence of substantial measurement and structural invariance of the three dimensions across samples. MIMIC results also indicated measurement equivalence on the three dimensions for gender and sport type. Inclusion of the gender or sport type covariates did not alter the factor structure or indicate differences in item responses (all modification indices < 4.0).

Table 6. Fit indices for multi-group confirmatory factor analyses of the intensity, frequency, and perceived impact dimensions of the PBS-ST scale.

Note: χ2(df) = chi-square (degrees of freedom), χ2/df = chi-square/degrees of freedom, CFI = comparative fit index, TLI = Tucker Lewis fit index, RMSEA = root mean square error of approximation, SRMR = standardized root mean square residual, ΔS-B χ2 (Δdf) = Satorra-Bentler scaled chi-square difference test (degrees of freedom difference).

https://doi.org/10.1371/journal.pone.0167448.t006

To assess the overall measurement model, we performed ESEM and CFA on data from both samples (N = 589). While the CFA did not converge, the ESEM analysis resulted in a convergent solution. Yet, the model showed a poor fit to the data, χ2/df = 3.058, CFI = .706, TLI = .635, RMSEA = .066 (90% CI = .063–.068), SRMR = .044. A review of modification indices indicated large residual covariances between items of the intensity dimension and the respective items of the frequency dimension. This result most likely reflects the high degree of overlap between the item scores of the intensity and frequency dimensions, which is also apparent in the high correlation values (Tables 3 and 5). In light of this evidence [42], we judged it sound to respecify the model to include correlated residuals between items of the two dimensions. The respecified model yielded acceptable fit to the data, χ2/df = 1.647, CFI = .943, TLI = .919, RMSEA = .037 (90% CI = .033–.041), SRMR = .030.

Before examining concurrent validity, we ascertained the factorial validity of the measures. CFA and reliability indices for the intensity, frequency, and perceived impact dimensions of the SEQ and the PANAS, and for the frequency dimension of the IPPS-48, provided evidence of acceptable factorial validity and reliability (see Table 7). Specifically, we found support for the hypothesized 5-factor structure of the SEQ and the 4-factor structure of the IPPS-48. The 2-factor structure of the PANAS was also supported after specification of correlated residual terms for two items on the Positive Affect subscale (alert and attentive) and four items on the Negative Affect subscale (nervous and jittery, afraid and scared). In line with Nicolas et al.’s [51] contentions, the procedure of including correlated error terms among redundant items of the same affect category is justified because Watson et al.’s [49] scale used in the current study did not incorporate the content categories originally proposed by Zevon and Tellegen [57] into the factor structure of the PANAS. Thus, theoretically meaningful latent variables may have been erroneously omitted from the original model [51].

Table 7. Confirmatory factor analysis fit indices, reliability alpha range, and composite reliability range of the SEQ, the PANAS, and the IPPS-48 dimensions from Study 2.

Note: χ2(df) = chi-square (degrees of freedom), CFI = comparative fit index, TLI = Tucker Lewis fit index, RMSEA = root mean square error of approximation, SRMR = standardized root mean square residual. ¹ Two correlated errors on the Positive Affect subscale, and four correlated errors on the Negative Affect subscale of the PANAS.

https://doi.org/10.1371/journal.pone.0167448.t007

Concurrent validity of the PBS-ST scale was examined via latent factor correlations between the scale and the criterion-related measures. We followed the guidelines provided by Zhu [58] to interpret the effect sizes of the correlation coefficients. Correlations of the intensity, frequency, and perceived impact dimensions of the two PBS-ST subscales (i.e., functional and dysfunctional) with the five SEQ subscales (i.e., anger, anxiety, dejection, excitement, and happiness), the two PANAS subscales (i.e., positive and negative affect), and the four IPPS-48 subscales (i.e., self-confidence, emotional arousal control, worry, and concentration disruption) were low, ranging from -.138 to .283 (SEQ), from -.131 to .285 (PANAS), and from -.198 to .238 (IPPS-48). These results suggest that the PBS-ST scale taps unique constructs.

Discussion

The purpose of the study was to evaluate the validity and reliability of the intensity, frequency, and perceived impact dimensions of the Italian version of the PBS-ST scale. The items of the scale are those proposed for the individualized profiling of high-level athletes [8]. Although non-standardized scales measuring athletes’ psychobiosocial states have been used for research and applied purposes in sport [28, 29] and physical education [24] settings, the validity and reliability of these scales had not previously been established.

At a group level, participants selected all adjectives representing the eight modalities of a psychobiosocial state (i.e., affective, cognitive, motivational, volitional, bodily, motor-behavioral, operational, and communicative), thus supporting the relevance of the descriptors and the contention that athletes’ descriptions of their states reflect emotion and non-emotion content [21, 59]. At a descriptive level, mean intensity and frequency scores of items in the functional subscale were generally higher than those of items in the dysfunctional subscale (Table 1). As expected, functional items were perceived as helpful for performance, while dysfunctional items were perceived as harmful, with the exception of three items pertaining to the Anxiety(+), Pleasant affective(-), and Communicative(-) modalities, which showed reverse effects.

Adequate factorial validity was observed on all dimensions for a 2-factor solution comprising 15 items, 8 functional and 7 dysfunctional. ESEM and CFA on the initial 20-item scale [8] did not provide acceptable fit to the data. Therefore, in a first step, two items were discarded because they did not load on the expected factor. According to the IZOF conceptualization [2, 9], these items are purported to measure an unpleasant/functional state [i.e., Anxiety(+)] and a pleasant/dysfunctional state [i.e., Pleasant affective(-) modality]. In the IZOF model, indeed, emotion content is construed based on the 2 × 2 interplay between hedonic valence (i.e., pleasant or unpleasant experience) and functionality (functional or dysfunctional effects on performance). This interaction leads to four global emotion content categories: (a) pleasant-functional, (b) unpleasant-functional, (c) pleasant-dysfunctional, and (d) unpleasant-dysfunctional. It is assumed that pleasant-functional states (e.g., feeling enthusiastic and confident) help the performer generate and use energy to sustain effort and coordination for task execution, while unpleasant-functional states (e.g., feeling nervous and dissatisfied) mainly serve to energize behavior toward task accomplishment. Furthermore, pleasant-dysfunctional states (e.g., feeling complacent and satisfied) are contended to reflect a lack of energy or ineffective resource recruitment and utilization, while unpleasant-dysfunctional states (e.g., feeling worried and apprehensive) are thought to waste energy by diverting the individual’s resources toward task-irrelevant cues. In this conceptual framework, the athletes’ interpretation of their states is crucial for the development of knowledge and beliefs about their performance. In particular, athletes interpret their conditions as pleasant or unpleasant, and as functional or dysfunctional for performance, depending not only on emotion intensity but also on their knowledge, attitudes, preferences, or rejections of the experiences. These meta-experiences result from individuals’ spontaneous or deliberate reflection on conditions leading to success or failure, which consequently contributes to knowledge and beliefs about their own experiences [59–61]. For instance, an athlete may perceive that high anxiety benefits competitive performance because specific symptoms, such as increased heart rate, are helpful in energizing behavior and directing attention to the task. On the other hand, feeling complacent and satisfied before competition can lead to ineffective investment of energy and unfocused attention, and therefore be perceived by the athlete as harmful. This knowledge, when repeatedly confirmed, leads to a positive attitude toward anxiety and a negative attitude toward complacency, which are then interpreted as indicators of readiness for competition or of a lack of energy and focus.

Findings from idiographic assessments of psychobiosocial states using the same items as the PBS-ST scale support the above contentions [8]. In contrast to the results from an individualized assessment approach, the group-oriented results of our study do not support the use of the Anxiety(+) and Pleasant affective(-) modalities in a nomothetic format as indicators of functional and dysfunctional states, respectively. As previously observed, the Anxiety(+) and Pleasant affective(-) modalities showed effects opposite to those expected. This may be due to the athletes’ unstructured meta-experiences and to the attribution of emotion effects based on hedonic valence, according to a common belief that unpleasant states are always harmful for performance and that pleasant states are always helpful. The Communicative(-) modality also showed effects opposite to those expected, in that mean scores of perceived impact were positive rather than negative. Furthermore, the Communicative(+) modality loaded poorly on the expected factor, and therefore both communicative modalities were discarded. According to accounts of individual experiences [8], communicative processes are highly idiosyncratic. Some athletes withdraw from others to attain an optimal attentional focus, avoid distractions, and cope with competitive stress, whereas other athletes seek support from the coach and peers and thus display communicative and social behavior. Therefore, the “communicative, outgoing, sociable, connected” and “uncommunicative, withdrawn, alone, disconnected” adjectives in the PBS-ST scale likely reflect different precompetitive behaviors that can be regarded as helpful or harmful depending on the individual.

As a final step in the factor analysis, we also decided to remove the Anger(-) modality from the PBS-ST scale to avoid cross-loadings between latent variables and thereby keep a clear distinction between functional and dysfunctional factors. Adjectives of the Anger(+) modality (i.e., fighting spirit, fierce, aggressive) were included in the functional subscale in line with findings of previous research indicating that anger can play an important role in the generation or mobilization of energy [21, 62, 63]. In collision and combat sports in particular, such as rugby and karate, athletes can experience properly harnessed anger symptoms as necessary to increase effort toward task achievement and to outperform the opponent [64–66]. From this perspective, the adjectives purported to represent the Anger(-) modality (i.e., furious, resentful, irritated, annoyed) may also be interpreted as helpful rather than harmful, thereby accounting for the cross-loadings observed between latent constructs.

The 2-factor, 15-item PBS-ST scale showed adequate factorial validity on the three dimensions (i.e., intensity, frequency, and perceived impact). Notably, the intensity and frequency latent dimensions were highly related in both Study 1 (functional intensity and frequency, r = .828; dysfunctional intensity and frequency, r = .957; see Table 3) and Study 2 (functional intensity and frequency, r = .766; dysfunctional intensity and frequency, r = .973; see Table 5). These high correlation coefficients may have resulted from the trait-like format of the scale, whereby participants were asked to recall how they usually felt before competition. They may have found it difficult to distinguish the intensity from the frequency of recalled psychobiosocial states. An alternative explanation may lie in the intensity and frequency of psychobiosocial states following a similar pattern just prior to competition. In either case, in future studies researchers may decide to use the PBS-ST scale in a more parsimonious manner by measuring either the intensity or the frequency dimension. In contrast, the relations of the intensity and frequency dimensions to perceived impact ranged from very low to moderate, and therefore assessing perceived impact could contribute to our understanding of the individual’s experience. However, further research is needed to investigate the trend of psychobiosocial states over time, in line with research on anxiety that has demonstrated different patterns of intensity and frequency of anxiety symptoms in the time (e.g., a week) leading up to competition [32, 67]. Of note, Thomas, Picknell, and Hanton [68] compared performers’ actual and recalled responses to the Competitive State Anxiety Inventory-2 [5] at precompetition and postcompetition intervals. Memory of the frequency of competitive anxiety symptoms was generally more reliable than memory of their intensity, and athletes were more attuned to frequency than to intensity. This evidence led Thomas et al. [68] to argue that awareness of the frequency of symptoms may act as a precursor of increasing anxiety levels, and that frequency may reflect experienced symptoms more accurately when emotional accounts are recalled. Accordingly, they recommended that practitioners consider frequency in addition to intensity to help performers cope more effectively with anxiety in the time preceding competition.

A second purpose of the study was to examine the invariance of the PBS-ST scale across independent samples, gender, and type of sport, and to assess the concurrent validity of the PBS-ST scale. Findings provided support for substantial measurement and structural invariance on the three dimensions across samples. Measurement equivalence was also found on the three dimensions for gender and sport type. These results indicate that the PBS-ST scale can be used in the assessment of psychobiosocial states, allowing unbiased comparison of scores across samples, male and female athletes, and individual and team sports. Concurrent validity results showed low associations of the PBS-ST scale with sport-specific emotions (i.e., anger, anxiety, dejection, excitement, and happiness), sport-specific emotion-related constructs (i.e., self-confidence, emotional arousal control, worry, and concentration disruption), and a global measure of positive and negative affect. The low relationships suggest that the PBS-ST scale gauges athletes’ functional and dysfunctional states not assessed by the other scales administered in this study. Grounded in the IZOF model [2, 9], the items included in the PBS-ST scale have been found relevant in tapping the athletes’ holistic experience across a wide range of sports [27].

In conclusion, our study offers initial validity evidence for a sport-specific tool to measure psychobiosocial states. The 2-factor structure and the internal consistency reliability were confirmed, and the concurrent validity results suggest that the scale gauges unique constructs. Future research should determine the extent to which the scale’s validity generalizes across samples of different cultures, ages, competitive levels, and sport experience. Furthermore, the instrument was used to measure trait-like aspects of psychobiosocial states based on athletes’ retrospective reports of how they typically respond. Future studies could employ a situational (state-like) version of the scale using a right-now timeframe, and examine psychobiosocial states prior to, during, or after performance, or after interventions aimed at improving the individual’s conditions leading to best achievements [69–71]. Criterion validity of the scale should also be determined in comparison with additional subjective measures of emotions, objective measures of individual states, such as behavioral, biological, and neural markers [28, 72, 73], and performance criteria [29]. Overall, our results encourage the use of the PBS-ST scale and, more generally, the assessment of psychobiosocial states in the sport setting.

Supporting Information

S1 Appendix. The Psychobiosocial Items of the PBS-ST scale and the Corresponding Italian Translation.

https://doi.org/10.1371/journal.pone.0167448.s001

(PDF)

Author Contributions

  1. Conceptualization: CR MB MR LB.
  2. Formal analysis: CR MR.
  3. Investigation: CR MB LB.
  4. Methodology: CR MB MR LB.
  5. Resources: CR MR.
  6. Validation: CR MB MR LB.
  7. Writing – original draft: CR MB MR LB.
  8. Writing – review & editing: CR MB MR LB.

References

  1. 1. Friesen AP, Lane AM, Devonport TJ, Sellars CN, Stanley DN, Beedie CJ. Emotion in sport: considering interpersonal regulation strategies. Int Rev Sport Exer P. 2013;6:139–54.
  2. 2. Hanin YL, editor. Emotions in sport. Champaign, IL: Human Kinetics; 2000.
  3. 3. Jones MV. Controlling emotions in sport. Sport Psychol. 2003;17:471–86.
  4. 4. Lazarus RS. How emotions influence performance in competitive sports. Sport Psychol. 2000;14:229–52.
  5. 5. Martens R, Burton D, Vealey RS, Bump LA, Smith DE. Development and validation of the Competitive State Anxiety Inventory-2. In: Martens R, Vealey RS, Burton D, editors. Competitive anxiety in sport. Champaign, IL: Human Kinetics; 1990. p. 117–190.
  6. 6. Smith RE, Smoll FL, Schutz RW. Measurement and correlates of sport-specific cognitive and somatic trait anxiety: the Sport Anxiety Scale. Anxiety Res. 1990;2:263–80.
  7. 7. Jones MV, Lane AM, Bray SR, Uphill M, Catlin J. Development and validation of the sport emotion questionnaire. J Sport Exercise Psy. 2005;27:407–31.
  8. 8. Ruiz MC, Hanin YL, Robazza C. Assessment of performance-related experiences: An individualized approach. Sport Psychol. Forthcoming 2016.
  9. 9. Hanin YL. Emotions in sport: current issues and perspectives. In: Tenenbaum G, Eklund RC, editors. Handbook of sport psychology. 3rd ed. Hoboken, NJ: John Wiley & Sons; 2007. p. 31–58.
  10. 10. Hanin YL. Coping with anxiety in sport. In: Nicholls A, editor. Coping in sport: theory, methods, and related constructs. New York: Nova Science Publishers; 2010. p. 159–175.
  11. 11. Ruiz MC, Hanin YL. Athletes’ self-perceptions of optimal states in karate. Rev Psicol Deporte. 2004;13:229–44.
  12. 12. Pekrun R. The control-value theory of achievement emotions: assumptions, corollaries, and implications for educational research and practice. Educ Psychol Rev. 2006;18:315–41.
  13. 13. Pekrun R, Goetz T, Frenzel AC, Barchfeld P, Perry RP. Measuring emotions in students' learning and performance: the Achievement Emotions Questionnaire (AEQ). Contemp Educ Psychol. 2011;36:36–48.
  14. 14. Huang CJ. Achievement goals and achievement emotions: a meta-analysis. Educ Psychol Rev. 2011;23:359–88.
  15. 15. Blascovich J. Challenge and threat. In: Elliot AJ, editor. Handbook of approach and avoidance motivation. New York, NY: Psychology Press; 2008. p. 431–445.
  16. 16. Blascovich J, Tomaka J. (1996). The biopsychosocial model of arousal regulation. Adv Exp Soc Psychol, 28, 1–51.
  17. 17. Hanin J, Ekkekakis P. Emotions in sport and exercise settings. In: Papaioannou AG, Hackfort D, editors. Routledge companion to sport and exercise psychology: global perspectives and fundamental concepts. New York, NY: Routledge; 2014. p. 83–104.
18. Bortoli L, Bertollo M, Comani S, Robazza C. Competence, achievement goals, motivational climate, and pleasant psychobiosocial states in youth sport. J Sports Sci. 2011;29:171–80. pmid:21113845
19. Bortoli L, Bertollo M, Robazza C. Dispositional goal orientations, motivational climate, and psychobiosocial states in youth sport. Pers Indiv Differ. 2009;47:18–24.
20. Bortoli L, Messina G, Zorba M, Robazza C. Contextual and individual influences on antisocial behaviour and psychobiosocial states of youth soccer players. Psychol Sport Exerc. 2012;13:397–406.
21. Ruiz MC, Hanin YL. Metaphoric description and individualized emotion profiling of performance related states in high-level karate athletes. J Appl Sport Psychol. 2004;16:1–16.
22. Bortoli L, Robazza C. Dispositional goal orientations, motivational climate, and psychobiosocial states in physical education. In: Chiang LA, editor. Motivation of exercise and physical activity. New York, NY: Nova Science; 2007. p. 119–133.
23. Bortoli L, Bertollo M, Filho E, Robazza C. Do psychobiosocial states mediate the relationship between perceived motivational climate and individual motivation in youngsters? J Sports Sci. 2014;32:572–82. pmid:24073933
24. Bortoli L, Bertollo M, Vitali F, Filho E, Robazza C. The effects of motivational climate interventions on psychobiosocial states in high school physical education. Res Q Exerc Sport. 2015;86:196–204. pmid:25675270
25. Robazza C, Bortoli L. Changing students’ attitudes towards risky motor tasks: an application of the IZOF model. J Sports Sci. 2005;23:1075–88. pmid:16194984
26. Robazza C, Bortoli L, Carraro A, Bertollo M. “I wouldn’t do it; it looks dangerous”: changing students’ attitudes and emotions in physical education. Pers Indiv Differ. 2006;41:767–77.
27. Ruiz MC, Raglin J, Hanin YL. The individual zones of optimal functioning (IZOF) model (1978–2014): historical overview of its development and use. Int J Sport Exercise Psy. Forthcoming 2016.
28. Robazza C, Gallina S, D’Amico MA, Izzicupo P, Bascelli A, Di Fonso A, et al. Relationship between biological markers and psychological states in elite basketball players across a competitive season. Psychol Sport Exerc. 2012;13:509–17.
29. Di Corrado D, Vitali F, Robazza C, Bortoli L. Self-efficacy, emotional states, and performance in carom billiards. Percept Mot Skills. 2015;121:14–25. pmid:26226286
30. Jones G. More than just a game: research developments and issues in competitive anxiety in sport. Brit J Psychol. 1995;86:449–78. pmid:8542197
31. Jones G, Hanton S. Pre-competitive feeling states and directional anxiety interpretations. J Sports Sci. 2001;19:385–95. pmid:11411775
32. Swain A, Jones G. Intensity and frequency dimensions of competitive state anxiety. J Sports Sci. 1993;11:533–42. pmid:8114179
33. Mellalieu SD, Hanton S, Fletcher D. A competitive anxiety review: recent directions in sport psychology research. In: Hanton S, Mellalieu SD, editors. Literature reviews in sport psychology. New York, NY: Nova Science; 2006. p. 1–45.
34. Wagstaff CRD, Neil R, Mellalieu SD, Hanton S. Key movements in directional research in competitive anxiety. Routledge Online Studies on the Olympic and Paralympic Games. 2012;1:143–66.
35. Fernandes MG, Nunes SA, Raposo JV, Fernandes HM, Brustad R. The CSAI-2: an examination of the instrument’s factorial validity and reliability of the intensity, direction and frequency dimensions with Brazilian athletes. J Appl Sport Psychol. 2013;25:377–91.
36. Martinent G, Ferrand C, Guillet E, Gautheur S. Validation of the French version of the Competitive State Anxiety Inventory-2 Revised (CSAI-2R) including frequency and direction scales. Psychol Sport Exerc. 2010;11:51–7.
37. Messick S. Validity of psychological assessment: validation of inferences from persons’ responses and performances as scientific inquiry into score meaning. Am Psychol. 1995;50:741–9.
38. Martinent G, Guillet-Descas E, Moiret S. Reliability and validity evidence for the French Psychological Need Thwarting Scale (PNTS) scores: significance of a distinction between thwarting and satisfaction of basic psychological needs. Psychol Sport Exerc. 2015;20:29–39.
39. Asparouhov T, Muthén B. Exploratory structural equation modeling. Struct Equ Modeling. 2009;16:397–438.
40. Wiesner M, Schanding GT. Exploratory structural equation modeling, bifactor models, and standard confirmatory factor analysis models: application to the BASC-2 behavioral and emotional screening system teacher form. J School Psychol. 2013;51:751–63.
41. Muthén LK, Muthén BO. Mplus user’s guide. 7th ed. Los Angeles, CA: Muthén & Muthén; 2012.
42. Byrne BM. Structural equation modeling with Mplus: basic concepts, applications, and programming. New York, NY: Routledge; 2012.
43. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Modeling. 1999;6:1–55.
44. MacCallum RC, Austin JT. Applications of structural equation modeling in psychological research. Annu Rev Psychol. 2000;51:201–26. pmid:10751970
45. Browne MW, Cudeck R. Alternative ways of assessing model fit. In: Bollen KA, Long JS, editors. Testing structural equation models. Beverly Hills, CA: Sage; 1993. p. 136–162.
46. Schumacker RE, Lomax RG. A beginner’s guide to structural equation modeling. 2nd ed. Mahwah, NJ: Lawrence Erlbaum Associates; 2004.
47. Arnold R, Fletcher D. Confirmatory factor analysis of the Sport Emotion Questionnaire in organisational environments. J Sports Sci. 2015;33:169–79. pmid:25375248
48. Uphill MA, Lane AM, Jones MV. Emotion Regulation Questionnaire for use with athletes. Psychol Sport Exerc. 2012;13:761–70.
49. Watson D, Clark LA, Tellegen A. Development and validation of brief measures of positive and negative affect: the PANAS scales. J Pers Soc Psychol. 1988;54:1063–70. pmid:3397865
50. Crocker PRE. A confirmatory factor analysis of the Positive Affect Negative Affect Schedule (PANAS) with a youth sport sample. J Sport Exercise Psy. 1997;19:91–7.
51. Nicolas M, Martinent G, Campo M. Evaluation of the psychometric properties of a modified Positive and Negative Affect Schedule including a direction scale (PANAS-D) among French athletes. Psychol Sport Exerc. 2014;15:227–37.
52. Robazza C, Bortoli L, Nocini F, Moser G, Arslan C. Normative and idiosyncratic measures of positive and negative affect in sport. Psychol Sport Exerc. 2000;1:103–16.
53. Robazza C, Bortoli L, Gramaccioni G. L’Inventario Psicologico della Prestazione Sportiva (IPPS-48) [The Sport Performance Psychological Inventory]. Giornale Italiano di Psicologia dello Sport. 2009;4:14–20.
54. Brown T. Confirmatory factor analysis for applied research. 2nd ed. New York, NY: The Guilford Press; 2015.
55. Farmer AY, Farmer GL. Research with diverse groups: research designs and multivariate latent modeling for equivalence. New York, NY: Oxford University Press; 2014.
56. Cheung GW, Rensvold RB. Evaluating goodness-of-fit indexes for testing measurement invariance. Struct Equ Modeling. 2002;9:233–55.
57. Zevon MA, Tellegen A. The structure of mood change: an idiographic/nomothetic analysis. J Pers Soc Psychol. 1982;43:111–22.
58. Zhu W. Sadly, the earth is still round (p < 0.05). J Sport Health Sci. 2012;1:9–11.
59. Hanin YL, Stambulova NB. Metaphoric description of performance states: an application of the IZOF model. Sport Psychol. 2002;16:396–415.
60. Nieuwenhuys A, Hanin YL, Bakker FC. Performance-related experiences and coping during races: a case of an elite sailor. Psychol Sport Exerc. 2008;9:61–76.
61. Nieuwenhuys A, Vos L, Pijpstra S, Bakker FC. Meta-experiences and coping effectiveness in sport. Psychol Sport Exerc. 2011;12:135–43.
62. Campo M, Mellalieu S, Ferrand C, Martinent G, Rosnet E. Emotions in team contact sports: a systematic review. Sport Psychol. 2012;26:62–97.
63. Woodman T, Davis PA, Hardy L, Callow N, Glasscock I, Yuill-Proctor J. Emotions and sport performance: an exploration of happiness, hope, and anger. J Sport Exercise Psy. 2009;31:169–88.
64. Robazza C, Bertollo M, Bortoli L. Frequency and direction of competitive anger in contact sports. J Sport Med Phys Fit. 2006;46:501–8.
65. Robazza C, Bortoli L. Perceived impact of anger and anxiety on sporting performance in rugby players. Psychol Sport Exerc. 2007;8:875–96.
66. Ruiz MC, Hanin YL. Perceived impact of anger on performance of skilled karate athletes. Psychol Sport Exerc. 2011;12:242–49.
67. Hanton S, Thomas O, Maynard I. Competitive anxiety responses in the week leading up to competition: the role of intensity, direction and frequency dimensions. Psychol Sport Exerc. 2004;5:169–81.
68. Thomas O, Picknell G, Hanton S. Recall agreement between actual and retrospective reports of competitive anxiety: a comparison of intensity and frequency dimensions. J Sports Sci. 2011;29:495–508. pmid:21279866
69. Harmison RJ. Peak performance in sport: identifying ideal performance states and developing athletes’ psychological skills. Prof Psychol: Res Pract. 2006;37:233–43.
70. Robazza C, Pellizzari M, Hanin Y. Emotion self-regulation and athletic performance: an application of the IZOF model. Psychol Sport Exerc. 2004;5:379–404.
71. Woodcock C, Cumming J, Duda JL, Sharp L-A. Working within an Individual Zone of Optimal Functioning (IZOF) framework: consultant practice and athlete reflections on refining emotion regulation skills. Psychol Sport Exerc. 2012;13:291–302.
72. Bertollo M, Bortoli L, Gramaccioni G, Hanin Y, Comani S, Robazza C. Behavioural and psychophysiological correlates of athletic performance: a test of the multi-action plan model. Appl Psychophys Biof. 2013;38:91–9.
73. di Fronso S, Robazza C, Filho E, Bortoli L, Comani S, Bertollo M. Neural markers of performance states in an Olympic athlete: an EEG case study in air-pistol shooting. J Sport Sci Med. 2016;15:214–22.