Abstract
Problematic smartphone use (PSU) is a global issue associated with numerous adverse outcomes, especially among young people. One of the most widely used instruments to evaluate PSU is the Smartphone Addiction Scale-Short Version (SAS-SV). This study examined the psychometric properties of the Spanish version of the SAS-SV, including factorial validity, convergent validity, divergent validity, and reliability. The final sample comprised 530 students from a university in Honduras. Confirmatory factor analysis provided evidence supporting the validity of the instrument’s internal structure. Reliability was estimated using McDonald’s omega and Cronbach’s alpha coefficients. Convergent validity was assessed through correlations with problematic Internet use, depression, anxiety, and stress. Measurement invariance tests were conducted across sex and age categories. The results indicated that the SAS-SV adequately fits a reliable one-dimensional model and demonstrates measurement equivalence across sex and age groups. Finally, the SAS-SV showed strong correlations with problematic Internet use, depression, anxiety, and stress. These findings support the SAS-SV as a valid and reliable instrument for examining PSU among university students in Honduras.
Citation: Hidalgo-Fuentes S, Martínez-Álvarez I, Llamas-Salguero F, Pineda-Zelaya IS, Merino-Soto C, Chans GM (2025) Psychometric properties of the smartphone addiction scale-short version (SAS-SV) in Honduran university students. PLoS One 20(7): e0327226. https://doi.org/10.1371/journal.pone.0327226
Editor: Frantisek Sudzina, Aalborg University, DENMARK
Received: October 12, 2024; Accepted: June 11, 2025; Published: July 31, 2025
Copyright: © 2025 Hidalgo-Fuentes et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The data that support the findings of this study are openly available in Tecnologico de Monterrey Data Hub at https://doi.org/10.57687/FK2/DCVIJU.
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Since the iPhone’s launch in 2007, the popularity of smartphones has grown extraordinarily, making them the most widespread digital device in the global market, significantly surpassing other devices such as laptops and tablets [1]. Unlike traditional mobile phones, which were limited to calls and SMS messages, smartphones offer a wide range of features, including email, social networking, video games, and Internet browsing. It is estimated that there are 4.74 billion smartphone users worldwide [2,3], a figure that continues to grow annually, especially in developing countries, where many people access the Internet exclusively through this device [4].
Despite the benefits of smartphone use, excessive or uncontrolled use, known as problematic smartphone use (PSU), has been linked to various negative consequences in personal, social, and academic spheres [5–7]. These adverse effects include anxiety [8], depression [9], stress [10], musculoskeletal pain [11], poor sleep quality [12], loneliness [13], academic procrastination [14], poor academic performance [15,16], and generally lower perceived well-being and quality of life [17].
In addition to these various harmful consequences, PSU is highly prevalent, affecting 26.99% of the general population [18], with rates significantly increasing in recent years [19]. This rate is considerably higher in low- or middle-income countries, reaching 43.84% [18]. A recent systematic review by Candussi et al. [20] reveals that PSU is especially prevalent among university students, with rates as high as 67%, underscoring the need for further research in this population.
Although some recent studies have found no difference in PSU prevalence by sex [9,15], numerous others have reported higher rates among women [21–25]. However, it is clear that men and women exhibit different patterns of smartphone use: women tend to focus on social and communication functions, while men are more likely to use smartphones for gaming and Internet browsing [26]. These findings suggest that even if differences in overall PSU levels between men and women are not always observed, distinct usage patterns exist. These observed disparities justify the need for gender-based measurement invariance analysis to determine whether the PSU model is applicable and consistent across both populations.
With respect to age, studies indicate that PSU prevalence varies significantly across age groups, being higher among young adults and university students compared to adolescents and children [18]. These findings highlight age as a relevant factor in understanding differences in problematic smartphone use. The variability in usage patterns across life stages points to the importance of conducting age-based measurement invariance analyses [27], to evaluate whether the factors associated with PSU and the model’s structure remain consistent across different age groups. Such analyses provide a more nuanced understanding of how demographic variables influence PSU and offer robust evidence regarding the model’s validity across age groups.
Given its high prevalence and associated negative outcomes, it is relevant to discuss how PSU is conceptualized and defined. Although PSU is often referred to as “smartphone addiction” in the scientific literature due to its similarities to behavioral and substance addictions [28,29], some authors recommend using the term “problematic smartphone use” (PSU), arguing that there is insufficient evidence to classify this behavior as an addictive disorder and that the term “addiction” can lead to the pathologization and stigmatization of smartphone use [30–32].
PSU is not listed as a disorder in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) [33] or the International Classification of Diseases (ICD-11) [34], although criteria for its diagnosis have been proposed [35,36]. Nevertheless, PSU has captured the attention of the scientific community and healthcare professionals due to its serious negative consequences. Studies on this issue have increased significantly in recent years [37,38], and the World Health Organization considers it a public health problem [39].
The high prevalence of PSU, especially among university students and in low-income countries [40], and its numerous negative consequences make it necessary to have tools for its early detection. A meta-analysis by Harris et al. [41] identified 78 validated scales to assess PSU but concluded that many lacked adequate psychometric properties.
The Smartphone Addiction Scale (SAS) and its short version (SAS-SV) are among the most widely used tests worldwide to evaluate PSU. Developed in South Korea by Kwon et al. [25], the SAS is a multidimensional scale comprising 33 items grouped into six dimensions: disturbance of daily life, positive anticipation, abstinence, cyberspace-oriented relationship, overuse, and tolerance. The items are answered on a six-point Likert scale. The scores of the six dimensions are summed to obtain a total score, where higher scores indicate more PSU. The SAS showed excellent internal consistency in its validation (α = 0.967).
The SAS-SV [42] is a short version of the SAS comprising ten items selected by experts. Like the SAS, the items are answered on a six-point Likert scale, with scores ranging from 10 to 60. The authors propose cut-off points to classify excessive smartphone use: 31 for men and 33 for women.
The SAS-SV has a one-dimensional structure and showed high internal consistency in its validation (α = 0.911). In addition to its good psychometric properties, the small number of items in the SAS-SV offers several advantages, including shorter application times and higher response rates [43].
These advantages have made the SAS-SV a widely used scale, validated in numerous international samples, including Chinese children and adolescents [44], Brazilian adolescents [45], Iranian adolescents [46,47], Indonesian adolescents [48], Italian adolescents and young people [49], and Moroccan youth and adults [50]. The scale has also been tested on Chinese [21] and American [51] adults. Additionally, studies have been carried out among Turkish [52], Pakistani [53], Serbian [54], Chinese [55], and Italian [22] populations.
Few validations of this instrument have been developed in the Spanish-speaking world. Lopez-Fernandez validated the scale in the Spanish adult population [56], proposing a cut-off point of 32 to classify subjects with PSU, regardless of sex. Subsequently, Chávez and Rojas-Kramer validated the scale applied to Mexican university students [57].
All validations confirm the SAS-SV’s one-dimensional structure, except those performed by Cheung et al. [44] and Zhao et al. [55], who found a better fit with a three-factor structure. The internal consistency of the validations was generally good, with Cronbach’s alphas ranging from 0.74 [48] to 0.89 [54,57].
According to the National Telecommunications Commission of Honduras, approximately eight million Hondurans (82.7%) have a mobile phone line. Of these, seven million (71.70% of the population) are mobile Internet subscribers, with most relying on 3G or 4G broadband services [58]. Despite the widespread use of mobile internet, no studies have examined the prevalence or factors associated with PSU in Honduras, revealing a clear need for research in this area.
The rapid growth of smartphone use in Honduras—particularly among younger populations—has raised increasing concerns about digital addiction and its potential impact on mental health. Cultural factors may also significantly contribute to the development of PSU [59].
In addition, the country’s economic context and the growing accessibility of mobile technologies have made smartphones an essential part of daily life. However, the potential negative effects on mental health remain insufficiently studied.
These contextual factors underscore the importance of investigating PSU in Honduras, where the sociocultural and technological landscape may differ significantly from that of other regions. Yet, the absence of validated evidence specific to the Honduran population poses challenges to conducting such research. While the Smartphone Addiction Scale–Short Version (SAS-SV) has been translated into Spanish, this alone does not ensure its validity and reliability across all Spanish-speaking contexts, given the considerable geographic and social diversity of the Spanish language [60].
Thus, validating the SAS-SV in the Honduran context is essential, as cultural, socioeconomic, and technological differences may influence how PSU manifests in this population. Such validation would not only address a significant gap in the literature but also establish a stronger foundation for future research on problematic smartphone use across Central America.
This study aimed to analyze the psychometric properties of the SAS-SV scale when applied to Honduran university students. The reliability and validity of the scale in this population were examined. A validated SAS-SV for Honduran university students provides researchers with a reliable instrument to explore the characteristics of PSU in this population group. This information will allow screening evaluations using local or established cut-off points and enable cross-cultural comparisons.
Our hypotheses regarding SAS-SV were as follows:
H1) The scale will demonstrate a unidimensional factor structure.
H2) The scale will demonstrate adequate internal consistency (> 0.80).
H3) The scale will show measurement invariance across sex (male and female) and age groups.
H4) The scale will demonstrate convergent validity through positive correlations with problematic Internet use, depression, anxiety, and stress.
Method
Participants
The sample comprised 530 students from the Francisco Morazán National Pedagogical University of Honduras, selected through convenience sampling. Participants ranged in age from 17 to 64 years (M = 26.16, SD = 8.33) and were primarily female (Table 1). Following the recommendations of Lloret et al. [61] and Henson et al. [62], who suggest that exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) should be performed on different samples to avoid misleading conclusions, we divided the total sample into two random subsamples (approximately 50% of the participants each) using SPSS 28.
Instruments
Sociodemographic questions. Sociodemographic data were collected, including participants’ age, biological sex, and average number of hours of smartphone use per day.
Smartphone addiction scale-short version [42]. This instrument consists of 10 items scored on a Likert scale from 1 (Strongly Disagree) to 6 (Strongly Agree). The range of scores is between 10 and 60, where a higher score indicates a higher level of smartphone addiction. This study employed the Spanish adaptation by Lopez Fernandez [56]. As the Honduran author deemed the translation appropriate for the local context, no modifications were made to the original item wording.
Internet addiction test (IAT) [63]. Problematic Internet use was assessed using the IAT, which comprises 20 items about the frequency of Internet use behaviors answered on a six-point Likert scale, from 0 (Never) to 5 (Always). The total score range of the scale is from 0 to 100, with higher scores indicating a higher level of Internet addiction. This study employed the Spanish adaptation by Fernández-Villa et al. [64]. In the present sample, the IAT demonstrated excellent internal consistency, with both McDonald’s omega and Cronbach’s alpha coefficients of 0.93.
Depression, anxiety, and stress scale–21 (DASS-21) [65]. Depression and anxiety were assessed using the DASS-21, a tool designed to measure the negative emotional states of depression, anxiety, and stress. Each was evaluated through a subscale of seven items. Participants indicated their responses regarding their emotional states over the past week using a four-point Likert scale, ranging from 1 (Does not apply to me at all) to 4 (Applies to me very much, or most of the time). Higher scores reflect higher levels of depression, anxiety, and stress. This study employed the Spanish adaptation by Daza et al. [66]. In our sample, the DASS-21 subscales demonstrated excellent internal consistency: the depression subscale yielded Cronbach’s alpha and McDonald’s omega coefficients of 0.90; the anxiety subscale, 0.88; and the stress subscale, 0.89.
Procedure
An online survey was designed using Google Forms to collect the data needed for the study, which was distributed to students via email. The first page of the questionnaire provided information on the objectives and scope of the research, as well as its voluntary and completely anonymous nature. No personally identifiable information was collected, ensuring participants’ anonymity and confidentiality.
Participants were required to provide written informed consent before answering the questions by marking a specific item indicating their understanding. The students did not receive any incentive to participate in the research. The Francisco Morazán National Pedagogical University Research Ethics Committee approved the study, reference number 2023−003.
Statistical analysis
Given the limited number of studies examining the validity of psychosocial measures in the Honduran population, a sequential analytical strategy was implemented to maximize the extraction of psychometric information. First, responses exhibiting insufficient effort or careless answering (IE/C) were identified to reduce potential sources of response bias [67,68]. Second, item-level descriptive statistics were computed through quantitative analysis. Third, the internal structure of the scale was evaluated in terms of dimensionality, reliability, and measurement invariance across groups. Finally, associations with external variables were examined.
Potential responses with insufficient effort or carelessness. Two complementary methods were applied to identify patterns of insufficient effort or carelessness [67,68]: response invariability and response inconsistency [68]. Response invariability was assessed using the longstring method (LS), which counts the number of identical consecutive responses in a participant’s answers [69]. Response inconsistency was evaluated using the Mahalanobis distance (D2) method [70]. These two indices—LS and D2—are widely recommended as a minimum screening procedure for detecting IE/C responses [68].
A cutoff of LS > 7 (upper limit based on Tukey’s hinge) was used for response invariability. For response inconsistency, participants were flagged if their D2 exceeded the critical chi-square value corresponding to the number of items (i.e., D2 > χ2cutoff, df: number of items). Participants exceeding both thresholds (LS > 7 and D2 > χ2cutoff) were considered to have produced potentially overly consistent or inconsistent responses due to insufficient effort [67–69].
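The two screening indices described above can be sketched as follows. This is a minimal illustration with hypothetical toy data and variable names of our own; the chi-square cutoff of 23.21 corresponds to the 0.99 quantile with df = 10 items, as used in the study.

```python
import numpy as np

def longstring(row):
    """Length of the longest run of identical consecutive responses (LS)."""
    best = run = 1
    for prev, cur in zip(row, row[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def mahalanobis_d2(X):
    """Squared Mahalanobis distance of each response vector from the centroid."""
    X = np.asarray(X, dtype=float)
    diff = X - X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

# Toy data: 40 respondents x 10 items on a 1-6 scale, with one straight-liner
rng = np.random.default_rng(0)
X = rng.integers(1, 7, size=(40, 10))
X[0] = 4  # respondent 0 gives the same answer to every item

CHI2_CUTOFF = 23.21  # approx. chi2.ppf(0.99, df=10), as reported in the study
ls_flags = np.array([longstring(list(r)) > 7 for r in X])
d2_flags = mahalanobis_d2(X) > CHI2_CUTOFF
flagged = ls_flags & d2_flags  # exceeds both thresholds
```

The straight-lining respondent trips the LS screen (run of 10 identical answers), while D2 captures multivariate inconsistency relative to the sample's correlation structure.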
Item analysis. Distributional statistics (means, standard deviations, skewness, and kurtosis) were calculated to characterize the statistical properties of the items.
Internal Structure. Three types of evidence were examined to evaluate the internal structure of the SAS-SV: dimensionality, reliability, and measurement invariance across groups.
Modeling. To determine the most appropriate measurement model for the SAS-SV, dimensionality was assessed using a range of analytic approaches. First, to obtain a baseline of the number of latent dimensions, three methods were applied: empirical Kaiser criterion (EKC) [71], parallel analysis (PA) [72], and the Hull criterion (HULL) [73]. The inter-item correlation matrices were polychoric to ensure consistency with the primary modeling of the SAS-SV.
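As an illustration of one of the dimensionality checks above, the following sketch implements Horn's parallel analysis with a percentile reference. For simplicity it uses Pearson rather than polychoric correlations and simulated one-factor data of our own; none of the numbers correspond to the study.

```python
import numpy as np

def parallel_analysis(X, n_sims=200, seed=0):
    """Horn's parallel analysis: retain factors whose observed eigenvalues
    exceed the 95th percentile of eigenvalues from random data.
    (Pearson correlations here; the study used polychoric matrices.)"""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]  # descending
    sim = np.empty((n_sims, p))
    for s in range(n_sims):
        R = rng.normal(size=(n, p))
        sim[s] = np.linalg.eigvalsh(np.corrcoef(R, rowvar=False))[::-1]
    ref = np.percentile(sim, 95, axis=0)  # percentile method
    return int(np.sum(obs > ref)), obs, ref

# Toy one-factor data: 10 items driven by a single latent trait
rng = np.random.default_rng(1)
theta = rng.normal(size=(300, 1))
X = theta * rng.uniform(0.5, 0.9, size=10) + rng.normal(scale=0.6, size=(300, 10))
k, obs, ref = parallel_analysis(X)  # k: number of retained dimensions
```

With a single dominant latent trait, only the first observed eigenvalue exceeds its random-data reference, so one dimension is retained.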
The CFA was conducted using the ULSMV estimator (unweighted least squares, with a mean- and variance-adjusted χ2 statistic), which yields more accurate parameter estimates for ordinal data than other estimators [74,75]. The WLSMV estimator (weighted least squares, mean- and variance-adjusted), frequently implemented by treating the items as ordinal categorical variables [76], was used for comparison and decision purposes. After identifying the number of dimensions, the congeneric one-dimensional model (factor loadings freely estimated) was evaluated first. Second, a tau-equivalent model, which constrains the loadings to equality (the condition typically assumed when estimating the alpha coefficient), was evaluated. Third, following the results of Hamamura et al. [77], a structure of three correlated factors was modeled.
Score reliability. The omega coefficient [78], appropriate for congeneric models, was calculated; the alpha coefficient was also computed for comparability with previous studies. In addition, reliability was approximated at different points of the latent attribute: the items’ factor loadings and response thresholds (estimated from the inter-item polychoric correlations) were transformed into discrimination and difficulty parameters under the two-parameter model of Item Response Theory [78,79].
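The two computations in this step can be sketched directly from standardized loadings. The loading values below are hypothetical examples chosen to span the range later reported for the SAS-SV (0.53 to 0.90), not the study's estimates; the 2PL conversion uses the standard normal-ogive relations a = λ/√(1 − λ²) and b = τ/λ.

```python
import math

def omega(loadings):
    """McDonald's omega for a congeneric one-factor model with standardized
    loadings (residual variance taken as 1 - lambda^2)."""
    num = sum(loadings) ** 2
    resid = sum(1 - l ** 2 for l in loadings)
    return num / (num + resid)

def to_2pl(loading, threshold):
    """Normal-ogive 2PL parameters from a standardized loading and threshold:
    discrimination a = lambda / sqrt(1 - lambda^2), difficulty b = tau / lambda."""
    a = loading / math.sqrt(1 - loading ** 2)
    b = threshold / loading
    return a, b

# Hypothetical loadings spanning the reported SAS-SV range
lams = [0.53, 0.62, 0.70, 0.75, 0.78, 0.80, 0.82, 0.85, 0.88, 0.90]
w = omega(lams)
a, b = to_2pl(0.80, 0.40)
```

The same loading that makes an item a strong factor indicator also yields a high IRT discrimination, which is why the information curves in the Results peak where the loadings are strongest.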
Equivalence between groups. Once the structure was defined, the researchers investigated equivalence between groups by identifying possible differential item functioning (DIF). Within the SEM framework, a multiple-indicator-multiple-causes (MIMIC) approach was used [80], in which the covariates that may explain DIF (in this study, sex and age) entered as exogenous predictors of the item responses. To test the two forms of DIF, uniform (DIFunif, DIF constant across levels of the construct, equivalent to non-invariance of the intercepts) and non-uniform (DIFnunif, DIF varying across construct levels, equivalent to non-invariance of the factor loadings), we employed Restricted Factor Analysis (RFA) [81] with product indicators (RFA-PI) [82,83]. The product indicators represent the interaction between the construct and each covariate. Three predictors of item responses were used: the construct (θ), the grouping variable (υ1: sex, υ2: age), and the interaction between the construct and each grouping variable (θυ1 and θυ2). Each interaction variable is the product θυi, doubly centered to maintain orthogonality between predictors. RFA-PI is equivalent to MIMIC modeling and can be interpreted similarly. The parameters identifying DIF were evaluated with a score test, with p = 0.05 corrected using a Bonferroni adjustment [84].
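The logic of this screen can be illustrated with a simplified observed-variable analogue: regressing an item on a proxy for the construct, the covariate, and their doubly centered product, so the covariate coefficient approximates uniform DIF and the product coefficient non-uniform DIF. This OLS sketch is our own simplification of RFA-PI (which operates on the latent variable), and all data and names are hypothetical.

```python
import numpy as np

def dif_screen(item, theta_proxy, covariate):
    """OLS analogue of the MIMIC / RFA-PI screen: returns the coefficients on
    the covariate (uniform DIF) and on the doubly centered product term
    (non-uniform DIF)."""
    t = (theta_proxy - theta_proxy.mean()) / theta_proxy.std()
    g = covariate - covariate.mean()
    inter = t * g
    inter = inter - inter.mean()  # double-centering, mirroring RFA-PI
    X = np.column_stack([np.ones_like(t), t, g, inter])
    beta, *_ = np.linalg.lstsq(X, item, rcond=None)
    return beta[2], beta[3]

rng = np.random.default_rng(2)
n = 500
theta = rng.normal(size=n)
sex = rng.integers(0, 2, size=n).astype(float)
clean_item = 0.8 * theta + rng.normal(scale=0.5, size=n)
biased_item = clean_item + 1.0 * sex  # inject uniform DIF for one group

unif_clean, _ = dif_screen(clean_item, theta, sex)
unif_biased, _ = dif_screen(biased_item, theta, sex)
```

For the clean item, the covariate coefficient hovers near zero; for the biased item, it recovers the injected group shift, which is the pattern a significant score test would flag.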
Association with other variables. Convergent validity is a form of construct validity evidence that refers to the degree to which a test correlates with other measures that assess theoretically related constructs. Its primary function is to help confirm whether an instrument appropriately measures the intended construct [85].
To assess the convergent validity of the SAS-SV, Pearson’s bivariate correlations were calculated between the SAS-SV total score and the following variables: Internet Addiction Test (IAT) scores, DASS-21 subscale scores (depression, anxiety, and stress), and participants’ self-reported daily smartphone usage (in hours). The strength of the correlations was interpreted based on the guidelines proposed by Gignac and Szodorai [86], where values around 0.10 are considered small, around 0.20 moderate, and 0.30 or higher are considered strong.
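A minimal sketch of the correlation step and the Gignac and Szodorai benchmarks follows; the function names and cutoff labels are ours, and the benchmarks are encoded as simple interval labels.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def effect_label(r):
    """Gignac & Szodorai benchmarks: ~.10 small, ~.20 moderate, >= .30 strong."""
    r = abs(r)
    if r >= 0.30:
        return "strong"
    if r >= 0.20:
        return "moderate"
    if r >= 0.10:
        return "small"
    return "negligible"
```

Applied to the SAS-SV, each external variable's correlation with the total score would simply be computed and labeled this way.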
Prevalence of smartphone users. Prevalence was estimated based on the cut-off point criteria established by Kwon et al. [42], which initially proposed gender-specific thresholds using receiver operating characteristic (ROC) curve analysis. However, in the present study, no statistically significant differences were observed between males and females in the total SAS-SV scores. Therefore, a single cut-off point was applied to classify participants as PSU. Differences in total SAS-SV scores between problematic and non-problematic users were analyzed using independent-samples t-tests.
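The classification and comparison steps above can be sketched as follows. The toy scores are hypothetical, the single cut-off of 32 mirrors the one applied in this study, and treating a score at the cut-off as problematic (>=) is our assumption.

```python
import math

def classify_psu(scores, cutoff=32):
    """True = problematic user; scores at or above the unified cut-off
    (>= is an assumption; the study does not specify strict vs. inclusive)."""
    return [s >= cutoff for s in scores]

def pooled_t(x, y):
    """Independent-samples t statistic (pooled variance) and degrees of freedom."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((a - mx) ** 2 for a in x) / (nx - 1)
    vy = sum((b - my) ** 2 for b in y) / (ny - 1)
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    t = (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))
    return t, nx + ny - 2

scores = [18, 25, 31, 32, 39, 45, 22, 50]  # toy SAS-SV totals
flags = classify_psu(scores)
psu = [s for s, f in zip(scores, flags) if f]
non_psu = [s for s, f in zip(scores, flags) if not f]
t_stat, df = pooled_t(psu, non_psu)
```

By construction the PSU group sits above the cut-off, so the t statistic is large and positive; the study's reported t(528) reflects the same comparison on the full sample.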
Results
Insufficient effort/potential carelessness
Ninety participants were detected by D2, with values equal to or greater than the cut-off point (D2 = 23.20, p = 0.01). The longstring method detected 173 participants at a cut-off point of 7. Due to the low agreement between the two indices (χ2 = 21.66, p < 0.001; Cramér’s V = 0.173, 95% CI = 0.134, 0.198), the cases detected by both methods were removed. The final sample comprised 530 participants.
Item analysis
The average response to each item tended to be below 3 (Weakly disagree), and response variability was similar across items (SD < 1.5). The items were positively skewed, with all skewness values below 1.5; all kurtosis values were below 1.0 (except item 1). See Table 2. In general, there was no excessive skewness or kurtosis. However, both multivariate normality (Henze-Zirkler test = 3.81, p < 0.001) and univariate normality (Anderson-Darling tests between 15.26 and 62.17, all p < 0.001) were rejected.
Internal structure
Dimensionality. Using inter-item polychoric correlations, the EKC method yielded the 1st and 2nd eigenvalues at 6.100 and 1.069, respectively; the reference values were 1.294 and 1.151 (percentile method). Similarly, parallel analysis (PA) obtained the 1st and 2nd eigenvalues as 6.100 and 1.069, respectively; the reference values were 1.271 and 1.191 (percentile method). The Hull criterion indicated a single dimension. These results were consistent with item Pearson correlations. In conclusion, EKC, HULL, and PA indicated a latent single dimension.
Measurement models fit. After confirming the unidimensional structure of the SAS-SV, parameter estimates were obtained under two submodels: the congeneric model, which allows item factor loadings to vary, and the tau-equivalent model, which assumes equal factor loadings across all items. Additionally, the multidimensional model proposed by Hamamura et al. [77] was empirically tested to explore alternative factor structures.
Congeneric model. The factor loadings and standard errors from the two estimators did not differ substantially (Table 3). The fit was acceptable in terms of CFI and SRMR, and comparatively better with ULSMV, although it was not adequate in terms of RMSEA. The factor loadings from both estimators were high and similarly distributed (ULSMV: M = 0.748, Min = 0.532, Max = 0.902, SD = 0.105; WLSMV: M = 0.761, Min = 0.532, Max = 0.903, SD = 0.108). Given these results, the ULSMV estimator was chosen. With this estimator, the factor loadings of the SAS-SV items ranged between 0.532 and 0.903. Following Sharma’s guidelines [87], which classify loadings below 0.32 as poor, ≥ 0.45 as acceptable, ≥ 0.55 as good, ≥ 0.63 as very good, and ≥ 0.71 as excellent, all SAS-SV items can be considered good indicators of the PSU construct among Honduran university students.
Tau-equivalent model. Compared to the congeneric model, this model showed unacceptable fit: ULSMV χ2 = 912.521 (df = 44), CFI = 0.956, SRMR = 0.098, RMSEA = 0.193 (90% CI = 0.182, 0.204).
Three-factor model. This model, which comprises three correlated constructs, was tested based on Hamamura’s work [77]. ULSMV χ2 = 70.555 (df = 24), CFI = 0.997, SRMR = 0.031, RMSEA = 0.061 (90% CI = 0.044, 0.077). This model was good in all its fit indicators. However, an examination of local fit (factor loads or interfactor correlations) found very high correlations (Daily~~Withd = 0.720, Daily~~Others = 0.799, Withd~~Others = 0.900). Therefore, the one-dimensional congeneric model was accepted.
Equivalence between groups. For the sex grouping, tests for uniform DIF (χ2 M = 0.772, Min = 0.023, Max = 1.769) and non-uniform DIF (χ2 M = 1.088, Min = 0.008, Max = 5.663) were not statistically significant (Table 4). The same result held for age: uniform DIF (χ2 M = 0.905, Min = 0.000, Max = 2.714) and non-uniform DIF (χ2 M = 1.048, Min = 0.14, Max = 3.381). Therefore, the intercepts and factor loadings of the items were equivalent across both groupings.
Reliability. The omega coefficient (0.907; 95% CI = 0.894, 0.920) and the alpha coefficient (0.907; 95% CI = 0.890, 0.918) were high and virtually equal. The approximation of the item information and the score appears in Figs 1 and 2, respectively. The items and the score maintain a consistent level of information in the latent attribute. Specifically, the maximum level of information is attained from the mean to about two SD above the mean. Reliability follows the same pattern (Fig 2).
Fig 1. Item information from factor analysis. Note: SAS1…SAS10: labels of the SAS-SV items.
Fig 2. Test score information from factor analysis.
Validity with other variables
Convergent validity was examined through the correlations between the total score of the SAS-SV and the following variables: the total score of the IAT, the DASS-21 subscale scores, and the number of hours per day using the smartphone. As shown in Table 5, the total score presented statistically significant positive correlations with problematic Internet use, anxiety, depression, and stress. Following the criterion proposed by Gignac and Szodorai [86], which considers correlations of 0.10, 0.20, and 0.30 to be relatively small, moderate, and relatively large, respectively, these correlations are of high magnitude. In contrast, the correlation between the SAS-SV score and daily hours of smartphone use was not statistically significant.
Prevalence of smartphone users
Kwon et al. [42] originally proposed gender-specific cut-off points for identifying problematic smartphone use (PSU): 31 for males and 33 for females (on a scale ranging from 10 to 60). To estimate the prevalence of PSU in our sample, we followed the procedure used in the Spanish and French adaptations of the SAS-SV [56], beginning with an assessment of gender differences in total SAS-SV scores. Since no statistically significant differences were found between males and females in our sample, t(528) = −1.609, p = 0.108, a single average cut-off point of 32 was applied across genders. This approach allowed for a more homogeneous classification of PSU levels among participants.
Using this unified cut-off, 103 students (19.4%) were classified as exhibiting problematic smartphone use. When examined by gender, 18.5% of males and 19.7% of females were classified as PSU users. Statistically significant differences were observed between PSU and non-PSU groups in total SAS-SV scores, t(528) = −32.775, p < 0.001. Students in the PSU group had a mean score of 39.22 (SD = 6.01), whereas those in the non-PSU group had a mean score of 18.72 (SD = 5.62).
Discussion
The primary objective of this study was to examine the psychometric properties of the SAS-SV in a sample of Honduran university students. This is the first comprehensive evaluation of the scale’s psychometric qualities within the Honduran context. This study offers a preliminary assessment of the SAS-SV’s validity and reliability in this specific university population and lays a strong foundation for future research on PSU in Honduras.
An exploratory factor analysis (EFA) was conducted using multiple criteria to determine the optimal number of factors to retain. The results consistently supported a unidimensional structure as the most appropriate for the SAS-SV. Subsequently, several factor models were tested through confirmatory factor analysis. The tau-equivalent unidimensional model demonstrated poor fit and was therefore deemed inadequate. In contrast, the three-factor model proposed by Hamamura et al. [77] showed acceptable global fit indices; however, the extremely high correlations among the three factors indicated a lack of discriminant validity among the dimensions. Based on these findings, the congeneric unidimensional model was selected as the most suitable representation of the scale.
As an additional note, the results for the RMSEA were inconsistent with our conclusion regarding the goodness of fit of the SAS-SV measurement model. However, the inclusion of RMSEA to conclude about the fit of SEM models, although common, is controversial due to the influence of the model’s degrees of freedom [88,89]. Importantly, all other fit indices supported the conclusion of adequate model fit, thereby providing continued support for the selected measurement model.
The internal consistency of the SAS-SV was analyzed using Cronbach’s alpha and McDonald’s omega coefficients, both obtaining values of 0.91. According to the criteria established by Nunnally et al. [90], these values indicate that the scale is suitable for both research and clinical applications. The scale is intended to facilitate the early identification of students with PSU, enabling timely interventions to mitigate this behavior and prevent its associated negative consequences.
Regarding the Spanish-language adaptations, reliability coefficients have also been consistently high across different linguistic and cultural contexts. The validation conducted in a Spanish population reported a Cronbach’s alpha of 0.88 [56], while the Mexican adaptation showed an alpha of 0.89 [57]. These findings suggest that the Spanish versions of the SAS-SV maintain strong internal consistency across Spanish-speaking populations, reinforcing the robustness and cross-cultural applicability of the instrument.
Regarding convergent validity, it was hypothesized that the SAS-SV score would present positive correlations with problematic Internet use, depression, anxiety, and stress. The SAS-SV score in our study showed a strong positive correlation with the IAT score. Although it is essential to distinguish between PSU and problematic Internet use, as differences have been found in the prevalence and predictor variables of the two phenomena [18,23,31,91], several studies have found overlap between the two constructs [92–94] beyond their shared reliance on Internet-based technology. Likewise, a recent meta-analysis found neuroanatomical similarities between adolescents and young people with PSU and those with problematic Internet use, mainly pertaining to executive functions and reward processing [95]. Beyond these similarities, it is noteworthy that the IAT dimensions (excessive use, neglect of social life and work, anticipation, salience, lack of control) are similar to the symptoms assessed by the SAS-SV [45].
The total SAS-SV score also showed high positive correlations with the three DASS-21 scales, consistent with the results of various meta-analyses and reviews that found PSU associated with psychological distress, especially anxiety and depression [7,96–98]. Although the direction of this association remains controversial, some studies claim that PSU increases levels of anxiety and depression [99–101], while others found that psychological distress increases the risk of developing PSU [102–105]. In line with the latter, the theoretical model proposed by Billieux et al. [30] argues that one of the pathways that can lead to PSU is the “reassurance pathway,” in which individuals are driven to use their smartphones by the need to maintain relationships and obtain reassurance from others; risk factors along this pathway include high levels of anxiety. Integrating both proposals, some studies have found that the association between PSU and psychological distress is bidirectional [106,107].
Finally, although several studies have found a positive correlation between the SAS-SV score and self-reported smartphone usage time [45,56,108–110], the present study found no statistically significant correlation between these variables. It should be noted that PSU and smartphone usage time are not the same construct, and high smartphone usage time alone does not necessarily cause adverse effects [19]. Additionally, self-reported time spent using digital devices, like smartphones, should not be taken as a central explanatory variable for the problematic use of new technologies because it is not an objective reflection of their actual use [111].
The analysis of DIF, including both uniform and non-uniform DIF, revealed no statistically significant differences in item performance across sex and age groups. Specifically, the χ² values for both types of DIF did not reach statistical significance, suggesting that the test items function equivalently across the subgroups evaluated. This finding is particularly important, as it reinforces the measurement invariance of the instrument across diverse populations.
The absence of DIF implies that the scale measures the same underlying construct regardless of sex or age, thereby reducing the risk of bias in the results. This evidence of measurement invariance strengthens the instrument’s construct validity, indicating that demographic variables do not systematically influence participants’ responses. Such invariance is essential when generalizing findings across different subpopulations and supports the use of the instrument in diverse research settings without concern that sex- or age-related bias will compromise the interpretation of results.
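The logic of a uniform-DIF test can be illustrated with a likelihood-ratio χ² comparing nested models for a single item: one predicting the item from the trait (matching) score alone, and one adding group membership. The sketch below uses simulated data and a hand-rolled logistic fit; it is a generic illustration of the likelihood-ratio approach, not the restricted factor analysis procedure employed in this study:

```python
import numpy as np
from scipy.stats import chi2

def logit_llf(X, y, iters=50):
    """Max log-likelihood of a logistic regression, fit by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(X @ beta)))
        # Newton step: solve (X'WX) step = X'(y - p), with W = diag(p(1-p))
        step = np.linalg.solve(X.T @ (X * (p * (1 - p))[:, None]), X.T @ (y - p))
        beta += step
        if np.abs(step).max() < 1e-10:
            break
    p = 1 / (1 + np.exp(-(X @ beta)))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Simulated data with NO group effect built in (the no-uniform-DIF null).
rng = np.random.default_rng(7)
n = 500
group = rng.integers(0, 2, n).astype(float)   # e.g., sex
trait = rng.normal(0, 1, n)                   # matching variable (trait level)
item = rng.binomial(1, 1 / (1 + np.exp(-(0.9 * trait - 0.3))))

ones = np.ones(n)
ll_base = logit_llf(np.column_stack([ones, trait]), item)
ll_group = logit_llf(np.column_stack([ones, trait, group]), item)
lr_stat = 2 * (ll_group - ll_base)            # ~ chi2(df=1) under no uniform DIF
p_value = chi2.sf(lr_stat, df=1)
```

A non-significant χ² (as found here for all SAS-SV items) means adding group membership does not improve prediction of the item beyond the trait level, i.e., respondents at the same trait level answer the item the same way regardless of group.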
The prevalence of PSU in our sample was 19.4%, which differs from the 26.96% reported in a recent meta-analysis of university students [18]. Several factors may account for this discrepancy, including the specific characteristics of our sample, the cultural context in which the study was conducted, and the methodological approaches employed. Despite this variability, our findings support the notion that PSU is a relevant concern in university settings, even though its prevalence may vary depending on the characteristics of the population studied.
This study has several limitations that should be considered when interpreting its findings. First, participants were recruited through convenience sampling from a single public university focused exclusively on teacher training. This institution primarily serves students from Indigenous communities and has one of the lowest tuition costs among universities in Honduras. These factors may influence participants’ socioeconomic background and access to technology, thereby limiting the generalizability of the results. Future research in Honduras should aim to include more diverse and representative samples to confirm the present findings.
Second, the study relied on self-report measures, which may be subject to response biases such as social desirability. This limitation could affect the validity of the data. Future research could benefit from the use of passive monitoring techniques to objectively assess smartphone use. These methods allow for the collection of accurate and ecologically valid data (e.g., screen time) without requiring active engagement from participants [112].
Third, although participants’ smartphone usage time was recorded, the specific ways in which they use this technology were not examined. Future research should investigate the particular smartphone activities university students engage in to better understand their impact on PSU.
Lastly, this study did not evaluate the test–retest reliability of the SAS-SV. Future studies should address this limitation by examining the scale’s temporal stability within the Honduran population to ensure the consistency of scores over time in this context.
Despite these limitations, the study includes a notable strength: the evaluation of response bias due to insufficient effort or careless responding (IE/C). This response pattern has been discussed in the literature for several years [69] and has more recently been recognized as a significant source of error that can compromise the validity of results in both psychometric and non-psychometric research relying on self-reports [67–69]. With the exception of a few recent studies conducted in Spanish-speaking contexts [113–115], methods for detecting IE/C responses remain infrequently used in empirical research with Spanish-speaking samples.
In the present study, although the detection of IE/C resulted in the exclusion of approximately one-third of the original sample, this decision contributed to two important outcomes: (1) it enhanced the internal validity of the findings by minimizing potential distortions, and (2) the proportion of excluded cases was consistent with prevalence rates reported in the literature [116,117].
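One widely used multivariate screen for IE/C responding flags cases whose Mahalanobis distance [70] from the sample centroid exceeds a χ² cutoff. The sketch below illustrates the general idea with simulated data and an illustrative significance level; it is not the study’s exact screening procedure:

```python
import numpy as np
from scipy.stats import chi2

def flag_multivariate_outliers(X, alpha=0.001):
    """Return a boolean mask of rows whose squared Mahalanobis distance
    from the centroid exceeds the chi-square(1 - alpha, df = n_items) cutoff."""
    X = np.asarray(X, dtype=float)
    diff = X - X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)  # row-wise x' S^-1 x
    return d2 > chi2.ppf(1 - alpha, df=X.shape[1])

# Simulated data: 200 plausible respondents plus one aberrant response pattern.
rng = np.random.default_rng(0)
X = rng.normal(3.5, 1.0, size=(200, 10))
X[0] = [6, 1] * 5          # extreme zigzag pattern, far from the centroid
flags = flag_multivariate_outliers(X)
```

Because the distance accounts for inter-item covariance, this screen catches response patterns that are implausible jointly (e.g., zigzagging across correlated items) even when each single answer looks acceptable.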
Conclusions
Despite the aforementioned limitations, this study provides robust evidence supporting the psychometric validity of the Spanish version of the SAS-SV among Honduran university students. The scale demonstrated strong factorial validity, high internal consistency, and measurement invariance across sex and age groups. Furthermore, the strong positive correlations between SAS-SV scores and indicators of psychological distress—such as depression, anxiety, and stress—highlight the relevance of addressing problematic smartphone use (PSU) within the broader context of mental health, given its close association with emotional well-being.
These findings suggest that the SAS-SV can serve as an effective early screening tool for identifying students at risk of PSU, offering valuable insights into the relationship between smartphone use and mental health. Owing to its psychometric robustness, the scale may be integrated into mental health screenings within educational institutions to facilitate the early detection of at-risk individuals and support timely intervention. Early identification and appropriate support may help prevent the escalation of PSU into more severe psychological conditions, such as chronic anxiety, depression, or stress-related disorders.
In conclusion, the SAS-SV emerges as a valuable tool for both research and practice, enabling the early identification of PSU and supporting intervention strategies aimed at mitigating its negative psychological effects. Given the increasing prevalence of smartphone use among young adults—particularly in academic settings—the implementation of validated screening tools such as the SAS-SV may carry significant public health benefits. Their use could promote healthier digital habits and contribute to the improvement of overall mental health in the population.
Acknowledgments
The authors thank the participants for their involvement in responding to the survey. We also acknowledge the financial support provided by the Writing Lab, Institute for the Future of Education, Tecnologico de Monterrey, Mexico, which was instrumental in developing this work.
References
- 1. Samaha M, Hawi NS. Relationships among smartphone addiction, stress, academic performance, and satisfaction with life. Comput Hum Behav. 2016;57:321–5.
- 2. Oberlo. How many people have smartphones in 2024? 2024. Available from: https://www.oberlo.com/statistics/how-many-people-have-smartphones
- 3. Statista. Number of smartphone users worldwide from 2014 to 2029. 2025 [updated 03 March 2025]. Available from: https://www.statista.com/forecasts/1143723/smartphone-users-in-the-world
- 4. Tangcharoensathien V, Witthayapipopsakul W, Viriyathorn S, Patcharanarumol W. Improving access to assistive technologies: challenges and solutions in low- and middle-income countries. WHO South East Asia J Public Health. 2018;7(2):84–9. pmid:30136666
- 5. Ding Y, Huang H, Zhang Y, Peng Q, Yu J, Lu G, et al. Correlations between smartphone addiction and alexithymia, attachment style, and subjective well-being: A meta-analysis. Front Psychol. 2022;13:971735. pmid:36124050
- 6. Osorio-Molina C, Martos-Cabrera MB, Membrive-Jiménez MJ, Vargas-Roman K, Suleiman-Martos N, Ortega-Campos E, et al. Smartphone addiction, risk factors and its adverse effects in nursing students: A systematic review and meta-analysis. Nurse Educ Today. 2021;98:104741. pmid:33485161
- 7. Ratan ZA, Parrish A-M, Zaman SB, Alotaibi MS, Hosseinzadeh H. Smartphone addiction and associated health outcomes in adult populations: A systematic review. Int J Environ Res Public Health. 2021;18(22):12257. pmid:34832011
- 8. Ge J, Liu Y, Cao W, Zhou S. The relationship between anxiety and depression with smartphone addiction among college students: The mediating effect of executive dysfunction. Front Psychol. 2023;13:1033304. pmid:36710811
- 9. Shi X, Wang A, Zhu Y. Longitudinal associations among smartphone addiction, loneliness, and depressive symptoms in college students: Disentangling between- and within-person associations. Addict Behav. 2023;142:107676. pmid:36878182
- 10. Vujić A, Szabo A. Hedonic use, stress, and life satisfaction as predictors of smartphone addiction. Addict Behav Rep. 2022;15:100411. pmid:35746955
- 11. Mustafaoglu R, Yasaci Z, Zirek E, Griffiths MD, Ozdincler AR. The relationship between smartphone addiction and musculoskeletal pain prevalence among young population: a cross-sectional study. Korean J Pain. 2021;34(1):72–81. pmid:33380570
- 12. Correa-Iriarte S, Hidalgo-Fuentes S, Martí-Vilar M. Relationship between problematic smartphone use, sleep quality and bedtime procrastination: A mediation analysis. Behav Sci. 2023;13(10):839. pmid:37887489
- 13. Sönmez M, Gürlek Kısacık Ö, Eraydın C. Correlation between smartphone addiction and loneliness levels in nursing students. Perspect Psychiatr Care. 2021;57(1):82–7. pmid:32424870
- 14. Hidalgo-Fuentes S. Problematic smartphone use and procrastination in the academic setting: a meta-analysis. Electron J Res Educ Psychol. 2022;20:449–68.
- 15. Alotaibi MS, Fox M, Coman R, Ratan ZA, Hosseinzadeh H. Smartphone addiction prevalence and its association on academic performance, physical health, and mental well-being among university students in Umm Al-Qura University (UQU), Saudi Arabia. Int J Environ Res Public Health. 2022;19(6):3710. pmid:35329397
- 16. Sunday OJ, Adesope OO, Maarhuis PL. The effects of smartphone addiction on learning: A meta-analysis. Comput Hum Behav Rep. 2021;4:100114.
- 17. Fischer-Grote L, Kothgassner OD, Felnhofer A. The impact of problematic smartphone use on children’s and adolescents’ quality of life: A systematic review. Acta Paediatr. 2021;110(5):1417–24. pmid:33305437
- 18. Meng S-Q, Cheng J-L, Li Y-Y, Yang X-Q, Zheng J-W, Chang X-W, et al. Global prevalence of digital addiction in general population: A systematic review and meta-analysis. Clin Psychol Rev. 2022;92:102128. pmid:35150965
- 19. Olson JA, Sandra DA, Colucci ÉS, Al Bikaii A, Chmoulevitch D, Nahas J, et al. Smartphone addiction is increasing across the world: A meta-analysis of 24 countries. Comput Hum Behav. 2022;129:107138.
- 20. Candussi CJ, Kabir R, Sivasubramanian M. Problematic smartphone usage, prevalence and patterns among university students: A systematic review. J Affect Disord Rep. 2023;14:100643.
- 21. Luk TT, Wang MP, Shen C, Wan A, Chau PH, Oliffe J, et al. Short version of the Smartphone Addiction Scale in Chinese adults: Psychometric properties, sociodemographic, and health behavioral correlates. J Behav Addict. 2018;7(4):1157–65. pmid:30418073
- 22. Servidio R, Griffiths MD, Di Nuovo S, Sinatra M, Monacis L. Further exploration of the psychometric properties of the revised version of the Italian Smartphone Addiction Scale – Short Version (SAS-SV). Curr Psychol. 2022;42(31):27245–58.
- 23. Tateno M, Kim D-J, Teo AR, Skokauskas N, Guerrero APS, Kato TA. Smartphone addiction in Japanese college students: Usefulness of the Japanese version of the Smartphone Addiction Scale as a screening tool for a new form of internet addiction. Psychiatry Investig. 2019;16(2):115–20. pmid:30808117
- 24. Wang A, Wang Z, Zhu Y, Shi X. The prevalence and psychosocial factors of problematic smartphone use among Chinese college students: A three-wave longitudinal study. Front Psychol. 2022;13:877277. pmid:35450331
- 25. Kwon M, Lee J-Y, Won W-Y, Park J-W, Min J-A, Hahn C, et al. Development and validation of a smartphone addiction scale (SAS). PLoS One. 2013;8(2):e56936. pmid:23468893
- 26. Cocoradă E, Maican CI, Cazan A-M, Maican MA. Assessing the smartphone addiction risk and its associations with personality traits among adolescents. Child Youth Serv Rev. 2018;93:345–54.
- 27. Csibi S, Griffiths MD, Demetrovics Z, Szabo A. Analysis of problematic smartphone use across different age groups within the ‘Components model of addiction’. Int J Ment Health Addict. 2019;19(3):616–31.
- 28. Busch PA, McCarthy S. Antecedents and consequences of problematic smartphone use: A systematic literature review of an emerging research area. Comput Hum Behav. 2021;114:106414.
- 29. Kelleghan AR, Leventhal AM, Cruz TB, Bello MS, Liu F, Unger JB, et al. Digital media use and subsequent cannabis and tobacco product use initiation among adolescents. Drug Alcohol Depend. 2020;212:108017. pmid:32408138
- 30. Billieux J, Maurage P, Lopez-Fernandez O, Kuss DJ, Griffiths MD. Can disordered mobile phone use be considered a behavioral addiction? An update on current evidence and a comprehensive model for future research. Curr Addict Rep. 2015;2(2):156–62.
- 31. Montag C, Wegmann E, Sariyska R, Demetrovics Z, Brand M. How to overcome taxonomical problems in the study of Internet use disorders and what to do with “smartphone addiction”? J Behav Addict. 2021;9(4):908–14. pmid:31668089
- 32. Panova T, Carbonell X. Is smartphone addiction really an addiction? J Behav Addict. 2018;7(2):252–9. pmid:29895183
- 33. American Psychiatric Association. Diagnostic and statistical manual of mental disorders: DSM-5™. 5th ed. Arlington, VA: American Psychiatric Publishing; 2013.
- 34. World Health Organization. International classification of diseases 11th revision (ICD-11). 2018.
- 35. Lin Y-H, Chiang C-L, Lin P-H, Chang L-R, Ko C-H, Lee Y-H, et al. Proposed diagnostic criteria for smartphone addiction. PLoS One. 2016;11(11):e0163010. pmid:27846211
- 36. Wu Y-L, Lin S-H, Lin Y-H. Two-dimensional taxonomy of internet addiction and assessment of smartphone addiction with diagnostic criteria and mobile apps. J Behav Addict. 2021;9(4):928–33. pmid:33410771
- 37. Khan NF, Khan MN. A bibliometric analysis of peer-reviewed literature on smartphone addiction and future research agenda. Asia-Pac J Bus Adm. 2021;14(2):199–222.
- 38. Karakose T, Tülübaş T, Papadakis S. Revealing the intellectual structure and evolution of digital addiction research: An integrated bibliometric and science mapping approach. Int J Environ Res Public Health. 2022;19(22):14883. pmid:36429603
- 39. World Health Organization. Public health implications of excessive use of the internet, computers, smartphones and similar electronic devices: meeting report, Main Meeting Hall, Foundation for Promotion of Cancer Research, National Cancer Research Centre, Tokyo, Japan, 27–29 August 2014. Geneva, Switzerland: World Health Organization; 2015.
- 40. Alimoradi Z, Lotfi A, Lin C-Y, Griffiths MD, Pakpour AH. Estimation of behavioral addiction prevalence during COVID-19 pandemic: A systematic review and meta-analysis. Curr Addict Rep. 2022;9(4):486–517. pmid:36118286
- 41. Harris B, Regan T, Schueler J, Fields SA. Problematic Mobile Phone and Smartphone Use Scales: A systematic review. Front Psychol. 2020;11:672. pmid:32431636
- 42. Kwon M, Kim D-J, Cho H, Yang S. The smartphone addiction scale: development and validation of a short version for adolescents. PLoS One. 2013;8(12):e83558. pmid:24391787
- 43. Galesic M, Bosnjak M. Effects of questionnaire length on participation and indicators of response quality in a web survey. Public Opin Q. 2009;73(2):349–60.
- 44. Cheung T, Lee RLT, Tse ACY, Do CW, So BCL, Szeto GPY, et al. Psychometric properties and demographic correlates of the Smartphone Addiction Scale-Short Version among Chinese children and adolescents in Hong Kong. Cyberpsychol Behav Soc Netw. 2019;22(11):714–23. pmid:31621411
- 45. Andrade ALM, Scatena A, Martins GDG, Pinheiro B de O, Becker da Silva A, Enes CC, et al. Validation of Smartphone Addiction Scale - Short Version (SAS-SV) in Brazilian adolescents. Addict Behav. 2020;110:106540. pmid:32682269
- 46. Esmaeilpour F, Letafatkar A, Baker JS, Dutheil F, Khazaei O, Rabiei P, et al. Reliability and construct validity of the smartphone addiction scale short version (SAS-SV) in Iranian students. J Public Health. 2021;31(3):345–53.
- 47. Fallahtafti S, Ghanbaripirkashani N, Alizadeh SS, Rovoshi RS. Psychometric properties of the Smartphone Addiction Scale – Short Version (SAS-SV) in a sample of Iranian adolescents. Int J Dev Sci. 2019;14(1–2):19–26.
- 48. Arthy CC, Effendy E, Amin MM, Loebis B, Camellia V, Husada MS. Indonesian version of Addiction Rating Scale of Smartphone Usage Adapted from Smartphone Addiction Scale-Short Version (SAS-SV) in junior high school. Open Access Maced J Med Sci. 2019;7(19):3235–9. pmid:31949522
- 49. De Pasquale C, Sciacca F, Hichy Z. Italian validation of Smartphone Addiction Scale Short Version for adolescents and young adults (SAS-SV). Psych. 2017;8(10):1513–8.
- 50. Sfendla A, Laita M, Nejjar B, Souirti Z, Touhami AAO, Senhaji M. Reliability of the Arabic Smartphone Addiction Scale and Smartphone Addiction Scale-Short Version in two different Moroccan samples. Cyberpsychol Behav Soc Netw. 2018;21(5):325–32. pmid:29762065
- 51. Harris B, McCredie M, Fields S. Examining the psychometric properties of the Smartphone Addiction Scale and its short version for use with emerging adults in the U.S. Comput Hum Behav Rep. 2020;1:100011.
- 52. Noyan C, Darcin A, Nurmedov S, Yilmaz O, Dilbaz N. Validity and reliability of the Turkish version of the Smartphone Addiction Scale-Short Version among university students. Anadolu Psikiyatri Derg. 2015;16:73.
- 53. Khalily MT, Saleem T, Bhatti MM, Ahmad I, Hussain B. An Urdu adaptation of smartphone addiction scale-short version (SAS-SV). J Pak Med Assoc. 2019;69(5):700–10. pmid:31105291
- 54. Nikolic A, Bukurov B, Kocic I, Soldatovic I, Mihajlovic S, Nesic D, et al. The validity and reliability of the Serbian version of the Smartphone Addiction Scale-Short Version. Int J Environ Res Public Health. 2022;19(3):1245. pmid:35162268
- 55. Zhao H, Rafik-Galea S, Fitriana M, Song T-J. Translation and psychometric evaluation of Smartphone Addiction Scale-Short Version (SAS-SV) among Chinese college students. PLoS One. 2022;17(11):e0278092. pmid:36445890
- 56. Lopez-Fernandez O. Short version of the Smartphone Addiction Scale adapted to Spanish and French: Towards a cross-cultural research in problematic mobile phone use. Addict Behav. 2017;64:275–80. pmid:26685805
- 57. Escalera-Chávez ME, Rojas-Kramer CA. SAS-SV Smartphone addiction scale in Mexican university students. Educ Res Int. 2020;2020:1–10.
- 58. Comisión Nacional de Telecomunicaciones (CONATEL). Informe anual de los indicadores del sector de telecomunicaciones en Honduras. 2023.
- 59. Lee S-G, Trimi S, Kim C. The impact of cultural differences on technology adoption. J World Bus. 2013;48(1):20–9.
- 60. Lipski JM. Geographical and social varieties of Spanish: An overview. In: Hualde JI, Olarrea A, O’Rourke E, editors. The handbook of Hispanic linguistics. Chichester, UK: John Wiley & Sons; 2012. p. 1–26.
- 61. Lloret-Segura S, Ferreres-Traver A, Hernández-Baeza A, Tomás-Marco I. El análisis factorial exploratorio de los ítems: una guía práctica, revisada y actualizada. An Psicol. 2014;30(3).
- 62. Henson RK, Roberts JK. Use of exploratory factor analysis in published research: Common errors and some comment on improved practice. Educ Psychol Meas. 2006;66(3):393–416.
- 63. Young KS. Internet addiction: The emergence of a new clinical disorder. Cyberpsychol Behav. 1998;1(3):237–44.
- 64. Fernández-Villa T, Molina AJ, García-Martín M, Llorca J, Delgado-Rodríguez M, Martín V. Validation and psychometric analysis of the Internet Addiction Test in Spanish among college students. BMC Public Health. 2015;15:953. pmid:26400152
- 65. Lovibond PF, Lovibond SH. The structure of negative emotional states: comparison of the Depression Anxiety Stress Scales (DASS) with the Beck Depression and Anxiety Inventories. Behav Res Ther. 1995;33(3):335–43. pmid:7726811
- 66. Daza P, Novy DM, Stanley MA, Averill P. The depression anxiety stress scale-21: Spanish translation and validation with a Hispanic sample. J Psychopathol Behav Assess. 2002;24(3):195–205.
- 67. Meade AW, Craig SB. Identifying careless responses in survey data. Psychol Methods. 2012;17(3):437–55. pmid:22506584
- 68. Ward MK, Meade AW. Dealing with careless responding in survey data: Prevention, identification, and recommended best practices. Annu Rev Psychol. 2023;74:577–96. pmid:35973734
- 69. Johnson JA. Ascertaining the validity of individual protocols from Web-based personality inventories. J Res Pers. 2005;39(1):103–29.
- 70. Mahalanobis PC. On the generalized distance in statistics. Proc Natl Inst Sci India. 1936;2(1):49–55.
- 71. Braeken J, van Assen MALM. An empirical Kaiser criterion. Psychol Methods. 2017;22(3):450–66. pmid:27031883
- 72. Horn JL. A rationale and test for the number of factors in factor analysis. Psychometrika. 1965;30:179–85. pmid:14306381
- 73. Lorenzo-Seva U, Timmerman ME, Kiers HAL. The Hull method for selecting the number of common factors. Multivariate Behav Res. 2011;46(2):340–64. pmid:26741331
- 74. Forero CG, Maydeu-Olivares A, Gallardo-Pujol D. Factor analysis with ordinal indicators: A Monte Carlo study comparing DWLS and ULS estimation. Struct Equ Model. 2009;16(4):625–41.
- 75. Yang-Wallentin F, Joreskog K, Luo H. Confirmatory factor analysis of ordinal variables with misspecified models. Struct Equ Model. 2010;17(3):392–423.
- 76. Brauer K, Ranger J, Ziegler M. Confirmatory factor analyses in psychological test adaptation and development: A nontechnical discussion of the WLSMV estimator. Psychol Test Adapt Dev. 2023;4(1):4–12.
- 77. Hamamura T, Kobayashi N, Oka T, Kawashima I, Sakai Y, Tanaka SC, et al. Validity, reliability, and correlates of the Smartphone Addiction Scale-Short Version among Japanese adults. BMC Psychol. 2023;11(1):78. pmid:36959621
- 78. McDonald RP. Test theory: A unified treatment. Mahwah, NJ: L. Erlbaum Associates; 1999.
- 79. Kamata A, Bauer DJ. A note on the relation between factor analytic and item response theory models. Struct Equ Model. 2008;15(1):136–53.
- 80. Muthén BO. Latent variable modeling in heterogeneous populations. Psychometrika. 1989;54(4):557–85.
- 81. Oort FJ. Using restricted factor analysis to detect item bias. Methodika. 1992;6(2):150–66.
- 82. Kolbe L, Jorgensen TD. Using product indicators in restricted factor analysis models to detect nonuniform measurement bias. Cham: Springer International Publishing; 2018.
- 83. Kolbe L, Jorgensen TD. Using restricted factor analysis to select anchor items and detect differential item functioning. Behav Res Methods. 2019;51(1):138–51. pmid:30402814
- 84. Lee J, Little TD, Preacher KJ. Methodological issues in using structural equation models for testing differential item functioning. In: Cross-cultural analysis. 2nd ed. New York, NY: Routledge; 2018. p. 65–94.
- 85. Chin CL, Yao G. Convergent validity. In: Michalos AC, editor. Encyclopedia of quality of life and well-being research. Dordrecht: Springer Netherlands; 2014. p. 1275–6.
- 86. Gignac GE, Szodorai ET. Effect size guidelines for individual differences researchers. Pers Individ Differ. 2016;102:74–8.
- 87. Sharma S. Applied multivariate techniques. New York: John Wiley and Sons; 1996.
- 88. Kenny DA, Kaniskan B, McCoach DB. The performance of RMSEA in models with small degrees of freedom. Sociol Methods Res. 2014;44(3):486–507.
- 89. Kenny DA, McCoach DB. Effect of the number of variables on measures of fit in structural equation modeling. Struct Equ Model. 2003;10(3):333–51.
- 90. Nunnally JC, Bernstein IH. Psychometric theory. 3rd ed. New York: McGraw-Hill; 1994.
- 91. Choi S-W, Kim D-J, Choi J-S, Ahn H, Choi E-J, Song W-Y, et al. Comparison of risk and protective factors associated with smartphone addiction and Internet addiction. J Behav Addict. 2015;4(4):308–14. pmid:26690626
- 92. Ayar D, Bektas M, Bektas I, Akdeniz Kudubes A, Selekoglu Ok Y, Sal Altan S, et al. The effect of adolescents’ internet addiction on smartphone addiction. J Addict Nurs. 2017;28(4):210–4. pmid:29200048
- 93. Ben-Yehuda L, Greenberg L, Weinstein A. Internet addiction by using the smartphone-relationships between internet addiction, frequency of smartphone use and the state of mind of male and female students. J Reward Defic Syndr Addict Sci. 2016;2(1).
- 94. Jin Jeong Y, Suh B, Gweon G. Is smartphone addiction different from Internet addiction? comparison of addiction-risk factors among adolescents. Behav Inf Technol. 2019;39(5):578–93.
- 95. León Méndez M, Padrón I, Fumero A, Marrero RJ. Effects of internet and smartphone addiction on cognitive control in adolescents and young adults: A systematic review of fMRI studies. Neurosci Biobehav Rev. 2024;159:105572. pmid:38320657
- 96. Elhai JD, Dvorak RD, Levine JC, Hall BJ. Problematic smartphone use: A conceptual overview and systematic review of relations with anxiety and depression psychopathology. J Affect Disord. 2017;207:251–9. pmid:27736736
- 97. Elhai JD, Levine JC, Hall BJ. The relationship between anxiety symptom severity and problematic smartphone use: A review of the literature and conceptual frameworks. J Anxiety Disord. 2019;62:45–52. pmid:30529799
- 98. Yang J, Fu X, Liao X, Li Y. Association of problematic smartphone use with poor sleep quality, depression, and anxiety: A systematic review and meta-analysis. Psychiatry Res. 2020;284:112686. pmid:31757638
- 99. Coyne SM, Stockdale L, Summers K. Problematic cell phone use, depression, anxiety, and self-regulation: Evidence from a three year longitudinal study from adolescence to emerging adulthood. Comput Hum Behav. 2019;96:78–84.
- 100. Geng Y, Gu J, Wang J, Zhang R. Smartphone addiction and depression, anxiety: The role of bedtime procrastination and self-control. J Affect Disord. 2021;293:415–21. pmid:34246950
- 101. Santander-Hernández FM, Peralta CI, Guevara-Morales MA, Díaz-Vélez C, Valladares-Garrido MJ. Smartphone overuse, depression & anxiety in medical students during the COVID-19 pandemic. PLoS One. 2022;17(8):e0273575. pmid:36040873
- 102. Kim E, Koh E. Avoidant attachment and smartphone addiction in college students: The mediating effects of anxiety and self-esteem. Comput Hum Behav. 2018;84:264–71.
- 103. Lee HC, Hong MH, Oh CK, Shim SH, Jun YJ, Lee SB, et al. Smart-phone addiction, depression/anxiety, and self-esteem with attention-deficit hyperactivity disorder in Korean children. J Korean Acad Child Adolesc Psychiatry. 2015;26(3):159–64.
- 104. Wolniewicz CA, Tiamiyu MF, Weeks JW, Elhai JD. Problematic smartphone use and relations with negative affect, fear of missing out, and fear of negative and positive evaluation. Psychiatry Res. 2018;262:618–23. pmid:28982630
- 105. Zhou H, Dang L, Lam LW, Zhang MX, Wu AMS. A cross-lagged panel model for testing the bidirectional relationship between depression and smartphone addiction and the influences of maladaptive metacognition on them in Chinese adolescents. Addict Behav. 2021;120:106978. pmid:33971499
- 106. Stanković M, Nešić M, Čičević S, Shi Z. Association of smartphone use with depression, anxiety, stress, sleep quality, and internet addiction. Empirical evidence from a smartphone application. Pers Individ Differ. 2021;168:110342.
- 107. Zhang K, Guo H, Wang T, Zhang J, Yuan G, Ren J, et al. A bidirectional association between smartphone addiction and depression among college students: A cross-lagged panel model. Front Public Health. 2023;11:1083856. pmid:36761134
- 108. Haug S, Castro RP, Kwon M, Filler A, Kowatsch T, Schaub MP. Smartphone use and smartphone addiction among young people in Switzerland. J Behav Addict. 2015;4(4):299–307. pmid:26690625
- 109. Nikolic A, Bukurov B, Kocic I, Vukovic M, Ladjevic N, Vrhovac M, et al. Smartphone addiction, sleep quality, depression, anxiety, and stress among medical students. Front Public Health. 2023;11:1252371. pmid:37744504
- 110. Randjelovic P, Stojiljkovic N, Radulovic N, Stojanovic N, Ilic I. Problematic smartphone use, screen time and chronotype correlations in university students. Eur Addict Res. 2021;27(1):67–74. pmid:32172240
- 111. Parry DA, Davidson BI, Sewall CJR, Fisher JT, Mieczkowski H, Quintana DS. A systematic review and meta-analysis of discrepancies between logged and self-reported digital media use. Nat Hum Behav. 2021;5(11):1535–47. pmid:34002052
- 112. Ryding FC, Kuss DJ. Passive objective measures in the assessment of problematic smartphone use: A systematic review. Addict Behav Rep. 2020;11:100257. pmid:32467846
- 113. Merino-Soto C, Martí-Vilar M, Serrano-Pastor L. Careless responses and construct validity of Wong-Law Emotional Intelligence Scale. Psych J. 2021;10(6):944–6. pmid:34614549
- 114. Merino-Soto C, Fernández-Arata M, Fuentes-Balderrama J, Chans GM, Toledano-Toledano F. Research perceived competency scale: A new psychometric adaptation for university students’ research learning. Sustainability. 2022;14(19).
- 115. Santa-Cruz-Espinoza H, Chávez-Ventura G, Dominguez-Vergara J, Merino-Soto C. Occupational self-efficacy scale: Validity in teachers. Acta Psychol. 2024;249:104441.
- 116. Chauliac M, Willems J, Gijbels D, Donche V. The prevalence of careless response behaviour and its consequences on data quality in self-report questionnaires on student learning. Front Educ. 2023;8.
- 117. Oppenheimer DM, Meyvis T, Davidenko N. Instructional manipulation checks: Detecting satisficing to increase statistical power. J Exp Soc Psychol. 2009;45(4):867–72.