
Evaluating the psychometric quality of school connectedness measures: A systematic review

  • Amy Hodges,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Writing – original draft, Writing – review & editing

    Affiliation School of Occupational Therapy, Social Work and Speech Pathology, Curtin University, Perth, Western Australia, Australia

  • Reinie Cordier,

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Supervision, Writing – review & editing

    Affiliation School of Occupational Therapy, Social Work and Speech Pathology, Curtin University, Perth, Western Australia, Australia

  • Annette Joosten,

    Roles Conceptualization, Investigation, Methodology, Supervision, Writing – review & editing

    Affiliations School of Occupational Therapy, Social Work and Speech Pathology, Curtin University, Perth, Western Australia, Australia, School of Allied Health, Australian Catholic University, Melbourne, Victoria, Australia

  • Helen Bourke-Taylor,

    Roles Conceptualization, Investigation, Methodology, Supervision, Writing – review & editing

    Affiliations School of Occupational Therapy, Social Work and Speech Pathology, Curtin University, Perth, Western Australia, Australia, School of Primary and Allied Health, Medicine, Nursing and Health Sciences, Monash University, Frankston, Victoria, Australia

  • Renée Speyer

    Roles Conceptualization, Investigation, Methodology, Writing – review & editing

    Affiliations School of Occupational Therapy, Social Work and Speech Pathology, Curtin University, Perth, Western Australia, Australia, Department Special Needs Education, University of Oslo, Oslo, Norway, Department of Otorhinolaryngology and Head and Neck Surgery, Leiden University Medical Centre, Leiden, the Netherlands

Abstract

Background

There is a need to comprehensively examine and evaluate the quality of the psychometric properties of school connectedness measures to inform school-based assessment and intervention planning.

Aims

To systematically review the literature on the psychometric properties of self-report measures of school connectedness for students aged six to 14 years.

Methods

A systematic search of five electronic databases and gray literature was conducted. The COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) taxonomy of measurement properties was used to evaluate the quality of studies, and a pre-set psychometric criterion was used to evaluate the overall quality of psychometric properties.

Results

The measures with the strongest psychometric properties were the School Climate Measure and the 35-item version of the Student Engagement Instrument, exploring eight and 12 (of 15) school connectedness components, respectively.

Conclusions

The overall quality of psychometric properties was limited, suggesting that available school connectedness measures require further development and evaluation.

Introduction

The concept of school connectedness has received growing attention from researchers and educators in recent years due to its reported impact on health, social and academic outcomes [1–3]. Students who have a stronger sense of school connectedness are more likely to engage in socially appropriate behaviours, have higher levels of self-esteem, obtain better grades, display acceptable conduct at school and graduate than students with a lower sense of school connectedness [4–7]. Longitudinal research suggests that a lower sense of school connectedness in early schooling increases engagement in risk behaviours such as smoking, marijuana use, alcohol consumption and sexualised behaviour in later schooling [2, 8–10]. Recent evidence also suggests that students with a lower sense of school connectedness are more likely to experience clinical anxiety and depression during their schooling and in later life [3, 11].

School connectedness presents an attractive focus for educators, school psychologists and researchers, as it is a subjective concept that is amenable to change through the provision of appropriate school-based supports [8, 12]. School connectedness literature is being used widely to inform the development of school-based interventions, as well as educational policy and reform [13, 14]. The Australian Early Years Learning Framework [15] is an example of this: it is centred on the notion that for students to experience learning that is engaging and supportive of success in later life, they first need a sense of belonging to their school community. As such, there is a need for valid and reliable measures to assess the effectiveness of school-based interventions targeting school connectedness, in order to minimise the documented long-term impacts of reduced school connectedness on students’ academic success and socio-emotional wellbeing. Furthermore, access to school connectedness measures with sound psychometric properties will assist in gaining further evidence to support the use of school-based interventions and in informing educational policy and reform.

School connectedness: Theoretical underpinnings and definition

Despite growing interest in the concept of school connectedness, there is considerable debate regarding its definition. Many terms have been used interchangeably in the literature to describe school connectedness, including school climate, belonging, bonding, membership and orientation to school [16, 17]. As a result, the operationalisation and measurement of school connectedness has been challenging.

Theoretical models of school connectedness are most commonly embedded within psychology literature. Deci and Ryan’s [18] self-determination theory is regularly referred to within school connectedness literature [19–23]. This theory proposes that for an individual to be motivated and to function optimally, a set of psychological needs, namely relatedness, competence and autonomy, must be supported [18]. Relatedness refers to a need to feel a sense of belonging with peers and teachers [18, 24]. Competence is the need to feel capable of learning, and autonomy is the need to feel that you have choice and control at school [18, 24]. These three innate psychological needs are often cited to account for human tendencies to “…engage in activities, to exercise capacities and to pursue connectedness in social groups” [24]; all of which are foundational in developing students’ sense of school connectedness. Self-determination theory suggests that students with a strong sense of relatedness or belonging to their peers, teachers and school community are in a better position to learn and are more likely to perform better at school due to improved wellbeing and resilience. Furthermore, students who perceive their school environment to be fair, ordered and disciplined, and who feel in control of their academic outcomes at school, are more likely to engage and feel connected at school. Deci and Ryan’s [18] self-determination theory illuminates the impact affective, behavioural and cognitive factors have in supporting or hindering a student’s sense of school connectedness.

Early research relating to school connectedness focused on affective aspects of school connectedness [17, 25]. Affective engagement, also referred to as psychological and emotional engagement, refers to a student’s feelings towards his/her school, learning, teachers and peers [17, 25, 26]. Affective engagement is accurately captured in Goodenow’s [27] definition of school connectedness, which is the “…extent to which a student feels personally accepted, respected, included and supported by others” [27] in the school environment. This definition, however, does not take into consideration behavioural and cognitive factors that can also impact a student’s sense of school connectedness, which have been explored in more recent school connectedness literature. Behavioural engagement includes observable student actions of participation while at school and is investigated through student conduct, effort and participation [5, 28, 29]. Conversely, cognitive engagement includes students’ perceptions and beliefs associated with school and learning [5, 28, 29]. That is, to feel connected to school the student must be actively involved in classroom and school activities, including school organised extra-curricular activities, and actively think about how they can involve themselves in the learning process at school. The Wingspread Declaration on School Connections [30], which describes school connectedness as a “…belief by students that adults in the school community care about students learning and about them as individuals and can be represented by high academic expectations from teachers with support for learning, positive teacher-student interactions and feelings of safety” [30], more accurately captures behavioural and cognitive aspects of school connectedness.

Several reviews have focused on defining the meta-construct of school connectedness [7, 25, 31]. These reviews highlight that the construct of school connectedness has evolved over time—from a relatively simple construct focusing on students’ general feelings towards school, to a more complex multi-dimensional construct comprising not only students’ feelings towards school, but also their perceptions and beliefs towards school and learning, and their involvement in classroom and playground activities and school events. Researchers in the field postulate that definitions of school connectedness should include the triad of indicators (i.e., affective, behavioural, and cognitive) and facilitators (i.e., personal and contextual factors) that influence connectedness [25]. Indicators “…convey a student’s degree or level of connection with learning while facilitators are factors that influence the strength of the connection” [25]. Although this definition has been proposed, the authors of this study have not found a definition of school connectedness that fully encapsulates all of these components. Following an extensive review of the literature, the authors thematically categorised factors contributing to students’ sense of school connectedness under affective, cognitive and behavioural domains, as illustrated in Table 1. For the purposes of this review, these domains and concepts will be subsumed under the broader construct of school connectedness. Collectively, the concepts in Table 1 are critical dimensions of students’ experiences in school. Together, they are essential in promoting student development and overall academic success. These concepts are often targeted within individual and school-wide intervention strategies. As such, there is a need for measures that assess these school connectedness domains and constructs both cross-sectionally and longitudinally.

Measuring school connectedness

Not surprisingly, given the difficulties in defining school connectedness, there are various ways in which this concept has been measured. The differences in the way the concept is measured are both theoretical and methodological. The theoretical background of the researcher often determines how school connectedness is measured. For example, Jimerson, Campos and Greif [31], writing from a background in psychology, identify and assess student motivation as an affective indicator of school connectedness, while Fredricks, Blumenfeld and Paris [7], from a background in educational psychology, identify it as a cognitive indicator. While motivation is an intrinsic process, it manifests extrinsically through student behaviour [32]. Therefore, the authors of this study have categorised student interest or motivation as a behavioural indicator of school connectedness (see Table 1).

The purpose of assessing school connectedness often determines how the construct is measured. Some measures have been developed specifically for the school context (e.g., What’s Happening In This School [33]), whereas others extend their exploration to the home and community environment with subscales or items that refer to school (e.g., Adolescents Sense of Wellbeing Related to Stress [34]). Some measures have been developed specifically to assess students’ sense of school connectedness in particular subjects such as maths, science or physical education (e.g., What’s Happening In This Class (Singapore version) [35]). Some measures focus on assessing an individual student’s sense of connectedness (e.g., Student Engagement Instrument [36]), whereas others aim to assess an individual’s perception of connectedness at a classroom or school level (e.g., Classroom Environment Scale [37], Classroom Peer Context Questionnaire [38]). Schools conducting research into school connectedness will often tailor their measurement approach based on their needs; for example, whether they want to gain an understanding of their school’s sense of connectedness to inform funding allocation, versus whether they want to identify individual at-risk students to inform the provision of school supports [39].

There is debate within the literature regarding whether self-report or proxy-report measures should be used when evaluating school connectedness [40]. Many would argue the subjective nature of school connectedness makes it less amenable to third-party report [17, 31]. For example, a teacher may observe the student playing with peers or engaging in the curriculum, but the student themselves, for whatever reason, may not feel part of their school community. Self-report measures help to depict the student’s personal perception of their experience at school. Teacher-report methods may be more suitable for capturing behavioural components of school connectedness, such as the student’s level of effort or persistence at school, that can be objectively observed [41]. As previously mentioned, students will experience a sense of connectedness when their needs of autonomy, competence and relatedness are met within the school environment [24]. The assumption is that students’ feelings of being included and accepted at school, as well as the perception they are making important contributions to the school community, help to create and maintain feelings of connectedness. Therefore, in order to gain an accurate depiction of students’ sense of school connectedness, the use of student self-report measures is warranted and will be the focus of this particular review.

The differences in the way school connectedness is defined make it difficult to compare measures with each other in an attempt to identify the most valid and reliable tool to use in the school context. As children spend more time in schools than any other place outside their homes, it is important to be able to validly and reliably assess student experiences within school so that appropriate supports can be provided [39]. Furthermore, it is important to be able to reliably measure this construct with students in early primary school, to prevent or minimise the documented long-term impacts of reduced school connectedness on student outcomes.

The COSMIN taxonomy has been successfully applied in more than 560 systematic reviews [42, 43]. The COSMIN checklist is a standardised tool that can be used to critically appraise the methodological quality of studies reporting on the psychometric properties of measures [43]. The COSMIN checklist was chosen for this systematic review as it was developed following extensive international consultation and consensus among experts in the field of psychometrics and clinimetrics. The COSMIN taxonomy was used in the current review to compare the psychometric properties of existing school connectedness measures, originally developed in English, that capture affective, cognitive and behavioural domains of school connectedness using self-report methods for students aged six to 14 years. It is expected that this systematic review will assist in the choice of instruments measuring school connectedness by providing an objective account of the strengths and weaknesses of self-report measures available for school aged children.

Methods

The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement guided the methodology and writing of this systematic review. The PRISMA statement is a 27-item checklist that is deemed essential for the transparent reporting of systematic reviews [44]. A completed PRISMA checklist for the current review is accessible (see S1 Table).

Eligibility criteria

Research articles, published manuals and reports detailing the psychometric properties of self-report instruments designed to measure school connectedness in students aged six to 14 years were deemed eligible for inclusion in this review. To be included, abstracts and instruments needed to: address all three school connectedness domains (i.e., behavioural, affective and cognitive); address at least five of the 15 concepts within the school connectedness domains (see Table 1); provide validity evidence for students aged six to 14 years; be specific to the school context; have psychometric properties published within the last 20 years; and be written in English. Psychometric properties published more than 20 years ago were deemed outdated. Measures were excluded if the full text of the article was not retrievable, or if they were specific to a subject area (e.g., maths or science) or a student population (e.g., students with craniofacial abnormalities). Measures that provided validity evidence for students requiring special education assistance were included in the review, as long as the sample also included typically developing students. Dissertations, conference papers and review papers were excluded as they are not peer reviewed, and the search yielded sufficient results.

Information sources

The first systematic literature search was performed on the 13th June 2016 by two authors using the following five electronic databases: CINAHL, Embase, ERIC, Medline, PsycINFO. Subject headings and free text were used when searching each database. A gray literature search was also conducted using Google Scholar and PsycEXTRA between the 21st and 27th July 2016 to identify additional measures. See S2 Table for a complete list of search terms used across all searches. A second literature search was conducted on the 18th September 2016 using the title of the measure and its acronym in CINAHL, Embase, ERIC, Medline and PsycINFO to identify additional psychometric articles not identified in the first search. To be comprehensive, websites of publishers of assessments in education and social science such as Pearson Education, ACER and Academic Therapy Publications were searched.

Study selection

Abstracts were reviewed using three dichotomous scales to determine (a) whether the study involved students aged between 0 and 18 years (yes/no), (b) whether the instrument measured school connectedness or related terms (e.g., group membership, learner engagement, school community relationship, student participation, school involvement) (yes/no) and (c) whether the study reported on the psychometric properties of the measure (yes/no). Results from the three dichotomous scales were then combined to generate a single ordinal scale from 0 to 3, with 0 indicating the abstract did not meet any criteria and 3 indicating the abstract met all three criteria. A random sample of 40% of abstracts was generated using an electronic random allocator. Based on previous systematic reviews using COSMIN [45–47], this percentage was deemed sufficient to detect systematic error. The random sample was reviewed by the primary author and an independent rater to establish inter-rater reliability. Inter-rater reliability between raters was deemed excellent: weighted Kappa = 0.814 (95% CI: 0.791–0.836). Abstracts that did not meet any of the criteria, or met only one, were excluded from the study. Abstracts that met two or three of the criteria were reviewed a second time and discussed by the primary author and the independent rater to reach consensus and ensure only studies meeting all eligibility criteria were included in full-text review. The primary author then rated the remaining abstracts and the 132 full-text articles meeting all three criteria. Articles were excluded if the full text did not meet criteria (see Fig 1). Scoring a random sample of abstracts first allowed the researchers to learn from the process and avoid systematic errors.
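
The agreement statistic used for abstract screening (a weighted kappa over the ordinal 0–3 screening scores) can be sketched as follows. This is an illustrative Python implementation, not the authors’ actual analysis code; the function name and the choice of linear versus quadratic weights are our assumptions.

```python
def weighted_kappa(rater_a, rater_b, categories, weights="linear"):
    """Weighted kappa for two raters over ordinal categories (illustrative).

    rater_a, rater_b: equal-length lists of category labels.
    categories: ordered list of possible labels (e.g., [0, 1, 2, 3]).
    weights: "linear" (|i-j|) or "quadratic" ((i-j)^2) disagreement weights.
    """
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}
    n = len(rater_a)

    # Observed proportion matrix: obs[i][j] = share of items rated i by A, j by B.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater_a, rater_b):
        obs[index[a]][index[b]] += 1.0 / n

    # Chance-expected proportions from each rater's marginal distributions.
    pa = [sum(row) for row in obs]
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    exp = [[pa[i] * pb[j] for j in range(k)] for i in range(k)]

    # Disagreement weight, scaled to [0, 1]: 0 on the diagonal, 1 at maximum distance.
    def w(i, j):
        d = abs(i - j) / (k - 1)
        return d if weights == "linear" else d * d

    disagree_obs = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    disagree_exp = sum(w(i, j) * exp[i][j] for i in range(k) for j in range(k))
    return 1.0 - disagree_obs / disagree_exp
```

For an ordinal screening score, weighted kappa penalises near-misses (e.g., a 2 versus a 3) less than large disagreements (e.g., a 0 versus a 3), which is why it is preferred over unweighted kappa in this setting.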

Fig 1. Flow diagram of the reviewing process according to PRISMA [44].

Data collection process and data extraction

Information from articles was extracted under the following descriptive categories: purpose of the measure, number of subscales, total number of items, response options and time to complete, article reference and sample characteristics. The information extracted from articles was guided by the Cochrane Handbook for Systematic Reviews [48] (Section 7.3a) and the Centre for Reviews and Dissemination guidance on systematic reviews [49].

Methodological quality

The methodological quality of included studies was assessed using the COSMIN taxonomy of measurement properties and definitions for health-related patient reported outcomes [43, 50]. The COSMIN checklist is a standardised tool and consists of nine domains: internal consistency, reliability (including test-retest reliability, inter-rater reliability and intra-rater reliability), measurement error, content validity (including face validity), structural validity, hypotheses testing, cross cultural validity, criterion validity and responsiveness [43]. Refer to Table 2 for the definitions of all psychometric properties as defined by the COSMIN statement [50]. Responsiveness was not evaluated as a psychometric property as it would have increased the size of the review exponentially and was deemed outside the scope of this review. Criterion validity was also not evaluated due to the absence of a ‘gold standard’ measure of school connectedness. Cross-cultural validity was not evaluated as instruments included in the review were developed and published in English. Interpretability is not considered to be a psychometric property under the COSMIN framework and was therefore not described or evaluated in this review.

Table 2. COSMIN definitions of domains, psychometric properties and aspects of psychometric properties for health-related patient-reported outcomes adapted from Mokkink et al. [50].

Each domain of the COSMIN checklist includes 5 to 18 items focusing on various aspects of study design and statistical analyses. A 4-point rating scale proposed by Terwee et al. [51] enables an overall methodological quality score, from poor to excellent, to be obtained for each measure. Terwee et al. [51] suggest taking the lowest rating of any item in the domain as the final quality rating; however, this makes it difficult to differentiate between subtle psychometric qualities of assessments. Therefore, a revised scoring system was applied and presented as a percentage: Poor (0–25.0%), Fair (25.1–50.0%), Good (50.1–75.0%) and Excellent (75.1–100%) [47]. As some COSMIN items can only be rated as good or excellent, the total score for each psychometric property was calculated using the formula detailed in [43], to accurately capture the quality of psychometric properties:
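
The revised percentage bands can be expressed as a simple lookup. This sketch is ours: the function name and the handling of values falling between bands (e.g., 25.05%) are illustrative assumptions, while the bands themselves are as stated above [47].

```python
def quality_rating(percentage):
    """Map a COSMIN domain score (as a percentage of the maximum possible
    score) onto the revised four-band quality rating used in this review.

    Band boundaries follow the text: Poor (0-25.0%), Fair (25.1-50.0%),
    Good (50.1-75.0%), Excellent (75.1-100%). Values strictly between two
    bands are assigned to the lower band (an assumption of this sketch).
    """
    if not 0 <= percentage <= 100:
        raise ValueError("percentage must be between 0 and 100")
    if percentage <= 25.0:
        return "Poor"
    if percentage <= 50.0:
        return "Fair"
    if percentage <= 75.0:
        return "Good"
    return "Excellent"
```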

After the studies were assessed for methodological quality, the quality of psychometric properties was evaluated using modified criteria by Terwee et al. [51] and Schellingerhout et al. [52]. A summary of the criteria used for rating the quality of internal consistency, content validity, structural validity and hypothesis testing is detailed in Table 3. Finally, each measurement property for all instruments was given an overall score using criteria set out by Schellingerhout et al. [52]. An overall quality rating was created by combining the study quality scores measured by COSMIN and the psychometric quality ratings as measured by Terwee et al. [51] and Schellingerhout et al. [52]. This method has been used successfully in previous psychometric reviews [45, 53]. The COSMIN checklist [43] and the Terwee et al. [51] and Schellingerhout et al. [52] criteria accommodate studies that use both Classical Test Theory (CTT) and Item Response Theory (IRT) methodology.

Table 3. Criteria for rating psychometric quality based on Terwee et al. [51] and Schellingerhout et al. [52].

To maximise consistency of ratings, the fifth author of this study, who has extensive experience in the area, provided training to the primary author and an independent rater on how to complete the COSMIN checklist and determine the quality of psychometric properties. The first author scored all the papers. A random selection of 40% of COSMIN ratings and all psychometric quality ratings were scored by an independent rater. When ratings differed in category, the two raters met until 100% consensus was achieved. The fifth author met with the two raters to resolve differences in ratings when consensus could not be reached (weighted Kappa: 0.886, 95% CI: 0.823–0.948).

Data items, risk of bias and synthesis of results

All data items for each measure were obtained. Items that were not reported were recorded as ‘NR’. Risk of bias was assessed at an individual study level using the COSMIN checklist. Studies that obtained a high rating were deemed to be at low risk of bias and studies that obtained a low rating were deemed at high risk of bias. Psychometric properties only received a ‘positive’ or ‘negative’ rating if clear and appropriate methodology was reported. If unclear or inappropriate methodology was used, an ‘indeterminate’ rating was recorded; providing further evidence for risk of bias. Ratings from individual studies and psychometric properties were then combined to create an overall rating for each psychometric property of each measure. Risk of bias is subsumed into final results.

Results

Systematic literature search

A total of 3,754 abstracts were retrieved from database searches, including duplicates. The total abstracts from subject heading and free text word searches across databases were: CINAHL = 656, Embase = 1,060, ERIC = 724, Medline = 789, PsycINFO = 525. Reference lists of included articles were searched for additional literature. A total of 1,763 duplicates were identified across the five databases and removed. After the removal of duplicate abstracts, a total of 1,991 articles were screened for inclusion in the review. Of these studies, 132 full text articles on 87 measures were assessed for eligibility. Of these 87 measures, 15 met the inclusion criteria and 72 were excluded. Refer to S3 Table for an overview of the 72 excluded instruments and the reasons for exclusion. The references of two manuals were identified for two included instruments; however, because they were irretrievable they were not included in the review. Therefore, psychometric properties of 15 measures were obtained, which were assessed using 18 research articles and 1 research report. Fig 1 illustrates the reviewing process according to PRISMA.

Included school connectedness measures

Table 4 summarises characteristics of 15 measures that met inclusion criteria and articles reporting on psychometric properties. All measures were developed and validated with typically developing students from a range of ethnic and socio-economic backgrounds in the United States, except for one, which was developed in New Zealand [54]. The majority of measures were developed with an adolescent sample (12 to 18 years), with only a small number of measures developed and validated with students under the age of 12 years [55, 56]. Only three measures extended their samples to include students receiving special education services; however, these students made up less than 15% of the total sample [55, 5759]. The majority of studies had large sample sizes, with the median sample size being 1,642 (range of 77 to 47,488). All of the measures that met eligibility criteria were published after 1996. Of the 15 measures, 11 were published within the last 10 years (since 2006). All measures collected responses via pen and paper questionnaires and were conducted within the school setting. Some measures were administered verbally to students who identified as having English as their second language.

Table 4. Characteristics of identified school connectedness measures and description of studies describing their development and validation.

Table 5 summarises the domains of school connectedness measured by each instrument. The subdomains were categorised following a thematic synthesis by four members of the research team, based on the definitions or descriptions of the scales and/or subscales in included studies. Subdomains were identified and subsumed under the most relevant domain: (1) affective (i.e., feelings of acceptance, belonging and inclusion; feelings of respect and being respected; value importance of school; feelings of safety; sense of autonomy and independence; and academic self-efficacy), (2) cognitive (i.e., perceptions of teacher relationships and support; peer relationships and support; academic support; discipline, order and fairness; and the value parents place on school) and (3) behavioural (i.e., involvement, participation and engagement; effort and persistence; conduct; and interest and motivation). No single instrument measured all aspects of the affective, cognitive and behavioural domains of school connectedness. The instruments measuring the most aspects were the versions of the Student Engagement Instrument (i.e., the 35-item, 33-item and elementary versions) [36, 55, 57, 60, 61], which measured 12 of the 15 affective, cognitive and behavioural components of school connectedness.
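
The domain structure above (detailed in Table 5) can be encoded as a simple data structure, which makes counts such as “12 of the 15 components” easy to check. The variable name is ours, and the subdomain labels are paraphrases of the text, not the exact wording of Table 5.

```python
# Illustrative encoding of the review's school connectedness taxonomy:
# three domains and their 15 subdomains, as enumerated in the text.
SCHOOL_CONNECTEDNESS_DOMAINS = {
    "affective": [
        "feelings of acceptance, belonging and inclusion",
        "feelings of respect and being respected",
        "value importance of school",
        "feelings of safety",
        "sense of autonomy and independence",
        "academic self-efficacy",
    ],
    "cognitive": [
        "perceptions of teacher relationships and support",
        "perceptions of peer relationships and support",
        "perceptions of academic support",
        "perceptions of discipline, order and fairness",
        "value parents place on school",
    ],
    "behavioural": [
        "involvement, participation and engagement",
        "effort and persistence",
        "conduct",
        "interest and motivation",
    ],
}

# Sanity check: the taxonomy totals the 15 subdomains referred to throughout.
assert sum(len(v) for v in SCHOOL_CONNECTEDNESS_DOMAINS.values()) == 15
```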

Table 5. Domains and concepts of school connectedness measured by included instrument.

Psychometric properties

Table 6 summarises quality ratings of psychometric studies, and therefore risk of bias, as determined by COSMIN. All measures included in the review were found to have good to excellent study quality for internal consistency, structural validity and hypothesis testing, and poor to excellent study quality for content validity. Internal consistency and structural validity were the most frequently reported properties, having been described in 17 and 16 studies respectively. Content validity was described for eight measures and hypothesis testing for 10 measures. Five studies reporting on hypothesis testing described findings for more than one hypothesis. Of the 15 included instruments, six were revisions of earlier versions of measures of school connectedness (i.e., SEI– 35 item [36], SEI– 33 item [57, 60, 61], SEI—Elementary [55], Developmental Study Centre’s School Climate Survey—Abbreviated Version [59], SPPCC—Adapted [54], SCM—Adapted [69]). These measures were evaluated separately as their item pool and response format had been changed. For 11 measures, only single studies were identified. The SEI (33-item version) [57, 60, 61] and the SCM [67, 68] had the most studies, reporting on psychometric properties in three research articles. Thirteen measures reported on two or more of the six psychometric properties (average 3; range 1–4). The PSES [62] and the Developmental Study Centre’s School Climate Survey (Full Version) [56] were the only measures to report on one psychometric property. Many measures had no published information relating to content validity, including the PSES [62], SESQ [13], SEI– 33 item version [57, 60, 61], Developmental Study Centre’s School Climate Survey (Full Version and Abbreviated Version) [56, 59], SBI—R and SCM (Revised Version). The only study excluded from further analysis in the review was by Voelkl [65], which received a poor COSMIN rating for content validity.

Table 6. Overview of the psychometric properties and methodological quality of school connectedness measures.

Refer to Table 7 for a summary of the quality of the psychometric properties of the included measures based on the criteria of Terwee et al. [51] and Schellingerhout et al. [52]. Refer to Table 8 for a summary of the overall psychometric quality ratings per psychometric property for each measure, as evaluated against the Schellingerhout et al. [52] criteria. A description of the criteria used to rate overall psychometric quality can be found in the notes section of Table 8.

Table 7. Quality of psychometric properties based on the criteria by Terwee et al. [51] and Schellingerhout et al. [52].

Table 8. Overall quality score of assessments for each psychometric property based on levels of evidence by Schellingerhout et al. [52].


Discussion

There is no universally accepted definition of school connectedness; however, the construct is referred to regularly within the literature and plays a key role in informing educational policy and reform [39]. Reliable and valid measurement of school connectedness is important to researchers and educators so that appropriate school-based supports can be provided to minimise the documented long-term implications of reduced school connectedness for students’ academic success and socio-emotional wellbeing. This systematic review provides a comprehensive summary of the quality of the psychometric properties of self-report school connectedness measures available for students aged 6 to 14 years, using the COSMIN taxonomy of measurement properties.

Quality of the studies using the COSMIN taxonomy

Construct validity (comprising structural validity and hypothesis testing) and content validity are central within the COSMIN taxonomy [43]. To confidently select and use measures in research, it is important to understand “…how well [the] measure assesses what it claims to measure and how well it holds its meaning across varied contexts and sample groups” [45]. Construct validity takes precedence over all other psychometric properties in measure development: good reliability is of little value if the construct being measured is not well established. Many instruments are currently being used to assess school connectedness or related constructs. Interestingly, however, the majority of studies in this review failed to adequately define or conceptualise the construct of school connectedness. Rather, studies focused on describing the methodology used to develop the measure, including the statistical analyses used to test its psychometric properties.

A lack of conceptualisation of school connectedness has made it difficult to: (a) adequately compare measures in this review; (b) determine whether included measures fully operationalise the construct of school connectedness; and (c) determine whether students’ sense of school connectedness has changed, or whether apparent change reflects the evolving nature of the construct and the way it is currently understood by researchers and educators in the field. As illustrated in Table 5, none of the measures included in this review fully captures all aspects of school connectedness and, in addition, the quality of the construct descriptions was lacking.

The majority of studies included in this review failed to explicitly state the intended purpose of the measure; that is, whether the instrument was originally intended as an outcome measure to evaluate changes over time following the implementation of school-based supports, or purely as a diagnostic tool to identify whether school-based supports are required. Without this information, researchers and educators may make inappropriate choices and misinterpret assessment findings, leading to errors in clinical judgement. Future research should focus on developing a universal definition of school connectedness and on further validating the included measures.

Test-retest, inter-rater and intra-rater reliability and measurement error were not reported for any measure included in this review. Given that psychological constructs such as school connectedness are relatively stable over time, it is important to use measures that have low error and are able to detect minor changes over time. Preliminary reliability testing is also necessary before an instrument’s responsiveness can be evaluated. Without this information, it is difficult to make evidence-based choices when selecting measures for research. That said, some measures included in the review, such as the SSES [39], have been used in research to evaluate changes in school connectedness over time. Although responsiveness was not evaluated in this review, researchers and educators should exercise caution when using the included measures given the lack of information on their reliability.
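To illustrate the kind of analysis that is missing from these studies, test-retest reliability is commonly quantified with an intraclass correlation coefficient (ICC). The sketch below computes a two-way random-effects, single-measure ICC(2,1) on invented scores from two administrations of a hypothetical connectedness scale; the data and function name are illustrative only, not drawn from any included study.

```python
from statistics import mean

def icc_2_1(scores):
    """Two-way random-effects, single-measure ICC(2,1).

    `scores` is a list of rows, one per student, each row holding that
    student's score at each administration (e.g. time 1 and time 2).
    """
    n, k = len(scores), len(scores[0])
    grand = mean(x for row in scores for x in row)
    row_means = [mean(row) for row in scores]
    col_means = [mean(row[j] for row in scores) for j in range(k)]

    # Mean squares for rows (students), columns (occasions) and error.
    ms_rows = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_cols = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_error = ss_total - ms_rows * (n - 1) - ms_cols * (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical scores for five students at two time points.
scores = [[10, 11], [20, 19], [30, 32], [40, 38], [25, 26]]
print(round(icc_2_1(scores), 3))  # → 0.991, i.e. high stability
```

An ICC close to 1 indicates stable scores across administrations; values this high would be expected only over short retest intervals for a construct such as school connectedness.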

Some studies included in the review reported verbal administration of measures to students who used English as their second language. This method of administration places a high demand on students’ expressive and receptive language skills, as well as their verbal comprehension and memory recall, creating potential for discrepancy between recorded and true scores. Minor changes in question wording, question order or response format can result in different findings [40]. This method of questionnaire administration may therefore have affected the quality of findings in these studies. Furthermore, it is important to consider the inherent bias that exists with self-report measures. Student responses may be affected by their perception of support within their school: “…they may take into account social norms when responding, which may result in social desirability bias” [40]. Methods do exist to reduce this problem, such as assuring students of confidentiality and anonymity; however, this can increase students’ suspicions about the sensitivity of the topic [40]. Many studies included in the review failed to explicitly state how measures were administered and/or did not report on efforts to minimise the impact of social desirability bias on data quality.

Although the focus of this review was to evaluate the psychometric properties of school connectedness measures for students aged 6 to 14 years, the samples of included studies largely comprised older students up to the age of 18 years. Students under the age of 12 years represented approximately 25% of the samples in included studies. This calls into question the utility and appropriateness of these measures with younger student populations. When examining the included measures in more detail, it was noted that many had lengthy item pools. For example, the Developmental Study Centre’s School Climate Survey (Full Version) [56] and the SESQ [13] included 100 and 109 items respectively. Not only would these measures be time-consuming, they would also require a great deal of concentration from a young student to complete. It is important to be able to validly and reliably assess students’ sense of school connectedness in early primary school in order to identify and support at-risk students, and thereby prevent the documented long-term implications of a lack of school connectedness on student outcomes. Future research should focus on validating the included measures with younger students to ensure they are age appropriate and can be reliably and validly used in this population.

Overall quality of psychometric properties

The overall quality of the measurement properties critiqued in this study varied widely. The school connectedness self-report measures with the strongest psychometric properties were the SCM [67–69] and the 35-item version of the SEI [36]. The SCM [67–69] addressed eight of 15 school connectedness components (see Table 5) and reported on four of six psychometric properties (see Table 6), scoring strong positive ratings for content validity and hypothesis testing, a moderate positive rating for internal consistency and a conflicting rating for structural validity. The 35-item version of the SEI [36] reported on four of six psychometric properties, scoring strong positive ratings for internal consistency and content validity and indeterminate ratings for structural validity and hypothesis testing. Interestingly, however, the SEI [36] addressed the most (i.e., 12 of 15) school connectedness components of any measure included in the review, suggesting that the SEI [36] not only has promising psychometric properties but also encompasses a broader range of school connectedness components. The school connectedness measure with the poorest psychometric properties was the SPPCC [54], which reported on three of six psychometric properties, scoring strong negative ratings for internal consistency and structural validity, and conflicting results for content validity. Across all measures and measurement properties there were a number of conflicting ratings (14%), many indeterminate ratings (41%) and missing data (36%), suggesting that more research is required to determine the psychometric qualities of these measures.
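Internal consistency, one of the most frequently rated properties across these measures, is typically summarised with Cronbach’s alpha. As a minimal sketch of that statistic (not taken from any included study; the response data below are invented):

```python
from statistics import variance

def cronbach_alpha(data):
    """Cronbach's alpha for `data`, a list of rows (one per student),
    each row holding that student's responses to every item."""
    k = len(data[0])                 # number of items
    items = list(zip(*data))         # transpose: responses per item
    item_vars = sum(variance(item) for item in items)
    total_var = variance([sum(row) for row in data])
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical responses of five students to a three-item scale.
data = [[2, 3, 3], [4, 4, 5], [1, 2, 1], [5, 5, 4], [3, 3, 3]]
print(round(cronbach_alpha(data), 3))  # → 0.948
```

Values of 0.70 or above are conventionally taken as acceptable internal consistency, which is the kind of threshold underpinning the positive and negative ratings summarised above.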

An in-depth discussion of the statistical frameworks used in the included articles is outside the scope of this review; however, it is worth drawing readers’ attention to the fact that none of the measures included in this review was tested at an item level using item response theory (IRT). All measures were tested using classical test theory (CTT). A major limitation of CTT is its relatively weak theoretical assumptions and circular dependency; that is, “(a) the person statistic (i.e., observed score) is (item) sample dependent and (b) the item statistics are (examinee) sample dependent; which poses some difficulties in CTT’s application in some measurement situations” [70]. IRT was developed to address the main limitations of CTT. However, IRT has its own limitations in that it is a complex model requiring much larger samples of participants than CTT [71]. Even with the need for larger samples, the benefits of IRT outweigh the singular use of CTT [70, 71]. IRT assists in determining whether: (a) a measure has any redundant items; (b) items are functioning sufficiently to adequately capture the construct of interest; and (c) the response format is operating appropriately [70]. Future research should test the included measures using IRT to gain a more in-depth understanding of how measures function at an item level.
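The circular dependency quoted above can be made concrete with a toy example: under CTT, an item’s difficulty (the proportion of correct or endorsed responses) shifts with the ability of the examinee sample, and a student’s observed score shifts with the difficulty of the sampled items. The 0/1 response data below are invented purely to illustrate this point.

```python
def item_difficulty(responses, item):
    """CTT item statistic (p-value): proportion endorsing `item`."""
    return sum(row[item] for row in responses) / len(responses)

def observed_score(row):
    """CTT person statistic: proportion of items endorsed."""
    return sum(row) / len(row)

# The same three items administered to two invented samples.
high_ability = [[1, 1, 1], [1, 1, 0], [1, 1, 1], [1, 0, 1]]
low_ability  = [[1, 0, 0], [0, 0, 0], [1, 0, 0], [0, 0, 1]]

# Item statistics depend on the examinee sample:
print(item_difficulty(high_ability, 0))  # 1.0 -- item 0 looks easy
print(item_difficulty(low_ability, 0))   # 0.5 -- same item looks harder

# Person statistics depend on the item sample:
student = [1, 1, 0, 0]  # endorses easy items 0-1, not hard items 2-3
print(observed_score(student[:2]), observed_score(student[2:]))  # 1.0 0.0
```

IRT models, by contrast, place item difficulty and person ability on a common latent scale so that, when the model fits, the parameter estimates are invariant across samples.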


Limitations

Although every effort was taken to ensure the scientific rigour of this systematic review, there were a number of limitations. Information published in languages other than English was not included; therefore, there may be some relevant findings regarding the psychometric properties of measures that were not captured in this review. In addition, the authors of included studies were not contacted, so some information may have been overlooked. Furthermore, evaluating the quality of criterion validity, cross-cultural validity and responsiveness was outside the scope of this review.


Conclusion

As school connectedness is both a precursor to and an outcome of academic success, it is important to be able to reliably and validly assess students’ sense of school connectedness in order to accurately identify and support at-risk students [17, 39]. The current systematic review reported on the psychometric properties of 15 self-report school connectedness measures for students aged between 6 and 14 years. The measures with the strongest psychometric properties were the SCM and the 35-item version of the SEI, which addressed eight and 12 (of 15) school connectedness components respectively. This systematic review highlighted the need for further research examining the psychometric properties of the existing school connectedness measures identified as having moderate to strong positive evidence.

Supporting information

S3 Table. Overview of school connectedness instruments: Reasons for exclusion.


S1 File. Excluded publications and reasons for exclusion.



Acknowledgments

The first author completed this study as part of the requirements for a PhD under the supervision of Reinie Cordier, Annette Joosten and Helen Bourke-Taylor. The authors would like to thank Katina Swan, who assisted with abstract screening and instrument ratings, and Jae-Hyun Kim, who supported the team with COSMIN training and instrument ratings.


References

  1. Osterman KF. Students’ Need for Belonging in the School Community. Review of Educational Research. 2000;70(3):323–67.
  2. Maddox SJ, Prinz RJ. School bonding in children and adolescents: Conceptualisation, assessment and associated variables. Clinical Child and Family Psychology Review. 2003;6:31–49.
  3. Shochet I, Dadds MR, Ham D, Montague R. School Connectedness Is an Underemphasised Parameter in Adolescent Mental Health: Results of a Community Prediction Study. Journal of Clinical Child & Adolescent Psychology. 2006;35:170–9. pmid:16597213
  4. McNeely CA, Nonnemaker JM, Blum RW. Promoting School Connectedness: Evidence from the National Longitudinal Study of Adolescent Health. Journal of School Health. 2002;72:138–46. pmid:12029810
  5. Newmann FM, Wehlage GG, Lamborn SD. The significance and sources of student engagement. In: Student engagement and achievement in American secondary schools. New York, NY: Teachers College Press; 1992. p. 62–91.
  6. Finn JD, Rock DA. Academic success among students at risk for school failure. Journal of Applied Psychology. 1997;82:221–34. pmid:9109280
  7. Fredricks J, Blumenfeld P, Paris A. School Engagement: Potential of the Concept, State of the Evidence. Review of Educational Research. 2004;74:59–109.
  8. Chapman RL, Buckley L, Sheehan M, Shochet I. School Based Programs for Increasing Connectedness and Reducing Risk Behaviour: A Systematic Review. Educational Psychology Review. 2013;25:95–114.
  9. Connell JP, Spencer MB, Aber JL. Educational risk and resilience in African-American youth: context, self, action and outcomes in school. Child Development. 1994;65:493–506. pmid:8013236
  10. Resnick MD, Bearman PS, Blum RW, Bauman KE, Harris KM, Jones J, et al. Protecting adolescents from harm: findings from the national longitudinal study on adolescent health. Journal of the American Medical Association. 1997;278:823–32. pmid:9293990
  11. McGraw K, Moore S, Fuller A, Bates G. Family, peer and school connectedness in final year secondary school students. Australian Psychologist. 2008;43:27–37.
  12. Shochet I, Ham D. Universal school based approaches to preventing adolescent depression: Past findings and future directions of the Resourceful Adolescent Program. International Journal of Mental Health Promotion. 2012;6(3).
  13. Hart SR, Stewart K, Jimerson SR. The Student Engagement in Schools Questionnaire (SESQ) and the Teacher Engagement Report Form—New (TERF-N): Examining the preliminary evidence. Contemporary School Psychology. 2011;15:67–79.
  14. Christenson SL, Sinclair MF, Lehr CA, Godber Y. Promoting successful school completion: Critical conceptual and methodological guidelines. School Psychology Quarterly. 2001;16:468–84.
  15. Commonwealth of Australia. Belonging, Being & Becoming: The Early Years Learning Framework for Australia. Canberra: Australian Government Department of Education, Employment and Workplace Relations for the Council of Australian Governments; 2009.
  16. Archambault I, Janosz M, Fallu J, Pagani L. Student engagement and its relationship with early high school dropout. Journal of Adolescence. 2009;32:651–70. pmid:18708246
  17. Libbey HP. Measuring student relationships to school: Attachment, bonding, connectedness and engagement. Journal of School Health. 2004;74:274–83. pmid:15493704
  18. Deci EL, Ryan RM. Intrinsic motivation and self-determination in human behaviour. New York: Plenum; 1985.
  19. Niemiec CP, Ryan RM. Autonomy, competence and relatedness in the classroom: applying self-determination theory to educational practice. Theory and Research in Education. 2009;7(2).
  20. Flavell JH. Cognitive development: Children’s knowledge about the mind. Annual Review of Psychology. 1999;50:21–45.
  21. Ryan RM, Grolnick WS. Origins and pawns in the classroom: self report and projective assessments of individual differences in children’s perceptions. Journal of Personality and Social Psychology. 1986;50:550–8.
  22. Deci EL, Schwartz IS, Sheinman L, Ryan RM. An instrument to assess adults’ orientations toward control versus autonomy with children: Reflections on intrinsic motivation and perceived competence. Journal of Educational Psychology. 1981;73:642–50.
  23. Nor Aziah A. An Overview of Connectedness: ICT Development for Social and Rural Connectedness. New York, NY: Springer; 2013.
  24. Deci EL, Ryan RM. The "what" and "why" of goal pursuits: Human needs and the self-determination of behaviour. Psychological Inquiry. 2000;11:227–68.
  25. Appleton JJ, Christenson SL, Furlong MJ. Student engagement with school: Critical conceptual and methodological issues of the construct. Psychology in the Schools. 2008;45:369–86.
  26. Goodenow C. The Psychological Sense of School Membership among Adolescents: Scale Development and Educational Correlates. Psychology in the Schools. 1993;30:79–90.
  27. Goodenow C. Classroom belonging among early adolescent students: Relationships to motivation and achievement. Journal of Early Adolescence. 1993;13:21–43.
  28. Appleton JJ, Christenson SL, Kim D, Reschly A. Measuring cognitive and psychological engagement: Validation of the Student Engagement Instrument. Journal of School Psychology. 2006;44:427–45.
  29. Marks HM. Student engagement in instructional activity: Patterns in the elementary, middle and high school years. American Educational Research Journal. 2000;37(1):153–84.
  30. Achrekar A, Anglin T, Bishop J, Blum L, Blum R, Bogden J, et al. Wingspread declaration on school connections. Journal of School Health. 2004;74:233–4. pmid:15493700
  31. Jimerson SR, Campos E, Greif JL. Toward an understanding of definitions and measures of school engagement and related terms. The California School Psychologist. 2003;8:7–27.
  32. Covington MV. Goal theory, motivation and school achievement: an integrative review. Annual Review of Psychology. 2000;51:171–200. pmid:10751969
  33. Aldridge J, Laugksch R, Seopa M, Fraser B. Development and Validation of an Instrument to Monitor the Implementation of Outcome Based Learning Environments in Science Classrooms in South Africa. International Journal of Science Education. 2006;28(1):45–70.
  34. Haraldsson KS, Lindgren EM, Fridlund B, Baigi A, Lydell M, Marklund B. Evaluation of a school-based health promotion programme for adolescents aged 12–15 years with focus on well-being related to stress. Public Health. 2008;122:25–33. pmid:17719616
  35. Chionh YH, Fraser BJ. Classroom Environment, Achievement, Attitudes and Self-Esteem in Geography and Mathematics in Singapore. International Research in Geographical and Environmental Education. 2009;18(1):29–44.
  36. Appleton JJ, Christenson SL. Scale description and references for the Student Engagement Instrument. 2004.
  37. Trickett EJ, Moos RH. Classroom Environment Scale Manual. Chicago, IL: Mindgarden; 2002.
  38. Boor-Klip HJ, Segers E, Hendrickx MMHG, Cillessen AHN. Development and Psychometric Properties of the Classroom Peer Context Questionnaire. Social Development. 2016;25:370–89.
  39. National Center for School Engagement. Quantifying School Engagement: Research Report. Denver, CO: Colorado Foundation for Families and Children; 2006.
  40. Bowling A. Mode of questionnaire administration can have serious effects on data quality. Journal of Public Health. 2005;27:281–91. pmid:15870099
  41. West MR. The limitations of self-report measures of non-cognitive skills. Brookings; 2014.
  42. Terwee CB. An overview of systematic reviews of measurement properties of outcome measurement instruments that intend to measure (aspects of) health status or (health-related) quality of life. The Netherlands: Department of Epidemiology and Biostatistics, VU University Medical Center Amsterdam; 2014.
  43. Mokkink LB, Terwee CB, Knol DL, Stratford PW, Alonso J, Patrick DL, et al. The COSMIN checklist for evaluating the methodological quality of studies on measurement properties: A clarification of its content. BMC Medical Research Methodology. 2010;10:1–8.
  44. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Annals of Internal Medicine. 2009;151:264–9. pmid:19622511
  45. Cordier R, Chen Y, Speyer R, Totino R, Doma K, Leicht A, et al. Child-report measures of Occupational Performance: A Systematic Review. PLOS ONE. 2016;11:1–24. pmid:26808674
  46. Cordier R, Milbourn B, Buchanan A, Chung D, Martin R, Speyer R. A systematic review evaluating the psychometric properties of measures of social inclusion. In press. 2016.
  47. Cordier R, Speyer R, Chen YW, Wilkes-Gillan S, Brown T, Bourke-Taylor H. Evaluating the psychometric quality of social skills measures: A systematic review. PLOS ONE. 2015;10:1–32.
  48. Higgins JP, Green S. Cochrane handbook for systematic reviews of interventions. Wiley Online Library; 2008.
  49. Centre for Reviews and Dissemination. Systematic reviews: CRD’s guidance for undertaking reviews in health care. Layerthorpe, York: CRD, University of York; 2009.
  50. Mokkink LB, Terwee CB, Patrick DL, Alonso J, Stratford PW, Knol DL, et al. International consensus on taxonomy, terminology and definitions of measurement properties for health related patient reported outcomes: results of the COSMIN study. Journal of Clinical Epidemiology. 2010;63:737–45.
  51. Terwee CB, Bot S, de Boer M, van der Windt D, Knol DL, Dekker J, et al. Quality criteria were proposed for measurement properties of health status questionnaires. Journal of Clinical Epidemiology. 2007;60:34–42. pmid:17161752
  52. Schellingerhout JM, Verhagen AP, Heymans MW, Koes BW, de Vet H, Terwee CB. Measurement properties of disease-specific questionnaires in patients with neck pain: a systematic review. Quality of Life Research. 2012;21:659–70. pmid:21735306
  53. Doma K, Speyer R, Leicht A, Cordier R. Comparison of psychometric properties between usual-week and past-week self-reported physical activity questionnaires: a systematic review. International Journal of Behavioral Nutrition and Physical Activity. 2016;14(10). pmid:28137268
  54. Rubie-Davies C, Asil M, Teo T. Assessing measurement invariance of the Student Personal Perception of Classroom Climate across different ethnic groups. Journal of Psychoeducational Assessment. 2016;34:442–60.
  55. Carter C, Reschly A, Lovelace MD, Appleton JJ, Thompson D. Measuring student engagement among elementary students: pilot of the Student Engagement Instrument—Elementary Version. School Psychology Quarterly. 2012;27:61–73. pmid:22774781
  56. Solomon D, Battistich V, Watson M, Schaps E, Lewis C. A six-district study of educational change: direct and mediated effects of the child development project. Social Psychology of Education. 2000;4:3–51.
  57. Lovelace MD, Reschly A, Appleton JJ, Lutz ME. Concurrent and predictive validity of the Student Engagement Instrument. Journal of Psychoeducational Assessment. 2014;32:509–20.
  58. Renshaw TL. A replication of the technical adequacy of the Student Subjective Wellbeing Questionnaire. Journal of Psychoeducational Assessment. 2015;33:757–68.
  59. Ding C, Liu Y, Berkowitz M. The study of factor structure and reliability of an abbreviated school climate survey. Canadian Journal of School Psychology. 2011;26:241–56.
  60. Betts JE, Appleton JJ, Reschly A, Christenson SL, Huebner SE. A study of the factorial invariance of the Student Engagement Instrument (SEI): results from middle and high school students. School Psychology Quarterly. 2010;25:84–93.
  61. Reschly A, Betts JE, Appleton JJ. An examination of the validity of two measures of student engagement. International Journal of School & Educational Psychology. 2014;2:106–14.
  62. Anderson-Butcher D, Amorose A, Iachini A, Ball A. The Development of the Perceived School Experiences Scale. Research on Social Work Practice. 2012;22:186–94.
  63. Renshaw TL, Long ACJ, Cook C. Assessing adolescents’ positive psychological functioning at school: Development and validation of the Student Subjective Wellbeing Questionnaire. School Psychology Quarterly. 2014;30:534–52. pmid:25180834
  64. Rowe E, Kim S, Baker J, Kamphaus R, Horne A. Student Personal Perception of Classroom Climate: Exploratory and Confirmatory Factor Analyses. Educational and Psychological Measurement. 2010;70:858–79.
  65. Voelkl KE. Measuring students’ identification with school. Educational and Psychological Measurement. 1996;56:760–70.
  66. Rodney LW, Johnson DL, Srivastava R. The impact of culturally relevant violence prevention models on school-age youth. The Journal of Primary Prevention. 2005:439–54. pmid:16215693
  67. Zullig KJ, Koopman TM, Patton JM, Ubbes VA. School Climate: Historical Review, Instrument Development and School Assessment. Journal of Psychoeducational Assessment. 2010;28:139–52.
  68. Zullig KJ, Collins R, Ghani N, Patton JM, Huebner SE, Ajamie J. Psychometric support of the School Climate Measure in a large, diverse sample of adolescents: A replication and extension. Journal of School Health. 2014;84:82–90. pmid:25099422
  69. Zullig KJ, Ghani N, Patton JM, Collins M, Hunter AA, Huebner SE, et al. Preliminary development of a revised version of the School Climate Measure. Psychological Assessment. 2015;27:1072–81. pmid:25642931
  70. Fan X. Item response theory and classical test theory: an empirical comparison of their item/person statistics. Educational and Psychological Measurement. 1998;58:1–17.
  71. Duong M. Introduction to Item Response Theory and Its Applications. Michigan State University; 2004.