
Reliability and validity of clinically accessible smartphone applications to measure joint range of motion: A systematic review

  • Justin W. L. Keogh,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Supervision, Writing – original draft, Writing – review & editing

    Affiliations Faculty of Health Sciences and Medicine, Bond University, Gold Coast, Queensland, Australia, Sports Performance Research Institute New Zealand (SPRINZ), AUT University, Auckland, New Zealand, Cluster for Health improvement, Faculty of Science, Health, Education and Engineering, University of the Sunshine Coast, Sunshine Coast, Queensland, Australia

  • Alistair Cox,

    Roles Data curation, Formal analysis, Investigation, Methodology, Writing – original draft, Writing – review & editing

    Affiliation Department of Physiotherapy, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, QLD, Australia

  • Sarah Anderson,

    Roles Data curation, Formal analysis, Investigation, Methodology, Writing – original draft, Writing – review & editing

    Affiliation Department of Physiotherapy, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, QLD, Australia

  • Bernard Liew,

    Roles Formal analysis, Writing – original draft, Writing – review & editing

    Affiliation Centre of Precision Rehabilitation for Spinal Pain (CPR Spine), School of Sport, Exercise and Rehabilitation Sciences, University of Birmingham, Edgbaston, Birmingham, United Kingdom

  • Alicia Olsen,

    Roles Formal analysis, Writing – original draft, Writing – review & editing

    Affiliation Faculty of Health Sciences and Medicine, Bond University, Gold Coast, Queensland, Australia

  • Ben Schram,

    Roles Conceptualization, Methodology, Writing – review & editing

    Affiliations Department of Physiotherapy, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, QLD, Australia, Water Based Research Unit, Bond Institute of Health and Sport, Bond University, Gold Coast, QLD, Australia

  • James Furness

    Roles Conceptualization, Data curation, Methodology, Writing – original draft, Writing – review & editing

    Affiliations Department of Physiotherapy, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, QLD, Australia, Water Based Research Unit, Bond Institute of Health and Sport, Bond University, Gold Coast, QLD, Australia


Measuring joint range of motion is an important skill for many allied health professionals. While the universal goniometer is the most commonly utilised clinical tool for measuring joint range of motion, the evolution of smartphone technology and applications (apps) provides the clinician with more measurement options. However, the reliability and validity of these smartphones and apps are still somewhat uncertain. The aim of this study was to systematically review the literature regarding the intra- and inter-rater reliability and validity of smartphones and apps to measure joint range of motion. Eligible studies were published in English peer-reviewed journals with full text available, and involved the assessment of the reliability and/or validity of a non-videographic smartphone app to measure joint range of motion in participants >18 years old. An electronic search using PubMed, Medline via Ovid, EMBASE, CINAHL, and SPORTSDiscus was performed. The risk of bias was assessed using a standardised appraisal tool. Twenty-three of the 25 eligible studies exceeded the minimum 60% score required to be classified as having a low risk of bias, although 3 of the 13 criteria were not achieved in >50% of the studies. Most of the studies demonstrated adequate intra-rater or inter-rater reliability and/or validity for >50% of the range of motion tests across all joints assessed. However, this level of evidence appeared weaker for absolute (e.g. mean difference ± limits of agreement, minimal detectable change) than relative (e.g. intraclass correlation, correlation) measures, and for spinal rotation than for spinal extension, flexion and lateral flexion. Our results provide clinicians with sufficient evidence to support the use of smartphones and apps in place of goniometers to measure joint motion. Future research should address some methodological limitations of the literature, especially the inclusion of absolute and not just relative reliability and validity statistics.


The measurement of joint range of motion (ROM) in static and dynamic, passive and active, human movements is an essential skill in the musculoskeletal assessments commonly performed by physiotherapists, as well as some strength and conditioning coaches, to examine joint function, detect joint asymmetry and evaluate treatment efficacy as an objective outcome measure [1]. In the present study, static ROM is defined as the range of a joint held motionless at either of its limits of movement. Dynamic ROM is the range through which a joint moves to and from those limits. When a joint is moved passively by an assessor or external device, passive ROM is assessed. When a joint moves as a result of muscular contraction, active ROM is assessed. The universal goniometer has long been the preferred method of clinical ROM measurement (especially static ROM) due to its ease of use, low cost, and the reasonable levels of reliability and validity demonstrated in numerous studies [2–4].

However, the universal goniometer is not without its drawbacks, even when assessing static joint ROM. When assessing static ROM such as the angle of hinge joints like the knee and elbow in adults, there may always be some degree of error because the universal goniometer is typically not long enough to be aligned directly with the appropriate landmarks on the segments proximal and distal to the joint. Spinal rotation may also be difficult to measure with a universal goniometer due to the difficulty in palpating anatomical landmarks to use as reference points [5–7]. It is perhaps no surprise then that reliability is reduced when measuring spinal motion compared with upper and lower limb motion with a universal goniometer [6–9]. These potential issues with the use of the universal goniometer in assessing static joint ROM may be further exacerbated in inexperienced clinicians, who are less able to correctly locate anatomical landmarks, as well as in the assessment of dynamic rather than static ROM [10, 11].

The development of smartphone technology and software applications (apps), coupled with the ubiquity of smartphone ownership, now allows smartphones to measure joint ROM. Like the universal goniometer, smartphones are easy to use, relatively inexpensive, and highly accessible [12]. Their inbuilt sensors, such as the accelerometer, gyroscope, and magnetometer, provide the necessary hardware for the smartphone to measure angles and displacements [12]. With the use of apps that can be downloaded onto the smartphone, these measurements can be transformed into meaningful assessment data such as joint ROM. One possible advantage of smartphone apps is that their use may circumvent some of the difficulties of using the universal goniometer regarding landmark identification and alignment. Whether smartphone apps can altogether overcome the aforementioned drawbacks of the universal goniometer may depend upon the technology used and the experience of the clinician with this alternative approach. The emergence of smartphone apps therefore presents clinical practitioners with a new set of tools to incorporate into clinical practice, especially for some of the joint ROMs that are more difficult to quantify.
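The inclinometer principle that many clinometer-style apps rely on can be sketched in a few lines: the accelerometer reports the gravity vector, and the tilt of one device axis relative to the horizontal follows from simple trigonometry. The axis convention below is an illustrative assumption, not a detail taken from any of the reviewed apps.

```python
import math

def tilt_angle_deg(ax, ay, az):
    """Angle (degrees) between the device's y-axis and the horizontal plane,
    estimated from the gravity vector (m/s^2) reported by the accelerometer.
    This is the basic inclinometer principle behind many clinometer apps."""
    return math.degrees(math.atan2(ay, math.sqrt(ax**2 + az**2)))

# Phone lying flat (gravity entirely along z): no tilt.
print(round(tilt_angle_deg(0.0, 0.0, 9.81), 1))    # 0.0
# Phone upright (gravity entirely along y): 90 degrees of tilt.
print(round(tilt_angle_deg(0.0, 9.81, 0.0), 1))    # 90.0
# Gravity split equally between y and z: 45 degrees.
print(round(tilt_angle_deg(0.0, 6.937, 6.937), 1))  # 45.0
```

Strapping or holding the phone against a limb segment and reading this tilt at each limit of motion yields a joint angle, which is why gravity-referenced movements (e.g. sagittal-plane flexion/extension) are the easiest case for such apps.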

For clinicians to be willing to replace the universal goniometer (at least in some contexts) with smartphone apps as a tool to clinically assess ROM, the validity and reliability of smartphone apps must be comparable to, or better than, that of the universal goniometer. In psychometric terminology, reliability deals with the consistency of the angle and displacement measures produced by smartphone apps, both when used by multiple assessors (inter-rater) and when the same assessor performs multiple measurements (intra-rater) [13]. Validity deals with the extent to which the measurement obtained from one device, such as a smartphone app, correlates with or matches that of criterion laboratory devices such as 3-D motion capture or criterion clinical tools such as the universal goniometer [13].

A number of systematic reviews have been conducted on the psychometric properties of smartphone apps [14, 15]. However, the review of Milani et al. [14] is considered outdated given the subsequent explosion of research into human movement analysis apps, and it only included 12 studies assessing joint angle measurements. Further, while the review of Rehan Youssef and Gumaa [15] was more recent and well conducted in most aspects, it had several methodological limitations. First, their literature search was completed in August 2016 (including 15 studies and one case study assessing joint ROM) [15]. Second, they utilised a non-validated risk of bias assessment tool that they personally developed [15]. Third, there was a relative lack of reporting of specific reliability and validity data for each of the multiple actions that can occur at some joints, such as the spine (trunk) and shoulder [15]. This lack of action-specific data is a major issue for clinicians, as it is quite possible that a particular smartphone and app may have sufficient reliability and/or validity for measuring some actions at a particular joint in certain planes of motion (e.g. flexion and extension in the sagittal plane), while more complicated actions such as rotation in the transverse plane may be measured less reliably and/or validly.

The purpose of this systematic review was to address some of the limitations of the previous reviews in this area so as to better assist the clinician in identifying which smartphone apps may show adequate inter-rater and intra-rater reliability, as well as validity, for the measurement of ROM at particular joints and for particular actions in clinical practice. This state-of-the-art review will assist clinical practitioners in deciding the appropriateness and choice of smartphone apps for clinical ROM assessment.

Search methodology

Search strategy

The protocol for this systematic review has not been registered. A database search of PubMed, Medline via Ovid, EMBASE, CINAHL, and SPORTSDiscus was initially performed on 20th October 2017 by two independent reviewers. This search was repeated on 20th December 2018 to maximise the currency of the findings of this review. The search strategy is described in Appendix 1.

Inclusion and exclusion criteria.

The eligibility of studies retrieved from the search process was determined by two independent reviewers, with a third reviewer used to reach consensus where any discrepancies arose between the first two reviewers. Studies were included in this review if they met the following criteria: published in peer-reviewed journals; involved human participants aged over 18 years; used a smartphone app to measure joint ROM and assessed the validity and/or reliability of the app; published from 2007 onwards, as this was the year the iPhone was launched; and published in English with full text available. Case studies, abstracts and grey literature were not included. Smartphone apps which required image/video recordings and/or post-data-collection analyses to generate joint angles were excluded, as such an approach is unlikely to be used in clinical practice due to privacy concerns with the storage of video footage and the additional analysis time that would be required.

Quality assessment

The Critical Appraisal Tool (CAT) developed by Brink and Louw [16] was used to appraise the methodological quality of studies reporting a reliability and/or validity component. The included studies were rated on a set of 13 criteria assessing a variety of methodological factors, including subject and rater characteristics, rater blinding, testing order, criterion measure characteristics and the statistical analyses performed [16]. Consistent with a recent study that used the CAT [17], in order to satisfy Criterion 13 (Statistical Methods) a study had to report absolute reliability and/or validity statistics (e.g. SEM, MDC or MD±LOA) in addition to the more commonly reported relative reliability and/or validity statistics (e.g. r or ICC). As not all included studies assessed validity, not all CAT criteria were relevant to every study. In such cases, each validity item was scored as not applicable (NA) and that criterion was not included in the overall assessment of that study's risk of bias. Consistent with previous systematic reviews [4, 17, 18], a score of ≥ 60% was considered high quality and a score of < 60% was rated as poor quality.

The methodological quality of the studies identified by the search was assessed by two independent reviewers. Across all the 13 items of the CAT, there was an overall agreement of 86.2% between the raters when reviewing the methodological quality of the 37 articles included in this review, resulting in a Cohen’s unweighted Kappa statistic of 0.64, indicating good agreement between the two raters [19].
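Cohen's unweighted kappa can be computed directly from a rater-by-rater agreement table. The sketch below illustrates the calculation only; the 2 × 2 counts are hypothetical and are not the actual item-level ratings behind the review's reported 86.2% agreement and kappa of 0.64.

```python
def cohens_kappa(table):
    """Unweighted Cohen's kappa for a square agreement table, where
    table[i][j] = number of items rater A scored category i and rater B scored j."""
    n = sum(sum(row) for row in table)
    k = len(table)
    po = sum(table[i][i] for i in range(k)) / n                    # observed agreement
    row = [sum(table[i]) for i in range(k)]                        # rater A marginals
    col = [sum(table[i][j] for i in range(k)) for j in range(k)]   # rater B marginals
    pe = sum(row[i] * col[i] for i in range(k)) / n**2             # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical yes/no appraisal ratings: the raters agree on 43 of 50 items.
table = [[30, 4],
         [3, 13]]
print(round(cohens_kappa(table), 2))  # 0.68
```

Note how kappa (0.68 here) is lower than raw agreement (0.86), because it discounts the agreement expected by chance from the marginal rating frequencies.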

Data extraction

Data were extracted from studies that met the inclusion and exclusion criteria, including: the CAT assessment, participants, app and smartphone device, joint movement assessed and the position the participant was in whilst being assessed. Where applicable, data were extracted for intra-rater and inter-rater reliability as well as validity. Both relative and absolute reliability and validity statistics were reported where available, to provide an index of the correlation or rank order (relative measures) and of the change/difference in the mean (absolute measures) [20, 21]. Common measures of relative reliability and validity include the intra-class correlation coefficient (ICC), concordance correlation coefficient (CCC) and Pearson's product moment correlation (r). Common measures of absolute reliability and validity include the standard error of measurement (SEM), minimal detectable change (MDC), mean difference (MD) and limits of agreement (LoA).

Data analysis

A critical narrative approach was applied to synthesise and analyse the data. For each measure, the following criteria were used to judge the level of intra-rater and inter-rater reliability and validity. For relative measures: ICC: Poor < 0.40, Fair = 0.40–0.59, Good = 0.60–0.74, Excellent ≥ 0.75 [22]; r: Negligible = 0–0.29, Low = 0.30–0.49, Moderate = 0.50–0.69, High = 0.70–0.89, Very high = 0.90–1 [23]; and CCC: Poor < 0.90, Moderate = 0.90–0.94, Substantial = 0.95–0.99, Almost perfect > 0.99 [24]. For absolute measures: SEM: Poor > 5° and Good ≤ 5° [25]; MDC: Poor > 5° and Good ≤ 5° [25, 26]; and for LOA, a standard deviation threshold of 5° [25–28] multiplied by 1.96 to derive the 95% LOA bandwidth: Poor > ± 9.8° and Good < ± 9.8°.
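As a compact restatement of the thresholds above, the classification rules might be coded as follows. This is a sketch of the criteria as stated, not software used in the review; the function names are our own.

```python
def classify_icc(icc):
    """ICC bands used in the review (Poor/Fair/Good/Excellent)."""
    if icc < 0.40:
        return "Poor"
    if icc < 0.60:
        return "Fair"
    if icc < 0.75:
        return "Good"
    return "Excellent"

def classify_sem_or_mdc(value_deg):
    """SEM and MDC share the same 5-degree threshold."""
    return "Good" if value_deg <= 5.0 else "Poor"

def classify_loa(half_width_deg):
    """95% LOA half-width threshold: 1.96 * 5 deg = +/- 9.8 deg."""
    return "Good" if half_width_deg < 1.96 * 5.0 else "Poor"

# Example: excellent relative reliability, good SEM, but LOA too wide.
print(classify_icc(0.82), classify_sem_or_mdc(3.4), classify_loa(12.1))
# Excellent Good Poor
```

The example deliberately shows that a single measurement protocol can pass one criterion and fail another, which is the pattern seen repeatedly in the results below.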


Selection of studies

Fig 1 represents the article review process based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [29]. Our initial literature search identified 1066 studies, with the second literature search identifying an additional 170 studies, for a combined total of 1236 identified studies. Of these, 268 duplicates were removed prior to title and abstract screening, with an additional five duplicates subsequently identified when screening the results of the second literature search. The search strategy yielded 36 eligible studies, with one additional study identified through other sources, for a total of 37 studies.

Study characteristics and methodology

A description of the broad methodology of each included study is provided in Table 1. In order from most to least commonly assessed, the joints examined were the spine (trunk), knee, shoulder, wrist, elbow, ankle and hip. As the trunk and shoulder allow movement in more directions than other joints, the studies assessing them typically examined a greater number of joint movements across multiple planes of movement. For example, the studies assessing trunk motion typically examined trunk flexion/extension, lateral flexion and axial rotation, while the studies assessing shoulder motion typically examined flexion, abduction, horizontal adduction and external/internal rotation.

Table 1. Characteristics of studies included in this review.

The majority of studies involved healthy participants, although some studies involved patients with neck pain [30, 31], shoulder pathology [27, 32, 33], various upper limb injuries [34, 35] or knee pain [36, 37]. A relatively wide variety of smartphones, applications and criterion devices (for the assessment of validity) were utilised in the studies. The most common smartphones were iPhones which were used in 28 studies, with the most common model being the iPhone 4 which was used in nine studies. Samsung phones were used in another six studies, with one study also using an iPod. A wide variety of apps were utilised, with only the most frequently used being the Clinometer (n = 5) and Knee Goniometer (n = 3). All other apps were used in either one or two studies. For the 30 studies that looked at some aspects of validity, the validity of the app was most commonly compared to goniometers (n = 19), 3D motion capture (n = 5) or inclinometers (n = 4).

Critical appraisal

A critical appraisal of the included articles is summarised in Table 2. CAT scores ranged from 55% [65] to 100% [32]. Papers with 'NA' in their appraisal were not assessed against that particular criterion. Two studies were considered to be of low quality with a score < 60% [53, 55], with one further study close to this threshold with an overall quality score of 62% [38]. Only two of the CAT criteria were achieved in fewer than 50% of the studies (Criterion 6: Order of Examination and Criterion 13: Statistical Methods). This contrasted with one other criterion that was achieved in all studies (Criterion 10: Execution of the Index Test).

Reliability and validity

The reliability and validity of the assessments are summarised in Table 3. For the sake of simplicity, the following three text sections will summarise the key results for intra-rater reliability, inter-rater reliability and validity, respectively.

Table 3. Reliability and validity of the selected studies.

Intra-rater reliability.

Twenty-six studies assessed aspects of intra-rater reliability, with 10 studies reporting relative metrics only, one study reporting absolute metrics only, and the remaining 15 studies reporting both relative and absolute metrics. Twenty-five of the 26 studies reported excellent relative intra-rater reliability, as defined by an ICC > 0.75, for more than 50% of the joint movements they examined, the only exception being Tousignant-Laflamme et al. [45]. However, this classification of poor relative intra-rater reliability for Tousignant-Laflamme et al. [45] was primarily due to the results of one examiner using an iPhone 3, compared to the other examiner who used an iPhone 4. Considering all the studies that assessed relative intra-rater reliability with an iPhone 4, all six demonstrated that smartphone apps had adequate relative intra-rater reliability [1, 34, 37, 42, 48, 64]. Thirteen of 17 studies reported good absolute intra-rater reliability, as defined by a SEM or MDC < 5° or LOA < ± 9.8°, for more than 50% of the joint movements they examined, with only four studies not satisfying this threshold [40, 44, 51, 59]. It should be noted, however, that the study by Quek et al. [44] satisfied the criterion for more than 50% of the movements when reliability was quantified by the SEM (three of the four movements), but failed it when reliability was assessed by the MDC for all four movements.
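The divergence between the SEM and MDC criteria reported for Quek et al. follows directly from the standard relationship MDC95 = 1.96 × √2 × SEM: because both metrics share the same 5° threshold, any SEM above about 1.8° produces an MDC above 5°. The SEM value below is hypothetical, chosen only to illustrate the arithmetic.

```python
import math

def mdc95_from_sem(sem_deg):
    """Minimal detectable change at 95% confidence: MDC95 = 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2) * sem_deg

# A SEM comfortably under the 5-degree 'Good' threshold still yields an MDC
# well above it, which is how a study can pass the SEM criterion yet fail MDC.
sem = 4.0
print(round(mdc95_from_sem(sem), 1))  # 11.1
```

This is why the two absolute metrics should not be read interchangeably: the MDC answers a stricter question (what change in one patient exceeds measurement noise) than the SEM does.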

Inter-rater reliability.

Twenty-five studies assessed aspects of inter-rater reliability, of which 13 studies reported relative metrics only and 12 studies reported both relative and absolute metrics. Twenty-three of the 25 studies demonstrated excellent relative inter-rater reliability, as defined by an ICC > 0.75, for more than 50% of the joint movements they examined, with only two studies not satisfying this threshold [37, 45]. Six of 11 studies reported good absolute inter-rater reliability, as defined by a SEM or MDC < 5° or LOA < ± 9.8°, for more than 50% of the joint movements they examined, with five studies not satisfying this criterion [27, 30, 31, 39, 42]. While Pourahmadi et al. [30] was deemed not to meet this threshold of absolute inter-rater reliability, this was based on all four MDC values being > 5°, although the SEM values for the same movements were all < 5°.


Validity.

Thirty studies measured some aspect of validity, of which seven studies reported relative metrics only, five studies reported absolute metrics only, and 18 studies reported both relative and absolute metrics. Twenty of 25 studies observed excellent/substantial relative validity, as defined by an ICC > 0.75, r > 0.9 or CCC > 0.95, for more than 50% of the joint movements examined, with five studies not meeting this criterion [28, 30, 36, 45, 54]. Seventeen of 23 studies observed good absolute validity, as defined by a SEM or MDC < 5° or LOA < ± 9.8°, for more than 50% of the joint movements they examined, with six studies not meeting this threshold [27, 34, 36, 42, 53, 57].


This study systematically reviewed the literature for studies that examined the reliability and/or validity of smartphones and apps to quantify joint ROM. Thirty-seven studies were found to be eligible, with these studies assessing joint ROM across most of the body's major joints. Specifically, the most common joints assessed were the spine/trunk (n = 11), knee (n = 9) and shoulder (n = 6), with a smaller number of studies examining the wrist (n = 4), elbow (n = 3), ankle (n = 3) and hip (n = 1). The primary result of the systematic review was that the apps generally demonstrated adequate intra-rater and inter-rater reliability as well as validity when compared to criterion devices such as goniometers, inclinometers and 3D motion capture. However, for the reliability outcomes, these results tended to be somewhat stronger for relative (e.g. ICC, r) than absolute (e.g. SEM, MDC) measures.

The tendency for the relative measures to be stronger than the absolute measures is something that needs to be clearly understood by the clinician. Historically, many reliability and/or validity studies have only reported relative statistics such as the ICC and Pearson's product moment correlation [3, 7, 8]. Relative statistical measures typically describe the resemblance of two or more units within a group (e.g. the similarity of measurements undertaken by two clinicians) relative to the variability between groups; the ICC is thus operationalised as a ratio between two variance measures [66]. To illustrate, the inter-rater reliability ICC of Pourahmadi et al. [30] is derived as the ratio of (1) the variance between two measurements from the same participant, repetition and session, to (2) the variance between two measurements from the same participant, repetition and session but from different raters [66]. While these relative statistics provide important information regarding the correlation or rank order of two or more measurements, they provide no detail regarding the magnitude of change/difference in the measurements [20, 21]. In contrast, absolute statistical measures of reliability/validity simply report the resemblance of two or more units within a group–in other words, they represent the individual variance components [66]. Clinically, an ICC is useful for a manager wanting to train a team of clinicians in the use of a mobile app, where the aim is to achieve a value as close to one as possible. However, the individual variance components between repetitions and between sessions are more useful for the day-to-day practice of individual clinicians. Knowing the inherent variation in outcomes between each measurement repetition and between clinical visits allows a clinician to judge the clinical importance of any kinematic change measured using a mobile app.
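The variance-ratio view of the ICC, and its link back to absolute units through the SEM, can be sketched as follows. The variance components are hypothetical, and the formulas used (ICC as between-subject over total variance; SEM = SD√(1 − ICC)) are the standard textbook ones, not necessarily those used by every reviewed study.

```python
import math

def icc_from_variances(var_between, var_error):
    """ICC as a variance ratio: between-subject variance over total variance."""
    return var_between / (var_between + var_error)

def sem_from_icc(sd_total, icc):
    """Standard error of measurement, expressed in the units of the data (degrees)."""
    return sd_total * math.sqrt(1 - icc)

# Hypothetical components (degrees^2): large spread between patients,
# small measurement error -> high ICC.
var_between, var_error = 90.0, 10.0
icc = icc_from_variances(var_between, var_error)
sd_total = math.sqrt(var_between + var_error)
print(round(icc, 2), round(sem_from_icc(sd_total, icc), 2))  # 0.9 3.16
```

The sketch makes the paragraph's point concrete: the same error variance produces a high or low ICC depending on how heterogeneous the sample is, whereas the SEM (here ~3.2°) reports the measurement noise itself, which is what a clinician tracking one patient actually needs.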

With respect to the validity of smartphones and apps to quantify ROM, the majority of studies included in this review assessed the validity of the smartphone app against a universal goniometer as the criterion test. However, it could be argued that the most appropriate criterion measures for determining joint ROM are radiographic imaging (e.g. x-ray) or 3D motion capture. Only five studies utilised 3D motion capture as the criterion method [31, 41, 44, 55, 63]. All five of these studies demonstrated that the apps had adequate levels of relative validity with respect to 3D motion capture [31, 41, 44, 55, 63], with a similar result observed for the three of these studies that also assessed absolute metrics [31, 41, 44]. It should also be noted that Charlton et al. [55] compared the relative validity of their smartphone app and an inclinometer against the criterion method of 3D motion capture for assessing hip joint ROM. Based on the ICC threshold of 0.75 for sufficient validity, both devices were valid, with the smartphone exceeding this threshold for five of the six joint ROMs and the inclinometer for all six. The use of 3D motion capture as a criterion measure may be more important when assessing dynamic rather than static joint ROM, due to the inherent difficulty of maintaining the correct position of the universal goniometer on the joint centre and its alignment with the proximal and distal segments during movement, especially at high movement velocities [11]. All five of the 3D motion capture validity studies included in the present review assessed static ROM [31, 41, 44, 55, 63], i.e. range was recorded with the joint positioned statically at its limit of motion. The lack of assessment of apps on dynamic ROM may not be surprising, given that assessors need a joint to be held transiently in a static position to record the range from the app. For apps to measure dynamic ROM, they need to sample a joint's motion throughout the movement task, and these data need to be post-processed to extract ROM parameters, similar to how a 3D motion capture system quantifies ROM. Future studies are warranted to quantify the validity and reliability of smartphone apps in the assessment of dynamic ROM.
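The mean difference and 95% limits of agreement used throughout this review as absolute validity measures come from a Bland-Altman analysis of paired device readings, which can be computed as in this minimal sketch. The paired knee-flexion angles are invented for illustration.

```python
import statistics as st

def bland_altman(app_deg, criterion_deg):
    """Mean difference (bias) and 95% limits of agreement between two devices,
    from paired angle measurements in degrees."""
    diffs = [a - c for a, c in zip(app_deg, criterion_deg)]
    bias = st.mean(diffs)
    sd = st.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired angles: smartphone app vs 3D motion capture.
app = [132.0, 118.5, 140.2, 125.0, 129.8]
moc = [130.5, 120.0, 138.9, 126.2, 128.5]
bias, (lo, hi) = bland_altman(app, moc)
print(round(bias, 2))  # 0.28
```

Against the review's criterion, such a comparison would be rated 'Good' only if the limits lie within ± 9.8° of zero; a near-zero bias alone (as here) is not sufficient if the limits are wide.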

Another issue of major importance to clinicians is whether smartphones and apps display adequate reliability and validity across all joints, joint actions and populations. It was heartening to see that most of these variables did not seem to influence the reliability and validity of the apps in measuring joint ROM. There was, however, clear variation in the reliability and validity of different spinal joint movements, as well as a tendency for differences in reliability and validity between healthy and clinical populations and, to a lesser extent, between smartphone models, of which the clinician should be aware.

Across the 11 studies examining spinal ROM, the assessment of flexion, extension and lateral flexion typically exhibited adequate relative reliability and/or validity [30, 31, 38–46], although not all of these studies assessed absolute reliability and validity. In contrast, the assessment of spinal axial rotation did not exhibit adequate reliability and validity in four [30, 40, 44, 45] of the nine studies. On this basis, it would appear that while the apps used within the reviewed studies typically had adequate reliability and validity for measuring spinal flexion, extension and lateral flexion, they are somewhat more questionable for measuring spinal rotation. Nevertheless, a recent study by Furness et al. [5] demonstrated comparable (or slightly better) reliability of an iPhone 6 and the Compass app relative to the universal goniometer for assessing thoracic rotation in healthy individuals. Unfortunately, while this study also demonstrated strong correlations between the Compass app and the universal goniometer, absolute validity was again inadequate, as the limits of agreement between the two devices were ~25° [5]. Such findings suggest that the ability to perform a valid assessment of spinal rotation using devices that are feasible in clinical practice, be they goniometers or smartphone-based apps, may still remain somewhat questionable. Further research and/or additional clinical training in the use of these devices in this context is therefore warranted.

The comparatively poorer reliability and validity of smartphone apps measuring ROM in axial rotation compared to flexion-extension and lateral flexion could be attributed to several factors. First is the difference in smartphone sensor performance in different cardinal planes [67]. Using performance testing of commercial Inertial Measurement Units (IMUs) as an example, the static error of the Xsens MT9 IMU was three times greater in the yaw (axial rotation) direction than in the other two cardinal planes [68]. Second is the reliance on different components of the smartphone sensor suite (e.g. magnetometer vs gyroscope) when measuring ROM in different cardinal planes. Magnetometers are required when testing axial rotation in an anti-gravity position (e.g. sitting) [44, 45]. Compared to gravity-referenced sensing, magnetometers are more sensitive to signal distortion arising from environmental magnetic fields, potentially reducing their validity and reliability. In contrast, Pourahmadi et al. [30] tested cervical rotation in supine using the gravity-referenced components of the smartphone sensor suite. This could explain the better validity and reliability of Pourahmadi et al. [30] compared to the two other studies that reported poor reliability and validity [44, 45]. Third is the issue of axis misalignment, which occurs when the sensor's coordinate axes are not aligned with anatomically meaningful axes [69]. There may be greater potential for axis misalignment during axial rotation than in other movement directions [70–72]. Given that spinal axial rotation commonly couples with secondary movement in other directions, maintaining a pure axial rotation may be difficult.
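The second point can be illustrated with a simple rotation: turning a sensor about the vertical (gravity) axis leaves the sensed gravity vector unchanged, so a gravity-referenced measurement cannot detect axial rotation in upright postures, and the app must fall back on the magnetometer or gyroscope, which is where the additional error enters. A minimal sketch, assuming idealised noise-free readings:

```python
import math

def rotate_about_z(v, yaw_deg):
    """Rotate a 3-vector about the vertical (gravity) z-axis by yaw_deg degrees."""
    t = math.radians(yaw_deg)
    x, y, z = v
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t),
            z)

# Accelerometer reading with the phone held level: gravity along -z.
gravity = (0.0, 0.0, -9.81)

# A pure axial (yaw) rotation leaves the sensed gravity vector unchanged,
# so the accelerometer alone carries no information about this movement.
print(rotate_about_z(gravity, 60.0) == gravity)  # True
```

By contrast, measuring cervical rotation in supine (as in Pourahmadi et al.) turns the same anatomical movement into a rotation that does tilt the phone relative to gravity, sidestepping the magnetometer entirely.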

While most of the studies reviewed in this manuscript involved healthy participants, some recruited patients with joint pain. These studies included groups of individuals with neck pain [30, 31], shoulder pathology [27, 32, 33], various upper limb injuries [34, 35] or knee pain [36, 37]. The intra-rater and inter-rater reliability of the apps in these clinical populations was typically adequate across these nine studies, with the exception of Pereira et al. [37]. The validity of the apps in these populations was sufficiently high in six of the nine studies. For the three studies with insufficient validity [34, 36, 37], a variety of statistical approaches were used, with the results being CCC = 0.50–0.72, r = 0.68 and LoA ranging from -10° to +17.3° for the measured joint actions. Such results suggest that smartphones and apps can be quite reliable in a range of population groups, including some clinical populations presenting with musculoskeletal pathology.

The clinician should also be aware that the make and model of the smartphone and the actual app can influence the reliability of assessment, and that these two factors, as well as the criterion test selected, may influence the validity. While there was some variability between studies in the smartphone used (29 studies using iPhones, most commonly the iPhone 4 or 5), there was little evidence of any effect of smartphone model, with the exception of one study [45]. Specifically, Tousignant-Laflamme et al. [45] reported adequate relative intra-rater reliability for an examiner with an iPhone 4, but not for an examiner with an iPhone 3, ultimately resulting in poor inter-rater reliability. Further, the two examiners were unable to demonstrate adequate validity when compared to the CROM device, which is considered a criterion measure for cervical ROM. Such results suggest that clinicians should use more recently developed smartphones, which are more likely to have improved sensor capacity than older models such as the iPhone 3.

A wide variety of apps was examined across these 37 studies, with only two apps used in more than two studies: the Clinometer (n = 5) and the Knee Goniometer (n = 3). The diversity of apps utilised in these studies, and the general support for their reliability and validity demonstrated in this review, suggest that the clinician has multiple options when selecting the most appropriate app for measuring a particular joint ROM. However, we would still recommend that clinicians utilise apps that have been demonstrated to be reliable and valid for measuring the particular joint action they wish to assess. We would also recommend that researchers continue to examine the reliability and validity of more recently developed apps and smartphones, to determine whether they offer advantages over those previously assessed in the scientific literature.

This systematic review has several strengths and limitations that need to be acknowledged. A primary strength of this review in comparison to the existing literature [14, 15] is that it provides more detailed reporting of key aspects of the methodology and the actual relative and absolute intra-rater, inter-rater and validity outcomes for each joint action assessed in each study within our summary tables. The current study also appears to be the first systematic review on this topic to use a validated tool to assess the included studies' methodological quality. This assessment determined that only two of the 37 studies were of low quality, based on a CAT score of less than 60% [53, 55]. Further, only two of the 13 CAT criteria were achieved in less than 50% of the studies: Criterion 6 (Order of Examination) and Criterion 13 (Statistical Methods). The low score for Criterion 6 reflected the lack of randomisation and the potential for a learning or fatigue effect in many of the studies. The low score for Criterion 13 tended to reflect the fact that most studies only reported relative reliability and/or validity statistics (e.g. r or ICC) without also reporting comparable absolute statistics (e.g. SEM, MDC or MD±LOA). As these CAT criteria are highly important characteristics of strong psychometric study design, improvement in these areas would further strengthen the level of evidence described in this review.
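The absolute reliability statistics discussed above (SEM, MDC) follow directly from the relative statistic and the sample variability, which is one reason their omission is easily avoidable. The standard formulas are SEM = SD × √(1 − ICC) and MDC95 = 1.96 × √2 × SEM; the SD and ICC values in the sketch below are illustrative only, not taken from any reviewed study.

```python
import math

def sem_mdc(sd, icc):
    """Derive absolute reliability from relative reliability:
    SEM   = SD * sqrt(1 - ICC)        (standard error of measurement)
    MDC95 = 1.96 * sqrt(2) * SEM      (minimal detectable change, 95%)"""
    sem = sd * math.sqrt(1.0 - icc)
    mdc95 = 1.96 * math.sqrt(2.0) * sem
    return sem, mdc95

# Illustrative values: between-subject SD of 10 degrees, ICC of 0.90
sem, mdc = sem_mdc(10.0, 0.90)
```

Note that an apparently high ICC of 0.90 can still correspond to an MDC of almost 9°, underscoring why reporting relative statistics alone can overstate an app's clinical usefulness.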

The primary limitation of our review process reflected the manner in which sufficient validity and reliability was defined. Specifically, we described a particular app as suitably reliable and/or valid when recommended statistical thresholds were achieved in more than 50% of the movements examined in each study. While this is useful as a generalised approach to describing the reliability and/or validity of an app, it is perhaps too simplistic given the relatively high between-study variation in populations, joints, joint actions, smartphones and apps (including software updates). The potential negative influence of software updates on the reliability and validity of apps has recently been highlighted as a major issue in the use of global positioning systems (GPS) in sport [73]. Due to this somewhat arbitrary greater-than-50% threshold, we suggest that clinicians still examine the actual data summarised in this systematic review, as it is quite possible that different joint motions may differ in their reliability and validity, even when assessed in the same population with the same smartphone and app. The final limitation of this systematic review is that we cannot be certain that all eligible articles were identified and included.


The results of this systematic review provide relatively strong evidence regarding the intra-rater reliability, inter-rater reliability and validity of smartphones and apps to assess joint ROM, with these results tending to be observed across multiple joints, joint actions, populations, smartphones and apps. Such results suggest that clinicians may be able to use a relatively wide variety of smartphones and apps to quantify joint ROM. However, when absolute validity was assessed, there were often reasonably large differences in the angle determined by an app compared to a criterion measure such as 3D motion capture, goniometry or inclinometry. On this basis, it is imperative that the clinician does not switch between different assessment devices (such as a goniometer and a smartphone-based app) when assessing an individual across multiple time points. Clinical researchers should also aim to develop more reliable and valid protocols for using smartphones and apps, while continuing to collaborate with smartphone and app developers to further improve their reliability and validity for assessing joint ROM.


We would like to thank Mr David Honeyman, Librarian for the Faculty of Health Sciences and Medicine, Bond University, for his assistance with the systematic review process.


1. Milanese S, Gordon S, Buettner P, Flavell C, Ruston S, Coe D, et al. Reliability and concurrent validity of knee angle measurement: smart phone app versus universal goniometer used by experienced and novice clinicians. Man Ther. 2014;19(6):569–74. pmid:24942491.
2. Brosseau L, Tousignant M, Budd J, Chartier N, Duciaume L, Plamondon S, et al. Intratester and intertester reliability and criterion validity of the parallelogram and universal goniometers for active knee flexion in healthy subjects. Physiother Res Int. 1997;2(3):150–66. pmid:9421820.
3. MacDermid JC, Chesworth BM, Patterson S, Roth JH. Intratester and intertester reliability of goniometric measurement of passive lateral shoulder rotation. J Hand Ther. 1999;12(3):187–92. pmid:10459526.
4. May S, Chance-Larsen K, Littlewood C, Lomas D, Saad M. Reliability of physical examination tests used in the assessment of patients with shoulder problems: a systematic review. Physiotherapy. 2010;96(3):179–90. pmid:20674649.
5. Furness J, Schram B, Cox AJ, Anderson SL, Keogh J. Reliability and concurrent validity of the iPhone® Compass application to measure thoracic rotation range of motion (ROM) in healthy participants. PeerJ. 2018;6:e4431. pmid:29568701.
6. Gajdosik RL, Bohannon RW. Clinical measurement of range of motion. Review of goniometry emphasizing reliability and validity. Phys Ther. 1987;67(12):1867–72. pmid:3685114.
7. Youdas JW, Carey JR, Garrett TR. Reliability of measurements of cervical spine range of motion—comparison of three methods. Phys Ther. 1991;71(2):98–106. pmid:1989013.
8. Burdett RG, Brown KE, Fall MP. Reliability and validity of four instruments for measuring lumbar spine and pelvic positions. Phys Ther. 1986;66(5):677–84. pmid:3703932.
9. Nitschke J, Nattrass C, Disler P, Chou M, Ooi K. Reliability of the American Medical Association Guides' Model for Measuring Spinal Range of Motion: Its Implication for Whole-Person Impairment Rating. Spine. 1999;24(3):262–8. pmid:10025021.
10. Cronin J, Nash M, Whatman C. Assessing dynamic knee joint range of motion using siliconcoach. Phys Ther Sport. 2006;7(4):191–4. pmid:21663831.
11. Piriyaprasarth P, Morris ME, Winter A, Bialocerkowski AE. The reliability of knee joint position testing using electrogoniometry. BMC Musculoskeletal Disorders. 2008;9:6. PMCID: PMC2263037. pmid:18211714.
12. Keogh JWL, Espinosa HG, Grigg J. Evolution of smart devices and human movement apps: recommendations for use in sports science education and practice. Journal of Fitness Research. 2016;5(Special Issue ASTN–Q Conference):14–5.
13. Gratton C, Jones I. Theories, concepts and variables. Research methods for sports studies. 2nd ed. London: Routledge; 2010. p. 77–99.
14. Milani P, Coccetta CA, Rabini A, Sciarra T, Massazza G, Ferriero G. Mobile smartphone applications for body position measurement in rehabilitation: a review of goniometric tools. PM R. 2014;6(11):1038–43. pmid:24844445.
15. Rehan Youssef A, Gumaa M. Validity and reliability of smartphone applications for clinical assessment of the neuromusculoskeletal system. Expert Rev Med Devices. 2017;14(6):481–93. pmid:28462674.
16. Brink Y, Louw QA. Clinical instruments: reliability and validity critical appraisal. J Eval Clin Pract. 2012;18(6):1126–32. pmid:21689217.
17. Shiel F, Persson C, Furness J, Simas V, Pope R, Climstein M, et al. Dual energy X-ray absorptiometry positioning protocols in assessing body composition: A systematic review of the literature. J Sci Med Sport. 2018;21(10):1038–44. pmid:29588115.
18. Barrett E, McCreesh K, Lewis J. Reliability and validity of non-radiographic methods of thoracic kyphosis measurement: a systematic review. Man Ther. 2014;19(1):10–7. pmid:24246907.
19. Landis JR, Koch GG. The Measurement of Observer Agreement for Categorical Data. Biometrics. 1977;33(1):159–74. pmid:843571.
20. Fox B, Henwood T, Neville C, Keogh J. Relative and absolute reliability of functional performance measures for adults with dementia living in residential aged care. International Psychogeriatrics. 2014;26(10):1659–67. pmid:24989439.
21. Liaw L-J, Hsieh C-L, Lo S-K, Chen H-M, Lee S, Lin J-H. The relative and absolute reliability of two balance performance measures in chronic stroke patients. Disabil Rehabil. 2008;30(9):656–61. pmid:17852318.
22. Fleiss J. The measurement of interrater agreement: statistical methods for rates and proportions. New York: John Wiley & Sons; 1981.
23. Mukaka MM. A guide to appropriate use of Correlation coefficient in medical research. Malawi Med J. 2012;24(3):69–71. PMCID: PMC3576830. pmid:23638278.
24. McBride GB. A proposal for strength-of-agreement criteria for Lin's Concordance Correlation Coefficient. Hamilton, New Zealand; 2005.
25. McGinley JL, Baker R, Wolfe R, Morris ME. The reliability of three-dimensional kinematic gait measurements: A systematic review. Gait Posture. 2009;29(3):360–9. pmid:19013070.
26. Wilken JM, Rodriguez KM, Brawner M, Darter BJ. Reliability and minimal detectible change values for gait kinematics and kinetics in healthy adults. Gait Posture. 2012;35(2):301–7. pmid:22041096.
27. Shin SH, Ro du H, Lee OS, Oh JH, Kim SH. Within-day reliability of shoulder range of motion measurement with a smartphone. Man Ther. 2012;17(4):298–304. pmid:22421186.
28. Werner BC, Holzgrefe RE, Griffin JW, Lyons ML, Cosgrove CT, Hart JM, et al. Validation of an innovative method of shoulder range-of-motion measurement using a smartphone clinometer application. J Shoulder Elbow Surg. 2014;23(11):e275–82. pmid:24925699.
29. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol. 2009;62(10):e1–e34. pmid:19631507.
30. Pourahmadi MR, Bagheri R, Taghipour M, Takamjani IE, Sarrafzadeh J, Mohseni-Bandpei MA. A new iPhone® application for measuring active craniocervical range of motion in patients with non-specific neck pain: a reliability and validity study. Spine J. 2017. pmid:28890223.
31. Stenneberg MS, Busstra H, Eskes M, van Trijffel E, Cattrysse E, Scholten-Peeters GGM, et al. Concurrent validity and interrater reliability of a new smartphone application to assess 3D active cervical range of motion in patients with neck pain. Musculoskelet Sci Pract. 2018;34:59–65. pmid:29328979.
32. Werner D, Willson J, Willy R, Barrios J. Validity, Reliability, and Normative Values for Clinically-Assessed Frontal Tibial Orientation as a Measure of Varus-Valgus Knee Alignment. Int J Athl Ther Train. 2017;22(2):29–33.
33. Mejia-Hernandez K, Chang A, Eardley-Harris N, Jaarsma R, Gill TK, McLean JM. Smartphone applications for the evaluation of pathologic shoulder range of motion and shoulder scores—a comparative study. JSES Open Access. 2018;2(1):109–14. pmid:30675577.
34. Santos C, Pauchard N, Guilloteau A. Reliability assessment of measuring active wrist pronation and supination range of motion with a smartphone. Hand Surg Rehabil. 2017;36(5):338–45. pmid:28754335.
35. Modest J, Clair B, DeMasi R, Meulenaere S, Howley A, Aubin M, et al. Self-measured wrist range of motion by wrist-injured and wrist-healthy study participants using a built-in iPhone feature as compared with a universal goniometer. J Hand Ther. 2018. pmid:30017418.
36. Mehta SP, Barker K, Bowman B, Galloway H, Oliashirazi N, Oliashirazi A. Reliability, Concurrent Validity, and Minimal Detectable Change for iPhone Goniometer App in Assessing Knee Range of Motion. J Knee Surg. 2017;30(6):577–84. pmid:27894147.
37. Pereira LC, Rwakabayiza S, Lecureux E, Jolles BM. Reliability of the Knee Smartphone-Application Goniometer in the Acute Orthopedic Setting. J Knee Surg. 2017;30(3):223–30. pmid:27218479.
38. Bedekar N, Suryawanshi M, Rairikar S, Sancheti P, Shyam A. Inter and intra-rater reliability of mobile device goniometer in measuring lumbar flexion range of motion. J Back Musculoskelet Rehabil. 2014;27(2):161–6. pmid:24029833.
39. Furness J, Schram B, Cox AJ, Anderson SL, Keogh J. Reliability and concurrent validity of the iPhone® Compass application to measure thoracic rotation range of motion (ROM) in healthy participants. PeerJ. 2018;6:e4431. pmid:29568701; PMCID: PMC5845564.
40. Grondin F, Hall T, von Piekartz H. Does altered mandibular position and dental occlusion influence upper cervical movement: A cross-sectional study in asymptomatic people. Musculoskelet Sci Pract. 2017;27:85–90. pmid:27847242.
41. Jung SH, Kwon OY, Jeon IC, Hwang UJ, Weon JH. Reliability and criterion validity of measurements using a smart phone-based measurement tool for the transverse rotation angle of the pelvis during single-leg lifting. Physiother Theory Pract. 2018;34(1):58–65. pmid:28922042.
42. Kolber MJ, Hanney WJ. The reliability and concurrent validity of shoulder mobility measurements using a digital inclinometer and goniometer: a technical report. Int J Sports Phys Ther. 2012;7(3):306–13. pmid:22666645; PMCID: PMC3362980.
43. Pourahmadi MR, Taghipour M, Jannati E, Mohseni-Bandpei MA, Ebrahimi Takamjani I, Rajabzadeh F. Reliability and validity of an iPhone® application for the measurement of lumbar spine flexion and extension range of motion. PeerJ. 2016;4:e2355. pmid:27635328; PMCID: PMC5012335.
44. Quek J, Brauer SG, Treleaven J, Pua Y-H, Mentiplay B, Clark RA. Validity and intra-rater reliability of an Android phone application to measure cervical range-of-motion. J Neuroeng Rehabil. 2014;11:65. PMCID: PMC4021613. pmid:24742001.
45. Tousignant-Laflamme Y, Boutin N, Dion AM, Vallée C-A. Reliability and criterion validity of two applications of the iPhone™ to measure cervical range of motion in healthy participants. J Neuroeng Rehabil. 2013;10(1):69. pmid:23829201.
46. Ullucci PA Jr., Tudini F, Moran MF. Reliability of smartphone inclinometry to measure upper cervical range of motion. J Sport Rehabil. 2018:1–12. pmid:30040023.
47. Lim JY, Kim TH, Lee JS. Reliability of measuring the passive range of shoulder horizontal adduction using a smartphone in the supine versus the side-lying position. J Phys Ther Sci. 2015;27(10):3119–22. pmid:26644657; PMCID: PMC4668148.
48. Mitchell K, Gutierrez SB, Sutton S, Morton S, Morgenthaler A. Reliability and validity of goniometric iPhone applications for the assessment of active shoulder external rotation. Physiother Theory Pract. 2014;30(7):521–5. pmid:24654927.
49. Ramkumar PN, Haeberle HS, Navarro SM, Sultan AA, Mont MA, Ricchetti ET, et al. Mobile technology and telemedicine for shoulder range of motion: validation of a motion-based machine-learning software development kit. J Shoulder Elbow Surg. 2018;27(7):1198–204. pmid:29525490.
50. Behnoush B, Tavakoli N, Bazmi E, Nateghi Fard F, Pourgharib Shahi MH, Okazi A, et al. Smartphone and Universal Goniometer for Measurement of Elbow Joint Motions: A Comparative Study. Asian J Sports Med. 2016;7(2):e30668. pmid:27625754; PMCID: PMC5003314.
51. Cruz J, Morais N. Intrarater Agreement of Elbow Extension Range of Motion in the Upper Limb Neurodynamic Test 1 Using a Smartphone Application. Arch Phys Med Rehabil. 2016;97(11):1880–6. pmid:27207436.
52. Vauclair F, Aljurayyan A, Abduljabbar FH, Barimani B, Goetti P, Houghton F, et al. The smartphone inclinometer: A new tool to determine elbow range of motion? Eur J Orthop Surg Traumatol. 2018;28(3):415–21. pmid:29052011.
53. Lendner N, Wells E, Lavi I, Kwok YY, Ho PC, Wollstein R. Utility of the iPhone 4 Gyroscope Application in the Measurement of Wrist Motion. Hand (N Y). 2017:1558944717730604. pmid:28918662.
54. Pourahmadi MR, Ebrahimi Takamjani I, Sarrafzadeh J, Bahramian M, Mohseni-Bandpei MA, Rajabzadeh F, et al. Reliability and concurrent validity of a new iPhone® goniometric application for measuring active wrist range of motion: a cross-sectional study in asymptomatic subjects. J Anat. 2017;230(3):484–95. pmid:27910103.
55. Charlton PC, Mentiplay BF, Pua YH, Clark RA. Reliability and concurrent validity of a Smartphone, bubble inclinometer and motion analysis system for measurement of hip joint range of motion. J Sci Med Sport. 2015;18(3):262–7. pmid:24831757.
56. Derhon V, Santos RA, Brandalize M, Brandalize D, Rossi LP. Intra- and Inter-Examiner Reliability in Angular Measurements of the Knee with a Smartphone Application. Hum Movement. 2017;18(2):38–43.
57. Dos Santos RA, Derhon V, Brandalize M, Brandalize D, Rossi LP. Evaluation of knee range of motion: Correlation between measurements using a universal goniometer and a smartphone goniometric application. J Bodyw Mov Ther. 2017;21(3):699–703. pmid:28750987.
58. Hambly K, Sibley R, Ockendon M. Level of agreement between a novel smartphone application and a long arm goniometer for the assessment of maximum active knee flexion by an inexperienced tester. Int J Physiother Phys Rehabil. 2012;2:1–14.
59. Hancock GE, Hepworth T, Wembridge K. Accuracy and reliability of knee goniometry methods. J Exp Orthop. 2018;5(1):46. pmid:30341552; PMCID: PMC6195503.
60. Jones A, Sealey R, Crowe M, Gordon S. Concurrent validity and reliability of the Simple Goniometer iPhone app compared with the Universal Goniometer. Physiother Theory Pract. 2014;30(7):512–6. pmid:24666408.
61. Ockendon M, Gilbert RE. Validation of a novel smartphone accelerometer-based knee goniometer. J Knee Surg. 2012;25(4):341–5. pmid:23150162.
62. Romero Morales C, Calvo Lobo C, Rodriguez Sanz D, Sanz Corbalan I, Ruiz Ruiz B, Lopez Lopez D. The concurrent validity and reliability of the Leg Motion system for measuring ankle dorsiflexion range of motion in older adults. PeerJ. 2017;5:e2820. pmid:28070457.
63. Vohralik SL, Bowen AR, Burns J, Hiller CE, Nightingale EJ. Reliability and validity of a smartphone app to measure joint range. Am J Phys Med Rehabil. 2015;94(4):325–30. pmid:25299533.
64. Williams CM, Caserta AJ, Haines TP. The TiltMeter app is a novel and accurate measurement tool for the weight bearing lunge test. J Sci Med Sport. 2013;16(5):392–5. pmid:23491138.
65. Lendner N, Wells E, Lavi I, Kwok YY, Ho PC, Wollstein R. Utility of the iPhone 4 Gyroscope Application in the Measurement of Wrist Motion. Hand (N Y). 2017:1558944717730604. pmid:28918662.
66. Chia K, Sangeux M. Quantifying sources of variability in gait analysis. Gait Posture. 2017;56:68–75. pmid:28505546.
67. Mourcou Q, Fleury A, Franco C, Klopcic F, Vuillerme N. Performance Evaluation of Smartphone Inertial Sensors Measurement for Range of Motion. Sensors (Basel). 2015;15(9):23168–87. pmid:26389900; PMCID: PMC4610531.
68. Brodie MA, Walmsley A, Page W. The static accuracy and calibration of inertial measurement units for 3D orientation. Comput Methods Biomech Biomed Engin. 2008;11(6):641–8. pmid:18688763.
69. Seel T, Raisch J, Schauer T. IMU-based joint angle measurement for gait analysis. Sensors (Basel). 2014;14(4):6891–909. pmid:24743160; PMCID: PMC4029684.
70. Favre J, Jolles BM, Aissaoui R, Aminian K. Ambulatory measurement of 3D knee joint angle. J Biomech. 2008;41(5):1029–35. pmid:18222459.
71. Luinge HJ, Veltink PH, Baten CTM. Ambulatory measurement of arm orientation. J Biomech. 2007;40(1):78–85. pmid:16455089.
72. Vargas-Valencia LS, Elias A, Rocon E, Bastos-Filho T, Frizera A. An IMU-to-Body Alignment Method Applied to Human Gait Analysis. Sensors (Basel). 2016;16(12). pmid:27973406; PMCID: PMC5191070.
73. Malone JJ, Lovell R, Varley MC, Coutts AJ. Unpacking the Black Box: Applications and Considerations for Using GPS Devices in Sport. Int J Sports Physiol Perform. 2017;12(Suppl 2):S218–S26. pmid:27736244.