Reliability and validity of clinically accessible smartphone applications to measure joint range of motion: A systematic review

Measuring joint range of motion is an important skill for many allied health professionals. While the universal goniometer is the most commonly utilised clinical tool for measuring joint range of motion, the evolution of smartphone technology and applications (apps) provides the clinician with more measurement options. However, the reliability and validity of these smartphones and apps are still somewhat uncertain. The aim of this study was to systematically review the literature regarding the intra- and inter-rater reliability and validity of smartphones and apps to measure joint range of motion. Eligible studies were published in English peer-reviewed journals with full text available, involving the assessment of reliability and/or validity of a non-videographic smartphone app to measure joint range of motion in participants >18 years old. An electronic search using PubMed, Medline via Ovid, EMBASE, CINAHL, and SPORTSDiscus was performed. The risk of bias was assessed using a standardised appraisal tool. Twenty-three of the eligible 25 studies exceeded the minimum 60% score to be classified as low risk of bias, although 3 of the 13 criteria were not achieved in >50% of the studies. Most of the studies demonstrated adequate intra-rater or inter-rater reliability and/or validity for >50% of the range of motion tests across all joints assessed. However, this level of evidence appeared weaker for absolute (e.g. mean difference ± limits of agreement, minimal detectable change) than relative (e.g. intraclass correlation, correlation) measures, and for spinal rotation than for spinal extension, flexion and lateral flexion. Our results provide clinicians with sufficient evidence to support the use of smartphones and apps in place of goniometers to measure joint motion. Future research should address some methodological limitations of the literature, especially the inclusion of absolute and not just relative reliability and validity statistics.


Introduction
The measurement of joint range of motion (ROM) in static and dynamic, passive and active, human movements is an essential skill in the musculoskeletal assessments commonly performed by physiotherapists, as well as some strength and conditioning coaches, to examine joint function, detect joint asymmetry and evaluate treatment efficacy as an objective outcome measure [1]. In the present study, static ROM is defined as the range of a joint held motionless at either of its limits of movement. Dynamic ROM is the range through which a joint moves to and from the limits of movement. When a joint is moved passively by an assessor or external device, passive ROM is assessed. When a joint moves as a result of muscular contraction, active ROM is assessed. The universal goniometer has long been the preferred method of clinical ROM measurement (especially static ROM) due to its ease of use, low cost, and demonstrated reasonable levels of reliability and validity in numerous studies [2][3][4].
However, the universal goniometer is not without its drawbacks, even when assessing static joint ROM. When assessing static ROM such as the angle of hinge joints like the knee and elbow in adults, there may always be some degree of error due to the universal goniometer not typically being long enough to be aligned directly with the appropriate landmarks of both proximal and distal adjacent joints. Spinal rotation may also be difficult to measure with a universal goniometer due to the difficulty in palpating anatomical landmarks to use as a reference point [5][6][7]. It is perhaps no surprise then that reliability is reduced when measuring spinal compared to upper and lower limb motion with a universal goniometer [6][7][8][9]. These potential issues highlighted for the use of the universal goniometer in assessing static joint ROM may be further exacerbated in inexperienced clinicians, who are less able to correctly locate anatomical landmarks, as well as in the assessment of dynamic rather than static ROM [10,11].
The development of smartphone technology and software applications (apps), coupled with the ubiquity of smartphone ownership, now allows smartphones to measure joint ROM. Like the universal goniometer, smartphones are similarly easy to use, relatively inexpensive, and highly accessible [12]. Their inbuilt sensors such as the accelerometer, gyroscope, and magnetometer provide the necessary equipment to allow the smartphone to measure angles and displacements [12]. With the use of apps that can be downloaded onto the smartphone, these measurements can be transformed into meaningful assessment data such as joint ROM. One possible advantage of smartphone apps is that their use may circumvent some of the difficulties of using the universal goniometer regarding landmark identification and alignment. Whether smartphone apps can altogether overcome the aforementioned drawbacks of the universal goniometer may depend upon the technology used and the experience of the clinician with this alternative approach. The emergence of smartphone apps therefore presents clinical practitioners with a new set of tools to incorporate into clinical practice, especially for some of the more difficult joint ROMs to quantify.
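As background to how such apps derive angles from these sensors, the tilt of the phone relative to gravity can be estimated from a static accelerometer reading alone. The sketch below is purely illustrative (the function name is our own, and real apps would also apply sensor-fusion filtering):

```python
import math

def inclination_deg(ax, ay, az):
    """Tilt of the device's y-axis relative to the horizontal plane, in degrees,
    estimated from a static accelerometer reading of the gravity vector (ax, ay, az).
    This is the basic principle behind inclinometer-style ROM apps."""
    return math.degrees(math.atan2(ay, math.hypot(ax, az)))
```

For example, with the phone held so that gravity acts entirely along its y-axis, the reading (0, 1, 0) yields 90˚; lying flat, (0, 0, 1) yields 0˚.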
In order for clinicians to be willing to replace the universal goniometer (at least in some contexts) with smartphone apps as a tool to clinically assess ROM, the validity and reliability of smartphone apps must be comparable to, or better than, that of the universal goniometer. In psychometric terminology, reliability deals with the consistency in angle and displacement measures produced by smartphone apps, when used by multiple assessors (inter-rater), and when the same assessor performs multiple measurements (intra-rater) [13]. Validity deals with the extent to which the measurement obtained from one device, such as a smartphone app, correlates with or matches that of criterion laboratory devices such as 3-D motion capture or criterion clinical tools such as the universal goniometer [13].
On the topic of synthesizing the psychometric properties of smartphone apps, a number of systematic reviews have been conducted [14,15]. However, the review of Milani et al. [14] is considered to be outdated due to the relative explosion of research into human movement analysis apps and, as such, only included 12 studies assessing joint angle measurements. Further, while the review of Rehan Youssef and Gumaa [15] was more recent and well conducted in most aspects, there were several methodological limitations. First, their literature search was completed in August 2016 (including 15 studies and one case study assessing joint ROM) [15]. Second, they utilised a non-validated risk of bias assessment tool that they personally developed [15]. Third, there was a relative lack of reporting of specific reliability and validity data for each of the multiple actions that can occur at some joints such as the spine (trunk) and shoulder joints [15]. The relative lack of reporting specific data for each joint action is a major issue for clinicians, as it is quite possible that a particular smartphone and app may have sufficient reliability and/or validity for measuring some actions at a particular joint in certain planes of motion (e.g. flexion and extension in the sagittal plane) but that more complicated actions such as rotation in the transverse plane may be less reliable and/or valid.
The purpose of this systematic review was to address some of the limitations of the previous review in this area to better assist the clinician in identifying which smartphone apps may show adequate inter-rater and intra-rater reliability as well as validity for the measurement of ROM at particular joints and actions in clinical practice. This state-of-the-art review will assist clinical practitioners in deciding the appropriateness and choice of smartphone apps for clinical ROM assessment.

Search strategy
The protocol for this systematic review has not been registered. A database search of PubMed, Medline via Ovid, EMBASE, CINAHL, and SPORTSDiscus was initially performed on 20th October 2017 by two independent reviewers. This search was repeated on 20th December 2018 to maximise the currency of the findings of this review. The search strategy is described in Appendix 1.
Inclusion and exclusion criteria. The eligibility of studies retrieved from the search process was determined by two independent reviewers, with a third reviewer consulted to reach consensus where any discrepancies arose between the first two independent reviewers. The eligibility of the studies to be included in this review was determined by the following criteria: published in a peer-reviewed journal; measured human participants aged over 18 years; used a smartphone app to measure joint ROM and assessed the validity and/or reliability of these apps; published from 2007 onwards, as this was the year the iPhone was launched; and published in English with full text available. Case studies, abstracts only or grey literature were not included. Smartphone apps which required either image/video recordings and/or post data collection analyses to generate joint angles were excluded, as such an approach is unlikely to be used in clinical practice due to privacy concerns with the storage of video footage and the additional analysis time that would be required.
Quality assessment. The Critical Appraisal Tool (CAT), developed by Brink and Louw [16], was used to appraise the methodological quality of studies reporting a reliability and/or validity component. The included studies were rated on a set of 13 specific criteria that assessed a variety of methodological factors including subject and rater characteristics, rater blinding, testing order, criterion measure characteristics and statistical analyses performed [16]. Consistent with a recent study that has used the CAT [17], in order to satisfy Criteria 13 (Statistical Methods) the study had to report absolute reliability and/or validity statistics (e.g. SEM, MDC or MD±LoA) in addition to the more commonly reported relative reliability and/or validity statistics (e.g. r or ICC). As not all included studies assessed validity, not all the CAT criteria were relevant to each study. In this case, each validity item was scored as not applicable (NA) and that criterion was not included in the overall assessment of the particular study's risk of bias. A threshold of ≥ 60% was considered high quality and a score of < 60% was rated as poor quality, consistent with previous systematic reviews that have used the CAT [4,17,18].
The methodological quality of the studies identified by the search was assessed by two independent reviewers. Across all 13 items of the CAT, there was an overall agreement of 86.2% between the raters when reviewing the methodological quality of the 37 articles included in this review, resulting in a Cohen's unweighted Kappa statistic of 0.64, indicating good agreement between the two raters [19].
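For transparency, agreement statistics of this kind can be reproduced with a short script. The following is a minimal sketch (the function name and example ratings are ours, not the review's data) of percent agreement and unweighted Cohen's kappa for two raters' item-level judgements:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters' categorical ratings."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance-expected agreement from each rater's marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / n ** 2
    return (observed - expected) / (1 - expected)
```

With identical ratings the function returns 1.0, and when observed agreement equals chance agreement it returns 0.0.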
Data extraction. Data were obtained from studies that met the inclusion and exclusion criteria, including: the CAT assessment, participants, app and smartphone device, joint movement assessed, and the position of the participant whilst being assessed. Where applicable, data were extracted for intra-rater and inter-rater reliability as well as validity. Both relative and absolute reliability and validity statistics were reported where available to provide an index of the correlation or rank order (relative measure) and change/difference in the mean (absolute measure) [20,21]. Common measures of relative reliability and validity include the intra-class correlation coefficient (ICC), concordance correlation coefficient (CCC) and Pearson's product moment correlation (r). Alternatively, common measures of absolute reliability and validity include the standard error of measurement (SEM), minimal detectable change (MDC), mean difference (MD) and limits of agreement (LoA).
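The relationships between these absolute measures can be illustrated with a short sketch. The formulas below are the conventional ones (SEM = SD·√(1−ICC); MDC95 = 1.96·√2·SEM; Bland-Altman 95% LoA = mean difference ± 1.96·SD of the differences); the function names are our own:

```python
import math
from statistics import mean, stdev

def sem_from_icc(sd, icc):
    """Standard error of measurement: SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1 - icc)

def mdc95(sem):
    """Minimal detectable change at 95% confidence: 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2) * sem

def limits_of_agreement(device_a, device_b):
    """Bland-Altman mean difference (MD) and 95% limits of agreement
    for paired measurements from two devices or raters."""
    diffs = [a - b for a, b in zip(device_a, device_b)]
    md, sd_d = mean(diffs), stdev(diffs)
    return md, (md - 1.96 * sd_d, md + 1.96 * sd_d)
```

For instance, a between-trial SD of 5˚ with an ICC of 0.96 gives an SEM of 1˚ and an MDC95 of roughly 2.8˚.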
Data analysis. A critical narrative approach was applied to synthesize and analyse the data. For each measure, the following criteria were used to judge the level of intra-rater and inter-rater reliability and validity. For relative measures, the following criteria were used. ICC: Poor = ICC < 0.40, Fair = ICC 0.40-0.59, Good = ICC 0.60-0.74, Excellent = ICC ≥ 0.75 [22]; r: negligible r = 0-0.29, low r = 0.30-0.49, moderate r = 0.50-0.69, high r = 0.70-0.89, very high r = 0.90-1 [23]; and CCC: Poor CCC < 0.90, Moderate CCC = 0.90-0.94, Substantial CCC = 0.95-0.99, Almost perfect CCC > 0.99 [24]. For absolute measures of reliability and validity, the following criteria were used. SEM: Poor SEM > 5˚ and Good SEM ≤ 5˚ [25]; MDC: Poor MDC > 5˚ and Good MDC ≤ 5˚ [25,26]; and for LoA, a standard deviation threshold of 5˚ [25][26][27][28] multiplied by 1.96 to derive the 95% LoA bandwidth: Poor > ± 9.8˚ and Good ≤ ± 9.8˚.
Study characteristics. The included studies assessed joint ROM at the spine/trunk, knee, shoulder, wrist, elbow, ankle and hip. As the trunk and shoulder allow movement in more directions than other joints, the studies assessing the trunk and shoulder typically looked at a greater number of joint movements across the multiple planes of movement. For example, the studies assessing trunk motion typically looked at trunk flexion/extension, lateral flexion and axial rotation; with the studies assessing shoulder motion typically examining flexion, abduction, horizontal adduction as well as external/internal rotation. The majority of studies involved healthy participants, although some studies involved patients with neck pain [30,31], shoulder pathology [27,32,33], various upper limb injuries [34,35] or knee pain [36,37]. A relatively wide variety of smartphones, apps and criterion devices (for the assessment of validity) were utilised in the studies. The most common smartphones were iPhones, which were used in 28 studies, with the most common model being the iPhone 4, which was used in nine studies.
Samsung phones were used in another six studies, with one study also using an iPod. A wide variety of apps were utilised, with the most frequently used being the Clinometer (n = 5) and the Knee Goniometer (n = 3). All other apps were used in either one or two studies. For the 30 studies that looked at some aspect of validity, the validity of the app was most commonly compared to goniometers (n = 19), 3D motion capture (n = 5) or inclinometers (n = 4).

Critical appraisal
A critical appraisal of the included articles is summarised in Table 2. CAT scores ranged from 55% [65] to 100% [32]. Papers with 'NA' in their appraisal were not assessed against that particular criterion. Two studies were considered to be of low quality with a score < 60% [53,55], with one further study close to this threshold with an overall quality score of 62% [38]. Only two of the CAT criteria were achieved in less than 50% of the studies (Criteria Six: Order of Examination and Criteria 13: Statistical Methods). This contrasted with one criterion that was achieved in all studies (Criteria 10: Execution of the Index Test).

Reliability and validity
The reliability and validity of the assessments are summarised in Table 3. For the sake of simplicity, the following three text sections will summarise the key results for intra-rater reliability, inter-rater reliability and validity, respectively.
Intra-rater reliability. Twenty-six studies assessed aspects of intra-rater reliability, with 10 studies reporting relative metrics only, one study reporting absolute metrics only, and the remaining 15 studies reporting both relative and absolute metrics. Twenty-five of 26 studies reported excellent intra-rater relative reliability as defined by an ICC > 0.75 for more than 50% of the joint movements they examined, the only exception being Tousignant-Laflamme et al. [45]. However, this classification of poor relative intra-rater reliability for Tousignant-Laflamme et al. [45] was primarily due to the results of one examiner using an iPhone 3, compared to the other examiner who used an iPhone 4. If we were to consider all the studies that assessed relative intra-rater reliability with an iPhone 4, all six studies demonstrated that smartphone apps had adequate relative intra-rater reliability [1,34,37,42,48,64]. Thirteen of 17 studies reported good absolute intra-rater reliability as defined by a SEM or MDC < 5˚ or LoA < ± 9.8˚ for more than 50% of the joint movements they examined, with only four studies not satisfying this threshold [40,44,51,59]. It should however be noted that the study by Quek et al. [44] satisfied the criteria for more than 50% of the movements when quantified by the SEM (three of the four movements) but failed to do so when reliability was assessed by the MDC for all four movements.
Inter-rater reliability. Twenty-five studies assessed aspects of inter-rater reliability, in which 13 studies reported relative metrics only, and 12 studies reported both relative and absolute metrics. Twenty-three of 25 studies demonstrated excellent inter-rater reliability as defined by an ICC > 0.75 for more than 50% of the joint movements they examined, with only two studies not satisfying this threshold for relative inter-rater reliability [37,45]. Six of 11 studies reported good absolute inter-rater reliability as defined by a SEM or MDC < 5˚ or LoA < ± 9.8˚ for more than 50% of the joint movements they examined, with five studies not satisfying this criterion [27,30,31,39,42]. While Pourahmadi et al. [30] was deemed to not meet this threshold of absolute inter-rater reliability, this was based on all four MDC values being > 5˚, although the SEM values for the same movements were all < 5˚.
Validity. Thirty studies measured some aspect of validity, of which seven studies reported relative metrics only, five studies reported absolute metrics only, and 18 studies reported both relative and absolute metrics. Twenty of 25 studies observed excellent/substantial relative validity as defined by ICC > 0.75, r > 0.9 or CCC > 0.95 for more than 50% of the joint movements examined, with five studies not meeting this criterion [28,30,36,45,54]. Seventeen of 23 studies observed excellent/substantial absolute validity as defined by SEM or MDC < 5˚ or LoA < ± 9.8˚ for more than 50% of the joint movements they examined, with six studies not meeting this threshold of absolute validity [27,34,36,42,53,57].

Discussion
This study systematically reviewed the literature for studies which examined the reliability and/or validity of smartphones and apps to quantify joint ROM. Thirty-seven studies were found to be eligible, with the studies assessing joint ROM across most of the body's major joints. Specifically, the most common joints assessed were the spine/trunk (n = 11), knee (n = 9) and shoulder (n = 6), with a smaller number of studies examining the wrist (n = 4), elbow (n = 3), ankle (n = 3) and hip (n = 1) joints. The primary result of the systematic review was that the apps generally demonstrated adequate intra-rater and inter-rater reliability as well
as validity when compared to criterion devices such as goniometers, inclinometers and 3D motion capture. However, for the reliability outcomes there was a trend for these results to be somewhat stronger for relative (e.g. ICC, r) than absolute (e.g. SEM, MDC) measures.
The tendency for the relative measures to be stronger than absolute measures is something that needs to be clearly understood by the clinician. Historically, many reliability and/or validity studies have only reported relative statistics such as the ICC and Pearson's product moment correlation [3,7,8]. Relative statistical measures are typically used to describe the resemblance of two or more units within a group (e.g. the similarity of measurements undertaken by two clinicians) as a function of the resemblance between different groups. The ICC is thus operationalized as a ratio between two variance measures [66]. To illustrate, the inter-rater reliability ICC of Pourahmadi et al. [30] is derived from the ratio of (1) the variance between two measurements from the same participant, repetition, and session, to (2) the variance between two measurements from the same participant, repetition, and session from different raters [66]. While these relative statistics provide important information regarding the correlation or rank order of two or more measurements, they provide no detail regarding the magnitude of change/difference in the measurement across these time points [20,21]. In contrast, absolute statistical measures of reliability/validity simply report the resemblance of two or more units within a group; in other words, they simply represent the individual variance components [66]. Clinically, an ICC is useful for a manager wanting to train a team of clinicians in the use of a mobile app, where the aim is to achieve a value as close to one as possible. However, the individual variance components of between-repetition and between-session variation are more useful for the day-to-day practice of individual clinicians. Knowing the inherent variation in outcomes between each measurement repetition and between clinical visits allows a clinician to judge the clinical importance of any kinematic change using a mobile app.
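To make the variance-ratio interpretation concrete, a one-way random-effects ICC can be computed directly from ANOVA mean squares. The following is a simplified sketch (the one-way ICC(1,1) model, not the exact two-way model used in the studies reviewed; the function name is ours):

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an n-subjects x k-raters table,
    expressed as a ratio of variance components via ANOVA mean squares."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n * k)
    subject_means = [sum(row) / k for row in ratings]
    # Between-subjects mean square (signal) and within-subject mean square (error)
    msb = k * sum((m - grand) ** 2 for m in subject_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, subject_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

When the within-subject (error) variance shrinks relative to the between-subjects variance, the ICC approaches one, which is why a large ICC can coexist with a measurement error that is still clinically meaningful in absolute degrees.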
With respect to the validity of smartphones and apps to quantify ROM, it was apparent that the majority of studies included in this review assessed the validity of the smartphone app against a universal goniometer as the criterion test. However, it could be argued that the most appropriate criterion measure to determine joint ROM would be radiographic images such as x-ray or 3D motion capture. Only five studies utilised 3D motion capture as the criterion method [31,41,44,55,63]. All five of these studies demonstrated that the apps had adequate levels of relative validity with respect to 3D motion capture [31,41,44,55,63], with a similar result observed for all three of the studies assessing absolute metrics, which also reported adequate validity [31,41,44]. It should also be noted that Charlton et al. [55] compared the relative validity of their smartphone app and an inclinometer to the criterion method of 3D motion capture for assessing hip joint ROM. Based on the ICC threshold of 0.75 for sufficient validity, both devices were valid, with the smartphone exceeding this threshold for five of the six joint ROMs and the inclinometer for all six. The use of 3D motion capture as a criterion measure may be more important when assessing dynamic rather than static joint ROM due to the inherent difficulties in maintaining correct position of the universal goniometer on the joint centre and its alignment with the proximal and distal joints during movement, especially at high movement velocities [11]. All five 3D motion capture validity studies included in the present review assessed static ROM [31,41,44,55,63], i.e. the range was recorded while the joint was positioned statically at its limit of motion. The lack of assessment of apps on dynamic ROM may not be surprising given that assessors need a joint to be held transiently in a static position to record the range from the app.
For an app to measure dynamic ROM, it needs to sample the joint's motion throughout the movement task, and these data need to be post-processed to extract ROM parameters, similar to how a 3D motion capture system quantifies ROM. Future studies are warranted to quantify the validity and reliability of smartphone apps in the assessment of dynamic ROM.
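As a trivial illustration of the post-processing such an app would need, extracting ROM from a sampled angle time series reduces to the excursion between the signal's extremes (the function name is ours; a real implementation would first low-pass filter the raw sensor-derived angles):

```python
def dynamic_rom(angle_series):
    """ROM (degrees) as the excursion of a sampled joint-angle time series.
    Assumes the series has already been filtered for sensor noise."""
    return max(angle_series) - min(angle_series)
```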
Another issue of major importance to clinicians is whether the smartphones and apps display adequate reliability and validity across all joints, joint actions and populations. It was heartening to see that most of these variables did not seem to influence the reliability and validity of the apps in measuring joint ROM. However, there was clear variation in the reliability and validity of different spinal joint movements, as well as a tendency for differences in reliability and validity between healthy and clinical populations and, to a lesser extent, between smartphone models, of which the clinician should be aware.
When examining the 11 studies examining spinal ROM, it appeared that the assessment of flexion, extension and lateral flexion typically exhibited adequate relative reliability and/or validity [30,31,[38][39][40][41][42][43][44][45][46], although not all of these studies assessed absolute reliability and validity. Compared to the assessment of spinal flexion, extension and lateral flexion, the assessment of spinal axial rotation did not exhibit adequate reliability and validity in four [30,40,44,45] of the nine studies. On this basis, it would appear that while the apps used within the studies reviewed in this manuscript typically had adequate reliability and validity for measuring spinal flexion, extension and lateral flexion, they are somewhat more questionable for measuring spinal rotation. Nevertheless, a recent study by Furness et al. [5] demonstrated comparable (or slightly better) reliability of an iPhone 6 and the Compass app relative to the universal goniometer for assessing thoracic rotation in healthy individuals. Unfortunately, while this study also demonstrated strong correlations between the Compass app and the universal goniometer, absolute validity was again inadequate, as the limits of agreement between the two devices were ~25˚ [5]. Such findings suggest that the ability to perform a valid assessment of spinal rotation using devices that are feasible in clinical practice, be it goniometers or smartphone-based apps, may still remain somewhat questionable. Further research and/or additional clinical training into the use of these devices in this context is therefore warranted.
The comparatively poorer reliability and validity of smartphone apps measuring ROM in axial rotation compared to flexion-extension and lateral flexion could be attributed to several factors. First is the difference in smartphone sensor performance in different cardinal planes [67]. Using performance testing of commercial Inertial Measurement Units (IMUs) as an example, the static error of the Xsens MT9 IMU was three times greater in the yaw (axial rotation) direction than in the other two cardinal planes [68]. Second is the reliance on different components of the smartphone sensor (e.g. magnetometer vs gyroscope) when measuring ROM in different cardinal planes. Magnetometers are required when testing axial rotation in an anti-gravity position (e.g. sitting) [44,45]. Compared to gravity-dependent gyroscopes, magnetometers are more sensitive to signal distortion arising from environmental magnetic fields, potentially reducing their validity and reliability. In contrast, Pourahmadi et al. [30] tested cervical rotation in supine using the gravity-dependent gyroscope component of the smartphone sensor. This could explain the better validity and reliability of Pourahmadi et al. [30] compared to two other studies that reported poor reliability and validity [44,45]. Third is the issue of axis misalignment, which occurs when the sensor's coordinate axes are not aligned with anatomically meaningful axes [69]. There may be greater potential for axis misalignment during axial rotation than in other movement directions [70][71][72]. Given that spinal axial rotation commonly couples with secondary movement in other directions, maintaining a pure axial rotation may be difficult.
While most of the studies reviewed in this manuscript involved healthy participants, some recruited patients with joint pain. These studies included groups of individuals with neck pain [30,31], shoulder pathology [27,32,33], various upper limb injuries [34,35] or knee pain [36,37]. The intra-rater and inter-rater reliability of the apps in these clinical populations was typically adequate in these nine studies, with the exception of Pereira et al. [37]. The validity of the apps in these populations was sufficiently high in six of the nine studies. For the three studies with insufficient validity [34,36,37], a variety of statistical approaches were used, with the results being CCC = 0.50-0.72, r = 0.68 and LoA ranging from -10˚ to +17.3˚ for the measured joint actions. Such results may suggest that using smartphones and apps can be quite reliable in a range of population groups, including some clinical populations presenting with musculoskeletal pathology.
The clinician should also be aware of how the make and model of the smartphone and the actual app may influence the reliability of assessment, as well as how these two factors and the criterion test selected may influence validity. While there was some variability between studies in the smartphone used (29 studies using iPhones, most commonly the iPhone 4 or 5), there was little evidence of any effect of smartphone, with the exception of one study [45]. Specifically, Tousignant-Laflamme et al. [45] reported adequate relative intra-rater reliability for an examiner with an iPhone 4, but not for an examiner with an iPhone 3, with this ultimately resulting in poor inter-rater reliability. Further, the two examiners were unable to demonstrate adequate validity when compared to the CROM device, which is considered a criterion measure for measuring cervical ROM. Such results suggest that clinicians should use more recently developed smartphones, which are more likely to have improved sensor capacity than older smartphone models such as the iPhone 3.
With respect to the number of apps included in this review, there was a wide variety examined in these 37 studies. This was clearly demonstrated as only two apps were used in more than two studies, these being the Clinometer (n = 5) and the Knee Goniometer (n = 3). The wide diversity of apps utilised in these studies, and the general support for these apps' reliability and validity demonstrated in this review, suggest that the clinician has multiple options when selecting the most appropriate app for measuring a particular joint ROM. However, it would still be recommended that clinicians utilise apps that have been demonstrated to be reliable and valid for measuring the particular joint action they wish to measure. We would also recommend that researchers continue to examine the reliability and validity of more recently developed apps and smartphones to determine if they offer advantages over those previously developed and assessed in the scientific literature.
This systematic review has several strengths and limitations that need to be acknowledged. A primary strength of this review in comparison to the existing literature [14,15] is that it provides more detailed reporting of key aspects of the methodology and of the actual relative and absolute intra-rater reliability, inter-rater reliability and validity outcomes for each joint action assessed in each study within our summary tables. The current study also appears to be the first systematic review on this topic to use a validated tool to assess the included studies' methodological quality. This assessment determined that only two of the 25 studies were of low quality, based on a CAT score of less than 60% [53,55]. Further, only two of the 13 CAT criteria were achieved in less than 50% of the studies (Criteria Six: Order of Examination and Criteria 13: Statistical Methods). The low score for Criteria Six (Order of Examination) reflected the lack of randomisation and the resulting potential for a learning or fatigue effect in many of the studies. The low score for Criteria 13 (Statistical Methods) tended to reflect the fact that most studies only reported relative reliability and/or validity statistics (e.g. r or ICC) without also reporting comparable absolute reliability and/or validity statistics (e.g. SEM, MDC or MD±LOA). As these CAT criteria reflect highly important characteristics of strong psychometric study design, improvement in these areas would further strengthen the level of evidence described in this review.
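The distinction between relative and absolute statistics noted above can be made concrete with the standard formulae SEM = SD × √(1 − ICC), MDC95 = 1.96 × √2 × SEM, and the Bland-Altman limits of agreement (mean difference ± 1.96 × SD of the differences). The following is a minimal illustrative sketch; all measurement values and the ICC are hypothetical and are not drawn from any of the reviewed studies:

```python
import math
import statistics

# Hypothetical paired knee ROM measurements (degrees) from two raters
rater_a = [45.0, 52.0, 60.0, 48.0, 55.0, 50.0]
rater_b = [47.0, 50.0, 62.0, 47.0, 57.0, 49.0]

# Absolute agreement (Bland-Altman): mean difference +/- 1.96 SD of differences
diffs = [a - b for a, b in zip(rater_a, rater_b)]
mean_diff = statistics.mean(diffs)
sd_diff = statistics.stdev(diffs)
loa_lower = mean_diff - 1.96 * sd_diff
loa_upper = mean_diff + 1.96 * sd_diff

# SEM and MDC95 derived from an assumed relative reliability coefficient (ICC)
icc = 0.90  # hypothetical ICC, for illustration only
pooled_sd = statistics.stdev(rater_a + rater_b)
sem = pooled_sd * math.sqrt(1 - icc)          # standard error of measurement
mdc95 = 1.96 * math.sqrt(2) * sem             # minimal detectable change (95%)

print(f"MD +/- LOA: {mean_diff:.1f} ({loa_lower:.1f} to {loa_upper:.1f}) deg")
print(f"SEM: {sem:.1f} deg, MDC95: {mdc95:.1f} deg")
```

The key clinical point, as the paragraph above notes, is that a high ICC alone says little about whether an observed change in ROM exceeds measurement error; the SEM, MDC and limits of agreement express that error in degrees.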
The primary limitation of our review process concerned the manner in which sufficient reliability and validity were defined. Specifically, we described a particular app as suitably reliable and/or valid when recommended statistical thresholds were achieved in more than 50% of the movements examined in each study. While this approach is useful for summarising the reliability and/or validity of an app, it is perhaps overly simplistic given the relatively high between-study variation in populations, joints, joint actions, smartphones and apps (including software updates). The potential negative influence of software updates on the reliability and validity of apps has also recently been highlighted as a major issue in the use of global positioning systems (GPS) in sport [73]. Given this somewhat arbitrary greater-than-50% threshold, we suggest that clinicians still examine the actual data summarised in this systematic review, as different joint motions may demonstrate differences in their reliability and validity, even when assessed in the same population with the same smartphone and app. A final limitation is that we cannot be certain that all eligible articles were identified and included in this review.

Conclusion
The results of this systematic review provide relatively strong evidence regarding the intra-rater reliability, inter-rater reliability and validity of smartphones and apps for assessing joint ROM, with these results tending to be observed across multiple joints, joint actions, populations, smartphones and apps. Such results suggest that clinicians may be able to use a relatively wide variety of smartphones and apps to quantify joint ROM. However, when absolute validity was assessed, there were often reasonably large differences between the angle determined by an app and that of a criterion measure such as 3D motion capture, goniometry or inclinometry. On this basis, it is imperative that the clinician does not switch between different assessment devices (such as a goniometer and a smartphone app) when assessing an individual across multiple time points. Clinical researchers should also aim to develop more reliable and valid protocols for using smartphones and apps, while continuing to collaborate with smartphone and app developers to further improve their reliability and validity for assessing joint ROM.