
Accuracy and reliability of self-administered visual acuity tests: Systematic review of pragmatic trials

  • Arun James Thirunavukarasu,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    ajt205@cantab.ac.uk

    Affiliations School of Clinical Medicine, University of Cambridge, Cambridge, United Kingdom, Corpus Christi College, University of Cambridge, Cambridge, United Kingdom

  • Refaat Hassan,

    Roles Data curation, Investigation, Methodology, Validation, Visualization, Writing – review & editing

    Affiliations School of Clinical Medicine, University of Cambridge, Cambridge, United Kingdom, Sidney Sussex College, University of Cambridge, Cambridge, United Kingdom

  • Aaron Limonard,

    Roles Data curation, Investigation, Writing – review & editing

    Affiliations School of Clinical Medicine, University of Cambridge, Cambridge, United Kingdom, St John’s College, University of Cambridge, Cambridge, United Kingdom

  • Shalom Vitreous Savant

    Roles Data curation, Investigation, Writing – review & editing

    Affiliations School of Clinical Medicine, University of Cambridge, Cambridge, United Kingdom, St John’s College, University of Cambridge, Cambridge, United Kingdom

Abstract

Background

Remote self-administered visual acuity (VA) tests have the potential to allow patients and non-specialists to assess vision without eye health professional input. Validation in pragmatic trials is necessary to demonstrate the accuracy and reliability of tests in relevant settings to justify deployment. Here, published pragmatic trials of these tests were synthesised to summarise the effectiveness of available options and appraise the quality of their supporting evidence.

Methods

A systematic review was undertaken in accordance with a preregistered protocol (CRD42022385045). The Cochrane Library, Embase, MEDLINE, and Scopus were searched. Screening was conducted according to the following criteria: (1) English language; (2) primary research article; (3) visual acuity test conducted out of eye clinic; (4) no clinical administration of remote test; (5) accuracy or reliability of remote test analysed. There were no restrictions on trial participants. Quality assessment was conducted with QUADAS-2.

Results

Of 1227 identified reports, 10 studies were ultimately included. One study was at high risk of bias and two studies exhibited concerning features of bias; all studies were applicable. Three trials—of DigiVis, iSight Professional, and Peek Acuity—from two studies suggested that accuracy of the remote tests is comparable to clinical assessment. All other trials exhibited inferior accuracy, including conflicting results from a pooled study of iSight Professional and Peek Acuity. Two studies evaluated test-retest agreement—one trial provided evidence that DigiVis is as reliable as clinical assessment. The three most accurate tests required access to digital devices. Reporting was inconsistent and often incomplete, particularly with regard to describing methods and conducting statistical analysis.

Conclusions

Remote self-administered VA tests appear promising, but further pragmatic trials are indicated to justify deployment in carefully defined contexts to facilitate patient or non-specialist led assessment. Deployment could augment teleophthalmology, non-specialist eye assessment, pre-consultation triage, and autonomous long-term monitoring of vision.

Introduction

Visual acuity (VA) is a measure of the functional resolution of vision, and is assessed before every ophthalmological, optometric, and orthoptic examination to inform decision making. Generally, distance VA assessment involves a clinician appraising the smallest optotype the patient can read while at a standard distance away from an illuminated chart. VA is reported in one of three forms: Snellen fraction, where the numerator denotes the distance between participant and chart and denominator denotes the distance at which ‘ideal’ sight can distinguish the smallest letter identified by the patient (6/6 or 20/20 being ideal, higher denominators corresponding to worse vision); logarithm of the minimum angle of resolution (logMAR) expressed as a real number (0 logMAR being ideal, higher numbers corresponding to worse vision); or letters read, a positive integer where 1 letter is the equivalent of 0.02 logMAR progression (85 letters being ideal, lower numbers corresponding to worse vision). The latter two measures are generated by using the Early Treatment for Diabetic Retinopathy Study (ETDRS) chart, whereas the former measure is generated when using the older Snellen chart. Below, VA is referred to in terms of logMAR throughout.
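The three notations described above are interconvertible. The following is a minimal illustrative sketch (not code from the study), assuming the standard relations stated in the text: logMAR is the base-10 logarithm of the Snellen denominator over numerator, and each ETDRS letter is worth 0.02 logMAR with 85 letters at 0 logMAR:

```python
import math


def snellen_to_logmar(numerator: float, denominator: float) -> float:
    """Convert a Snellen fraction (e.g. 6/12) to logMAR.

    A 6/6 (20/20) eye corresponds to 0 logMAR; higher logMAR
    values correspond to worse vision.
    """
    return math.log10(denominator / numerator)


def logmar_to_letters(logmar: float) -> int:
    """Convert logMAR to ETDRS letters read.

    85 letters corresponds to 0 logMAR; 1 letter = 0.02 logMAR,
    so each 0.1 logMAR line is 5 letters.
    """
    return round(85 - 50 * logmar)


print(snellen_to_logmar(6, 6))   # 0.0 (ideal vision)
print(snellen_to_logmar(6, 12))  # ~0.3 (one doubling of the minimum angle)
print(logmar_to_letters(0.3))    # 70
```

For example, a patient reading the 6/12 line scores roughly 0.3 logMAR, or 70 letters—the same measurement in each of the three forms.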

Self-administered VA tests provide patients with a means of monitoring their vision without having to be examined by an eye health professional. These tests may augment telehealth services, as VA assessment is an integral part of any eye examination. Adoption of self-administered VA tests may reduce the burden on strained ophthalmology resources by enabling non-specialists to triage with knowledge of visual function; by improving referral quality with provision of VA data; and by facilitating autonomous monitoring of vision by patients with chronic eye conditions (who otherwise require frequent clinic appointments) [1–3].

Many remote visual acuity tests have been developed, but most have been validated with administration in real time by a trained clinician, as required with conventional VA assessment with Snellen or ETDRS chart [4–6]. As the requirement for clinical examination limits the usefulness of ophthalmic telehealth services, platforms facilitating further examination without physical attendance will serve as important components of any improved suite for remote consultation [3,6]. Pragmatic trials are essential to demonstrate that remote tests are useful for generating actionable VA data without skilled supervision—artificial environments are expected to inflate accuracy and reliability [7,8]. Validation data generated in unrealistic settings provides weaker justification for subsequent clinical deployment than results generated in real-world conditions [8]. The aim of pragmatic trials is to gauge effectiveness—performance in real-world conditions—rather than efficacy, or performance in an ideal environment.

Here, a systematic review was undertaken to identify pragmatic trials of remote self-administered VA tests; appraise the quality of their validation data; and compare these tests to conventional visual acuity testing. Specifically, the accuracy and reliability of VA self-tests were gauged to help establish the clinical utility of available platforms. All trials were pragmatic in that remote tests were administered without real-time clinical input, away from idealised but artificial conditions. This evidence synthesis serves as a point of reference for clinicians, patients, and policy makers interested in identifying appropriate platforms to facilitate visual acuity assessment without requiring eye health service involvement.

Materials and methods

Search and screening

This systematic review adhered to PRISMA guidance, according to a prospectively registered protocol on PROSPERO (identifier CRD42022385045). On 23 December 2022, the Cochrane Library, Embase (via OVID), MEDLINE (via PubMed), and Scopus were searched for the following: ("visual acuity") AND ("remot*" OR "portable" OR "home based") AND ("test" OR "assessment" OR "examination"). Previously published reviews were also searched for relevant studies [4–6,9]. Duplicates were removed by a single researcher using Zotero (version 6.0.19-beta.15+6374aea1c; Digital Scholar, Vienna, Virginia, USA). Abstract and full text screening were undertaken by two independent researchers in Rayyan, with a third researcher acting as arbiter to resolve disagreement [10]. The following inclusion criteria were employed, with no restrictions on participant characteristics or test modality:

  1. Record is written in the English language
  2. Record is a peer-reviewed primary research article
  3. Study examines a visual acuity test undertaken out of eye clinic (i.e. remotely).
  4. Remote test does not require a clinically trained administrator (i.e. patient-led).
  5. Remote patient-led test is compared to clinical or repeated remote visual acuity measurements to assess accuracy or reliability, respectively.

Data extraction and analysis

Risk of bias and concerns regarding applicability were appraised with the QUADAS-2 framework by a single researcher, with a second researcher verifying each appraisal [11]. One researcher undertook data extraction for each included study, with a second independent researcher verifying every entry. Data gathered included details about participants, index tests, reference tests, measured outcomes, and study designs; and for index test-retest reliability and accuracy (i.e. comparison to clinical reference test), the bias and limits of agreement of Bland-Altman plots, intraclass correlation coefficients (ICCs) and p value, and t-test p value. ICCs were only reported for test-retest agreement, as they are a poor method for comparing different tests [12,13]. For consistency, bias was expressed as the mean difference between reference and index test, such that positive values indicated that the reference test tended to provide a higher value (i.e. where the index test overestimated visual acuity). Where studies provided individual participants’ VA data without further analysis, the two-way random effects intraclass correlation coefficient (ICC) was calculated, and unpaired two-samples t-test was conducted. For studies exhibiting Bland-Altman plots without reporting figures for the bias and limits of agreement, manual interpolation was conducted with WebPlotDigitizer (version 4.6.0; Ankit Rohatgi, Pacifica, California, USA). Meta-analysis was planned but ultimately precluded by a lack of trials testing the same platform. Data extraction and quality assessment were conducted in Microsoft Excel for Mac (version 16.57; Microsoft Corporation, Redmond, Washington, USA). Data analysis was conducted in R (version 4.1.2; R Foundation for Statistical Computing, Vienna, Austria) [14–16]. Tables were produced in Microsoft Excel for Mac. Figures were produced in R and modified with Affinity Designer (version 1.10.4; Pantone LLC, Carlstadt, New Jersey, USA).
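The Bland-Altman convention adopted above can be sketched as follows. This is an illustrative reimplementation (the study's analysis was conducted in R), with hypothetical paired logMAR values: the sign convention matches the text, so a positive bias means the reference test tended to read higher (worse) logMAR, i.e. the index test overestimated visual acuity.

```python
from statistics import mean, stdev


def bland_altman(reference, index):
    """Compute bias and 95% limits of agreement between two methods.

    Bias is mean(reference - index); the limits of agreement are
    bias +/- 1.96 standard deviations of the paired differences.
    Returns (bias, LLOA, ULOA).
    """
    diffs = [r - i for r, i in zip(reference, index)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd


# Hypothetical paired logMAR measurements (not study data)
ref = [0.00, 0.10, 0.30, 0.20, 0.50, 0.40]  # clinical reference test
idx = [0.02, 0.08, 0.24, 0.22, 0.44, 0.40]  # remote index test
bias, lloa, uloa = bland_altman(ref, idx)
print(f"bias={bias:+.3f} logMAR, 95% LOA=({lloa:+.3f}, {uloa:+.3f})")
```

A confidence interval for the bias that crosses zero, and limits of agreement within the variability of clinical chart testing, would then support agreement between the two methods.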

Results

The literature search and screening process is summarised in Fig 1. Ten studies were included from 1227 identified reports [17–26]. Fulfilling criterion (3) necessitated that trials were pragmatic in that remote tests were conducted out of the eye clinic [27]. Hyperacuity tests and survey-based self-assessment were excluded [28–31]. To fulfil criterion (4), tests had to be patient-led: while tests administered by parents for paediatric patients were acceptable, involvement of clinicians or other trained personnel justified exclusion [32–34]. Criterion (5) mandated exclusion of studies of tests that did not provide visual acuity measurements comparable to conventional clinical assessment or repeated remote measurement [35–37].

Fig 1. PRISMA flowchart.

Illustrating the literature search, screening process, and articles included in this review. PRISMA = Preferred Reporting Items for Systematic Reviews and Meta Analyses; MEDLINE = Medical Literature Analysis and Retrieval System Online.

https://doi.org/10.1371/journal.pone.0281847.g001

Study characteristics are summarised in Table 1. Most studies were prospective cross-sectional surveys, with just one retrospective case-control study. Six of ten studies reported conflicts of interest, suggesting that many validation studies were not undertaken by research teams independent of the trialled product—a potential source of reporting bias. However, none of the included studies received private funding, such as from product manufacturers. The number of participants ranged from 7 to 148 (median = 50.5). Reported participant age ranged from 3 to 95 years old—spanning most of the paediatric and adult ophthalmology case load. Most trialled tests required access to digital devices: exceptions required a paper chart or a custom-built e-device, both provided by the investigators [19,26]. One study required patients to print a physical chart sent to their digital device [24]. Risk of bias judged with QUADAS-2 was generally low, as illustrated in Figs 2 and S1. No major concerns regarding applicability were highlighted during QUADAS-2 appraisal, likely due to stringent inclusion criteria ensuring all studies applied patient-led tests remotely.

Fig 2. Risk of bias and inapplicability appraisals for each included study.

Appraised with the QUADAS-2 framework. QUADAS-2 = Quality Assessment of Diagnostic Accuracy Studies 2; RoB = risk of bias; CrA = concerns regarding applicability.

https://doi.org/10.1371/journal.pone.0281847.g002

Table 1. Characteristics of each of the included studies.

https://doi.org/10.1371/journal.pone.0281847.t001

All studies gauged accuracy by comparing remote measurements to assessment in clinic (Table 2). The reference test was not consistently defined in three studies [18,22,25], and Snellen chart was used in four studies [19–21,23,26], as opposed to the gold-standard Early Treatment for Diabetic Retinopathy Study (ETDRS) chart, which was used consistently in just one study [24]. One study trialling FrACT provided individualised data which enabled calculation of the bias and intraclass correlation coefficient, but its small sample size and retrospective design were discussed by the authors as significant limitations necessitating further validation; the authors did not calculate these statistics themselves, as their clinical measurements were not recent enough to serve as a fair control [18]. One trial of a custom e-device did not report any statistical analysis or individual data [26].

Eight studies provided Bland-Altman statistics, corresponding to trials of twelve remote VA tests (Fig 3) [17,19–25]. Of these, six studies (ten trials) provided 95% lower and upper limits of agreement (LLOA and ULOA respectively) [17,19–22,25]. LOA of iSight Professional, Peek Acuity, and DigiVis lay within ±0.2 logMAR in three trials [17,25]. The remaining seven trials corresponded to University of Arizona/Banner Eye Health Chart, Verana Vision Test, Farsight.care, Macustat, Letter Distance Chart PDF document (twice), and iSight Professional or Peek Acuity pooled together [19–22]. One study did not report the bias; of the remaining nine studies, three (containing six trials) provided 95% confidence intervals [17,19,25]. iSight Professional and Peek Acuity exhibited significantly higher bias than 0 logMAR (index test estimated worse acuity) [17]; University of Arizona/Banner Eye Health Chart, Verana Vision Test, and Farsight.care exhibited significantly lower bias than 0 logMAR (index test estimated better acuity) [19]; and DigiVis exhibited no statistically significant bias [25]. Two studies (four trials) reported correlation coefficients, but these cannot be used to appraise agreement between different tests [12]. Four studies’ (five trials) t-tests comparing measurement methods all reported p-values above 0.25 [18,20–23].

Fig 3. Forest plot summarising Bland-Altman analyses of accuracy.

LLOA = lower 95% limit of agreement; ULOA = upper 95% limit of agreement; PDF = portable document format; ETDRS = Early Treatment of Diabetic Retinopathy Study; logMAR = logarithm of the minimum angle of resolution.

https://doi.org/10.1371/journal.pone.0281847.g003

Two trials reported test-retest reliability: one trialling DigiVis [25], and one trialling iSight Professional and Peek Acuity in a pooled analysis [22]. The former reported Bland-Altman statistics and ICC, whereas the latter only reported the coefficient of repeatability (CoR) (Table 3). DigiVis exhibited a bias equivalent to 0, LOA of ±0.12 logMAR (6 letters), and an ICC of 0.922 [25]. In the pooled analysis, iSight Professional and Peek Acuity exhibited a CoR of 0.03 logMAR [22].

Discussion

To justify adoption of remote self-administered VA tests, there must be convincing evidence that the proposed platform meets regulatory safety standards; is effective enough to fulfil its clinical function; is accessible to patients, with appropriate mechanisms to serve those unable to use the platform; and is economically viable [38]. Facilities for VA self-assessment may be useful in a number of domains: improving the capacity and capability of teleophthalmology clinics; empowering patients with the ability to monitor their own vision rather than attend regular appointments; enabling non-eye specialists to obtain useful information for a referral to ophthalmology; and giving eye units a tool to facilitate pre-attendance triage of eye casualty cases [2,3]. In all cases, it is essential that tests are accurate and reliable, exhibiting agreement with clinical assessment and with repeated remote measurement, respectively.

In ideal conditions, chart-based VA still exhibits considerable variation, with 95% LOA approaching ±0.09 logMAR; in clinical settings, LOA broaden to at least ±0.15 logMAR [7,39]. Clinical variation is greater because different examinations may be more or less demanding of patient effort, and may or may not test to majority failure (i.e. ≥3 errors on one line) [40]. Where both index and reference test exhibit variation, the utility of analyses restricted to t-tests or correlation coefficients is limited. Bland-Altman analysis accounts for bivariate variation by quantifying 95% LOA, providing metrics of measurement dispersal which can be compared to gold-standard tests. Studies failing to conduct appropriate analyses do not provide evidence of validation—it is impossible to ascertain whether observed variation is clinically acceptable. Acceptable 95% LOA should compare well with those exhibited by conventional clinical chart-based tests: below ±0.2 logMAR and ideally approaching ±0.15 logMAR [39,40]. Bias should be close to zero—statistically significant deviation (e.g. if confidence intervals do not cross zero) indicates a systematic error. High correlation is expected—over 0.7 in terms of Pearson’s or intraclass correlation coefficients [39,41].
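The acceptability heuristics described above can be collected into a single check. This is a sketch of the paragraph's criteria, not a formal standard, and the thresholds (±0.2 logMAR LOA, bias confidence interval crossing zero, correlation above 0.7) are taken directly from the text:

```python
def acceptable_agreement(bias_ci, lloa, uloa, corr):
    """Heuristic validity check for a remote VA test, per the
    criteria discussed in the text (illustrative, not normative).

    bias_ci -- (low, high) 95% confidence interval for the bias
    lloa, uloa -- 95% lower/upper limits of agreement (logMAR)
    corr -- Pearson's or intraclass correlation coefficient
    """
    lo, hi = bias_ci
    return (
        -0.2 <= lloa and uloa <= 0.2  # dispersal comparable to chart tests
        and lo <= 0.0 <= hi           # no statistically significant bias
        and corr > 0.7                # high correlation expected
    )


# Hypothetical figures, not taken from any included study:
print(acceptable_agreement((-0.01, 0.02), -0.12, 0.12, 0.92))  # True
print(acceptable_agreement((0.05, 0.15), -0.25, 0.30, 0.95))   # False
```

In the second hypothetical case, high correlation alone is not enough: the wide limits of agreement and the biased confidence interval both fail the check, illustrating why correlation coefficients by themselves cannot establish agreement.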

Here, DigiVis was the only test exhibiting undisputed 95% LOA within 0.2 logMAR, no significant bias, and high correlation between remotely and clinically assessed VA [25]. iSight Professional and Peek Acuity exhibited 95% LOA within 0.2 logMAR in one of two studies, but this study was judged to be at a high risk of bias [17]. However, in the trial finding greater LOA, pooling of results from both tests may have affected calculated accuracy [22]. Just two studies reported test-retest agreement. One study indicated that DigiVis measurements are very reliable [25]; while another indicated good agreement between repeated iSight Professional and Peek Acuity measurements, albeit with fewer statistics provided [22]. Again, pooling of iSight Professional and Peek Acuity data may have affected the result.

All three tests with positive validation data had no requirement for real-time administration by a trained clinician. Therefore, all three may be used to improve the capability of telehealth services and eye assessment by non-specialists such as general practitioners and emergency department clinicians. However, as some patients in the DigiVis trial conducted the remote test in clinical settings, it is difficult to conclude with certainty that deployment for home-based assessment is justified [25]. All three tests relied on digital devices, accessible to most of the world’s population [42]. However, as uptake of smartphone-based vision tests correlates negatively with older age and worse vision, healthcare providers should be mindful of patients’ capacity to access and complete remote VA assessment to ensure their care and outcomes are not adversely affected [37].

This review was limited by three factors: (1) Inconsistent and incomplete statistical analysis made establishing the accuracy and reliability of trialled VA tests challenging. Deduction of the direction of bias was often based on limited prose descriptions—this is a potential source of error but would not affect conclusions significantly as bias was always close to zero. (2) Descriptions of the setting of the remote index test were often unclear, making the full-text screening process more difficult. Included studies all mentioned a test undertaken outside the eye clinic and did not state that all tests were conducted in clinical or ideal settings. (3) Most studies did not use Bailey-Lovie or ETDRS charts, which are accepted as more accurate and precise for clinical research. While this may inflate variability in the reference test and consequently inflate the calculated accuracy of the remote index tests, use of the Snellen chart may not be a specific weakness as it remains widespread in clinics around the world [43,44].

Although promising technology has been developed to remotely assess VA, very few studies have demonstrated that patient-led assessment outside the eye clinic is feasible. DigiVis, iSight Professional, and Peek Acuity all have validation data demonstrating equivalence with clinical assessment, with the first being best supported owing to conflicting results regarding the latter two tests. Further pragmatic trials are required to demonstrate the accuracy and reliability of remote VA assessment to justify deployment at scale, ideally using gold-standard clinical assessments to maximise the validity of conclusions—ongoing trials and more recent reports may fill this gap in the literature [45–47]. However, as these trials are often organised by test manufacturers, owners, or patent-holders, independent researchers may seek to run their own studies to ensure validation data are unbiased. Further work is also required to establish the precise populations in which tests exhibit acceptable accuracy and reliability, as this may vary with range of vision, disease state, and age. Finally, work is indicated to explore the feasible use-cases of remote VA tests: in-person examination remains essential for a comprehensive ophthalmological assessment, but remote VA tests may nevertheless improve service provision and reduce the strain on limited clinic resources—particularly if incorporated alongside other emerging digital health tools [48]. Validated self-administered VA tests have the potential to augment teleophthalmology services, pre-consultation triage, long-term monitoring, as well as non-specialist assessment and reporting of eye problems [3].

Supporting information

S1 Fig. Summarised risk of bias and inapplicability.

Appraised with the QUADAS-2 framework. QUADAS-2 = Quality Assessment of Diagnostic Accuracy Studies 2; RoB = risk of bias; CrA = concerns regarding applicability.

https://doi.org/10.1371/journal.pone.0281847.s002


References

  1. Walsh L., Hong S.C., Chalakkal R.J., Ogbuehi K.C., A Systematic Review of Current Teleophthalmology Services in New Zealand Compared to the Four Comparable Countries of the United Kingdom, Australia, United States of America (USA) and Canada, Clin Ophthalmol. 15 (2021) 4015–4027. https://doi.org/10.2147/OPTH.S294428.
  2. Caffery L.J., Taylor M., Gole G., Smith A.C., Models of care in tele-ophthalmology: A scoping review, J Telemed Telecare. 25 (2019) 106–122. pmid:29165005
  3. Li J.-P.O., Liu H., Ting D.S.J., Jeon S., Chan R.V.P., Kim J.E., et al., Digital technology, tele-medicine and artificial intelligence in ophthalmology: A global perspective, Progress in Retinal and Eye Research. 82 (2021) 100900. https://doi.org/10.1016/j.preteyeres.2020.100900.
  4. Samanta A., Mauntana S., Barsi Z., Yarlagadda B., Nelson P.C., Is your vision blurry? A systematic review of home-based visual acuity for telemedicine, J Telemed Telecare. (2020) 1357633X20970398. https://doi.org/10.1177/1357633X20970398.
  5. Claessens J.L.J., Geuvers J.R., Imhof S.M., Wisse R.P.L., Digital Tools for the Self-Assessment of Visual Acuity: A Systematic Review, Ophthalmol Ther. (2021) 1–16. https://doi.org/10.1007/s40123-021-00360-3.
  6. Kawamoto K., Stanojcic N., Li J.-P.O., Thomas P.B.M., Visual Acuity Apps for Rapid Integration in Teleconsultation Services in all Resource Settings: A Review, The Asia-Pacific Journal of Ophthalmology. 10 (2021) 350–354. https://doi.org/10.1097/APO.0000000000000384.
  7. Arditi A., Cagenello R., On the Statistical Reliability of Letter-Chart Visual Acuity Measurements, Investigative Ophthalmology. 34 (1993) 10. pmid:8425819
  8. Ford I., Norrie J., Pragmatic Trials, New England Journal of Medicine. 375 (2016) 454–463. https://doi.org/10.1056/NEJMra1510059.
  9. Yeung W.K., Dawes P., Pye A., Charalambous A.-P., Neil M., Aslam T., et al., eHealth tools for the self-testing of visual acuity: a scoping review, Npj Digital Medicine. 2 (2019) 1–7. https://doi.org/10.1038/s41746-019-0154-5.
  10. Ouzzani M., Hammady H., Fedorowicz Z., Elmagarmid A., Rayyan—a web and mobile app for systematic reviews, Systematic Reviews. 5 (2016) 210. https://doi.org/10.1186/s13643-016-0384-4.
  11. Whiting P.F., QUADAS-2: A Revised Tool for the Quality Assessment of Diagnostic Accuracy Studies, Ann Intern Med. 155 (2011) 529. https://doi.org/10.7326/0003-4819-155-8-201110180-00009.
  12. McAlinden C., Khadka J., Pesudovs K., Statistical methods for conducting agreement (comparison of clinical tests) and precision (repeatability or reproducibility) studies in optometry and ophthalmology, Ophthalmic and Physiological Optics. 31 (2011) 330–338. https://doi.org/10.1111/j.1475-1313.2011.00851.x.
  13. Patton N., Aslam T., Murray G., Statistical strategies to assess reliability in ophthalmology, Eye. 20 (2006) 749–754. https://doi.org/10.1038/sj.eye.6702097.
  14. McGuinness L.A., Higgins J.P.T., Risk-of-bias VISualization (robvis): An R package and Shiny web app for visualizing risk-of-bias assessments, Research Synthesis Methods. n/a (2020). https://doi.org/10.1002/jrsm.1411.
  15. Wickham H., Averick M., Bryan J., Chang W., McGowan L., François R., et al., Welcome to the Tidyverse, Journal of Open Source Software. 4 (2019) 1686. https://doi.org/10.21105/joss.01686.
  16. Dayim A., forestploter, (2023). https://github.com/adayim/forestploter (accessed January 3, 2023).
  17. Adyanthaya S., A. B *, Comparison of visual acuity measured by ETDRS based smart phone applications I sight pro and Peek acuity versus traditional Snellen’s chart visual acuity in children 6–14 years in a tertiary care institute in India, Indian Journal of Clinical and Experimental Ophthalmology. 7 (2022) 634–637. https://doi.org/10.18231/j.ijceo.2021.127.
  18. Almagati R., Kran B.S., Implications of a Remote Study of Children With Cerebral Visual Impairment for Conducting Virtual Pediatric Eye Care Research: Virtual Assessment Is Possible for Children With CVI, Front. Human Neurosci. 15 (2021) 733179. https://doi.org/10.3389/fnhum.2021.733179.
  19. Bellsmith K.N., Gale M.J., Yang S., Nguyen I.B., Prentiss C.J., Nguyen L.T., et al., Validation of Home Visual Acuity Tests for Telehealth in the COVID-19 Era, JAMA Ophthalmology. (2022). https://doi.org/10.1001/jamaophthalmol.2022.0396.
  20. Chen E., Mills M., Gallagher T., Ianchulev S., Habash R., Gentile R.C., (The Macustat Study Group), Remote patient monitoring of central retinal function with MACUSTAT R: A multi-modal macular function scan., Digit Health. 8 (2022) 20552076221132105. https://doi.org/10.1177/20552076221132105.
  21. Chen T.A., Li J., Schallhorn J.M., Sun C.Q., Comparing a Home Vision Self-Assessment Test to Office-Based Snellen Visual Acuity, OPTH. 15 (2021) 3205–3211. pmid:34349497
  22. Painter S., Ramm L., Wadlow L., O’Connor M., Sond B., Parental Home Vision Testing of Children During Covid-19 Pandemic, British and Irish Orthoptic Journal. 17 (2021) 13–19. https://doi.org/10.22599/bioj.157.
  23. Pathipati A.S., Wood E.H., Lam C.K., Sáles C.S., Moshfeghi D.M., Visual acuity measured with a smartphone app is more accurate than Snellen testing by emergency department providers, Graefes Arch Clin Exp Ophthalmol. 254 (2016) 1175–1180. https://doi.org/10.1007/s00417-016-3291-4.
  24. Siktberg J., Hamdan S., Liu Y., Chen Q., Donahue S.P., Patel S.N., et al., Validation of a Standardized Home Visual Acuity Test for Teleophthalmology, Ophthalmology Science. 1 (2021) 100007. https://doi.org/10.1016/j.xops.2021.100007.
  25. Thirunavukarasu A.J., Mullinger D., Rufus-Toye R.M., Farrell S., Allen L.E., Clinical validation of a novel web-application for remote assessment of distance visual acuity, Eye. 36 (2022) 2057–2061. https://doi.org/10.1038/s41433-021-01760-2.
  26. Van Der Star L., Mulders-Al-Saady R., Phan A., Truong B., Suen B., Krijgsman M., et al., First Clinical Experience with Ophthalmic e-Device for Unaided Patient Self-Examination during COVID-19 Lockdown, Cornea. 41 (2022) 353–358. https://doi.org/10.1097/ICO.0000000000002945.
  27. Allen L., Thirunavukarasu A.J., Podgorski S., Mullinger D., Novel web application for self-assessment of distance visual acuity to support remote consultation: a real-world validation study in children, BMJ Open Ophthalmology. 6 (2021) e000801. https://doi.org/10.1136/bmjophth-2021-000801.
  28. Faes L., Islam M., Bachmann L.M., Lienhard K.R., Schmid M.K., Sim D.A., False alarms and the positive predictive value of smartphone-based hyperacuity home monitoring for the progression of macular disease: a prospective cohort study, Eye. 35 (2021) 3035–3040. https://doi.org/10.1038/s41433-020-01356-2.
  29. Haanes G.G., Kirkevold M., Hofoss D., Eilertsen G., Discrepancy between self-assessments and standardised tests of vision and hearing abilities in older people living at home: an ROC curve analysis., J Clin Nurs. 24 (2015) 3380–8. pmid:26335133
  30. Kaiser P.K., Wang Y.-Z., He Y.-G., Weisberger A., Wolf S., Smith C.H., Feasibility of a novel remote daily monitoring system for age-related macular degeneration using mobile handheld devices: Results of a pilot study, Retina. 33 (2013) 1863–1870. https://doi.org/10.1097/IAE.0b013e3182899258.
  31. Wang Y.-Z., He Y.-G., Mitzel G., Zhang S., Bartlett M., Handheld shape discrimination hyperacuity test on a mobile device for remote monitoring of visual function in maculopathy, Invest. Ophthalmol. Vis. Sci. 54 (2013) 5497–5504. https://doi.org/10.1167/iovs.13-12037.
  32. Rono H.K., Bastawrous A., Macleod D., Wanjala E., Tanna G.L.D., Weiss H.A., et al., Smartphone-based screening for visual impairment in Kenyan school children: a cluster randomised controlled trial, The Lancet Global Health. 6 (2018) e924–e932. https://doi.org/10.1016/S2214-109X(18)30244-4.
  33. Han X., Scheetz J., Keel S., Liao C., Liu C., Jiang Y., et al., Development and Validation of a Smartphone-Based Visual Acuity Test (Vision at Home), Transl Vis Sci Technol. 8 (2019) 27. https://doi.org/10.1167/tvst.8.4.27.
  34. Bastawrous A., Rono H., Livingstone I.A., Weiss H.A., Jordan S., Kuper H., et al., The Development and Validation of a Smartphone Visual Acuity Test (Peek Acuity) for Clinical Practice and Community-Based Fieldwork, JAMA Ophthalmol. 133 (2015) 930–937. https://doi.org/10.1001/jamaophthalmol.2015.1468.
  35. Guigou S., Michel T., Merite P.-Y., Coupier L., Meyer F., Home vision monitoring in patients with maculopathy: Real-life study of the OdySight application, J Fr Ophtalmol. 44 (2021) 873–881. https://doi.org/10.1016/j.jfo.2020.09.034.
  36. Harada S., Nakashima Y., Uematsu M., Morimoto S., Mohamed Y.H., Kitaoka T., et al., Effectiveness of a photoscreener in identifying undiagnosed unilateral amblyopia at vision screening of 3-year-old children in Japan, Jpn. J. Ophthalmol. 66 (2022) 193–198. https://doi.org/10.1007/s10384-021-00896-8.
  37. Korot E., Pontikos N., Drawnel F.M., Jaber A., Fu D.J., Zhang G., et al., Enablers and Barriers to Deployment of Smartphone-Based Home Vision Monitoring in Clinical Practice Settings, JAMA Ophthalmol. 140 (2022) 153–160. https://doi.org/10.1001/jamaophthalmol.2021.5269.
  38. Zur D., Loewenstein A., Development in Smartphone Technologies and the Advancement of Home Vision Monitoring, JAMA Ophthalmology. 140 (2022) 161. https://doi.org/10.1001/jamaophthalmol.2021.5270.
  39. Siderov J., Tiu A.L., Variability of measurements of visual acuity in a large eye clinic, Acta Ophthalmol Scand. 77 (1999) 673–676. https://doi.org/10.1034/j.1600-0420.1999.770613.x.
  40. Rosser D.A., Cousens S.N., Murdoch I.E., Fitzke F.W., Laidlaw D.A.H., How Sensitive to Clinical Change are ETDRS logMAR Visual Acuity Measurements?, Invest. Ophthalmol. Vis. Sci. 44 (2003) 3278. pmid:12882770
  41. Koo T.K., Li M.Y., A Guideline of Selecting and Reporting Intraclass Correlation Coefficients for Reliability Research, Journal of Chiropractic Medicine. 15 (2016) 155–163. https://doi.org/10.1016/j.jcm.2016.02.012.
  42. Smartphone users in the World 2028, Statista. (n.d.). https://www.statista.com/forecasts/1143723/smartphone-users-in-the-world (accessed January 31, 2023).
  43. Yu H.J., Kaiser P.K., Zamora D., Bocanegra M., Cone C., Brown D.M., et al., Visual Acuity Variability: Comparing Discrepancies between Snellen and ETDRS Measurements among Subjects Entering Prospective Trials, Ophthalmol Retina. 5 (2021) 224–233. https://doi.org/10.1016/j.oret.2020.04.011.
  44. Lovie-Kitchin J.E., Is it time to confine Snellen charts to the annals of history?, Ophthalmic Physiol Opt. 35 (2015) 631–636. pmid:26497296
  45. Healthcare Tilak, Post-market Study for At-home Evaluation of Near Visual Acuity With OdySight, a Smartphone Based Medical Application in Comparison to a Standardized Method (TIL002), clinicaltrials.gov, 2022. https://clinicaltrials.gov/ct2/show/NCT05510479 (accessed January 29, 2023).
  46. Gobiquity Mobile Health, Comparison of Visual Acuity Performed in Office Versus In Residence, clinicaltrials.gov, 2022. https://clinicaltrials.gov/ct2/show/NCT05250986 (accessed January 29, 2023).
  47. Claessens J., van Egmond J., Wanten J., Bauer N., Nuijts R., Wisse R., The Accuracy of a Web-Based Visual Acuity Self-assessment Tool Performed Independently by Eye Care Patients at Home: Method Comparison Study, JMIR Formative Research. 7 (2023) e41045. pmid:36696171
  48. Thirunavukarasu A.J., Large language models will not replace healthcare professionals: curbing popular fears and hype, J R Soc Med. (2023). https://doi.org/10.1177/01410768231173123.