
Comparison of visual function analysis of people with low vision using three different models of augmented reality devices

  • Sarika Gopalakrishnan ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Writing – original draft

    sarikaoptometrist11@gmail.com

    Affiliation Postdoctoral Researcher, Envision Research Institute, United States of America

  • Arathy Kartha,

    Roles Writing – review & editing

    Affiliation Assistant Professor, Department of Biological and Vision Sciences, SUNY College of Optometry, New York, United States of America

  • Ronald Schuchard,

    Roles Conceptualization, Methodology, Project administration, Writing – review & editing

    Affiliation Biotechnical R&D Consultant, United States of America

  • Donald Fletcher

    Roles Writing – review & editing

    Affiliations Medical Director, Envision Vision Rehabilitation Center, Wichita, Kansas, United States of America, Smith-Kettlewell Eye Research Institute, San Francisco, California, United States of America

Abstract

We compared visual function in individuals with low vision (visual acuity worse than 20/60) using three different models of augmented reality (AR) devices: Ziru, IrisVision, and NuEyes-Pro3. Distance visual acuity (VA) was measured in high luminance high contrast (HLHC), high luminance low contrast (HLLC), low luminance high contrast (LLHC), and low luminance low contrast (LLLC) settings. The other tests were near VA, distance and near contrast sensitivity (CS), color vision, depth perception, and indoor navigation. The change in visual function without and with AR devices was analyzed. Of 27 participants, 17 were female. The mean age was 66.7 ± 18.2 years. The median baseline distance VA was 0.66 (0.79) logMAR in HLHC, 0.87 (0.54) logMAR in HLLC, 0.84 (0.67) logMAR in LLHC, and 1.04 (0.34) logMAR in LLLC. The median baseline near VA was 0.55 (0.40) logMAR, and distance and near CS were 1.10 (0.26) logCS and 1.20 (0.30) logCS, respectively. Distance and near vision showed significant improvement with both Ziru and IrisVision (p < 0.01), but not with NuEyes. CS improved significantly with Ziru and IrisVision at both distance and near (p < 0.05), but was significantly reduced with NuEyes (p < 0.01). During the object identification task in the indoor environment using AR devices, head-level objects were missed more often than waist- or floor-level objects across all three models. The majority of visual functions improved with Ziru and IrisVision, with limited improvement in certain lighting conditions of distance visual acuity with NuEyes.

Introduction

Globally, an estimated 252.6 (95% CI, 111.4–424.5) million people live with low vision, i.e., a best-corrected visual acuity of 20/60 or worse in the better-seeing eye [1]. An estimated 7.08 million Americans have vision impairment, of whom 1.08 million are living with blindness [2]. Ocular pathologies such as diabetic retinopathy, glaucoma, and age-related macular degeneration can cause low vision [5]. Depending on the pathology, people with low vision may experience central field loss, peripheral field loss, and/or generalized vision loss [7].

People with low vision can have one or more decreased visual functions, such as visual acuity, contrast sensitivity, color perception, depth perception, and/or loss of visual field [8]. They may also have decreased functional vision, such as difficulty viewing distant objects, reading and writing, moving about independently, photosensitivity, and performing household activities [9]. When these individuals have difficulty with visual tasks, they sometimes need to depend on caregivers for daily activities or stop doing certain tasks altogether. Consequently, low vision can have a significant impact on participation in activities of interest, independence, social interactions, quality of life, and physical, emotional, and mental health [10–12]. People fear losing vision more than memory, hearing, or speech, and consider visual loss among the top four worst things that could happen to them [5]. Rein et al. estimated the economic burden of vision loss at $134.2 billion; those with vision loss incurred an incremental burden of $16,838 per year [3]. More than half of the visually impaired population does not even enter the workforce.

There is ample evidence that even though people with low vision have decreased form vision, they retain some residual vision to perform a certain range of visual activities [6]. Low vision rehabilitation services, which include prescribing conventional optical devices and electronic devices, are beneficial in improving the functional ability of people with low vision [7]. Yet people face challenges when they need to use more than one low vision device to perform a variety of visual tasks at different distances in their daily living activities. To address these challenges, a range of assistive devices has been developed, including augmented reality (AR) head-mounted displays, which can be used for various visual tasks [8,13].

Augmented reality refers to graphic overlays on, or graphic objects inserted into, live images of the real environment. Augmented reality head-mounted displays (AR-HMDs) enable users to see real images of the outside world and visualize computer-generated virtual information at any time and from any location, making them useful for various applications. Manufacturing AR-HMDs combines the fields of optical engineering, optical materials, optical coating, precision manufacturing, electronic science, computer science, physiology, and ergonomics [14,15]. The basic principle of an AR device is to enhance the image that the world casts onto the retina [16]. Outward-facing cameras capture live video of the real world in front of the user; this video is processed to increase visibility via magnification and/or contrast enhancement and then shown in real time to the user through a pair of micro-displays positioned in front of the eyes. This is called a 'video see-through display' because, even though the system uses a mobile phone, the user's eyes are covered by opaque screens [16,17].

Augmented reality devices are designed for different environments, both indoors and outdoors, and strive to be lightweight and comfortable to facilitate continuous, long-term wear. These devices are designed for use throughout the day while the user is engaged in other activities [4,20]. Many AR devices have been developed to improve the visual function of people with low vision, including LVES, eSight, IrisVision, HoloLens, OxSight, NuEyes, and Vision Buddy [18,24]. Previous studies provide some information on one or two visual functions assessed with a single virtual reality (VR) or augmented reality (AR) device; however, no studies have compared different models of AR devices. A few studies have reviewed head-mounted displays (HMDs) for visual impairment, highlighting their advantages over conventional desk-mounted or handheld low vision devices [16,227]. Crossland et al. reported that 47% of their study participants were ready to use a device like SightPlus [25], while Yeo et al. reported that 94% of participants were ready to use a similar device called Relumino [22]. Htike et al. reported that HoloLens was useful for navigation, depth perception, and object recognition tasks [20], and Dylan et al. showed that augmentation helps in obstacle identification [26]. However, no study has compared depth perception, object identification, or navigation tasks among different models of AR devices, even though different models would provide different experiences.

Given the different generations and models of AR devices, it is important to examine how the visual abilities of people with low vision vary across models. This understanding is essential for eye care professionals, manufacturers, and individuals with vision loss, and will ultimately contribute to improving the functionality of these devices based on performance measurements. How visual function varies among people with low vision across different models of AR devices is little explored in the literature: while these systems seem promising, there is little evidence comparing different models in terms of improving visual function [18]. Therefore, to understand the visual abilities of people with low vision, we explore what and how they can see using different models of AR devices in both stationary and mobile scenarios. We address the following research questions: 1. What is the change in visual function while using three models of AR devices? 2. Which model of AR device helps more for which type of functional vision? 3. How do AR devices help in identifying objects while moving in an indoor environment?

Methods

This was a prospective study; all participants were recruited between 02/01/2023 and 31/05/2023 from the participant database of the Envision Research Institute and the Envision Vision Rehabilitation Center.

Participants

Participants with low vision who had best-corrected visual acuity worse than 20/60, central field loss (CFL), or peripheral field loss (PFL) were recruited for the study. Those with cognitive impairment, physical disabilities, and/or multiple disabilities were excluded. The informed consent form explained the study protocol, time commitment, risks, and benefits. All participants provided written consent before the study. The study adhered to the tenets of the Declaration of Helsinki and was approved by the institutional review board and ethics committee of the Envision Research Institute and Wichita State University, USA.

Procedure

Visual functions were first evaluated without any AR device, with the participant's best refractive correction. The concept of augmented reality technology was explained to the participants, and a detailed demonstration of all three models of AR devices was given before starting the experiment. Participants were trained to adjust their desired magnification level, viewing/contrast mode, and other functions of the devices. All visual tasks were then measured with the three models of AR devices, presented in a randomized order for each visual function.

AR devices

We used three models of augmented reality devices to analyze the visual functions and functional vision of people with low vision: a) Ziru (Snapdragon model, Dodrutu, Oxford, UK), a Samsung Galaxy S9 smartphone fitted in a 3D-printed hardware case fixed on a golf cap; b) IrisVision (IrisVision Live model, IrisVision Global, Inc., CA, USA), which provides a display on a Samsung S10 smartphone fitted in a Samsung Gear VR headset; and c) NuEyes (NuEyes Pro3, NuEyes Technologies Inc., CA, USA), a spectacle-style device whose display is connected to a Samsung S20 smartphone. The field of view was calculated using the formula given by Lynn et al. [19] to verify the specifications reported by the manufacturers. The features of all three devices are listed in Table 1, and the devices, along with graphical representations of their viewing modes, are shown in Fig 1.
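The field-of-view check can be sketched with the standard flat-display geometry, FOV = 2·atan(w/2d); this is a common approximation rather than necessarily the exact expression from Lynn et al., and the numbers below are illustrative, not the devices' actual specifications.

```python
import math

def fov_degrees(display_width_m: float, eye_to_display_m: float) -> float:
    """Horizontal field of view (degrees) of a flat display viewed at a
    given eye-to-display distance, using pinhole geometry:
    FOV = 2 * atan(w / 2d). Lens magnification in a real headset would
    modify this, so treat it as a first-order check only."""
    return math.degrees(2 * math.atan(display_width_m / (2 * eye_to_display_m)))

# Hypothetical numbers for illustration only:
print(round(fov_degrees(0.12, 0.06), 1))  # 90.0 (12 cm display at 6 cm)
```

A value computed this way can then be compared against the manufacturer's stated field of view, as done in the study.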

Table 1. Technical specifications of the augmented reality devices.

https://doi.org/10.1371/journal.pone.0332085.t001

Fig 1. Graphical representation of viewing modes in augmented reality devices: Ziru, IrisVision Live and NuEyes Pro3.

Figure A. Ziru. a. Clear, b. Gray, c. Boost, d. Cogni, e. Outline – black outline, white background, f. White outline, black background, g. Green outline, red background, h. Yellow outline, black background, i. White outline, blue background, j. Red outline, black background, k. Green outline, black background, l. Green outline, blue background. Figure B. IrisVision Live. a. Scene, b. Scene with bubble, c. RP, d. Television, e. Bioptic, f. Reading, g. Reading line, h. Reading inverted, i. Reading yellow, j. Reading green. Figure C. NuEyes Pro 3. a. Regular, b. Black, c. White, d. Yellow, e. Orange, f. Blue.

https://doi.org/10.1371/journal.pone.0332085.g001

The Ziru device provides up to 8× magnification with five viewing modes: 'CLEAR', a color lens designed to reproduce true colors and object proportions and sizes; 'BOOST', a composite lens combining the CLEAR lens output with the OUTLINE lens; 'OUTLINE', a lens that filters out everything but object outlines; 'COGNI', a composite lens combining the CLEAR lens with a number of other lenses designed to enhance visual cognition; and 'GREY', a grayscale lens with adaptive brightness and contrast. The viewing modes are represented graphically in Fig 1A. It is a head-mounted display with autofocus from near to far distance and image stabilization processing that matches the display to the user's head movement. It has a low-latency video pipeline with a 4K camera system and a 2K display system. Users can wear their spectacle prescription under the device. It accepts voice commands to change the viewing mode, brightness, magnification, and other frequently used functions; the magnification level can also be adjusted using a physical button on top of the device. It also has an optical character recognition (OCR) function that reads printed material aloud to the user and can sometimes read handwritten material. Ziru can be calibrated for each user and has an adjustable interpupillary distance, so it can be used by both young and elderly people. It does not need an internet or Wi-Fi connection to work, which makes it functional in a wide variety of environments.

IrisVision provides up to 14× magnification with 10 viewing modes: 'SCENE', the regular mode; 'SCENE WITH BUBBLE', the regular mode with an additional magnified area in the shape of a bubble; 'RP', scene mode with a reduced field of view; 'TELEVISION', designed specifically for watching television; 'BIOPTIC', scene mode with an additional magnified view in a rectangular box; 'READING', a magnified mode designed for reading tasks; 'READING INVERTED', reading mode with reversed contrast (white text on a black background); 'READING LINE', reading mode with a line to fixate on; 'READING GREEN', green text on a black background; and 'READING YELLOW', yellow text on a black background. These modes are represented in Fig 1B. The head-mounted display uses a touch panel to specify the operations to be performed. It has contrast enhancement, image remapping strategies, motion compensation strategies, augmented reality strategies, and OCR functions. It requires a network connection to update its software regularly.

NuEyes Pro3 provides up to 12× magnification with five color filters in addition to the regular viewing mode, as shown in Fig 1C: 'Black' (black on white), 'White' (white on black), 'Yellow' (yellow on black), 'Orange' (orange on black), and 'Blue' (blue on yellow). It offers three resolution levels: high, medium, and low. The spectacle display has an extended wire that connects to a smartphone on which the operations are performed. The user can fit an additional clip-on lens in front of the camera on the spectacles to focus on objects at a far distance.

The luminance of the visual acuity chart was measured using a Konica Minolta Spot Photometer F (Digital Light Exposure Spot Meter, Japan) on a uniform background in a controlled environment, without and with all three AR devices. Ten recordings were taken in cd/m² on the display screen of each AR device. The background room illumination was 273.4 lux for high luminance conditions and 0.01 lux for low luminance conditions.

Visual functions

Distance visual acuity was measured with internally illuminated ETDRS high and low contrast 4 m logMAR charts under four conditions: i) high luminance high contrast (HLHC), ii) high luminance low contrast (HLLC), iii) low luminance high contrast (LLHC), and iv) low luminance low contrast (LLLC). The high and low luminance levels of the illuminated cabinet were 160 cd/m² and 3 cd/m², respectively. The contrast levels were 5%, 10%, and 25% for low contrast conditions and 100% for high contrast conditions. Participants were first tested with the 5% contrast logMAR chart; if they could not identify a single letter on the topmost line, the 10% contrast chart was used, and if they still could not identify a single letter on the topmost line, the 25% contrast chart was used. The regular viewing mode was used on all AR devices to ensure uniformity. Participants were encouraged to adjust the magnification level until they could recognize the smallest possible optotypes on the logMAR chart. Near visual acuity was tested with the SK Read chart at the habitual reading distance. Distance and near contrast sensitivity were evaluated using the distance and near Pelli-Robson contrast sensitivity charts, respectively.
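The chart-escalation rule for the low contrast conditions (start at 5%, move to 10% and then 25% whenever no letter on the topmost line can be identified) can be sketched as follows; `can_read_top_line` is a hypothetical callback standing in for the examiner's observation.

```python
def select_contrast_chart(can_read_top_line) -> int:
    """Return the contrast level (%) of the chart actually used.

    Follows the escalation rule described in the Methods: try 5%,
    then 10%, then 25%, stopping at the first chart on which the
    participant can identify at least one letter on the top line.
    The 25% chart is recorded even if the top line remains difficult
    (the protocol does not describe a further fallback).
    """
    for level in (5, 10, 25):
        if can_read_top_line(level):
            return level
    return 25

# Example: a participant who first succeeds on the 10% chart
print(select_contrast_chart(lambda level: level >= 10))  # 10
```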

Depth perception task

Depth perception was evaluated with a setup designed for this study. Five equally sized cube-shaped boxes (L × W × H = 5 × 5 × 5 inches, ~12.7 cm) were suspended from the ceiling of the lab using clear fishing line at different distances from the participant's viewing point (Fig 2). The boxes were labeled with random numbers in no specific order and were separated from each other by 1 foot (~0.3 m). The participant was positioned 12 feet (~3.7 m) from the closest box. The top of the scene was obscured using black charts so that the fishing lines connecting the boxes to the ceiling were not visible, avoiding additional depth cues. Using a 2-alternative forced choice, each participant viewed two boxes at a time in pairwise combination and responded which box appeared closer. Ten such combinations were presented, yielding ten responses per participant. Participants performed the task without and with all three AR devices.
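Five boxes yield exactly C(5, 2) = 10 unordered pairs, which is why ten pairwise trials cover every possible comparison; a quick check (box labels here are placeholders, since the study used random numbers):

```python
from itertools import combinations

boxes = ["A", "B", "C", "D", "E"]  # placeholder labels for the five boxes

# Every unordered pair of the five boxes: C(5, 2) = 10 combinations,
# matching the ten 2-AFC trials presented to each participant.
pairs = list(combinations(boxes, 2))
print(len(pairs))  # 10
```

Note that chance performance in a 2-alternative forced choice is 50%, a useful reference point for interpreting the correct-response rates reported in the Results.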

Fig 2. Depth perception test. Description: X – Position of the participant, five equal-sized cube-shaped boxes were hanging from the room ceiling at different distances from the participant.

https://doi.org/10.1371/journal.pone.0332085.g002

Object identification task

The object identification task was performed on a course designed in a quiet room (20 × 20 feet, ~6 × 6 m). At the center of the room, four black curtains formed a cross-shaped barrier, dividing the space into four quadrants for the participants to explore. The barrier obscured the other quadrants so that participants could not see all the objects at the same time. Participants were instructed to walk at their usual speed, following the direction signs in the room. They had to identify ten toys (soft, stuffed, or plastic) positioned at floor level, waist level, and head level while finding their way to the exit point. For each toy, they had to point it out and briefly describe its type, color, and material. Distractors such as boxes, drums, and chairs, and pictures on the walls and curtains (none of the distractors were toys), were present at all levels, adding an adequate level of cognitive load to the task. Four different setups were designed by changing the positions of the toys at different levels and directions (Fig 3). The task was first performed without any AR device and then repeated with all three AR devices in a randomized order. The time taken, accuracy, number of objects identified, number of objects missed at each level, and participant feedback were documented. Participants were allowed to use their habitual cane, and the experimenter accompanied them at all times to ensure safety while walking.

Fig 3. Representation of the room for object identification task. Description: The participant entered the room at the entrance point, followed the direction signs, identified the objects at head level, waist level and floor level and after completing the task came out of the room through the same entrance point.

https://doi.org/10.1371/journal.pone.0332085.g003

Statistical analysis

Statistical analyses were conducted using SPSS version 20 (RRID:SCR_002865). Participants' demographic and clinical characteristics were summarized as means and standard deviations, medians and interquartile ranges, and counts and percentages, as appropriate. Repeated measures across the AR devices were analyzed using the Friedman test, and Wilcoxon signed-rank tests with Bonferroni correction were performed to measure differences between conditions. The alpha level was set at 0.05 for the initial analysis; after Bonferroni correction, the alpha level was set at 0.01.
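The analysis pipeline (Friedman omnibus test over the four repeated measures, followed by Wilcoxon signed-rank post hoc tests at a Bonferroni-corrected alpha) can be sketched with SciPy on synthetic data; the numbers below are randomly generated for illustration and are not the study data.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
# Synthetic logMAR acuities for n = 27 hypothetical participants:
# baseline plus three device conditions (illustrative only).
baseline = rng.normal(0.8, 0.3, 27)
ziru = baseline - rng.normal(0.4, 0.1, 27)      # simulated improvement
iris = baseline - rng.normal(0.4, 0.1, 27)      # simulated improvement
nueyes = baseline + rng.normal(0.0, 0.1, 27)    # simulated no change

# Omnibus test across the four repeated measures
chi2, p = friedmanchisquare(baseline, ziru, iris, nueyes)
print(f"Friedman chi2 = {chi2:.2f}, p = {p:.4g}")

# Pairwise device-vs-baseline post hoc tests with Bonferroni correction;
# 0.05 / 3 ~= 0.017, which the paper rounds down to 0.01.
alpha = 0.05 / 3
for name, scores in [("Ziru", ziru), ("IrisVision", iris), ("NuEyes", nueyes)]:
    stat, p_pair = wilcoxon(baseline, scores)
    print(name, "significant" if p_pair < alpha else "not significant")
```

This mirrors the reporting structure of the Results, where each device condition is compared against the presenting (baseline) measurement.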

Results

Of the 27 participants, 17 (63%) were female. The mean age was 66.7 (SD, 18.2) years, ranging from 25 to 92 years, with the largest group (40.7%, n = 11) between 61 and 80 years of age. Presenting visual acuity ranged from 0.10 logMAR (20/25) to 1.62 logMAR (20/800). In terms of underlying visual conditions, the majority (six participants) presented with macular degeneration, four with Stargardt disease, three with retinitis pigmentosa, three with diabetic retinopathy, three with optic nerve pathologies, two with glaucoma, and the rest with various inherited retinal degenerations and related disorders. Thirteen participants had ocular pathology causing central field loss, nine had peripheral field loss, two had both central and peripheral field loss, and three had generalized vision loss. Most participants, 55.6% (n = 15), were retired, 25.9% (n = 7) were artists, and the remaining 18.5% (n = 5) held private jobs. Nearly half, 48.1% (n = 13), had physical morbidities, 22.2% (n = 6) had hearing impairment, and 48.1% (n = 13) used mobility assistance. More than half, 14 (51.9%), had previous experience using low vision devices. Most (44.4%, n = 12) had a duration of visual impairment of less than 10 years, followed by 10–20 years (22.2%, n = 6), as shown in Table 2. Mean results are reported with standard deviations in parentheses. Statistically significant differences were determined at p < 0.05.
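The Snellen-to-logMAR equivalences quoted above follow from logMAR = log10(MAR), where MAR is the Snellen denominator over the numerator; a minimal sketch:

```python
import math

def snellen_to_logmar(denominator: float, numerator: float = 20.0) -> float:
    """logMAR = log10(denominator / numerator) for a Snellen fraction,
    e.g. 20/800 -> log10(40). Chart rounding explains small differences
    from clinically reported values."""
    return math.log10(denominator / numerator)

print(round(snellen_to_logmar(800), 2))  # 1.6 (the paper reports 1.62 for 20/800)
print(round(snellen_to_logmar(25), 2))   # 0.1, matching 20/25
```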

Table 2. Demographic characteristics of study participants.

https://doi.org/10.1371/journal.pone.0332085.t002

There was a statistically significant difference in the number of lines read without and while using the three models of augmented reality devices under all lighting conditions: HLHC χ2(3) = 60.26, p < 0.0001; HLLC χ2(3) = 16.42, p = 0.001; LLHC χ2(3) = 53.25, p < 0.0001; LLLC χ2(3) = 10.57, p = 0.014. Post hoc analysis with Wilcoxon signed-rank tests was conducted with Bonferroni correction applied, resulting in a significance level set at p < 0.05. The median (IQR) distance visual acuity in the HLHC condition was 0.66 (0.79) logMAR; participants could read 0.20 (0.44) logMAR with Ziru at a maximum of 8× magnification (Z = −4.541, p < 0.01) and 0.16 (0.25) logMAR with IrisVision at a maximum of 14× magnification (Z = −4.469, p < 0.01), but with NuEyes the median visual acuity of 0.74 (0.47) logMAR was not significantly different from the presenting visual acuity (Z = −1.562, p = 0.12). The median distance visual acuity in HLLC was 0.78 (0.54) logMAR; participants could read 0.72 (0.52) with Ziru (Z = −3.125, p = 0.002) and 0.74 (0.50) with IrisVision (Z = −2.627, p = 0.009), but with NuEyes it was 0.82 (0.34) logMAR, which was not significantly different (Z = −1.014, p = 0.311). The median distance visual acuity in the LLHC condition was 0.84 (0.67) logMAR; participants read 0.24 (0.24) with Ziru (Z = −4.458, p < 0.01), 0.24 (0.29) with IrisVision (Z = −4.542, p < 0.01), and 0.60 (0.44) with NuEyes (Z = −3.036, p = 0.002); the number of lines read was significantly different with all three AR devices compared to the presenting visual acuity. The median visual acuity in the LLLC condition was 1.04 (0.34) logMAR; participants read 0.70 (0.32) with Ziru (Z = −3.365, p = 0.001), 0.68 (0.37) with IrisVision (Z = −2.772, p = 0.006), and 0.94 (0.26) with NuEyes (Z = −2.499, p = 0.012), which was statistically significant with all three devices.

The mean luminance without AR devices was 73.8 (9.1) cd/m²; with Ziru it was 76.4 (11.6) cd/m², which was not significantly different (p = 0.47), but it was 13.3 (0.6) cd/m² with IrisVision and 6.2 (3.7) cd/m² with NuEyes, both significantly different (p < 0.05). The luminances also differed significantly between all three devices (p < 0.01). In the HLLC condition, without any AR device, 60% of participants read the chart at 25% contrast, 24% at 10% contrast, and 12% at 5% contrast. Interestingly, the largest number of participants were able to read the 5% contrast chart with Ziru (28%), followed by IrisVision (16%); none of the participants could read the 5% contrast chart with NuEyes. With IrisVision, most participants (64%) responded to the 10% contrast chart, and with NuEyes, 40% responded at the 25% contrast level. In the LLLC condition, 60% read the chart at 25% contrast, 16% at 10% contrast, and 12% at 5% contrast. As in the previous condition, the largest number of participants read the 5% contrast chart with Ziru (28%), followed by IrisVision (20%); none could read the 5% chart with NuEyes. Using IrisVision, 60% read the chart at the 10% level, and with NuEyes, 32% read at the 25% contrast level.

There was a statistically significant difference in near visual acuity without and while using the three models of augmented reality devices, χ2(3) = 51.17, p < 0.0001. Post hoc analysis with Wilcoxon signed-rank tests was conducted, with a significance level set at p < 0.05. The median (IQR) near visual acuity was 0.55 (0.40) logMAR; participants could read 0.10 (0.30) logMAR with Ziru (Z = −4.376, p < 0.0001) and 0.10 (0.30) logMAR with IrisVision (Z = −4.375, p < 0.0001), both significantly different, while it significantly worsened to 0.80 (0.20) logMAR with NuEyes (Z = −2.593, p = 0.01), as shown in Table 3. There was a statistically significant difference in distance contrast sensitivity, χ2(3) = 38.36, p < 0.0001, and near contrast sensitivity, χ2(3) = 45.42, p < 0.0001, without and while using the three models of AR devices. The median (IQR) distance contrast sensitivity was 1.10 (0.26) logCS without an AR device; it was 1.20 (0.33) with Ziru (Z = −1.832, p = 0.01) and 1.16 (0.43) with IrisVision (Z = −0.750, p = 0.04), both significantly different, and it reduced significantly to 0.75 (0.45) with NuEyes (Z = −3.519, p < 0.001). The median (IQR) near contrast sensitivity was 1.20 (0.30) logCS without an AR device; it was 1.43 (0.41) with Ziru (Z = −2.589, p = 0.01) and 1.35 (0.30) with IrisVision (Z = −2.267, p = 0.02), both significantly different (p < 0.05), and it reduced significantly to 0.75 (0.64) with NuEyes (Z = −3.866, p < 0.001). Fig 4 illustrates the relationship between contrast sensitivity and luminance levels when using the three AR devices, with median values plotted for each condition.

Table 3. Visual functions of people with low vision while using three different models of augmented reality devices.

https://doi.org/10.1371/journal.pone.0332085.t003

Fig 4. Comparison of the relationship between contrast sensitivity and luminance level while using three AR devices.

Description: This graph illustrates the differences in luminance levels among the three AR devices in relation to their respective contrast sensitivity. The median contrast sensitivity values for each device are plotted.

https://doi.org/10.1371/journal.pone.0332085.g004

The mean percentage of correct responses on the depth perception task was 46.2% without any AR device, 50.4% with Ziru, 48.8% with IrisVision, and 48.8% with NuEyes. In the PV-16 color vision test, 37% of participants arranged the color caps without any errors, and this result was not altered by any of the AR devices. At baseline, 40.7% of participants made major errors and 22.2% made minor errors. With Ziru, there were 25.9% major and 37% minor errors; with IrisVision, 44.4% major and 18.5% minor errors; and with NuEyes, 29.6% major and 33.3% minor errors. No specific color vision deficiency was noted for any participant under any viewing condition. Three participants were able to respond to the 6 mm Frisby plate, with stereo acuities of 215, 340, and 600 arc sec without any AR device; one of them also responded to the 3 mm and 1.5 mm plates with 170 and 55 arc sec, respectively. No difference in stereo acuity was noted with any of the AR devices compared to without them.

Object identification task

All participants were able to complete the navigation task without and with all models of AR devices. Two participants reported mild motion sickness while using IrisVision, which was not reported with Ziru or NuEyes. Of the 10 objects positioned at different heights, participants were able to identify most objects using the AR devices. The number of objects identified without and with AR devices was significantly different (p < 0.05). On post hoc analysis, the number of objects identified with Ziru was not significantly different from baseline (p = 0.03), while the numbers identified using IrisVision (p < 0.01) and NuEyes (p < 0.01) were significantly different. The average time taken to complete the task was not significantly different. The number of objects missed was not significantly different between baseline and Ziru (p > 0.01), but was significantly different with IrisVision and NuEyes (p < 0.01), as shown in Table 4.

Table 4. Object identification task using augmented reality devices.

https://doi.org/10.1371/journal.pone.0332085.t004

Table 4 description: Each participant was presented with 10 objects per condition (baseline, Ziru, IrisVision, and NuEyes), resulting in a total of 270 objects across all participants. The counts from all participants were combined for the analysis and calculations.

However, head-level objects (n = 16, 21, and 23 with Ziru, IrisVision, and NuEyes, respectively) were missed more often than waist- and floor-level objects with the AR devices (Fig 5), which was statistically significant (p < 0.01). This could be because of the field-of-view limitations of head-mounted displays. While wearing the AR headsets, the number of objects missed at head level was equal or higher among participants with CFL compared to those with PFL, possibly because head-level objects are visualized using the central visual field. Similarly, the number of objects missed at floor level was equal or higher among those with PFL compared to those with CFL. However, the differences between CFL and PFL did not reach statistical significance. Furthermore, participants reported that they could observe the fine details of objects in the room much more clearly with the AR devices than with the naked eye; for example, they mentioned a pink plush toy wearing a scarf around its neck.

Fig 5. Object identification task performed without and with AR devices.

This figure demonstrates the number of objects missed at head level, waist level, and floor level. The error bars represent the standard error of the mean. AR: Augmented reality.

https://doi.org/10.1371/journal.pone.0332085.g005

Discussion

Many studies have reported changes in the visual functioning of people with low vision using conventional optical low vision devices (LVDs) and electronic devices [7,8]; however, no study has compared the performance of different head-mounted displays (HMDs). Augmented reality devices might replace the use of multiple low vision devices in the future. Augmented reality technology provides image enhancement that allows users to see the real world with increased brightness, magnification, contrast, and combinations of other elements [8]. Since many people with low vision have reduced contrast sensitivity, reduced visual acuity, and/or loss of field of vision, a "one size fits all" solution may be difficult or even impossible. With the technological advances in the field of AR, it might provide more options for the visually impaired, as explored in this study [21]. A few studies provide evidence that augmented reality devices can improve the visual acuity, contrast sensitivity, and reading ability of people with low vision [22–25]; however, these studies examine only one or two visual functions, which is not sufficient to conclude how the devices would benefit daily living activities (functional vision). There is little literature comparing the visual functions of individuals with low vision across different models of augmented reality (AR) devices; this study therefore compares three AR devices to determine what, and how well, people with low vision can see with them. In addition, we also assessed performance on an object identification task using all three devices.

All 27 participants were able to perform the visual tasks with all three models of AR devices, except that none of the participants were able to focus on small reading print while using NuEyes, implying that NuEyes cannot be used for reading fine print. The majority of participants were elderly (above 60 years of age), and most had central field loss secondary to ocular pathology. Many studies have attempted to report the change in visual acuity of people with low vision while using AR devices [16–18,20–25]; this study additionally compares visual acuity under four lighting conditions: HLHC, HLLC, LLHC, and LLLC. The change in the number of lines read was significantly different with Ziru and IrisVision, but not with NuEyes, in the HLHC and HLLC conditions. Interestingly, the change was significantly different with all three devices in the low luminance high contrast (LLHC) and low luminance low contrast (LLLC) conditions, which could be due to the nullifying effect of the background room illumination of 0.01 lux. The display of the AR devices flickered only while viewing back-illuminated charts at a certain level of magnification; this was not observed with externally illuminated charts or other objects. This flickering could impede performance at high magnification settings. The majority of participants were able to read the 5% low contrast chart using Ziru on both high and low luminance backgrounds, which could be due to the higher luminance level of the Ziru screen display. Interestingly, none of the participants were able to attempt the 5% contrast chart with NuEyes. NuEyes had a notably low luminance level of 6.2 cd/m², considerably lower than that of the other augmented reality (AR) devices, which made it challenging to see clearly and to achieve sufficient contrast for object recognition.
Additionally, the image quality observed through NuEyes was poor, likely due to its camera resolution. Unlike the other two AR devices, which use phone camera systems, NuEyes employs a separate camera mounted on the spectacle frame that captures the image displayed on the device's screen. There was also a clip-on lens designed for enhanced viewing of distant objects. Even with the clip-on lens removed, the device was unable to achieve clear focus on fine print, limiting its usability for near-vision tasks such as reading small fonts. Most participants could respond to the 10% contrast chart with IrisVision, implying that it was helpful for seeing medium levels of contrast. The reason IrisVision is easier to see through despite its considerably lower luminance might be that it can properly render black owing to its high contrast. Similar to many other studies [18,21], we found a significant enhancement in near vision with Ziru and IrisVision; with NuEyes, however, near vision decreased significantly, which could be due to the focusing limitations of the device camera. Even though the NuEyes Pro 3 was supplied with an additional clip-on lens to focus on distant objects, it was not designed for reading fine print such as newspapers and magazines. The mean distance and near contrast sensitivity improved significantly with Ziru and IrisVision but decreased significantly with NuEyes; the improvements demonstrate that some AR devices can enhance contrast sensitivity, consistent with previous studies [16,17,22,23]. The luminance level was not reduced as much with Ziru as with IrisVision and NuEyes, which could explain why participants read more letters on the distance and near contrast sensitivity charts with Ziru. Ziru is a non-enclosed device with open sides, allowing higher perceived brightness when measured with a photometer.
In contrast, the fully enclosed IrisVision showed significantly lower luminance, even at maximum brightness. Although all three AR devices use similar Samsung phones, differences in brightness and image quality may also result from their software applications and other unknown factors. The image quality of the three devices can be seen in Fig 1; it was visibly better with Ziru and IrisVision than with NuEyes. We believe the image quality was influenced by camera resolution, the image enhancement process, application properties, image processing methods, and other factors, for which further research is needed. Even though different contrast combination views were available, all participants preferred the regular viewing mode on all the AR devices while performing the visual tasks.
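The acuity and contrast values discussed throughout are expressed on log scales. As a compact reminder of the standard definitions behind them (the numeric examples are generic, not the study's data), a minimal sketch:

```python
import math

# Standard log-scale definitions used in low vision measurement:
# logMAR is the base-10 log of the minimum angle of resolution, so a
# Snellen fraction 20/X maps to log10(X / 20) and one chart line equals
# 0.1 logMAR; Pelli-Robson-style log contrast sensitivity (logCS) is
# log10(1 / Weber contrast threshold).

def snellen_to_logmar(denominator: float, numerator: float = 20.0) -> float:
    """Snellen fraction numerator/denominator -> logMAR."""
    return math.log10(denominator / numerator)

def contrast_to_logcs(contrast_percent: float) -> float:
    """Weber contrast threshold in percent -> log contrast sensitivity."""
    return math.log10(100.0 / contrast_percent)

print(snellen_to_logmar(200))  # 20/200 -> 1.0 logMAR
print(contrast_to_logcs(10))   # 10% contrast chart -> 1.0 logCS
print(contrast_to_logcs(5))    # 5% contrast chart -> ~1.3 logCS
```

On these scales, reading the 5% contrast chart corresponds to roughly 0.3 logCS better contrast sensitivity than the 10% chart, which is why the Ziru and NuEyes results on those charts differ so sharply.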

The depth perception test revealed that none of the AR devices helped in perceiving the depth between the boxes compared to viewing without any AR device. Although the mean number of correct responses was higher with Ziru than with the other two devices, none of the differences was statistically significant. Similarly, except for three participants, no one was able to respond to the Frisby stereo plates. Previous reports note that color vision has not been studied widely with AR devices [21]; in this study, the AR devices made no major difference to color or depth perception.

Even though the spectacle-model AR device is cosmetically appealing, and most users prefer its design over the head-mounted model, its field of vision and luminance are significantly reduced, an inbuilt limitation of spectacle-model AR devices. Therefore, while there is ample evidence that magnification of font size is important for improving visual performance, it is also important to consider the field of view, contrast, brightness, and image quality of the devices before recommending one to people with low vision. We compared different models of AR devices to give developers insight into the fact, not widely recognized, that visual function changes with each model of device. In addition, ophthalmologists, optometrists, low vision therapists (LVTs), occupational therapists (OTs), and people with low vision should be cautious before choosing spectacle-model AR devices to improve visual function, especially for ocular conditions such as retinitis pigmentosa, glaucoma, and others that reduce the field of vision.

Htike et al. stated that there is no good evidence that HMDs improve walking efficiency [20]; this study examines a small component of walking during the object identification task. During this task, all participants were able to wear all three AR devices and identify objects at head, waist, and floor level, including the signs for walking directions and the distractors in the room. Among the three devices, the number of objects missed with Ziru was not significantly different from baseline, indicating no decrement in performance with Ziru compared to no AR device. The average time taken to identify objects was not significantly different with and without the AR devices, suggesting that using AR devices neither speeds up nor slows down object identification.

The field of view of the AR devices and the type of visual field defect have important implications for visual performance with AR devices. It is interesting that head-level objects were missed more than objects at other levels with all three devices. The head-level objects, which were straight ahead, could have been hard to focus on and identify for the majority of our participants, who were older and had central field loss. For the depth perception and object identification tasks, we used the minimum magnification level (1x) across all devices to ensure a consistent comparison. Higher magnification can improve performance on static visual function assessments such as visual acuity by enlarging details, but it significantly reduces the field of view in dynamic tasks, reducing spatial awareness and scanning ability. To balance detail visibility and situational awareness, especially in functional vision assessments, the magnification level was kept at the minimum. Beyond the general potential of AR devices, which many researchers have already established, using these devices as accessibility tools has many advantages, including distance viewing, reading, contrast enhancement, and other features [16–18,20–25]. The cost of AR devices varies widely; however, it is important to consider a device's performance in improving the visual functions of people with low vision. Surprisingly, Ziru, which costs $700, performed as well as, and sometimes better than, IrisVision ($2,999) or NuEyes ($3,999), raising hope that AR devices can become available at a very affordable cost.
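The magnification/field-of-view trade-off described above can be made concrete with a simplified geometric model: magnifying the camera image by m means the display shows only about 1/m of the camera's angular extent. The 40° display FOV used below is illustrative, not a measured specification of any of the three devices:

```python
import math

# Simplified model of how digital magnification shrinks the usable field of
# view (FOV) of a head-mounted display. The exact tangent relation is used;
# it reduces to FOV/m for small angles. Device FOV values are illustrative.

def effective_fov_deg(display_fov_deg: float, magnification: float) -> float:
    """Angular extent of the real-world scene visible at a given
    magnification."""
    half = math.radians(display_fov_deg / 2.0)
    return 2.0 * math.degrees(math.atan(math.tan(half) / magnification))

for mag in (1, 2, 4, 8):
    print(f"{mag}x -> {effective_fov_deg(40.0, mag):.1f} deg visible")
```

Halving-with-each-doubling behavior of this kind is why the object identification and depth tasks were run at 1x: at higher magnifications the visible scene shrinks rapidly, so scanning demand rises even as fine detail becomes easier to resolve.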

In this study, we used a heterogeneous group of individuals with a wide range of ocular conditions. While such a group is representative of the general low vision population, the variety of ocular pathologies and the limited sample size prevent conclusions about which types of AR devices are more effective for specific ocular conditions. However, both stand-alone and head-mounted AR models were found to be more effective in improving visual function than spectacle-model AR devices. The limited time participants had to explore the AR devices is a limitation of this study; consistent with previous findings, people with low vision need adequate time to explore these devices, which would yield more feedback. A follow-up study is therefore recommended in which the AR devices are loaned to participants for home use to understand their perception of the devices. Further research can explore how these findings can be applied to improve the home and work environments of people with low vision. Of note, in this study we compared object detection using the three AR devices. Even though head-mounted displays (HMDs) are becoming increasingly popular as augmented reality or virtual reality based assistive technologies for various activities of daily life, including navigation and wayfinding [4,26,27], most HMDs are not recommended for mobility and present safety concerns for use while moving, since magnification and a limited field of view can interfere with distance judgments and perception. We found that our participants benefited from using the devices for detecting and identifying objects in an indoor setting.

Conclusion

The majority of visual functions improved with all three devices, but distance vision improved most across all illumination settings with Ziru and IrisVision, whereas the improvement with NuEyes was limited to certain lighting conditions. Near visual acuity, distance contrast sensitivity, and near contrast sensitivity improved with Ziru and IrisVision, but not with NuEyes. While the differences in visual function across the three models of AR devices were modest, the differences in cost were large. Researchers and individuals with low vision expect spectacle-style augmented reality (AR) devices to be a good option; however, this expectation was not supported by this study in terms of improving visual function. The study found that stand-alone and head-mounted AR models were more effective in improving visual function than spectacle models. While augmented reality devices do enhance visual performance, it is important for them to have a simple design with the most essential functions.

References

  1. Thylefors B, Négrel AD, Pararajasegaram R, Dadzie KY. Global data on blindness. Bull World Health Organ. 1995;73(1):115–21. pmid:7704921
  2. Flaxman AD, Wittenborn JS, Robalik T, Gulia R, Gerzoff RB, Lundeen EA, et al. Prevalence of visual acuity loss or blindness in the US: a Bayesian meta-analysis. JAMA Ophthalmol. 2021;139(7):717–23.
  3. Rein DB, Wittenborn JS, Zhang P, Sublett F, Lamuda PA, Lundeen EA, et al. The economic burden of vision loss and blindness in the United States. Ophthalmology. 2022;129(4):369–78. pmid:34560128
  4. Zhao Y, Hu M, Hashash S, Azenkot S. Understanding low vision people's visual perception on commercial augmented reality glasses. In: Conf. Hum. Factors Comput. Syst; 2017. 4170–81.
  5. Abdulhussein D, Abdul Hussein M. WHO Vision 2020: have we done it? Ophthalmic Epidemiol. 2023;30(4):331–9. pmid:36178293
  6. Kartha A, Sadeghi R, Bradley C, Tran C, Gee W, Dagnelie G. Measuring visual information gathering in individuals with ultra low vision using virtual reality. Sci Rep. 2023;13(1):3143. pmid:36823360
  7. Gopalakrishnan S, Paramasivan G, Sathyaprasath M, Raman R. Preference of low vision devices in patients with central field loss and peripheral field loss. Saudi J Ophthalmol. 2021;35(4):286–92.
  8. Deemer AD, Bradley CK, Ross NC, Natale DM, Itthipanichpong R, Werblin FS, et al. Low vision enhancement with head-mounted video display systems: are we there yet? Optom Vis Sci. 2018;95(9):694–703.
  9. Massof RW, Fletcher DC. Evaluation of the NEI visual functioning questionnaire as an interval measure of visual ability in low vision. Vision Res. 2001;41(3):397–413. pmid:11164454
  10. Garcia GA, Khoshnevis M, Gale J, Frousiakis SE, Hwang TJ, Poincenot L, et al. Profound vision loss impairs psychological well-being in young and middle-aged individuals. Clin Ophthalmol. 2017:417–27.
  11. Wolffsohn JS, Cochrane AL. Design of the low vision quality-of-life questionnaire (LVQOL) and measuring the outcome of low-vision rehabilitation. Am J Ophthalmol. 2000;130(6):793–802. pmid:11124300
  12. Evans JR, Fletcher AE, Wormald RPL. Depression and anxiety in visually impaired older people. Ophthalmology. 2007;114(2):283–8. pmid:17270678
  13. Deemer AD, Swenor BK, Fujiwara K, Deremeik JT, Ross NC, Natale DM, et al. Preliminary evaluation of two digital image processing strategies for head-mounted magnification for low vision patients. Transl Vis Sci Technol. 2019;8(1):23. pmid:30834171
  14. Peli E, Luo G, Bowers A, Rensing N. Development and evaluation of vision multiplexing devices for vision impairments. Int J Artif Intell Tools. 2009;18(3):365–78. pmid:20161449
  15. Cheng D, Wang Q, Liu Y, Chen H, Ni D, Wang X, et al. Design and manufacture AR head-mounted displays: a review and outlook. Light: Adv Manuf. 2021;2(3):350–69.
  16. Kinateder M, Gualtieri J, Dunn MJ, Jarosz W, Yang XD, Cooper EA. Using an augmented reality device as a distance-based vision aid-promise and limitations. Optom Vis Sci. 2018;95(9):727–37.
  17. Culham LE, Chabra A, Rubin GS. Clinical performance of electronic, head-mounted, low-vision devices. Ophthalmic Physiol Opt. 2004;24(4):281–90. pmid:15228505
  18. Miller A, Crossland MD, Macnaughton J, Latham K. Are wearable electronic vision enhancement systems (wEVES) beneficial for people with age-related macular degeneration? A scoping review. Ophthalmic Physiol Opt. 2023;43(4):680–701. pmid:36876427
  19. Lynn MH, Luo G, Tomasi M, Pundlik S, E Houston K. Measuring virtual reality headset resolution and field of view: implications for vision care applications. Optom Vis Sci. 2020;97(8):573–82. pmid:32769841
  20. Htike MH, Margrain H, Lai YK, Eslambolchilar P. Augmented reality glasses as an orientation and mobility aid for people with low vision: a feasibility study of experiences and requirements. In: Conf. Hum. Factors Comput. Syst; 2021. 1–15.
  21. Li Y, Kim K, Erickson A, Norouzi N, Jules J, Bruder G, et al. A scoping review of assistance and therapy with head-mounted displays for people who are visually impaired. ACM Trans Access Comput. 2022;15(3):1–28.
  22. Yeo JH, Bae SH, Lee SH, Kim KW, Moon NJ. Clinical performance of a smartphone-based low vision aid. Sci Rep. 2022;12(1):10752. pmid:35750770
  23. Wittich W, Lorenzini MC, Markowitz SN, Tolentino M, Gartner SA, Goldstein JE, et al. The effect of a head-mounted low vision device on visual function. Optom Vis Sci. 2018;95(9):774–84.
  24. Gopalakrishnan S, Suwalal SC, Bhaskaran G, Raman R. Use of augmented reality technology for improving visual acuity of individuals with low vision. Indian J Ophthalmol. 2020;68(6):1136–42.
  25. Crossland MD, Starke SD, Imielski P, Wolffsohn JS, Webster AR. Benefit of an electronic head-mounted low vision aid. Ophthalmic Physiol Opt. 2019;39(6):422–31. pmid:31696539
  26. Fox DR, Ahmadzada A, Friedman CT, Azenkot S, Chu MA, Manduchi R, et al. Using augmented reality to cue obstacles for people with low vision. Opt Express. 2023;31(4):6827–48.
  27. Hamilton-Fletcher G, Liu M, Sheng D, Feng C, Hudson TE, Rizzo JR, et al. Accuracy and usability of smartphone-based distance estimation approaches for visual assistive technology development. IEEE OJEMB. 2024.