Fig 1.
The robotic body surrogate (Willow Garage PR2).
(A) The PR2 robot. (B) One of the robot’s seven DoF arms, including the tactile-sensing fabric skin (gray) and foam padding (black) on the metallic gripper. (C) The base of the robot, including tactile-sensing fabric skin (blue), placed atop foam padding.
Fig 2.
Enabling system operation through single-button mouse-type input simplifies design and provides broad accessibility.
Individuals with diverse disease or injury conditions likely have diverse, and possibly changing, levels of impairment. These individuals may choose from a variety of commercially available, off-the-shelf input devices that enable single-button mouse-type input, any of which can be used to operate our robotic body surrogate. The many possible combinations of disease/injury, impairment, and usable computer interface are connected here by gray lines. These devices make our system accessible across a range of sources of impairment and personal preferences. Also, system developers only need to support a single mode of interaction, reducing development and support effort. Examples: (Blue line) An individual with ALS may have limited hand function and choose to use a head-tracking mouse; (Orange line) An individual with spinal muscular atrophy (SMA) may experience upper-extremity weakness and prefer a voice-controlled mouse; (Green line) An individual with a spinal cord injury (SCI) may retain only voluntary eye movement and use an eye-gaze-based mouse. All three of these individuals can operate our system without modification, making it accessible across types and sources of motor impairment.
Fig 3.
The end-effector position control ring augmented reality interface with virtual preview (yellow) and goal (green) gripper displays.
(A) The control ring’s rotation remains aligned with the robot’s body. (B) The control ring appears parallel to the floor to convey vertical height. (C) A yellow virtual gripper ‘previews’ commands by displaying the pose the gripper will attempt to reach if commanded. (D) A green virtual gripper displays the gripper’s current goal, and disappears once the gripper reaches this goal.
Fig 4.
Contact displays overlaid on the video interface based on data from the fabric-based tactile sensors.
(A) Contact on the forearm against the table edge. (B) Contact between the robot’s base and the wheelchair. (C) Contact with the robot’s base behind the current field of view.
Fig 5.
The 3D Peek feature showing the 3D point cloud over the live camera feed, rotated to provide depth perception.
(A) The view before the 3D Peek. (B) ≈ 0.1s into the 3D Peek. (C) ≈ 0.3s into the 3D Peek. (D) 3D Peek view (holds for 2.8s).
Fig 6.
The interface used to operate the robotic body surrogate.
(A) ‘Looking’ mode. (B) ‘Spine’ mode. (C) ‘Driving’ mode. (D) ‘Hand position’ mode. (E) ‘Hand rotation’ mode. (F) ‘3D Peek’ depth display.
Fig 7.
The end-effector orientation control augmented reality interface with virtual preview (yellow) and goal (green) gripper displays.
(A) 3D virtual orientation controls around end effector. (B) Hovering over the blue arrow hides other arrows and shows yellow preview. (C) After sending a command, a green virtual gripper shows the active goal. (D) Gripper position after rotating to left from (A). (E) Hovering over green arrow hides arrows, shows preview. (F) Gripper position after rotating upward from (E).
Fig 8.
Locations of the 15 participants with profound motor deficits who operated, across long distances, a robotic body surrogate located in Atlanta, GA (star).
This evaluation used participants’ own, existing computer hardware and Internet connections, demonstrating our system’s performance with real-world bandwidth and latency constraints. Darker states indicate two participants from that state; lighter states indicate one participant.
Fig 9.
15 participants with profound motor deficits operated the robotic body surrogate over long distances to perform the Action Research Arm Test (ARAT).
(A) A participant remotely performing an item from the ARAT with the robotic body surrogate: grasping, lifting, and placing a 7.5 cm wooden block. (B) Comparison of participant ARAT scores without (left) and with (right) the robot (n = 15, W = 120, p = 0.00035). (C) ARAT score improvements vs. the minimal clinically important difference (MCID) reported in the literature [60] (MCID = 12, n = 15, W = 96, p = 0.00147).
Fig 10.
Participants indicated that the robotic body surrogate would provide a significantly meaningful improvement in their ability to perform both manipulation tasks (n = 15, W = 105, p = 0.00036) and self-care tasks (n = 15, W = 120, p = 0.00024).
Participants were asked to complete the sentence “using the robotic system rather than my own arms would make my ability to perform [manipulation tasks / self-care tasks]…” using a seven-point scale, with possible responses (based on [60]): 1. Much worse, 2. Meaningfully worse, 3. A little worse, but not meaningfully, 4. Neither better nor worse, 5. A little better, but not meaningfully, 6. Meaningfully better, and 7. Much better. The charts show the distribution of responses to each form of this question. Significance was evaluated using a 1-tailed, 1-sample Wilcoxon signed-rank test vs. a rating of 5 (‘A little better, but not meaningfully’).
Fig 11.
Participants significantly agreed that the system was both useful (use) and easy to use (ease) for both manipulation tasks (manip.) and self-care tasks (self).
(A) Use-Manip.: W = 120, p = 0.00026. (B) Use-Self: W = 105, p = 0.0004. (C) Ease-Manip.: W = 74, p = 0.0026. (D) Ease-Self: W = 87.5, p = 0.0014. n = 15 for all. Participants were asked to rate their agreement with the statements “the robotic system is (easy to use / useful) for performing (manipulation / self-care) tasks” using a seven-point scale. Allowed responses were: 1. Strongly disagree, 2. Disagree, 3. Somewhat disagree, 4. Neither agree nor disagree, 5. Somewhat agree, 6. Agree, and 7. Strongly agree. Charts show the distribution of responses to each combination of (useful / easy to use) and (manipulation tasks / self-care tasks). Significance was evaluated using a 1-tailed, 1-sample Wilcoxon signed-rank test vs. a rating of 4 (‘Neither agree nor disagree’).
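The 1-tailed, 1-sample Wilcoxon signed-rank tests described in this caption can be sketched with SciPy by testing the participant ratings against the neutral reference value. The ratings below are hypothetical illustrative values, not the study's actual data; the structure of the test (differences from the reference, `alternative="greater"`) is what matters.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical seven-point Likert ratings from n = 15 participants
# (NOT the paper's data; all values here happen to exceed the reference).
ratings = np.array([7, 6, 7, 5, 6, 7, 6, 5, 7, 6, 6, 7, 5, 6, 7])

reference = 4  # 'Neither agree nor disagree'

# One-tailed, one-sample Wilcoxon signed-rank test:
# are the ratings significantly greater than the reference rating?
stat, p = wilcoxon(ratings - reference, alternative="greater")
print(f"W = {stat}, p = {p:.5f}")
```

With every hypothetical rating above the reference, the statistic equals the full rank sum n(n+1)/2 = 120, matching the form of the W values reported in the caption.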
Fig 12.
15 participants with profound motor deficits operated the robotic body surrogate over long distances to simulate getting themselves a drink.
(A) The layout of the task room at the beginning of the task. The bottle (left) is placed on a shelf approximately two meters in front of the robot, and the mannequin in a wheelchair is placed nearby. The observing researcher sits in the back of the room. (B) A participant remotely retrieving the water bottle. (C) A participant reaching and rotating the grasped bottle toward the mannequin’s mouth. (D) The straw in the bottle at the center of the mannequin’s mouth; the small screw has adhered to the magnet behind the mannequin’s mouth, indicating successful completion of the task.
Fig 13.
Henry Evans performed 59 separate tasks, including ten distinct types of self-care task and seven distinct types of household task, during the seven-day in-home evaluation.
This figure shows Henry Evans performing a selection of these tasks, including: (A) Wiping his face. (B) Shaving his face. (C) Flipping a light switch (Henry visible in background). (D) Feeding himself yogurt. (E) Scratching his head. (F) Applying lotion to his legs.