Fig 1.
Two control interfaces (a,b) and two VR experiments (c,d).
(a) The gamepad is a conventional handheld device with which pitch and yaw movements are controlled by hand. (b) On the Limbic Chair, the pitch and yaw of the users' legs are translated into movement through the virtual environment. Images of (c) the city navigation scenario and (d) the flight simulation scenario as displayed in the Oculus Rift [8].
Fig 2.
Our VR system is composed of an Oculus Rift, a Limbic Chair, a gamepad, a workstation, and a horizontal bar fixed to the workbench.
Users can move their left and right legs independently while balancing on the chair.
Fig 3.
(a) A screenshot of the city navigation scenario from Experiment 1. Participants collected five red balls while navigating through the city using a real-time map displayed on a virtual tablet. (b) An overview of one of the cities from a bird’s eye perspective.
Fig 4.
Sixteen steps of the procedure for Experiments 1 and 2.
The first six steps included informed consent, baseline measurements for body sway and the questionnaires, and a general introduction to VR. The next five steps included training and testing for one of the control interfaces, and the final five steps included training and testing for the other control interface. The order of control interfaces was counterbalanced across participants.
Fig 5.
Categories and codes of the content analysis and the measures and questionnaires to which they correspond.
Fig 6.
Bar chart of all dependent measures from Experiment 1.
For this visualization only, each dependent measure was normalized to a value between 0 and 1 by subtracting the minimum value from each individual value and then dividing by the resulting maximum. Error bars represent ±2 standard errors of the mean (SEMs). Asterisks mark a significant main effect of control interface. Larger values are desirable for the presence questionnaire (PQ), system usability scale (SUS), and Accuracy. Smaller values are desirable for workload (NASA TLX), simulator sickness questionnaire (SSQ), Time, and Body sway.
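The normalization described in the caption is standard min-max scaling: shift each measure by its minimum, then divide by the maximum of the shifted values. A minimal sketch (the function name `min_max_normalize` is ours, not from the paper):

```python
def min_max_normalize(values):
    """Scale a list of measurements to [0, 1]:
    subtract the minimum, then divide by the resulting maximum."""
    lo = min(values)
    shifted = [v - lo for v in values]
    hi = max(shifted)
    # assumes at least two distinct values, so hi > 0
    return [v / hi for v in shifted]

print(min_max_normalize([2.0, 4.0, 6.0]))  # [0.0, 0.5, 1.0]
```

Note that this rescaling is purely for plotting comparability; the statistical analyses in Tables 1 and 2 use the raw measures.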
Table 1.
Questionnaire results from Experiment 1.
Significant results (p < 0.05) are bold.
Table 2.
Analysis of performance and additional measures from Experiment 1.
Significant results (p < 0.05) are bold.
Fig 7.
The results of the content analysis from Experiment 1.
The total numbers of negative (red) and positive (green) statements are shown on the horizontal axis, broken down by three categories (Control, Task, and User) and by control interface (vertical axis).
Fig 8.
(a) A screenshot of the flight simulation scenario from Experiment 2. Participants flew through fifteen rings and counted the number of birds. The next target ring was highlighted in yellow. A series of rings shown from (b) orthographic-frontal and (c) top views.
Fig 9.
Bar chart of all dependent measures from Experiment 2.
For this visualization only, each dependent measure was normalized to a value between 0 and 1 by subtracting the minimum value from each individual value and then dividing by the resulting maximum. Error bars represent ±2 SEMs. Asterisks denote a significant main effect of control interface. Larger values are desirable for the presence questionnaire (PQ), system usability scale (SUS), Accuracy, and Bird counts. Smaller values are desirable for workload (NASA TLX), simulator sickness questionnaire (SSQ), Time, and Body sway.
Table 3.
Questionnaire results from Experiment 2.
Significant results (p < 0.05) are bold.
Table 4.
Analysis of performance and additional measures from Experiment 2.
Significant results (p < 0.05) are bold.
Fig 10.
The results of the content analysis from Experiment 2.
The total numbers of negative (red) and positive (green) statements are shown on the horizontal axis, broken down by three categories (Control, Task, and User) and by control interface (vertical axis).