Fig 1.
A visual reorientation illusion.
A person lying down in an environment in which visual cues indicate one upright (blue arrow) and gravity cues indicate another (red arrow). Each arrow points in the direction of “up” signaled by the corresponding cue. In (A) the person is supine and in (B) they are prone. If the visually indicated upright were to dominate, the person would experience a visual reorientation illusion (VRI) and perceive themselves as upright (top thought clouds). If gravity cues were to dominate, they would perceive themselves as supine or prone (bottom thought clouds).
Fig 2.
Screen captures of the hallway environment (A and B) and the starfield environment (C and D). A and C show the scenes before movement, while the target (vertical red line) was visible and the walls were invisible; B and D show the scenes after the participant had clicked the left mouse button and the optic flow had started to simulate motion down the hallway. The hallway environment was used in Experiments 1, 2, and 3; the starfield environment was used only in Experiment 3.
Fig 3.
Gravity, vision, and perceived up by posture.
The figure compares how gravity and vision (top panel) and perceived up (bottom panel) are affected by body and head posture. The red arrows in the top panel (A-F) show the visual and gravitational accelerations experienced by the participant in the same body postures as in the bottom panel (G-L). When looking up (A, C, E), the visually simulated self-motion is in the same direction as the reaction to gravitational acceleration; when looking down (B, D, F), they are opposed. The bottom panel (G-L) shows the contributions of the cues to upright (blue arrows) while wearing a VR HMD with visual cues consistent with “up” being towards the top of the head (see Fig 2A and 2B), and with the head, or the head and body, tilted. In the starfield environment (see Fig 2C and 2D) there is no “visual up” (K, L).
Fig 4.
An example of the hand positions used to help clarify the questionnaire.
Column A demonstrates the hand posture shown for option 1 in the questionnaire. Column B demonstrates option 2, and Column C demonstrates option 3.
Fig 5.
The supine-similar and prone-similar postures.
A demonstrates how the participant’s head was positioned in the supine-similar posture. B shows what this looked like with the Oculus headset on. C shows how the head was positioned in the prone-similar posture. In both cases the back was kept as straight as possible. The individual in this manuscript has given written informed consent (as outlined in the PLOS consent form) to publish these images.
Table 1.
Overall VRI rates for the three experiments.
Fig 6.
The average simulated travel distance needed to reach each target for each posture in each experiment.
Blue bars indicate responses while standing, orange while supine, and grey while prone. Error bars are ± standard error.
Table 2.
The mean gain for each condition.
Fig 7.
The gain ratio as a function of VRI likelihood.
Both panels show the changes in gain while supine and prone relative to standing. The solid horizontal line at 1 indicates no change with posture. In both panels, blue refers to the supine data and orange to the prone data. Error bars are ± standard error. A shows the change in gain calculated for each participant in each experiment and environment and then averaged. B shows the same gains relative to standing as in A, plotted as a function of the likelihood of experiencing a VRI for all three experiments and all environments. The lines are best-fit linear regressions.
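For concreteness, the quantity plotted in both panels can be sketched as follows; this assumes gain is defined, as is common in simulated self-motion studies, as the simulated travel distance divided by the target distance (the precise definition is given in the main text, not in this caption):

% A minimal sketch of the gain ratio in Fig 7, under the assumption
% that gain_p = simulated travel distance / target distance in posture p.
\[
  \text{gain ratio}_{p} = \frac{\text{gain}_{p}}{\text{gain}_{\text{standing}}},
  \qquad p \in \{\text{supine},\ \text{prone}\}
\]
% A ratio of 1 (the solid horizontal line) means posture p left the
% gain unchanged relative to standing.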
Fig 8.
A model of how perceived orientation, influenced by the visual cues present in an environment, might affect perceived motion. VIS refers to the optic flow information. The left side indicates the situation when all cues are aligned; the right side indicates how the detection of conflict may alter perceived travel distance.