
Evaluation of a conceptual framework for predicting navigation performance in virtual reality

  • Jascha Grübel ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    jgruebel@student.ethz.ch

    Affiliation Department of Humanities, Social and Political Sciences, ETH Zürich, Switzerland

  • Tyler Thrash,

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Project administration, Resources, Supervision, Validation, Writing – original draft, Writing – review & editing

    Affiliation Department of Humanities, Social and Political Sciences, ETH Zürich, Switzerland

  • Christoph Hölscher,

    Roles Funding acquisition, Investigation, Project administration, Resources, Supervision, Writing – original draft, Writing – review & editing

    Affiliation Department of Humanities, Social and Political Sciences, ETH Zürich, Switzerland

  • Victor R. Schinazi

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Project administration, Resources, Supervision, Validation, Writing – original draft, Writing – review & editing

    Affiliation Department of Humanities, Social and Political Sciences, ETH Zürich, Switzerland

Abstract

Previous research in spatial cognition has often relied on simple spatial tasks in static environments in order to draw inferences regarding navigation performance. These tasks are typically divided into categories (e.g., egocentric or allocentric) that reflect different two-systems theories. Unfortunately, this two-systems approach has been insufficient for reliably predicting navigation performance in virtual reality (VR). In the present experiment, participants were asked to learn and navigate towards goal locations in a virtual city and then perform eight simple spatial tasks in a separate environment. These eight tasks were organised along four orthogonal dimensions (static/dynamic, perceived/remembered, egocentric/allocentric, and distance/direction). We employed confirmatory and exploratory analyses in order to assess the relationship between navigation performance and performances on these simple tasks. We provide evidence that a dynamic task (i.e., intercepting a moving object) is capable of predicting navigation performance in a familiar virtual environment better than several categories of static tasks. These results have important implications for studies on navigation in VR that tend to over-emphasise the role of spatial memory. Given that our dynamic tasks required efficient interaction with the human interface device (HID), they were more closely aligned with the perceptuomotor processes associated with locomotion than wayfinding. In the future, researchers should consider training participants on HIDs using a dynamic task prior to conducting a navigation experiment. Performances on dynamic tasks should also be assessed in order to avoid confounding skill with an HID and spatial knowledge acquisition.

Introduction

Researchers in spatial cognition have frequently relied on virtual reality (VR) in order to conduct experiments on human navigation [1, 2]. Some researchers have investigated the use of different human interface devices (HIDs; e.g., joystick, mouse and keyboard) with respect to navigation performance in virtual environments [3–6]. However, the specific aspects of spatial behaviour that mediate the relationship between skill at manipulating the HID and navigation performance have yet to be determined. Interaction with an HID may be related to navigation differently than natural walking through a real environment because the HID involves an additional layer of abstraction between an intended action and its perceptual consequences [7, 8]. This mapping between action and perception may be learned incrementally in a similar way as movements in real environments [9], but people generally have more experience with natural walking than with manipulating an HID. In addition, experience with a specific HID may explain performance differences for various navigation tasks [3, 6]. The present study assesses the manner in which participants’ skills with an HID relate to navigation performance in a virtual environment.

According to Montello [10, p. 258-260], navigation can be decomposed into locomotion (i.e., manoeuvring through a large-scale environment) and wayfinding (i.e., spatial decision-making). Traditionally, spatial cognition research has focused on the importance of spatial memory for wayfinding tasks and may have overlooked the importance of locomotion for large-scale navigation. Following Gibson [11], Heft [12] has characterized the process of navigation as apprehending the invariant structure of an environment during locomotion through a sequence of vistas (i.e., the features available to an observer from a particular viewpoint) separated by transitions (i.e., points along a route at which a previously occluded vista gradually comes into view). However, there is insufficient evidence to suggest that a locomotion-based theory can explain navigation more generally (but see [13]).

Spatial behaviour has also been characterised along other dichotomies, including perception-action and cognitive components [14–17], fine-grained and categorical spatial representations [18–20], coordinate and categorical spatial representations [21–24], taxon and locale systems [25–28], online and offline processes [29], and egocentric and allocentric reference frames [30]. Allen and Haun [31] have ascribed some of these distinctions to the same two spatial processing systems but note that alternative theories with more systems may be appropriate (cf., [8]). Rather than presuming the alignment of different two-systems theories, the framework used for the present study constructs several orthogonal dimensions based on existing systems in order to predict navigation performance. These dimensions consist of static and dynamic stimuli, perceived and remembered information, egocentric and allocentric reference frames, and distance and direction judgements.

In VR, the user tends to be dynamic, but distinct stimuli (e.g., buildings, trees) in the virtual environment can be either static or dynamic. For example, a parked car can be considered a static stimulus, and a car moving down the street can be considered a dynamic stimulus. With respect to optic flow, static stimuli result in invariant spatial information in the visual field relative to their surroundings [11]. In contrast, dynamic stimuli can move through the visual field independently of changes in optic flow that result from self-motion [11]. Previous research in VR has often employed static stimuli in order to investigate navigation [1]. These studies have successfully demonstrated the role of spatial memory for navigation through static environments. For example, spatial memory may be assessed in terms of participants’ abilities to shortcut [32], build models [33], and conduct judgements of relative direction [34]. However, the focus on static environments may have resulted in a bias towards tasks that rely on the integration of spatial information over time in memory [35, 36] and neglected the potential importance of dynamic stimuli perceived during navigation ([37]; but see [38, 39]). Responses to dynamic stimuli in VR may require more skill at manoeuvring the HID than responses to static stimuli when the stimuli move in an unpredictable manner. Thus, tasks with dynamic stimuli may tap previously unidentified individual differences in locomotion behaviour during navigation.

The static/dynamic dimension may also be disentangled from a perceived/remembered dimension during navigation because the perception (and not necessarily representation) of static objects is critical for many spatial behaviours [33, 40, 41]. Indeed, these spatial behaviours are often used to infer differences in mental representations but could also indicate differences in the initial perception of the objects, even when they are no longer visible. This distinction between perceived and remembered spatial information has important consequences for spatial reasoning with respect to immediate and remote environments [14, 42], especially when perception and memory are considered along a continuum. In this context, recently learned environments would lie between immediate and remote environments. For example, Waller and Hodgson [42] found that the representation of a remote environment can be relatively less accurate and less precise than the representation of an immediate environment. However, in aggregate, less precise representations may lead to more accurate localisations [18]. On the other hand, responses to recently learned information tend to be relatively more precise [42] and more accurate (depending on response modality; [14]) than responses to very familiar information.

According to Avraamides and Kelly [29], perceived information typically involves an egocentric reference frame, but remembered information may be egocentric (e.g., during scene recognition [43, 44] or pointing responses [45]) or allocentric (whether intrinsic [46] or environmental [47]; for a review, see [30, 48]). However, some researchers claim that remembered information is also primarily egocentric [49, 50]. During navigation, individuals may rely on external representations that are either egocentric (e.g., route instructions [51]) or allocentric (e.g., maps [52]), and these representations may be employed to enforce a navigator’s choice of reference frame. The ease with which one uses either type of external representation during navigation can also indicate the format of the corresponding internal representation [52].

Distance and direction estimates have been used to infer egocentric representations based on a static, perceived environment (e.g., [14, 40, 53]); egocentric representations based on a static, remembered environment (e.g., [14, 42, 45, 47, 53–55]); egocentric representations based on a dynamic, perceived environment (e.g., [36, 38, 39]); and allocentric representations based on a static, remembered environment (e.g., [42, 47, 52, 54, 55]). In addition, distance and direction judgements may also reflect two different spatial abilities because of differences in how translations and rotations are perceived and remembered (e.g., [4, 40, 56–59]). For example, Easton and Sholl [56] found that rotations and translations led to different performance profiles in regularly (but not irregularly) structured arrays of objects. Thus, this distinction between direction and distance may represent an additional dimension of spatial tasks and may be orthogonal to the static/dynamic, perceived/remembered, and egocentric/allocentric dimensions.

The present study investigates the manner in which these four orthogonal dimensions of spatial tasks can be used to predict navigation through a virtual environment. Specifically, we expect tasks with dynamic stimuli to be the best predictors of navigation behaviour in a familiar virtual environment because these tasks are closely associated with participants’ skills when using an HID. Towards this end, we designed eight simple tasks that systematically assess different points along these dimensions. We related performance on these eight tasks to navigation through a virtual reality replica of a university campus [33]. To anticipate, we found that an egocentric task in which participants chased a moving object predicted goal-directed navigation better than all four dimensions taken together.

Methods

Participants

Twenty-three participants were recruited for the experiment from the University Registration Center for Study Participants (https://www.uast.uzh.ch/) via the ETH Decision Science Laboratory (DeSciL). Three participants (two female) experienced simulator sickness and were excluded from the analyses. Of the remaining 20 participants, 11 were female. The age of the participants ranged from 18 to 28 years (M = 21.8, SD = 3.01).

Ethics statement.

The experiment was approved by the ETH Zurich Ethics Commission (EK 2013-N-73). Prior to starting the experiment, written informed consent was obtained from all participants. The participants were paid 30 CHF per hour. Participants that aborted the experiment due to simulator sickness were compensated with 20 CHF.

Materials

Hardware.

The technical setup for the experiment consisted of a WorldViz CAVE setup with three computers. Each system was equipped with a Core i7-3820 at 3.6 GHz with 12 GB of RAM and an Nvidia Quadro K4000 with 3 GB of RAM. The CAVE consisted of three NEC U310W ultra-short-throw projectors running at a 1680 × 1050 resolution during 3D projection. To enable 3D perception, Volfoni 3DGE RF alternate-frame-sequencing shutter glasses were used. The WorldViz PPT Real-Time Motion Tracking System was used for tracking head position and orientation. The tracking system was connected to a separate computer to reduce the computational load on the main machines. Participants were seated in a chair located in the middle of the CAVE, facing the middle screen. A small table was mounted on the armrests so that participants could comfortably rest the joystick (Logitech Extreme 3D Pro).

The motion sensors attached to the participants’ heads provided their orientation in relation to the CAVE, which was used to determine their orientation in the virtual environment. The head orientation together with the joystick was used to turn and move within the virtual environment. Translational movements were executed by pushing the joystick in the desired direction (i.e., forward, backward, left, and right), while rotations were performed by twisting the joystick left or right and by turning the head. However, there was a subtle difference between how the joystick and the head trackers controlled rotation. When using the joystick, the projected virtual environment rotated to display the desired view direction. In contrast, turning the head merely changed the recorded viewing direction without rotating the projection. A visual “catchment area” was provided in order to facilitate interaction with elements in the environment. This catchment area consisted of a yellow semi-transparent circle on the ground that moved with the participant’s position and head rotation (yaw axis) to indicate the location at which an interaction would occur. All translational movements were performed relative to the viewing direction (i.e., pushing the joystick forward always resulted in the expansion of optic flow from the point of focus).
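The view-relative translation described above amounts to rotating the joystick vector by the current heading. A minimal sketch, assuming a 2D ground plane (the function and parameter names are ours, not the experiment’s Vizard code):

```python
import math

def world_translation(joy_x, joy_y, heading_deg, speed=1.0):
    """Convert a joystick deflection (joy_x: rightward, joy_y: forward)
    into a world-space (dx, dz) step relative to the viewing direction,
    so that pushing forward always moves along the current view axis."""
    heading = math.radians(heading_deg)  # 0 deg = facing +z
    dx = speed * (joy_x * math.cos(heading) + joy_y * math.sin(heading))
    dz = speed * (-joy_x * math.sin(heading) + joy_y * math.cos(heading))
    return dx, dz
```

For example, pushing the joystick forward while facing along +z yields a step of (0, 1), whereas the same deflection after a 90° turn yields (1, 0).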

Software.

We used custom-designed software [60] for conducting experiments with a Vizard CAVE system. This software provided automatic data storage (i.e., logging the position of the observer and static/dynamic elements) and logic units to set up the experiment. The obtained data was stored in a MySQL database (version 5.6.16) and subsequently exported to Matlab 8.2.0.29 (R2017a) for further processing and analysis.

Virtual environments.

Two different virtual environments were used in this experiment. One environment (the Sphere Environment) consisted of a small meadow (40 meters x 40 meters) with randomly placed spheres. Each sphere had a radius of 0.25 meters, floated 0.25 meters above ground, and had a minimum distance of 2 meters to the nearest sphere. The other environment (the Virtual SILCton Environment) consisted of a small road network, 22 buildings, and some additional structures (e.g., statue, benches). Six locations were selected for the navigation task. A sign with each location’s name was placed in front of each target (Fig 1). The digital model of Virtual SILCton has been used in previous spatial navigation research [33]. The model was originally created in Sketchup and then exported to Vizard as a collada file.
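The sphere layout in the Sphere Environment can be generated with simple rejection sampling. The sketch below is illustrative (the function name, seed, and uniform-placement assumption are ours, not the authors’ generation code):

```python
import math
import random

def place_spheres(n=40, size=40.0, min_dist=2.0, seed=1):
    """Place n sphere centres uniformly at random on a size x size meadow,
    rejecting any candidate closer than min_dist to an accepted sphere."""
    rng = random.Random(seed)
    spheres = []
    while len(spheres) < n:
        x, y = rng.uniform(0.0, size), rng.uniform(0.0, size)
        if all(math.hypot(x - sx, y - sy) >= min_dist for sx, sy in spheres):
            spheres.append((x, y))
    return spheres
```

With 40 spheres and a 2-meter minimum separation on a 40 × 40 meter meadow, the acceptance rate stays high, so this naive approach converges quickly.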

thumbnail
Fig 1. Overview of the Virtual SILCton Environment and the target locations.

A top-down perspective of Virtual SILCton with the six target locations (red). The ID of each location does not correspond to the order of visits during the experiment.

https://doi.org/10.1371/journal.pone.0184682.g001

Procedure

Upon arriving at the lab, participants were presented with a document describing the main goals and experimental procedure (see S1 File on page 21) and were asked to complete the consent form. Participants then completed the Santa Barbara Sense of Direction scale (SBSOD) [61]. Before each phase, they were given written instructions regarding each of the VR tasks (see S1 File). A small set of written questions was also given to the participants in order to ensure that they read and understood the instructions. At this stage, participants were also given time to ask questions about the experiment and procedure. Participants were then seated in the middle of the CAVE and given the joystick. The full protocol of the experiment is available online on protocols.io [62].

Participants completed a Training Phase in the Sphere Environment, a Navigation Phase in the Virtual SILCton Environment, and a Simple Tasks Phase in the Sphere Environment. During the training task, participants were also allowed to ask questions regarding the joystick and follow-up tasks but were asked to refrain from asking questions during testing. A video representing the entire experimental procedure is available online as supplementary material (see S1 Video).

Training Phase.

The training phase was used to familiarise the participants with the VR setup and joystick. Participants were asked to use the joystick to move around and collect 10 of 40 randomly coloured and placed spheres. The visual “catchment area” was provided in order to facilitate the collection task (Fig 2a). To collect a sphere, participants were asked to place the sphere within the catchment area and press the trigger button on the joystick. A counter at the top of the screen indicated when they collected a sphere.

thumbnail
Fig 2. Training and navigation.

(a) Screenshot of a participant collecting a sphere during the training phase. The yellow catchment area surrounds the intended target. For this figure, the catchment area appears slightly brighter than in the actual experiment. (b) Screenshot of a participant calling the arrow during the navigation phase. The destination (Tobler Museum) is indicated in the top-left corner of the screen. The energy bar is placed at the top of the screen.

https://doi.org/10.1371/journal.pone.0184682.g002

Navigation Phase.

In this phase, participants were asked to find a series of goal locations in the Virtual SILCton environment. Participants were unfamiliar with this virtual environment at the beginning of the Navigation Phase, so the first block of trials constituted a search task. During navigation, participants could press the trigger button on the joystick in order to call up a 3D arrow that pointed in the straight-line direction of the target location (ignoring any potential obstacles along the way). The arrow did not guide the participants along a predefined route to the target location. An energy bar was used to limit participants’ interactions with the arrow (Fig 2b). Energy was consumed as the participants pressed the trigger. When the energy was depleted, participants were required to wait 10 seconds before they could trigger the arrow again. This mechanism prevented participants from continuously pressing the trigger but allowed them to use it primarily when they were disoriented.
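The arrow and energy-bar mechanics can be sketched as below. This is an illustrative reconstruction, not the experiment’s code: the bearing function simply computes the straight-line direction to the goal, and the `EnergyBar` class assumes a capacity, drain rate, and refill-after-lockout behaviour that the text does not specify beyond the 10-second wait.

```python
import math

def arrow_direction(pos, goal):
    """Straight-line bearing (degrees, clockwise from +y) from the
    participant's position to the goal, ignoring obstacles."""
    return math.degrees(math.atan2(goal[0] - pos[0], goal[1] - pos[1])) % 360.0

class EnergyBar:
    """Limits arrow use: energy drains while the trigger is held; once
    depleted, a 10 s lockout applies, after which the bar refills.
    Capacity and drain rate are illustrative assumptions."""
    def __init__(self, capacity=100.0, drain=25.0, lockout=10.0):
        self.capacity, self.drain, self.lockout = capacity, drain, lockout
        self.energy, self.locked_until = capacity, 0.0

    def can_show_arrow(self, now):
        if self.energy <= 0.0 and now >= self.locked_until:
            self.energy = self.capacity  # refill once the lockout expires
        return self.energy > 0.0

    def hold_trigger(self, now, dt):
        """Drain energy while the trigger is held; return whether the
        arrow is displayed during this time step."""
        if not self.can_show_arrow(now):
            return False
        self.energy -= self.drain * dt
        if self.energy <= 0.0:
            self.energy = 0.0
            self.locked_until = now + self.lockout
        return True
```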

The process of visiting all target locations was repeated over four blocks. During each of the first three learning blocks, participants were asked to visit the six locations in a random order. At the beginning of each trial, a large text appeared at the centre of the middle screen of the CAVE that indicated the name of the destination. Once displayed, the name of the destination remained at the top-left corner of the middle screen until participants reached the destination. During the fourth testing block, participants were asked to find the six target locations but without the help of the arrow. During testing, the visiting order of the target locations was fixed. This fixed order was designed to allow for comparisons across participants.

Simple Tasks Phase.

Participants performed a set of eight different tasks in random order in each of five blocks. Twenty white floating spheres were used for each of the simple tasks. For each task, target spheres were coloured blue. A pause screen appeared before each task and displayed a short description of the upcoming task. After each task, participants were rotated by a random angle and the spheres moved to new random locations. Participants used the joystick trigger to indicate that they completed the task to the best of their ability. No other feedback was provided to the participants. The name of the current task was displayed at the top-left corner of the main screen. Similar to the training task, a catchment area indicated the participants’ positions and head directions. For some tasks, an additional top-down map of the environment was displayed at the top-right corner of the middle screen that occupied 20% of the width and height of that screen.

Below are descriptions of each of the eight simple tasks. Fig 3 includes images of selected exemplary tasks.

thumbnail
Fig 3. Simple task example trials.

Images representing examples of the different tasks from the participant’s perspective. (a) Image of the Rotate (ROT) task from a first-person perspective. (b) Image of the Move (MOV) task from a first-person perspective. (c) Image of the Rotate with map (RWM) task from a first-person perspective with a north-up, top-down map. (d) Image of the Chase with map (CWM) task from a first-person perspective with a north-up, top-down map. This selection exhibits different components present in all tasks. For this figure, the catchment area appears slightly brighter than in the actual experiment and the size of the spheres on the map has been increased to be more visible to the reader.

https://doi.org/10.1371/journal.pone.0184682.g003

Rotate (ROT): Participants were asked to rotate to a target blue sphere. A successful trial consisted of turning until the blue sphere was in front of the participant’s head. Translations were disabled throughout this task.

Move (MOV): Participants initially faced a target blue sphere and were asked to walk towards it as accurately as possible.

Rotate with map (RWM): A north-up, top-down map was displayed at the top-right corner of the middle screen. This map did not provide any indication of the participant’s position in the virtual world. Participants were asked to turn towards the target sphere that was coloured blue on the map. The white spheres were also visible on the map. The target sphere was blue only on the map and was not visibly distinguishable from the other (white) spheres from the first-person perspective. Translations were disabled throughout this task.

Move with map (MWM): A north-up, top-down map was displayed at the top-right corner of the middle screen. The map did not provide any indication of the participant’s position in the virtual world. Participants were asked to walk to the location of the blue-coloured target sphere on the map. The white spheres were also visible on the map. The target sphere was blue only on the map and was not visibly distinguishable from the other (white) spheres from the first-person perspective.

Rotate from memory (RFM): Participants were asked to rotate sequentially to two blue target spheres. After the second rotation, all the spheres disappeared, and participants were asked to rotate back towards the direction of the first target sphere. Translations were disabled throughout this task.

Move from memory (MFM): Participants started the task facing a blue-coloured target sphere. Once participants started moving, all the spheres disappeared. Participants were asked to stop moving when they reached the previous location of the target sphere.

Chase (CHA): All spheres moved randomly within the virtual field. Participants were asked to move and intercept the blue target sphere as quickly as possible.

Chase with map (CWM): All spheres moved randomly within the virtual field. A north-up, top-down map was displayed at the top-right corner of the middle screen. The map indicated the participant’s position in the virtual world with a red arrow. The location on the map was continuously updated. Participants were asked to move and intercept the blue target sphere shown on the map. The white spheres were also visible on the map. The target sphere was blue only on the map and was not visibly distinguishable from the other (white) spheres from the first-person perspective.

Design and analysis

The eight simple tasks were designed to represent different combinations of the static/dynamic, perceived/remembered, egocentric/allocentric, and distance/direction dimensions described above.

Static versus Dynamic Stimuli. In each task with static objects, no spheres in the environment could be moved or move on their own. In contrast, tasks with dynamic objects contained spheres that moved independently of participants’ actions.

Perceived versus Remembered Information. Each task was defined according to whether participants could complete it based on the immediately perceived environment or based on a mental representation of the environment.

Egocentric versus Allocentric Reference Frame. Tasks that emphasised egocentric reference frames only presented information from a first-person perspective. In contrast, tasks that emphasised allocentric reference frames included a map of the environment from a top-down perspective.

Direction versus Distance. Tasks were also defined according to whether participants performed translations or rotations towards the target sphere.

Task selection.

The relationships among the eight simple tasks in terms of the four orthogonal dimensions can be visualised as a tree (see Fig 4).

thumbnail
Fig 4. Task classification tree.

This tree represents the variable assignments for each of the eight tasks. Independent variables are inner nodes, and tasks are presented as leaves.

https://doi.org/10.1371/journal.pone.0184682.g004

Out of 16 possible variants of the four orthogonal dimensions, eight variants are not suitable. First, the combination of dynamic stimuli and remembered information is not suitable because it is unclear how participants could predict the movement of a randomly moving sphere. Second, the combination of remembered information and allocentric reference frame is not suitable because participants could use either egocentric or allocentric mental representations to complete the task.

Measurements.

Participants’ performances in the Navigation and Simple Task Phases were measured with respect to the time required to complete each task and deviation in terms of angle and distance from the correct path. For the Navigation Phase, this required logging of the participant’s position and orientation within the virtual environment and the ID of each location in the scene. We also recorded the number of trigger presses (calling the arrow) as a measure of learning during navigation. For the Simple Task Phase, we logged the participants’ positions, orientations, and trigger presses (indicating task completion). Here, we also logged the position(s) of the sphere(s) with which participants were interacting. Over 600,000 data points were collected throughout the experiment and were directly logged into the database.

Analysis.

Data from the SBSOD and virtual environments was imported to Matlab and SPSS for analysis. For details on the database, refer to S1 Data. In a first preprocessing step, the raw data was grouped by participant and experiment scene. This data was then split according to indicator variables that marked the beginning and end of each task. For the dynamic sphere tasks, the data points were resampled at a fixed time step to obtain uniform samples. Weighted linear interpolation between two objectively measured points was used to obtain a complete sample at the required time steps (see S1 Code). We next computed error measures for both Navigation and Simple Task Phases. We also conducted a Regularised Exploratory Factor Analysis (REFA) [63, 64] for assessing the relationships among the various tasks and attempted to predict navigation performance using both the four orthogonal dimensions and the REFA factors. Additional statistical analyses were performed with SPSS (see S2 Data).
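The fixed-time-step resampling can be sketched as below; this is an illustrative Python reimplementation of the weighted linear interpolation (the actual preprocessing was performed in Matlab; see S1 Code):

```python
def resample(times, values, step):
    """Resample an irregularly sampled signal at a fixed time step using
    weighted linear interpolation between the two bracketing samples."""
    out, t, i = [], times[0], 0
    while t <= times[-1]:
        while times[i + 1] < t:  # advance to the bracketing interval
            i += 1
        w = (t - times[i]) / (times[i + 1] - times[i])
        out.append((t, (1.0 - w) * values[i] + w * values[i + 1]))
        t += step
    return out
```

For example, resampling the samples (t = 0, v = 0) and (t = 2, v = 4) at a 1-second step yields interpolated values 0, 2, and 4.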

Task errors.

As a metric for performance in the Navigation Phase, we used ArcGIS [65] to measure the optimal route distance dr between target locations and compared them to the actual distances dp walked by participants in the virtual environment. The ratio rd was considered the error measure as shown in Eq (1). (1)
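From these definitions, one plausible form of Eq (1) (a reconstruction from the surrounding text; the published equation may be normalised differently) is the ratio of walked to optimal distance:

```latex
r_d = \frac{d_p}{d_r}
```

Under this reading, a value of 1 indicates that the participant walked the optimal route, with larger values indicating detours.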

Four error measures were devised to account for the participants’ overall performance and their accumulated error within each of the eight simple tasks. Good performance was indicated by a score close to or equal to 0, and poor performance by a score close to or equal to 1.

Error measures were deviations in either rotation or distance from the optimal choice. To score the performance at the end of a static task, we computed the final deviation to the optimal outcome (e.g., looking in the target direction or standing at the target location). Scores on the dynamic tasks were computed by accumulating error at each time step based on whether the participants’ action was optimal (e.g., bringing them closer to view the target direction or moving them closer to the goal location; see S1 Code).

In order to calculate the final error of a participant’s rotation, we computed the absolute value of the angle εr between the participant’s viewing direction αp and the direction towards the goal from their location αg. In addition, we mapped degrees onto the interval [0, 1] as shown in Eq (2). The error measure semantically defines 0 as looking directly at the goal and 1 as looking in the exact opposite direction of the goal. (2)
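Given that 0 corresponds to looking directly at the goal and 1 to facing the exact opposite direction, Eq (2) can be written as follows (a reconstruction, assuming angular differences are wrapped to the range [0°, 180°]):

```latex
\varepsilon_r = \frac{\lvert \alpha_p - \alpha_g \rvert}{180^\circ}
```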

To measure each term εΔrt of the cumulative direction error εΔr, a simple sign function δr returned 1 if the participant rotated towards the goal, −1 if they turned away from the goal, and 0 if they remained static. Here again, we map the result onto the interval [0, 1]. The function is applied to the rotation in degrees that the participant performed between two sequential measurements in time [t, t + 1], as shown in Eq (3). To obtain the cumulative direction error, the direction error at each time step εΔrt is summed and divided by the number of time steps T in the task, as shown in Eq (4). (3) (4)
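One rendering of Eqs (3) and (4) consistent with this description (a reconstruction, assuming a linear mapping of the sign output onto [0, 1]) is:

```latex
\varepsilon_{\Delta r_t} = \frac{1 - \delta_r(\Delta r_t)}{2},
\qquad
\varepsilon_{\Delta r} = \frac{1}{T} \sum_{t=1}^{T} \varepsilon_{\Delta r_t}
```

so that rotating towards the goal contributes 0, rotating away contributes 1, and remaining static contributes 0.5 at each time step.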

The final distance error is the ratio of the participant’s start distance to the goal ds (i.e., the distance at the beginning of the task) to their end distance to the goal de (i.e., the distance at the time when they pressed the trigger to indicate the completion of the task). In addition, an offset of δc = 4 m (equivalent to the distance between the participant and the centre of the catchment area) was used to account for the catchment area. The resulting error measure εd, as shown in Eq (5), was also mapped onto the interval [0, 1]. An error of 1 indicated that the participant kept a distance equal to or larger than the start distance ds to the goal. An error of 0 indicated that the participant reached the goal up to the precision of the catchment area. (5)
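One mapping consistent with the stated endpoints (a reconstruction; the published normalisation may differ) is:

```latex
\varepsilon_d = \min\!\left(1,\; \max\!\left(0,\; \frac{d_e - \delta_c}{d_s - \delta_c}\right)\right)
```

This yields 0 when the goal is reached within the catchment area (de ≤ δc) and 1 when de ≥ ds.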

To measure the term εΔdt in the cumulative distance error εΔd, the optimal distance dopt that participants could have reached by moving Δdp directly towards the goal was compared to the distance dt+1 that they actually reached in the following time step. Here, Δdp refers to the distance in meters that the participant moved between two sequential measures in time. The result is mapped onto the interval [0, 1] by dividing by 2Δdp (see Eq (6)):

εΔdt = (dt+1 − dopt) / (2Δdp). (6)

To obtain the cumulative distance error, each εΔdt is summed and divided by the number of time steps T in a task (see Eq (7)):

εΔd = (1/T) Σt=1..T εΔdt. (7)
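The four error components can be sketched as follows. This is an illustrative Python reconstruction based on the definitions in the text; the original processing was done in Matlab (see S1 Code), and all function and variable names here are our own:

```python
def direction_error(alpha_p, alpha_g):
    """Final direction error: absolute angular difference between the
    viewing direction and the goal direction, scaled to [0, 1]."""
    diff = abs(alpha_p - alpha_g) % 360.0
    if diff > 180.0:
        diff = 360.0 - diff  # take the smaller angle between the two directions
    return diff / 180.0

def distance_error(d_start, d_end, delta_c=4.0):
    """Final distance error: end distance relative to start distance,
    with the catchment-area offset delta_c, clamped to [0, 1]."""
    ratio = (d_end - delta_c) / (d_start - delta_c)
    return min(1.0, max(0.0, ratio))

def cumulative_direction_error(sign_per_step):
    """Cumulative direction error: each step scores 0 if the participant
    rotated towards the goal (sign 1), 1 if away (sign -1), 0.5 if static (0)."""
    steps = [(1.0 - s) / 2.0 for s in sign_per_step]
    return sum(steps) / len(steps)

def cumulative_distance_error_term(d_t, d_next, step_length):
    """One term of the cumulative distance error: compare the distance
    actually reached with the optimal reachable distance d_opt."""
    d_opt = d_t - step_length  # moving straight towards the goal
    return (d_next - d_opt) / (2.0 * step_length)
```

For example, ending exactly on the 4 m catchment boundary yields a final distance error of 0, and moving directly away from the goal for a full step yields a cumulative distance error term of 1.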

Regularised Exploratory Factor Analysis.

Developed by Jung and Takane [63], Regularised Exploratory Factor Analysis (REFA) can be used with small sample sizes (n < 50) that may cause erratic behaviour in other types of Exploratory Factor Analysis (EFA) or Principal Component Analysis (PCA). With small sample sizes, the sample covariance matrix tends to be near singular and numerically ill-conditioned, which makes the application of EFA difficult. Furthermore, PCA is not always appropriate because it does not model measurement errors [64, 66]. For REFA, it is assumed that the unique variance Ψ is proportional to a tentative estimate of Ψ. This estimate is adjusted via the regularisation parameter λ [63]. For the present study, we adopted the one-parameter maximum likelihood (ML) estimation method under the anti-image assumption (ML REFA) [64]. ML REFA produces better results for small samples than other approaches [63] including unbiased estimates of factor loadings, smaller standard deviations, and smaller mean squared errors (MSEs). To estimate the number of factors, permutation tests (equivalent to parallel analysis) were employed [64, 67]. The resulting factors were then rotated using an oblique geomin rotation [68].
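The logic of the permutation test for the number of factors can be illustrated with a small sketch: a factor is retained when the corresponding eigenvalue of the observed correlation matrix exceeds the 95th percentile of eigenvalues obtained after independently permuting each variable's scores, which destroys the correlational structure. The following pure-Python illustration is our own simplification (not the authors' Matlab code) and, for brevity, checks only the dominant eigenvalue via power iteration:

```python
import random

def correlation_matrix(data):
    """Pearson correlation matrix of the columns in `data` (rows = participants)."""
    n, m = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(m)]
    sds = [(sum((row[j] - means[j]) ** 2 for row in data) / n) ** 0.5 for j in range(m)]
    return [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in data)
             / (n * sds[i] * sds[j]) for j in range(m)] for i in range(m)]

def dominant_eigenvalue(matrix, iters=200):
    """Largest eigenvalue of a symmetric PSD matrix via power iteration."""
    m = len(matrix)
    v = [1.0] * m
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient gives the eigenvalue for the converged vector
    return sum(v[i] * sum(matrix[i][j] * v[j] for j in range(m)) for i in range(m))

def retain_first_factor(data, n_perm=100, seed=0):
    """Permutation test: is the observed dominant eigenvalue larger than the
    95th percentile of dominant eigenvalues from column-wise permuted data?"""
    rng = random.Random(seed)
    observed = dominant_eigenvalue(correlation_matrix(data))
    null = []
    for _ in range(n_perm):
        cols = [list(col) for col in zip(*data)]
        for col in cols:
            rng.shuffle(col)  # permute each variable independently
        null.append(dominant_eigenvalue(correlation_matrix([list(r) for r in zip(*cols)])))
    null.sort()
    return observed > null[int(0.95 * n_perm)]
```

With two strongly correlated columns and one noise column, the observed dominant eigenvalue clearly exceeds the permutation null, so the first factor is retained.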

We applied REFA in order to identify the underlying factors of participants' performance in the eight tasks. For each of the eight simple tasks, a standard score zpi was aggregated across the five repetitions. The error εpi, the mean of the four error components (see Eq (8)), was used to compute the standardised score (see Eq (9)):

εpi = (εd + εΔd + εr + εΔr) / 4, (8)

zpi = (εpi − μi) / σi. (9)

For purely directional tasks, the sum of the final distance error and cumulative distance error equals zero. Thus, the error sum for those tasks was divided by 2 rather than 4.
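A minimal Python sketch of this scoring (our own illustration; we take μi and σi to be the mean and standard deviation of a task's errors across participants, an assumption based on standard z-scoring):

```python
from statistics import mean, stdev

def task_error(eps_d, eps_dd, eps_r, eps_dr, directional_only=False):
    """Combined error for one task (Eq 8). For purely directional tasks
    the two distance components are zero, so the sum is divided by 2."""
    total = eps_d + eps_dd + eps_r + eps_dr
    return total / (2 if directional_only else 4)

def standard_scores(errors):
    """Standardised scores (Eq 9) for one task across participants."""
    mu, sigma = mean(errors), stdev(errors)
    return [(e - mu) / sigma for e in errors]
```

For example, equal component errors of 0.2, 0.2, 0.4, and 0.4 yield a combined error of 0.3, and z-scoring then expresses each participant's error relative to the sample.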

We used the standardised scores across all eight tasks as input to the REFA Matlab library provided by [64] and computed communalities to assess the quality of the factor analysis. The communality hi indicates the variance of a task i explained by its loadings lji on all m factors [69] (see Eq (10)). We then computed the total communality ht (see Eq (11)) and the mean communality hm (see Eq (12)), which indicate the total variance that the factors can explain:

hi = Σj=1..m lji², (10)

ht = Σi=1..8 hi, (11)

hm = ht / 8. (12)
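The communality computations are straightforward sums of squared loadings. A brief Python sketch (illustrative only; `loadings` is a hypothetical tasks × factors matrix):

```python
def communalities(loadings):
    """Per-task communalities (Eq 10): sum of squared loadings across
    factors. loadings[i][j] is the loading of task i on factor j."""
    return [sum(l * l for l in row) for row in loadings]

def total_and_mean_communality(loadings):
    """Total (Eq 11) and mean (Eq 12) communality across all tasks."""
    h = communalities(loadings)
    return sum(h), sum(h) / len(h)
```

For instance, loadings of .6 and .8 on single factors give communalities of .36 and .64, a total communality of 1.0, and a mean communality of .5.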

Results

The results are divided into three sections. First, we present the results of the Navigation and Simple Tasks Phases. Then, we relate performance from the Simple Tasks Phase to performance in the Navigation Phase using both REFA and regression analysis.

Navigation Phase

Given that we deliberately randomised the order of trials during learning (but not testing), we could not compare navigation performance across blocks in terms of time or deviations from the optimal path. To test for learning in the Navigation Phase, we performed a repeated measures ANOVA with a Greenhouse-Geisser correction [70] for a violation of sphericity and found a difference among the three blocks in terms of the number of trigger presses (F(1.39, 26.35) = 53.86, MSE = 2324.42, p < .001). Two-tailed pairwise contrasts revealed significant differences between trigger presses in block 1 (M = 24.15, SD = 8.88) and block 2 (M = 12.10, SD = 7.75; F(1, 19) = 40.48, MSE = 71.73, p < .001, d = 1.45) and between trigger presses in block 2 and block 3 (M = 6.60, SD = 7.38; F(1, 19) = 28.81, MSE = 21.00, p < .001, d = 0.73).
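The Greenhouse-Geisser correction rescales the ANOVA degrees of freedom by an epsilon estimated from the sample covariance matrix of the repeated measures. A minimal pure-Python sketch of the closed-form epsilon estimate (our own illustration, not the authors' analysis code):

```python
def gg_epsilon(data):
    """Greenhouse-Geisser epsilon from a participants x conditions table,
    using the closed-form estimate based on the sample covariance matrix S:
    eps = k^2 (mean_diag - grand)^2 /
          ((k-1) (sum s_ij^2 - 2k sum row_mean_i^2 + k^2 grand^2))."""
    n, k = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(k)]
    # Sample covariance matrix of the k repeated-measures conditions
    s = [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in data) / (n - 1)
          for j in range(k)] for i in range(k)]
    mean_diag = sum(s[i][i] for i in range(k)) / k
    grand = sum(sum(row) for row in s) / (k * k)
    row_means = [sum(s[i]) / k for i in range(k)]
    num = k * k * (mean_diag - grand) ** 2
    den = (k - 1) * (sum(v * v for row in s for v in row)
                     - 2 * k * sum(m * m for m in row_means)
                     + k * k * grand * grand)
    return num / den
```

Epsilon ranges from 1/(k − 1) under a maximal violation of sphericity to 1 when sphericity holds; the corrected degrees of freedom are ε(k − 1) and ε(k − 1)(n − 1), which is how non-integer values such as F(1.39, 26.35) arise.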

Participants required a mean of 65.69 seconds (SD = 4.85) to complete the testing block with a mean distance error ratio of 1.13 (SD = 0.09). A two-tailed, one-sample t-test comparing the average distance error ratio to one revealed a significant difference (t(19) = 6.86, se = 0.02, p < .001, d = 1.53).

The two-tailed correlation between SBSOD and mean distance error ratio from the testing block was not significant (r(18) = .22, p = .35).

Fig 5 presents the best and worst performing participants’ routes in the testing phase.

Fig 5. Paths of best and worst participants during testing.

The blue line traces the path of the best-performing participant. No wrong turn was taken and nearly no deviations occurred on straight paths. The red line traces the path of the worst-performing participant. Many wrong turns and unnecessary deviations from straight paths can be observed.

https://doi.org/10.1371/journal.pone.0184682.g005

Simple Tasks Phase

In order to obtain a better estimate of participants’ performances for the eight simple tasks, we consider performance aggregated across all trial types (see Table 1).

In terms of both performance and time, participants tended to have more difficulties (i.e., they were less accurate and slower) with the allocentric and memory tasks than with the egocentric and perceptual tasks. However, these differences must be interpreted with caution because there are exceptions. For example, performance on the rotate task was lower than on the rotate with map task, although participants completed the rotate task fastest overall. In addition, the rotate, move, and chase tasks were very similar in terms of completion time but exhibited very different spatial error patterns (see Fig 6). For this reason, we will focus the remaining analyses on performance error. Rather than comparing these tasks directly, we will assess them with respect to their abilities to predict navigation performance.

Fig 6. Three exemplary trial results from the simple task phase.

The blue circle indicates the target sphere. The green circle/line indicates the position of the participant. The orange line indicates the viewing direction via the location of the catchment area. If a participant or sphere moves, the S indicates the starting location and the E indicates the end location. (a) Rotation from Memory (RFM): The participant rotates back and forth between two spheres. (b) Move with Map (MWM): The participant rotates in the beginning, moves towards the target, and then rotates again. In the end, the participant is slightly inaccurate with respect to the target’s location. (c) Chase (CHA): The participant rotates in the beginning to find the sphere and then intercepts it.

https://doi.org/10.1371/journal.pone.0184682.g006

Relationship between navigation and simple tasks

We performed REFA to attempt to reduce the dimensionality of the data from the eight simple tasks. Permutation tests [64] suggested that the first three factors of the REFA were significant. A simple structure [71, p. 140ff] for a factor analysis reduces dependency between the factors by rotating all of them by the same amount. A rotated factor is considered simply structured if some dimensions are zero (or close to zero in a more relaxed form), and better rotations produce a higher number of zero elements across all factors [72, p. 115ff]. With a geomin rotation [68], we obtained three sets of factors that satisfy the simple structure assumption. All three sets equally represent the underlying factor solution [72], but we focus on Set 1 for two reasons. First, following the goal of Thurstone's simple structure assumption [71], we can provide a theoretical interpretation for the underlying factors in Set 1. Second, Set 1 was the only set of factors that produced a significant result under robust regression. See Table 2 for the communalities of all three sets, and Table 3 for the results of the robust regressions.

We considered loading strengths above a conservative threshold of .6 [73, p. 101]. Following Jung and Lee [64], the factor analysis resulted in a relatively wide range of communalities (from .339 to .746; see Table 2). Total communality (ht = 4.567) indicated that our factors explain 57.1% of the overall variation in participants' performance. Three of the tasks (rotate with map, move with map, and rotate from memory) resulted in communalities above the high threshold of .6, and four of the other tasks (rotate, move, move from memory, and chase with map) resulted in communalities above the low threshold of .4. The chase task was the only task with a communality below .4 (.339), suggesting a low correlation with each of the other tasks.

The REFA results for Set 1 exhibit two notable patterns that are also reflected in the correlation matrix of performances on the eight simple tasks (Fig 7). First, the rotate from memory and chase with map tasks both have high loadings for the second factor for each REFA set. The correlation between rotate from memory and chase with map performances is also significant (r(18) = .549, p = .012). Second, the chase task is the only task that did not correlate with any other tasks (all ps > .16) and is also the only task with a high loading for factor three of the first REFA set. Move from memory is the only task with a high loading for factor three of the third REFA set, although move from memory was significantly correlated with move (r(18) = .466, p = .039) and move with map (r(18) = .517, p = .017).

Fig 7. Correlations and REFA factors.

This visualisation exhibits the first set of REFA factor loadings at the top and the correlation matrix at the bottom. Each column in the top part corresponds to the same column in the matrix. The loading strengths of each factor are colour-coded according to whether they are positive (blue) or negative (orange). The conservative loading threshold of .6 [73] is shown as a dashed line, and any loading above that threshold is considered meaningful. In the correlation matrix, any significant correlation (p < .05, abs(ρ20) = .42) is coloured. The visualisation is based on [74] (see S2 Code).

https://doi.org/10.1371/journal.pone.0184682.g007

We then performed separate regressions predicting the mean distance error ratios from the Navigation Phase using predefined categories of tasks (e.g., egocentric, perceived) and each of the REFA factors. White's test for heteroscedasticity [75] revealed that the residuals from the regression on the third REFA factor of the first set were heteroscedastic (r2 = .37, White = 7.47, p = .02). Individual White's tests on the residuals for all other predefined and REFA factors (for all three sets) were not significant (all ps > .2). For consistency, we used robust regressions to test for relationships between each predefined and REFA factor and mean distance error ratios. Robust regressions for the effects of seven of the eight predefined factors on the mean distance error ratios were not significant (see Table 3). However, a robust regression for the effect of mean performance on the dynamic tasks was significant (β = 0.80, p = .019, uncorrected for multiple comparisons). Additional robust regressions for the effects of each REFA factor of each set were not significant (see Table 3), except for the third factor of the first set (β = .85, p = .001). This relationship survives a Šidák correction [76] for alpha inflation (α = .016). Consistent with the significant effect of dynamic tasks on mean distance error ratios, this REFA factor represents only the chase task.
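Robust regression downweights observations with large residuals so that a few extreme participants cannot dominate the fit. The paper does not specify which robust estimator was used; as a generic illustration, here is a minimal pure-Python sketch of a Huber-weighted iteratively reweighted least squares (IRLS) fit for a single predictor:

```python
def huber_irls(x, y, c=1.345, iters=50):
    """Simple linear fit y ~ a + b*x with Huber weights via IRLS."""
    n = len(x)
    w = [1.0] * n
    a = b = 0.0
    for _ in range(iters):
        # Weighted least squares with the current weights
        sw = sum(w)
        mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
        my = sum(wi * yi for wi, yi in zip(w, y)) / sw
        sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
        sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
        b = sxy / sxx
        a = my - b * mx
        # Residual scale from the median absolute deviation (MAD)
        res = [yi - (a + b * xi) for xi, yi in zip(x, y)]
        abs_res = sorted(abs(r) for r in res)
        mad = (abs_res[n // 2] / 0.6745) or 1e-12
        # Huber weights: 1 inside c*scale, decreasing outside
        w = [1.0 if abs(r) <= c * mad else c * mad / abs(r) for r in res]
    return a, b
```

On data that follow y = 2x + 1 with one large outlier, the fit stays close to the true slope because the outlier's weight shrinks with each iteration, whereas ordinary least squares would be pulled strongly towards it.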

Discussion

In this study, we investigated the relationships between eight spatial tasks and navigation performance in virtual reality (VR). These eight tasks were designed in accordance with four orthogonal dimensions based on previous research (static/dynamic, [37]; perceived/remembered, [14]; egocentric/allocentric, [30]; distance/direction, [56]). This approach was adopted in order to provide evidence for or against particular two-systems theories and to determine whether theories of navigation can be reduced to one predictor or require additional factors. Together with this confirmatory analysis, we also attempted to reduce the dimensionality of the model by conducting a regularised exploratory factor analysis (REFA). Both the confirmatory and exploratory factors were then used to predict the participants’ navigation performance with robust regressions. The confirmatory analysis determined that only the dynamic factor (composed of chase and chase with map tasks) significantly predicted navigation (uncorrected for multiple comparisons). In addition, the exploratory analysis revealed that the chase task by itself was the only significant predictor after a Šidák correction. These results suggest that navigation in VR may be best explained by a dynamic, egocentric task that requires the perception of distances and directions.

Unlike previous studies [77], we explicitly devised an error score that accounts for both accumulated and final errors for all eight simple tasks (see Eq (8)). This error score includes accumulated error as a means of revealing the process of solving the task. For example, participants sometimes accumulated large errors in the static direction tasks by rotating in place more than was necessary before responding. Such behaviour would not have been detected by considering only the final error. Because we weighted the various error score components (see Eq (8)), no advantage was given to the dynamic tasks. At the same time, cumulative error was necessary for scoring the dynamic tasks given that they required the continuous integration of distance and direction information.

Previous research has largely neglected dynamic spatial tasks and has focused instead on tasks in static environments in which only the user moves [1, 32–34]. This work has been critical for investigations of spatial memory but may overemphasise the role of representation (compared to the role of direct perception) during navigation [12]. Our results suggest that, even in a familiar environment, a dynamic chase task that relied primarily on locomotion was a better predictor of navigation performance than typical measures of spatial memory (e.g., distance and direction estimation). Because participants could not have predicted the direction of the target sphere's movement, the chase task did not rely on spatial memory. Rather than implying that spatial memory is not important for navigation (as in [12]), these results suggest that participants most likely developed representations that were relatively basic but consistent with each other. At the same time, participants' performances on the chase task required the coordination of visual input with the manipulation of the HID and may have been more variable. Similar to homing in aviation, the interception of a randomly moving sphere required the observer to orient so that the target was at the centre of the expanding optic flow [36]. Instead of spatial memory, the chase task relied on a combination of perceived distances and directions, which is typical of locomotion in real environments.

The extent to which a chase task may predict navigation through a real environment has yet to be investigated independent of spatial memory. Conceptually, such a chase task could resemble the avoidance of other people in crowded environments during locomotion (e.g., [78, 79]). For example, Moussaid and colleagues [79] have developed a cognitively inspired model of pedestrian dynamics in order to explain crowd phenomena such as spontaneous lane formation. These experiments constitute an important aspect of research in spatial cognition but have not been studied in the context of large-scale navigation. Future studies could relate the avoidance of crowds to navigation behaviours (e.g., route choice) in a large public space (e.g., a shopping mall).

In real environments, locomotion is nearly automatic because walking is typically learned at an early age and continuously reinforced. However, the interaction between the user and a virtual environment is mediated by a human interface device (HID). Indeed, this additional layer of abstraction must be learned before users can efficiently interact with the virtual environment [5, 80]. For example, McKinley, McIntire, and Funke [80] found that expert video game players can control a virtual unmanned aerial system to a similar level as trained pilots and better than people with little to no gaming experience. This pattern of performances suggests that prior experience with an HID (for both pilots and gamers) can facilitate interaction with a virtual environment.

In the context of navigation, individual differences in users' abilities to manoeuvre with an HID may confound differences in spatial learning. In other words, inferences regarding the development of spatial representation with navigation experience in VR may sometimes be attributable to participants' abilities to interact with an HID. The relationship between HID interaction and navigation performance may be especially relevant when the virtual environment is over-learned. In the present study, participants were highly familiar with the virtual environment before the beginning of the testing block of the Navigation Phase. This is indicated by the monotonic decrease in trigger presses across training blocks. Indeed, some participants were able to complete the third training block without calling the guiding arrow.

Future studies should ensure that participants are well-trained with the HID and that their abilities to use the HID are properly assessed. Training may reduce the HID's impact on navigation performance in VR, while assessment can allow researchers to draw inferences regarding spatial learning. Here, our chase task and cumulative error may be especially useful. This approach may also be used for ambulatory VR setups (e.g., treadmills, [81]; large tracking spaces, [82]). These setups have the advantage of more realistic control over the observer's movement by providing proprioceptive feedback [83]. For example, Kearns and colleagues [83] found that optic flow can be sufficient for solving a triangle completion task with a joystick, but proprioceptive feedback during walking reduced variability in participants' responses. Despite this advantage, most ambulatory VR setups are limited in space or still require the user to adapt their gait (e.g., walking in place, [84]; redirected walking, [85]). As such, training and assessment with an HID may be necessary for any experiment involving navigation in VR.

Supporting information

S1 File. Instructions for participants.

Text handed out to the participants before the experiment.

https://doi.org/10.1371/journal.pone.0184682.s001

(ODT)

S1 Video. Summary of tasks in all phases.

In four minutes, we show extracts of all the phases that participants completed, along with exemplary tasks within each phase.

https://doi.org/10.1371/journal.pone.0184682.s002

(MP4)

S1 Data. Database export.

Export of the participant data, ready for loading into Matlab.

https://doi.org/10.1371/journal.pone.0184682.s003

(MAT)

S2 Data. CSV data set.

Transformed data ready for analysis in statistical software such as R or SPSS.

https://doi.org/10.1371/journal.pone.0184682.s004

(CSV)

S1 Code. Matlab code.

Code used for data processing in Matlab.

https://doi.org/10.1371/journal.pone.0184682.s005

(ZIP)

S2 Code. R script for correlation/loading visualisation.

Short script to visualise the factor loadings and correlation matrix based on the design used by [74] and adapted for our purpose. Detailed instructions on how to create such a visualisation can be found at http://rpubs.com/danmirman/plotting_factor_analysis.

https://doi.org/10.1371/journal.pone.0184682.s006

(R)

Acknowledgments

We thank Dario Meloni for running most of the participants involved in the study. The scripts for the REFA analysis were kindly provided by Sunho Jung and are available on request [63, 64, 66]. We also thank William G. Jacoby for discussing the analysis and presentation of the factors with us.

References

  1. Montello DR, Waller D, Hegarty M, Richardson AE. Spatial memory of real environments, virtual environments, and maps. Human spatial memory: Remembering where. 2004; p. 251–285.
  2. Loomis JM, Blascovich JJ, Beall AC. Immersive virtual environment technology as a basic research tool in psychology. Behavior Research Methods, Instruments, & Computers. 1999;31(4):557–564.
  3. Lapointe JF, Savard P, Vinson NG. A comparative study of four input devices for desktop virtual walkthroughs. Computers in Human Behavior. 2011;27(6):2186–2191.
  4. Riecke BE, Bodenheimer B, McNamara TP, Williams B, Peng P, Feuereissen D. Do we need to walk for effective virtual reality navigation? Physical rotations alone may suffice. In: International Conference on Spatial Cognition. Springer; 2010. p. 234–247.
  5. Ruddle RA, Volkova E, Bülthoff HH. Learning to walk in virtual reality. ACM Transactions on Applied Perception (TAP). 2013;10(2):11.
  6. Thrash T, Kapadia M, Moussaid M, Wilhelm C, Helbing D, Sumner RW, et al. Evaluation of control interfaces for desktop virtual environments. Presence. 2015;24(4):322–334.
  7. Chen JL, Stanney KM. A theoretical model of wayfinding in virtual environments: Proposed strategies for navigational aiding. Presence: Teleoperators and Virtual Environments. 1999;8(6):671–685.
  8. Ruddle RA, Lessels S. Three levels of metric for evaluating wayfinding. Presence: Teleoperators and Virtual Environments. 2006;15(6):637–654.
  9. Piaget J, Inhelder B. The Child's Conception of Space. New York: WW Norton; 1967.
  10. Montello DR. Navigation. Shah P, Miyake A, editors. Cambridge University Press; 2005.
  11. Gibson JJ. The ecological approach to visual perception. Routledge; 1979.
  12. Heft H. The ecological approach to navigation: A Gibsonian perspective. The construction of cognitive maps. 1996; p. 105–132.
  13. Heft H. Way-finding as the perception of information over time. Population and Environment. 1983;6(3):133–150.
  14. Creem SH, Proffitt DR. Two memories for geographical slant: Separation and interdependence of action and awareness. Psychonomic Bulletin & Review. 1998;5(1):22–36.
  15. Creem SH, Proffitt DR. Defining the cortical visual systems: "what", "where", and "how". Acta Psychologica. 2001;107(1):43–68. pmid:11388142
  16. Creem SH, Proffitt DR. Grasping objects by their handles: a necessary interaction between cognition and action. Journal of Experimental Psychology: Human Perception and Performance. 2001;27(1):218. pmid:11248935
  17. Wraga M, Creem SH, Proffitt DR. Perception-action dissociations of a walkable Müller-Lyer configuration. Psychological Science. 2000;11(3):239–243. pmid:11273410
  18. Huttenlocher J, Hedges LV, Duncan S. Categories and particulars: Prototype effects in estimating spatial location. Psychological Review. 1991;98(3):352. pmid:1891523
  19. Huttenlocher J, Hedges LV, Vevea JL. Why do categories affect stimulus judgment? Journal of Experimental Psychology: General. 2000;129(2):220.
  20. Newcombe NS, Huttenlocher J. Learning, development, and conceptual change. Making space: The development of spatial representation and reasoning; 2000.
  21. Chabris CF, Kosslyn SM. How do the cerebral hemispheres contribute to encoding spatial relations? Current Directions in Psychological Science. 1998;7(1):8–14.
  22. Kosslyn SM. Seeing and imagining in the cerebral hemispheres: A computational approach. Psychological Review. 1987;94(2):148. pmid:3575583
  23. Kosslyn SM, Chabris CF, Marsolek CJ, Koenig O. Categorical versus coordinate spatial relations: computational analyses and computer simulations. Journal of Experimental Psychology: Human Perception and Performance. 1992;18(2):562. pmid:1593235
  24. Kosslyn SM, Pylyshyn Z. Image and brain: The resolution of the imagery debate. Nature. 1994;372(6503):289.
  25. Byrne RW. Geographical knowledge and orientation. Normality and pathology in cognitive functions. 1982; p. 239–264.
  26. O'Keefe J, Nadel L. The hippocampus as a cognitive map. Oxford: Clarendon Press; 1978.
  27. Shemyakin FN. Orientation in space. Psychological Science in the USSR. 1962;1:186–255.
  28. Siegel AW, White SH. The development of spatial representations of large-scale environments. Advances in Child Development and Behavior. 1975;10:9–55. pmid:1101663
  29. Avraamides MN, Kelly JW. Multiple systems of spatial memory and action. Cognitive Processing. 2008;9(2):93–106. pmid:17899235
  30. Burgess N. Spatial memory: how egocentric and allocentric combine. Trends in Cognitive Sciences. 2006;10(12):551–557. pmid:17071127
  31. Allen GL, Haun DBM. Proximity and precision in spatial memory. Human spatial memory: Remembering where. 2004; p. 41–63.
  32. Gillner S, Mallot HA. Navigation and acquisition of spatial knowledge in a virtual maze. Journal of Cognitive Neuroscience. 1998;10(4):445–463. pmid:9712675
  33. Weisberg SM, Schinazi VR, Newcombe NS, Shipley TF, Epstein RA. Variations in cognitive maps: Understanding individual differences in navigation. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2014;40(3):669. pmid:24364725
  34. Kraemer DJM, Schinazi VR, Cawkwell PB, Tekriwal A, Epstein RA, Thompson-Schill SL. Verbalizing, visualizing, and navigating: The effect of strategies on encoding a large-scale virtual environment. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2017;43(4):611–621. pmid:27668486
  35. Clark A. Being there: Putting brain, body, and world together again. MIT Press; 1998.
  36. Warren WH. The dynamics of perception and action. Psychological Review. 2006;113(2):358. pmid:16637765
  37. Hegarty M, Waller D. Individual differences in spatial abilities. The Cambridge handbook of visuospatial thinking. 2005; p. 121–169.
  38. Gibson JJ. Motion picture testing and research (Aviation Psychology Research Reports, No. 7). Washington, DC: US Government Printing Office; 1947.
  39. Fajen BR. Guiding locomotion in complex, dynamic environments. Frontiers in Behavioral Neuroscience. 2013;7:85. pmid:23885238
  40. Loomis JM, Knapp JM. Visual perception of egocentric distance in real and virtual environments. Virtual and adaptive environments. 2003;11:21–46.
  41. Schinazi VR, Nardi D, Newcombe NS, Shipley TF, Epstein RA. Hippocampal size predicts rapid learning of a cognitive map in humans. Hippocampus. 2013;23(6):515–528. pmid:23505031
  42. Waller D, Hodgson E. Transient and enduring spatial representations under disorientation and self-rotation. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2006;32(4):867. pmid:16822154
  43. Diwadkar VA, McNamara TP. Viewpoint dependence in scene recognition. Psychological Science. 1997;8(4):302–307.
  44. Friedman A, Waller D. View combination in scene recognition. Memory & Cognition. 2008;36(3):467–478.
  45. Wang RF, Spelke ES. Updating egocentric representations in human navigation. Cognition. 2000;77(3):215–250. pmid:11018510
  46. Mou W, McNamara TP. Intrinsic frames of reference in spatial memory. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2002;28(1):162. pmid:11827078
  47. Shelton AL, McNamara TP. Systems of spatial reference in human memory. Cognitive Psychology. 2001;43(4):274–310. pmid:11741344
  48. Wolbers T, Hegarty M. What determines our navigational abilities? Trends in Cognitive Sciences. 2010;14(3):138–146. pmid:20138795
  49. Filimon F. Are all spatial reference frames egocentric? Reinterpreting evidence for allocentric, object-centered, or world-centered reference frames. Frontiers in Human Neuroscience. 2015;9. pmid:26696861
  50. Wang RF. Theories of spatial representations and reference frames: What can configuration errors tell us? Psychonomic Bulletin & Review. 2012;19(4):575–587.
  51. Denis M, Pazzaglia F, Cornoldi C, Bertolo L. Spatial discourse and navigation: An analysis of route directions in the city of Venice. Applied Cognitive Psychology. 1999;13(2):145–174.
  52. Thorndyke PW, Hayes-Roth B. Differences in spatial knowledge acquired from maps and navigation. Cognitive Psychology. 1982;14(4):560–589. pmid:7140211
  53. Farrell MJ, Thomson JA. On-line updating of spatial information during locomotion without vision. Journal of Motor Behavior. 1999;31(1):39–53. pmid:11177618
  54. Mou W, McNamara TP, Rump B, Xiao C. Roles of egocentric and allocentric spatial representations in locomotion and reorientation. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2006;32(6):1274. pmid:17087583
  55. Mou W, McNamara TP, Valiquette CM, Rump B. Allocentric and egocentric updating of spatial memories. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2004;30(1):142. pmid:14736303
  56. Easton RD, Sholl MJ. Object-array structure, frames of reference, and retrieval of spatial knowledge. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1995;21(2):483. pmid:7738511
  57. May M. Imaginal perspective switches in remembered environments: Transformation versus interference accounts. Cognitive Psychology. 2004;48(2):163–206. pmid:14732410
  58. Rieser JJ. Access to knowledge of spatial structure at novel points of observation. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1989;15(6):1157. pmid:2530309
  59. Ruddle RA, Lessels S. For efficient navigational search, humans require full physical movement, but not a rich visual scene. Psychological Science. 2006;17(6):460–465. pmid:16771793
  60. Grübel J. Assessing Human Interface Device Interaction in Virtual Environments [Bachelor Thesis]. ETH Zürich; 2014. Available from: http://e-collection.library.ethz.ch/view/eth:48223.
  61. Hegarty M, Richardson AE, Montello DR, Lovelace K, Subbiah I. Development of a self-report measure of environmental spatial ability. Intelligence. 2002;30(5):425–447.
  62. Grübel J, Thrash T, Hölscher C, Schinazi VR. Protocol for "Evaluation of a Conceptual Framework for Predicting Navigation Performance in Virtual Reality"; 2017. Available from: dx.doi.org/10.17504/protocols.io.jbdcii6.
  63. Jung S, Takane Y. Regularized common factor analysis. New trends in psychometrics. 2008; p. 141–149.
  64. Jung S, Lee S. Exploratory factor analysis for small samples. Behavior Research Methods. 2011;43(3):701–709. pmid:21431996
  65. Esri. ArcGIS; 2017. Available from: http://www.esri.com/arcgis/about-arcgis.
  66. Jung S. Exploratory factor analysis with small sample sizes: A comparison of three approaches. Behavioural Processes. 2013;97:90–95. pmid:23541772
  67. O'Connor BP. SPSS and SAS programs for determining the number of components using parallel analysis and Velicer's MAP test. Behavior Research Methods, Instruments, & Computers. 2000;32(3):396–402.
  68. Yates A. Multivariate exploratory data analysis: A perspective on exploratory factor analysis. SUNY Press; 1988.
  69. Guttman L. General theory and methods for matric factoring. Psychometrika. 1944;9(1):1–16.
  70. Greenhouse SW, Geisser S. On methods in the analysis of profile data. Psychometrika. 1959;24(2):95–112.
  71. Thurstone LL. Multiple-factor analysis. University of Chicago Press; 1947.
  72. Browne MW. An overview of analytic rotation in exploratory factor analysis. Multivariate Behavioral Research. 2001;36(1):111–150.
  73. Matsunaga M. How to factor-analyze your data right: do's, don'ts, and how-to's. International Journal of Psychological Research. 2015;3(1):97–110.
  74. Mirman D, Zhang Y, Wang Z, Coslett HB, Schwartz MF. The ins and outs of meaning: Behavioral and neuroanatomical dissociation of semantically-driven word retrieval and multimodal semantic recognition in aphasia. Neuropsychologia. 2015;76:208–219. pmid:25681739
  75. White H. A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica: Journal of the Econometric Society. 1980; p. 817–838.
  76. Šidák Z. Rectangular confidence regions for the means of multivariate normal distributions. Journal of the American Statistical Association. 1967;62(318):626–633.
  77. Kitchin R, Blades M. The Cognition of Geographic Space. London: Taurus; 2002.
  78. Helbing D, Buzna L, Johansson A, Werner T. Self-organized pedestrian crowd dynamics: Experiments, simulations, and design solutions. Transportation Science. 2005;39(1):1–24.
  79. Moussaïd M, Helbing D, Theraulaz G. How simple rules determine pedestrian behavior and crowd disasters. Proceedings of the National Academy of Sciences. 2011;108(17):6884–6888.
  80. McKinley RA, McIntire LK, Funke MA. Operator selection for unmanned aerial systems: comparing video game players and pilots. Aviation, Space, and Environmental Medicine. 2011;82(6):635–642. pmid:21702315
  81. Hollerbach JM. Locomotion interfaces. Handbook of virtual environments: Design, implementation, and applications. 2002; p. 239–254.
  82. Waller D, Bachmann E, Hodgson E, Beall AC. The HIVE: A huge immersive virtual environment for research in spatial cognition. Behavior Research Methods. 2007;39(4):835–843. pmid:18183898
  83. Kearns MJ, Warren WH, Duchon AP, Tarr MJ. Path integration from optic flow and body senses in a homing task. Perception. 2002;31(3):349–374. pmid:11954696
  84. Slater M, Usoh M, Steed A. Taking steps: the influence of a walking technique on presence in virtual reality. ACM Transactions on Computer-Human Interaction (TOCHI). 1995;2(3):201–219.
  85. Razzaque S, Kohn Z, Whitton MC. Redirected walking. In: Proceedings of EUROGRAPHICS. vol. 9. Manchester, UK; 2001. p. 105–106.