
Table 1.

Examples of feature requirements for VR tasks.

The left column names example tasks, the middle column lists the features that need to be implemented to run each task, and the right column names the software components that can fulfil the respective requirement. Trap-lining refers to a type of route optimisation task in which navigators have to find and collect resources distributed around an environment [34, 35]. Eye tracking refers to the inclusion of 3D gaze recording in other virtual spatial tasks; see also the section "Example of interface driven design" for a discussion of implementing gaze-based ray-casting within the VNT. VNT modules are designed to be highly modular and re-usable, as shown by the fact that the same modules can fulfil requirements across different experiments. Note, for example, that while data logging for our showcase triangle completion study is provided by the UXF, we also provide our own data logging functionality in the Util module of the VNT and use it for data handling in an ongoing eye tracking study.


Fig 1.

Implementation showcase.

Subplot A shows the general layout of the implemented triangle completion task: participants are guided along two legs of a triangle (beige arrows) and are then asked first to point at and then to return to their starting location (a campfire). On reaching a distance of 12.5 meters from the homing start (red dashed circle), participants are asked to point at their starting location a second time. The campfire was not visible during the pointing or return phase of each trial. In the first part of the showcase study, no trees were present and the translation velocity of the player avatar was manipulated; in the second part, 1, 2, or 3 trees were presented as shown in subplot A to aid navigation, while the player speed was held constant at 2.5 m/s. These task variants were carried out repeatedly across several trials, either in the shown spatial arrangement or in a design that mirrored the spatial layout along the world y-axis, yielding trials with either left- or right-handed turns, presented in randomised order. Subplots B-F show the progression of an individual trial from the participant's perspective: subplot B shows the instructions given at the start of the trial; subplot C shows the participant reaching one of the waypoints during the outbound phase (waypoints are pieces of firewood, which the participant needs to collect); subplot D shows the instructions given at the start of the return or homing phase; and subplots E and F show the instructions given for pointing and the resulting marker placement by the participant, respectively.
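The trial geometry above can be sketched in a few lines. The following Python sketch is purely illustrative: the coordinates and leg lengths are made up, and "position error" is assumed here to be the Euclidean distance between the location a participant indicates (or reaches) and the true start location, which is the standard metric for triangle completion tasks.

```python
import math

# Illustrative sketch of the trial geometry and the homing error metric.
# All coordinates (in meters, top-down x/y plane) are hypothetical.

def position_error(indicated, start):
    """Euclidean distance between the indicated location and the true start."""
    return math.dist(indicated, start)

start = (0.0, 0.0)        # campfire: start of the outbound path, homing target
waypoint1 = (10.0, 0.0)   # end of the first outbound leg
waypoint2 = (10.0, 8.0)   # end of the second outbound leg (homing start)
response = (1.5, -2.0)    # where the participant ends the return leg

error_m = position_error(response, start)  # final position error of the trial
```

The same function applies to the pointing responses: the marker position placed by the participant is compared to the campfire location in exactly the same way.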


Fig 2.

Integration of the VNT with existing tools.

The modular nature of the VNT makes it easy to integrate its components with other existing tools. Here we showcase how the implemented triangle completion task makes use of the powerful session management and data logging features of the Unity Experiment Framework (UXF) while the VNT provides the main features required for the task in question. Specifically, the UXF provides the basic framework for triggering a chain of trials within experimental sessions, but does not define what happens within a single trial. That is accomplished by a combination of VNT modules, coordinated by the VNT trial state machine ("trial management"), which in turn passes the collected data of the current trial back to the UXF for saving.
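The division of labour described here can be sketched in language-agnostic form. The Python sketch below is not the actual UXF or VNT API (those are Unity/C# components); every class and method name is hypothetical, and it only illustrates the delegation pattern: the session framework triggers trials and saves data, while a separate trial runner defines what happens within each trial.

```python
# Hypothetical sketch of the UXF/VNT division of labour; not the real APIs.

class SessionFramework:
    """UXF-like role: owns the session, iterates trials, saves returned data."""
    def __init__(self, trial_runner, n_trials):
        self.trial_runner = trial_runner
        self.n_trials = n_trials
        self.saved = []

    def run_session(self):
        for trial_index in range(self.n_trials):
            # The framework triggers the trial but does not define
            # what happens inside it ...
            data = self.trial_runner.run_trial(trial_index)
            # ... and receives the collected trial data back for saving.
            self.saved.append(data)
        return self.saved

class TrialRunner:
    """VNT-like role: defines what happens within a single trial."""
    def run_trial(self, trial_index):
        # Placeholder for the trial state machine and data collection.
        return {"trial": trial_index, "home_error_m": 0.0}

session = SessionFramework(TrialRunner(), n_trials=3)
results = session.run_session()
```

Because the trial runner is an exchangeable dependency, the same session framework can drive entirely different tasks, which is the integration point the figure depicts.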


Fig 3.

Illustration of trial flow.

The flow chart visualises how the program progresses through a trial of the triangle completion task in our showcase study. Trial flow is managed by the VNT trial state machine module, which in turn calls functions from the other VNT modules as well as from the UXF framework. This visualisation reveals the complexity of interconnected components that need to be managed when designing VR experiments, even for a relatively simple design like the triangle completion task. A state machine, such as the one provided with the VNT and used for the showcase study, helps make this complexity manageable by packaging the different phases of the experimental trial (Displacement, Homing, etc.) and clearly defining the logic that governs the transitions between the states (i.e. phases) of the experiment. Rounded boxes summarise the tasks managed within each state. Light green tasks are handled by a VNT module or the UXF, while grey tasks are currently handled in a project-specific manner but could be included in the VNT as part of the ongoing toolbox development.


Fig 4.

Structure of the “Pointing and Highlighting” module.

The module features several components, listed below.

Pointing: The pointing system is centred around the IPointingController interface, for which we provide a default implementation called RaycastBasedPointingController. The SurfaceBasedPointingManager allows the user to define a list of objects which can be pointed at (using the raycasting system, see below) and handles the actual pointing process, including placement and movement of a customisable pointing-marker object in the 3D scene.

Highlighting: The highlighting system features flexible components for collecting and managing possible targets, as well as doing the actual highlighting of targeted objects in the scene at runtime. The ITargetingVisualiser interface defines the functionality needed to mark a GameObject as targeted or untargeted. We provide an implementation of this interface called HighlightTargetVisualiser, which can change the rendering material of objects to show that they are being targeted. The HighlightTargetVisualiser is then used to fill the requirement for an ITargetingVisualiser in the SimpleObjectHighlighter.

Raycasting: The raycasting system provides the functionality needed to cast rays and identify targeted objects. It is built around the IRayCaster, ITargetCollector and IRayTargetFinder interfaces. The IRayCaster interface simply allows the casting of rays from a specified source. We provide the MouseRayCaster as a default implementation of this interface, which is also used in the example scene. Note that just by exchanging this component, a system using a completely different set of raycast inputs (such as an eye-tracking system) can be created, and the different raycast input modes could even be exchanged on the fly in a given project. To check whether a given ray has hit a target, we can use an implementation of the IRayTargetFinder interface. Here, we provide two implementations, which can be exchanged based on task requirements. The first implementation (DotProductBasedRayTargetFinder) determines targets based on the relative directions of the ray and the straight line between the ray source and (potential) target objects. The second implementation (ColliderBasedRayTargetFinder) uses Unity's Collider system to directly detect whether a given object was hit. Finally, to manage which objects should be targetable, we can use an implementation of the ITargetCollector interface. We provide a simple default implementation (SimpleTargetCollector), which can be supplied with lists of GameObjects to track as targetable.
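The geometric idea behind a dot-product-based target finder can be illustrated independently of Unity. In the Python sketch below, a target counts as hit when the angle between the ray direction and the direction from the ray source to the target falls below a threshold; the function name mirrors the VNT component only loosely, and the threshold value is an assumption for illustration.

```python
import math

# Illustrative sketch of the dot-product targeting idea; names and the
# angle threshold are hypothetical, not the VNT's actual values.

def normalise(v):
    """Scale a 3D vector (tuple) to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def is_targeted(ray_origin, ray_dir, target_pos, max_angle_deg=5.0):
    """True if the ray points at the target to within max_angle_deg."""
    to_target = tuple(t - o for t, o in zip(target_pos, ray_origin))
    # Dot product of unit vectors = cosine of the angle between them.
    cos_angle = sum(a * b for a, b in zip(normalise(ray_dir), normalise(to_target)))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= max_angle_deg
```

Unlike a collider-based check, this approach needs no physics geometry on the targets, at the cost of treating each target as a point with a fixed angular tolerance.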


Fig 5.

Example screenshot of Unity scene hierarchy and inspector.

Shown is the scene hierarchy of the main scene of the showcase study within the Unity editor, together with the inspector window showing the components of the VNT pointing module in that scene. For the screenshot, the mouse was hovering over the RayCaster field of the RaycastBasedPointingController component to show the tooltip indicating that this field makes use of the VNT feature for drag-assigning interface implementations (red dotted box and arrow). This feature exemplifies the VNT's commitment to ease of adaptation, as it enables easy swapping of components, such as replacing the MouseRayCaster shown here with a RayCaster based on gaze data (see the example in section 'Toolbox Availability').
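The pattern behind this drag-assign feature, programming the consuming component against an interface so that implementations can be exchanged without touching it, can be sketched outside Unity as well. In the Python sketch below the class names only loosely mirror the VNT components, and the returned ray values are dummies for illustration.

```python
# Hypothetical sketch of swapping raycast input sources behind a common
# interface; the real VNT components are Unity/C# classes.

class MouseRayCaster:
    def cast_ray(self):
        return {"source": "mouse", "origin": (0, 0, 0), "direction": (0, 0, 1)}

class GazeRayCaster:
    def cast_ray(self):
        return {"source": "gaze", "origin": (0, 1.7, 0), "direction": (0, 0, 1)}

class PointingController:
    """Consumes any object providing cast_ray(); the input mode can be
    exchanged without changing this class."""
    def __init__(self, ray_caster):
        self.ray_caster = ray_caster

    def point(self):
        return self.ray_caster.cast_ray()

controller = PointingController(MouseRayCaster())
mouse_ray = controller.point()
controller.ray_caster = GazeRayCaster()  # swap the input mode on the fly
gaze_ray = controller.point()
```

In the Unity editor, the drag-assign tooltip shown in the figure performs the same substitution declaratively: dropping a different IRayCaster implementation into the field replaces the constructor argument in this sketch.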


Fig 6.

No effects of avatar velocity on homing performance.

Shown are the results of an analysis of the effect of different translation velocities on performance in a triangle completion task in a virtual steppe (see 'Showcase Part A: homing at different speeds' in the main text). Position errors for integrated pointing trials at the start of the return leg (1st pointing) and after a distance of 12.5 meters from the start of the return leg (2nd pointing) are shown together with the final position error of a given trial (trajectory end). Boxes show the median response for each experimental condition (from n = 17-24 repetitions) for each of N = 8 participants. Notches show 95% confidence intervals around the median, whiskers extend to 1.5× the interquartile range, and diamonds mark outliers.


Fig 7.

Reduction of homing errors with more landmarks.

Shown are the results of an analysis of the effect of different numbers of identical landmarks (trees) on performance in a virtual triangle completion task (see 'Showcase Part B: homing with landmarks' in the main text). The leftmost, separated part of the figure shows comparison data for the same translation velocity (2.5 m/s) collected in the previous experiment (Showcase A), in which no landmarks were present but the walked path was the same. The rest of the figure shows data from Showcase B. For each experimental condition (1-3 landmarks), position errors for integrated pointing trials at the start of the return leg (1st pointing) and after a distance of 12.5 meters from the start of the return leg (2nd pointing) are shown together with the final position error of a given trial (trajectory end). Boxes show the median response for each experimental condition (from n = 18-24 repetitions) for each of N = 10 participants (a different group from Showcase A). Notches show 95% confidence intervals around the median, whiskers extend to 1.5× the interquartile range, and diamonds mark outliers.
