VR2 is designed to be a flexible research facility that can be used to:
- study fundamental perceptual, cognitive and motor mechanisms in health and disease/disorder using virtual reality (VR) and motion capture;
- develop and test potential novel therapies for clinical and sub-clinical groups;
- investigate applied research questions that would be difficult or dangerous to study in the real world, in a well-controlled and safe laboratory setting.
Head-mounted display (HMD) technology provides the most immersive virtual reality experience and is likely to be the most common way of using the lab. The motion capture system tracks the movement of the HMD through space (at 360Hz temporal resolution) and wirelessly sends this six-degrees-of-freedom (6-DOF) position and orientation information to the computer attached to the HMD.
This computer has enhanced graphical capabilities, meaning it can update the complex 3D display in the HMD to generate what the observer should see as they move through the simulated environment.
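The rendering step can be illustrated with a short sketch. This is not the lab's actual pipeline; the function names and the (w, x, y, z) quaternion convention are assumptions made for the example. Each frame, the renderer inverts the tracked 6-DOF head pose, so the world stays put while the observer moves:

```python
def quat_to_matrix(q):
    """3x3 rotation matrix from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def view_from_pose(position, orientation):
    """Build a view transform from a tracked 6-DOF HMD pose.

    Returns (rotation, translation) such that a world point p renders
    at rotation @ p + translation. The renderer applies the inverse of
    the head pose, which is what makes the simulated scene appear
    stable as the observer moves through it.
    """
    R = quat_to_matrix(orientation)
    # Inverse of a rigid transform: transpose the rotation...
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    # ...and apply it to the negated head position.
    t = [-sum(Rt[i][j] * position[j] for j in range(3)) for i in range(3)]
    return Rt, t
```

In practice this transform is recomputed from the latest 360Hz tracking sample on every rendered frame.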
3D Active Wall
Our screen (2.4m x 3.2m), together with the rear projection system, provides a large field-of-view projection surface that can be used for experiments requiring simple 2D or 3D visual stimuli.
By using the projection system together with the motion tracking cameras, the screen becomes a 3D Active Wall on which the display is updated in line with observer movement. This means that the observer can have a VR experience without needing to wear an HMD, which may be preferable for participant groups who are unable to wear an HMD and/or a backpack PC.
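Head-coupled rendering on a fixed screen relies on an off-axis (asymmetric) projection computed from the tracked head position. The sketch below shows the standard geometry for a screen lying in the z = 0 plane; it is a simplified illustration, not the software actually driving the wall, and the names are invented for the example:

```python
def offaxis_frustum(eye, screen_w, screen_h, near):
    """Asymmetric view frustum for a head-coupled display.

    Assumes the screen lies in the z = 0 plane, centred on the origin,
    with the eye at (ex, ey, ez), ez > 0, in front of it. Returns
    (left, right, bottom, top) at the near plane, as used by a
    glFrustum-style projection.
    """
    ex, ey, ez = eye
    scale = near / ez          # project screen edges onto the near plane
    left   = (-screen_w / 2 - ex) * scale
    right  = ( screen_w / 2 - ex) * scale
    bottom = (-screen_h / 2 - ey) * scale
    top    = ( screen_h / 2 - ey) * scale
    return left, right, bottom, top
```

With the head centred in front of the screen the frustum is symmetric; as the observer moves sideways it skews, which is what keeps the 3D scene geometrically correct from their viewpoint.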
The lab is equipped with an eight-camera infrared motion capture system from OptiTrack. Its primary role is to provide position tracking data for use with the HMDs (see above) or the 3D Active Wall, but it is also possible to use the lab for motion capture research in its own right.
Using Motive motion tracking software, we can simultaneously track thousands of points in 3D space at 360Hz and with sub-millimetre accuracy. This can be used to support projects involving biological motion, reaching, facial capture and gait analysis. We can also undertake real-time avatar animation.
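As a toy example of the kind of derived measure such data supports, the function below estimates a marker's mean speed from its frame-by-frame positions. The list-of-tuples format and the function name are invented for illustration; they are not Motive's export format:

```python
import math

FRAME_RATE_HZ = 360  # capture rate of the motion tracking system

def marker_speed(samples):
    """Mean speed (m/s) of one tracked marker.

    `samples` is a list of (x, y, z) positions in metres, one per
    frame at 360Hz. Sums the path length between consecutive frames
    and divides by the elapsed time.
    """
    if len(samples) < 2:
        return 0.0
    path_length = sum(math.dist(a, b) for a, b in zip(samples, samples[1:]))
    duration = (len(samples) - 1) / FRAME_RATE_HZ
    return path_length / duration
```

Similar per-marker quantities (step length, joint angles, and so on) underpin gait analysis and biological-motion work.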
There is also space and equipment available in VR2 that can be used for development. We have high-end gaming PCs, together with licences for major 3D/VR software, enabling development of photo-realistic immersive environments.
Examples of studies currently using VR2
Click on the headings below for more information about studies at the University which are currently using VR2.
The role of client control in VR exposure to spiders
This study is designed to assess how important it is for someone with spider phobia to control their own exposure to a spider when trying to face their fears. Should the therapist encourage a client or should the client go at their own pace? This is an important practical issue for delivering exposure therapies for anxiety, and it has proved hard to test reliably in ‘real world’ research. Virtual reality provides an ideal environment to answer this question. The VR2 facility is used to create a virtual room containing an animated spider. The spider is designed with some basic artificial intelligence so that the degree to which the participant can control their distance from the spider can be measured accurately and continuously, and even manipulated.
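A sketch of how such control might be implemented and logged is below. The names and the simple 2D approach rule are hypothetical, purely to illustrate that the spider-participant distance is both measurable on every step and manipulable via parameters such as the approach rate and minimum distance:

```python
import math

def spider_step(spider_pos, participant_pos, min_distance, approach_rate):
    """One simulation step for a virtual spider (illustrative only).

    The spider moves toward the participant by up to `approach_rate`
    metres per step, but never closer than `min_distance`. The
    returned distance can be logged each step as a continuous measure
    of exposure.
    """
    dx = participant_pos[0] - spider_pos[0]
    dy = participant_pos[1] - spider_pos[1]
    dist = math.hypot(dx, dy)
    if dist <= min_distance:
        return spider_pos, dist          # hold position at the boundary
    step = min(approach_rate, dist - min_distance)
    new_pos = (spider_pos[0] + step * dx / dist,
               spider_pos[1] + step * dy / dist)
    return new_pos, dist
```

Varying `approach_rate` (or handing it to the participant) is one way the degree of client control could be manipulated experimentally.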
Perceptual stability during movement
This study investigates our ability to perceive a stable environment during movement. When we move, stationary parts of the world around us also move on the retina. Of course, we do not want to perceive this as movement, and to this end we have neural mechanisms that enable us to discount the retinal motion, thereby perceiving a stable environment. VR gives us the opportunity to investigate these mechanisms. Specifically, we can use VR to break the rules that normally link our own movement to the resulting movement of stationary objects on the retina. Using this approach we can investigate the accuracy and precision of the mechanisms that support perceptual stability during movement.
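One common way to measure the precision of such mechanisms is an adaptive staircase on a "gain" parameter that adds extra scene motion during head movement. The sketch below is a generic 1-up/1-down staircase, not the study's actual procedure, and all names are invented for the example:

```python
def run_staircase(detects, start_gain=0.5, step=0.05, trials=40):
    """Simple 1-up/1-down staircase on scene-motion gain.

    `detects(gain)` should return True when the observer reports that
    the scene moved. Gain 0 means the scene is fully stable; larger
    gains add more scene motion during head movement. The staircase
    converges toward the gain detected on roughly half of trials.
    """
    gain = start_gain
    for _ in range(trials):
        if detects(gain):
            gain = max(0.0, gain - step)   # detected: make it subtler
        else:
            gain += step                   # missed: make it more obvious
    return gain
```

The gain at which detection becomes unreliable indexes how completely the observer's compensation mechanisms discount self-generated retinal motion.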
Visual search and colour vision
In interacting with the world around us, we use colour to search for, detect, distinguish, and identify surfaces and objects. Yet what we know about the role of colour vision in such tasks comes largely from laboratory measurements with highly simplified 2-dimensional stimuli that lack authenticity and preclude the kinds of interaction possible in the real world. On the other hand, laboratory experiments performed in a more natural setting, although ingenious, may be unrepresentative and do not easily generalize. Theoretical simulations can provide approximate upper limits on visual search performance in natural environments, but bear an uncertain relationship to human performance. What is required is a flexible, immersive, naturalistic, 3-dimensional environment in which observers can actively search for a wide range of target objects, as the composition of the surrounding scene and the dynamic properties of its illumination are systematically varied.
The aim of this feasibility study is to test whether we can develop a VR system to provide this environment and measure information-theoretic estimates of limits on human performance. In addition to improving our understanding of fundamental human competence, this work has potential applications in way-finding, scene analysis, and virtual medical diagnosis. This study will consequently provide a vital first step for future multi-disciplinary funding applications.
Clinical Reasoning in virtual reality
In order to develop competence in diagnosing and managing complexity, student doctors must develop expertise in Clinical Reasoning (CR) – the thinking and decision-making processes associated with clinical practice. The advent of virtual reality (VR) technology offers learners the opportunity to be immersed in quasi-realistic, high-fidelity experiences. In addition, VR allows for parametric control of variables in the scenario (such as potential stressors). However, the use of VR alone is unlikely to lead to successful outcomes without taking appropriate account of the underlying cognitive and social factors that drive CR.
Decades of research on the cognitive mechanisms of human judgement and decision making are potentially relevant to the domain of CR. Research on social theories of learning explores how novices develop expert competencies through legitimate peripheral participation, which the immersion of VR potentially provides. In this study we will use insights from both fields to explore the role of stressors on CR and the novice-to-expert trajectory, aiming to improve the efficacy of VR-based CR training.
The contribution of visual and non-visual compensation systems to fall risk
Whenever we move, stationary parts of the scene will move on the retina. We do not typically perceive this retinal motion, however, because humans have developed multiple (at least four) neural mechanisms that work to compensate for (suppress) such self-generated visual motion signals. When these systems are altered, however (e.g. in healthy ageing, autism, schizophrenia, etc.), there is potential for inappropriately perceiving the scene as moving. As well as being disconcerting, such errors in perceived scene stability could lead to an increased collision and/or fall risk in these groups.
VR offers the opportunity to address three important questions with genuinely moving observers:
- What are the relative contributions of different compensation systems to perceptual stability?
- How do these contributions change with different types of observer movement/different viewing conditions?
- Can we predict fall/collision risk as a function of individual differences in compensation performance – i.e. do those with worse compensation performance exhibit more virtual collisions?
In this feasibility study we will develop the necessary VR tasks and collect pilot data to address these questions.