Using eyes, hands and brain for 3D interaction with virtual environments:

a perception-based approach

Anatole Lécuyer

(Bunraku research team)

Habilitation defense

Inria Rennes - Bretagne Atlantique

June 18th 2010


The slides (PDF)


Far beyond science-fiction clichés, Virtual Reality (VR) technologies can be used in a wide range of applications including, for instance, industrial virtual prototyping, medical training, and architectural project review. However, even after decades of research, the efficiency of current VR interfaces is still far below what is possible with a classical computer mouse and a desktop computer. Our research activity has been entirely driven by the need to change this situation and to improve 3D user interfaces and virtual reality systems. Our objective was to improve 3D interaction with virtual environments by making full use of the available interfaces, e.g., visual, haptic, and brain-computer interfaces. We intended to improve each component of this framework individually, but also the subsequent combinations of these components. We have adopted a perception-based approach, which consists in drawing on knowledge of human perception to improve both the design and the evaluation of our technologies.

First, we have studied the use of novel Brain-Computer Interfaces (BCI), which give access to a novel user input and an unparalleled way of interacting with 3D content "by thought". Our main contributions to the integration of brain-computer interfaces in virtual environments have consisted in: (1) designing novel signal-processing techniques to access a wider range of mental states more efficiently, and (2) designing 3D interaction techniques based on high-level orders that make up for the small number of available mental commands. We have also introduced the use of performance models that can predict the performance of BCI-based interaction techniques.
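One standard way to make such performance predictions concrete is Wolpaw's information transfer rate (ITR), which combines the number of mental commands, classification accuracy, and trial duration into a bit rate. The sketch below is illustrative only: the function names are ours, and the performance models developed in this work may differ from this classic metric.

```python
import math

def itr_bits_per_selection(n_classes: int, accuracy: float) -> float:
    """Wolpaw information transfer rate for one selection, given
    n_classes possible mental commands and classification accuracy."""
    if accuracy <= 1.0 / n_classes:
        return 0.0  # at or below chance level, no information is transferred
    bits = math.log2(n_classes) + accuracy * math.log2(accuracy)
    if accuracy < 1.0:
        bits += (1.0 - accuracy) * math.log2((1.0 - accuracy) / (n_classes - 1))
    return bits

def itr_bits_per_minute(n_classes: int, accuracy: float,
                        trial_duration_s: float) -> float:
    """Scale the per-selection ITR by the number of selections per minute."""
    return itr_bits_per_selection(n_classes, accuracy) * 60.0 / trial_duration_s
```

For example, a two-class BCI classified perfectly with one selection every six seconds transfers 10 bits per minute, while a classifier at chance level transfers nothing regardless of speed.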

Second, we have studied the combination of two output devices that are widely used in virtual reality systems: visual and haptic interfaces. Our intention was to find an optimal combination of these two different types of interface by improving the current software and hardware architectures. We have also introduced a novel interaction paradigm called Haptic Hybrid Control for solving problems related to spatial discrepancies between visual and haptic workspaces.
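One well-known way to resolve a mismatch between a small haptic workspace and a larger visual workspace is to blend position control inside the workspace with rate control near its boundary: the scene stays still while the device is inside an inner region, and drifts when the device pushes past it. The sketch below illustrates that general idea only; it is not necessarily the Haptic Hybrid Control paradigm mentioned here, and all names and mappings are illustrative.

```python
import math

def hybrid_control_velocity(device_pos: tuple[float, float, float],
                            inner_radius: float,
                            rate_gain: float) -> tuple[float, float, float]:
    """Inside the inner region: pure position control, so no drift.
    Outside: add a rate-control drift along the device direction,
    proportional to how far the device has left the inner region,
    which extends the reachable visual workspace."""
    dist = math.sqrt(sum(c * c for c in device_pos))
    if dist <= inner_radius:
        return (0.0, 0.0, 0.0)
    scale = rate_gain * (dist - inner_radius) / dist
    return tuple(c * scale for c in device_pos)
```

The design trade-off is the size of the inner region: a large one preserves precise position control, while a small one makes long-range travel faster at the cost of stability near the boundary.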

Third, we have studied multi-sensory rendering of virtual environments, and the improvement of visual and haptic feedback. We studied how to adapt visual feedback to the user's gaze, with novel techniques for tracking and predicting the user's gaze in VR, and novel visual effects such as depth-of-field blur and camera motions. Then, we studied how to improve haptic feedback by superimposing vibration patterns on classical force feedback and enhancing the perception of the spatial location of contacts in virtual environments. Last, we studied visuo-haptic rendering and a novel approach that we called "Pseudo-Haptic Feedback", which uses vision to distort haptic perception.
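Pseudo-haptic feedback is classically illustrated by modulating the control/display (C/D) ratio of a cursor: slowing the cursor over a region of the screen tends to be perceived as friction or resistance, with no force-feedback device involved. The following minimal sketch assumes that simple C/D-ratio manipulation; the function names and the "sticky band" are a hypothetical example of ours, not taken from the thesis.

```python
def pseudo_haptic_cursor_step(mouse_dx: float, mouse_dy: float,
                              cd_ratio: float) -> tuple[float, float]:
    """Scale raw mouse motion by a control/display ratio: a lower ratio
    slows the on-screen cursor, which users tend to interpret as
    increased friction or resistance (a visually induced haptic illusion)."""
    return (mouse_dx * cd_ratio, mouse_dy * cd_ratio)

def cd_ratio_for_position(x: float) -> float:
    """Hypothetical mapping: 1.0 is neutral; 0.4 inside a simulated
    high-friction band covering screen coordinates 10..20."""
    return 0.4 if 10.0 <= x <= 20.0 else 1.0
```

Crossing the band, the same physical mouse motion yields less cursor motion, so the patch feels "sticky" even though only the visual feedback has changed.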
Download the video (MP4 format - 156 MB)


© 2010 INRIA audiovisual department