Self-motion and Presence in the Perceptual Optimization of a Multisensory Virtual Reality Environment

Abstract: Determining the perceptually optimal resolution of multisensory rendering might help to foster the development of cost-effective, highly immersive multi-modal displays for mediated environments (e.g. virtual and augmented reality). The required sensory depth of stimulation can be quantified using human-centered methodologies where end-user experiences serve as a basis for uni- and cross-modal optimization of the sensory inputs. In the psychophysical studies presented in this thesis, self-reported presence and illusory self-motion (vection) indicated the salience of auditory and multisensory cues in the design of perceptually optimized motion simulators.

The contribution of auditory cues to illusory self-motion had been largely neglected until very recently, and papers A and B present studies on purely auditory-induced vection (AIV). Paper A shows that rotating auditory scenes synthesized using individualized Head-Related Transfer Functions (HRTFs) are more instrumental for presence than generic binaural synthesis. The study on translational AIV in paper B shows that an inconsistent auditory scene can significantly decrease self-motion responses. Papers C and D demonstrate that bi-sensory stimulation increases presence and self-motion ratings, as expected. In paper C, additional vibrotactile stimulation increased translational AIV and presence ratings, especially for stimuli containing the auditory-tactile engine metaphor. Paper D extended the results of paper A to rotational AIV, showing that the spatial resolution of rotating auditory scenes can be greatly reduced when combined with visual input.

This thesis shows that sound plays an important role in the perception of illusory self-motion and should be used carefully in multi-modal motion simulators. The presented findings suggest that a minimal set of acoustic cues can be sufficient for eliciting a self-motion sensation, especially if other modalities are involved. However, the perceptual consistency of the created auditory and multimodal scenes should be assured in the design of the next generation of motion simulators.
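
The binaural synthesis referred to in papers A and D amounts, at its core, to filtering a source signal with a pair of direction-dependent head-related impulse responses (HRIRs); individualized HRTFs simply replace a generic set with measurements of the listener's own head and ears. The following minimal sketch is illustrative only and is not the rendering pipeline used in the thesis: it assumes NumPy/SciPy and an already-measured HRIR pair for the desired source direction, and the function name render_binaural is a hypothetical placeholder.

    import numpy as np
    from scipy.signal import fftconvolve

    def render_binaural(mono, hrir_left, hrir_right):
        # Convolve a mono signal with the left/right HRIRs of one source
        # direction to obtain a two-channel (binaural) signal.
        left = fftconvolve(mono, hrir_left)
        right = fftconvolve(mono, hrir_right)
        n = max(len(left), len(right))
        out = np.zeros((n, 2))
        out[:len(left), 0] = left
        out[:len(right), 1] = right
        return out

A rotating auditory scene of the kind studied in the thesis would, under this simplification, correspond to updating the HRIR pair over time as the source direction changes relative to the listener.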
