Perceptual Audio Rendering of Complex Virtual Environments
Abstract
We propose a real-time 3D audio rendering pipeline for complex virtual scenes containing hundreds of moving sound sources. The approach, based on auditory culling and spatial level-of-detail, can handle more than ten times the number of sources commonly available on consumer 3D audio hardware, with minimal decrease in audio quality. The method performs well for both indoor and outdoor environments. It leverages the limited capabilities of audio hardware for many applications, including interactive architectural acoustics simulations and automatic 3D voice management for video games.

Our approach dynamically eliminates inaudible sources and groups the remaining audible sources into a budget number of clusters. Each cluster is represented by one impostor sound source, positioned using perceptual criteria. Spatial audio processing is then performed only on the impostor sound sources rather than on every original source, thus greatly reducing the computational cost.

A pilot validation study shows that degradation in audio quality, as well as localization impairment, is limited and does not seem to vary significantly with the cluster budget. We conclude that our real-time perceptual audio rendering pipeline can generate spatialized audio for complex auditory environments without introducing disturbing changes in the resulting perceived soundfield.
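The culling-and-clustering stage described above can be sketched roughly as follows. This is only an illustrative outline: the fixed audibility threshold, the greedy nearest-seed clustering, and the loudness-weighted impostor placement are assumptions for the sake of the example, not the authors' actual perceptual criteria.

```python
import math

def cull(sources, threshold=0.01):
    """Discard sources whose loudness falls below an audibility threshold.

    A real implementation would use perceptual masking metrics; a fixed
    loudness cutoff is an illustrative simplification.
    """
    return [s for s in sources if s["loudness"] >= threshold]

def cluster_impostors(sources, budget):
    """Group audible sources into at most `budget` clusters and return one
    impostor source per cluster.

    Clustering here is a greedy nearest-seed assignment, and each impostor
    is placed at the loudness-weighted centroid of its cluster -- both are
    assumed stand-ins for the paper's perceptual positioning criteria.
    """
    seeds = sources[:budget]
    clusters = [[s] for s in seeds]
    for s in sources[budget:]:
        # Attach each remaining source to the cluster with the nearest seed.
        i = min(range(len(seeds)),
                key=lambda k: math.dist(s["pos"], seeds[k]["pos"]))
        clusters[i].append(s)
    impostors = []
    for members in clusters:
        total = sum(m["loudness"] for m in members)
        pos = tuple(
            sum(m["loudness"] * m["pos"][d] for m in members) / total
            for d in range(3)
        )
        # One impostor stands in for the whole cluster; spatialization
        # is then applied to impostors only, not to every original source.
        impostors.append({"pos": pos, "loudness": total})
    return impostors
```

With a budget of, say, 2 clusters, hundreds of input sources reduce to 2 impostors, so the per-frame spatialization cost depends on the budget rather than on the scene complexity.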
Main file
TsingosSiggraph04_1.pdf (164.12 KB)
Supplementary files
snap0.jpg (933.47 KB)
mov2.mp4 (91.46 MB)
snap1.jpg (751.55 KB)
snap5.jpg (732.65 KB)
Origin: Files produced by the author(s)
Format: Figure, Image, Video