An overview of the application of loudspeaker arrays for audio presentation is given. The foundations of Wave Field Synthesis and Higher-Order Ambisonics are introduced in an intuitive manner. The possibilities and applications of Sound Field Synthesis techniques are highlighted in comparison with conventional methods such as Stereophony.
The article is enhanced online with animated versions of the figures.
Feel free to download the Slides and Paper for our recent contribution
Schultz, F.; Spors, S. (2014): “On the Frequency Response Variation of Sound Field Synthesis using Linear Arrays”. In Fortschritte der Akustik: Tagungsband d. 40. DAGA, Oldenburg.
Please feel free to download the slides of our talk accompanying the paper
Schultz, F.; Rettberg, T.; Spors, S. (2014): “On Spatial-Aliasing-Free Sound Field Reproduction using Infinite Line Source Arrays”. In Proc. of 136th Aud. Eng. Soc. Conv., Berlin. #9078.
Concert sound reinforcement systems aim at the reproduction of homogeneous sound fields over extended audiences for the whole audio bandwidth. For the last two decades this has mostly been approached using so-called line source arrays, due to their superior ability to produce homogeneous sound fields. Design and setup criteria for line source arrays have been derived in the literature as Wavefront Sculpture Technology. This paper introduces a viewpoint on the problem that utilizes a signal processing model for sound field synthesis. It is shown that the optimal radiation of a line source array can be considered a special case of spatial-aliasing-free synthesis of a wave front that propagates perpendicular to the array. For high frequencies the so-called waveguide operates as a spatial lowpass filter and therefore attenuates energy that would otherwise lead to spatial aliasing artifacts.
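As a rough back-of-the-envelope illustration of the spatial aliasing mentioned above (ours, not taken from the paper), a common rule of thumb bounds the aliasing-free bandwidth of a discretized linear array by the secondary-source spacing; the spacing value below is assumed purely for illustration:

```python
# Back-of-the-envelope sketch (values assumed, not from the paper):
# a common rule of thumb limits aliasing-free synthesis of a linear
# array with secondary-source spacing dx to f_al = c / (2 * dx).
c = 343.0    # speed of sound in m/s
dx = 0.15    # assumed loudspeaker spacing in m
f_al = c / (2 * dx)
print(f"aliasing frequency: {f_al:.0f} Hz")  # prints "aliasing frequency: 1143 Hz"
```

Above this frequency, grating lobes appear in the radiated field unless, as in the paper's waveguide argument, the energy that would alias is attenuated beforehand.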
The BoomRoom: Mid-air Direct Interaction with Virtual Sound Sources by Jörg Müller, Matthias Geier, Christina Dicke and Sascha Spors was presented at the ACM CHI Conference on Human Factors in Computing Systems 2014.
The paper “Perceptual Properties of Data-based Wave Field Synthesis” was presented by Sascha Spors and Hagen Wierstorf at the 40th Jahrestagung für Akustik (DAGA) in Oldenburg, Germany.
We investigated the capture of sound fields by spherical microphone arrays and their reproduction by Wave Field Synthesis. The limited number of microphones and loudspeakers, as well as other practical factors, impairs the synthesized sound field. Special attention is paid to the perceptual implications these impairments may have. In order to reproduce our findings you can take a look at the
- presentation slides (German)
- scripts used to compute the synthesized sound fields and listening examples
(requires MATLAB/Octave and the Sound Field Synthesis Toolbox)
Paper: Comparing Approaches to the Spherical and Planar Single Layer Potentials for Interior Sound Field Synthesis
Feel free to download the presentation slides of our recent contribution
Schultz, F.; Spors, S. (2014): “Comparing Approaches to the Spherical and Planar Single Layer Potentials for Interior Sound Field Synthesis.”
In: Proc. of the EAA Joint Symposium on Auralization and Ambisonics 2014, Berlin. The paper is currently under review for an Acta Acustica united with Acustica special issue on Auralization and Ambisonics.
The paper gives a compact recollection of analytic and explicit solutions for sound field synthesis using the single layer potential. It is shown that for planar and spherical secondary source distributions the same basic principles hold when solving the Fredholm integral equation within the spectral representation. The solutions are well known in the literature as Nearfield Compensated Higher Order Ambisonics (NFC-HOA) for spherical geometry and the Spectral Division Method (SDM) for planar geometry. Taking the Helmholtz integral equation as the starting point for solving the sound field synthesis problem, the equivalent sound-soft scattering approach leads to the same results. In the special case of a planar secondary source distribution, the Neumann Green's function incidentally leads to a single layer potential representation of the Helmholtz integral, which, as the Rayleigh integral, is an implicit and exact solution and was used for the modern formulation of Wave Field Synthesis.
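For reference, the single layer potential at the heart of the paper can be stated in standard notation (our summary, not copied from the paper):

```latex
P(\mathbf{x},\omega) = \oint_{\partial\Omega}
  D(\mathbf{x}_0,\omega)\, G(\mathbf{x}|\mathbf{x}_0,\omega)\,
  \mathrm{d}A(\mathbf{x}_0)
```

Here \(D(\mathbf{x}_0,\omega)\) is the driving function of the secondary sources on the boundary \(\partial\Omega\) and \(G\) is the free-field Green's function. Prescribing the desired field \(P\) inside \(\Omega\) turns this into a Fredholm integral equation of the first kind for \(D\), which NFC-HOA and the SDM solve in the spectral representation for spherical and planar geometries, respectively.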
After a few years of work we are finally releasing version 1.0.0 of the Sound Field Synthesis Toolbox, which should be of great help for all simulations of sound field synthesis in Matlab/Octave.
You can download the latest release, and you should have a look at the tutorial on GitHub on how to use it.
- added references for all driving functions
- streamlined nested conf settings; e.g., it is no longer necessary to set conf.ir.hcompfile if conf.usehcomp == false
- added WFS driving functions from Völk et al. and Verheijen et al.
- removed secondary_source_number() and xy_grid, because they are no longer needed
- enabled pre-equalization filter of WFS as default in SFS_config_example()
- fixed sound_field_mono_sdm_kx()
- Green's function for line sources now returns real values
- corrected y-direction of plane waves for 3D NFC-HOA
- updated the test functions in the validation folder
- several small fixes
Paper: Measurement of time-variant binaural room impulse responses for data-based synthesis of dynamic auditory scenes
The paper “Measurement of time-variant binaural room impulse responses for data-based synthesis of dynamic auditory scenes” was presented by Nara Hahn and Sascha Spors at the 40th Deutsche Jahrestagung für Akustik (DAGA) in Oldenburg.
The paper discusses a method and results for the measurement of time-varying binaural room impulse responses (BRIRs) of dynamic scenes. BRIRs are measured using so-called perfect sequences in combination with dynamic system identification methods.
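A minimal numerical sketch of the underlying idea (our illustration, not the authors' code): a perfect sequence has a unit-impulse periodic autocorrelation, so a single circular cross-correlation per period recovers the current impulse response, which is what makes tracking a time-variant system feasible. The sequence below is built with a flat DFT magnitude and random phase:

```python
# Illustrative sketch (not the authors' code): system identification
# with a "perfect sequence", i.e. a periodic excitation whose periodic
# autocorrelation is a unit impulse. Flat DFT magnitude guarantees this.
import numpy as np

rng = np.random.default_rng(0)
N = 256                                  # period length
phase = rng.uniform(0, 2 * np.pi, N // 2 - 1)
spectrum = np.ones(N, dtype=complex)     # flat magnitude everywhere
spectrum[1:N // 2] = np.exp(1j * phase)
spectrum[N // 2 + 1:] = np.conj(spectrum[1:N // 2][::-1])  # Hermitian symmetry
s = np.fft.ifft(spectrum).real           # real-valued perfect sequence

h = np.zeros(N)                          # toy "room" impulse response
h[[3, 17, 40]] = [1.0, 0.5, 0.25]

# Steady-state output of the periodic excitation = circular convolution.
y = np.real(np.fft.ifft(np.fft.fft(s) * np.fft.fft(h)))

# Identification: circular cross-correlation of output with excitation.
# Since |S[k]| = 1, conj(S) * S * H = H, so this recovers h exactly;
# repeating it once per period tracks a slowly time-variant response.
h_est = np.real(np.fft.ifft(np.conj(np.fft.fft(s)) * np.fft.fft(y)))
print("max identification error:", np.max(np.abs(h_est - h)))
```

In the measurement of time-variant BRIRs, this per-period identification is what allows the impulse response to be re-estimated continuously while the scene changes.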
Understanding auditory perception and the cognitive processes involved in our interaction with the world is of high relevance for a vast variety of ICT systems and applications. Human beings do not react according to what they perceive, but rather on the grounds of what the percepts mean to them in their current action-specific, emotional and cognitive situation. Thus, while many models that mimic the signal processing involved in human visual and auditory processing have been proposed, these models cannot predict the experience and reactions of human users. The model we aim to develop in the TWO!EARS project will incorporate both signal-driven (bottom-up) and hypothesis-driven (top-down) processing. The anticipated result is a computational framework for modelling active exploratory listening that assigns meaning to auditory scenes.
TWO!EARS is a project funded by the Seventh Framework Programme (FP7) of the European Commission, as part of the Future Emerging Technologies Open Programme “Challenging current thinking” (call FP7-ICT-2013-C).