Article: Sound Field Synthesis for Audio Presentation

The article “Sound Field Synthesis for Audio Presentation” by Jens Ahrens, Rudolf Rabenstein and Sascha Spors was published in the Spring 2014 issue of Acoustics Today.

AcousticsToday_fig4

The article gives an overview of the application of loudspeaker arrays for audio presentation. The foundations of Wave Field Synthesis and Higher-Order Ambisonics are introduced in an intuitive manner, and the possibilities and applications of Sound Field Synthesis techniques are highlighted in comparison with conventional methods such as Stereophony.

The article is enhanced online with animated versions of the figures.

Posted in Publications, Video

Paper: On the frequency response variation of sound field synthesis using linear arrays

Feel free to download the slides and the paper for our recent contribution:

Schultz, F.; Spors, S. (2014): “On the Frequency Response Variation of Sound Field Synthesis using Linear Arrays”. In Fortschritte der Akustik: Tagungsband d. 40. DAGA, Oldenburg.

Posted in Publications

Paper: On Spatial-Aliasing-Free Sound Field Reproduction using Infinite Line Source Arrays

Please feel free to download a white paper on the topic as well as the slides of the corresponding talk:

Schultz, F.; Rettberg, T.; Spors, S. (2014): “On Spatial-Aliasing-Free Sound Field Reproduction using Infinite Line Source Arrays”. In Proc. of 136th Aud. Eng. Soc. Conv., Berlin. #9078.

Abstract:
Concert sound reinforcement systems aim at the reproduction of homogeneous sound fields over extended audiences for the whole audio bandwidth. For the last two decades this has mostly been approached using so-called line source arrays, due to their superior ability to produce homogeneous sound fields. Design and setup criteria for line source arrays were derived in the literature as Wavefront Sculpture Technology. This paper introduces a viewpoint on the problem at hand by utilizing a signal processing model for sound field synthesis. It is shown that the optimal radiation of a line source array can be considered a special case of spatial-aliasing-free synthesis of a wave front that propagates perpendicular to the array. For high frequencies, the so-called waveguide operates as a spatial lowpass filter and therefore attenuates energy that would otherwise lead to spatial aliasing artifacts.
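The spatial-aliasing limit mentioned in the abstract depends on the spacing between adjacent sources. As a rough illustration (not taken from the paper), the common rule of thumb f = c / (2 Δx) for a linear array synthesizing a wave front propagating perpendicular to the array can be computed as:

```python
# Rule-of-thumb spatial-aliasing frequency estimate for a linear
# loudspeaker array; illustrative only, the paper derives the exact
# aliasing-free conditions.

def aliasing_frequency(spacing_m, c=343.0):
    """Upper frequency (Hz) below which a linear array with the given
    loudspeaker spacing (m) can synthesize a perpendicular wave front
    without spatial aliasing, using f = c / (2 * dx)."""
    return c / (2.0 * spacing_m)

print(aliasing_frequency(0.10))  # 10 cm spacing -> about 1715 Hz
```

Halving the spacing doubles this limit, which is why line source arrays for full-bandwidth reinforcement pack their drivers as densely as possible.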

Posted in Publications

Video: The BoomRoom: Mid-air Direct Interaction with Virtual Sound Sources

The BoomRoom: Mid-air Direct Interaction with Virtual Sound Sources by Jörg Müller, Matthias Geier, Christina Dicke and Sascha Spors is presented at the ACM CHI Conference on Human Factors in Computing Systems 2014.

Posted in Video

Paper: Perceptual Properties of Data-based Wave Field Synthesis

The paper “Perceptual Properties of Data-based Wave Field Synthesis” was presented by Sascha Spors and Hagen Wierstorf at the 40th Jahrestagung für Akustik (DAGA) in Oldenburg, Germany.

DAGA2014_data_based_WFS

We investigated the capture of sound fields by spherical microphone arrays and their reproduction by Wave Field Synthesis. The limited number of microphones and loudspeakers, as well as other practical factors, impair the synthesized sound field. Special attention is drawn to the perceptual implications these impairments could have. To reproduce our findings, you can take a look at the accompanying material.

 

Posted in Publications, Reproducible Research

Sound Field Synthesis Toolbox 1.0.0 released

After a few years of work, we are finally releasing version 1.0.0 of the Sound Field Synthesis Toolbox, which should be of great help for simulations of sound field synthesis in Matlab/Octave.
You can download the latest release, and you should have a look at the tutorial on GitHub to learn how to use it.
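Independently of the toolbox API, the basic principle behind such simulations (superposing the free-field point-source contributions of all secondary sources at a single frequency) can be sketched in a few lines of NumPy; the array geometry and numbers below are illustrative and are not toolbox defaults:

```python
import numpy as np

# Minimal sketch, not the toolbox API: monochromatic sound field of a
# linear secondary source array synthesizing a wave front that travels
# perpendicular to the array. Each loudspeaker is modeled as a point
# source (free-field Green's function); all sources are driven in phase.

c = 343.0            # speed of sound in m/s
f = 1000.0           # frequency in Hz
k = 2 * np.pi * f / c

# 64 secondary sources along the x-axis, 10 cm apart, centered at x = 0
x0 = (np.arange(64) - 31.5) * 0.10

# evaluation grid in the listening area (y > 0)
x = np.linspace(-2, 2, 81)
y = np.linspace(0.5, 2.5, 41)
X, Y = np.meshgrid(x, y)

# superpose the point-source contributions G(r) = exp(-j*k*r) / (4*pi*r)
P = np.zeros_like(X, dtype=complex)
for xs in x0:
    r = np.hypot(X - xs, Y)
    P += np.exp(-1j * k * r) / (4 * np.pi * r)

print(P.shape)  # (41, 81)
```

The toolbox adds the proper driving functions, 2.5D amplitude correction and pre-equalization on top of this superposition idea.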

sfs-1.0.0

NEWS:
- added references for all driving functions
- streamlined nested conf settings; e.g., it is no longer necessary to set conf.ir.hcompfile if conf.usehcomp == false
- added WFS driving functions from Völk et al. and Verheijen et al.
- removed secondary_source_number() and xy_grid, as they are no longer needed
- enabled the pre-equalization filter of WFS by default in SFS_config_example()
- fixed sound_field_mono_sdm_kx()
- the Green's function for line sources now returns real values
- corrected the y-direction of plane waves for 3D NFC-HOA
- updated the test functions in the validation folder
- several small fixes

Posted in Announcement, Reproducible Research

Paper: Measurement of time-variant binaural room impulse responses for data-based synthesis of dynamic auditory scenes

The paper “Measurement of time-variant binaural room impulse responses for data-based synthesis of dynamic auditory scenes” was presented by Nara Hahn and Sascha Spors at the 40th Deutsche Jahrestagung für Akustik (DAGA) in Oldenburg.

The paper discusses a method and results for the measurement of time-varying binaural room impulse responses (BRIRs) of dynamic scenes. BRIRs are measured using so-called perfect sequences in combination with dynamic system identification methods.
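As a rough sketch of the idea behind perfect sequences (this is not the authors' measurement code), a periodic excitation with unit-magnitude spectrum has an impulse-like periodic autocorrelation, so circular cross-correlation of the measured output with the excitation recovers the impulse response exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256

# Build a real perfect sequence: unit-magnitude spectrum with random,
# conjugate-symmetric phases -> impulse-like periodic autocorrelation.
phase = rng.uniform(0, 2 * np.pi, N // 2 - 1)
spec = np.ones(N, dtype=complex)
spec[1:N // 2] = np.exp(1j * phase)
spec[N // 2 + 1:] = np.conj(spec[1:N // 2][::-1])
x = np.fft.ifft(spec).real

# A toy "room" impulse response to identify (three sparse reflections)
h = np.zeros(N)
h[[0, 10, 40]] = [1.0, 0.5, 0.25]

# Periodic measurement: circular convolution of excitation and system
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

# Since |X[k]| = 1, circular cross-correlation of y with x recovers h
h_hat = np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(x))).real

print(np.max(np.abs(h_hat - h)))  # ~0, up to numerical precision
```

In the dynamic case discussed in the paper, the response is re-estimated over successive periods, which is what makes tracking a time-varying BRIR possible.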

The presentation slides as well as listening examples are available for download.

 

scenario_and_brirs

Scenario shown above: the direct path of the sound field emitted by the loudspeaker is disturbed by a moving person.

Posted in Publications, Reproducible Research

The Two!Ears Project

Understanding auditory perception and the cognitive processes involved in our interaction with the world is of high relevance for a vast variety of ICT systems and applications. Human beings do not react according to what they perceive, but rather on the grounds of what the percepts mean to them in their current action-specific, emotional and cognitive situation. Thus, while many models that mimic the signal processing involved in human visual and auditory processing have been proposed, these models cannot predict the experience and reactions of human users. The model we aim to develop in the TWO!EARS project will incorporate both signal-driven (bottom-up) and hypothesis-driven (top-down) processing. The anticipated result is a computational framework for modelling active exploratory listening that assigns meaning to auditory scenes.

TWO!EARS is a project funded by the Seventh Framework Programme (FP7) of the European Commission, as part of the Future Emerging Technologies Open Programme “Challenging current thinking” (call FP7-ICT-2013-C).

Posted in Projects

Major Release of the SoundScape Renderer

The SoundScape Renderer (SSR) is a tool for real-time spatial audio reproduction providing a variety of rendering algorithms, e.g. Wave Field Synthesis, Higher-Order Ambisonics and binaural techniques. The development team has worked hard in the past months and is proud to announce a major release of the SoundScape Renderer.

Release 0.4.1 includes (amongst other things):

  • multi-threading support and other performance improvements
  • Near-Field-Corrected Higher-Order-Ambisonics (NFC-HOA) renderer (experimental)
  • Virtual-Reality Peripheral Network (VRPN) tracker support
  • all renderers (except BRS and generic) are now available in MATLAB as MEX files
  • the signal processing core of the SSR is separate and part of the Audio Processing Framework

You can download the SoundScape Renderer and find more information on its homepage. For feedback and bug reports, please use our new mailing list or ssr@spatialaudio.net.

mozart_full_muted_wfs mozart_full_muted_binaural

Posted in Announcement

Paper: The BoomRoom: Mid-air Direct Interaction with Virtual Sound Sources

The paper “The BoomRoom: Mid-air Direct Interaction with Virtual Sound Sources” by Jörg Müller, Matthias Geier, Christina Dicke and Sascha Spors will be presented at the ACM CHI Conference on Human Factors in Computing Systems 2014.

boom_room

In this paper we present a system that allows users to “touch”, grab and manipulate sounds in mid-air. Furthermore, arbitrary objects can seem to emit sound. We use spatial sound reproduction for sound rendering and computer vision for tracking. With our approach, sounds can be heard from anywhere in the room and always appear to originate from the same (possibly moving) position, regardless of the listener’s position. We demonstrate that direct “touch” interaction with sound is an interesting alternative to indirect interaction mediated through controllers or visual interfaces. As an application of the system, we built a spatial music mixing room.

Posted in Publications