Paper: Perceptual Properties of Data-based Wave Field Synthesis

The paper “Perceptual Properties of Data-based Wave Field Synthesis” was presented by Sascha Spors and Hagen Wierstorf at the 40th German Annual Conference on Acoustics (DAGA) in Oldenburg, Germany.

DAGA2014_data_based_WFS

We investigated the capture of sound fields by spherical microphone arrays and their reproduction by Wave Field Synthesis. The limited number of microphones and loudspeakers, as well as other practical factors, impair the synthesized sound field. Special attention is paid to the perceptual implications these impairments could have. To reproduce our findings, have a look at the accompanying material.


Posted in Publications, Reproducible Research

Paper: Comparing Approaches to the Spherical and Planar Single Layer Potentials for Interior Sound Field Synthesis

Feel free to download the presentation slides of our recent contribution

Schultz, F.; Spors, S. (2014): “Comparing Approaches to the Spherical and Planar Single Layer Potentials for Interior Sound Field Synthesis.” In: Proc. of the EAA Joint Symposium on Auralization and Ambisonics 2014, Berlin.

The paper gives a compact recollection of analytic and explicit solutions for sound field synthesis using the single layer potential. It is shown that for planar and spherical secondary source distributions the same basic principles hold for solving the Fredholm integral equation in the spectral representation. The solutions are well established in the literature as Nearfield Compensated Higher Order Ambisonics (NFC-HOA) for spherical geometry and the Spectral Division Method (SDM) for planar geometry. Taking the Helmholtz integral equation as the starting point for solving the sound field synthesis problem, the equivalent sound-soft scattering approach leads to the same results. In the special case of a planar secondary source distribution, the Neumann Green’s function incidentally leads to a single layer potential representation of the Helmholtz integral, which, as the Rayleigh integral, is an implicit and exact solution and was used for the modern formulation of Wave Field Synthesis.
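For reference, the single layer potential underlying these solutions can be stated in standard notation (our transcription, not copied from the paper):

```latex
P(\mathbf{x},\omega) = \oint_{\partial\Omega}
  D(\mathbf{x}_0,\omega)\, G(\mathbf{x}-\mathbf{x}_0,\omega)\,
  \mathrm{d}A(\mathbf{x}_0),
\qquad
G(\mathbf{x},\omega) =
  \frac{\mathrm{e}^{-\mathrm{j}\frac{\omega}{c}\lVert\mathbf{x}\rVert}}
       {4\pi\lVert\mathbf{x}\rVert}
```

Here D denotes the driving function on the secondary source distribution ∂Ω and G the free-field Green’s function; solving this Fredholm integral equation of the first kind for D, given a desired sound field P, is the sound field synthesis problem discussed above.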

Posted in Publications

Sound Field Synthesis Toolbox 1.0.0 released

After a few years of work we are finally ready with the 1.0.0 release of the Sound Field Synthesis Toolbox, which should be of great help for all simulations of sound field synthesis in Matlab/Octave.
You can download the latest release; have a look at the tutorial on GitHub to learn how to use it.

sfs-1.0.0

NEWS:
- added references for all driving functions
- streamlined nested conf settings; e.g., it is no longer necessary to set conf.ir.hcompfile if conf.usehcomp == false
- added WFS driving functions from Völk et al. and Verheijen et al.
- removed secondary_source_number() and xy_grid, because they are no longer needed
- enabled the WFS pre-equalization filter by default in SFS_config_example()
- fixed sound_field_mono_sdm_kx()
- the Green's function for line sources now returns real values
- corrected the y-direction of plane waves for 3D NFC-HOA
- updated the test functions in the validation folder
- several small fixes
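The pre-equalization filter mentioned in the list compensates for the spectral tilt of WFS, whose magnitude response is commonly chosen proportional to sqrt(omega/c), i.e. rising by 3 dB per octave. A minimal frequency-domain sketch (illustrative only, not the Toolbox's filter design; sampling rate and FFT size are arbitrary choices):

```python
import numpy as np

fs = 44100           # sampling rate in Hz (arbitrary choice)
nfft = 1024          # FFT size (arbitrary choice)
c = 343.0            # speed of sound in m/s

f = np.fft.rfftfreq(nfft, d=1 / fs)
# sqrt(j * omega / c) pre-equalization: magnitude rises by 3 dB/octave
H = np.sqrt(1j * 2 * np.pi * f / c)

# doubling the frequency scales the magnitude by sqrt(2) (~3 dB)
ratio = np.abs(H[200]) / np.abs(H[100])
print(round(ratio, 3))  # 1.414
```

In practice such a filter is applied to the source signal once, before the per-loudspeaker delays and weights of the WFS driving functions.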

Posted in Announcement, Reproducible Research

Paper: Measurement of time-variant binaural room impulse responses for data-based synthesis of dynamic auditory scenes

The paper “Measurement of time-variant binaural room impulse responses for data-based synthesis of dynamic auditory scenes” was presented by Nara Hahn and Sascha Spors at the 40th German Annual Conference on Acoustics (DAGA) in Oldenburg.

The paper discusses a method and results for the measurement of time-varying binaural room impulse responses (BRIRs) of dynamic scenes. BRIRs are measured using so-called perfect sequences in combination with dynamic system identification methods.
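A perfect sequence has a periodic autocorrelation equal to a Kronecker delta, so one period of cross-correlation between excitation and microphone signal yields the current impulse response directly. A minimal static sketch in Python (synthetic impulse response and a generic sequence construction; not the authors' measurement code):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                      # period length of the excitation signal

# A "perfect sequence": unit magnitude at every frequency bin with
# random phase, so its periodic autocorrelation is a Kronecker delta.
phase = rng.uniform(0.0, 2.0 * np.pi, N // 2 + 1)
phase[0] = 0.0              # DC bin must be real
phase[-1] = 0.0             # Nyquist bin must be real
s = np.fft.irfft(np.exp(1j * phase), n=N)

# A short synthetic impulse response, zero-padded to one period
h = np.array([0.0, 1.0, 0.5, -0.25, 0.1])
h_pad = np.zeros(N)
h_pad[:len(h)] = h

# One period of the "measured" signal: circular convolution of s and h
y = np.fft.ifft(np.fft.fft(s) * np.fft.fft(h_pad)).real

# Identification via periodic cross-correlation with the excitation
h_est = np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(s))).real

print(np.max(np.abs(h_est - h_pad)))  # numerically zero
```

For a time-varying system, the same cross-correlation is evaluated on a sliding window of the measured signal, so that each period yields a fresh snapshot of the impulse response.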

The presentation slides as well as listening examples are available for download.


Scenario shown above: the direct path of the sound field emitted by the loudspeaker is disturbed by a moving person.

Posted in Publications, Reproducible Research

The Two!Ears Project

Understanding auditory perception and the cognitive processes involved in our interaction with the world is of high relevance for a vast variety of ICT systems and applications. Human beings do not react according to what they perceive, but rather on the grounds of what the percepts mean to them in their current action-specific, emotional and cognitive situation. Thus, while many models that mimic the signal processing involved in human visual and auditory processing have been proposed, these models cannot predict the experience and reactions of human users. The model we aim to develop in the TWO!EARS project will incorporate both signal-driven (bottom-up) and hypothesis-driven (top-down) processing. The anticipated result is a computational framework for modelling active exploratory listening that assigns meaning to auditory scenes.

TWO!EARS is a project funded by the Seventh Framework Programme (FP7) of the European Commission as part of the Future and Emerging Technologies (FET) Open Programme “Challenging current thinking” (call FP7-ICT-2013-C).

Posted in Projects

Major Release of the SoundScape Renderer

The SoundScape Renderer (SSR) is a tool for real-time spatial audio reproduction providing a variety of rendering algorithms, e.g. Wave Field Synthesis, Higher-Order Ambisonics and binaural techniques. The development team has worked hard over the past months and is proud to announce a major release of the SoundScape Renderer.

Release 0.4.1 includes (among other things):

  • multi-threading support and other performance improvements
  • Near-Field-Corrected Higher-Order-Ambisonics (NFC-HOA) renderer (experimental)
  • Virtual-Reality Peripheral Network (VRPN) tracker support
  • all renderers (except BRS and generic) are now available in MATLAB as MEX files
  • the signal processing core of the SSR is now separate and part of the Audio Processing Framework

You can download the SoundScape Renderer and find more information on its homepage. For feedback and bug reports please use our new mailing list or ssr@spatialaudio.net.


Posted in Announcement

Paper: The BoomRoom: Mid-air Direct Interaction with Virtual Sound Sources

The paper “The BoomRoom: Mid-air Direct Interaction with Virtual Sound Sources” by Jörg Müller, Matthias Geier, Christina Dicke and Sascha Spors will be presented at the ACM CHI Conference on Human Factors in Computing Systems 2014.


In this paper we present a system that allows users to “touch”, grab and manipulate sounds in mid-air. Furthermore, arbitrary objects can appear to emit sound. We use spatial sound reproduction for sound rendering and computer vision for tracking. With our approach, sounds can be heard anywhere in the room and always appear to originate from the same (possibly moving) position, regardless of the listener’s position. We demonstrate that direct “touch” interaction with sound is an interesting alternative to indirect interaction mediated through controllers or visual interfaces. As an application of the system, we built a spatial music mixing room.

Posted in Publications

Sound Field Synthesis Toolbox 1.0.0-beta2 released

A new bugfix release of the SFS Toolbox is out today.
You can download the latest release; have a look at the tutorial on GitHub to learn how to use it.

sfs-1.0.0-beta2

NEWS:
- rms() now works for arbitrary arrays
- sped up delayline() and HRTF extrapolation
- delayline() now works with more than one channel
- fixed a critical bug in wfs_preequalization()
- fixed missing conf values in several functions
- fixed the README
- changed the location of sfs-data for automatic download, because GitHub does not allow this
- several minor fixes
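The delayline() mentioned above delays loudspeaker driving signals by a possibly fractional number of samples. A minimal single-channel sketch using linear interpolation (a hypothetical stand-in in Python, not the Toolbox's Matlab implementation):

```python
import numpy as np

def delayline(sig, delay):
    """Delay sig by a possibly fractional number of samples using
    linear interpolation, zero-padding at the start. Hypothetical
    sketch, not the SFS Toolbox implementation."""
    d_int = int(np.floor(delay))
    frac = delay - d_int
    out = np.zeros(len(sig))
    for n in range(len(sig)):
        i = n - d_int                         # integer part of the delay
        a = sig[i] if 0 <= i < len(sig) else 0.0
        b = sig[i - 1] if 0 <= i - 1 < len(sig) else 0.0
        out[n] = (1.0 - frac) * a + frac * b  # blend neighbouring samples
    return out

x = np.zeros(8)
x[0] = 1.0                                    # unit impulse
y = delayline(x, 2.5)
print(y)  # impulse energy split between samples 2 and 3
```

Linear interpolation is the simplest fractional delay; higher-quality designs use allpass or windowed-sinc interpolators at the cost of more computation.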

Posted in Announcement, Reproducible Research

Article: The Synthesis of Sound Figures

The article “The Synthesis of Sound Figures”, published by Karim Helwani, Sascha Spors and Herbert Buchner in the journal Multidimensional Systems and Signal Processing, discusses the synthesis of sound fields within a predefined region of space. The problem is formulated by separating the sound field into regions with high acoustic level, so-called bright regions, and regions with low acoustic level (zones of quiet), by time-independent virtual boundaries. An analytic solution to the problem is developed, and its application using established sound field synthesis techniques is shown, including an analysis of practical limitations.


Posted in Publications

Paper: Is Sound Field Control Determined at all Frequencies? How is it Related to Numerical Acoustics?

The paper Is Sound Field Control Determined at all Frequencies? How is it Related to Numerical Acoustics? published by Franz Zotter and Sascha Spors at the 52nd International Conference of the Audio Engineering Society reviews the physical foundations of Sound Field Synthesis. The presentation given by Sascha Spors can be viewed here.

The paper shows how synthesis can be achieved using the equivalent scattering approach, and the inherent problem of non-uniqueness is discussed. It is furthermore highlighted that the problem of synthesizing a sound field is directly related to the Boundary Element Method (BEM) used in numerical acoustics. Finally, it is derived that Wave Field Synthesis (WFS) can be interpreted as a high-frequency approximation of the exact solution.


Posted in Publications