At the 41st DAGA we will present the contribution
Schultz, F.; Spors, S. (2015): “On the Connections between Radiation Synthesis and Sound Field Synthesis using Linear Arrays”. In: Proc. of the 41. Jahrestagung für Akustik, Nürnberg.
Please feel free to download the presentation slides.
After a few years I have finally managed to finish my PhD thesis and published it under CC BY 3.0:
Hagen Wierstorf – Perceptual Assessment of Sound Field Synthesis
It derives driving functions for different sound field synthesis methods, namely Wave Field Synthesis and Near-Field Compensated Higher Order Ambisonics. Furthermore, a large number of psychoacoustic tests are presented which investigate perceptual aspects such as coloration, localization, and artifacts. Finally, an adapted binaural model predicts the localization results and can also be used to predict the localization accuracy of newly planned loudspeaker array setups.
Besides the PDF of the thesis, there is a repository on GitHub where you can find errata and all the Matlab/Octave scripts that were used to create the figures in the thesis, including sound field synthesis simulations, data from listening tests, and auditory modeling. Every figure comes with its own directory containing all the needed scripts and an explanation of how to reproduce the figure.
The TWO!EARS project aims at the development of an auditory model that will incorporate both signal-driven (bottom-up) and hypothesis-driven (top-down) processing. The anticipated result is a computational framework for modelling active exploratory listening that assigns meaning to auditory scenes. In conjunction with the project’s first anniversary, parts of its software framework and its database have been made publicly available:
- The Two!Ears Binaural Simulator enables the creation of binaural audio signals for different situations. This is done using head-related transfer functions (HRTFs) or binaural room impulse responses (BRIRs), which are provided in the Two!Ears data repository. The Two!Ears Binaural Simulator uses the signal processing core of the SoundScape Renderer.
- The purpose of the Two!Ears auditory front-end is to extract a subset of common auditory representations from a binaural recording or from a stream of binaural audio data. These representations are to be used later by higher modelling or decision stages. The auditory front-end is capable of working in a block-based manner.
- The Two!Ears data repository contains freely available data ranging from head-related impulse responses up to human quality ratings for different spatial audio systems.
The data is collected from different sources outside and inside the Two!Ears project.
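The core operation behind a binaural simulator of this kind is the convolution of a source signal with a pair of head-related impulse responses. The following is a minimal sketch of that idea, not the Two!Ears API; the function name and the dummy delay-based HRIRs are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural_synthesis(mono, hrir_left, hrir_right):
    """Render a mono signal to a binaural pair by HRIR convolution
    (illustrative sketch, not the Two!Ears implementation)."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)

# Toy example: white noise through a dummy HRIR pair that only
# models an interaural time and level difference.
rng = np.random.default_rng(1)
mono = rng.standard_normal(48000)           # 1 s at 48 kHz
hrir_l = np.zeros(256); hrir_l[10] = 1.0    # pure delay of 10 samples
hrir_r = np.zeros(256); hrir_r[30] = 0.5    # longer delay, attenuated
out = binaural_synthesis(mono, hrir_l, hrir_r)
print(out.shape)  # (48255, 2)
```

In a real simulator the HRIRs would of course be measured responses (e.g. from the Two!Ears data repository) selected per source direction, and block-based processing with crossfading would be used for moving sources.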
TWO!EARS is a project funded by the Seventh Framework Programme (FP7) of the European Commission, as part of the Future Emerging Technologies Open Programme “Challenging current thinking” (call FP7-ICT-2013-C).
Some time ago we published an approach to the synthesis of sound figures: a surrounding three-dimensional loudspeaker array can be used to synthesize a (more or less) arbitrarily shaped zone with higher sound level. Have a look at what we have simulated recently.
The sound field has been synthesized by a cubic array consisting of 21,600 point sources (60×60 sources with a spacing of 10 centimeters on each of the six sides). The shape of the figure has been defined by a PNG image. The desired sound field is a monochromatic plane wave at 2000 Hz with an incidence angle of 90 degrees in the horizontal plane and 45 degrees in elevation. Shown is the xy-plane for z=0.
We are currently porting the SFS-Toolbox from MATLAB to Python. The Python SFS-Toolbox, together with source code for the synthesis of sound figures, is available on GitHub. Have fun creating your own fancy sound figures. Happy Christmas!
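The basic ingredient of such a simulation is the superposition of free-field Green's functions, one per secondary point source, evaluated on a grid. The sketch below illustrates only this superposition step with uniform driving weights along a small hypothetical linear array; the actual sound-figure driving functions and the full cubic array are beyond its scope.

```python
import numpy as np

def greens_function(x, x0, k):
    """Free-field Green's function of a 3D point source at x0."""
    r = np.linalg.norm(x - x0, axis=-1)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

f = 2000.0                 # frequency in Hz
c = 343.0                  # speed of sound in m/s
k = 2 * np.pi * f / c      # wavenumber

# Hypothetical line of point sources along the x-axis, 10 cm spacing.
sources = np.stack([np.arange(-3, 3.01, 0.1),
                    np.zeros(61), np.zeros(61)], axis=-1)

# Evaluate the superposed field on a grid in the xy-plane (z = 0).
x = np.linspace(-2, 2, 100)
y = np.linspace(0.5, 2.5, 100)
X, Y = np.meshgrid(x, y)
grid = np.stack([X, Y, np.zeros_like(X)], axis=-1)

# Uniform driving weights; a real driving function would shape the field.
field = sum(greens_function(grid, x0, k) for x0 in sources)
print(field.shape)  # (100, 100)
```

The Python SFS-Toolbox provides exactly this kind of machinery (secondary source distributions, driving functions, field evaluation) in a ready-made form.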
At Forum Acusticum 2014 in Kraków, Poland, we presented the contribution
Winter, F.; Schultz, F.; Spors, S. (2014): “Localization Properties of Data-based Binaural Synthesis including Translatory Head-Movements.” In: FORUM ACUSTICUM, European Acoustics Association (EAA), 2014.
Feel free to download the paper on the topic and the presentation slides. The source code for the figures of the paper is provided on GitHub.
Binaural synthesis of plane wave decomposed spherical microphone data using head-related transfer functions (HRTFs) is a well-known approach for auralization. Rotational head movements can be considered by dynamic rotation of the HRTF dataset. Translatory head movements are handled by spatio-temporal shifts of the individual plane waves.
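The translatory shift exploits a basic property of plane waves: evaluating a plane wave exp(-jk n·x) at a position shifted by d multiplies it by the phase factor exp(-jk n·d). A minimal sketch of applying such a shift to a set of plane wave coefficients (function name and uniform coefficients are illustrative assumptions, not the paper's code):

```python
import numpy as np

def translate_pwd(coeffs, directions, shift, k):
    """Apply a translatory listener shift to plane wave coefficients.

    A plane wave exp(-1j*k*n.x) evaluated at the shifted position
    x + d picks up the phase factor exp(-1j*k*n.d)."""
    phase = np.exp(-1j * k * directions @ shift)
    return coeffs * phase

# Hypothetical example: 8 plane waves equally spaced in the horizontal plane.
phi = np.linspace(0, 2 * np.pi, 8, endpoint=False)
directions = np.stack([np.cos(phi), np.sin(phi), np.zeros(8)], axis=-1)
coeffs = np.ones(8, dtype=complex)   # uniform decomposition for illustration
k = 2 * np.pi * 1000 / 343           # wavenumber at 1 kHz
shifted = translate_pwd(coeffs, directions, np.array([0.1, 0.0, 0.0]), k)
print(np.abs(shifted))  # magnitudes unchanged, only phases shift
```

In the broadband time-domain case the phase factor corresponds to a per-plane-wave delay, which is where the "spatio-temporal" shift in the text comes from.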
This paper analyses this auralization method with respect to the localization of sound sources. The algorithm’s performance is evaluated with respect to the accuracy achieved by a binaural model utilized for localization. The influence and effects of spatial sampling (number of plane waves) and the translatory head-movement are investigated. In addition, the modal resolution is taken into account.
Sascha Spors gave a keynote at the 55th International Conference of the Audio Engineering Society (AES). The talk “The Adventure of Spatial Sound Reproduction” gives a personal view on recent achievements in spatial sound reproduction. The slides can be viewed on Speakerdeck.
At the 137th AES Convention in Los Angeles we will present the contribution
Schultz, F.; Rettberg, T.; Spors, S. (2014): ”On Spatial-Aliasing-Free Sound Field Reproduction using Finite Length Line Source Arrays.” In: Proc. of the 137th Audio Eng. Soc. Conv., Los Angeles, #9098.
Feel free to download a white paper on the topic and the presentation slides.
Concert sound reinforcement systems aim at the reproduction of homogeneous sound fields over extended audiences for the whole audio bandwidth. For the last two decades this has mostly been approached by using so-called line source arrays, for which Wavefront Sculpture Technology (WST) was introduced in the literature. This paper utilizes a signal processing model developed for sound field synthesis in order to analyze and expand WST criteria for straight arrays. Starting with the driving function for an infinite and continuous linear array, spatial truncation and discretization are subsequently taken into account. The role of the involved loudspeakers as a spatial lowpass filter is stressed, which can reduce undesired spatial aliasing contributions. The paper aims to give better insight into how to interpret the synthesized sound fields.
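As a rough illustration of the discretization step, a common rule of thumb from the sound field synthesis literature relates the loudspeaker spacing and the plane wave incidence angle to the frequency above which spatial aliasing appears. This is a simplified criterion only, not the paper's full WST analysis:

```python
import numpy as np

def aliasing_frequency(dx, theta_deg, c=343.0):
    """Rule-of-thumb frequency above which spatial aliasing occurs for a
    discretized linear array synthesizing a plane wave with incidence
    angle theta. Exact criteria depend on the array geometry and the
    listening area; this is a simplified illustration."""
    theta = np.radians(theta_deg)
    return c / (dx * (1.0 + abs(np.sin(theta))))

# Typical spacings: aliasing sets in earlier for larger spacing and
# steeper plane wave angles.
for dx in (0.1, 0.2):
    print(dx, aliasing_frequency(dx, 0.0), aliasing_frequency(dx, 45.0))
```

This makes the role of the loudspeakers' spatial lowpass behavior plausible: directive elements attenuate exactly the high spatial frequencies that would otherwise fold back as aliasing above this limit.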
Feel free to download the presentation slides of our recent contribution Schultz, F.; Spors, S. (2014): “Comparing Approaches to the Spherical and Planar Single Layer Potentials for Interior Sound Field Synthesis.”
In: Proc. of the EAA Joint Symposium on Auralization and Ambisonics 2014, Berlin. The article was published as Schultz, F.; Spors, S. (2014): “Comparing Approaches to the Spherical and Planar Single Layer Potentials for Interior Sound Field Synthesis.” In: Acta Acust united Ac, 100(5):900-911.
The authors created a tutorial on 3D analytic methods for sound field synthesis based on this manuscript, which can be accessed as a public BitBucket git repository under https://bitbucket.org/fs446/analytical_3d_sfs
The paper gives a compact recollection of analytic and explicit solutions for sound field synthesis using the single layer potential. It is shown that for planar and spherical secondary source distributions the same basic principles for solving the Fredholm integral equation within the spectral representation hold. The solutions are well known in the literature as Near-Field Compensated Higher Order Ambisonics (NFC-HOA) for spherical geometry and the Spectral Division Method (SDM) for planar geometry. Taking the Helmholtz integral equation as a starting point for solving the sound field synthesis problem, the equivalent sound-soft scattering approach leads to the same results. In the special case of a planar secondary source distribution, the Neumann Green’s function incidentally leads to a single layer potential representation of the Helmholtz integral, which, as the Rayleigh integral, is an implicit and exact solution and was used for the modern formulation of Wave Field Synthesis.
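For reference, the single layer potential underlying all of these solutions can be written in its standard form (symbols as commonly used in the sound field synthesis literature):

```latex
P(\mathbf{x}, \omega) =
  \oint_{\partial V} D(\mathbf{x}_0, \omega)\,
  G(\mathbf{x} - \mathbf{x}_0, \omega)\,\mathrm{d}A(\mathbf{x}_0),
\qquad
G(\mathbf{x} - \mathbf{x}_0, \omega) =
  \frac{e^{-\mathrm{j}\frac{\omega}{c}\,|\mathbf{x} - \mathbf{x}_0|}}
       {4\pi\,|\mathbf{x} - \mathbf{x}_0|},
```

where $D(\mathbf{x}_0, \omega)$ is the driving function on the secondary source distribution $\partial V$ and $G$ is the free-field Green’s function of a point source. Solving for $D$ given a desired field $P$ is the Fredholm integral equation referred to above.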
This is mainly a bugfix release, but there are also a few minor new features:
- The default number of threads is now obtained automatically.
- Certain renderers are now available as Puredata externals. This is still quite experimental (a.k.a. buggy).
- Minor GUI changes (no more “pie slices” on sources, larger fonts)
The new SSR release is available at http://spatialaudio.net/ssr/
Please feel free to post/send feedback for the SSR to
Thanks to the efforts of Johannes Zmölnig, a Debian package for the SoundScape Renderer (SSR) is now available. You can check out the details here.