Release 0.3.1 of the Sound Field Synthesis Toolbox for Python

We are proud to announce release 0.3.1 of the Sound Field Synthesis Toolbox for Python.

This release features:

• Calculation of the sound field scattered by an edge
• Various driving functions for sound field synthesis using an edge-shaped secondary source distribution
• Several refactorings, bugfixes and other improvements

The Python port of the Sound Field Synthesis Toolbox computes the synthesized sound field for various sound reproduction methods in the monofrequent (single-frequency) case. It also includes functionality for visualizing sound fields as well as a set of auxiliary functions. The documentation provides installation instructions, usage examples, and details on the API.
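At its core, such a monofrequent simulation evaluates complex pressure amplitudes on a spatial grid. As a minimal plain-NumPy illustration (deliberately not using the toolbox API), the field of an omnidirectional point source is the free-field Green's function exp(-i k r) / (4 pi r):

```python
import numpy as np

def point_source(omega, xs, grid, c=343.0):
    """Monochromatic free-field point source:
    G(x|xs, omega) = exp(-1j*k*r) / (4*pi*r), with k = omega/c."""
    r = np.linalg.norm(grid - np.asarray(xs), axis=-1)
    k = omega / c
    return np.exp(-1j * k * r) / (4 * np.pi * r)

# evaluate on a small 2D grid in the z = 0 plane
x, y = np.meshgrid(np.linspace(-2, 2, 101), np.linspace(-2, 2, 101))
grid = np.stack([x, y, np.zeros_like(x)], axis=-1)
p = point_source(2 * np.pi * 1000, [0.0, 2.5, 0.0], grid)
```

Plotting the real part of p over the grid gives the kind of sound-field snapshot produced by the toolbox's visualization functions.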

Paper: On Analytic Methods for 2.5-D Local Sound Field Synthesis Using Circular Distributions of Secondary Sources

In the IEEE/ACM Transactions on Audio, Speech, and Language Processing (vol. 24, no. 5) we published

Winter, F.; Ahrens, J.; Spors, S. (2016), “On Analytic Methods for 2.5-D Local Sound Field Synthesis Using Circular Distributions of Secondary Sources,” In: IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 24, no. 5

The paper can be found here.


Paper: A Comparison of Sound Field Synthesis Techniques for Non-Smooth Secondary Source Distributions

At the 42nd DAGA conference we presented the contribution

Winter, F.; Spors, S. (2016): “A Comparison of Sound Field Synthesis Techniques for Non-Smooth Secondary Source Distributions.” In: Proc. of
42nd DAGA, Aachen.

Abstract:
Sound Field Synthesis techniques reproduce a virtual sound field inside an
extended listening area using a distribution of loudspeakers located on the
area’s boundary. The theoretical foundations of such techniques assume a
spatially smooth boundary. Non-smooth shapes, e.g. rectangles, are however
more suitable in practical applications, since the loudspeaker setup has to
fit into the architecture of the listening room. This discrepancy introduces
diffraction artefacts into the reproduced sound field. Consequently, deviations
from the desired sound field with respect to amplitude and spectral properties
are present.
This paper compares Wave Field Synthesis, Local Wave Field Synthesis, and an
analytically derived solution for rectangular geometries regarding the
mentioned artefacts.

Paper: On the Connections of Wave Field Synthesis and Spectral Division Method Plane Wave Driving Functions

At the 42nd DAGA conference we presented the contribution

Schultz, F.; Spors, S. (2016): “On the Connections of Wave Field Synthesis
and Spectral Division Method Plane Wave Driving Functions.” In: Proc. of
42nd DAGA, Aachen.

Abstract:
Wave Field Synthesis (WFS) is a well-established sound field synthesis (SFS) technique that uses a dense spatial distribution of loudspeakers arranged around an extended listening area. It has been shown that WFS based on the Neumann Rayleigh integral constitutes the high-frequency and/or farfield approximation of the explicit SFS solution, such as the Spectral Division Method (SDM) and Nearfield Compensated Higher-Order Ambisonics (NFC-HOA). However, for SFS of a virtual plane wave using a linear loudspeaker array a supposed mismatch between the SDM and a WFS driving function has been reported in literature.
In this paper we will derive the WFS plane wave driving functions using the same stationary phase approximation approach as introduced for the virtual non-focused point source. This yields WFS driving functions either for a reference point or for a parallel reference line. It is shown that the latter is identical with the high-frequency and/or farfield approximated SDM solution. Thus, with no mismatch existing, the SFS fundamentals are proven to be consistent.
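For reference, the 2.5-D WFS plane-wave driving function at the center of this comparison can be sketched as follows; this is the form commonly given in the SFS literature, and sign and normalization conventions may differ from the paper:

```latex
D_{2.5\mathrm{D}}(\mathbf{x}_0, \omega) =
  2\,\underbrace{\sqrt{2\pi\,\lvert \mathbf{x}_\mathrm{ref} - \mathbf{x}_0 \rvert}}_{g_0}\,
  \sqrt{\frac{\mathrm{i}\omega}{c}}\;
  \hat{\mathbf{n}}_\mathrm{pw}^{\mathsf{T}}\hat{\mathbf{n}}(\mathbf{x}_0)\;
  \mathrm{e}^{-\mathrm{i}\frac{\omega}{c}\,\hat{\mathbf{n}}_\mathrm{pw}^{\mathsf{T}}\mathbf{x}_0}
```

Here n_pw is the propagation direction of the virtual plane wave, n(x0) the inward normal of the secondary source at x0, and x_ref the reference position. Referencing to a line parallel to the array makes the prefactor g0 constant along the array, which is the variant the paper shows to coincide with the high-frequency/farfield approximated SDM solution.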

Release of Two!Ears Auditory Model Version 1.2

A new release of the Two!Ears auditory model is available

You can download the release on the Two!Ears website. Check out the installation guide.

Besides lots of bug fixes, this release features:
Blackboard system:
* Replaced GmtkLocationKS by GmmLocationKS
* Removed dependency on the external GMTK framework
New Examples:
* GMM-based localisation under reverberant conditions

TWO!EARS is a project funded by the Seventh Framework Programme (FP7) of the European Commission, as part of the Future Emerging Technologies Open Programme “Challenging current thinking” (call FP7-ICT-2013-C).

Sound Field Synthesis Toolbox 2.1.0 released

A new release of the Sound Field Synthesis Toolbox for Matlab/Octave is available. The highlights of the new version include several improvements for time-domain NFC-HOA simulations, new virtual line sources for WFS and NFC-HOA, and the ability to set t = 0 of time-domain simulations to the start of the virtual source.

Download the SFS Toolbox 2.1.0 and its PDF documentation, and have a look at the tutorial on GitHub to learn how to use it.

NEWS:

- make conf struct mandatory
- add new start message
- fix handling of 0 in least-squares fractional delays
- fix NFC-HOA order for even loudspeaker numbers to N/2-1
- add conf.wfs.hpreFIRorder as new config option (was hard-coded to 128 before)
- speed up secondary source selection for WFS
- rename chromajs colormap to yellowred
- fix tapering_window() for non-continuous secondary sources
- remove cubehelix colormap as it is part of Octave
- add conf.wfs.t0 option, which is useful if you have more than one virtual source
- virtual line sources are now available for monochromatic WFS and NFC-HOA
- allow arbitrary orders for time-domain NFC-HOA simulations
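The NFC-HOA order fix from the list above can be summarized in a few lines. The sketch below encodes the rule for even loudspeaker counts as stated in the changelog; the odd-count branch is the usual convention and an assumption here, as is the function name:

```python
def max_nfchoa_order(nls):
    """Maximum modal order for NFC-HOA with a circular array of nls
    loudspeakers: nls/2 - 1 for even nls (the case fixed in this
    release), (nls - 1)/2 for odd nls (assumed standard rule)."""
    if nls % 2 == 0:
        return nls // 2 - 1
    return (nls - 1) // 2
```

For example, a 64-channel array yields order 31, while 65 loudspeakers yield order 32.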

Online Exercises for Communication Acoustics

The exercises to our course “Communication Acoustics” are available online:

Communication Acoustics Exercises

For most exercises, we are using the programming language Python within Jupyter notebooks.

You can read a static online version of the exercises using the link above, you can get all the notebooks and additional files from the GitHub repository and run them on your computer with Jupyter, or you can run them interactively in your browser right now!

If you find errors or have suggestions for improvements, please open an issue on GitHub (you’ll need to create a GitHub account first) or leave a comment right here.

Have fun!

HpTF compensation filters for KEMAR available

Headphone transfer function (HpTF) compensation filters of a KEMAR 45BA with large ears for several headphone types are now available.

The compensation filters are released together with the HpTF measurements and the Matlab code used to calculate them. They are suited for use with our free database of BRIRs of our 64-channel loudspeaker array.
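The announcement does not detail how the filters were computed. A common approach for headphone compensation is regularized frequency-domain inversion of the measured HpTF; the sketch below shows such a generic (Kirkeby-style) inversion, not the released Matlab code — the function name and regularization constant are illustrative:

```python
import numpy as np

def compensation_filter(h, beta=0.005, nfft=1024):
    """Regularized inverse filter for a measured headphone impulse
    response h:  C(w) = conj(H(w)) / (|H(w)|^2 + beta).
    beta limits the gain where the headphone response has deep notches."""
    H = np.fft.rfft(h, nfft)
    C = np.conj(H) / (np.abs(H) ** 2 + beta)
    c = np.fft.irfft(C, nfft)
    # shift the peak to the middle for an approximately causal filter
    return np.roll(c, nfft // 2)
```

Convolving headphone playback signals with the resulting filter flattens the measured magnitude response, at the cost of the modeling delay introduced by the shift.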

Two!Ears Auditory Model 1.1 Released

A new version of our software framework was published today. Please go to the download section and have a look at the installation guide in order to try it out.

Besides lots of bug fixes, the main new features of this release are:

Binaural simulator:
* Now works under Matlab 2015b
New processors in the Auditory front-end:
* precedence effect processor
* MOC feedback processor
New knowledge source in the Blackboard system:
* Segmentation knowledge source
* Deep neural-network based localisation knowledge source
* Coloration knowledge source
* Localisation knowledge source for evaluating spatial audio systems
New Database entries:
* Results from listening test on coloration in wave field synthesis
New Examples:
* DNN-based localisation under reverberant conditions
* Segmentation with and without priming
* (Re)train the segmentation stage
* Prediction of coloration in spatial audio systems
* Prediction of localisation in spatial audio systems
