Thesis: Sound Field Synthesis for Line Source Array Applications in Large-Scale Sound Reinforcement

The doctoral thesis
Frank Schultz (2016): “Sound Field Synthesis for Line Source Array Applications in Large-Scale Sound Reinforcement”, University of Rostock, URN: urn:nbn:de:gbv:28-diss2016-0078-1
was finally released.

Abstract: This thesis deals with optimized large-scale sound reinforcement for large audiences in large venues using line source arrays. Homogeneous audience coverage requires flat frequency responses for all listeners and an appropriate sound pressure level distribution. This is treated as a sound field synthesis problem rather than a directivity synthesis problem. To this end, the synthesis of a virtual source via the line source array allows the problem to be interpreted as audience-adapted wavefront shaping. This is achieved either by geometrical array curving, by electronic control of the loudspeakers, or ideally by combining both approaches. Obviously, the obtained results depend on how accurately an array can emanate the desired wavefront. For practical array designs and setups this is affected by the deployed loudspeakers and their arrangement, the array's electronic control, and potential spatial aliasing. The influence of these parameters is discussed with the aid of array signal processing, revisiting the so-called Wavefront Sculpture Technology and proposing so-called Wave Field Synthesis as a suitable control method.
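
To give a rough, self-contained illustration of the wavefront-shaping idea (a minimal numerical sketch only, not the formulation used in the thesis; the array geometry, virtual source position and frequency below are arbitrary example values): a straight array of point-like sources can approximate the wavefront of a virtual point source located behind the array by delaying each driver according to its distance to that virtual source.

import numpy as np

# Minimal sketch: wavefront shaping with a straight line array.
# Each element is modelled as an ideal monopole and delayed so that the
# superposed field approximates the wavefront of a virtual point source
# behind the array. Example values only.
c = 343.0                      # speed of sound in m/s
f = 1000.0                     # frequency in Hz
omega = 2 * np.pi * f
k = omega / c                  # wavenumber

# 16-element line array along y, spaced 0.2 m, located at x = 0
y_src = np.arange(16) * 0.2 - 1.5
x0 = np.stack([np.zeros_like(y_src), y_src], axis=1)

xs = np.array([-2.0, 0.0])     # virtual point source behind the array

# delay each driver by its extra distance to the virtual source
dist = np.linalg.norm(x0 - xs, axis=1)
delays = (dist - dist.min()) / c
weights = np.exp(-1j * omega * delays) / dist       # delay plus 1/r taper

# evaluate the superposed monopole fields on a grid in front of the array
x = np.linspace(0.1, 4.0, 200)
y = np.linspace(-2.0, 2.0, 200)
X, Y = np.meshgrid(x, y)
p = np.zeros_like(X, dtype=complex)
for w, (sx, sy) in zip(weights, x0):
    r = np.hypot(X - sx, Y - sy)
    p += w * np.exp(-1j * k * r) / (4 * np.pi * r)  # monopole Green's function

print("synthesized field computed on a", p.shape, "grid")

In an electronically controlled array such delays are realized by delay/filter processing; in a purely mechanically curved array a similar wavefront shaping is approximated by the geometric placement of the cabinets.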

Posted in Publications | Leave a comment

Release 0.3.1 of the Sound Field Synthesis Toolbox for Python

We are proud to announce release 0.3.1 of the Sound Field Synthesis Toolbox for Python.

This release features:

  • Calculation of the sound field scattered by an edge
  • Various driving functions for sound field synthesis using an edge-shaped secondary source distribution
  • Several refactorings, bugfixes and other improvements

The Python port of the Sound Field Synthesis Toolbox features the calculation of the sound fields synthesized by various sound reproduction methods for the monofrequent case. Functionality for the visualization of sound fields, as well as a set of auxiliary functions, is included. The documentation provides installation instructions, usage examples and details on the API.
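
The toolbox documentation contains complete usage examples; as a version-independent illustration of what such a monofrequent simulation boils down to, the sketch below computes and plots the sound field of a single line source directly with NumPy/SciPy via the two-dimensional free-field Green's function (source position and frequency are arbitrary example values, and the exp(+jωt) time convention is assumed).

import numpy as np
from scipy.special import hankel2
import matplotlib.pyplot as plt

# Sketch of a monofrequent 2D sound field: an ideal line source,
# p(r) = -j/4 * H0^(2)(k r) for the exp(+j omega t) convention.
c = 343.0
f = 500.0
k = 2 * np.pi * f / c
xs = np.array([0.0, 0.0])                 # line source position (example)

x = np.linspace(-2, 2, 300)
y = np.linspace(-2, 2, 300)
X, Y = np.meshgrid(x, y)
r = np.hypot(X - xs[0], Y - xs[1])

p = -1j / 4 * hankel2(0, k * np.maximum(r, 1e-6))   # avoid r = 0

plt.pcolormesh(X, Y, np.real(p), shading="auto", cmap="coolwarm")
plt.colorbar(label="Re{p}")
plt.axis("equal")
plt.xlabel("x / m")
plt.ylabel("y / m")
plt.show()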

[Figure: sound field diffracted by an edge]

Posted in Announcement, Reproducible Research | Tagged , , , | Leave a comment

Article: On Analytic Methods for 2.5-D Local Sound Field Synthesis Using Circular Distributions of Secondary Sources

In the IEEE/ACM Transactions on Audio, Speech, and Language Processing (vol. 24, no. 5) we published

Winter, F.; Ahrens, J.; Spors, S. (2016): “On Analytic Methods for 2.5-D Local Sound Field Synthesis Using Circular Distributions of Secondary Sources.” In: IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 24, no. 5.

The paper can be found here.


Posted in Papers, Publications | Tagged , , | Leave a comment

Paper: A Comparison of Sound Field Synthesis Techniques for Non-Smooth Secondary Source Distributions

At the 42nd DAGA conference we presented the contribution

Winter, F.; Spors, S. (2016): “A Comparison of Sound Field Synthesis Techniques for Non-Smooth Secondary Source Distributions.” In: Proc. of
42nd DAGA, Aachen.

[Figure: Local Wave Field Synthesis for Rectangular Arrays]

Abstract:
Sound Field Synthesis techniques reproduce a virtual sound field inside an extended listening area using a distribution of loudspeakers located on the area’s boundary. The theoretical foundations of such techniques assume a spatially smooth boundary. Non-smooth shapes, e.g. rectangles, are however more suitable in practical applications, since the loudspeaker setup has to fit into the architecture of the listening room. This discrepancy introduces diffraction artefacts into the reproduced sound field. Consequently, deviations from the desired sound field with respect to amplitude and spectral properties are present. This paper compares Wave Field Synthesis, Local Wave Field Synthesis, and an analytically derived solution for rectangular geometries regarding the mentioned artefacts.
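
For readers new to the topic, all of the compared techniques build on the same synthesis integral: the reproduced field is a superposition of the secondary sources' fields, weighted by a driving function, over the boundary of the listening area (generic form, with notation chosen here for illustration):

P(\mathbf{x},\omega) \;=\; \oint_{\partial\Omega} D(\mathbf{x}_0,\omega)\, G(\mathbf{x}-\mathbf{x}_0,\omega)\, \mathrm{d}A(\mathbf{x}_0), \qquad \mathbf{x} \in \Omega

Here D is the driving function and G the sound field of a secondary source at x0 on the boundary ∂Ω. The theory behind the driving functions assumes a smooth ∂Ω; the diffraction artefacts compared in the paper arise when ∂Ω has corners, as for a rectangular loudspeaker setup.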

Posted in Announcement | Leave a comment

Paper: On the Connections of Wave Field Synthesis and Spectral Division Method Plane Wave Driving Functions

At the 42nd DAGA conference we presented the contribution

Schultz, F.; Spors, S. (2016): “On the Connections of Wave Field Synthesis
and Spectral Division Method Plane Wave Driving Functions.” In: Proc. of
42nd DAGA, Aachen.
Please feel free to download the slides.

Abstract:
Wave Field Synthesis (WFS) is a well-established sound field synthesis (SFS) technique that uses a dense spatial distribution of loudspeakers arranged around an extended listening area. It has been shown that WFS based on the Neumann Rayleigh integral constitutes the high-frequency and/or farfield approximation of explicit SFS solutions such as the Spectral Division Method (SDM) and Nearfield Compensated Higher-Order Ambisonics (NFC-HOA). However, for SFS of a virtual plane wave using a linear loudspeaker array, a supposed mismatch between the SDM and a WFS driving function has been reported in the literature.
In this paper we derive the WFS plane wave driving functions using the same stationary phase approximation approach as introduced for the virtual non-focused point source. This yields WFS driving functions either for a reference point or for a parallel reference line. It is shown that the latter is identical to the high-frequency and/or farfield approximated SDM solution. Thus, no mismatch exists and the SFS fundamentals are shown to be consistent.
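
For reference, the one-dimensional stationary phase approximation used in such derivations reads, in its standard form (x^* denotes a stationary point of the phase, \phi'(x^*) = 0, and the approximation holds for large \omega):

\int_{-\infty}^{\infty} f(x)\, \mathrm{e}^{\mathrm{i}\omega\phi(x)}\, \mathrm{d}x \;\approx\; \sqrt{\frac{2\pi}{\omega\,\lvert\phi''(x^*)\rvert}}\; f(x^*)\, \mathrm{e}^{\mathrm{i}\omega\phi(x^*)}\, \mathrm{e}^{\mathrm{i}\,\mathrm{sgn}\left(\phi''(x^*)\right)\pi/4}

Loosely speaking, applying this to the synthesis integral singles out the secondary source that dominates the field at the chosen reference point or reference line, which is where the referencing scheme enters the resulting driving function.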

Posted in Papers | Leave a comment

Release of Two!Ears Auditory Model Version 1.2

A new release of the Two!Ears auditory model is available.

DOI: 10.5281/zenodo.47487

You can download the release on the Two!Ears website. Check out the installation guide.

Besides lots of bug fixes, this release features:
Blackboard system:
* Replaced GmtkLocationKS by GmmLocationKS
* Removed the dependency on the external GMTK framework
New Examples:
* GMM-based localisation under reverberant conditions (a minimal illustration of the idea is sketched below)
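
The Two!Ears model itself is implemented in Matlab; purely as an illustration of the idea behind GMM-based localisation (not of the GmmLocationKS implementation), the following Python sketch fits one Gaussian mixture model per azimuth class to binaural features and picks the class with the highest likelihood. The feature dimensions, class labels and data below are made up for the example.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Made-up training data: 2-D "binaural features" (e.g. ITD/ILD-like values)
# for three azimuth classes. In a real system these come from an auditory
# front-end processing the binaural ear signals.
azimuths = [-30, 0, 30]
train = {az: rng.normal(loc=[az / 30.0, az / 60.0], scale=0.3, size=(200, 2))
         for az in azimuths}

# fit one GMM per azimuth class
models = {az: GaussianMixture(n_components=2, random_state=0).fit(X)
          for az, X in train.items()}

def localise(features):
    """Return the azimuth whose GMM assigns the highest mean log-likelihood."""
    scores = {az: gmm.score(features) for az, gmm in models.items()}
    return max(scores, key=scores.get)

# classify a made-up test frame drawn near the +30 degree class
test = rng.normal(loc=[1.0, 0.5], scale=0.3, size=(50, 2))
print("estimated azimuth:", localise(test), "degrees")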


TWO!EARS is a project funded by the Seventh Framework Programme (FP7) of the European Commission, as part of the Future and Emerging Technologies Open Programme “Challenging current thinking” (call FP7-ICT-2013-C).

Posted in Announcement, Reproducible Research | Tagged , , | Leave a comment

Sound Field Synthesis Toolbox 2.1.0 released

A new release of the Sound Field Synthesis Toolbox for Matlab/Octave is available. The highlights of the new version include several improvements for time-domain NFC-HOA simulations, new virtual line sources for WFS and NFC-HOA, and the ability to set t=0 for time-domain simulations to the start of the virtual source.


Download the SFS Toolbox 2.1.0 and its PDF documentation, and have a look at the tutorial on GitHub to learn how to use it.

DOI: 10.5281/zenodo.47292

NEWS:

- make conf struct mandatory
- add new start message
- fix handling of 0 in least-squares fractional delays (the general idea is sketched below)
- fix NFC-HOA order for even loudspeaker numbers to N/2-1
- add conf.wfs.hpreFIRorder as new config option (was hard-coded to 128 before)
- speed up secondary source selection for WFS
- rename chromajs colormap to yellowred
- fix tapering_window() for non-continuous secondary sources
- remove cubehelix colormap as it is part of Octave
- add conf.wfs.t0 option, which is useful if you have more than one virtual source
- virtual line sources are now available for monochromatic WFS and NFC-HOA
- allow arbitrary orders for time-domain NFC-HOA simulations
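
The Toolbox itself is written in Matlab/Octave; the sketch below only illustrates, in Python, the general least-squares fractional-delay idea behind the fix listed above: an FIR filter is fitted so that its frequency response approximates a pure delay by a non-integer number of samples (filter length, delay and fitted band are arbitrary example values, not the Toolbox's settings).

import numpy as np

def ls_fractional_delay(delay, n_taps=32, band=0.9, n_freqs=512):
    """FIR coefficients approximating a delay of `delay` samples:
    least-squares fit of exp(-j*w*delay) on [0, band*pi]."""
    w = np.linspace(0, band * np.pi, n_freqs)        # fitted frequencies
    n = np.arange(n_taps)
    E = np.exp(-1j * np.outer(w, n))                 # DTFT matrix
    target = np.exp(-1j * w * delay)                 # ideal delay response
    # stack real and imaginary parts to solve for real-valued coefficients
    A = np.vstack([E.real, E.imag])
    b = np.concatenate([target.real, target.imag])
    h, *_ = np.linalg.lstsq(A, b, rcond=None)
    return h

h = ls_fractional_delay(delay=10.3)
print("tap with the largest magnitude:", int(np.argmax(np.abs(h))))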

Posted in Announcement, MATLAB, Reproducible Research | Tagged | Leave a comment

Online Exercises for Communication Acoustics

The exercises to our course “Communication Acoustics” are available online:

Communication Acoustics Exercises

For most exercises, we are using the programming language Python within Jupyter notebooks.

You can read a static online version of the exercises using the link above, you can get all the notebooks and additional files from the GitHub repository and run them on your computer with Jupyter, or you can run them interactively in your browser by clicking on Binder right now!

If you find errors or have suggestions for improvements, please open an issue on GitHub (you’ll need to create a GitHub account first) or leave a comment right here.

Have fun!

Posted in Announcement, Open Educational Resource | Tagged , , , | Leave a comment

HpTF compensation filters for KEMAR available

Headphone transfer function (HpTF) compensation filters, measured on a KEMAR 45BA with large ears for several headphone types, are now available at:

hptf-compensation-filters

The compensation filters are released together with the underlying HpTF measurements and the Matlab code used to calculate them. They are intended for use with our free database of BRIRs of our 64-channel loudspeaker array.
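
The released code is Matlab; as a quick illustration of how such a compensation filter is typically applied, the Python sketch below convolves a two-channel compensation filter with a BRIR. The file names and the assumed (samples, 2) channel layout are placeholders, not the names used in the repositories.

import numpy as np
import soundfile as sf                    # pip install soundfile
from scipy.signal import fftconvolve

# Placeholder file names -- substitute the actual files from the
# hptf-compensation-filters and BRIR releases.
h_comp, fs_h = sf.read("hptf_compensation_filter.wav")   # assumed shape: (taps, 2)
brir, fs_b = sf.read("brir.wav")                         # assumed shape: (samples, 2)
assert fs_h == fs_b, "sampling rates must match"

# convolve each BRIR channel with the corresponding compensation channel
compensated = np.stack(
    [fftconvolve(brir[:, ch], h_comp[:, ch], mode="full") for ch in range(2)],
    axis=1)

# normalize to avoid clipping and store the compensated BRIR
compensated /= np.max(np.abs(compensated))
sf.write("brir_compensated.wav", compensated, fs_b)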

Posted in Announcement | Leave a comment

Two!Ears Auditory Model 1.1 Released

A new version of our software framework was published today. Please go to the download section and have a look at the installation guide to try it out.

Besides lots of bug fixes, the main new features of this release are:

Binaural simulator:
* Works now under Matlab 2015b
New processors in the Auditory front-end:
* precedence effect processor
* MOC feedback processor
New knowledge source in the Blackboard system:
* Segmentation knowledge source
* Deep neural-network based localisation knowledge source
* Coloration knowledge source
* Localisation knowledge source for evaluating spatial audio systems
New Database entries:
* Results from listening test on coloration in wave field synthesis
New Examples:
* DNN-based localisation under reverberant conditions
* Segmentation with and without priming
* (Re)train the segmentation stage
* Prediction of coloration in spatial audio systems
* Prediction of localisation in spatial audio systems


TWO!EARS is a project funded by the Seventh Framework Programme (FP7) of the European Commission, as part of the Future and Emerging Technologies Open Programme “Challenging current thinking” (call FP7-ICT-2013-C).

Posted in Announcement, Reproducible Research | Tagged , , | Leave a comment