Free database of single-channel and binaural room impulse responses of a 64-channel loudspeaker array for different room configurations

At the 138th AES Convention in Warsaw, we presented the freely available database of single-channel and binaural room impulse responses (RIRs and BRIRs) measured in our rectangular 64-channel loudspeaker array at the University of Rostock under varying room acoustical conditions. The RIRs have been measured at three receiver positions for four different absorber configurations. Corresponding BRIRs have been captured with a KEMAR manikin for head orientations in the range of ±80° in 2° steps, for a subset of seven combinations of receiver position and absorber configuration. The data is provided in the Spatially Oriented Format for Acoustics (SOFA), standardised as AES69 for file exchange. The database can be used to study the influence of the listening room on multichannel audio reproduction.

You can find the database here. The poster presented at the AES Convention can be downloaded here.
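Since SOFA (AES69) files are netCDF-4 containers, the data can be inspected with standard tools before reaching for a dedicated SOFA API. Below is a minimal Python sketch using the netCDF4 package; the file name is a placeholder and the variable names are those of the general SOFA conventions, so please check the database documentation for the exact structure of the provided files.

```python
import numpy as np
from netCDF4 import Dataset

# The file name is a placeholder -- use one of the SOFA files from the database.
with Dataset('brir_example.sofa', 'r') as sofa:
    # Data.IR and Data.SamplingRate are mandatory in every SOFA convention.
    fs = float(np.squeeze(sofa.variables['Data.SamplingRate'][:]))
    irs = np.asarray(sofa.variables['Data.IR'][:])  # shape (M, R, N): measurements, receivers, samples
    # Further metadata, e.g. the head orientations of the BRIR sets, is stored in
    # convention-dependent variables such as ListenerView.

print('sampling rate: {} Hz'.format(fs))
print('{} measurements, {} receiver channel(s), {} samples each'.format(*irs.shape))
```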

Posted in Publications, Reproducible Research

Paper: Physical Properties of Local Wave Field Synthesis Using Linear Loudspeaker Arrays

At the 138th AES Convention we presented the contribution
Winter, F.; Spors, S. (2015): “Physical Properties of Local Wave Field Synthesis Using Linear Loudspeaker Arrays”. In: Proc. of the 138th Audio Eng. Soc. Convention, Warsaw, #9321.

Wave Field Synthesis aims at a physically accurate synthesis of a desired sound field inside an extended listening area. Due to the limitations of practical loudspeaker setups, the accuracy of this sound field synthesis technique over the entire listening area is limited. Local Wave Field Synthesis narrows the spatial extent down to a local listening area in order to improve the reproduction accuracy inside this limited region. Recently, a method has been published which utilizes focused sources as a distribution of more densely placed virtual secondary sources around the local area. Within this paper, an analytical framework is established to analyze the physical properties of this approach for linear loudspeaker setups.

The presentation slides can be found here.

Posted in Publications, Reproducible Research

Paper: Discussion of the Wavefront Sculpture Technology criteria for straight line arrays


At the upcoming 138th AES convention we will present the paper

Schultz, F.; Straube, F.; Spors, S. (2015): “Discussion of the Wavefront Sculpture Technology criteria for straight line arrays.” In: Proc. of the 138th Audio Eng. Soc. Convention, Warsaw, #9323.
Abstract:
Wavefront Sculpture Technology introduced line source arrays for large-scale sound reinforcement, aiming at the synthesis of sound fields that are largely free of spatial aliasing over the full audio bandwidth. The paper revisits this technology and its criteria for straight arrays using a signal processing model from sound field synthesis. Since the latest array designs exhibit very small driver distances, the sampling condition for grating-lobe-free electronic beamforming regains special interest. Furthermore, the discussion extends the initial derivations of the spatial lowpass characteristics of circular pistons, line pistons, and line pistons with wavefront curvature as applied in subarrays.
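As a reminder of what this sampling condition looks like in its classical uniform-array form (a textbook result, not quoted from the paper; here Δx denotes the driver distance, θ_max the maximum electronic steering angle off broadside, and f_max the highest frequency of interest):

```latex
\Delta x \,<\, \frac{\lambda_{\min}}{1 + |\sin\theta_{\max}|}
        \,=\, \frac{c}{f_{\max}\left(1 + |\sin\theta_{\max}|\right)}
```

Smaller driver distances therefore directly extend the grating-lobe-free bandwidth for a given steering range.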

I have managed to compile a pre-release, stand-alone readable version of the fourth chapter of my intended PhD thesis, which covers the topics discussed in the paper. Feel free to download this draft. This rather long treatment revisits WST from a different viewpoint than the one initially taken in the original literature. The content of the mentioned contribution can be found in Sec. 1.5.
The slides of the talk can be downloaded here.

Posted in Publications

Sound Field Synthesis Toolbox 1.1.0 released

A new release of the Sound Field Synthesis Toolbox for Matlab/Octave is available. Besides some important bug fixes, one of its highlights is the introduction of local Wave Field Synthesis as a new synthesis method.
You can download the latest release; have a look at the tutorial on GitHub to learn how to use it.

[Figure: sound field synthesized with local Wave Field Synthesis]

NEWS:
- fix amplitude bug in get_ir() and ir_generic()
- remove direct gnuplot plotting
- add support for local Wave Field Synthesis
- the length of the Dirac impulse response is now an option for dummy_irs()
- fix iseven(), isodd() for very large numbers
- correct the sign for Wave Field Synthesis driving functions

Posted in Announcement, Reproducible Research

Paper: Parameter Analysis for Range Extrapolation of Head-Related Transfer Functions using Virtual Local Wave Field Synthesis

At the 41st German Annual Conference on Acoustics (DAGA) we presented the contribution
Winter, F.; Spors, S. (2015): “Parameter Analysis for Range Extrapolation of Head-Related Transfer Functions using Virtual Local Wave Field Synthesis”. In: Proc. of the 41. Jahrestagung für Akustik, Nürnberg.

[Figure: example of HRTF range extrapolation using local Wave Field Synthesis]

The presentation slides can be found here.

Posted in Publications

Paper: On the Connections between Radiation Synthesis and Sound Field Synthesis using Linear Arrays

At the 41st German Annual Conference on Acoustics (DAGA) we will present the contribution
Schultz, F.; Spors, S. (2015): “On the Connections between Radiation Synthesis and Sound Field Synthesis using Linear Arrays”. In: Proc. of the 41. Jahrestagung für Akustik, Nürnberg.

Please feel free to download the presentation slides and the paper.

[Figure: sound field synthesis model]

Posted in Publications

Thesis: Perceptual Assessment of Sound Field Synthesis

After a few years I have finally managed to finish my PhD thesis and have published it under CC BY 3.0:
Hagen Wierstorf – Perceptual Assessment of Sound Field Synthesis

[Figure: title page of the thesis]

It derives different sound field synthesis driving functions for Wave Field Synthesis and Near-Field Compensated Higher Order Ambisonics. Furthermore, lots of different psychoacoustic tests are presented which investigate perceptual aspects such as coloration, localization, and artifacts. Finally, an adapted binaural model predicts the localization results and can also be used to predict the localization accuracy of newly planned loudspeaker array setups.

Besides the PDF of the thesis, there is a repository on GitHub where you can find errata and all the Matlab/Octave scripts that were used to create the figures in the thesis, including sound field synthesis simulations, data from the listening tests, and auditory modeling. Every figure comes with its own directory containing all the needed scripts and an explanation of how to reproduce it.

[Figure 3.12 from the thesis]

Posted in Announcement, Publications, Reproducible Research

First Releases of the Two!Ears Project

The TWO!EARS project aims at the development of an auditory model that will incorporate both signal-driven (bottom-up) and hypothesis-driven (top-down) processing. The anticipated result is a computational framework for modelling active exploratory listening that assigns meaning to auditory scenes. To mark the project’s first anniversary, parts of its software framework and its database have been made publicly available:

  • The Two!Ears Binaural Simulator enables the creation of binaural audio signals for different situations. This is done using head-related transfer functions (HRTFs) or binaural room impulse responses (BRIRs), which are provided in the Two!Ears data repository. The Two!Ears Binaural Simulator uses the signal processing core of the SoundScape Renderer.
  • The purpose of the Two!Ears auditory front-end is to extract a subset of common auditory representations from a binaural recording or from a stream of binaural audio data. These representations are to be used later by higher modelling or decision stages. The auditory front-end is capable of working in a block-based manner.
  • The Two!Ears data repository contains freely available data ranging from head-related impulse responses up to human quality ratings for different spatial audio systems. The data is collected from different sources inside and outside the Two!Ears project.

TWO!EARS is a project funded by the Seventh Framework Programme (FP7) of the European Commission as part of the Future and Emerging Technologies (FET) Open Programme “Challenging current thinking” (call FP7-ICT-2013-C).

Posted in Projects

Happy Christmas 2014

Some time ago we published an approach to the synthesis of sound figures. A surrounding three-dimensional loudspeaker array can be used to synthesize a (more or less) arbitrarily shaped zone with a higher sound level. See what we have simulated these days:

[Figure: simulated Christmas sound figure]

The sound field has been synthesized by a cubic array consisting of 21,600 point sources (60×60 sources with a spacing of 10 centimeters on each of its six faces). The shape of the figure has been defined by a PNG image. The desired sound field is a monochromatic plane wave at 2000 Hz with an incidence angle of 90 degrees in the horizontal plane and 45 degrees in elevation. Shown is the xy-plane at z=0.
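If you want to experiment yourself, here is a deliberately simplified Python/NumPy sketch of the simulation step described above, i.e. the superposition of the 21,600 point sources evaluated in the xy-plane. Note that it is not the code behind the figure above: the driving signals that actually confine the plane wave to the PNG-defined figure are replaced by a trivial placeholder here, and the real implementation is part of the repositories linked below.

```python
import numpy as np

c = 343.0              # speed of sound in m/s
f = 2000.0             # frequency of the monochromatic plane wave in Hz
k = 2 * np.pi * f / c  # wave number

def cube_source_positions(n=60, spacing=0.1):
    """n x n point sources on each of the six faces of a cube (6 * n**2 in total)."""
    half = (n - 1) * spacing / 2           # extent of the source grid on one face
    face = n * spacing / 2                 # distance of each face from the centre
    u = np.linspace(-half, half, n)
    uu, vv = (a.ravel() for a in np.meshgrid(u, u))
    const = np.full_like(uu, face)
    faces = [(uu, vv, const), (uu, vv, -const),    # top / bottom
             (uu, const, vv), (uu, -const, vv),    # front / back
             (const, uu, vv), (-const, uu, vv)]    # right / left
    return np.vstack([np.column_stack(p) for p in faces])

x0 = cube_source_positions()          # 21,600 source positions
# Placeholder driving signals: the actual sound-figure weights (derived from the
# desired plane wave and the PNG mask) are NOT computed here.
d = np.ones(len(x0), dtype=complex)

# Evaluation grid in the xy-plane at z = 0
x, y = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))
grid = np.column_stack((x.ravel(), y.ravel(), np.zeros(x.size)))

# Superposition of free-field Green's functions G(r) = exp(-j*k*r) / (4*pi*r),
# accumulated source by source to keep the memory footprint small.
p = np.zeros(len(grid), dtype=complex)
for xs, w in zip(x0, d):
    r = np.linalg.norm(grid - xs, axis=-1)
    p += w * np.exp(-1j * k * r) / (4 * np.pi * r)
p = p.reshape(x.shape)                # complex sound pressure in the xy-plane
```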

We are currently porting the SFS-Toolbox from MATLAB to Python. The Python SFS-Toolbox, together with the source code for the synthesis of sound figures, is available on GitHub. Have fun creating your own fancy sound figures. Happy Christmas!


Posted in Reproducible Research, Video

Paper: Localization Properties of Data-based Binaural Synthesis including Translatory Head-Movements

At Forum Acusticum 2014 in Krakow, Poland, we presented the contribution
Winter, F.; Schultz, F.; Spors, S. (2014): “Localization Properties of Data-based Binaural Synthesis including Translatory Head-Movements”. In: Proc. of Forum Acusticum, European Acoustics Association (EAA), 2014.

Feel free to download the paper and the presentation slides. The source code for the figures of the paper is provided on GitHub.

Abstract:
Binaural synthesis of plane wave decomposed spherical microphone array data using head-related transfer functions (HRTFs) is a well-known approach for auralization. Rotational head movements can be considered by dynamic rotation of the HRTF dataset. Translatory head movements are handled by spatio-temporal shifts of the individual plane waves.
This paper analyses this auralization method with respect to the localization of sound sources. The algorithm’s performance is evaluated in terms of the localization accuracy predicted by a binaural model. The influence of the spatial sampling (number of plane waves) and of the translatory head movements is investigated. In addition, the modal resolution is taken into account.
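The spatio-temporal shift mentioned in the abstract can be written compactly (our notation, not necessarily the paper’s, using the e^{iωt} time convention): a plane-wave component with unit propagation vector n_pw, evaluated at a listener position translated by Δx, is just a delayed copy of the component at the original position,

```latex
P(\mathbf{x}+\Delta\mathbf{x},\omega)
  = \hat{P}(\omega)\,
    e^{-\mathrm{i}\frac{\omega}{c}\,\mathbf{n}_{\mathrm{pw}}^{\mathrm{T}}(\mathbf{x}+\Delta\mathbf{x})}
  = P(\mathbf{x},\omega)\, e^{-\mathrm{i}\omega\,\Delta t}
  \qquad\text{with}\qquad
  \Delta t = \frac{\mathbf{n}_{\mathrm{pw}}^{\mathrm{T}}\,\Delta\mathbf{x}}{c}.
```

Within this representation a translation therefore only changes the timing of each plane-wave component, while its direction, and hence the HRTF to be applied, stays the same.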


Posted in Publications