At the Acoustics ’17 conference we gave the talk:
F. Winter, N. Hahn, H. Wierstorf, and S. Spors, “Azimuthal localisation in 2.5D near-field-compensated higher order ambisonics,” 2017.
Additional Material and the Slides can be found here.
Sound Field Synthesis approaches aim at the reconstruction of a desired sound field in a defined target region using a distribution of loudspeakers. Near-Field Compensated Higher Order Ambisonics (NFCHOA) is a prominent example of such techniques. In practical implementations, different artifacts are introduced to the synthesized sound field: spatial aliasing is caused by the non-zero distance between the loudspeakers. Modal bandwidth limitation is a well-established approach to reduce spatial aliasing in 2.5D NFCHOA, but it introduces temporal and spectral impairments to the reproduced sound field which strongly depend on the position relative to the center of modal expansion. Moreover, the dimensionality mismatch in a 2.5D synthesis scenario results in a different amplitude decay compared to the desired sound field. Prior listening experiments have investigated azimuthal localization in 2.5D NFCHOA. It is, however, unclear to what extent the individual artifacts caused by spatial sampling, modal bandwidth limitation, and the 2.5D dimensionality mismatch contribute to these localization impairments. In this contribution, a mathematical framework is used together with binaural synthesis to simulate the individual effect of each artifact on the ear signals. Human performance is approximated by a binaural model for azimuthal localization.
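The region in which an order-limited expansion is accurate can be illustrated with the classical circular-harmonics (Jacobi-Anger) expansion of a plane wave, which holds roughly for kr below the truncation order M. The following Python sketch is only an illustration of this textbook relation; the frequency and order are arbitrary and not taken from the talk:

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind

k = 2 * np.pi * 1000 / 343           # wavenumber at 1 kHz, c = 343 m/s
M = 15                               # modal truncation order
phi = np.linspace(0, 2 * np.pi, 360, endpoint=False)

def truncation_error(r):
    """Max. error of the order-limited circular-harmonics expansion of a plane wave."""
    exact = np.exp(1j * k * r * np.cos(phi))          # plane wave along the x-axis
    m = np.arange(-M, M + 1)
    approx = np.sum((1j ** m) * jv(m, k * r) * np.exp(1j * np.outer(phi, m)), axis=1)
    return np.max(np.abs(approx - exact))

print(truncation_error(0.2))   # k*r ~ 3.7 < M: expansion is accurate
print(truncation_error(2.0))   # k*r ~ 37 > M: truncation artifacts dominate
```

Outside the radius r = M/k the missing modes carry significant energy, which is one way to picture the position-dependent impairments of modal bandwidth limitation.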
A fixed-term (4 years) position for a postdoctoral researcher is available at the Institute of Communications Engineering, Laboratory of Signal Processing and Virtual Acoustics, University of Rostock, Germany. The research is carried out within the framework of project INF ‘Infrastructure Support Project’ of the DFG Collaborative Research Centre (CRC) 1270 ‘Electrically Active Implants’ – ELAINE.
The objective is the conception and realization of research data management, e.g. for numerical simulations, imaging techniques, or experiments, for the entire collaborative research centre. A particular focus lies on the support of open and reproducible research. This includes the realization of a virtual research environment for the CRC and the University of Rostock. The position also involves research on the efficient management of research data and the reproduction of scientific results. The candidate is furthermore responsible for a training and qualification programme on data management.
The official advertisement including the application procedure is available here. Closing date for applications is 21 June 2017.
A fixed-term (3 years) position for a doctoral researcher is available at the Institute of Communications Engineering, Laboratory of Signal Processing and Virtual Acoustics, University of Rostock, Germany.
The aim of the project is to acoustically localize and classify cavitation at ship propellers using multiple hydrophones. This includes the development and experimental validation of algorithms as well as the conception and implementation of experiments.
Key requirements: The successful applicant will have a Diploma or Master’s degree in electrical engineering. Essential skills include profound expertise in digital signal processing and acoustics. Experience in the field of hydroacoustics and machine learning, as well as programming skills in Python, would be desirable.
The official advertisement (in German) including the application procedure is available here. Closing date for applications is 12 June 2017.
At the 142nd Audio Engineering Society Convention we presented the contribution:
F. Winter, H. Wierstorf, A. Raake, and S. Spors, “The Two!Ears Database,” in Proc. of 142nd Aud. Eng. Soc. Conv., 2017.
The Poster can be found here.
TWO!EARS was an EU-funded project for binaural auditory modelling with ten international partners involved. Its main goal was to provide a computational framework for the modelling of active exploratory listening that assigns meaning to auditory scenes. As one outcome of the project, a database including data acquired by the involved partners as well as third-party measurements has been published. Among others, a large collection of Head Related Impulse Responses and Binaural Room Impulse Responses is part of the database. Further, results from psychoacoustic experiments conducted within TWO!EARS to validate the developed auditory model were added. For the usage of the database together with the TWO!EARS model, a software interface was developed to download the data from the database on demand.
At the 142nd Audio Engineering Society Convention we presented the contribution:
F. Winter, H. Wierstorf, and S. Spors, “Improvement of the reporting method for closed-loop human localization experiments,” in Proc. of 142nd Aud. Eng. Soc. Conv., 2017.
Additional Material and the Slides can be found here. The results of the listening test are available on Zenodo.
Sound Field Synthesis reproduces a desired sound field within an extended listening area using up to hundreds of loudspeakers. The perceptual evaluation of such methods is challenging, as many degrees of freedom have to be considered. Binaural Synthesis simulating the loudspeakers over headphones is an effective tool for the evaluation. A prior study has investigated whether non-individual anechoic binaural synthesis is perceptually transparent enough to evaluate human localization in sound field synthesis. With the used apparatus, an undershoot for lateral sound sources was observed for real loudspeakers and their binaural simulation. This paper reassesses human localization for the mentioned technique using a slightly modified setup. The results show that the localization error decreased and no undershoot was observed.
At the 142nd AES convention in Berlin we presented the paper
Frank Schultz, Gergely Firtha, Peter Fiala, Sascha Spors (2017): “Wave Field Synthesis Driving Functions for Large-Scale Sound Reinforcement Using Line Source Arrays.” In: Proc. of 142nd Audio Eng. Soc. Conv. Berlin, #9722.
Please feel free to download the slides Schultz_2017_LSA together with LSA_AES142nd.
Wave field synthesis (WFS) can be used for wavefront shaping with line source arrays (LSAs) in large-scale sound reinforcement. For that, the individual drivers might be electronically controlled by WFS driving functions of a virtual directional point source. From the recently introduced unified 2.5D WFS framework it is known that positions of amplitude correct synthesis (PCS) only exist along an arbitrarily shaped curve, the so-called reference curve, in front of the LSA. However, its shape can be adapted with the referencing function. We introduce the adaptation of the referencing function along the audience line of typical concert venues for optimized wavefront shaping. This yields considerable improvements with respect to the homogeneity of the sound field and more convenient setups compared to previous WFS-based sound reinforcement.
We use the unified 2.5D WFS framework for this approach, see the post.
Our recent contribution to 2.5D WFS theory is published:
Gergely Firtha, Peter Fiala, Frank Schultz, Sascha Spors (2017): “Improved Referencing Schemes for 2.5D Wave Field Synthesis Driving Functions.” In: IEEE/ACM Trans. Audio, Speech, Language Process. 25(5):1117-1127. 10.1109/TASLP.2017.2689245.
Wave Field Synthesis allows the reconstruction of an arbitrary target sound field within a listening area by using a secondary source contour of spherical monopoles. While phase correct synthesis is ensured over the whole listening area, amplitude deviations are present everywhere except on a predefined reference curve. So far, the existence and potential shapes of this reference curve were not extensively discussed in the Wave Field Synthesis literature. This article introduces improved driving functions for 2.5D Wave Field Synthesis. The novel driving functions allow for the control of the locations of amplitude correct synthesis for arbitrarily shaped, possibly curved, secondary source distributions. This is achieved by deriving an expressive physical interpretation of the stationary phase approximation, leading to the presented unified Wave Field Synthesis framework. The improved solutions are better suited for practical applications. Additionally, a consistent classification of existing implicit and explicit 2.5D sound field synthesis solutions as special cases of the unified framework is given.
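The 2.5D amplitude problem behind the referencing schemes can be reproduced numerically in a few lines: a linear distribution of in-phase driven point sources synthesizing a normally incident virtual plane wave produces a field that decays roughly like 1/sqrt(r) instead of staying constant, so the amplitude can be matched to the target at only one reference distance. The following Python sketch uses illustrative parameters (frequency, array length, spacing) that are not taken from the article:

```python
import numpy as np

f, c = 1000.0, 343.0                 # illustrative frequency and speed of sound
k = 2 * np.pi * f / c                # wavenumber
dx = 0.02                            # secondary source spacing [m]
x0 = np.arange(-30.0, 30.0, dx)     # long linear array of monopoles on y = 0

def synthesized_amplitude(y):
    """|P| on the central axis for in-phase driven point sources (virtual plane wave)."""
    r = np.hypot(x0, y)              # distances from all secondary sources
    return abs(np.sum(np.exp(-1j * k * r) / (4 * np.pi * r)) * dx)

# Reference the amplitude to 1 m; the desired plane wave would keep the value 1.
p_ref = synthesized_amplitude(1.0)
for y in (1.0, 2.0, 4.0):
    print(y, synthesized_amplitude(y) / p_ref)   # decays roughly like 1/sqrt(y)
```

The printed amplitudes match the target only at the chosen reference distance, which is exactly the degree of freedom that the improved referencing schemes exploit.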
The paper Towards Open Science in Acoustics: Foundations and Best Practices by Sascha Spors, Matthias Geier and Hagen Wierstorf presented at the annual meeting of the German acoustical society (DAGA) discusses the open science approach and its application in acoustics. The paper and presentation, as well as its sources are available as Open Access on GitHub.
H. Wierstorf, A. Raake, and S. Spors, “Assessing localization accuracy in sound field synthesis,” The Journal of the Acoustical Society of America 141, pp. 1111-1119 (2017), 10.1121/1.4976061
It is published as open access (CC BY 4.0), so feel free to download the PDF version.
The following additional material is available as well:
Stimuli for the listening tests
Average and single results from the listening tests
Code to reproduce the figures
Sound field synthesis methods like Wave Field Synthesis (WFS) and Near-Field Compensated Higher Order Ambisonics synthesize a sound field in an extended area surrounded by loudspeakers. Because of the limited number of loudspeakers that can be applied in practice, the synthesized sound field includes artifacts. This paper investigates the influence of these artifacts on the accuracy with which a listener can localize a synthesized source. This was investigated in listening tests that used dynamic binaural synthesis to simulate different sound field synthesis methods and incorporated several listening positions. The results show that WFS is able to provide good localization accuracy in the whole listening area, even for a low number of loudspeakers. For Near-Field Compensated Higher Order Ambisonics the achievable localization accuracy depends strongly on the Ambisonics order and shows large localization deviations for low orders, where splitting of the perceived sound source was sometimes reported.
A new version of our Sound Field Synthesis Toolbox for Matlab/Octave is available. The highlights of the new release include a correction of the absolute amplitudes in WFS and a new and improved point selection for HRTF/BRIR interpolation, which should now work for almost all 2D and 3D data sets.
Download the SFS Toolbox 2.3.0 and have a look at the online documentation to learn how to use it.
- default 2D WFS focused source is now a line sink
- improve point selection and interpolation of impulse responses
- speed up Parks-McClellan resampling method
- change default value of conf.usebandpass to false
- rename conf.wfs.t0 to conf.t0
- rename and improve easyfft() to spectrum_from_signal()
- rename and improve easyifft() to signal_from_spectrum()
- correct amplitude values of WFS and NFC-HOA in time domain
- fix default 2.5D WFS driving function in time domain
- add time_response_point_source()
- update amplitude and position of dirac in dummy_irs()
- fix missing secondary source selection in ssr_brs_wfs()
- add amplitude terms to WFS FIR pre-filter
- fix Gauss-Legendre quadrature weights
- add delay_offset as return value to NFC-HOA and ir functions
- fix handling of delay_offset in WFS time domain driving functions
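The improved point selection and interpolation of impulse responses mentioned above can be pictured, in a much simplified form, for a horizontal-plane HRIR set: pick the nearest measured directions and blend them with linear weights. The following Python sketch uses dummy data and is only an illustration of the general idea, not the Toolbox implementation (which handles 2D and 3D grids):

```python
import numpy as np

# Hypothetical data set: HRIRs measured on a horizontal circle at 10 degree resolution
azimuths = np.radians(np.arange(0, 360, 10))     # measured directions [rad]
hrirs = np.random.randn(len(azimuths), 2, 512)   # (direction, ear, samples), dummy data

def interpolate_hrir(phi):
    """Select the two nearest measured directions and blend them linearly."""
    diff = np.angle(np.exp(1j * (azimuths - phi)))  # wrapped angular distances
    idx = np.argsort(np.abs(diff))[:2]              # indices of the two nearest points
    d = np.abs(diff[idx])
    if d[0] < 1e-12:                                # exact match: return measured HRIR
        return hrirs[idx[0]]
    w = d[::-1] / d.sum()                           # linear weights, nearer point counts more
    return w[0] * hrirs[idx[0]] + w[1] * hrirs[idx[1]]

h = interpolate_hrir(np.radians(5.0))               # halfway between 0 and 10 degrees
```

Plain time-domain blending like this smears the time of arrival of the responses, which is one reason why a careful point selection matters in practice.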