At the 25th European Signal Processing Conference (EUSIPCO) we presented the contribution:
Hahn, N.; Winter, F.; Spors, S. (2017): “Synthesis of a Spatially Band-Limited Plane Wave in the Time-Domain Using Wave Field Synthesis.” In: Proc. of the 25th European Signal Processing Conference (EUSIPCO), Kos, Greece.
The slides can be found here.
At the 25th European Signal Processing Conference (EUSIPCO) we presented the contribution:
Winter, F.; Hahn, N.; Spors, S. (2017): “Time-Domain Realisation of Model-Based Rendering for 2.5D Local Wave Field Synthesis Using Spatial Bandwidth-Limitation.” In: Proc. of the 25th European Signal Processing Conference (EUSIPCO), 2017.
The slides and additional material can be found here.
Wave Field Synthesis aims at a physically accurate synthesis of a desired sound field inside an extended listening area. This area is surrounded by loudspeakers individually driven by their respective driving signal. Recently, the authors have published an approach for so-called Local Wave Field Synthesis which enhances the reproduction accuracy in a limited region by applying a spatial bandwidth limitation in the circular/spherical harmonics domain to the desired sound field. This paper presents an efficient time-domain realisation of the mentioned approach for 2.5-dimensional synthesis scenarios. It focuses on the model-based rendering of virtual plane waves and point sources. As an outcome, the parametric representation of the driving signals for both source types allows for the reproduction of time-varying acoustic scenarios. This also includes an adaptation to the tracked position of a moving listener. The realisation is compared with conventional Wave Field Synthesis regarding the spatial structure and spectral properties of the reproduced sound field. The results confirm the findings of the prior publication, that the reproduction accuracy can be locally improved with Local Wave Field Synthesis.
A new version of our Sound Field Synthesis Toolbox for Matlab/Octave is available. This release is a major update including the following highlights:
- Fix the calculation of the zeros of the spherical Bessel function for high orders, as needed for NFC-HOA. Following the recent paper by N. Hahn and S. Spors, we managed to get rid of numerical instabilities for orders higher than 80 by using an implementation similar to the one provided by SciPy. To demonstrate the advantage of the new implementation, we have a look at the NFC-HOA part of Fig. 3.14 from Wierstorf (2014). The first version of the figure shows the sound pressure of a cosine-shaped impulse synthesized as a plane wave by NFC-HOA. NFC-HOA was realized with an order of 256, once for a continuous distribution of secondary sources (approximated by 500 sources) and once for a distribution with 64 secondary sources. You can clearly see some numerical noise on the signal. In the area where the signal is not shown, but replaced by the two labels “numerically unstable”, the signal starts to oscillate with very high amplitudes.
The second version of the figure presents exactly the same numerical simulation, but now using version 2.4 with the new implementation of sphbesselh_zeros(), for which numerical problems are no longer an issue at the applied order of 256.
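The underlying idea can be sketched in Python with SciPy (this is an illustrative sketch, not the Toolbox code; the function names below are my own): the poles of the delay-normalised Bessel filter prototype are the roots s_k of the reverse Bessel polynomial θ_n(s), and the zeros of the spherical Hankel function h_n^(1)(x) follow from the mapping x_k = i·s_k, which avoids the ill-conditioned polynomial root finding that breaks down for high orders.

```python
import numpy as np
from scipy.signal import besselap
from scipy.special import spherical_jn, spherical_yn

def sph_hankel1_zeros(order):
    """Zeros of the spherical Hankel function of the first kind.

    The poles of the delay-normalised Bessel filter prototype are the
    roots s_k of the reverse Bessel polynomial theta_n(s); the zeros of
    h_n^{(1)}(x) are then x_k = 1j * s_k.
    """
    _, poles, _ = besselap(order, norm='delay')
    return 1j * poles

def sph_hankel1(n, x):
    """Spherical Hankel function of the first kind, h_n^{(1)}(x)."""
    return spherical_jn(n, x) + 1j * spherical_yn(n, x)

# Sanity check: h_1^{(1)}(x) = -exp(1j*x)*(x + 1j)/x**2 vanishes at x = -1j,
# so the single zero returned for order 1 should be approximately -1j.
print(sph_hankel1_zeros(1))
```

For the second-kind Hankel function (as used with the exp(iωt) convention) the zeros are the complex conjugates of the values returned above.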
- Switch the addressing of time from samples to seconds. For example, to get the sound pressure of a broadband impulse point source placed at (0, 2, 0) m and synthesized by 2.5D NFC-HOA after 5 ms:
conf = SFS_config;
sound_field_imp_nfchoa([-2 2],[-2 2],0,[0 2 0],'ps',0.005,conf)
- Local Wave Field Synthesis (LWFS) using spatial bandwidth-limitation may be used to reduce spatial aliasing artifacts in a prioritized region. As will be presented at WASPAA 2017, this leads to a perceivable reduction of coloration.
The details of the time-domain implementation of this LWFS technique were presented at EUSIPCO 2017.
- Add max-rE, Tukey, and Kaiser weighting to the modal window. The Tukey and Kaiser windows can be parameterized and allow an investigation of the influence of the modal window on the sound field, as done for a recent talk at the Acoustics’17 conference. The max-rE weighting is popular within the Ambisonics community. To see the windows in action, have a look at the new section in the documentation on modal weighting.
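As a rough illustration in Python with SciPy (rather than the Toolbox's Matlab code; the function name and the 2D max-rE formula are assumptions on my part), modal weights for orders 0..M can be built from the right half of a symmetric tapering window, so that order 0 keeps full weight and the highest orders are attenuated:

```python
import numpy as np
from scipy.signal import windows

def modal_weights(kind, max_order, alpha=0.5, beta=8.6):
    """Weights w_0..w_M applied to the circular-harmonic coefficients.

    For the Tukey and Kaiser cases the right half (centre sample
    included) of a symmetric window of length 2*M + 1 is taken, so that
    w_0 = 1 and the weights roll off towards the highest order.
    """
    M = max_order
    m = np.arange(M + 1)
    if kind == 'rect':    # no weighting, plain truncation
        return np.ones(M + 1)
    if kind == 'max-rE':  # assumed 2D (circular-harmonics) max-rE form
        return np.cos(m * np.pi / (2 * M + 2))
    if kind == 'tukey':
        win = windows.tukey(2 * M + 1, alpha=alpha)
    elif kind == 'kaiser':
        win = windows.kaiser(2 * M + 1, beta=beta)
    else:
        raise ValueError(f"unknown window: {kind}")
    return win[M:]        # right half, centre sample included
```

The alpha and beta parameters steer how aggressively the higher orders are tapered, which is exactly the degree of freedom the release note refers to.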
Download the SFS Toolbox 2.4.0 and have a look at the online documentation on how to use it.
- improve references in SFS_config()
- update structure of configuration for LWFS methods
- fix off-center dummy head positions for HRTFs
- add elevation to head orientation for binaural synthesis
- fix sphbesselh_zeros() for high orders
- fix symmetric ifft for Octave
- add inverse Legendre transform
- fix integral weights for spherical secondary sources
- add 3D ps and pw driving functions for NFC-HOA
- add 'reference_circle' as new default for focused sources in 2.5D
- add max-rE and tukey modal weighting windows
- add time-domain implementation of LWFS using spatial bandwidth-limitation
- add circular expansion functions
- fix incorporation of tapering weights for LWFS
- remove x0 from interpolate_ir() call
- fix interpolate_ir() for special cases
- switch handling of time from samples to seconds
- add freq_response_line_source()
- add freq_response_point_source()
- add time_response_line_source()
At the Acoustics’17 conference we gave a talk:
F. Winter, N. Hahn, H. Wierstorf, and S. Spors, “Azimuthal localisation in 2.5D near-field-compensated higher order ambisonics,” 2017.
Additional Material and the Slides can be found here.
Sound Field Synthesis approaches aim at the reconstruction of a desired sound field in a defined target region using a distribution of loudspeakers. Near-Field Compensated Higher Order Ambisonics (NFCHOA) is a prominent example of such techniques. In practical implementations, different artifacts are introduced to the synthesized sound field: spatial aliasing is caused by the non-zero distance between the loudspeakers. Modal bandwidth limitation is a well-established approach to reduce spatial aliasing in 2.5D NFCHOA, but introduces temporal and spectral impairments to the reproduced sound field which strongly depend on the relative position to the center of modal expansion. Also, the dimensionality mismatch in a 2.5D synthesis scenario results in a different amplitude decay compared to the desired sound field. Listening experiments have already investigated azimuthal localization in 2.5D NFCHOA. It is, however, unclear to what extent the individual artifacts caused by spatial sampling, modal bandwidth limitation, and the 2.5D dimensionality mismatch contribute to these localization impairments. Within this contribution, a mathematical framework is used together with binaural synthesis to simulate the individual effect of each artifact on the ear signals. Human performance is approximated by a binaural model for azimuthal localization.
A fixed-term (4 years) position for a postdoctoral researcher is available at the Institute of Communications Engineering, Laboratory of Signal Processing and Virtual Acoustics, University of Rostock, Germany. The research is carried out within the framework of project INF ‘Infrastructure Support Project’ of the DFG Collaborative Research Centre (CRC) 1270 ‘Electrically Active Implants’ – ELAINE.
The objective is the conception and realization of research data management, e.g. for numerical simulations, imaging techniques, or experiments, for the entire collaborative research centre. Hereby, an explicit focus lies on the support of open and reproducible research. This includes the realization of a virtual research environment for the CRC and the University of Rostock. Research in the field of efficient management of research data and the reproduction of scientific results is to be carried out. The candidate is furthermore responsible for a training and qualification programme on data management.
The official advertisement including the application procedure is available here. Closing date for applications is 21 June 2017.
A fixed-term (3 years) position for a doctoral researcher is available at the Institute of Communications Engineering, Laboratory of Signal Processing and Virtual Acoustics, University of Rostock, Germany.
The aim of the project is to acoustically localize and classify cavitation at ship propellers using multiple hydrophones. This comprises the development and experimental validation of algorithms as well as the conception and implementation of experiments.
Key requirements: The successful applicant will have a Diploma or Master’s degree in electrical engineering. Essential skills include profound expertise in digital signal processing and acoustics. Experience in the fields of hydroacoustics and machine learning, as well as programming skills in Python, would be desirable.
The official advertisement (in German) including the application procedure is available here. Closing date for applications is 12 June 2017.
At the 142nd Audio Engineering Society Convention we presented the contribution:
F. Winter, H. Wierstorf, A. Raake, and S. Spors, “The Two!Ears Database,” in Proc. of 142nd Aud. Eng. Soc. Conv., 2017.
The Poster can be found here.
TWO!EARS was an EU-funded project for binaural auditory modelling with ten international partners involved. Its main goal was to provide a computational framework for the modelling of active exploratory listening that assigns meaning to auditory scenes. As one outcome of the project, a database including data acquired by the involved partners as well as third-party measurements has been published. Among others, a large collection of Head Related Impulse Responses and Binaural Room Impulse Responses is part of the database. Further, results from psychoacoustic experiments conducted within TWO!EARS to validate the developed auditory model were added. For the usage of the database together with the TWO!EARS model, a software interface was developed to download the data from the database on demand.
At the 142nd Audio Engineering Society Convention we presented the contribution:
F. Winter, H. Wierstorf, and S. Spors, “Improvement of the reporting method for closed-loop human localization experiments,” in Proc. of 142nd Aud. Eng. Soc. Conv., 2017.
Additional Material and the Slides can be found here. The results of the listening test are available on Zenodo.
Sound Field Synthesis reproduces a desired sound field within an extended listening area using up to hundreds of loudspeakers. The perceptual evaluation of such methods is challenging, as many degrees of freedom have to be considered. Binaural Synthesis simulating the loudspeakers over headphones is an effective tool for the evaluation. A prior study has investigated whether non-individual anechoic binaural synthesis is perceptually transparent enough to evaluate human localization in sound field synthesis. With the used apparatus, an undershoot for lateral sound sources was observed for real loudspeakers and their binaural simulation. This paper reassesses human localization for the mentioned technique using a slightly modified setup. The results show that the localization error decreased and no undershoot was observed.
At the 142nd AES convention in Berlin we presented the paper
Frank Schultz, Gergely Firtha, Peter Fiala, Sascha Spors (2017): “Wave Field Synthesis Driving Functions for Large-Scale Sound Reinforcement Using Line Source Arrays.” In: Proc. of 142nd Audio Eng. Soc. Conv. Berlin, #9722.
Please feel free to download the slides Schultz_2017_LSA together with LSA_AES142nd.
Wave field synthesis (WFS) can be used for wavefront shaping with line source arrays (LSAs) in large-scale sound reinforcement. For that, the individual drivers might be electronically controlled by WFS driving functions of a virtual directional point source. From the recently introduced unified 2.5D WFS framework it is known that positions of amplitude-correct synthesis only exist along an arbitrarily shaped curve (the reference curve) in front of the LSA. However, its shape can be adapted with the so-called referencing function. We introduce the adaptation of the referencing function along the audience line of typical concert venues for optimized wavefront shaping. This yields considerable improvements with respect to the homogeneity of the sound field and more convenient setups compared to previous WFS-based sound reinforcement.
We use the unified 2.5D WFS framework for this approach, see the post.
Our recent contribution to 2.5D WFS theory is published:
Gergely Firtha, Peter Fiala, Frank Schultz, Sascha Spors (2017): “Improved Referencing Schemes for 2.5D Wave Field Synthesis Driving Functions.” In: IEEE/ACM Trans. Audio, Speech, Language Process. 25(5):1117-1127. 10.1109/TASLP.2017.2689245.
Wave Field Synthesis allows the reconstruction of an arbitrary target sound field within a listening area by using a secondary source contour of spherical monopoles. While phase-correct synthesis is ensured over the whole listening area, amplitude deviations are present away from a predefined reference curve. So far, the existence and potential shapes of this reference curve were not extensively discussed in the Wave Field Synthesis literature. This article introduces improved driving functions for 2.5D Wave Field Synthesis. The novel driving functions allow for the control of the locations of amplitude-correct synthesis for arbitrarily shaped (possibly curved) secondary source distributions. This is achieved by deriving an expressive physical interpretation of the stationary phase approximation, leading to the presented unified Wave Field Synthesis framework. The improved solutions are better suited for practical applications. Additionally, a consistent classification of existing implicit and explicit 2.5D sound field synthesis solutions as special cases of the unified framework is given.