Winter, F.; Wierstorf, H.; Hold, C.; Krüger, F.; Raake, A.; Spors, S. (2018): “Colouration in Local Wave Field Synthesis.” In: IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 26, no. 10.
Abstract: Sound Field Synthesis techniques including Wave Field Synthesis and Near-Field-Compensated Higher Order Ambisonics aim at a physically accurate reproduction of a desired sound field inside an extended listening area. This area is surrounded by loudspeakers individually driven by their respective driving signals. The latter have to be chosen such that the superposition of all emitted sound fields coincides with the desired one. Due to practical limitations, artefacts impair the synthesis accuracy resulting in a perceivable change in timbre. Recently, two approaches to so-called Local Wave Field Synthesis were published which enhance the reproduction accuracy in a limited region while allowing stronger artefacts outside. This work reports on two listening experiments comparing conventional techniques for Sound Field Synthesis with the mentioned approaches. Furthermore, the influence of different parametrisations for Local Wave Field Synthesis is investigated. The results show that the enhanced reproduction accuracy in Local Wave Field Synthesis leads to a reduction of perceived colouration, if a suitable parametrisation is chosen.
Winter, F.; Ahrens, J.; Spors, S. (2018): “A Geometric Model for Spatial Aliasing in Wave Field Synthesis.” In: German Annual Conference on Acoustics (DAGA).
The poster and additional material can be found here.
Abstract: Wave Field Synthesis aims at a physically accurate synthesis of a desired sound field inside a target region. Typically, the region is surrounded by a finite number of discrete loudspeakers. For practical loudspeaker setups, this spatial sampling causes spatial aliasing artefacts and does not allow for an accurate synthesis over the entire audible frequency range. In the past, different theoretical treatises of the spatial sampling process for simple loudspeaker geometries, e.g. lines and circles, led to anti-aliasing criteria independent of the listener’s position inside a target region. However, no inference about the spatial phenotype of the aliasing artefacts could be made by these models. This work presents a geometric model based on high-frequency approximations of the underlying theory to describe the spatial occurrence and the propagation direction of the additional wave fronts caused by spatial aliasing. Combined with a ray-tracing algorithm, it can be used to predict position-dependent spatial aliasing artefacts for any convex loudspeaker geometry.
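To give a feel for the position-independent anti-aliasing criteria mentioned in the abstract, here is a minimal sketch for a linear loudspeaker array synthesising a plane wave. It assumes the commonly cited bound f_al = c / (Δx · (1 + |sin θ_pw|)); the helper name `aliasing_frequency` is hypothetical, and the paper's geometric model goes beyond such position-independent bounds.

```python
import math

def aliasing_frequency(dx, theta_pw, c=343.0):
    """Position-independent anti-aliasing frequency [Hz] for a linear
    array with loudspeaker spacing dx [m] synthesising a plane wave
    with incidence angle theta_pw [rad] (0 = broadside).

    Assumed criterion: f_al = c / (dx * (1 + |sin(theta_pw)|)).
    Above f_al, additional (aliased) wave fronts appear in the
    synthesised sound field.
    """
    return c / (dx * (1.0 + abs(math.sin(theta_pw))))

# 10 cm spacing: broadside incidence is the most forgiving case,
# grazing incidence halves the alias-free bandwidth.
print(round(aliasing_frequency(0.1, 0.0)))         # broadside -> 3430
print(round(aliasing_frequency(0.1, math.pi / 2)))  # grazing  -> 1715
```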
Winter, F.; Hold, C.; Wierstorf, H.; Raake, A.; Spors, S. (2017): “Colouration in 2.5D Local Wave Field Synthesis Using Spatial Bandwidth-Limitation.” In: 2017 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA).
The poster and additional material can be found here.
Abstract: Sound Field Synthesis techniques, such as Wave Field Synthesis, aim at a physically accurate reproduction of a desired sound field inside an extended listening area. This area is surrounded by loudspeakers individually driven by their respective driving signals. Due to practical limitations, artefacts impair the synthesis accuracy, resulting in a perceivable change in timbre compared to the desired sound field. Recently, an approach for so-called Local Wave Field Synthesis was published which enhances the reproduction accuracy in a limited region by applying a spatial bandwidth limitation in the circular/spherical harmonics domain to the desired sound field. This paper reports on a listening experiment comparing conventional Sound Field Synthesis techniques with the mentioned approach. The influence of different parametrisations for Local Wave Field Synthesis is also investigated. The results show that the enhanced reproduction accuracy in Local Wave Field Synthesis leads to an improvement with regard to the perceived colouration.
A new version of our Sound Field Synthesis Toolbox for Matlab/Octave is available. This is a minor update fixing some bugs and adding support for mono-frequent simulations of local Wave Field Synthesis (LWFS) using spatial bandwidth-limitation.
- add monochromatic implementation of LWFS using spatial bandwidth-limitation
- add monochromatic circular expansion functions for ps and pw
- add function for conversion from circular to plane wave expansion
- add freq_response_* and time_response_* for all LWFS methods
- add optional message arg to progress_bar()
- fix missing conf.N in freq_response_nfchoa()
- fix auralize_ir() for local files
Winter, F.; Hahn, N.; Spors, S. (2017): “Time-Domain Realisation of Model-Based Rendering for 2.5D Local Wave Field Synthesis Using Spatial Bandwidth-Limitation.” In: Proc. of the 25th European Signal Processing Conference (EUSIPCO), 2017.
The slides and additional material can be found here.
Abstract: Wave Field Synthesis aims at a physically accurate synthesis of a desired sound field inside an extended listening area. This area is surrounded by loudspeakers individually driven by their respective driving signal. Recently, the authors have published an approach for so-called Local Wave Field Synthesis which enhances the reproduction accuracy in a limited region by applying a spatial bandwidth limitation in the circular/spherical harmonics domain to the desired sound field. This paper presents an efficient time-domain realisation of the mentioned approach for 2.5-dimensional synthesis scenarios. It focuses on the model-based rendering of virtual plane waves and point sources. As an outcome, the parametric representation of the driving signals for both source types allows for the reproduction of time-varying acoustic scenarios. This also includes an adaptation to the tracked position of a moving listener. The realisation is compared with conventional Wave Field Synthesis regarding the spatial structure and spectral properties of the reproduced sound field. The results confirm the findings of the prior publication, that the reproduction accuracy can be locally improved with Local Wave Field Synthesis.
A new version of our Sound Field Synthesis Toolbox for Matlab/Octave is available. This release is a major update including the following highlights:
Fix the calculation of the zeros of the spherical Bessel function for high orders, as needed for NFC-HOA. Following the recent paper by N. Hahn and S. Spors, we managed to get rid of numerical instabilities for orders higher than 80 by using an implementation similar to the one provided by scipy. To demonstrate the advantage of the new implementation, we have a look at the NFC-HOA part of Fig. 3.14 from Wierstorf (2014). The first version of the figure shows the sound pressure of a cosine-shaped impulse synthesized as a plane wave by NFC-HOA. NFC-HOA was realized with an order of 256, once for a continuous distribution of secondary sources (approximated by 500 sources) and once for 64 secondary sources. You can clearly see some numerical noise on the signal. In the area where the signal is not shown but replaced by the two labels “numerically unstable”, the signal starts to oscillate with very high amplitudes.
The second version of the figure presents exactly the same numerical simulation, but now using version 2.4 with the new implementation of sphbesselh_zeros(), for which numerical problems are no longer an issue at the applied order of 256.
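For readers curious about the scipy side of this fix, the sketch below shows how such zeros can be obtained stably in Python. It assumes that the zeros needed by the NFC-HOA driving functions correspond (up to a mapping of the complex variable) to the roots of the reverse Bessel polynomial, which `scipy.signal.besselap` computes with a robust root-finding method; the Python helper name merely mirrors the toolbox's `sphbesselh_zeros()` and is not its actual implementation.

```python
import numpy as np
from scipy.signal import besselap

def spherical_hankel_zeros(order):
    """Roots of the reverse Bessel polynomial of the given order.

    besselap() returns the poles of an analogue Bessel filter, which
    are exactly these roots. Its stable root finder avoids the
    blow-up that a naive polynomial root search exhibits for orders
    above roughly 80.
    """
    _, poles, _ = besselap(order, norm='delay')
    return poles

z = spherical_hankel_zeros(80)
print(len(z))               # one root per order: 80
print(np.all(z.real < 0))   # all roots lie in the left half-plane: True
```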
Switch the addressing of time from samples to seconds. For example, to get the sound pressure of a broadband impulse point source placed at (0,2,0) m and synthesized by 2.5D NFC-HOA after 5 ms:
conf = SFS_config;
sound_field_imp_nfchoa([-2 2],[-2 2],0,[0 2 0],'ps',0.005,conf)
Local Wave Field Synthesis (LWFS) using spatial bandwidth-limitation may be used to reduce spatial aliasing artefacts in a prioritised region. As will be presented at WASPAA 2017, this leads to a perceivable reduction of colouration.
The details of the time-domain implementation of this LWFS technique were presented at EUSIPCO 2017.
Add max-rE, Tukey and Kaiser weighting to the modal window. The Tukey and Kaiser windows can be parameterized and allow an investigation of the influence of the modal window on the sound field, as done for a recent talk at the Acoustics’17. The max-rE weighting is popular within the Ambisonics community. To see the windows in action, have a look at the new section in the documentation on modal weighting.
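As a rough illustration of what these modal windows look like, here is a Python sketch. The helper name `modal_weights` is hypothetical and its interface differs from the toolbox's own modal_weighting(); the max-rE branch assumes the common 2D Ambisonics weights cos(mπ/(2N+2)), and the Tukey/Kaiser branches simply take the right half of the corresponding symmetric scipy windows over the modal orders.

```python
import numpy as np
from scipy.signal import windows

def modal_weights(order, wtype='max-rE', alpha=0.5, beta=8.6):
    """Weights w_m for modal orders m = 0..order (hypothetical helper).

    'max-rE': 2D max-rE weighting, a cosine taper over the orders.
    'tukey'/'kaiser': right half of a symmetric window of length
    2*order+1, so w_0 sits at the window's centre.
    """
    m = np.arange(order + 1)
    if wtype == 'max-rE':
        return np.cos(m * np.pi / (2 * order + 2))
    if wtype == 'tukey':
        return windows.tukey(2 * order + 1, alpha)[order:]
    if wtype == 'kaiser':
        return windows.kaiser(2 * order + 1, beta)[order:]
    raise ValueError(f'unknown window type: {wtype}')

w = modal_weights(27, 'max-rE')
print(w[0])   # weight 1.0 at order 0, tapering towards the highest order
```

All three windows leave the zeroth order untouched and attenuate higher orders, which is what trades spatial resolution against the side-lobe (and colouration) behaviour discussed in the talk.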
- improve references in SFS_config()
- update structure of configuration for LWFS methods
- fix off-center dummy head positions for HRTFs
- add elevation to head orientation for binaural synthesis
- fix sphbesselh_zeros() for high orders
- fix symmetric ifft for Octave
- add inverse Legendre transform
- fix integral weights for spherical secondary sources
- add 3D ps and pw driving functions for NFC-HOA
- add 'reference_circle' as new default for focused sources in 2.5D
- add max-rE and tukey modal weighting windows
- add time-domain implementation of LWFS using spatial bandwidth-limitation
- add circular expansion functions
- fix incorporation of tapering weights for LWFS
- remove x0 from interpolate_ir() call
- fix interpolate_ir() for special cases
- switch handling of time from samples to seconds
- add freq_response_line_source()
- add freq_response_point_source()
- add time_response_line_source()
Winter, F.; Hahn, N.; Wierstorf, H.; Spors, S. (2017): “Azimuthal Localisation in 2.5D Near-Field-Compensated Higher Order Ambisonics.”
The slides and additional material can be found here.
Abstract: Sound Field Synthesis approaches aim at the reconstruction of a desired sound field in a defined target region using a distribution of loudspeakers. Near-Field Compensated Higher Order Ambisonics (NFCHOA) is a prominent example of such techniques. In practical implementations different artifacts are introduced to the synthesized sound field: spatial aliasing is caused by the non-zero distance between the loudspeakers. Modal bandwidth limitation is a well-established approach to reduce spatial aliasing in 2.5D NFCHOA, but introduces temporal and spectral impairments to the reproduced sound field which strongly depend on the relative position to the center of modal expansion. Also, the dimensionality mismatch in a 2.5D synthesis scenario results in a different amplitude decay compared to the desired sound field. Listening experiments have already investigated the azimuthal localization in 2.5D NFCHOA. It is, however, unclear to what extent the individual artifacts caused by spatial sampling, modal bandwidth limitation, and the 2.5D dimensionality mismatch each contribute to these localization impairments. Within this contribution a mathematical framework is used together with binaural synthesis to simulate the individual effect of each artifact on the ear signals. Human performance is approximated by a binaural model for azimuthal localization.