Frank Schultz, Nara Hahn, Sascha Spors (2019): “Detection of Constant Phase Shifts in Filters for Sound Field Synthesis”, In: Proc. of 5th Intl Conf on Spatial Audio (ICSA), Ilmenau, Germany
The paper can be downloaded here. The slides can be downloaded here.
The accompanying git repository can be found at https://github.com/spatialaudio/audibility-constant-phase.
Abstract: Filters with a constant phase shift in conjunction with a +3/6 dB amplitude slope per octave frequently occur in sound field synthesis and sound reinforcement applications. These ideal filters, known as (half) differentiators, exhibit zero group delay and a +45/90 degree phase shift. It is well known that certain group delay distortions in electro-acoustic systems are audible for trained listeners and critical audio stimuli, such as transient, impulse-like and square wave signals. It is of interest whether linear distortion by a constant phase shift is audible as well. To investigate this, we conducted a series of ABX listening tests, diotically presenting non-phase-shifted references against their treatments with different phase shifts. The experiments revealed that for the critical square waves this can be clearly detected, where detectability generally depends on the amount of constant phase shift. Here, -90 degrees (Hilbert transform) is comparably easier to detect than other phase shifts. For castanets, lowpass-filtered pink noise and percussion the detection rate tends towards guessing for most listeners, although trained listeners were able to discriminate treatments in the first two cases based on changed pitch, attack and roughness cues. Our results motivate applying constant phase shift filters to ensure that even the most critical signals are technically reproduced as faithfully as possible. In the paper, we furthermore give analytical expressions for the discrete-time infinite impulse response of an arbitrary constant phase shifter and discuss practical filter design.
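The constant phase shift itself is easy to prototype in the frequency domain. The following sketch is our own illustration, not the paper's analytic IIR design: it rotates all positive-frequency bins of a real signal by a fixed angle phi, where phi = -90 degrees reproduces the Hilbert transform case discussed above. Sampling rate, block length and test frequency are arbitrary example choices.

```python
import numpy as np

def constant_phase_shift(x, phi):
    """Shift all frequency components of a real signal by phi radians."""
    N = len(x)
    X = np.fft.rfft(x)
    H = np.full(len(X), np.exp(1j * phi))
    H[0] = np.cos(phi)            # DC bin must stay real
    if N % 2 == 0:
        H[-1] = np.cos(phi)       # Nyquist bin must stay real
    return np.fft.irfft(X * H, n=N)

fs, N = 48000, 1024               # assumed sampling rate and block length
t = np.arange(N) / fs
f0 = 8 * fs / N                   # 375 Hz: exactly 8 periods per block
x = np.cos(2 * np.pi * f0 * t)
y = constant_phase_shift(x, -np.pi / 2)   # -90 deg, i.e. the Hilbert case
```

Scaling the DC and Nyquist bins by cos(phi) keeps the output real; for signals that are periodic within the block the shift is exact, consistent with the zero-group-delay property of the ideal filter.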
In the IEEE/ACM Transactions on Audio, Speech, and Language Processing we published
Winter, F.; Schultz S.; Firtha G.; Spors, S. (2019), “A Geometric Model for Prediction of Spatial Aliasing in 2.5D Sound Field Synthesis,” In: IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 27, no. 6
The article can be found here.
The avoidance of spatial aliasing is a major challenge in the practical implementation of Sound Field Synthesis. Such methods aim at a physically accurate reconstruction of a desired sound field inside a target region using a finite ensemble of loudspeakers. In the past, different theoretical treatments of the inherent spatial sampling process led to anti-aliasing criteria for simple loudspeaker array arrangements, e.g. lines and circles, and fundamental sound fields, e.g. plane and spherical waves. Many criteria were independent of the listener’s position inside the target region. Within this article, a geometrical framework based on a ray approximation of the underlying synthesis problem is proposed. Unlike former approaches, this model predicts spatial aliasing artefacts for arbitrary convex loudspeaker arrays and as a function of the listening position and the desired sound field. Anti-aliasing criteria for distinct listening positions and extended listening areas are formulated based on the established predictions. For validation, the model is applied to different analytical Sound Field Synthesis approaches: the predicted spatial structure of the spatial aliasing agrees with numerical simulations of the synthesised sound fields. Moreover, it is shown within this framework that the active prioritisation of a control region using so-called Local Sound Field Synthesis approaches does indeed reduce spatial aliasing artefacts. For the scenario under investigation, a method for Local Wave Field Synthesis achieves an artefact-free synthesis up to a frequency which is between 2.9 and 17.3 times as high as for conventional Wave Field Synthesis.
At the 45th Annual German Conference on Acoustics (DAGA) we presented the poster “Software Tools and Workflows for Open Science” (DOI: 10.5281/zenodo.2638363).
Science relies on the traceability and replicability of studies. Aiming for sustainability, this is important for the authors themselves as well as for the research community once results are published. Ideally, the entire research process, from the initial concept to publication, should be conducted under the open science paradigm. Recent efforts in the open source software community have led to convenient tools for research data management. Nowadays, it is almost self-evident that researchers engage in the professional typesetting process using the mature LaTeX front-end with graphical packages like TikZ. Furthermore, version control systems such as Git are probably used by a large part of the community for open and closed source projects. Besides that, the open source programming language Python and its various open tools for code development are gradually becoming predominant, supporting the open science paradigm. Jupyter notebooks rapidly gain importance in the workflow for prototyping, documentation and education. Documentation tools like “Sphinx” and free hosting platforms like “Read the Docs” are emerging front-ends that allow versioned technical documentation with hyperlinks. In this contribution we discuss and demonstrate a current, reliable workflow for open science/research, starting from the initial idea and leading to published results.
At the 45th Annual German Conference on Acoustics (DAGA) we presented the poster “Open Source Sound Field Synthesis Toolbox” (DOI: 10.5281/zenodo.2633830) accompanying the new releases.
Sound Field Synthesis (SFS) aims at the reproduction of wave fronts within a large target region surrounded by a large number of loudspeakers. Nowadays, these techniques are known as Wave Field Synthesis (WFS), an implicit solution of the SFS problem, and as explicit solutions, like Ambisonics in the spherical domain and the Spectral Division Method in the Cartesian domain. Research and development on Ambisonics and WFS have proceeded since the 1970s and the late 1980s, respectively, and have been most lively in the last decade due to the available DSP power. This resulted in many SFS systems at research institutes with different rendering methods, complicating comparability and reproducibility. In order to pool the outcomes of different SFS approaches, the Matlab/Octave-based Sound Field Synthesis Toolbox was initiated in 2010 as an open source project by the authors. This toolbox was later accompanied by an online theoretical documentation giving an overview of the SFS approaches and citing the reference literature. In 2013 porting of the SFS Toolbox to Python was initiated, serving as a convenient framework together with Jupyter notebooks. In this contribution we discuss and demonstrate the concepts, workflows and capabilities of the SFS Toolbox and its documentation as a fundamental component for open research on SFS.
At the 45th Annual German Conference on Acoustics (DAGA) we presented further thoughts on the links between NFC-HOA and WFS. See the accompanying GitHub repository for the manuscript, slides and extended calculus.
Schultz, F.; Firtha, G.; Winter, F.; Spors, S. (2019): “On the Connections of High-Frequency Approximated Ambisonics and Wave Field Synthesis.” In: Proc. of the 45th DAGA, Rostock, pp. 1446-1449.
We are happy to announce new versions of the Sound Field Synthesis (SFS) Toolbox for Python (0.5.0) and for Matlab (2.5.0) together with updated theory documentation.
Please see https://sfs.readthedocs.io for further information.
At the 45th Annual German Conference on Acoustics (DAGA) we presented the contribution:
Winter, F.; Schultz, F.; Spors, S. (2019): “Array Design for Increased Spatial Aliasing Frequency in Wave Field Synthesis Based on a Geometric Model.” In: Proc. of the 45th DAGA, Rostock, pp. 463-446.
The poster and additional material can be found here.
Wave Field Synthesis aims at a physically accurate synthesis of a desired sound field inside a target region. Typically, the region is surrounded by a finite number of discrete loudspeakers. For practical loudspeaker setups, this spatial sampling causes spatial aliasing artefacts and does not allow for an accurate synthesis over the entire audible frequency range. Recently, the authors proposed a geometric model to predict the so-called aliasing frequency up to which the spatial aliasing is negligible for a specific listening position or area. Besides its dependency on the desired sound field, this frequency is influenced by the spacing between individual loudspeakers. This work discusses the effects of non-uniform spacing on the aliasing frequency. We further propose optimal discretisation patterns for a given array geometry and desired sound field. The derived patterns are compared to a uniform sampling scheme via numerical simulations of the synthesised sound fields. The results show an increase of the aliasing frequency for the optimised patterns.
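For orientation, the classical position-independent bound mentioned above can be computed directly: for a linear array with uniform spacing, artefact-free synthesis is commonly limited to roughly half the spatial sampling frequency. This back-of-the-envelope sketch is our own illustration with assumed example values, not the geometric model of the paper:

```python
# Classical (position-independent) anti-aliasing bound for a uniform
# linear loudspeaker array; values below are assumed examples.
c = 343.0    # speed of sound in m/s
dx = 0.15    # loudspeaker spacing in m

f_al = c / (2 * dx)    # aliasing frequency in Hz
print(f"aliasing frequency ~ {f_al:.0f} Hz")
```

This simple bound ignores the listening position and the desired sound field; the geometric model described above refines exactly these dependencies, and optimised non-uniform spacings can push the effective aliasing frequency higher.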
A new version is available including a macOS app bundle: http://spatialaudio.net/ssr/download/
The code repository is here: https://github.com/SoundScapeRenderer/ssr
Here’s a brief summary of the changes:
- GUI now uses Qt5
- The exponent that determines distance attenuation of the amplitude in the virtual space can be set by the user
- Significant extensions of the documentation
- The former NFC-HOA renderer is back in an experimental version, now called distance-coded Ambisonics (DCA)
- Headphone-compensated HRTFs are included
- The end-of-message character in TCP messages can be selected by the user
Being curious about numerical simulations in acoustics using the Finite Element Method (FEM), we started to compile a series of Jupyter notebooks providing some insight into the theory and implementation, as well as simulation results. The notebooks are available on GitHub: https://github.com/spatialaudio/computational_acoustics.
If you just want to take a brief look, follow the ‘view it on nbviewer’ links in the Readme for a non-interactive view of the notebooks. We are planning to add notebooks on other methods of computational acoustics in the future.
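To give a flavour of what such notebooks cover, here is a minimal FEM sketch of our own (an illustration, not taken from the repository): the 1D Helmholtz eigenproblem for a pipe of length L with pressure-release ends, discretised with linear elements. The computed wavenumbers should approach the analytic solution k_n = n*pi/L.

```python
import numpy as np

L = 1.0        # pipe length in m (assumed example value)
n_el = 100     # number of linear finite elements
h = L / n_el

# element stiffness and mass matrices for linear (hat) basis functions
Ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
Me = np.array([[2.0, 1.0], [1.0, 2.0]]) * h / 6.0

# assemble the global matrices element by element
n_nodes = n_el + 1
K = np.zeros((n_nodes, n_nodes))
M = np.zeros((n_nodes, n_nodes))
for e in range(n_el):
    idx = np.ix_([e, e + 1], [e, e + 1])
    K[idx] += Ke
    M[idx] += Me

# pressure-release (p = 0) boundaries: drop the first and last node
K, M = K[1:-1, 1:-1], M[1:-1, 1:-1]

# generalized eigenproblem K v = k^2 M v -> wavenumbers of the pipe modes
lam = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
k = np.sqrt(lam[:3])
print(k)    # analytic solution: n * pi / L
```

With 100 elements the first three numerically obtained wavenumbers agree with n*pi/L to well below one percent; the notebooks treat such discretisation errors and higher-dimensional problems in more detail.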
Firtha, G.; Fiala, P.; Schultz, F.; Spors, S. (2018): “On the General Relation of Wave Field Synthesis and Spectral Division Method for Linear Arrays.” In: IEEE/ACM Trans. Audio Speech Language Process., 26(12):2393-2403
was recently published. The topic is also covered in chapter 4 of Gergely Firtha’s dissertation; please see his open access project https://github.com/gfirtha/gfirtha_phd_thesis
Sound field synthesis aims at the reproduction of an arbitrary target sound field over an extended listening area using a densely spaced loudspeaker ensemble. Two basic analytic methodologies—the explicit and the implicit—exist in order to derive the required loudspeaker driving functions. The explicit solution aims at the direct solution of the involved integral equation describing the general sound field synthesis problem, resulting in driving functions in the form of a spectral integral. The implicit solution extracts the driving function from an appropriate boundary integral representation of the target sound field. So far, the relationship between the two approaches has been investigated only for specific target sound fields and synthesis scenarios. For linear arrays this paper introduces a high-frequency approximation for the explicit solution, resulting in a novel, purely spatial-domain formulation of the direct approach. The presented driving functions allow the synthesis of an arbitrary virtual sound field, optimizing the reproduction on an arbitrary reference line. It is furthermore shown that for an arbitrary virtual sound field, the implicit solution constitutes a high-frequency approximation of the explicit method.