Our article has been published in the IEEE/ACM Transactions on Audio, Speech, and Language Processing:
Winter, F.; Schultz, S.; Firtha, G.; Spors, S. (2019), “A Geometric Model for Prediction of Spatial Aliasing in 2.5D Sound Field Synthesis,” In: IEEE/ACM Transactions on Audio, Speech, and Language Processing
The preprint can be found here.
The avoidance of spatial aliasing is a major challenge in the practical implementation of Sound Field Synthesis. Such methods aim at a physically accurate reconstruction of a desired sound field inside a target region using a finite ensemble of loudspeakers. In the past, different theoretical treatments of the inherent spatial sampling process led to anti-aliasing criteria for simple loudspeaker array arrangements, e.g. lines and circles, and fundamental sound fields, e.g. plane and spherical waves. Many of these criteria were independent of the listener’s position inside the target region. In this article, a geometric framework based on a ray approximation of the underlying synthesis problem is proposed. Unlike former approaches, this model predicts spatial aliasing artefacts for arbitrary convex loudspeaker arrays and as a function of the listening position and the desired sound field. Anti-aliasing criteria for distinct listening positions and extended listening areas are formulated based on the established predictions. For validation, the model is applied to different analytical Sound Field Synthesis approaches: the predicted spatial structure of the spatial aliasing agrees with numerical simulations of the synthesised sound fields. Moreover, it is shown within this framework that the active prioritisation of a control region using so-called Local Sound Field Synthesis approaches does indeed reduce spatial aliasing artefacts. For the scenario under investigation, a method for Local Wave Field Synthesis achieves an artefact-free synthesis up to a frequency which is between 2.9 and 17.3 times as high as for conventional Wave Field Synthesis.
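To give a feeling for the position-independent criteria mentioned above, here is a minimal sketch of the classical spatial sampling bound for a uniform linear loudspeaker array: synthesis remains free of spatial aliasing for temporal frequencies up to roughly f_al = c / (2·Δx), where Δx is the loudspeaker spacing. This is the textbook worst-case criterion, not the position-dependent model proposed in the article, and the function name and example spacing are illustrative.

```python
# Classical (position-independent) anti-aliasing bound for a uniform
# linear loudspeaker array. This is the textbook spatial sampling
# criterion, not the geometric model from the article.

SPEED_OF_SOUND = 343.0  # speed of sound in air [m/s]

def aliasing_frequency(dx: float, c: float = SPEED_OF_SOUND) -> float:
    """Upper temporal frequency [Hz] below which a linear array with
    loudspeaker spacing dx [m] avoids spatial aliasing for any
    incidence angle of the desired sound field."""
    return c / (2.0 * dx)

# Example: an array with 15 cm loudspeaker spacing (illustrative value)
print(f"f_al = {aliasing_frequency(0.15):.0f} Hz")  # ~1143 Hz
```

Note how quickly the usable bandwidth shrinks with spacing: doubling Δx halves f_al, which is why practical arrays alias well within the audible range and why listener-dependent predictions, as developed in the article, are of interest.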