The SoundScape Renderer (SSR) is a tool for real-time spatial audio reproduction providing a variety of rendering algorithms, e.g. Wave Field Synthesis, Higher-Order Ambisonics and binaural techniques. The SSR is currently available for GNU/Linux and Mac OS X and has been released as open source software under the GNU General Public License (GPL). It is developed at Quality and Usability Lab/TU Berlin (http://qu.tu-berlin.de/) and at Institut für Nachrichtentechnik/Universität Rostock (http://www.int.uni-rostock.de/).
Several rendering modules are currently available:
- Wave Field Synthesis (WFS)
- Vector Base Amplitude Panning (VBAP)
- Ambisonics Amplitude Panning (AAP)
- Distance-coded Ambisonics (DCA)
  - formerly called Near-field-corrected Higher-Order Ambisonics (NFC-HOA)
- (Dynamic) Binaural Synthesis
- (Dynamic) Binaural Room Synthesis (BRS)
- Generic Renderer (arbitrary static filters between inputs and outputs)
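To illustrate the kind of algorithm behind one of these modules, here is a minimal sketch of 2D Vector Base Amplitude Panning (VBAP) for a single loudspeaker pair. This is not SSR's actual implementation, just the core idea: the source direction is written as a linear combination of the two loudspeaker direction vectors, and the (normalized) weights become the channel gains. The function name and angle convention are hypothetical.

```python
import math

def vbap_pair_gains(source_deg, left_deg, right_deg):
    """Gains for a loudspeaker pair enclosing the source direction (2D VBAP sketch)."""
    # Unit direction vectors for the source and the two loudspeakers
    p = (math.cos(math.radians(source_deg)), math.sin(math.radians(source_deg)))
    l1 = (math.cos(math.radians(left_deg)), math.sin(math.radians(left_deg)))
    l2 = (math.cos(math.radians(right_deg)), math.sin(math.radians(right_deg)))
    # Solve g1 * l1 + g2 * l2 = p (2x2 linear system, Cramer's rule)
    det = l1[0] * l2[1] - l2[0] * l1[1]
    g1 = (p[0] * l2[1] - l2[0] * p[1]) / det
    g2 = (l1[0] * p[1] - p[0] * l1[1]) / det
    # Normalize for constant power across pan positions
    norm = math.hypot(g1, g2)
    return g1 / norm, g2 / norm
```

For example, a source exactly between two loudspeakers at ±45° gets equal gains of 1/√2 on both channels; a source aligned with one loudspeaker gets gain 1 on that channel and 0 on the other.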
New rendering algorithms can be implemented quite easily using the SSR framework.
Interaction with the SSR can happen either through its graphical user interface (written in Qt5) or through its network interface (based on TCP/IP sockets). The latter allows you to connect any interface of your choice to the SSR. Examples of such interfaces are our Android client and the SSR Remote for Max for Live.
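As a minimal sketch of using the network interface, the following assumes the default port (4711) and the zero-terminated XML request format described in the SSR user manual; please verify both against the manual for your SSR version. The helper names `make_position_message` and `move_source` are hypothetical.

```python
import socket

def make_position_message(source_id, x, y):
    """Build a zero-terminated XML request that moves a source (hypothetical helper)."""
    xml = ('<request><source id="{}">'
           '<position x="{}" y="{}"/>'
           '</source></request>').format(source_id, x, y)
    return xml.encode("ascii") + b"\x00"  # SSR messages are terminated by a binary zero

def move_source(host, source_id, x, y, port=4711):
    """Send the request to a running SSR instance over TCP."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(make_position_message(source_id, x, y))

# Example (requires a running SSR instance on this machine):
# move_source("localhost", 1, 1.0, 2.0)
```

Any environment that can open a TCP socket (Pure Data, Max, a web service, ...) can control the SSR this way, which is how clients like the Android remote work.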
The SSR is written in C++, making extensive use of the Standard Template Library (STL). It can be compiled with g++ (the GNU C++ compiler) or clang++ (the LLVM C++ compiler) and runs under Linux and Mac OS X. A Debian package is available for Linux; for Mac OS X, we provide a pre-compiled Application Bundle. The JACK Audio Connection Kit is used to handle audio data, which makes it very easy to connect several audio processing programs to each other and to the hardware. This way, any program that produces audio data (and supports JACK), as well as any live input from the audio hardware, can be connected to the SSR and serve as a source input.
Since version 0.4.0, the SSR can use multiple threads for audio processing and therefore better utilize multi-processor and multi-core computers. The signal processing core of the SSR was factored out into a separate project called Audio Processing Framework (APF). More information is available in a paper presented at the Linux Audio Conference 2012.
Binaural resynthesis works best with head tracking. Therefore, the binaural renderers of the SSR have built-in support for the following tracking devices:
- Razor AHRS, a high-quality, low-cost, do-it-yourself tracker solution with USB and/or Bluetooth support. Open Source firmware and documentation is available at https://github.com/ptrbrtz/razor-9dof-ahrs/.
- Polhemus Fastrak, which works out of the box (but is not cheap).
- InterSense InertiaCube3 (and possibly other InterSense trackers), which needs a proprietary library from their website (and is also not cheap). Due to licensing terms, we cannot provide InterSense support in the Mac OS X Application Bundle; you have to compile it yourself.
- Any tracker which is supported by the Virtual Reality Peripheral Network (VRPN).
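To show why the binaural renderers consume tracking data at all, here is a minimal sketch of the underlying idea (not SSR's actual code): the source azimuth used for the HRTF lookup is the world-space azimuth minus the tracked head yaw, so the virtual scene stays fixed in the room while the head turns. The function name and angle convention are hypothetical.

```python
def relative_azimuth(source_deg, head_yaw_deg):
    """Source direction relative to the listener's nose, wrapped to (-180, 180]."""
    # Counter-rotate the scene by the tracked head yaw
    rel = (source_deg - head_yaw_deg) % 360.0
    # Wrap into the (-180, 180] range expected by a typical HRTF lookup
    return rel - 360.0 if rel > 180.0 else rel
```

For example, a source at 30° with the head turned 90° to the left ends up at -60° relative to the nose; updating this at the tracker's rate is what makes dynamic binaural synthesis convincing.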
The “BoomRoom” is an example where an optical head tracker was used and the tracking data were sent to the SSR via TCP/IP.
There are a few papers about the SSR available. If you use the SSR in a scientific context, please consider citing one of them.
This is the most recent paper about the SSR in general:
Matthias Geier, Sascha Spors:
Spatial Audio Reproduction with the SoundScape Renderer
27th Tonmeistertagung – VDT International Convention, 2012
This paper is about the signal processing core of the SSR and how multi-threading is achieved:
Here we show how the SSR can be used in the background with a custom GUI for psychoacoustic experiments:
Matthias Geier, Sascha Spors:
Conducting Psychoacoustic Experiments with the SoundScape Renderer
9. ITG Fachtagung Sprachkommunikation, 2010
One of the first papers about the SSR:
Matthias Geier, Jens Ahrens, Sascha Spors:
The SoundScape Renderer: A Unified Spatial Audio Reproduction Framework for Arbitrary Rendering Methods
124th Convention of the Audio Engineering Society, 2008
User Manual: http://ssr.rtfd.org/
Development pages: http://github.com/SoundScapeRenderer/ssr/
SSR Remote for Android: https://github.com/SoundScapeRenderer/android-remote/
GNU General Public License (GPL) version 3 or higher.
Copyright (c) 2014 Institut für Nachrichtentechnik, Universität Rostock
Copyright (c) 2012 Quality & Usability Lab, Deutsche Telekom Laboratories, TU Berlin