O7A Decoder - VR Monitoring
This plugin takes a seventh order ambisonics (O7A) mix and decodes it to binaural stereo. It allows monitoring of VR mixes with a range of binaural decoders that might be used during later playback, assuming the mix is passed on as an ambisonic mix.
In addition to the decoder itself, a frontal directional emphasis module and a limiter module are available.
If the Normalise switch is not enabled, the plugin outputs audio at each decoder's standard level, and these levels are not quite the same. This can be useful when performing level checks during mastering.
Enabling the Normalise switch brings the decoder levels more closely into line with each other, so timbre can be compared without changing your monitoring level. Behaviour is then also consistent with the O7A Meter, O7A Meter - Karma and so on.
Directional Emphasis Module
This module further modifies levels to emphasise frontal audio. It uses essentially the same algorithm as Rapture3D Universal (running at seventh order) or the O7A Directional Emphasis plugin from the O7A Manipulators plugin library. It provides focus and strength controls, and the emphasis is always applied in the forwards direction.
Limiter Module
This is an optional brickwall limiter module, which automatically reduces the final level of loud audio to avoid clipping. A light indicates when the signal level is being reduced.
The plugin is available in the O7A View plugin library.
A number of decoder methods are supported. In all cases, these are HRTF-based decoders that are intended to synthesize psychoacoustic binaural cues on headphones.
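Conceptually, an HRTF-based ambisonic-to-binaural decode convolves each ambisonic channel with a pair of precomputed filters (derived from HRTF data) and sums the results into left and right ear signals. The sketch below shows this naive time-domain form with hypothetical filter data; the actual decoders use their own filter sets and are certainly implemented far more efficiently (e.g. in the frequency domain).

```python
def fir(x, h):
    """Direct-form FIR convolution of signal x with filter h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def decode_binaural(ambi, left_filters, right_filters):
    """Sketch of an HRTF-based binaural decode (illustrative only).

    ambi          : list of ambisonic channel signals
                    (64 channels for seventh order, i.e. (7+1)^2)
    left_filters  : one FIR filter per ambisonic channel (left ear)
    right_filters : one FIR filter per ambisonic channel (right ear)
    """
    length = len(ambi[0]) + len(left_filters[0]) - 1
    left = [0.0] * length
    right = [0.0] * length
    for ch, sig in enumerate(ambi):
        for k, v in enumerate(fir(sig, left_filters[ch])):
            left[k] += v
        for k, v in enumerate(fir(sig, right_filters[ch])):
            right[k] += v
    return left, right
```

The choice of filters is what distinguishes the decoders listed below; the summation structure is common to all of them.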
| Decoder | Description |
| --- | --- |
| Amber | This decoder uses Blue Ripple Sound's ground-breaking "Amber" HRTF technology. It is used in the "O7A Decoder - Headphones" plugin in the O7A Decoding pack and is the default headphone decoder in Rapture3D Universal. It is well-suited to a wide range of heads. |
| Red, Orange, Yellow, Green and Blue | These are older Blue Ripple Sound HRTF decoders supported by Rapture3D. They are sometimes preferred to Amber. |
| Mid/Side | This decoder uses a full-width, first order Mid/Side decode (equivalent to two side-facing cardioids). It is used by a number of players when loudspeakers are present rather than headphones. It only uses audio from the first two channels of the O7A mix. |
| YouTube Binaural | This decoder uses an algorithm equivalent to YouTube's 360 video binaural headphone decoder as of April 2017. It is a first order decoder and therefore only uses spatial information from the first four channels of the O7A mix. |
| Facebook 360 | This decoder is equivalent to the binaural headphone decoder used in Facebook 360 as of May 2018, based on coefficients published by Facebook on GitHub. It is a second order decoder and therefore only uses spatial information from the first nine channels of the O7A mix. |
| Purple | This decoder uses Blue Ripple Sound's current HRTF processing, with HRTF data from a KU100 dummy head. It is supported by Rapture3D. |
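The Mid/Side decode can be sketched simply. O7A mixes use ACN channel ordering, so channel 0 is the omnidirectional W component and channel 1 is the left/right Y component; a pair of side-facing cardioids is then just a sum and a difference. The 0.5 gains below are illustrative, not the plugin's exact scaling.

```python
def mid_side_decode(w, y):
    """Sketch of a first order Mid/Side decode (illustrative gains).

    w : channel 0 of the O7A mix (omnidirectional, the 'mid' signal)
    y : channel 1 of the O7A mix (left/right figure-of-eight, the 'side')

    Left and right outputs are cardioids facing left and right.
    """
    left = [0.5 * (wi + yi) for wi, yi in zip(w, y)]
    right = [0.5 * (wi - yi) for wi, yi in zip(w, y)]
    return left, right
```

A sound hard on the left drives W and Y equally, so it appears only in the left output; a frontal sound has no Y component and appears equally in both.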
If the Normalise switch is enabled, the decoder output levels are normalised to be consistent with other O7A decoder plugins and O7A metering. Otherwise, raw decoder output levels are used.
Focus determines the shape of the frontal directional emphasis, with higher values using a more focussed (narrow) region.
A value of zero means that no modification is made to the stream. The focus value can go up to three, for the most focussed emphasis.
Strength determines how aggressively sounds not in the focussed region are reduced in level by the directional emphasis module.
Strength values are between zero and one, where a value of zero means that no modification is made to the stream and a value of one provides the strongest emphasis.
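The actual emphasis algorithm is Blue Ripple Sound's own, but the way the two controls interact can be illustrated with a simple directional gain model. In the hypothetical pattern below, focus raises a frontal cardioid-like shape to a power (narrowing the emphasised region) and strength cross-fades between unity gain and that pattern, so either control at zero leaves the stream unmodified, matching the descriptions above.

```python
import math

def emphasis_gain(angle_from_front, focus, strength):
    """Illustrative directional emphasis gain (not the plugin's algorithm).

    angle_from_front : angle of the sound direction from straight ahead,
                       in radians (0 = front, pi = rear)
    focus            : 0..3, higher values narrow the emphasised region
    strength         : 0..1, how strongly off-axis sounds are reduced
    """
    # Frontal pattern: 1 at the front, falling towards the rear;
    # raising it to the focus power narrows the emphasised region.
    pattern = ((1.0 + math.cos(angle_from_front)) / 2.0) ** focus
    # Strength cross-fades between "no change" (1.0) and the pattern.
    return (1.0 - strength) + strength * pattern
```

With this model, strength 0 or focus 0 gives a gain of 1 everywhere (no modification), while focus 3 at strength 1 reduces rear sounds most aggressively.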
The limiter control toggle enables or disables the limiter module.
The Amber and Orange HRTFs use data from the IRCAM LISTEN HRTF data set, available at http://recherche.ircam.fr/equipes/salles/listen/index.html. The data has been processed by Rapture3D.
The Red and Yellow HRTFs use data from the MIT KEMAR data set by Bill Gardner and Keith Martin, available at https://sound.media.mit.edu/KEMAR.html. The data has been processed by Rapture3D.
The Green and Blue HRTFs use data from the CIAIR HRTF data set by Takanori Nishino, Shoji Kajita, Kazuya Takeda and Fumitada Itakura, which was available at http://www.sp.m.is.nagoya-u.ac.jp/HRTF/database.html. The data has been processed by Rapture3D.
The YouTube HRTF is derived fairly directly from the "symmetric" version of the SADIE binaural measurements published by Google (April 2017), available at https://github.com/google/spatial-media/tree/master/spatial-audio.
The Facebook 360 HRTF is based on coefficients used in the Facebook 360 audio renderer published by Facebook as an open source project on GitHub under the "MIT" license. This can be found at https://github.com/facebookincubator/Audio360.
The Purple HRTF uses data from the SADIE II data set, prepared at the University of York. This is available at https://www.york.ac.uk/sadie-project/index.html. The data has been processed by Rapture3D.
Thanks to all parties!