Google Resonance Audio is a plugin/script available for various engines and software packages such as the Unity game engine. It is designed to bring spatial audio experiences such as augmented or virtual reality up to a more professional standard by simulating some of the acoustic principles mentioned previously.
It simulates acoustic principles such as the head-related transfer function (HRTF), which allows humans to localise sound and perceive frequency using the physical dimensions of the head, ear cavities and so on. GRA takes Unity's pre-scripted audio listener and upgrades it, adding additional parameters for greater accuracy and depth of control over the way sound acts within the digital environment. (For an in-depth look at Google's introductory Fundamental Concepts write-up, please visit [22W] in the blog's bibliography.)
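As a rough illustration of one cue an HRTF encodes, the interaural time difference (the delay between a sound arriving at each ear) can be estimated with Woodworth's spherical-head formula. This is a textbook approximation, not GRA's actual implementation, and the head radius is an assumed average:

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed average human head radius
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

def interaural_time_difference(azimuth_deg):
    """Woodworth's spherical-head approximation of the ITD, one of the
    localisation cues an HRTF captures. Azimuth is measured from
    straight ahead, so 0 degrees gives no delay at all."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source at 90 degrees (directly to one side) gives the largest delay:
print(round(interaural_time_difference(90) * 1e6))  # microseconds, ~656
```

Delays this small (well under a millisecond) are exactly what makes HRTF simulation hard to fake without a dedicated spatialiser.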
GRA keeps the features from Unity's audio listener and audio source scripts but adds some extra 3D features to shape how the audio sources are processed. It adds other spatial elements such as more complex control over emission patterns (Alpha, Sharpness) and sound occlusion. Occlusion is based on the geometry within the environment, which blocks or dampens certain frequencies depending on its material properties. Within these new parameters the user is also able to control how the source mixes with the environment with regards to reverb (Reverb Zone Mix + Spatial Blend).
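The Alpha/Sharpness emission controls can be pictured as a cardioid-family directivity weighting, with Alpha blending between an omnidirectional and a figure-of-eight pattern and Sharpness narrowing the lobe. A minimal sketch under that assumption (the exact formula here is illustrative, not taken from GRA's source):

```python
import math

def directivity_gain(alpha, sharpness, azimuth_deg):
    """Cardioid-family directivity pattern in the style of GRA's
    Alpha/Sharpness controls (assumed formula):
    alpha=0 -> omnidirectional, alpha=0.5 -> cardioid,
    alpha=1 -> figure-of-eight; higher sharpness narrows the lobe."""
    theta = math.radians(azimuth_deg)
    return abs((1.0 - alpha) + alpha * math.cos(theta)) ** sharpness

# A cardioid source (alpha=0.5) is full level in front, silent behind:
print(directivity_gain(0.5, 1, 0))    # 1.0
print(directivity_gain(0.5, 1, 180))  # 0.0
```

Sweeping the azimuth from 0 to 180 degrees traces the polar plot shown in GRA's inspector preview.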
The user can also choose between a few volume roll-off patterns. This can be set to linear, logarithmic or a custom pattern to simulate how we hear based upon the distance of the listener from the audio source. This is a great feature that, surprisingly, does not always require a logarithmic pattern for an immersive sound. (Humans hear logarithmically, which would suggest that this pattern is the obvious choice; however, some situations go against this.)
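The difference between the two built-in roll-off patterns can be sketched with the usual textbook gain curves; the `min_d`/`max_d` parameters below are illustrative names, not GRA's actual parameter names:

```python
def linear_rolloff(distance, min_d, max_d):
    """Gain falls in a straight line from 1 at min_d to 0 at max_d."""
    if distance <= min_d:
        return 1.0
    if distance >= max_d:
        return 0.0
    return 1.0 - (distance - min_d) / (max_d - min_d)

def logarithmic_rolloff(distance, min_d):
    """Inverse-distance law, matching the roughly logarithmic way we
    perceive loudness: gain halves each time the distance doubles."""
    return min_d / max(distance, min_d)

# Compare the two curves as the listener walks away from the source:
for d in (1, 2, 4, 8):
    print(d, round(linear_rolloff(d, 1, 10), 2),
             round(logarithmic_rolloff(d, 1), 2))
```

Note how the logarithmic curve drops steeply at first and then lingers, which is realistic but can leave distant sources audible for longer than a scene wants; that is one reason a linear or custom curve sometimes feels better despite being less physically accurate.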
GRA uses two main features to simulate reverberation. One of these is the reverb mesh. This is like most other meshes within Unity: a box/container with properties that are active within the mesh itself. In the case of a reverb mesh, each face of a cube reverb mesh has some pre-defined material data that represents the reflective properties of the surface, so the user is able to choose which reflective property the walls, floor and ceiling have. This is a great and quick way to create a nice reverb for a room, as it comes with plenty of material properties to apply to the surfaces, enough to match the majority of scenes.
The reverb mesh is a great way to add a fast reverberation effect, and will probably suit most cases. However, the other way to implement reverb is using Reverb Probes in conjunction with a material map. This allows all geometry within a scene to be calculated and 'baked' to create a realistic RT60 for the created environment. If done with care and detail, accurate and realistic environments can be created.
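The RT60 a bake aims to reproduce (the time for reverberant sound to decay by 60 dB) can also be estimated by hand with Sabine's classic formula. The room dimensions and absorption coefficients below are assumptions for illustration, not values taken from GRA:

```python
def sabine_rt60(volume_m3, surfaces):
    """Sabine's formula: RT60 = 0.161 * V / A, where A is the total
    absorption, i.e. each surface's area times its absorption
    coefficient, summed over all surfaces."""
    total_absorption = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / total_absorption

# A 5 x 4 x 3 m room; coefficients are assumed typical values:
room = [
    (2 * (5 * 3) + 2 * (4 * 3), 0.02),  # four painted concrete walls
    (5 * 4, 0.02),                      # concrete ceiling
    (5 * 4, 0.30),                      # carpeted floor
]
print(round(sabine_rt60(5 * 4 * 3, room), 2))  # seconds, ~1.29
```

Swapping the carpet for a reflective surface shortens the absorption total and pushes the RT60 up sharply, which is exactly the behaviour a well-made material map gives the bake.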
The material map (below) should be made first so that all geometry within the scene is included in the reverb bake. The probes themselves are the same as the reverb meshes, except they take data from the reverb bake (which takes data from the map) to calculate how the sound will reflect from geometry within the scene.
Material Map Example
The map-and-probe workflow seems to give the most control and sounds great, with a lot more accuracy when walking around the digital environment. The generated RT60 offers even more control, with the ability to add brightness, time and extra gain as additional attenuation if required.
The material map example above shows that the user can apply parameters to all materials within a scene. This assigns a colour map to each material, representing its level of reflectivity. Once baked this is fixed until a new bake/render is done, so any geometric changes made after the bake will not change the reflections heard. It is therefore good to decide on all geometry and static objects early, to save on re-rendering. All parameters applied to the colours are based on material reflection coefficients (0-1 values).
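To get a feel for what those 0-1 coefficients mean, here is a quick sketch of how much energy survives repeated bounces off a surface. The material names and coefficient values are illustrative assumptions, not GRA's exact figures:

```python
def level_after_reflections(reflection_coeff, bounces):
    """Fraction of energy remaining after repeated bounces off a surface
    with the given reflection coefficient (0 = fully absorbent,
    1 = fully reflective), as when tuning a material map."""
    return reflection_coeff ** bounces

# Brick (highly reflective) vs heavy curtain (mostly absorbent);
# both coefficients are assumed for illustration:
for name, r in (("brick", 0.96), ("curtain", 0.25)):
    print(name, [round(level_after_reflections(r, n), 3) for n in (1, 3, 6)])
```

After half a dozen bounces the curtain has absorbed essentially everything while the brick is still reflecting most of the energy, which is why small coefficient changes in the map have such an audible effect on the baked reverb tail.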