Best Practices for Using Spatial Sound
Spatial sound provides game developers with an expanded arsenal of effects, delivering a more realistic and exciting gaming experience. With spatial sound, sounds can emanate from anywhere in space, not just from the fixed locations of the gamer's speakers. This helps gamers locate virtual enemies and enhances their overall awareness of the gaming environment.
Middleware plugins, available in FMOD Studio and Audiokinetic's Wwise, provide full spatial sound support, driving audio from the game engine down to the sound platform, where it is rendered by the system. The plugins can send sound either to static bed objects at fixed positions in 3-dimensional space or to dynamic objects that can be placed anywhere in that space. Dynamic objects tie sounds to the movements of objects within the game. Middleware also lets the developer mix, adjust, and audition sound before sending it to the platform.
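To make this concrete, here is a minimal sketch of positioning a single sound in 3D space using the FMOD Core API (the low-level layer beneath FMOD Studio; the Studio authoring workflow and the spatial sound plugins add layers on top of this). The file name and coordinates are placeholders:

    #include <fmod.hpp>   // FMOD Core API

    int main()
    {
        // Error checking is omitted for brevity.
        FMOD::System* system = nullptr;
        FMOD::System_Create(&system);
        system->init(512, FMOD_INIT_NORMAL, nullptr);   // up to 512 virtual voices

        // Load a sound flagged for 3D positioning.
        FMOD::Sound* sound = nullptr;
        system->createSound("engine_loop.wav", FMOD_3D | FMOD_LOOP_NORMAL, nullptr, &sound);

        // Play it and place it in the game space; a real game updates this
        // position every frame as the emitting object moves.
        FMOD::Channel* channel = nullptr;
        system->playSound(sound, nullptr, false, &channel);

        FMOD_VECTOR position = { 10.0f, 2.0f, 5.0f };   // placeholder world coordinates
        FMOD_VECTOR velocity = {  0.0f, 0.0f, 0.0f };
        channel->set3DAttributes(&position, &velocity);

        system->update();   // call once per game frame

        sound->release();
        system->release();
        return 0;
    }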
New tools require new best practices for their use. Spatial sound is no different. This article will examine best practices for delivering a high-quality experience to your audience while avoiding time-consuming pitfalls.
Start Planning in Pre-production
To paraphrase Lewis Carroll, "If you don't know where you are going, then any road will take you there." Knowing where you are going, however, will get you to your destination much faster. That is what pre-production is about: determining your goals.
Spatial sound provides far more sound choices than ever before, which means sound deserves more time and consideration during pre-production to make the most of it. Understanding how and when you are going to use spatial sound becomes an integral part of your storyboard.
Planning spatial sound in pre-production also helps you choose between features competing for the same resources, preventing costly changes during production.
Understanding DSP
On the digital signal processing (DSP) side, you are generally moving from eight-channel (7.1) output to 12-channel (7.1.4) output. The additional channels mean you need to rethink reverberation and other processing considerations.
In some cases, such as with dynamic objects in FMOD Studio, sounds need to be in their final form before they go down to the system. Once a dynamic object reaches the system level, you can no longer change its sound. Each dynamic object therefore has to be processed separately instead of being folded into a single stream and balanced there.
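To illustrate the difference, the sketch below (written against the FMOD Core API rather than the Studio plugin workflow, purely for illustration; the reverb choice and group name are hypothetical) contrasts bed content, which can share processing on a channel group and be rebalanced as one stream, with a voice headed for a dynamic object, which must carry its effects on its own channel up front:

    #include <fmod.hpp>

    // Bed content can share DSP on a channel group and be rebalanced as one
    // stream; a voice destined for a dynamic object must have its processing
    // applied per channel, because no further changes are possible once it
    // reaches the system as an object.
    void routeSounds(FMOD::System* system, FMOD::Channel* bedChannel, FMOD::Channel* objectChannel)
    {
        // Bed path: one shared reverb on the group.
        FMOD::ChannelGroup* bedGroup = nullptr;
        system->createChannelGroup("ambient_bed", &bedGroup);
        bedChannel->setChannelGroup(bedGroup);

        FMOD::DSP* sharedReverb = nullptr;
        system->createDSPByType(FMOD_DSP_TYPE_SFXREVERB, &sharedReverb);
        bedGroup->addDSP(0, sharedReverb);

        // Dynamic-object path: effects applied per voice, up front.
        FMOD::DSP* perObjectReverb = nullptr;
        system->createDSPByType(FMOD_DSP_TYPE_SFXREVERB, &perObjectReverb);
        objectChannel->addDSP(0, perObjectReverb);
    }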
Understanding middleware capabilities and the differences created by DSP use is critical to producing the product you envisioned.
Consider Dynamic Object Limitations
While dynamic objects significantly enhance gameplay, the number of dynamic objects available at any given time is limited. A total of 32 objects can go down the HDMI pipe at once. Twelve of those are consumed by the static bed channels, leaving a maximum of 20 objects for other uses.
There may be further limits on the number of dynamic objects your particular middleware can handle. These constraints force trade-offs in how you use dynamic objects within the game space, so budget them deliberately, as in the sketch below.
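At the platform level, you can query the budget instead of assuming it. Here is a minimal sketch using the Windows ISpatialAudioClient interface; obtaining and activating the interface is omitted (Microsoft's spatial sound samples show that boilerplate), and the wrapper function name is ours:

    #include <windows.h>
    #include <spatialaudioclient.h>

    // Given an already-activated ISpatialAudioClient, ask the platform how many
    // dynamic objects are actually available on the current output before
    // budgeting them across the game.
    UINT32 QueryDynamicObjectBudget(ISpatialAudioClient* spatialClient)
    {
        UINT32 maxDynamicObjects = 0;
        HRESULT hr = spatialClient->GetMaxDynamicObjectCount(&maxDynamicObjects);
        if (FAILED(hr))
        {
            return 0;   // spatial rendering is not available on this endpoint
        }
        // The count can be 0 when the user has not enabled a spatial format,
        // and it varies by renderer (for example, headphones vs. HDMI).
        return maxDynamicObjects;
    }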
Listen to All Rendered Endpoints
Spatial sound introduces a broader array of listening possibilities, from stereo to home theater to headphones, across various brands and renderers. Ideally, you want to mix your sound once and have it perform well in every environment. In practice, the sound may behave differently on each of these endpoints, so test all of them to ensure the game plays well everywhere.
Use the Latest XDK/GDK and Understand the Windows Ecosystem
As the spatial sound platform evolves, its developers continually fix bugs and add features, and Microsoft releases updates regularly. Make sure you build against the latest releases so you can take advantage of those improvements.
Windows updates, however, arrive at much longer intervals than Xbox Development Kit/Game Development Kit (XDK/GDK) updates, and you do not control which version of Windows your customers are running. Gaming companies need to be aware of how the Windows ecosystem interacts with their game's features so they can help customers overcome sound and gameplay issues.
Use Caution With Low-Frequency Sound
While dynamic objects and home theater systems, including Dolby Atmos, work with full-audio-range endpoints, low-frequency content by its nature does not localize well. This lack of localization is not caused by any system limitation; it is simply the physics of low-frequency sound.
For example, if you try to localize a sound directly over the gamer's head, its low frequencies will spread out and be much harder for the gamer to pinpoint. That spread is probably not the effect you were trying to create, and taking it into account early in production saves rework later.
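One practical mitigation is to high-pass the content you spatialize as a dynamic object and leave the low end in the bed or LFE. A minimal sketch using the FMOD Core API (the 120 Hz cutoff is an arbitrary illustrative value):

    #include <fmod.hpp>

    // Keep only the localizable part of a sound on the spatialized voice by
    // inserting a high-pass filter; energy below the cutoff is better carried
    // by the bed/LFE, where localization is not expected.
    void highPassSpatializedVoice(FMOD::System* system, FMOD::Channel* channel)
    {
        FMOD::DSP* highpass = nullptr;
        system->createDSPByType(FMOD_DSP_TYPE_HIGHPASS, &highpass);
        highpass->setParameterFloat(FMOD_DSP_HIGHPASS_CUTOFF, 120.0f);  // illustrative cutoff
        channel->addDSP(0, highpass);   // insert into the channel's DSP chain
    }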
Consider Listener Orientation
As with listening to all rendered endpoints, you also need to consider the listener's orientation. Some gaming platforms have their own listener-movement systems, which may orient the listener in the game space differently from how the player is oriented in physical space.
For example, if the listener in the game space is tilted to face downward, sound emanating from in front of a key character will seem to come from above, distorting the gameplay. Understanding how your listener will hear sounds based on their orientation is vital to creating the best gaming experience.
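In the FMOD Core API, for instance, listener orientation comes down to a forward vector and an up vector, so it is worth verifying each frame that those vectors match what the player actually experiences. A minimal sketch (the coordinates are placeholders, and coordinate conventions differ between engines):

    #include <fmod.hpp>

    // Update listener 0 each frame. The forward and up vectors must be unit
    // length and perpendicular; a forward vector tilted toward the floor will
    // make sources in front of the player render as if they were overhead.
    void updateListener(FMOD::System* system)
    {
        FMOD_VECTOR position = { 0.0f, 1.7f, 0.0f };   // placeholder: head height
        FMOD_VECTOR velocity = { 0.0f, 0.0f, 0.0f };
        FMOD_VECTOR forward  = { 0.0f, 0.0f, 1.0f };   // facing +Z, level with the floor
        FMOD_VECTOR up       = { 0.0f, 1.0f, 0.0f };

        system->set3DListenerAttributes(0, &position, &velocity, &forward, &up);
        system->update();
    }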
Avoid Content Duplication in Height Channels
In an effort to get more sound all around the player, you may be tempted to duplicate ambient sound in the height channels. When the sound is rendered on a standard home theater setup, this may be fine. In stereo, however, the duplicated ambience becomes distorted by phasing and cancellation. So avoid duplicating content into the height channels, or at least do so with caution.
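To see why, consider what happens when a stereo downmix sums a channel with a slightly delayed duplicate of itself. This standalone sketch (the 0.5 ms delay is an arbitrary stand-in for timing differences introduced by rendering and downmixing) computes the resulting gain at a few frequencies:

    #include <cmath>
    #include <cstdio>

    // Summing a signal with a delayed copy of itself (roughly what a stereo
    // downmix does to content duplicated into the height channels) produces
    // comb filtering: the summed gain at frequency f for delay d is |2*cos(pi*f*d)|,
    // so some frequencies reinforce while others cancel entirely.
    int main()
    {
        const double pi       = 3.14159265358979;
        const double delaySec = 0.0005;   // illustrative 0.5 ms offset between the copies

        const double freqs[] = { 250.0, 1000.0, 4000.0 };
        for (double freq : freqs)
        {
            double gain = std::fabs(2.0 * std::cos(pi * freq * delaySec));
            std::printf("%6.0f Hz -> summed gain %.2f (2.00 = reinforced, 0.00 = cancelled)\n",
                        freq, gain);
        }
        return 0;
    }

In this toy example, the 1 kHz component cancels completely while the 4 kHz component doubles, which is exactly the uneven coloration heard as phasing.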
Consider Cutscene Audio
If you are using pre-rendered cutscene audio in your game, it must be compatible with the in-game audio. For example, if the cutscene audio is stereo and the game is 12-channel, jumping back and forth between stereo and 12-channel sound can be jarring. If you are using pre-rendered audio, consider expanding the channel count of the cutscene audio to match the in-game audio for a better user experience.
Take Advantage of More Space in the Mix
Spatial sound provides you with far more options to use sound for enhancing gaming experiences. Having theme music follow a character through space enhances storytelling. Sounds coming from different parts of the game space make the environment feel more spacious.
For visually impaired users, spatial sound provides greater gameplay accessibility by helping map the environment. In short, spatial sound offers the developer far more opportunities to make sound a significant part of the gameplay, boosting the gamer's experience.
To learn more about using spatial sound, read Microsoft’s Spatial Sound for Developers documentation or check out the spatial sound samples on GitHub. When you are ready to incorporate spatial sound into your next game, explore Dolby’s developer tools and sign up to level up your gaming experience. Your audience will be thrilled.