For developers who want to incorporate object-based audio into their own audio engine, there are a few fundamental systems that will need to be updated.
- Panning - Until now, audio engines have typically used only the x and y coordinates of a 3D game’s output when tracking an object’s position. With Dolby Atmos®, the z axis must be incorporated into the panning algorithms, both in runtime 3D positional panning and in any offline panning tools for content creators.
- Object management - As discussed above, Dolby Atmos consists of both dynamic and static (bed) objects, with a maximum of 32 in total. It is up to the audio engine to decide whether a given sound is assigned a dynamic object or mixed into the bed objects, because every game and engine is unique, and prioritization rules that work for one may not work for another. Many engines and middleware also already have robust prioritization and management systems to which object-audio management can easily be added. The resulting objects are then passed to the platform’s 3D audio API for packaging into the Dolby Atmos bitstream.
- Linear content ingestion - Content that has already been mixed in Dolby Atmos arrives in a unique file format that includes PCM audio data along with descriptive metadata. This could be the soundtrack for a cut-scene, music, or a rich ambience track. The audio engine must read this format correctly in order to pass it through to the system and, ultimately, to the platform’s 3D audio API.
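The panning point above can be sketched as a small coordinate conversion: taking a full 3D position (including the z axis that 2D panners ignored) and deriving azimuth, elevation, and distance for an object panner. The axis convention here (x = right, y = forward, z = up) is an assumption; engines differ, and a real panner would feed these values into the platform's spatializer rather than return them directly.

```python
import math

def pan_3d(x: float, y: float, z: float) -> tuple[float, float, float]:
    """Convert a Cartesian source position into (azimuth, elevation,
    distance) in degrees/units for an object panner.

    Axis convention is an assumption: x = right, y = forward, z = up.
    """
    distance = math.sqrt(x * x + y * y + z * z)
    # 0 degrees = straight ahead, positive = to the listener's right.
    azimuth = math.degrees(math.atan2(x, y))
    # Elevation is where the z axis enters the panning math.
    elevation = math.degrees(math.asin(z / distance)) if distance > 0 else 0.0
    return azimuth, elevation, distance
```

For example, a source directly overhead (`x=0, y=0, z=1`) yields an elevation of 90 degrees, which a 2D panner collapsing to x and y would have rendered as if it were at the listener's position.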
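The object-management point above could be prototyped as a simple priority scheme: within the 32-object budget, reserve some slots for the bed and let the highest-priority sounds claim the remaining dynamic objects, with everything else mixed into the bed. The bed size and the priority ranking here are illustrative assumptions; as noted, every engine will have its own rules.

```python
MAX_OBJECTS = 32   # total Dolby Atmos object budget noted above
BED_OBJECTS = 8    # assumption: slots reserved for a static bed mix

def assign_objects(sounds):
    """Split sounds into dynamic objects and bed-mixed sounds.

    sounds: list of (name, priority) pairs, higher priority = more
    important. The highest-priority sounds take the dynamic object
    slots; the rest are mixed into the bed. Purely a sketch -- real
    engines would also weigh distance, audibility, category, etc.
    """
    dynamic_slots = MAX_OBJECTS - BED_OBJECTS
    ranked = sorted(sounds, key=lambda s: s[1], reverse=True)
    dynamic = [name for name, _ in ranked[:dynamic_slots]]
    bed = [name for name, _ in ranked[dynamic_slots:]]
    return dynamic, bed
```

An engine that already maintains a voice-prioritization list could reuse that ranking directly, which is why the article notes object-audio management is easy to bolt onto an existing system.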
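The ingestion point above amounts to reading a container that pairs PCM audio with descriptive metadata and passing both through untouched. The chunk layout below (a magic number, metadata length, PCM length, then the two payloads) is entirely hypothetical; the real Dolby Atmos master format is defined by Dolby's tools, and an engine would normally use the platform SDK's reader rather than parsing it by hand.

```python
import io
import struct

def read_object_audio_asset(stream):
    """Sketch of ingesting a pre-mixed object-audio asset.

    Assumed (hypothetical) layout: 4-byte magic, two little-endian
    uint32 lengths, then a metadata blob and an interleaved PCM blob.
    The metadata is passed through opaquely to the platform's 3D
    audio API; the engine does not interpret it.
    """
    magic, meta_len, pcm_len = struct.unpack("<4sII", stream.read(12))
    if magic != b"OBJA":  # hypothetical magic number
        raise ValueError("not an object-audio asset")
    metadata = stream.read(meta_len)  # positional metadata, pass-through
    pcm = stream.read(pcm_len)        # PCM payload for the system mixer
    return metadata, pcm
```

The key design point is that the engine treats the metadata as opaque: it only needs to deliver it, alongside the PCM, to the platform's 3D audio API for packaging into the bitstream.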