Dev Diary 29 – Content complete
The film is almost finished!
As of August 2024, all final creature animation has been completed, and the effects animation is finished as well.
Aside from final testing and optimizing performance for VR, our last step is getting the sound mixed and fine-tuned, and we wanted to share a few tidbits about the process!
In Unreal Engine, organizing sounds into different “Classes” makes mixing much easier. For a traditional film, the audio is linear and fixed, with all sounds carefully arranged in a stereo or surround environment. The viewer’s perspective is static, so the mix is designed to sound consistent from one fixed point.
In a non-prerendered VR film, however, sound is dynamic, adjusting in real-time based on the viewer’s movement and orientation. Since there’s no single static mix, the process is much more complex, with audio continuously adapting to the viewer’s focus and perspective.
Sound is also more critical in VR because it helps direct the viewer’s attention. With the freedom to look anywhere at any time, important narrative elements can easily be missed. Using sound cues to guide focus ensures the audience stays engaged with key moments.
Zephyr Sounds
Check out a few of the sound elements used for the floating zephyr creatures that Octavia and Heulfryn encounter on the planet!
Some of the sound design has been done by Joel Benjamin, but the majority of it, including the foley, has been done at Noisefloor, LTD.
Here’s a post by Katie Waters at Noisefloor, LTD.
Recently, both Bryen Hensley, the Post Sound Supervisor at Noisefloor LTD, and Katie were featured in an online video entry for Encyclopedia Britannica on The Art of Foley! (Included in the video is Joel Benjamin’s previous film, HIBERNATION.)
Sound implementation
Implementing the sounds in a way that was efficient and also performant turned out to be a tremendous challenge.
Because film animators typically work at 24 fps (24 frames per second, or 24 different poses of a character per second), we decided early on to stick with that framerate. Unfortunately, because many factors can create motion sickness, including playback framerate, the film itself was targeting 90 frames per second. Unreal Engine can essentially convert, or interpret, an animation created at 24 fps up to a playback speed of 90 fps without much trouble (though animators will all notice it!), so that part wasn’t the issue. The challenge was that all of the timing for dialogue, foley, and sound effects was based on the 24 fps animation asset.
This meant some serious conversion!
The first step was to look at the timeline in Adobe Audition (used at Electric Beard for audio editing). If the timeline read 14:29.72, that meant that in Unreal Engine I needed to find 13 minutes (there was a 1-minute offset due to the text at the start of the film), 29 seconds, and some frames. Because the Unreal Engine framerate was 90 fps, I had to multiply .72 seconds by 90 frames per second to get the frame number.
Finding the true time (e.g. 14 minutes, 29.72 seconds)
The handy calculator. The most exciting blog post image I’ve ever put in.
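As a sketch of the calculator math described above, here’s the Audition-to-Unreal conversion in a few lines of Python. The function name and the 1-minute default offset are illustrative, not part of the project’s actual tooling:

```python
# Hedged sketch: convert an Adobe Audition timestamp (minutes + seconds)
# into a position on the 90 fps Unreal Engine timeline, accounting for
# the 1-minute title-card offset mentioned in the post.

def audition_to_unreal(minutes, seconds, offset_minutes=1, fps=90):
    """Return (minutes, whole_seconds, frame) on the Unreal timeline."""
    unreal_minutes = minutes - offset_minutes        # opening-text offset
    whole_seconds = int(seconds)
    frame = round((seconds - whole_seconds) * fps)   # fractional seconds -> frames
    return unreal_minutes, whole_seconds, frame

# Audition 14:29.72 -> roughly 13 m 29 s, frame 65 on the 90 fps timeline
print(audition_to_unreal(14, 29.72))
```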
Over in Unreal, I’d go to the proper point in time (13m 29s 60f, for example).
Because things like dialogue or character foley sound effects were being attached directly to the animation assets, I would then select the character’s animation asset to see the current frame number (remember, it was animated at 24 fps, not 90!) and find where the sound should land INSIDE that animation asset. COMPLICATED! (And probably a really stupid way of approaching this process!)
Inside the animation asset for that character, I could locate the same frame number, then add a notify event triggering that exact sound effect, making sure it was attached to the right part of the character (dialogue, e.g., would get attached to the jaw bone, footstep foley sfx would get attached to each foot as it took a step, etc.).
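The last hop described above, from a time measured inside the animation to the 24 fps frame where the notify goes, is simple multiplication. A minimal sketch (the function name is my own, not an Unreal API):

```python
# Hedged sketch: given a time in seconds measured from the start of a
# character's animation asset, find the 24 fps frame where a notify
# event (dialogue, footstep foley, etc.) should be placed.

ANIM_FPS = 24  # the rate the animators worked at

def notify_frame(seconds_into_animation):
    """24 fps frame index inside the animation asset for a sound notify."""
    return round(seconds_into_animation * ANIM_FPS)

# A sound 3.5 s into an animation lands on frame 84 of the 24 fps asset
print(notify_frame(3.5))
```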
Below are a couple more videos from Katie, working on sound for Max Q at Noisefloor, LTD!