Wednesday 18th October 2023, 12:30–13:30. This seminar will be held on campus in Peel 338; a Teams link is also available for those joining remotely.
Abstract:
Mixing sound for the real-time broadcast of live sports and entertainment is very difficult: an engineer must constantly follow the action, opening the right microphones and adapting the mix to a rapidly evolving and unpredictable environment. At Salsa Sound we have addressed this problem by using audio pattern recognition and machine learning to detect when key sounds happen, algorithmically understand the environment, and compose an audio mix accordingly, ensuring that all of the important sounds are included and that viewers get the full narrative of the event.
In this seminar we take a deep dive into our MIXaiR software, highlighting the foundational algorithms behind its audio event detection and automixing, and showing how it can be used to create immersive content for Dolby Atmos and/or personalised, interactive sound with MPEG-H.
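To give a rough flavour of the detect-then-adapt structure the abstract describes, the sketch below shows a deliberately simplified event-driven automix in Python: a short-term energy threshold stands in for the trained event detector, and a detected event boosts that channel's gain in the output mix. This is an illustrative toy under stated assumptions, not MIXaiR's actual algorithm; the frame size, threshold and boost values are all made up for the example.

```python
# Toy event-driven automix (illustration only, not Salsa Sound's MIXaiR):
# flag high-energy frames on an effects channel with a simple short-term
# energy threshold, then raise that channel's gain in the mix so the
# detected sound is audible over the ambience bed.
import numpy as np

def detect_events(signal: np.ndarray, sr: int, frame_ms: float = 20.0,
                  threshold_db: float = -30.0) -> np.ndarray:
    """Return one boolean per frame: True where frame RMS exceeds the threshold."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1) + 1e-12)
    return 20 * np.log10(rms) > threshold_db

def automix(ambience: np.ndarray, effects: np.ndarray, sr: int,
            boost_db: float = 6.0, frame_ms: float = 20.0) -> np.ndarray:
    """Mix two aligned mono channels, boosting the effects channel on detected events."""
    flags = detect_events(effects, sr, frame_ms)
    frame_len = int(sr * frame_ms / 1000)
    gain = np.ones(len(effects))
    for i, active in enumerate(flags):
        if active:
            gain[i * frame_len:(i + 1) * frame_len] = 10 ** (boost_db / 20)
    # Smooth the gain curve so frame-boundary jumps don't cause audible clicks.
    kernel = np.ones(frame_len) / frame_len
    gain = np.convolve(gain, kernel, mode="same")
    n = min(len(ambience), len(effects))
    return ambience[:n] + gain[:n] * effects[:n]

if __name__ == "__main__":
    sr = 48_000
    t = np.linspace(0, 2, 2 * sr, endpoint=False)
    ambience = 0.05 * np.random.default_rng(0).standard_normal(len(t))  # crowd-like bed
    effects = np.zeros(len(t))
    effects[sr // 2 : sr // 2 + 2000] = 0.5 * np.sin(2 * np.pi * 440 * t[:2000])  # a "ball kick"
    mix = automix(ambience, effects, sr)
    print(f"mixed {len(mix)} samples; peak level {np.max(np.abs(mix)):.3f}")
```

In a production system the threshold detector would be replaced by a trained audio classifier, but the overall shape, detect the key sounds and adapt the mix gains around them, is the idea the talk explores.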
Rob Oldfield has worked in the audio/broadcast industry for over 17 years and gained his PhD in object-based, immersive audio from the University of Salford in 2013. He's passionate about innovation, sports and broadcast. Rob is co-founder of Salsa Sound, who have developed a patented AI engine which automates and enhances audio mixing for live sports broadcast. Salsa's technology improves mix quality, consistency and production efficiency.