A Conversation article by Salford researcher Lauren Ward about some of our work on accessible TV sound.
Announcing a call for papers for our workshop at TVX2017.
The workshop title is In-Programme Personalisation for Broadcast, and we would very much like to see submissions relating to broadcast accessibility, in addition to other areas of personalisation. Full details at the following link:
The growing demand for personalisation provides endless research, development and innovation opportunities for academics and industry. Personalisation is usually interpreted as the generation of personalised playlists, programme guides, product placement and advertising for viewers. However, the notion explored in this workshop is the personalisation of audio, video and data elements within the broadcast programme itself, which we are calling In-Programme Personalisation.
Consequently, the IPP4B workshop focusses on the automatic personalisation of streamed content. The likely technology to support these features is object-based media, where audio, video and other elements may be placed into existing media and rendered for consumption by the end viewer.
Also expect a call in the next day or so for the TVX2017 doctoral consortium for PhD researchers working in interactive TV and related areas.
IBC 2015 Demonstration of Object-Based Clean Audio
The problems of hearing impaired people watching TV have been well documented of late. Loud music, background noise and other factors can ruin the enjoyment of TV for many people with hearing loss – around 10 million people in the UK according to Action on Hearing Loss.
In previous research, funded by the ITC and Ofcom, I looked at solutions that took advantage of the (then) recent introduction of 5.1 surround sound broadcast. Some of this work ended up in broadcast standards and is being used by broadcasters. Now, emerging audio standards are opening the door to much greater improvements in TV sound for hearing impaired people, and for many others too.
I’ve written about some of this work before: a recent blog post described our journal article in the Journal of the Audio Engineering Society, where my colleague Rob Oldfield and I picked up where my PhD left off and looked at how we could improve TV sound for hearing impaired people by using features of emerging object-based audio formats. In object-based audio, all component parts of a sound scene are broadcast separately and are combined at the set top box based on metadata contained in the broadcast transmission. This means that speech, and other elements important to understanding the narrative, can be treated differently from background sound (such as music, noise etc.).
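To make the idea concrete, here is a minimal sketch of metadata-driven rendering at the receiver. The object categories, field names and gain values are illustrative assumptions for this post, not the schema of any actual broadcast standard: the point is simply that each object carries metadata, so a "clean audio" preset can attenuate background objects relative to dialogue at mix time.

```python
from dataclasses import dataclass

@dataclass
class AudioObject:
    name: str
    samples: list[float]   # mono PCM samples for this object
    category: str          # metadata label, e.g. "speech" or "background"

def render(objects, speech_gain=1.0, background_gain=1.0):
    """Mix audio objects into one output, applying per-category gains.

    A hearing-impaired ("clean audio") preset might set
    background_gain < 1.0 to attenuate music and effects
    relative to dialogue.
    """
    length = max(len(obj.samples) for obj in objects)
    mix = [0.0] * length
    for obj in objects:
        gain = speech_gain if obj.category == "speech" else background_gain
        for i, sample in enumerate(obj.samples):
            mix[i] += gain * sample
    return mix

dialogue = AudioObject("dialogue", [1.0, 1.0], "speech")
music = AudioObject("music", [0.5, 0.5], "background")

default_mix = render([dialogue, music])                      # broadcaster's default balance
clean_mix = render([dialogue, music], background_gain=0.5)   # background halved for clarity
```

Because the objects only meet at the set top box, the same transmission can serve both mixes; the viewer's preference is just a different set of gains applied to the same metadata.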
I’ve just returned from IBC in Amsterdam where we’ve been demonstrating some University of Salford research outputs on object-based clean audio with DTS, a key player in object-based audio developments.
Object-based Clean Audio at IBC 2015
Last week we showed the results of our recent collaboration with DTS, presenting personalised TV audio …
Update: I just uploaded a new journal article published last month in the Journal of the Audio Engineering Society. Happily the University of Salford paid for it to be open access as part of their …