A couple of years ago I had the pleasure of supervising two of our talented students on the BSc Professional Sound and Video Technology course (Liam Funnel and Isabel Garriock) as they worked on an interesting project on media access requirements for people with dementia. I'm very happy to (rather belatedly) announce the publication of their combined work in the Journal of Enabling Technology – the article is open access and can be downloaded here. Many thanks also to Professor Tracey Williamson, now at the University of Worcester's Association for Dementia Studies, who provided invaluable guidance and made this project possible.
New blog post and link to the open access JAES article here. Or click on the pic.
For a long time now, Ofcom and broadcasters have received complaints that speech on TV can be difficult, or impossible, to understand. The problem is of course much worse for viewers with even quite mild hearing loss. The reasons are varied and well described in a recent Conversation article from one of our researchers, Lauren Ward. Causes include unfamiliar accents, unclear speech from actors, excessive background sound effects or music, and even occasionally badly recorded location audio.
Interesting accessible media conference coming up in York from the Enhancing Audio Description project.
Salford researcher Lauren Ward speaking at AES event in York next week.
Conversation article about some of our work on accessible TV sound from Salford researcher Lauren Ward
Action on Hearing Loss blog post about our visit there to discuss accessible audio research
For PhD students working on interaction, TV and online media
Deadline extensions reminder! The deadline for submitting to the TVX Doctoral Consortium has been extended to March 30th!
Why not take this week to plan and start writing your proposal for an awesome PhD work update – a 2-4 page paper by 30th March, with a poster on acceptance.
Deadlines for TVX in Industry, Demo and Work in Progress contributions have also been extended to March 30th.
More info at https://tvx.acm.org/2017/participation/
Announcing a call for papers for our workshop at TVX2017.
The workshop title is In-Programme Personalisation for Broadcast and we would very much like to see submissions relating to broadcast accessibility in addition to other areas of personalisation. Full details at the following link:
The growing demand for personalisation provides endless research, development and innovation opportunities for academia and industry. Personalisation is usually interpreted as the generation of personalised playlists, programme guides, product placement and advertising for viewers. However, the notion explored in this workshop is the personalisation of audio, video and data elements within the broadcast programme itself, which we are calling In-Programme Personalisation.
Consequently, the IPP4B workshop focusses on the automatic personalisation of streamed content. The technology likely to support these features is object-based media, in which audio, video and other elements may be placed into existing media and rendered for consumption by the end viewer.
Also expect a call in the next day or so for the TVX2017 doctoral consortium for PhD researchers working in interactive TV and related areas.
IBC 2015 Demonstration of Object Based Clean Audio
The problems of hearing impaired people watching TV have been well documented of late. Loud music, background noise and other factors can ruin the enjoyment of TV for many people with hearing loss – around 10 million people in the UK according to Action on Hearing Loss.
In previous research funded by the ITC and Ofcom I looked at solutions that took advantage of the (then) recent introduction of 5.1 surround sound broadcast. Some of this ended up in broadcast standards and is being used by broadcasters. Now, emerging audio standards are opening the door to much greater improvements in TV sound for hearing impaired people, and for many others too.
I've written about some of this work before: a recent blog post described our article in the Journal of the Audio Engineering Society, where my colleague Rob Oldfield and I picked up where my PhD left off and looked at how we could improve TV sound for hearing impaired people using features of emerging object-based audio formats. In object-based audio, all component parts of a sound scene are broadcast separately and combined at the set top box based on metadata contained in the broadcast transmission. This means that speech, and other elements important to understanding the narrative, can be treated differently from background sound (such as music and noise).
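To make the idea concrete, here is a minimal sketch of how an object-based renderer might apply a user-controlled speech boost at the receiver. This is purely illustrative: the object structure, metadata fields and function names are my assumptions, not any broadcast standard or the method from the article.

```python
# Illustrative sketch of object-based "clean audio" rendering.
# All names and metadata fields here are hypothetical, not from a standard.

def db_to_gain(db):
    """Convert a level change in decibels to a linear gain factor."""
    return 10 ** (db / 20)

def render_scene(objects, speech_boost_db=0.0):
    """Mix audio objects into one channel, boosting objects whose
    metadata tags them as speech (or otherwise narratively important)."""
    length = max(len(obj["samples"]) for obj in objects)
    mix = [0.0] * length
    for obj in objects:
        gain = obj["gain"]  # per-object gain carried as metadata
        if obj.get("is_speech"):
            gain *= db_to_gain(speech_boost_db)  # viewer's accessibility setting
        for i, sample in enumerate(obj["samples"]):
            mix[i] += gain * sample
    return mix

# A toy scene: one dialogue object and one background music object.
scene = [
    {"name": "dialogue", "is_speech": True,  "gain": 1.0, "samples": [0.1, 0.2, 0.1]},
    {"name": "music",    "is_speech": False, "gain": 1.0, "samples": [0.3, 0.3, 0.3]},
]

default_mix = render_scene(scene)                       # broadcaster's default balance
clean_mix = render_scene(scene, speech_boost_db=6.0)    # accessible mix, speech +6 dB
```

Because the speech and background arrive as separate objects, the boost changes only the dialogue's contribution to the final mix – something impossible with a pre-mixed broadcast channel.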
I’ve just returned from IBC in Amsterdam where we’ve been demonstrating some University of Salford research outputs on object-based clean audio with DTS, a key player in object-based audio developments.
Object-based Clean Audio at IBC 2015
Last week we showed the results of our recent collaboration with DTS, presenting personalised TV audio and …
Update: I just uploaded a new journal article published last month in the Journal of the Audio Engineering Society. Happily the University of Salford paid for it to be open access as part of their …