Posts about: Projects

Paper published in the Journal of the Audio Engineering Society: assessment of the quality of distorted audio

7 October 2015

Paper published in the Journal of the Audio Engineering Society by the Good Recording Project team.

For field recordings and user-generated content recorded on phones, tablets, and other mobile devices, poor audio quality arises in part from nonlinear distortions caused by clipping and limiting at pre-amplification stages and by dynamic range control. Based on the Hearing Aid Sound Quality Index (HASQI), a single-ended method to quantify perceived audio quality in the presence of nonlinear distortions has been developed. Perceptual tests on music and soundscapes were carried out to validate the method, yielding single-ended estimates within ±0.19 of HASQI on a quality range from 0.0 to 1.0. HASQI has also been shown to predict quality degradations for processes other than nonlinear distortions, including additive noise, linear filtering, and spectral changes, so the current model for nonlinear distortion assessment could be expanded to cover these other causes of degradation.

To go with the publications, the authors have also released a program: if you have audio that you suspect may be degraded by amplitude-clipping distortion, the program can detect the distorted regions and provide a perceptual weighting. Please visit the following link for details on how to acquire the software:
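As a rough illustration of the kind of thing such a detector looks for (a minimal sketch, not the released program: the threshold and minimum run length are made-up values, and the perceptual weighting is omitted entirely), hard clipping leaves runs of consecutive samples stuck at full scale, which are easy to find:

```python
import numpy as np

def find_clipped_regions(x, threshold=0.99, min_run=3):
    """Return (start, end) sample ranges where |x| sits at full scale.

    x is mono audio scaled to [-1, 1]. The threshold and minimum run
    length are illustrative values, not the published algorithm's.
    """
    near_limit = np.abs(x) >= threshold
    regions, start = [], None
    for i, flagged in enumerate(near_limit):
        if flagged and start is None:
            start = i
        elif not flagged and start is not None:
            if i - start >= min_run:
                regions.append((start, i))
            start = None
    if start is not None and len(x) - start >= min_run:
        regions.append((start, len(x)))
    return regions

# A 100 Hz tone, and the same tone hard-clipped and renormalised
fs = 8000
t = np.arange(fs) / fs
clean = 0.9 * np.sin(2 * np.pi * 100 * t)
clipped = np.clip(clean, -0.5, 0.5) / 0.5   # flat tops now sit at +/-1

print(len(find_clipped_regions(clean)))     # 0: nothing near full scale
print(len(find_clipped_regions(clipped)))   # one region per half-cycle
```

A real detector also has to cope with clipping at unknown levels and with limiting, which rounds the peaks rather than flattening them; that is where the harder work in the published method lies.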



Salford Acoustics Research and Arup co-host workshop on the use of numerical modelling and auralisation in built environment design

4 September 2015

The SoundLab at Arup’s Manchester Office

Prof Yiu Lam and Dr Jonathan Hargreaves have spent the last three years collaborating with Steve Langdon at the University of Reading and with industrial partners Arup and the BBC on an EPSRC-funded project developing new computational acoustics algorithms aimed at auralisation applications. As a culmination of this, they are organising a two-day workshop to discuss the latest developments, and future research challenges and priorities, in this field, specifically its application to built environment design consultation. The workshop will be hosted by Arup at their Manchester office and will make use of their renowned SoundLab facility for audio demos.

For more information on the programme and speakers see

Dr Jonathan Hargreaves represents Salford at EU research workshop in Freising

4 September 2015

MHiVec is an EU FP7 IAPP project applying a new modelling method called Dynamic Energy Analysis (DEA) to structural and acoustic modelling of vehicle NVH at mid-to-high frequencies. The method works by transferring distributions of directional wave energy between elements of a discretised mesh of a sub-system boundary, be that an acoustic cavity or a structural element (e.g. a metal plate), and then solving for the steady-state intensities. It therefore has much in common with some late-time acoustic reverberation models and SEA but, like SEA, can be applied to a wider class of vibrating systems.
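If the directional dependence is stripped away, the steady-state step amounts to a small linear solve. The toy below (three boundary elements with made-up transfer fractions; nothing to do with the actual MHiVec code) shows the shape of that energy balance:

```python
import numpy as np

# A toy steady-state energy balance, e = T e + p: T moves fractions of
# energy between three boundary elements (invented values with some
# absorption, so the balance converges) and p injects unit power at
# element 0. Solving (I - T) e = p gives the steady-state energies.
T = np.array([[0.0, 0.4, 0.3],
              [0.4, 0.0, 0.3],
              [0.3, 0.3, 0.0]])
p = np.array([1.0, 0.0, 0.0])

e = np.linalg.solve(np.eye(3) - T, p)
print(e)   # element 0, where the power is injected, holds the most energy
```

The real method carries a distribution of energy over direction as well as position on each element, which is what lets it capture non-diffuse fields that plain SEA cannot.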

The MHiVec team organised a workshop in Freising, Germany, at the start of September 2015. Dr Jonathan Hargreaves from the ARC was an invited speaker and gave a talk on synergies between BEM and beam-tracing methods. In return, we are lucky to have MHiVec PI Dr David Chappell from NTU attending and presenting at the computational acoustics and auralisation workshop we are organising in collaboration with Arup on the 22nd and 23rd September, where he will speak on the application of DEA to room acoustics in 3D.

Experiments in object based audio

1 December 2014

To cope with the volume of information with which our senses are constantly bombarded, our brains utilise a variety of categorisation strategies to reduce the amount of data they have to process. I’m in the process of setting up some exciting experiments in the acoustics labs at Salford to investigate how categorisation is utilised when our brains process complex acoustic information. I’ll be using methods developed in cognitive psychology to determine how listeners categorise individual sounds in different types of broadcast audio material.

The application of this work is in object based audio, which is the future of broadcast audio. Traditionally, audio content is produced in such a way that the channels of audio information are mapped to a specific loudspeaker layout, such as stereo or 5.1 surround. The limitation of this approach is that the experience of the listener is severely impaired if the reproduction loudspeaker layout does not match the layout for which the audio was produced. Object based audio gets around this by sending each individual audio object (this may be a character’s dialogue, Foley effects, or music) along with information about the object’s position in space and time. Using this information at the receiving end, the audio can be reconstructed in a way that is optimal for the reproduction system, be that headphones, a tablet, or a cinema system. This approach also opens up possibilities for the listener at home to interact with the audio content, such as choosing which side of the crowd you hear in a football match; this has been explored recently by the BBC.
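As a sketch of the idea (the class name, fields, and simple cosine panning law below are all illustrative inventions, not any broadcast metadata standard or real renderer), the same scene of objects plus metadata can be rendered to whatever layout the listener happens to have:

```python
import math
from dataclasses import dataclass

@dataclass
class AudioObject:
    """One element of an object-based scene: the audio itself plus
    metadata saying where it belongs. Field names are illustrative."""
    name: str
    samples: list       # mono audio for this object
    azimuth_deg: float  # intended direction, 0 = straight ahead

def render(objects, speaker_azimuths_deg):
    """Mix objects into feeds for an arbitrary loudspeaker layout,
    using a crude cosine pan as a stand-in for a real renderer."""
    n = len(objects[0].samples)
    feeds = [[0.0] * n for _ in speaker_azimuths_deg]
    for obj in objects:
        # Gain falls off with angular distance from object to speaker,
        # normalised so overall level is layout-independent.
        gains = [max(0.0, math.cos(math.radians(obj.azimuth_deg - az)))
                 for az in speaker_azimuths_deg]
        total = sum(gains) or 1.0
        for s, g in enumerate(gains):
            for i, x in enumerate(obj.samples):
                feeds[s][i] += (g / total) * x
    return feeds

# The same scene rendered to stereo speakers and to a single speaker
scene = [AudioObject("dialogue", [0.5, 0.5], 0.0),
         AudioObject("crowd-left", [0.2, 0.2], -60.0)]
stereo = render(scene, [-30.0, 30.0])
mono = render(scene, [0.0])
print(stereo[0][0], stereo[1][0])  # the crowd is weighted to the left feed
print(mono[0][0])                  # everything folds down to one feed
```

The point is that the scene, not the speaker feeds, is what gets stored and transmitted; the mapping to loudspeakers happens at the receiving end, so the listener could just as easily mute or re-position an object.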

The results of the experiments I’m about to run at Salford will help us to understand what types of objects we need to represent when we store and transmit object based audio, and will lead to experiments exploring the effects different objects types have on the quality of the listener’s experience. This work is part of the S3A project, which is a five year collaborative project involving Salford, Surrey, and Southampton Universities, and BBC R&D that aims to develop immersive 3D audio systems that work in real environments, such as people’s living rooms. The project’s just getting started, so watch this space for more information!

Dr James Woodcock

Workshop just announced – Boundary and finite element methods for high frequency scattering problems

26 November 2014

High-frequency scattering by a screen

As part of our EPSRC funded project ‘Enhanced Acoustic Modelling for Auralisation using Hybrid Boundary Integral Methods’ we are co-organising a cross-disciplinary workshop on the 15th and 16th of December. This will take place in the applied maths department at the University of Reading, who are collaborating with us on this project. The workshop will feature talks from researchers in applied mathematics, engineering, and acoustics, plus structured discussions of future avenues for cross-disciplinary collaboration. For more information see

Wind noise meter used to improve bird song recognition app.

9 October 2014

Recently we had an email from a company developing a bird song recognizer who were having problems with wind noise corrupting recordings and giving inaccurate results. The company, iSpiny, was interested in using our code for real-time wind noise detection to indicate when high levels of wind noise would cause problems with their algorithm. So while not directly related to audio quality, this shows that our research has wider possible applications. As we understand it, the wind noise detector is now being utilized within the mobile bird song recognizer app. For more information see the following site:

If you are interested in using the algorithm with your own application, there is an offline batch detector here:

as well as a real-time method implemented for iPhone (contact us for details).


Microphone wind noise – paper published in the Journal of the Acoustical Society of America

11 September 2014

Our work on the perception and automated detection of microphone wind noise has been published in the Journal of the Acoustical Society of America. The paper discusses how wind noise is perceived by listeners, and uses this information to form the basis of a wind noise detector/meter for analyzing audio files. You can access the Journal here:

Or, if you don’t have access, the paper will also be available here in the next couple of days:

If you want to run the wind noise detection algorithm, you can do so using the code here:
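For a flavour of how such a detector can work, here is a deliberately crude sketch: wind noise concentrates its energy at low frequencies, so the fraction of a frame's energy below a cutoff is a rough warning sign. The cutoff and signals below are made up for illustration; the features in the published paper are more sophisticated than this.

```python
import numpy as np

def low_frequency_ratio(frame, fs, cutoff_hz=100.0):
    """Fraction of a frame's spectral energy below cutoff_hz.

    Wind noise piles energy up at low frequencies, so a high ratio is
    a crude indicator; real detectors use richer, perceptually
    motivated features.
    """
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    total = spectrum.sum()
    return spectrum[freqs < cutoff_hz].sum() / total if total else 0.0

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)             # energy well above 100 Hz
rng = np.random.default_rng(0)
rumble = np.cumsum(rng.standard_normal(fs))    # random walk: 1/f^2-like
print(low_frequency_ratio(tone, fs))    # near 0
print(low_frequency_ratio(rumble, fs))  # near 1
```

In an app like the bird song recognizer, a ratio like this computed frame-by-frame could simply flag recordings as unreliable when it stays high.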

New paper in AAuA special issue on Auralization and Ambisonics

2 September 2014

Dr Jonathan Hargreaves’ paper ‘An energy interpretation of the Kirchhoff-Helmholtz boundary integral equation and its application to sound field synthesis’, which was selected as one of the 10 best papers at the EAA Joint Symposium on Auralization and Ambisonics in Berlin in April, was published today in Acta Acustica united with Acustica. You can find it via USIR at

New paper in Acta Acustica on human response to railway vibration

15 August 2014
Railway vibration simulator

Railways are a fantastic mode of transport, clean and efficient, they are generally well liked or even loved. However, for those living close to railway lines, noise and vibration can be a real problem. Annoyance and sleep disturbance from railway noise and vibration can lead to long term health effects. This has led to noise and vibration being described as the environmental Achilles’ heel of the European rail network. The Acoustics Research Centre has done a lot of groundbreaking work over the past few years to help to understand the effects railway vibration and noise have on people.

Recently, Dr James Woodcock had a paper published in Acta Acustica on some work he did to understand exactly what it is about railway vibration that causes people to become annoyed. This work involved building a simulator capable of reproducing the types of vibration people living close to railways experience in their homes. We sat people on the simulator and exposed them to pairs of vibration signals that we had previously recorded in people’s homes. By asking people to rate the differences between the pairs of signals, we were able to identify the features of the vibration signals that caused people to be annoyed. It turns out that it’s not just the level of vibration that causes annoyance, but also the duration of the vibration and how the signal varies with time. As it is very expensive to reduce vibration from trains, this work could help to focus mitigation on the specific features of the vibration that cause annoyance, in a cost-effective manner.

You can read more about this work in the paper here.

‘Pipgate’, Radio Four’s Pips, what happened?

22 July 2014

‘The Pips’ are a series of six short tone bursts transmitted on Radio 4. Known as the Greenwich Time Signal, they are intended to mark the start of each hour accurately. They have been transmitted since 1924 and originate from an atomic clock.

On the 21st of July 2014 a listener wrote to the Radio 4 programme ‘PM’ to ask why the pips had been changed. The programme played the offending pips and the originals (here is a link to the programme; the item is at 28m 31s):

Here is an ‘old’ pip:

and a ‘new’ pip:

You may think that the ‘new’ pip sounds harsher. By looking at the waveforms and spectra we can begin to understand what has happened. Here are the waveforms of the two pips:

Waveforms of the two pips


and the two spectra.

Frequency Spectra of the two pips

We can see from the spectra that the ‘new’ pip has additional lines in its spectrum, known as harmonics. Comparing the two waveforms, we can see that the ‘new’ pips appear similar to the older ones except that the peaks of the waveform have been flattened, or ‘clipped’, a little.
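You can verify the effect numerically. The sketch below (using a made-up test tone rather than a real pip) clips a sine wave and shows the harmonics appearing in its spectrum:

```python
import numpy as np

# Clipping a pure tone creates harmonics: a quick numerical check.
fs = 8000
f0 = 250                              # divides fs, so FFT bins line up exactly
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * f0 * t)
clipped = np.clip(tone, -0.5, 0.5)    # flatten the peaks, as in the 'new' pip

# 1 s of audio gives 1 Hz resolution, so bin index equals frequency in Hz
spectrum = np.abs(np.fft.rfft(clipped))
# Symmetric clipping is an odd nonlinearity, so the new spectral lines
# sit at odd multiples of the fundamental (750 Hz, 1250 Hz, ...).
print(spectrum[3 * f0])   # third harmonic: large
print(spectrum[2 * f0])   # second harmonic: essentially zero
```

A real pip is clipped asymmetrically or only slightly, which also produces even harmonics; the general picture of extra spectral lines is the same.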

This clipping is a form of distortion. It occurs when the gain applied to the signal is too great, or when there is a fault in a preamp, so the amplifier is no longer able to properly replicate the signal at its input. We can clearly hear the difference between the two signals, and according to the concerned listener (and his cat) it has a very negative impact on the sound quality. Denis Nolan, the network manager for Radio 4, identified the fault as being due to a particular desk the signal was going through.

In our project we are writing an algorithm to perform a similar function to the upset listener. We don’t mean that our algorithm will write pithy letters to Eddie Mair; we want to build an algorithm that automatically detects when something like this has gone wrong and the sound is being distorted. The way we are going about this is to simulate many sorts of fault on many different types of sound, and then look for ‘features’ of the audio which are strongly dependent on these faults. We can then build automated systems that look for occurrences of these features to locate faults, and estimate how bad the error is from the features themselves.