Welcome to the Acoustics Research Centre research blog. Here you will find regular updates on our research activity including publications, conference presentations, projects and more. For an overview of our key research areas, or for more information on our taught courses, facilities, commercial work and resources for schools, please visit our static site at www.acoustics.salford.ac.uk.
All the best & we hope you enjoy your visit!
[WARNING: Contains very descriptive language and gory pictures! I hope this might be of some help for someone unfortunate enough to suffer the same ordeal.]
Barotrauma – An unfortunate accident
It was a typical flight from one European city to another. As the plane started its descent I was engaged in conversation with the passenger sat next to me. I recall being aware that I couldn’t equalise one of my ears as the plane headed towards the ground. I remember this because it was unusual for me to have problems equalising the pressure difference caused by flying.
Most of us live mainly on the ground, under the pressure of our atmosphere. But, as anyone who has been on a flight, driven up a mountain or done some diving will know, as we move up and down the Earth’s atmosphere, we feel the differences in pressure mainly through our eardrums. The eardrum is a thin membrane of tissue, or skin, which separates the auditory canal, through which sound waves in air travel, from the middle ear (tympanic cavity), a small chamber containing the ossicles and connected to the back of the throat by the eustachian tube.
<diagram of ear> http://www.macroevolution.net/diagram-of-the-ear.html
The dullness, or sometimes pain, we feel as a plane starts plunging towards the ground (planes go up much slower than they come down because you have to burn expensive fuel on the ascent), is due to the pressure increasing outside the eardrum as we move deeper into the bottom of the atmospheric layer (more atmosphere above us, more pressure). Most people automatically equalise this pressure by moving their jaw, swallowing or pressurising their nasal cavity and pinching their nose. This increases the pressure inside the throat and nose cavity and forces the eustachian tube to open allowing the pressure behind the ear drum to equalise to that in front of it.
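The pressures involved can be put into rough numbers with a back-of-envelope sketch. This is a hedged illustration, not a clinical model: it uses the isothermal barometric formula with an assumed scale height, and assumes a cabin pressurised to an equivalent altitude of about 2,400 m (a typical figure, not one from the flight in question).

```python
import math

P0 = 101_325.0  # sea-level pressure, Pa
H = 8_435.0     # assumed atmospheric scale height, m

def pressure_at(altitude_m):
    """Static pressure from the isothermal barometric approximation."""
    return P0 * math.exp(-altitude_m / H)

# Airliner cabins are typically pressurised to an equivalent altitude
# of roughly 2,400 m (an assumption for illustration).
cabin = pressure_at(2_400)
ground = pressure_at(0)

# If the middle ear stays stuck at cruise-cabin pressure while the
# cabin returns to ground pressure, the eardrum sees this difference:
delta = ground - cabin
print(f"Pressure across the eardrum: {delta / 1000:.1f} kPa")
```

That works out at roughly a quarter of an atmosphere pressing on a membrane a fraction of a millimetre thick, which goes some way to explaining why an unequalised descent hurts.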
As the plane I was in kept descending I realised something was wrong because my left ear was not equalising. It wasn’t exactly painful but I could feel something wasn’t right. I kept on trying to equalise even on the ground but nothing was happening. I couldn’t ‘pop’ my ear! At this point I knew that something was wrong but I wasn’t in pain.
Slowly, some pain started to creep in and I felt more and more pressure on the ear. A couple of hours later, as I lay in bed trying to sleep, the pain was excruciating and, short of chopping my head off or puncturing the ear with a pin, I simply couldn’t do anything to ease it. I drove to A&E to seek help, as this was probably one of the worst pains I had ever felt and I was very worried that my eardrum would simply burst open under the increased pressure. As I walked into A&E I believed what I needed was a controlled perforation of the eardrum to ease the pressure. This is routinely done for young children, who often suffer from glue ear, a condition caused by mucus in the middle ear that cannot clear through the eustachian tube, which is tiny and often bent in smaller, developing bodies.
In a jam-packed A&E ward, in such pain that I couldn’t sit down, I kept pacing up and down the hospital corridors, moving my jaw and swallowing in an attempt to ease off the pressure. At some point I started getting some crackles and pops in the ear and felt the pressure easing a little, although I was still in a lot of pain and, by this time, a loud buzz (tinnitus) had set in. I realised I wasn’t going to see a specialist (an ENT doctor), as they are not there overnight, and all the A&E doctors could do was give me some painkillers and send me home. So I went home, took some painkillers and tried to go to sleep.
The next morning I went back to A&E and an ENT saw me. At this stage they told me that they could see some damage (possibly some blood behind the eardrum). I had suffered a barotrauma, a condition which is common to divers when something goes wrong in a dive and they resurface too quickly without going through the appropriate decompression stages, or, as happened to me, when flying without proper decompression. There was no mention of any damage to the eardrum. The doctor prescribed a course of antibiotics to clear any infection (surprise, surprise!) and told me I couldn’t fly for at least a week, until I could see them again to get an ‘all clear’. I was literally grounded for at least a week.
Weird and Wonderful Noises
During this week, some really interesting things happened. Of course I struggled with symptoms of deafness. Understanding people, particularly in noisy places, was difficult. At one point, as a waiter brought me a tea:
– Here’s your ***** peony tea, sir.
– Black? But I have asked for a white tea!
– Yes Sir, your WHITE peony tea. (giggles from my friends sat next to me).
A true example of black and white confusion!
At other times I could clearly see/hear a mismatch between the true (visual) location of a source and its corresponding auditory position. My localisation cues were all wrong. This gave me a bit of an insight (insound?) into what it is to live with partial hearing loss.
From the day of the accident I started suffering from really loud tinnitus, mainly at high frequencies. When it was strongest, I managed to measure its main frequency component by matching it to a tone from a sine wave generator (on my smartphone! I don’t carry tone generators around!). It was about 5 kHz. This changed progressively and receded from the very loud tones within a week or two. At times it was mainly high-frequency noise with some prominent components. After a while, a low-frequency rumble joined in. That was a bit more annoying. My best description of it was that it felt heavy. This rumble would sometimes stop momentarily and then come back again.
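If you want to try the same tone-matching trick, a reference tone is easy to generate. The sketch below uses only the Python standard library to write a mono WAV file containing a pure sine; the filename and parameters are arbitrary choices for illustration, not anything from the post.

```python
import math
import struct
import wave

def write_tone(filename, freq_hz=5000.0, duration_s=1.0, rate=44100, amp=0.3):
    """Write a mono 16-bit WAV file containing a pure sine tone."""
    n = int(duration_s * rate)
    frames = b"".join(
        struct.pack("<h", int(amp * 32767 * math.sin(2 * math.pi * freq_hz * i / rate)))
        for i in range(n)
    )
    with wave.open(filename, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(frames)

# A 5 kHz tone, close to the tinnitus frequency described above:
write_tone("match_tone.wav", freq_hz=5000.0)
```

Sweeping `freq_hz` up and down until the tone sits "on top of" the tinnitus is essentially what the smartphone app was doing.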
Perhaps the most remarkable, albeit perturbing, effect was what I can only describe as a chorus effect on the sound in the left ear. Something very similar to a chorus pedal effect often used in guitar recordings (Pink Floyd is a good example). It was as if there were very short echoes in my left ear, particularly noticeable at mid frequencies, especially from loud, screeching voices. It felt like I had a bubble of liquid in the ear and the sound got trapped there and kept reflecting inside the bubble for a bit. I have not been able to understand what might have caused this effect. Was it the transmission of sound through the ossicles? I will probably never know.
One week later, another ENT sent me for a batch of tests to diagnose the state of my hearing and whether I could fly.
They tested my hearing to check whether there was any internal damage that might prove irrecoverable. They do this by measuring your hearing acuity (audiograms) in two ways: through air conduction, with tones played over headphones, and through bone conduction, with a small vibrating transducer placed on the bone behind the ear.
These are my audiograms at the end of that week:
You can see that, compared to the right ear (OD), the left ear (OE) is showing severe loss, particularly towards the high frequencies. The line is much lower on the right-hand side of the audiogram. The good news is that the left inner ear seems to be healthy, as the bone conduction levels (the ‘>’ in the graph) are normal, except at the higher frequencies, where the loud tinnitus might be responsible for masking the tones I was supposed to pick out. Tinnitus is thought to arise from hyperactivity of the auditory hair cells responsible for converting vibration into electrical pulses inside the inner ear. Given the trauma to my hearing system, it is not surprising that there was an onset of tinnitus.
They also measured the movement of the eardrum (tympanometry), which is often affected if there is a build-up of fluid inside the middle ear due to infection, or if the eardrum has been perforated. They do this by pressurising the auditory canal from outside, over a range of pressures, whilst driving it with a 226 Hz probe tone and measuring the response of the cavity as the eardrum flexes. A healthy eardrum will show a peak in the response (see the ‘sweep right’ in the diagram below), whilst a perforated or restricted (by fluid) eardrum will show a flat response (see the ‘sweep left’ in the diagram below).
According to these tests, I hadn’t suffered internal ear damage, and the losses appeared recoverable, which was great news. The bad news, however, was that there was severe loss of sensitivity in the left ear, and the eardrum appeared to be blocked, which meant I shouldn’t fly. In the doctor’s words:
– You can try, but it’s going to be extremely painful.
A trip of 3 days, 3,800 km and 4 different trains back to Manchester ensued. I got to see dragons flying across the sky and have croissants in Paris.
On the mend
Back in the UK I went to see my GP who gasped as he looked into my ear:
– You have a perforated eardrum! he said.
– Will it heal? Do I need surgery? Am I going to be deaf? What about the tinnitus? Is it going to go away?
I’m lucky to work in a top centre for Acoustics Research, so we have the right kit to take gory pictures of punctured eardrums. The following picture was taken 2 weeks after the accident and the damage is very visible. The whole eardrum is burst.
With an 18 week wait to see an ENT through the NHS system I booked an appointment with a private doctor. I went in armed with the batch of audiograms, the picture of the punctured eardrum, and another one I took just before going to see the doctor:
After explaining my ordeal, the doctor had a look. He said something like:
– There’s a bit of skin. This is going to be a bit sore, and loud.
He scraped and hoovered inside the ear canal, removing the necrotic remains of my original eardrum (the red patch on the left side of the picture), which were glued to the top of the new, healthy eardrum that had since formed underneath! Apparently, throughout this time, my body had been busy rebuilding a perfectly tuned new eardrum. This took less than 12 days! I left the hospital ecstatic!
Once the doctor removed the old bits of skin, the full eardrum could be seen underneath.
Causes and Outcomes
So, what might have caused the barotrauma? Most likely I had been suffering from some mild congestion, and a bit of a cough, which might have blocked the airways and the eustachian tube. Because of this, as the pressure changed from ground level to cruising altitude (10,000 m), the slow variation allowed the pressure inside and outside the middle ear to equalise. The more abrupt change on the way down caused the outside pressure to compress the eardrum and, as I wasn’t aware of the problem, I didn’t attempt any active equalisation procedure (e.g. blowing through the nose while pinching it, followed by swallowing) until it was too late. By then, the eustachian tube, which might have been slightly blocked by tissue inflammation, was pressed shut by the lower pressure inside the middle ear, effectively creating a vacuum. Once that set in, there was no way of reversing the pressure gradient across the eardrum, apart from actually perforating it.
But, as I arrived on land, the pressure on the eardrum wasn’t that high! At that point I was experiencing only mild discomfort. How did it evolve from that into an excruciating pain?! This is the sinister twist that I found most disturbing. The air that we breathe, which is also found inside the middle ear, behind the eardrum, contains about 78% nitrogen. In our bloodstream, however, the concentration of nitrogen is lower, at around 40%. This means the mucosal lining inside the middle ear will absorb some of the nitrogen in the air, due to the imbalance. This causes a drop in pressure within the middle ear, which we normally equalise throughout the day by swallowing, opening the eustachian tube and allowing the external pressure to equalise with that inside the middle ear. But if the eustachian tube is blocked and pressed shut by a large pressure difference, no amount of swallowing will make it open. This explains why it was only some time after landing that I started to feel the increased pressure across the eardrum causing such strong pain [http://deedee.dbi.udel.edu/MichaelTeixidoMD/pdfs/OtitisMedia.pdf].
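The size of this effect can be sketched with a quick back-of-envelope calculation using the percentages above. This is an upper-bound illustration only: it assumes a sealed, rigid middle-ear cavity at roughly sea-level pressure, and treats the quoted fractions as partial-pressure fractions.

```python
P_ATM = 101.3      # ambient pressure, kPa
F_N2_AIR = 0.78    # nitrogen fraction of the trapped air (from the post)
F_N2_BLOOD = 0.40  # nitrogen fraction in blood quoted in the post

p_n2_air = F_N2_AIR * P_ATM
p_n2_blood = F_N2_BLOOD * P_ATM

# In a sealed, rigid middle ear, absorbing nitrogen until the gas and
# blood tensions matched would lower the cavity pressure by at most:
max_drop = p_n2_air - p_n2_blood
print(f"Maximum pressure drop: {max_drop:.1f} kPa")
```

A sealed cavity slowly losing tens of kilopascals relative to the outside is exactly the creeping "vacuum" described above, and the real drop needn't get anywhere near that bound before it becomes very painful.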
At the end of this ordeal, I am left with a very gentle tinnitus, mainly in the high frequencies. It seems to be receding and I am hoping it will be gone some time soon. A few weeks later I also started getting symptoms of vertigo which, apparently, might be related to this accident and caused by debris in the vestibular canals [https://en.wikipedia.org/wiki/Benign_paroxysmal_positional_vertigo]. This still hasn’t cleared and I’m having to do some exercises[http://www.webmd.com/brain/brandt-daroff-exercise-for-vertigo-16844] to, supposedly, clear the debris.
A final few words of advice if you are going to fly (or dive):
If all this fails, I hope you will feel comforted by the fact that eardrums actually grow back!
On Wednesday 16th November we were lucky enough to get a tour of many of the live performance spaces in the University’s new £55m flagship Arts & Media building, ‘New Adelphi’. The tour was given by Salford alumnus Matt Robinson, now of Sandy Brown Associates LLP, who was involved in the project as an acoustic consultant. In this role he had to specify the variety of acoustic treatments installed in each space, to address both room acoustics and noise ingress and egress, and to liaise with the architect and the construction contractor to ensure that the required performance was achieved.
Our first stop was the basement recording studios. These include a variety of porous and resonant absorbers, to control reverberation time at high frequencies and modal behaviour at low frequencies respectively, all hidden behind the coloured scrim specified by the architect. The back wall features curved devices that offer a mixture of diffusion and absorption at mid-to-high frequencies. The rooms themselves are of room-in-room construction, to provide high sound isolation against noise ingress from the neighbouring live rooms and other control rooms, and from the dance studio above. The live rooms are of a similar construction, with absorptive panels that can be opened and closed and mobile acoustic screens to provide variable acoustics. HVAC in both rooms is via a chilled beam system, with the live rooms having an additional forced ventilation system, the intention being that this is used to quickly change the air in the room in between recording sessions.
The next space we visited was the very impressive New Adelphi Theatre. This is a re-configurable space, capable of working in proscenium-arch or traverse formats. In addition, the rear of the stage has doors that open up to a covered outdoor area (next to Engel’s Beard) to allow for combined outdoor / indoor performances. This is in addition to a large access door leading to the adjacent scenery workshops.
In terms of acoustic treatment, this space had to work for both amplified and un-amplified performances. The solution was to install acoustic diffusers on all the auditorium walls in front of which are heavy drapes. These will give the room a lively but acoustically pleasant character suitable for un-amplified delivery when revealed, or an acoustically dead response when covered, as is usually preferable when working with electroacoustic sound reinforcement. The fronts of the balconies were also designed to be mildly diffusing to avoid strong early reflections.
When introducing this space, Matt talked about how it is often an acoustic consultant’s role to suggest common-sense solutions, e.g. modifying the building layout so that areas of noise creation are not adjacent to noise-sensitive areas. In this case, other factors meant that the Studio Theatre, which will often be used for dance rehearsal and performance, had to be located directly above the recording studio suite. Structure-borne noise from footfall therefore had the potential to be a serious problem, so a four-layer isolation scheme was put in place to mitigate this: first there is the vibration-isolated sprung floor; then there is the structural concrete slab; then there is the room-in-room construction of the studios; then there is their suspended acoustic ceiling. To prevent noise ingress/egress through the walls and ceiling, the Studio Theatre is also of room-in-room construction. The inner shell was not capable of carrying the weight of the lighting grid, so this is hung from the structural slab above, again with vibration isolators to prevent the possibility of vibration from the flown PA system entering the structure. HVAC requirements in this room were considerable due to the expectation of physically demanding performances, so a very capable system was specified. This raises the noise floor of the room under intense use, but in those circumstances this is unlikely to be an issue.
This room is tucked away at one extreme of the 2nd floor, so by virtue of this location it is relatively well protected from other noise sources. Acoustic treatment was therefore mostly concerned with reverberation control, so that boom microphones can pick up dialogue clearly. A track system with heavy drapes was also installed to allow some additional control in this regard. HVAC requirements were quite substantial, however, as the system must cope with the heat created by the studio lighting and is expected to operate continually during filming.
The Atrium is a large, impressive space which would, if left untreated, have had low absorption and a long reverberation time. It was, however, intended to be a mixed-use space – there are even private study booths installed on the first-floor mezzanine – so reverberation needed to be controlled. Acoustic absorption was added as hung baffles, installed on various ceilings and as a trellis above the study areas. These are particularly effective in such spaces as they offer twice the surface area that the same material would if installed on a wall, and the lack of a rigid backing means they remain relatively effective at low frequencies.
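The benefit of double-sided baffles can be illustrated with Sabine’s classic reverberation formula. Every number below is invented purely for illustration; these are not the New Adelphi’s actual figures or treatment quantities.

```python
def sabine_rt(volume_m3, absorption_m2_sabins):
    """Sabine reverberation time estimate: T60 = 0.161 * V / A."""
    return 0.161 * volume_m3 / absorption_m2_sabins

# An illustrative atrium (assumed numbers):
V = 8000.0           # room volume, m^3
A_untreated = 250.0  # total absorption of the bare space, m^2 sabins

# 100 hung baffles of 1.2 m x 0.6 m absorb on BOTH faces, doubling
# the effective area compared with the same panels fixed to a wall.
panel_area = 1.2 * 0.6
alpha = 0.8          # assumed absorption coefficient of the panels
A_baffles = 100 * 2 * panel_area * alpha

print(f"Untreated:    {sabine_rt(V, A_untreated):.1f} s")
print(f"With baffles: {sabine_rt(V, A_untreated + A_baffles):.1f} s")
```

Halving the baffle count (or hanging the same panels flat against a wall, losing one face) would remove half of `A_baffles`, which is exactly the factor-of-two advantage mentioned above.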
Our tour also took in some of the music practice rooms. These feature absorbent ceilings and chilled beam HVAC systems, to address room acoustics and to eliminate the possibility of cross-talk through ductwork respectively. In addition, the percussion practice rooms have absorbent treatment on the walls to reduce the overall SPL by attenuating reflected sound. All rooms look out through the main glass facade of the building, the specification for which is uniform across the building. Since this does not possess sufficient sound-insulating properties on its own, and up-speccing it for the entire building would be cost-prohibitive, these rooms have additional glazing to provide increased isolation. The cavity created includes acoustic absorption around its perimeter to prevent reverberant build-up.
New Adelphi also has a substantial band practice room, but we were not able to see this on this occasion as it was in use.
Last month saw us take the latest version of our SALSA (Spatial Automated Live Sports Audio) software to NAB 2016. This is the second show Rob Oldfield and I have done in collaboration with DTS and Fairlight; last September saw us at IBC in Amsterdam showing our automated sports audio solution working with the MDA open object-based audio format. In April we demonstrated the new version of the University of Salford’s SALSA software.
Our new version goes a step further than automated mixing and can now augment the on-pitch audio (e.g. ball kicks) with pre-produced content, using acoustic signatures derived from the detected sounds to enhance the mix still further. Real-time post-production, if you like.
Results so far sound impressive. It’s not the first time sports audio has been enhanced of course. Sound supervisors and sound designers like Dennis Baxter have been doing this for years. Watching horse racing? That sound you hear isn’t the sound of horses coming round the track, instead it might be a slowed down recording of a buffalo charge. Downhill skiing? Samples played into the mix live from a MIDI keyboard. It all adds to viewers’ engagement in the entertainment of sports broadcast. Some of this is essentially Foley for live sport, the big difference is we are now automating the process in real time. Audio augmented reality for live sport. Expect more updates as we continue to develop the work…
AMS Neve came from a merger of two legendary English audio companies: Neve, famous for high-quality analogue mixing consoles, and AMS (Advanced Music Systems), the Burnley-based audio innovation company. Last night I was fortunate enough to attend an Institution of Mechanical Engineers visit to AMS Neve in Burnley. Feeling something of a fraud (I’m not a mechanical engineer, but the event was very kindly opened to a limited number of non-members), I was warmly welcomed by the IMechE attendees, and also by Mark Crabtree, founder of AMS and Managing Director of AMS Neve.
The last time I’d visited the site, on a visit from college around 25 years ago, AMS were demonstrating their Logic 2 digital mixing console and AudioFile, one of the first hard-disk digital audio editors regularly used in TV post-production. This was around 1990, and it was the first time I’d really understood what could be done with digital audio; at the time these were stunning pieces of technology. The AudioFile used by Editz, where I did a short placement during my college days, was much revered. As well it might be, with its hefty price tag at the time.
During Mark’s presentation… Read more…
In addition to the Object Based Clean Audio demos that Rob Oldfield and I gave at IBC 2015, we were also showcasing the results of a long-running development project in audio for live sports broadcast. Thanks to some internal funding from the University of Salford’s Staff Innovation Challenge competition, and to some great work from Darius Satonger, we have done some further development on research that we started on the EU FP7 FascinatE Project, capturing audio objects from a live football match.
During the development work we spent quite a bit of time in outside broadcast trucks with some of the best mixing engineers around – the skill levels of these guys are impressive, and watching them use the mixing desk faders to follow the ball around the pitch during a football match made us realise the complexity of their job. The aim of our work was to find ways to assist the mixing engineer in creating a great mix for both conventional and object-based broadcast, and to ensure that the transition to object-based broadcast is a painless one by tailoring our additions to existing workflows.
The software we have developed over the last few years works in real time by matching on-pitch sound events to a database of audio object templates, in order to identify the sounds that we want to capture, such as ball kicks and referee whistle blows. Once a sound is identified, the software locates it (typically to within around 50 cm), isolates it as a short-duration audio object and tags it with metadata detailing the type of sound, its location on the pitch and its duration.
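The post doesn’t describe SALSA’s actual matching algorithm, but the general idea of matching incoming audio against a stored template can be sketched with a toy normalised cross-correlation. This is illustrative only: real systems work on spectral features, multiple microphones and noisy signals, none of which this sketch attempts.

```python
import math

def best_template_match(signal, template):
    """Slide a template over a signal; return (score, offset) of the best
    normalised cross-correlation match (a score of 1.0 is a perfect match)."""
    t_norm = math.sqrt(sum(x * x for x in template))
    best_score, best_offset = -1.0, 0
    for offset in range(len(signal) - len(template) + 1):
        window = signal[offset:offset + len(template)]
        w_norm = math.sqrt(sum(x * x for x in window))
        if w_norm == 0.0:
            continue  # silent window, nothing to match against
        score = sum(a * b for a, b in zip(window, template)) / (t_norm * w_norm)
        if score > best_score:
            best_score, best_offset = score, offset
    return best_score, best_offset

# A toy "ball kick" template buried in an otherwise near-silent signal:
template = [0.0, 1.0, -1.0, 0.5]
signal = [0.01] * 10 + template + [0.01] * 10
score, offset = best_template_match(signal, template)
print(offset, round(score, 2))  # offset 10, score 1.0
```

In a real deployment the detection step would then trigger the localisation and metadata-tagging stages described above.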
Once the tagged audio object is created it is used in… Read more…
Paper published in the Journal of the Audio Engineering Society by the Good Recording Project team.
For field recordings and user-generated content recorded on phones, tablets, and other mobile devices, poor audio quality arises in part from nonlinear distortions caused by clipping and limiting at pre-amplification stages and by dynamic range control. Based on the Hearing Aid Sound Quality Index (HASQI), a single-ended method to quantify perceived audio quality in the presence of nonlinear distortions has been developed. Perceptual tests were carried out to validate the method for music and soundscapes; these validations yielded single-ended estimates within ±0.19 of HASQI on a quality range from 0.0 to 1.0. HASQI has also been shown to predict quality degradations for processes other than nonlinear distortions, including additive noise, linear filtering, and spectral changes. By including these other causes of quality degradation, the current model for nonlinear distortion assessment could be expanded.
To go with the publication, the authors have also released a program: if you have some audio that you suspect may be degraded by amplitude-clipping distortions, the program can detect the distorted regions as well as provide a perceptual weighting. Please visit the following link for details on how to acquire the software.
The problems of hearing impaired people watching TV have been well documented of late. Loud music, background noise and other factors can ruin the enjoyment of TV for many people with hearing loss – around 10 million people in the UK according to Action on Hearing Loss.
In previous research funded by the ITC and Ofcom I looked at solutions that took advantage of the (then) recent introduction of 5.1 surround sound broadcast. Some of this ended up in broadcast standards and is being used by broadcasters. Now, emerging audio standards are opening the door to improving TV sound much more for hearing impaired people, and for many others too.
I’ve written about some of this work before; a recent blog post described our journal article in the Journal of the Audio Engineering Society, where my colleague Rob Oldfield and I picked up where my PhD left off and looked at how we could improve TV sound for hearing impaired people by using features of emerging object-based audio formats. In object-based audio, all component parts of a sound scene are broadcast separately and are combined at the set-top box based on metadata contained in the broadcast transmission. This means that speech, and other elements important to understanding the narrative, can be treated differently from background sound (such as music, noise, etc.).
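The principle of combining objects at the set-top box can be sketched in a few lines. The object and profile structures below are invented for illustration; real object-based formats (MDA, and later standards) define their own metadata schemas and rendering rules.

```python
def render_mix(objects, profile):
    """Sum audio objects sample by sample, applying a per-category gain
    taken from the listener's profile (default gain 1.0)."""
    n = max(len(obj["samples"]) for obj in objects)
    mix = [0.0] * n
    for obj in objects:
        gain = profile.get(obj["category"], 1.0)
        for i, s in enumerate(obj["samples"]):
            mix[i] += gain * s
    return mix

# Toy broadcast scene: each object arrives separately with metadata.
objects = [
    {"category": "speech", "samples": [0.5, 0.5, 0.5]},
    {"category": "music",  "samples": [0.4, 0.4, 0.4]},
]

# A hard-of-hearing profile: keep the speech, attenuate everything else.
clean_audio = render_mix(objects, {"speech": 1.0, "music": 0.25})
print([round(x, 2) for x in clean_audio])  # [0.6, 0.6, 0.6]
```

The same scene rendered with a default profile gives the ordinary mix, which is the point: one transmission, many personalised renderings.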
I’ve just returned from IBC in Amsterdam where we’ve been demonstrating some University of Salford research outputs on object-based clean audio with DTS, a key player in object-based audio developments.
Last week we spent a week showing the results of our recent collaboration with DTS – presenting personalised TV audio and… Read more…
Interested in audio perception? We’re looking for someone to research games audio, binaural audio, emotional response to sound, or audio quality. You would be working with the S3A project, which is studying the future of spatial audio from production through to reproduction at home. The goal of S3A research is to introduce practical technologies for spatial audio that will revolutionise how listeners experience sound. S3A is a collaborative project between the Universities of Surrey, Salford and Southampton and BBC Research and Development. You would be based in Salford.
Candidates should… Read more…
Prof Yiu Lam and Dr Jonathan Hargreaves have been working for the last three years in collaboration with Steve Langdon at the University of Reading and industrial partners Arup and the BBC on an EPSRC-funded project developing new computational acoustic algorithms aimed at auralisation applications. As a culmination of this, they are organising a two-day workshop to discuss the latest developments, and future research challenges and priorities, in this field, specifically its application to built environment design consultation. The workshop will be hosted by Arup at their Manchester office, and will make use of their renowned SoundLab facility for audio demos.
For more information on the programme and speakers see http://hub.salford.ac.uk/acoustics/workshopSept2015/
MHiVec is an EU FP7 IAPP project which is applying a new modelling method called Dynamic Energy Analysis (DEA) to structural and acoustic modelling of vehicle NVH at mid-to-high frequencies. This method works by transferring distributions of directional wave energy between elements of a discretised mesh of a sub-system boundary, be that an acoustic cavity or a structural element (e.g. a metal plate), and then solving for the steady-state intensities. It therefore has much in common with some late-time acoustic reverberation models and with SEA but, unlike SEA, can be applied to a wider class of vibrating systems.
The MHiVec team organised a workshop in Freising, Germany, at the start of September 2015. Dr Jonathan Hargreaves from the ARC was an invited speaker and gave a talk on synergies between BEM and beam-tracing methods. In return, we are lucky to have MHiVec PI Dr David Chappell from NTU attending and presenting at the computational acoustics and auralisation workshop we are organising in collaboration with Arup on the 22nd and 23rd September, where he will speak on the application of DEA to room acoustics in 3D.