
Phonak AG

12 Projects, page 1 of 3
  • Funder: UK Research and Innovation
    Project Code: EP/M026957/1
    Funder Contribution: 565,346 GBP

    Current hearing aids suffer from two major limitations: 1) hearing aid audio processing strategies are inflexible and do not adapt sufficiently to the listening environment; 2) hearing tests and hearing aid fitting procedures do not allow reliable diagnosis of the underlying nature of the hearing loss and frequently lead to poorly fitted devices. This research programme will use new machine learning methods to revolutionise both of these aspects of hearing aid technology, leading to intelligent hearing devices and testing procedures that actively learn about a patient's hearing loss, enabling more personalised fitting.

    Intelligent audio processing. The optimal audio processing strategy for a hearing aid depends on the acoustic environment. A conversation held in a quiet office, for example, should be processed differently from one held in a busy reverberant restaurant. Current high-end hearing aids do switch between a small number of processing strategies, based upon a simple acoustic environment classification system that monitors basic aspects of the incoming audio. However, the classification accuracy is limited, which is one reason why hearing devices perform very poorly in noisy multi-source environments. Future intelligent devices should be able to recognise a far larger and more diverse set of audio environments, possibly using wireless communication with a smartphone, and should use this information to inform how sound is processed in the hearing aid. The purpose of the first arm of the project is to develop algorithms that will facilitate the development of such devices. One focus will be a class of sounds called audio textures: richly structured but temporally homogeneous signals. Examples include diners babbling at a restaurant, a train rattling along a track, wind howling through the trees, and water running from a tap. Audio textures are often indicative of the environment, so they carry valuable information about the scene that could be harnessed by a hearing aid. Moreover, textures often corrupt target signals, and their suppression can help the hearing impaired. We will develop efficient texture recognition systems that can identify the noises present in an environment, then design and test bespoke real-time noise reduction strategies that utilise information about the audio textures present.

    Intelligent hearing devices. Sensorineural hearing loss can be associated with many underlying causes. Within the cochlea there may be dysfunction of the inner hair cells (IHCs) or outer hair cells (OHCs), metabolic disturbance, or structural abnormalities. Ideally, audiologists should fit a patient's hearing aid based on detailed knowledge of the underlying cause of the hearing loss, since this determines the optimal device settings, or indeed whether to proceed with the intervention at all. Unfortunately, the hearing test employed in current fitting procedures, the audiogram, cannot reliably distinguish between many different forms of hearing loss. More sophisticated hearing tests are needed, but they have proven hard to design. In the second arm of the project we propose a different approach that refines a model of the patient's hearing loss after each stage of the test and uses this model to automatically design and select stimuli for the next stage that are particularly informative. These tests will be fast, accurate, and capable of determining the form of the patient's specific underlying dysfunction. The model of a patient's hearing loss will then be used to set up hearing devices in an optimal way, using a mixture of computer simulation and listening tests.
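
    The second arm's stimulus-selection step can be framed as Bayesian active learning: maintain a posterior over the parameters of the patient's hearing loss and present whichever stimulus is expected to reduce the posterior's entropy the most. The Python sketch below assumes a deliberately simplified one-parameter model (a single detection threshold with a fixed-slope psychometric function); all names and values are illustrative, not the project's actual method.

```python
# Toy sketch of active stimulus selection for a hearing test.
# Assumption: the patient's hearing loss is summarised by one detection
# threshold; a real test would model a richer loss profile.
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def next_stimulus(thresholds, posterior, levels, slope=1.0):
    """Pick the stimulus level with the largest expected information gain.

    thresholds : grid of candidate threshold values (dB)
    posterior  : current belief over those thresholds (sums to 1)
    levels     : candidate stimulus levels to present next (dB)
    """
    h_now = entropy(posterior)
    gains = []
    for level in levels:
        # Probability of a "yes, I heard it" response under each threshold.
        p_detect = 1.0 / (1.0 + np.exp(-slope * (level - thresholds)))
        p_yes = np.sum(posterior * p_detect)
        # Posterior after each possible response (Bayes' rule).
        post_yes = posterior * p_detect / max(p_yes, 1e-12)
        post_no = posterior * (1.0 - p_detect) / max(1.0 - p_yes, 1e-12)
        expected_h = p_yes * entropy(post_yes) + (1.0 - p_yes) * entropy(post_no)
        gains.append(h_now - expected_h)
    return levels[int(np.argmax(gains))]
```

    Each trial then updates the posterior with the observed response and repeats, so the test concentrates its measurements where they are most informative.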

  • Funder: UK Research and Innovation
    Project Code: MR/M025616/1
    Funder Contribution: 118,271 GBP

    The purpose of the network is to develop and pursue ideas for improving hearing aid technology. The network will focus on technologies associated with microphones. Four initial ideas are proposed, but the network will work to develop more. Two of the four are the subject of parallel applications for funding from EPSRC, and concrete proposals for pilot work on the fourth idea are included. The four initial ideas are:

    (1) Small microphones produce their own noise that, once powerfully amplified, becomes audible to the user. An associated application to the EPSRC will develop novel low-noise microphones to address this problem using MEMS technology. MEMS microphones could also facilitate multiple-microphone noise reduction techniques (see (4)).

    (2) Strong amplification means that any sound leakage from the ear canal may be picked up by the nearby microphone and reamplified, causing a whistling feedback loop. The leakage can only be prevented by a tight seal, but in order to combat the occlusion effect, many hearing aids are deliberately "vented": a hole is drilled in the moulding to make the user's voice sound more natural. Moreover, many "instant fit" hearing aids make no attempt to block the canal in the first place. Instead, modern digital hearing aids attempt to remove the feedback using digital signal processing. This works either by modelling the ever-changing feedback path and subtracting the predicted feedback signal from the microphone input (see the sketch after this list), or by detecting the presence of a whistling sound and briefly cutting amplification at that frequency to break the loop. These methods often fail to discriminate between sustained tones in the environment, notably those in music, and the whistle of feedback, which degrades the enjoyment of music. We will address methods of more accurately identifying genuine feedback.

    (3) In day-to-day usage, the required amplification is often not achieved. It is difficult to verify that a hearing aid, once fitted, is producing the right level of amplification. It can be measured by an audiologist in a skilled procedure known as real-ear measurement, but this takes specialised time and equipment and is only reliable at low frequencies. Consequently, higher frequencies are not amplified for fear that excessive sound levels may occur. We will explore ways of better monitoring the sound level in the ear canal and of delivering the right amplification across the frequency spectrum.

    (4) Finally, even when sound is amplified to the right degree, this amplification helps users little with their principal difficulty of understanding speech in environmental noise, such as a room full of background conversation. This is because a damaged auditory system has a wider range of deficits than a mere loss of sensitivity. Given the degraded state of the user's auditory system, removal of this background noise is the only established way to improve intelligibility. We will explore novel methods of reducing background noise, particularly through the use of multiple microphones. This idea is the subject of three linked proposals to the EPSRC.

    The network will conduct a series of meetings and workshops to bring forward these ideas. It will sponsor network participants to attend conferences outside their immediate area of research and will develop a special session on future hearing aids at the 2016 BSA annual conference.
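
    To make the feedback-path modelling in idea (2) concrete: the first approach described there is typically an adaptive filter that tracks the loudspeaker-to-microphone path and subtracts its prediction from the microphone input. Below is a minimal sample-by-sample NLMS sketch in Python; it is illustrative only. Practical cancellers also decorrelate the loudspeaker signal, because tonal inputs such as music bias the path estimate, which is precisely the music problem noted above.

```python
# Minimal NLMS adaptive feedback canceller (illustrative, not a product
# algorithm): estimate the loudspeaker-to-microphone feedback path and
# subtract the predicted feedback from the microphone signal.
import numpy as np

def nlms_feedback_canceller(mic, loudspeaker, taps=64, mu=0.05, eps=1e-8):
    w = np.zeros(taps)            # current feedback-path estimate (FIR)
    buf = np.zeros(taps)          # most recent loudspeaker samples
    out = np.zeros_like(mic)
    for n in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = loudspeaker[n]
        y_hat = w @ buf           # predicted feedback at the microphone
        e = mic[n] - y_hat        # feedback-compensated output sample
        w += mu * e * buf / (buf @ buf + eps)   # normalised LMS update
        out[n] = e
    return out
```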

  • Funder: UK Research and Innovation
    Project Code: EP/W019434/1
    Funder Contribution: 1,319,160 GBP

    Every culture has music. It brings people together and shapes society. Music affects how we feel, tapping into the pleasure circuits of the brain. In the UK each year, the core music industry contributes £3.5bn to the economy (UK Music 2012), with 30 million people attending concerts and festivals (UK Music 2017). Music listening is widespread in shops, movies, ceremonies, live gigs, on mobile phones, etc. Music is important to health and wellbeing. As a 2017 report by the All-Party Parliamentary Group on Arts, Health & Wellbeing demonstrates, "The arts can help keep us well, aid our recovery and support longer lives better lived. The arts can help meet major challenges facing health and social care: ageing, long-term conditions, loneliness and mental health. The arts can help save money in the health service and social care."

    1 in 6 people in the UK have a hearing loss, and this number will increase as the population ages (RNID). Poorer hearing makes music harder to appreciate. Picking out lyrics or melody lines is more difficult; the thrill of a musician creating a barely audible note is lost if the sound is actually inaudible; and music becomes duller as high frequencies disappear. This risks disengagement from music and the loss of the health and wellbeing benefits it creates. We need to personalise music so it works better for those with a hearing loss. We will consider:

    1. Processing and remixing mixing desk feeds for live events or multitrack recordings.
    2. Processing of stereo recordings in the cloud or on consumer devices.
    3. Processing of music as picked up by hearing aid microphones.

    For (1) and (2), the music can be broadcast directly to a hearing aid or headphones for reproduction. For (1), having access to separate tracks for each musical instrument gives greater control over how sounds are processed; this is timely, with future Object-Based Audio formats allowing this approach. (2) is needed because we consume much recorded music: it is more efficient and effective to pre-process music than to rely on hearing aids to improve the sound, as this allows more sophisticated signal processing (a sketch of this idea follows the abstract). (3) is important because hearing aids are the solution for much live music; yet the AHRC Hearing Aids for Music project found that 67% of hearing-aid users had some difficulty listening to music with hearing aids. Hearing aid research has focussed mostly on speech, with music listening relatively overlooked.

    Audio signal processing is a very active and fast-moving area of research, but it typically fails to consider those with a hearing loss. The latest techniques in signal processing and machine learning could revolutionise music for those with a hearing impairment. To achieve this we need more researchers to consider hearing loss, and this can be achieved through a series of signal processing challenges. Such competitions are a proven technique for accelerating research, including growing a collaborative community who apply their skills and knowledge to a problem area. We will develop the tools, databases and objective models needed to run the challenges. This will lower barriers that currently prevent many researchers from considering hearing loss. Data would include the results of listening tests into how real people perceive audio quality, along with a characterisation of each test subject's hearing ability, because the music processing needs to be personalised. We will develop new objective models to predict how people with a hearing loss perceive the audio quality of music. Such data and tools will allow researchers to develop novel algorithms. The scientific legacy will be new approaches for mixing and processing music for people with a hearing loss, a test-bed that readily allows further research, a better understanding of the audio quality required for music, and more audio and machine learning researchers considering the hearing abilities of the whole population for music listening.
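
    As a simple illustration of approach 2 above, per-band amplification can be derived from a listener's audiogram and applied to a recording before playback. The Python sketch below applies the classical half-gain rule in the STFT domain of a mono track; the rule and all parameter values are simplifying assumptions for illustration, not the project's fitting strategy.

```python
# Illustrative pre-processing of a mono music track using audiogram-derived
# per-band gains (half-gain rule). Assumed, simplified fitting logic only.
import numpy as np
from scipy.signal import stft, istft

def personalise(x, fs, audiogram_freqs_hz, audiogram_hl_db):
    f, _, X = stft(x, fs, nperseg=1024)
    # Interpolate the hearing loss (dB HL) onto the STFT frequency grid.
    hl = np.interp(f, audiogram_freqs_hz, audiogram_hl_db)
    gain_db = 0.5 * hl                        # half-gain rule
    X *= 10.0 ** (gain_db[:, None] / 20.0)    # boost each band
    _, y = istft(X, fs, nperseg=1024)
    return y
```

    A real system would add loudness limiting and per-listener tuning; the point is that cloud-side processing can apply arbitrarily sophisticated per-band logic before the signal ever reaches the hearing aid.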

  • Funder: UK Research and Innovation
    Project Code: EP/H028285/1
    Funder Contribution: 134,420 GBP

    Summary: This project proposes to design, build and evaluate a new design of biologically-inspired hearing aid in collaboration with a world-leading manufacturer (Phonak). The design is specifically targeted at improving the perception of speech in noisy environments. The process of tuning the aid will use computer models of patient hearing developed in the ongoing Hearing Dummy project at Essex University. A successful outcome should prepare the way for a new generation of hearing aid designs and major changes in dispensing practice.

    The most common complaint associated with hearing impairment is difficulty in understanding speech in noisy backgrounds at work and in pubs, restaurants and parties. Conventional hearing aids restore normal thresholds and offer more comfortable levels of sound, but they are not successful in solving the problem of hearing speech in noise. Recent research has suggested that normal hearing succeeds because it uses a process of instantaneous compression combined with other methods of input level regulation linked to the level of the background noise. In a recent computer-based study we have shown that implementing these biological processes can improve the recognition of speech in noisy backgrounds. In this project we shall design and build a hearing aid that uses these principles to aid the perception of speech in challenging situations. The project involves a software design study at Essex and a hardware implementation study by a manufacturer. Phonak AG, the hearing aid company, is strongly supportive of the proposal and will collaborate by addressing the hardware design issues and implementing the new principles as a working, wearable hearing aid. This proposal is associated with an ongoing EPSRC-funded project that will provide facilities and patients for testing the new algorithm.

    Computer models of hearing will play an important role in the design process. We have developed a model of normal hearing that incorporates the biological principles of the acoustic reflex, instantaneous compression and efferent depression. The 'normal' model forms the basis of the new hearing aid design, with a view to restoring effects that are missing in patients. Hearing impairment is typically characterised as an inability to hear quiet sounds, but this may be too simplistic: for many people with a hearing impairment, automatic regulation of input level is also ineffective. We have simulated this in individualised computer models of a number of impaired listeners and shown that it replicates their psychometric data. By combining the hearing aid based on the 'normal' model with an 'impaired' model in a software harness, it will be possible to identify the optimum settings of the hearing aid needed for a given patient to restore normal hearing in a speech recognition task. Our 'impaired' computer models are based on measurements made on an individual patient (like a tailor's dummy). Recent research in our laboratory has developed rapid patient evaluation methods that measure thresholds, tuning and compression. These measurements show substantial differences among patients who have similar audiograms and would be prescribed similar hearing aids. We have made detailed measurements of a number of patients and created computer models of their hearing. These models will be used in the design of the new hearing aids, and the same patients will be available to help us optimise the aids.

    The project will have strong clinical involvement. It is supported by an ENT surgeon and a hearing aid dispenser, who will monitor and advise the project as well as direct suitable patients who wish to volunteer. The Essex hearing research team already includes two audiologists, a speech therapist/audiologist and a computer scientist, in addition to the principal investigator (a psychologist).
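
    For intuition, instantaneous compression differs from conventional hearing aid compression in having no attack or release time: the nonlinearity is applied sample by sample within each frequency channel. A minimal sketch, assuming an illustrative broken-stick characteristic (the knee and exponent are placeholder values, not parameters fitted to any patient model):

```python
# Instantaneous (memoryless) broken-stick compression on one frequency
# channel. Knee and exponent are illustrative placeholders.
import numpy as np

def instantaneous_compression(x, knee=0.1, exponent=0.3):
    mag = np.abs(x)
    return np.where(
        mag <= knee,
        x,                                             # linear below the knee
        np.sign(x) * knee * (mag / knee) ** exponent,  # compressive above it
    )
```

    Because there is no time constant, the gain reacts within a single sample, which is the property the biological model attributes to healthy cochlear processing.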

  • Funder: UK Research and Innovation
    Project Code: EP/M026981/1
    Funder Contribution: 418,261 GBP

    Current commercial hearing aids use a number of sophisticated enhancement techniques to try to improve the quality of speech signals. However, today's best aids fail to work well in many everyday situations. In particular, they fail in busy social situations where there are many competing speech sources, and they fail if the speaker is too far from the listener and swamped by noise. We have identified an opportunity to solve this problem by building hearing aids that can 'see'. This ambitious project aims to develop a new generation of hearing aid technology that extracts speech from noise by using a camera to see what the talker is saying. The wearer of the device will be able to focus their hearing on a target talker, and the device will filter out competing sound. This ability, which is beyond that of current technology, has the potential to improve the quality of life of the millions suffering from hearing loss (over 10m in the UK alone).

    Our approach is consistent with normal hearing. Listeners naturally combine information from both their ears and eyes: we use our eyes to help us hear. When listening to speech, the eyes follow the movements of the face and mouth, and a sophisticated, multi-stage process uses this information to separate speech from noise and fill in any gaps. Our hearing aid will act in much the same way. It will exploit visual information from a camera (e.g. using a Google Glass-like system) and novel algorithms for intelligently combining audio and visual information in order to improve speech quality and intelligibility in real-world noisy environments.

    The project brings together a critical mass of researchers with the complementary expertise necessary to make the audio-visual hearing aid possible. It will combine contrasting new approaches to audio-visual speech enhancement developed by the Cognitive Computing group at Stirling and the Speech and Hearing Group at Sheffield: the Stirling approach uses the visual signal to filter out noise, whereas the Sheffield approach uses the visual signal to fill in 'gaps' in the speech. The vision processing needed to track a speaker's lip and face movements will use a revolutionary 'bar code' representation developed by the Psychology Division at Stirling. The MRC Institute of Hearing Research (IHR) will provide the expertise needed to evaluate the approach with real hearing loss sufferers. Phonak AG, a leading international hearing aid manufacturer, will provide the advice and guidance necessary to maximise the potential for industrial impact.

    The project has been designed as a series of four work packages that address the key research challenges related to each component of the device's design; these challenges have been identified by preliminary work at Sheffield and Stirling. Among them are developing improved techniques for visually-driven audio analysis; designing better metrics for weighting audio and visual evidence; and developing techniques for optimally combining the noise-filtering and gap-filling approaches (a sketch of such a combination follows the abstract). A further key challenge is that, for a hearing aid to be effective, the processing cannot delay the signal by more than 10 ms. In the final year of the project a fully integrated software prototype will be clinically evaluated using listening tests with hearing-impaired volunteers in a range of modern noisy, reverberant environments. Evaluation will use a new purpose-built speech corpus designed specifically for testing this new class of multimodal device. The project's clinical research partner, the Scottish Section of MRC IHR, will provide advice on experimental design and analysis throughout the trials. Industry leader Phonak AG will provide advice and technical support for benchmarking real-time hearing devices. The final clinically-tested prototype will be made available to the whole hearing community as a testbed for further research, development, evaluation and benchmarking.
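
    One simple way to picture the audio-visual combination is as fusing two time-frequency masks: one estimated from the audio, one predicted from the talker's lip movements. The Python sketch below assumes a log-linear (geometric) weighting; the fusion rule and all names are illustrative assumptions, not the Stirling or Sheffield algorithms themselves.

```python
# Illustrative fusion of audio and visual speech evidence.
# audio_mask : (freq x time) mask in [0, 1] estimated from the audio
# visual_prob: per-frame speech-presence probability from lip tracking
import numpy as np

def fuse_masks(audio_mask, visual_prob, alpha=0.7):
    visual_mask = np.broadcast_to(visual_prob[None, :], audio_mask.shape)
    fused = audio_mask**alpha * visual_mask**(1.0 - alpha)
    return np.clip(fused, 0.0, 1.0)
```

    The weight alpha would itself need to be learned or adapted per environment, which is one instance of the "metrics for weighting audio and visual evidence" challenge named above.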
