CereProc Limited

Country: United Kingdom

3 Projects
  • Funder: UK Research and Innovation
    Project Code: EP/S026991/1
    Funder Contribution: 481,002 GBP

    The Radio Me project will deliver an aid for people with mild to moderate dementia (henceforth referred to as PWD) living alone in their homes, providing memory reminders and agitation reduction. Radio listening is most common in the age range of PWD, and mainstream radio has two parts: played music and spoken voice from DJs. Thus, it provides an interface involving voice and music, and one that is familiar to PWD. Radio Me (RM) is a system that maps these natural voice and music elements onto integrated aids for memory and agitation. RM will enable broadcast-style radio as a mode of human-computer interaction to support PWD who live alone in their own homes. The title Radio Me refers to the system rather than the audio resulting from the real-time generative remixing.

    An example of functionality is envisaged as follows: RM audio output through the speakers will, by default, sound like the live local radio. So, when the PWD switches on the radio in the morning, it initially sounds like their local station. However, at some pre-decided point (as entered into an electronic diary by the PWD or their carer), and at the start of a song, a DJ-like voice seamlessly overrides the real DJ and reminds the listener to hydrate. A little later, the radio reminds the listener to eat lunch. Soon RM detects that the listener is becoming agitated (via a worn wrist sensor). It overrides the next DJ song choice and selects a song from the user's personal library which is known to be likely to calm them, and it can keep playing calming material until it detects that the user has calmed (a purely illustrative sketch of this loop appears after the project list). RM can give more frequent date/time checks for the listener than a normal radio station, and remind them to take their medication, to attend a Memory Café, and so on.

    The project is timely because: (1) it addresses dementia, one of the UK's major national health and care priorities; and (2) it addresses the UK's care profession crisis.

  • Funder: UK Research and Innovation
    Project Code: EP/S022481/1
    Funder Contribution: 6,802,750 GBP

    1) To create the next generation of Natural Language Processing (NLP) experts, stimulating the growth of NLP in the public and private sectors domestically and internationally. A pool of NLP talent will provide incentives for existing companies to expand their operations in the UK and lead to start-ups and new products.
    2) To deliver a programme which will have a transformative effect on the students that we train and on the field as a whole, developing future leaders and producing cutting-edge research in both methodology and applications.
    3) To give students a firm grounding in the challenge of working with language in a computational setting and its relevance to critical engineering and scientific problems in our modern world. The Centre will also train them in the key programming, engineering, and machine learning skills necessary to solve NLP problems.
    4) To attract students from a broad range of backgrounds, including computer science, AI, maths and statistics, linguistics, cognitive science, and psychology, and to provide an interdisciplinary cohort training approach. The latter involves taught courses, hands-on laboratory projects, research-skills training, and cohort-based activities such as specialist seminars, workshops, and meetups.
    5) To train students with awareness of user design, ethics, and responsible research, in order to design systems that improve user satisfaction, treat users fairly, and increase the uptake of NLP technology across cultures, social groups, and languages.

  • Funder: UK Research and Innovation
    Project Code: EP/I031022/1
    Funder Contribution: 6,236,100 GBP

    Humans are highly adaptable, and speech is our natural medium for informal communication. When communicating, we continuously adjust to other people, to the situation, and to the environment, using previously acquired knowledge to make this adaptation seem almost instantaneous. Humans generalise, enabling efficient communication in unfamiliar situations and rapid adaptation to new speakers or listeners. Current speech technology works well for certain controlled tasks and domains, but is far from natural, a consequence of its limited ability to acquire knowledge about people or situations, to adapt, and to generalise. This accounts for the uneasy public reaction to speech-driven systems. For example, text-to-speech synthesis can be as intelligible as human speech, but lacks expression and is not perceived as natural. Similarly, the accuracy of speech recognition systems can collapse if the acoustic environment or task domain changes, conditions which a human listener would handle easily.

    Research approaches to these problems have hitherto been piecemeal, and as a result progress has been patchy. In contrast, NST will focus on the integrated theoretical development of new joint models for speech recognition and synthesis. These models will allow us to incorporate knowledge about the speakers, the environment, the communication context, and awareness of the task, and will learn and adapt from real-world data in an online, unsupervised manner. This theoretical unification is already underway within the NST labs and, combined with our record of turning theory into practical state-of-the-art applications, will enable us to bring a naturalness to speech technology that is not currently attainable.

    The NST programme will yield technology which (1) approaches human adaptability to new communication situations, (2) is capable of personalised communication, and (3) takes account of speaker intention and expressiveness in speech recognition and synthesis. This is an ambitious vision. Its success will be measured in terms of how the theoretical development reshapes the field over the next decade, the uptake of the software systems that we shall develop, and through the impact of our exemplar interactive applications.

    We shall establish a strong User Group to maximise the impact of the project, with members concerned with clinical applications as well as more general speech technology. Members of the User Group include Toshiba, EADS Innovation Works, Cisco, Barnsley Hospital NHS Foundation Trust, and the Euan MacDonald Centre for MND Research. An important interaction with the User Group will be validating our systems on their data and tasks, discussed at an annual user workshop.

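The Radio Me abstract above describes an agitation-response behaviour: when a worn wrist sensor signals agitation, the system overrides the next broadcast track with a song from the listener's personal library and keeps doing so until the listener has calmed. The Python sketch below is purely illustrative of that control loop; every name in it (WristSensorReading, agitation_response_loop, the 0.7 threshold, the playback helpers) is a hypothetical placeholder and not part of the project's actual design or software.

# Purely illustrative sketch of the agitation-response loop described in the
# Radio Me abstract. All names, thresholds, and helpers here are hypothetical
# placeholders, not the project's actual design or software.
import random
import time
from dataclasses import dataclass


@dataclass
class WristSensorReading:
    """Hypothetical agitation score in [0, 1] from a worn wrist sensor."""
    agitation: float


def read_wrist_sensor() -> WristSensorReading:
    # Placeholder: a real system would read from a paired wearable device.
    return WristSensorReading(agitation=random.random())


def play(track: str) -> None:
    # Placeholder for audio playback.
    print(f"Now playing: {track}")


def agitation_response_loop(broadcast_queue, calming_library,
                            agitation_threshold=0.7, poll_seconds=1.0):
    """Play the broadcast queue, but override the next song choice with a
    known calming track whenever agitation exceeds the threshold, and keep
    doing so until the listener appears calm again."""
    while broadcast_queue:
        reading = read_wrist_sensor()
        if reading.agitation > agitation_threshold and calming_library:
            # Listener appears agitated: override the broadcast selection.
            play(calming_library[0])
        else:
            # Default behaviour: sound like the live local radio.
            play(broadcast_queue.pop(0))
        time.sleep(poll_seconds)


if __name__ == "__main__":
    agitation_response_loop(
        broadcast_queue=["Local station track A", "Local station track B"],
        calming_library=["Favourite calming song"],
        poll_seconds=0.1,
    )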
