
Emotech Ltd
3 Projects
Project 2019 - 2027
Partners: Thomson Reuters Foundation, Mozilla Foundation, Quorate Technology Ltd, CereProc Limited, Naver Labs Europe, BBC, nVIDIA, adeptmind, Emotech Ltd, Amazon Web Services, Inc., TREL, RASA Technologies GmbH, SICSA, Sertis, MICROSOFT RESEARCH LIMITED, dMetrics, Fact Mata Ltd, University of Edinburgh, Facebook, To Play For Ltd
Funder: UK Research and Innovation
Project Code: EP/S022481/1
Funder Contribution: 6,802,750 GBP

1) To create the next generation of Natural Language Processing (NLP) experts, stimulating the growth of NLP in the public and private sectors, domestically and internationally. A pool of NLP talent will give existing companies an incentive to expand their operations in the UK and will lead to start-ups and new products.
2) To deliver a programme which will have a transformative effect on the students that we train and on the field as a whole, developing future leaders and producing cutting-edge research in both methodology and applications.
3) To give students a firm grounding in the challenge of working with language in a computational setting and its relevance to critical engineering and scientific problems in our modern world. The Centre will also train them in the key programming, engineering, and machine learning skills necessary to solve NLP problems.
4) To attract students from a broad range of backgrounds, including computer science, AI, maths and statistics, linguistics, cognitive science, and psychology, and to provide an interdisciplinary cohort training approach. The latter involves taught courses, hands-on laboratory projects, research-skills training, and cohort-based activities such as specialist seminars, workshops, and meetups.
5) To train students with an awareness of user design, ethics, and responsible research, in order to design systems that improve user satisfaction, treat users fairly, and increase the uptake of NLP technology across cultures, social groups, and languages.
For further information contact us at helpdesk@openaire.eu
Project 2019 - 2027
Partners: MICROSOFT RESEARCH LIMITED, Health & Social Care Information Centre, Ontotext, OCLC, Kollider, FACTMATA, Amazon Web Services, Inc., Scribetech (UK) Ltd, SoapBox Labs Ltd., BTS Holdings Plc, VoiceBase, Solvay (International Chemical Group), University of Sheffield, ZOO Digital Group PLC, Emotech Ltd, Tech City UK, M*Modal Technologies, Inc, Nuance Communications UK Limited, gweek Limited, KCL, MapR Technologies, Netcall Telecom Limited, Sheffield Digital, Jam Creative Studios, TribePad, Signal Media Ltd, TherapyBox, Textio, ITSLanguage, Ieso Digital Health Ltd, Record Sure limited
Funder: UK Research and Innovation
Project Code: EP/S023062/1
Funder Contribution: 5,508,850 GBP

A long-term goal of Artificial Intelligence (AI) has been to create machines that can understand spoken and written human language. This capability would enable, for example, spoken-language interaction between people and computers, translation between all human languages, and tools to analyse and answer questions about vast archives of text and speech. Spectacular advances in computer hardware and software over the last two decades mean this vision is no longer science fiction but is turning into reality. Speech and Language Technologies (SLTs) are now established as core scientific and engineering disciplines within AI and have grown into a worldwide multi-billion-dollar industry, with SLT global revenues predicted to rise from $33bn in 2015 to $127bn by 2024. The UK has long played a leading role in SLT, and the government has recently identified AI, including SLT, as of national importance.

Many international corporations, such as Google, Apple, Amazon, and Microsoft, now have research labs in the UK, in part to leverage local SLT expertise, and a new and extensive ecosystem of SLT SMEs has sprung up. There is huge demand for scientists with advanced training in SLT from these organisations, most of whom hire only at PhD level, as is evident in the support for this CDT by more than 30 partners. The result is fierce international competition to attract talent, and supply is falling far short of demand. It is critically important, therefore, to improve the UK's capacity to address this industrial need for high-quality, high-value postdoctoral SLT talent, to enhance the UK's position as a leader in the field and, in turn, to attract investment in AI-related technologies and support UK economic growth. To address the shortfall in PhD-trained scientists we propose a CDT in "Speech and Language Technologies and Their Applications". Our vision is to create a CDT that will be a world-leading centre for training SLT scientists and engineers, giving students the best possible advanced training in the theory and application of computational speech and language processing, in a setting that fosters interdisciplinary approaches, innovation, engagement with real-world users, and awareness of the social and ethical consequences of our work.

A cohort-based approach is necessary in SLT because:
(1) the software infrastructure, tools, and methods for SLT are highly complex, and creating them is nearly always a collaborative endeavour -- a cohort offers an ideal setting to gain experience of such collaborative working;
(2) PhD topics tend to be narrow and focused on specifics and do not include the broad overview needed in students' later careers -- through cohort training we can expose students to a range of different SLT topics;
(3) peer learning within and across cohorts is a highly effective way to hand over tools and to teach methodology;
(4) a multi-year cohort programme allows significant and sustained progression in larger (i.e. multi-student) SLT projects, resulting in better research outcomes and more impact in partnering companies;
(5) cohort teaching is very attractive to students;
(6) an extended cohort-based training programme with strong group-work and peer-tutoring elements allows students with non-standard backgrounds to be admitted, helping to promote diversity in SLT.

To realise our vision we propose to build on Sheffield's unique strengths in SLT, which include:
(1) a large team of SLT academics with an outstanding 30-year research track record in publication, research grant capture, and PhD supervision, covering all the core areas of SLT;
(2) a large group of industrial partners who actively want to participate in the CDT;
(3) a track record of impact arising from our research, through creating new enterprises or enhancing the activities of existing organisations;
(4) an excellent research environment in terms of computing and data resources, study and work facilities, and commitment to and respect for diversity and equality.
Project 2018 - 2022
Partners: SRI International, KCL, Emotech Ltd, University of California, Berkeley, BBC, Quorate Technology Ltd
Funder: UK Research and Innovation
Project Code: EP/R012067/1
Funder Contribution: 734,106 GBP

Speech recognition has made major advances in the past few years. Error rates have been reduced by more than half on standard large-scale tasks such as Switchboard (conversational telephone speech), MGB (multi-genre broadcast recordings), and AMI (multiparty meetings). These research advances have quickly translated into commercial products and services: speech-based applications and assistants such as Apple's Siri, Amazon's Alexa, and Google voice search have become part of daily life for many people. Underpinning the improved accuracy of these systems are advances in acoustic modelling, with deep learning having had an outstanding influence on the field. However, speech recognition is still very fragile: it has been successfully deployed in specific acoustic conditions and task domains - for instance, voice search on a smartphone - and degrades severely when the conditions change. This is because speech recognition is highly vulnerable to additive noise caused by multiple acoustic sources, and to reverberation. In both cases, acoustic conditions which have essentially no effect on the accuracy of human speech recognition can have a catastrophic impact on the accuracy of a state-of-the-art automatic system. A reason for such brittleness is the lack of a strong model of acoustic robustness.
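The error rates quoted here and below are word error rates (WER), the standard metric in speech recognition. As an illustrative sketch (not part of the project itself), WER is the word-level Levenshtein distance between a reference transcript and the system's hypothesis, divided by the length of the reference:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed as the Levenshtein distance over word sequences."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # match / substitution
    return dp[len(ref)][len(hyp)] / len(ref)
```

For example, a hypothesis that drops one word from a six-word reference scores a WER of 1/6, which is how a "38% word error rate" summarises a transcript in which roughly four words in ten are wrong.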
Robustness is usually addressed through multi-condition training, in which the training set comprises speech examples across the many required acoustic conditions, often constructed by mixing speech with noise at different signal-to-noise ratios. For a limited set of acoustic conditions these techniques can work well, but they are inefficient, do not offer a model of multiple acoustic sources, and do not factorise the causes of variability. For instance, the best reported speech recognition result for transcription of the AMI corpus test set using single distant-microphone recordings is about a 38% word error rate (for non-overlapped speech), compared to an error rate of about 5% for human listeners. In the past few years several approaches have tried to address these problems: explicitly learning to separate multiple sources; factorised acoustic models using auxiliary features; and learned spectral masks for multi-channel beamforming.

SpeechWave will pursue an alternative approach to robust speech recognition: the development of acoustic models which learn directly from the speech waveform. The motivation to operate directly in the waveform domain arises from the insight that redundancy in speech signals is highly likely to be a key factor in the robustness of human speech recognition. Current approaches to speech recognition separate non-adaptive signal processing components from the adaptive acoustic model, and in so doing lose the redundancy - and, typically, information such as the phase - present in the speech waveform. Waveform models are particularly exciting because they combine the previously distinct signal processing and acoustic modelling components. In SpeechWave, we shall explore novel waveform-based convolutional and recurrent networks which combine speech enhancement and recognition in a factorised way, as well as approaches based on kernel methods and on recent research advances in sparse signal processing and speech perception.
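The noise-mixing step in multi-condition training can be sketched as follows. This is a generic illustration, not the project's own pipeline: the noise signal is scaled so that the ratio of speech power to noise power matches a target SNR in decibels, then added to the clean speech.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Add `noise` to `speech`, scaled so the speech-to-noise power
    ratio equals `snr_db`. Assumes 1-D float arrays of equal length."""
    speech = np.asarray(speech, dtype=np.float64)
    noise = np.asarray(noise, dtype=np.float64)
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Target noise power: P_speech / P_noise' = 10^(snr_db / 10)
    target_noise_power = p_speech / (10.0 ** (snr_db / 10.0))
    scale = np.sqrt(target_noise_power / p_noise)
    return speech + scale * noise
```

Sweeping `snr_db` over a range of values (e.g. from -5 dB to 20 dB) for each training utterance is what produces the "many required acoustic conditions" referred to above; the inefficiency is that every condition must be represented explicitly in the training data.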
Our research will be evaluated on standard large-scale speech corpora. In addition, we shall participate in, and organise, international challenges to assess the performance of speech recognition technologies. We shall also validate our technologies in practice, in the context of the speech recognition challenges faced by our project partners BBC, Emotech, Quorate, and SRI.