Powered by OpenAIRE graph

Intel UK

INTEL CORPORATION (UK) LIMITED
Country: United Kingdom
16 Projects, page 1 of 4
  • Funder: UK Research and Innovation Project Code: EP/S023291/1
    Funder Contribution: 6,112,270 GBP

    The Centre for Doctoral Training MAC-MIGS will provide advanced training in the formulation, analysis, and implementation of state-of-the-art mathematical and computational models. The vision for the training offered is that effective modern modelling must integrate data with laws framed in explicit, rigorous mathematical terms. The CDT will offer 76 PhD students an intensive 4-year training and research programme that equips them with the skills needed to tackle the challenges of data-intensive modelling. The new generation of successful modelling experts will be able to develop and analyse mathematical models, translate them into efficient computer codes that make best use of available data, interpret the results, and communicate throughout the process with users in industry, commerce and government. Mathematical and computational models are at the heart of 21st-century technology: they underpin science, medicine and, increasingly, social sciences, and impact many sectors of the economy including high-value manufacturing, healthcare, energy, physical infrastructure and national planning. When combined with the enormous computing power and volume of data now available, these models provide unmatched predictive tools which capture systematically the experimental and observational evidence available. Because they are based on sound deductive principles, they are also the only effective tool in many problems where data is either sparse or, as is often the case, acquired in conditions that differ from the relevant real-world scenarios. Developing and exploiting these models requires a broad range of skills - from abstract mathematics to computing and data science - combined with expertise in application areas. 
MAC-MIGS will equip its students with these skills through a broad programme that cuts across disciplinary boundaries to include mathematical analysis - pure, applied, numerical and stochastic - data-science and statistics techniques, and the domain-specific advanced knowledge necessary for cutting-edge applications. MAC-MIGS students will join the broader Maxwell Institute Graduate School in its brand-new base in central Edinburgh. They will benefit from (i) dedicated academic training in subjects that include mathematical analysis, computational mathematics, multi-scale modelling, model reduction, Bayesian inference, uncertainty quantification, inverse problems and data assimilation, and machine learning; (ii) extensive experience of collaborative and interdisciplinary work through projects, modelling camps, industrial sandpits and internships; (iii) outstanding early-career training, with a strong focus on entrepreneurship; and (iv) a dynamic and forward-looking community of mathematicians and scientists, sharing strong values of collaboration, respect, and social and scientific responsibility. The students will join a vibrant research environment, interacting closely with some 80 MAC-MIGS academics comprising mathematicians from the universities of Edinburgh and Heriot-Watt as well as computer scientists, engineers, physicists and chemists providing their own disciplinary expertise. Students will benefit from MAC-MIGS's diverse network of more than 30 industrial and agency partners spanning a broad spectrum of application areas: energy, engineering design, finance, computer technology, healthcare and the environment. These partners will provide internships, development programmes and research projects, and help maximise the impact of our students' work. Our network of academic partners, representing ten leading institutions in the US and Europe, will further provide opportunities for collaborations and research visits.

  • Funder: UK Research and Innovation Project Code: EP/N024877/1
    Funder Contribution: 1,112,060 GBP

Vascular disease is the most common precursor to ischaemic heart disease and stroke, two of the leading causes of death worldwide. Advances in endovascular intervention in recent years have transformed patient survival rates and post-surgical quality of life. Compared to open surgery, endovascular intervention offers faster recovery, reduced need for general anaesthesia, reduced blood loss and significantly lower mortality. However, it involves complex manoeuvring of pre-shaped catheters to reach target areas in the vasculature, and some endovascular tasks can be challenging even for highly skilled operators. Robot-assisted endovascular intervention aims to address some of these difficulties, with the added benefit of allowing the operator to control and manipulate devices remotely, thus avoiding exposure to X-ray radiation. The purpose of this work is to develop a new robot-assisted endovascular platform, incorporating novel device designs with improved human-robot control. It builds on our strong partnership with industry, aiming to develop the next generation of robots that are safe, effective, and accessible to the general NHS population.

  • Funder: UK Research and Innovation Project Code: EP/R018537/1
    Funder Contribution: 2,557,650 GBP

Bayesian inference is a process which allows us to extract information from data, using prior knowledge articulated as statistical models for the data. We are focused on developing a transformational solution to Data Science problems that can be posed as such Bayesian inference tasks. An existing family of algorithms, Markov chain Monte Carlo (MCMC), offers impressive accuracy but demands a significant computational load. For a significant subset of the users of Data Science that we interact with, while the accuracy offered by MCMC is recognised as potentially transformational, the computational load is simply too great for MCMC to be a practical alternative to existing approaches. These users include academics working in science (e.g., Physics, Chemistry, Biology and the social sciences) as well as government and industry (e.g., in the pharmaceutical, defence and manufacturing sectors). The problem is then how to make the accuracy offered by MCMC accessible at a fraction of the computational cost. The solution we propose is based on replacing MCMC with a more recently developed family of algorithms, Sequential Monte Carlo (SMC) samplers. While MCMC, at its heart, manipulates a single sampling process, SMC samplers are inherently population-based algorithms that manipulate a population of samples. This makes SMC samplers well suited to implementations that exploit parallel computational resources. It is therefore possible to use emerging hardware (e.g., Graphics Processor Units (GPUs), Field Programmable Gate Arrays (FPGAs) and Intel's Xeon Phi, as well as High Performance Computing (HPC) clusters) to make SMC samplers run faster.
Indeed, our recent work (which first had to remove some algorithmic bottlenecks) has shown that SMC samplers can offer accuracy similar to MCMC, with implementations that are better suited to such emerging hardware. The benefits of using an SMC sampler in place of MCMC go beyond those made possible by simply posing a (tough) parallel computing challenge. The parameters of an MCMC algorithm necessarily differ from those of an SMC sampler. These differences offer opportunities to develop SMC samplers in directions that are not possible with MCMC. For example, SMC samplers, in contrast to MCMC algorithms, can be configured to exploit a memory of their historic behaviour and can be designed to transition smoothly between problems. It seems likely that by exploiting such opportunities we will generate SMC samplers that outperform MCMC by even more than is possible through parallelised implementations alone. Our interactions with users, our experience of parallelising SMC samplers and the preliminary results we have obtained when comparing SMC samplers and MCMC make us excited about the potential that SMC samplers offer as a "New Approach for Data Science". Our current work has only begun to explore this potential. We perceive that significant benefit could result from a larger programme of work that helps us understand the extent to which users will benefit from replacing MCMC with SMC samplers. We propose a programme of work that combines a focus on users' problems with a systematic investigation into the opportunities offered by SMC samplers. Our strategy for achieving impact comprises multiple tactics.
Specifically, we will: use identified users to act as "evangelists" in each of their domains; work with our hardware-oriented partners to produce high-performance reference implementations; engage with the developer team for Stan (the most widely-used generic MCMC implementation); work with the Industrial Mathematics Knowledge Transfer Network and the Alan Turing Institute to engage with both users and other algorithmic developers.
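The population-based structure described above can be sketched in a few lines. The following is a minimal illustration of a tempered SMC sampler on a toy one-dimensional Gaussian model; the model, tempering schedule, step size and population size are all assumptions chosen for the example, not any of the project's reference implementations. Note that the reweighting and the Metropolis rejuvenation move apply independently to every sample in the population, which is exactly the property that maps well onto GPUs, FPGAs and HPC clusters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Bayesian task: infer mu from y_i ~ N(mu, 1), with prior mu ~ N(0, 10).
y = np.array([1.9, 2.1, 2.0, 1.8, 2.2])

def log_prior(mu):
    return -0.5 * mu ** 2 / 10.0

def log_lik(mu):
    # Vectorised over a whole population of samples at once.
    return -0.5 * ((y[None, :] - mu[:, None]) ** 2).sum(axis=1)

N = 2000
mu = rng.normal(0.0, np.sqrt(10.0), size=N)   # population drawn from the prior
logw = np.zeros(N)                            # log importance weights
temps = np.linspace(0.0, 1.0, 21)             # tempering schedule: prior -> posterior

for t0, t1 in zip(temps[:-1], temps[1:]):
    logw += (t1 - t0) * log_lik(mu)           # incremental reweighting step
    w = np.exp(logw - logw.max())
    w /= w.sum()
    if 1.0 / (w ** 2).sum() < N / 2:          # resample when the ESS collapses
        mu = mu[rng.choice(N, size=N, p=w)]
        logw[:] = 0.0
    # Rejuvenate each sample with one random-walk Metropolis move; the moves
    # are mutually independent, hence embarrassingly parallel.
    prop = mu + rng.normal(0.0, 0.3, size=N)
    logp_cur = log_prior(mu) + t1 * log_lik(mu)
    logp_prop = log_prior(prop) + t1 * log_lik(prop)
    accept = np.log(rng.uniform(size=N)) < logp_prop - logp_cur
    mu = np.where(accept, prop, mu)

w = np.exp(logw - logw.max())
w /= w.sum()
post_mean = float((w * mu).sum())             # weighted posterior-mean estimate
```

For this conjugate toy model the exact posterior mean is available in closed form (about 1.96 here), so the population estimate can be checked directly; an MCMC run would target the same posterior with a single, inherently sequential chain.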

  • Funder: European Commission Project Code: 619795
  • Funder: UK Research and Innovation Project Code: EP/R034710/1
    Funder Contribution: 2,950,480 GBP

There are tremendous demands for advanced statistical methodology to make scientific sense of the deluge of data emerging from the data revolution of the 21st century. Huge challenges in modelling, computation, and statistical algorithms have been created by diverse and important questions in virtually every area of human activity. CoSInES will create a step change in the use of principled statistical methodology, motivated by and feeding into these challenges. Much of our research will develop and study generic methods with applicability in a wide range of applications. We will study statistical algorithms whose performance scales well to high dimensions and to big data sets. We will develop statistical theory to understand new complex models stimulated by applications. We will produce methodology tailored to specific computational hardware. We will study the statistical and algorithmic effects of mismatch between data and models. We shall also build methodology for statistical inference where privacy constraints mean that the data cannot be directly accessed. CoSInES will also focus on two major application domains which will form stimulating and challenging motivation for our research: Data-centric engineering, and Defence and Security. To maximise the impact and speed of translation of our research in these areas, we will partner closely with the Alan Turing Institute, which is running large programmes in these areas funded respectively by the Lloyd's Register Foundation and GCHQ. Data is driving a disruptive transformation that is revolutionising the engineering professions, with previously unimagined ways of designing, manufacturing, operating and maintaining engineering assets all the way through to their decommissioning.
The Data-Centric Engineering (DCE) programme at the Alan Turing Institute is leading the design and operation of the world's first completely 3-D-printed pedestrian bridge to be opened and operated in a major international city. Fibre-optic sensors embedded in the structure will provide continuous streams of data measuring the main structural properties of the bridge. Unique opportunities to monitor and control the bridge via "digital twins" are being developed by DCE, and this is presenting enormous challenges to existing applied mathematical and statistical modelling of these complex structures, where even the bulk material properties are unknown and certainly stochastic in their values. A new generation of numerical inferential methods is being demanded to support this progress. Within the Defence and Security domain, there are many statistical challenges emerging from the need to process and communicate big and complex data sets, for example within the area of cyber-security. The virtual world has emerged as a dominant global marketplace within which the majority of organisations operate. This has motivated nefarious actors - from "bedroom hackers" to state-sponsored terrorists - to operate in this environment to further their economic or political ambitions. To counter this threat, it is necessary to produce a complete statistical representation of the environment, in the presence of missing data, significant temporal change, and an adversary willing to manipulate social and virtual systems in order to achieve their goals. As a second example, to counter the threat of global terrorism, it is necessary for law-enforcement agencies within the UK to share data whilst rigorously applying data protection laws to maintain individuals' privacy. It is therefore necessary to have mathematical guarantees over such data sharing arrangements, and to formulate statistical methodologies for the "penetration testing" of anonymised data.
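One standard formalism for such mathematical guarantees (assumed here for illustration; the abstract does not name a specific mechanism) is differential privacy, where calibrated noise bounds what any release reveals about a single individual. A minimal sketch of the Laplace mechanism for a counting query, with hypothetical names:

```python
import numpy as np

rng = np.random.default_rng(1)

def release_count(true_count, epsilon):
    """Release a counting-query answer with epsilon-differential privacy.

    A count changes by at most 1 when any single individual's record is
    added or removed (sensitivity 1), so Laplace noise of scale 1/epsilon
    provably masks each individual's contribution: smaller epsilon means
    stronger privacy but a noisier release.
    """
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# One private release of a hypothetical true count of 412.
noisy = release_count(412, epsilon=0.5)
```

Releases remain unbiased on average, so aggregate analyses stay useful even though no single answer can be trusted to expose an individual.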

