Powered by OpenAIRE graph

Graphcore

3 Projects
  • Funder: UK Research and Innovation; Project Code: EP/X019918/1
    Funder Contribution: 750,713 GBP

    Advances in Artificial Intelligence (AI) are transforming the world we live in today. These innovations drive two interconnected developments: they augment our knowledge (for example, we understand the behaviour of a virus better and faster than we did a decade ago), and this improved understanding fuels innovations that improve our quality of life, such as better vaccines, or better batteries for our mobile phones and electric vehicles. The role of AI, and thus of computing, is crucial for such advancements. The desire to improve our knowledge of fundamentals, and thus the quality of our life, has become central to our existence. Better and faster understanding leads to better and faster innovations. This desire, in turn, demands that computations be performed at a faster rate than ever before - not only to understand very large datasets better, but also to perform very complex simulations, at a rate at least 50 times faster than the most powerful computers on the planet today: the era of exascale computing. Exascale computers will be able to perform a billion billion (10^18) calculations per second. The general challenge, and a significant one for the international community, is to have the relevant software technologies ready when such exascale computing becomes a reality. This proposal aims to develop a software suite and software designs to serve as blueprints for using AI for scientific discovery at exascale: Blueprinting AI for Science at Exascale (BASE-II). The project continues our previous work, carried out in Phase I as Benchmarking for AI for Science at Exascale (BASE-I).
In Phase I, we gathered an essential set of requirements from various scientific communities, which underpins our work in this phase. The resulting software and designs will cover the following:
a) Facilitating a better understanding of the interplay between different AI algorithms and AI hardware systems across a range of scientific problems. We will achieve this through a set of AI benchmarks against which different AI software can be verified.
b) Facilitating incredibly complex simulations using AI: although exascale systems will enable complex simulations (which are essential for mimicking realistic cases), we will accelerate them further using AI. This can yield remarkable speedups (e.g., from days to seconds), providing a massive leap in scientific discovery.
c) Harmonising the efforts of scientific communities and of vendors through better partnerships: exascale systems will have complex hardware capabilities that may be difficult for scientists to understand. Equally, the manufacturers designing exascale systems do not always understand the underpinning science. This unharmonised, non-synchronised advancement has hitherto been sub-optimal. We intend to build better software and hardware through better partnerships, an approach we refer to as hardware-software co-design.
d) Understanding extremely large-scale datasets: the success of AI is primarily due to a technology called deep learning, which inherently relies on very large volumes of data. With technological advances, we foresee that in the exascale era data volumes will not only be huge but also multi-modal. Understanding these datasets will remain key to ensuring that AI can be conducted at exascale.
e) Finally, the community, whether scientific, academic, or industrial, will need additional software technologies, or more specifically an ecosystem of software tools, to help with exascale computing.
To this end, we will produce a software toolbox. We will also conduct various knowledge-exchange activities, such as workshops, training events, and in-field placements, to ensure a multi-directional flow of information and knowledge across relevant stakeholders and communities.
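The simulation-acceleration idea described above can be illustrated with a surrogate model: an expensive simulator is replaced by a cheap learned approximation trained on a modest number of simulator runs. The following is a minimal, hypothetical sketch; the analytic "simulator", the sample sizes, and the polynomial surrogate are stand-ins for illustration only, not the project's actual methods or models.

```python
import numpy as np

# Hypothetical "expensive" simulator: a stand-in analytic function.
# In practice this would be a PDE solver taking hours or days per call.
def expensive_simulator(x):
    return np.sin(3 * x) + 0.5 * x**2

# Run the simulator a limited number of times to build training data.
rng = np.random.default_rng(0)
x_train = rng.uniform(-2, 2, size=200)
y_train = expensive_simulator(x_train)

# Fit a cheap surrogate (here a degree-9 polynomial via least squares).
coeffs = np.polyfit(x_train, y_train, deg=9)
surrogate = np.poly1d(coeffs)

# Check the surrogate against the simulator on held-out points.
x_test = np.linspace(-2, 2, 101)
max_err = np.max(np.abs(surrogate(x_test) - expensive_simulator(x_test)))
print(f"max surrogate error on [-2, 2]: {max_err:.4f}")
```

In a real exascale workflow the surrogate would be a deep network trained on high-fidelity simulation outputs, but the principle is the same: a one-off training cost is traded for near-instant inference.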

  • Funder: UK Research and Innovation; Project Code: EP/R029229/1
    Funder Contribution: 1,530,590 GBP

    As we gain ever-greater control of materials on a very small scale, so a new world of possibilities opens up to be studied for their scientific interest and harnessed for their technological benefits. In science and technology nano often denotes tiny things, with dimensions measured in billionths of metres. At this scale structures have to be understood in terms of the positions of individual atoms and the chemical bonds between them. The flow of electricity can behave like waves, with the effects adding or subtracting like ripples on the surface of a pond into which two stones have been dropped a small distance apart. Electrons can behave like tiny magnets, and could provide very accurate timekeeping in a smartphone. Carbon nanotubes can vibrate like guitar strings, and just as the pitch of a note can be changed by a finger, so they can be sensitive to the touch of a single molecule. In all these effects, we need to understand how the function on the nanoscale relates to the structure on the nanoscale. This requires a comprehensive combination of scientific skills and methods. First, we have to be able to make the materials which we shall use. This is the realm of chemistry, but it also involves growth of new carbon materials such as graphene and single-walled carbon nanotubes. Second, we need to fabricate the tiny devices which we shall measure. Most commonly we use a beam of electrons to pattern the structures which we need, though there are plenty of other methods which we use as well. Third, we need to see what we have made, and know whether it corresponds to what we intended. For this we again use beams of electrons, but now in microscopes that can image how individual atoms are arranged. Fourth, we need to measure how what we have made functions, for example how electricity flows through it or how it can be made to vibrate. A significant new development in our laboratory is the use of machine learning for choosing what to measure next. 
We have set ourselves the goal that within five years the machine will decide what the next experiment should be to the standard of a second-year graduate student. The Platform Grant renewal 'From Nanoscale Structure to Nanoscale Function' will provide underpinning support for a remarkable team of researchers who together bring exactly the skill set needed for this kind of research. It builds on the success of the current Platform Grant 'Molecular Quantum Devices'. That grant has given crucial support to the team and to the development of their careers. The combination of skills, and the commitment to working towards shared goals, has empowered the team to make progress which would not have been possible otherwise. For example, our team's broad range of complementary skills was vital in allowing us to develop a method, now patented, for making nanogaps in graphene. This led to reproducible and stable methods of making molecular quantum devices, the core subject of that grant. The renewal of the Platform Grant will underpin other topics that also build on achievements of the current grant, and which require a similar set of skills to determine how function on the nanoscale depends on structure on the nanoscale. You can get a flavour of the research to be undertaken from the questions which motivate the researchers to be supported by the grant. Here is a selection. Can we extend quantum control to bigger things? Can molecular-scale magnets be controlled by a current? How do molecules conduct electricity? How can we pass information between light and microwaves? How can we measure a thousand quantum devices in a single experiment? Are the atoms in our devices where we want them? Can computers decide what to measure next? As we make progress on questions like these, so we shall better understand how structure on the nanoscale gives rise to function on the nanoscale. And that understanding will in turn provide the basis for new discoveries and new technologies.

  • Funder: UK Research and Innovation; Project Code: EP/W002965/1
    Funder Contribution: 2,623,130 GBP

    Modern Artificial Intelligence is dominated by methods that learn from large amounts of data. These machine learning methods underpin many current technologies, such as voice recognition, face recognition, product recommendation, social media news feeds, online advertising, and autonomous vehicles. They are also the basis of recent breakthroughs in AI, like the game-playing systems that can beat humans at chess, Go, and poker. Machine learning also underlies many practical advances in science, engineering, and medicine, such as automated tools for analysing genomic data and medical images. These advances have come about through the use of large, complex deep learning models, open-source software, very large data sets, new computer hardware, and distributed computation. Despite the spectacular successes, industry investment, and media attention, many limitations, and therefore opportunities for research, remain. The limitations of current AI systems include poor handling of noise, uncertainty, and changing circumstances; gaps in the ability to combine symbolic and statistical reasoning; and a lack of automation across many of the stages of learning. This project will advance modern data-driven AI methods by developing a number of new algorithms and applications to address these limitations. The work will bring together symbolic and statistical methods through new, scalable deep probabilistic approaches. These approaches will generalise better to novel data, and "know when they don't know". The project will also develop better tools for automating the process of building and maintaining a machine learning system. We will also bring approaches from data-driven machine learning to the use of simulators, which are widely used to model and understand complex systems in science and engineering.
Finally, we will apply the algorithms and software tools developed in this proposal to challenging problems in modelling and optimising complex systems with many interdependent components, in particular in the areas of electrical grid efficiency and transportation systems.
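The idea of models that "know when they don't know" can be illustrated with a toy ensemble, where disagreement between independently trained models serves as an uncertainty signal. This is a minimal, hypothetical sketch: small polynomial models stand in for the deep probabilistic models the project describes, and none of the functions or numbers come from the proposal itself.

```python
import numpy as np

rng = np.random.default_rng(42)

# Noisy observations of an unknown target function on [-3, 3].
def target(x):
    return np.sin(x)

x_train = rng.uniform(-3, 3, size=100)
y_train = target(x_train) + rng.normal(0, 0.05, size=100)

# A miniature "deep ensemble": several models, each fit on a
# bootstrap resample of the training data.
models = []
for _ in range(20):
    idx = rng.integers(0, len(x_train), size=len(x_train))
    models.append(np.poly1d(np.polyfit(x_train[idx], y_train[idx], deg=5)))

def predict(x):
    preds = np.array([m(x) for m in models])
    # Ensemble mean is the prediction; ensemble spread is the uncertainty.
    return preds.mean(axis=0), preds.std(axis=0)

# Inside the training range the ensemble agrees; far outside it diverges,
# flagging that the model does not know.
_, std_in = predict(np.array([0.0]))
_, std_out = predict(np.array([8.0]))
print(f"uncertainty at x=0: {std_in[0]:.3f}, at x=8: {std_out[0]:.3f}")
```

The design choice here, treating model disagreement as calibrated doubt, is one simple route to the goal; the proposal's deep probabilistic approaches pursue the same behaviour in a more principled way.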

