
Graphcore
3 Projects
Project (2022 - 2025)
Funder: UK Research and Innovation
Project Code: EP/X019918/1
Funder Contribution: 750,713 GBP
Partners: University of California, San Diego; CCFE/UKAEA; StackHPC Limited; University of Cambridge; Diamond Light Source; IBM (United Kingdom); IBM (United States); Argonne National Laboratory; Science and Technology Facilities Council; Cambridge Integrated Knowledge Centre; The Alan Turing Institute; DiRAC (Distributed Research utilising Advanced Computing); Cerebras Systems; EDF Energy (United Kingdom); NVIDIA; NERC British Antarctic Survey; Graphcore; The Rosalind Franklin Institute; DDN (DataDirect Networks); Oak Ridge National Laboratory; British Energy Generation Ltd; Boston Ltd

Advances in Artificial Intelligence (AI) are transforming the world we live in today. These innovations drive two interconnected outcomes: they augment our knowledge (for example, we understand the behaviour of a virus better and faster than we did a decade ago), and that improved understanding fuels innovations that improve the quality of our lives, such as better vaccines or better batteries for our mobile phones and electric vehicles. AI, and thus computing, plays a crucial role in such advances. The desire to improve our fundamental knowledge, and thereby the quality of our lives, has become central to our existence: better and faster understanding leads to better and faster innovation. This desire, in turn, demands that computations be performed faster than ever before, both to understand very large datasets and to run very complex simulations, at a rate at least 50 times faster than the most powerful computers on the planet today: the era of exascale computing. Exascale computers will be able to perform a billion billion calculations per second. The general challenge, and a significant one for the international community, is to have the relevant software technologies ready when exascale computing becomes a reality.

This proposal aims to develop a software suite and software designs to serve as blueprints for using AI for scientific discovery at exascale: Blueprinting AI for Science at Exascale (BASE-II). The project continues our previous work in Phase I, Benchmarking for AI for Science at Exascale (BASE-I), in which we gathered an essential set of requirements from various scientific communities; those requirements underpin our work in this phase. The resulting software and designs will cover the following:

a) Facilitating a better understanding of the interplay between different AI algorithms and AI hardware systems across a range of scientific problems. We will achieve this through a set of AI benchmarks against which different AI software can be verified.

b) Facilitating incredibly complex simulations using AI. Although exascale systems will enable complex simulations (which are essential for mimicking realistic cases), we will accelerate them further using AI. This can yield remarkable speedups (e.g., from days to seconds), and such a transformation can provide a massive leap in scientific discovery (see the sketch after this list).

c) Harmonising the efforts of scientific communities and of vendors through better partnerships. Exascale systems will have complex hardware capabilities that may be difficult for scientists to understand; equally, the manufacturers designing exascale systems do not always understand the underpinning science. This unharmonised, non-synchronised effort has hitherto been sub-optimal. We intend to build better software and hardware through better partnerships, which we refer to as hardware-software co-design.

d) Handling extremely large-scale, multi-modal data. The success of AI is primarily due to a technology called deep learning, which inherently relies on very large volumes of data. With technological advances, we can foresee that in the exascale era data volumes will be not only huge but also multi-modal. Understanding these datasets will remain key to ensuring that AI can be conducted at exascale.

e) Producing a software toolbox. The community, whether scientific, academic, or industrial, will need additional software technologies, or more specifically an ecosystem of software tools, to help with exascale computing.

We will also conduct various knowledge-exchange activities, such as workshops, training events, and in-field placements, to ensure a multi-directional flow of information and knowledge across relevant stakeholders and communities.
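To make item b) concrete: a common pattern for AI-accelerated simulation is to train a neural-network surrogate on input/output pairs generated by the expensive simulator, then query the cheap surrogate in place of new simulation runs. The following is a minimal sketch of that pattern, not the BASE-II software itself; the toy expensive_simulation function, the network size, and the sample budget are all illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_simulation(x):
    """Hypothetical stand-in for a simulation that might take hours per run."""
    return np.sin(3.0 * x[:, 0]) * np.exp(-x[:, 1] ** 2)

rng = np.random.default_rng(0)

# Generate training data by running the expensive simulator offline.
X_train = rng.uniform(-1.0, 1.0, size=(2000, 2))
y_train = expensive_simulation(X_train)

# Fit a small neural network as a surrogate for the simulator.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)

# At deployment time, the surrogate answers almost instantly.
X_new = rng.uniform(-1.0, 1.0, size=(3, 2))
print("surrogate:", surrogate.predict(X_new))
print("simulator:", expensive_simulation(X_new))
```

Whether such a surrogate is trustworthy depends on how well the training data covers the regime of interest, which is one reason the proposal pairs acceleration with benchmarking and verification.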
Project (2018 - 2024)
Funder: UK Research and Innovation
Project Code: EP/R029229/1
Funder Contribution: 1,530,590 GBP
Partners: Lancaster University; VTT Technical Research Centre of Finland; Private Address; Heriot-Watt University; Oak Ridge National Laboratory; University of Warwick; University of Oregon; Complutense University of Madrid; Amadeus Capital Partners Limited; Washington University in St Louis; University of Maryland; University of Waterloo (Canada); Max Planck Institutes; MV Portfolios Inc; TU Wien (Vienna University of Technology); Graphcore; University of Cambridge; Amazon.co.uk Ltd; Oxford Nanopore Technologies (United Kingdom); Cambridge Integrated Knowledge Centre; University of Perugia; University of Oxford

As we gain ever-greater control of materials on a very small scale, a new world of possibilities opens up, to be studied for its scientific interest and harnessed for its technological benefits. In science and technology, "nano" denotes tiny things, with dimensions measured in billionths of metres. At this scale, structures have to be understood in terms of the positions of individual atoms and the chemical bonds between them. The flow of electricity can behave like waves, with the effects adding or subtracting like ripples on the surface of a pond into which two stones have been dropped a small distance apart. Electrons can behave like tiny magnets, and could provide very accurate timekeeping in a smartphone. Carbon nanotubes can vibrate like guitar strings, and just as the pitch of a note can be changed by a finger, so they can be sensitive to the touch of a single molecule. In all these effects, we need to understand how function on the nanoscale relates to structure on the nanoscale.

This requires a comprehensive combination of scientific skills and methods. First, we have to be able to make the materials we shall use. This is the realm of chemistry, but it also involves the growth of new carbon materials such as graphene and single-walled carbon nanotubes. Second, we need to fabricate the tiny devices we shall measure. Most commonly we use a beam of electrons to pattern the structures we need, though there are plenty of other methods as well. Third, we need to see what we have made and know whether it corresponds to what we intended. For this we again use beams of electrons, but now in microscopes that can image how individual atoms are arranged. Fourth, we need to measure how what we have made functions, for example how electricity flows through it or how it can be made to vibrate. A significant new development in our laboratory is the use of machine learning to choose what to measure next; a sketch of this idea follows below. We have set ourselves the goal that, within five years, the machine will decide what the next experiment should be to the standard of a second-year graduate student.
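The abstract does not say which algorithm will choose the next measurement; a standard starting point is active learning with a probabilistic surrogate, where the instrument is pointed at the setting the model is most uncertain about. A minimal sketch under that assumption, with a hypothetical measure function standing in for the real instrument and a single control parameter standing in for a real device:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def measure(x):
    """Hypothetical stand-in for a slow, noisy instrument measurement."""
    return np.sin(10.0 * x).ravel() + 0.05 * rng.standard_normal(len(x))

# Candidate settings of one control parameter (e.g., a gate-voltage sweep).
candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)

# Begin with two measurements at the ends of the range.
X = candidates[[0, -1]]
y = measure(X)

for _ in range(10):
    # Fit a Gaussian-process model to everything measured so far.
    gp = GaussianProcessRegressor(
        kernel=RBF(length_scale=0.1) + WhiteKernel(noise_level=1e-3)
    ).fit(X, y)
    _, std = gp.predict(candidates, return_std=True)

    # Measure next where the model is least certain.
    x_next = candidates[[np.argmax(std)]]
    X = np.vstack([X, x_next])
    y = np.append(y, measure(x_next))
```

Matching a second-year graduate student would need far more than this loop (multi-dimensional settings, measurement cost models, physics-informed priors), but the uncertainty-driven structure is the same.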
The Platform Grant renewal 'From Nanoscale Structure to Nanoscale Function' will provide underpinning support for a remarkable team of researchers who together bring exactly the skill set needed for this kind of research. It builds on the success of the current Platform Grant, 'Molecular Quantum Devices', which has given crucial support to the team and to the development of their careers. The combination of skills, and the commitment to working towards shared goals, has empowered the team to make progress that would not have been possible otherwise. For example, our team's broad range of complementary skills was vital in allowing us to develop a method, now patented, for making nanogaps in graphene. This led to reproducible and stable methods of making molecular quantum devices, the core subject of that grant. The renewal of the Platform Grant will underpin other topics that also build on achievements of the current grant, and which require a similar set of skills to determine how function on the nanoscale depends on structure on the nanoscale. You can get a flavour of the research to be undertaken from the questions that motivate the researchers to be supported by the grant. Here is a selection. Can we extend quantum control to bigger things? Can molecular-scale magnets be controlled by a current? How do molecules conduct electricity? How can we pass information between light and microwaves? How can we measure a thousand quantum devices in a single experiment? Are the atoms in our devices where we want them? Can computers decide what to measure next? As we make progress on questions like these, we shall better understand how structure on the nanoscale gives rise to function on the nanoscale. And that understanding will in turn provide the basis for new discoveries and new technologies.
Project (2021 - 2026)
Funder: UK Research and Innovation
Project Code: EP/W002965/1
Funder Contribution: 2,623,130 GBP
Partners: Graphcore; Invenia Labs; University of Cambridge; Wayve Technologies Ltd.; Tractable Ltd

Modern Artificial Intelligence is dominated by methods that learn from large amounts of data. These machine learning methods underpin many current technologies, such as voice recognition, face recognition, product recommendation, social media news feeds, online advertising, and autonomous vehicles. They are also the basis of recent breakthroughs in AI, like the game-playing systems that can beat humans at chess, Go, and poker. Machine learning also underlies many practical advances in science, engineering, and medicine, such as automated tools for analysing genomic data and medical images. These advances have come about through the use of large, complex deep learning models, open-source software, very large datasets, new computer hardware, and distributed computation.

Despite the spectacular successes, industry investment, and media attention, many limitations, and therefore opportunities for research, remain. The limitations of current AI systems include poor handling of noise, uncertainty, and changing circumstances; gaps in the ability to combine symbolic and statistical reasoning; and the lack of automation of many of the stages of learning. This project will advance modern data-driven AI methods by developing a number of new algorithms and applications to address these limitations. The work will bring together symbolic and statistical methods through new, scalable deep probabilistic approaches. These approaches will generalise better to novel data and will "know when they don't know", as sketched below. The project will also develop better tools for automating the process of building and maintaining a machine learning system. We will bring approaches from data-driven machine learning to the use of simulators, which are widely used to model and understand complex systems in science and engineering. Finally, we will apply the algorithms and software tools developed in this proposal to challenging problems in modelling and optimising complex systems with many interdependent components, in particular in the areas of electrical grid efficiency and transportation systems.
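One widely used way to make a deep model "know when it doesn't know" is a deep ensemble: several networks are trained from different random initialisations, and their disagreement is read as predictive uncertainty. The sketch below is a minimal illustration of that idea, not the project's actual methods; the data, architecture, and ensemble size are assumptions chosen for brevity.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy regression data on the interval [-2, 2].
X = rng.uniform(-2.0, 2.0, size=(500, 1))
y = np.sin(2.0 * X[:, 0]) + 0.1 * rng.standard_normal(500)

# Train an ensemble; each member differs only in its random initialisation.
ensemble = [
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=seed).fit(X, y)
    for seed in range(5)
]

def predict_with_uncertainty(x):
    """Mean prediction plus ensemble spread as an uncertainty proxy."""
    preds = np.stack([member.predict(x) for member in ensemble])
    return preds.mean(axis=0), preds.std(axis=0)

# The spread is typically small inside the training range and grows far outside it.
_, std_in = predict_with_uncertainty(np.array([[0.5]]))
_, std_out = predict_with_uncertainty(np.array([[6.0]]))
print(f"in-range std={std_in[0]:.3f}, out-of-range std={std_out[0]:.3f}")
```

Calibrated uncertainty of this kind is what lets a learning system fall back to safe behaviour, or ask for more data, when its inputs drift away from what it was trained on.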