Powered by OpenAIRE graph

Barcelona Supercomputing Center (BSC)
9 Projects, page 1 of 2
  • Funder: UK Research and Innovation | Project Code: EP/X019446/1
    Funder Contribution: 406,428 GBP

    Computational biomedicine offers many avenues for taking full advantage of emerging exascale computing resources and, as such, will provide a wealth of benefits as a use-case within the wider ExCALIBUR initiative. These benefits will be realised not just via the medical problems we elucidate but also through the technical developments we implement to enhance the underlying algorithmic performance and the workflows supporting their deployment. Without the technical capacity to use resources effectively at such unprecedented scale - whether in large monolithic simulations spread over the equivalent of many hundreds of thousands of cores, in coupled-code settings, or in massive sets of tasks launched to enhance drug discovery or to probe a human population - the advances in hardware performance and scale cannot be fully capitalised on. Our project will seek to identify solutions to these challenges and communicate them throughout the ExCALIBUR community, bringing the field of computational biomedicine and its community of practitioners to join those disciplines that make regular use of high-performance computing and are also seeking to reach the exascale. In this project, we will be deploying applications in three key areas of computational biomedicine: molecular medicine, vascular modelling and cardiac simulation. The scope and diversity of our use cases mean that we shall appeal strongly to the biomedical community at large. We shall demonstrate how to develop and deploy applications on emerging exascale machines to achieve increasingly high-fidelity descriptions of the human body in health and disease. In the field of molecular modelling, we shall develop and deploy complex workflows built from a combination of machine learning and physics-based methods to accelerate the preclinical drug discovery pipeline and to support personalised drug treatment.
These methods will enable us to develop highly selective small-molecule therapeutics for cell surface receptors that mediate key physiological responses. Our vascular studies will utilise a combination of 1D and 3D models and machine learning to examine blood flow through complex, personalised arterial and venous structures. We will seek to utilise these in the identification of risk factors in clinical applications such as aneurysm rupture and in the management of ischaemic stroke. Within the cardiac simulation domain, a new GPU-accelerated code will be utilised to perform multiscale cardiac electrophysiology simulations. By running large populations based on large clinical datasets such as UK Biobank, we can identify individuals at elevated risk of various forms of heart disease. Coupling heart models to simulations of vascular blood flow will allow us to assess how problems which arise in one part of the body (such as the heart) can cause pathologies in remote regions. This exchange of knowledge will form a key component of CompBioMedX. Through this focussed effort, we will engage with the broader ExCALIBUR initiative to ensure that we take advantage of the efforts already underway within the community and in return reciprocate through the advances made with our use cases. Many biomedical experts remain unfamiliar with high-performance computing and need to be better informed of its advantages and capabilities. We shall engage proactively with medical students early in their careers to illustrate the benefits of using modelling and supercomputers and encourage them to exploit these in their own medical research. We shall engage in a similar manner with undergraduate biosciences students to establish a culture and practice of using computational methods to inform the experimental work underpinning the basic science that is the first step in the translational pathway from bench to bedside.
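
The population-screening workflow described above is, at heart, a task farm: many independent model runs launched in parallel, with results gathered for analysis. The following is a minimal illustrative sketch only; `run_model` and its dummy risk score are invented stand-ins, not CompBioMedX code, and a real exascale workflow would dispatch full simulations via MPI or a batch scheduler rather than threads.

```python
# Toy task-farm sketch: one independent "simulation" per patient,
# run in parallel and filtered by a risk threshold.
from concurrent.futures import ThreadPoolExecutor

def run_model(patient_id: int) -> tuple[int, float]:
    # Hypothetical placeholder "risk score" so the example runs end to end;
    # in practice this would be an expensive per-patient simulation.
    return patient_id, (patient_id * 37 % 100) / 100.0

def screen_population(patient_ids, threshold=0.8):
    # Launch one task per patient and keep those above the threshold.
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = pool.map(run_model, patient_ids)
    return {pid: risk for pid, risk in results if risk > threshold}

if __name__ == "__main__":
    flagged = screen_population(range(100))
    print(len(flagged), "patients flagged for follow-up")
```

The pattern scales because the tasks are fully independent: only the final reduction (the dictionary of flagged patients) requires any coordination.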

  • Funder: UK Research and Innovation | Project Code: NE/I015612/1
    Funder Contribution: 703,951 GBP

    The volcanic plume from the Eyjafjallajökull eruption has caused significant disruption to air transport across Europe. The regulatory response, ensuring aviation safety, depends on dispersion models. The accuracy of the dispersion predictions depends on the intensity of the eruption, on the model representation of the plume dynamics, and on the physical properties of the ash and gases in the plume. Better characterisation of these processes and properties will require improved understanding of the near-source plume region. This project will bring observations and modelling to bear in order to achieve more accurate and validated dispersion predictions. The investigation will seek to integrate volcanological and atmospheric science methods in order to initiate a complete system model of the near-field atmospheric processes. This study will combine new modelling of, and insights into, the dynamics of the volcanic plume and its gravitational equilibration in the stratified atmosphere, the effects of meteorological conditions, the physical and chemical behaviour of ash particles and gases, in situ physical and chemical measurements, and ground-based and satellite remote sensing of the plume, all with very-high-resolution numerical modelling. When integrated with characterisations of the emissions themselves, the research will lead to enhanced predictive capability. The Eyjafjallajökull eruption has now paused. However, all three previous historical eruptions of Eyjafjallajökull were followed by eruptions of the much larger Katla volcano. At least two other volcanic systems in Iceland are 'primed' ready to erupt. This project will ensure that the scientific and organisational lessons learned from the April/May 2010 response to Eyjafjallajökull are translated fully into preparedness for a further eruption of this or any other volcano over the coming years.
Overall, the project will (a) complete the analysis of atmospheric data from the April/May eruption, (b) prepare for future observations and forecasting, and (c) make additional observations if there is another eruption within the forthcoming few years.
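
The transport physics at the core of a dispersion model can be illustrated, in heavily simplified form, by a one-dimensional advection-diffusion update. This is a generic textbook scheme, not the operational dispersion model used in the regulatory response or any code from this project, and every parameter value below is arbitrary.

```python
# Toy 1-D advection-diffusion step (explicit upwind scheme, periodic
# boundaries): ash is carried downwind at speed u while spreading with
# diffusivity D. Purely illustrative; real dispersion models are 3-D,
# use meteorological wind fields, and resolve particle microphysics.

def step(c, u=1.0, D=0.5, dx=1.0, dt=0.2):
    """Advance the concentration field c by one time step."""
    n = len(c)
    out = []
    for i in range(n):
        adv = -u * dt / dx * (c[i] - c[i - 1])                 # upwind advection
        dif = D * dt / dx**2 * (c[(i + 1) % n] - 2 * c[i] + c[i - 1])  # diffusion
        out.append(c[i] + adv + dif)
    return out

if __name__ == "__main__":
    c = [0.0] * 50
    c[10] = 1.0                      # point release of ash at cell 10
    for _ in range(20):
        c = step(c)
    print(f"total mass: {sum(c):.6f}")
```

With periodic boundaries both flux terms telescope to zero over the domain, so total mass is conserved while the plume drifts downwind and broadens, which is the qualitative behaviour a dispersion prediction must capture.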

  • Funder: UK Research and Innovation | Project Code: EP/M01567X/1
    Funder Contribution: 98,612 GBP

    We live in an era of multi-cores: computing processors are no longer marketed by their clock speeds, but by their number of cores. The fundamental limits of energy and power density of processors will soon push us further into an age of dark silicon, where only a small portion of the chip can be powered at any time. In such a setting, putting more of the same processing cores on a chip (i.e. homogeneity) gives no advantage. This has forced computer architects to introduce heterogeneous many-core systems built around distinct processors -- which have different energy and performance characteristics and are each specialised for a certain class of applications. Computer architects now hope that software will find ways to unlock the potential of heterogeneous many-cores. Software developers, however, are struggling to cope with this dramatic increase in complexity, and current compiler tools, whose role is to enable software to make effective use of the underlying hardware, are simply inadequate to the task. It is already a daunting task to build optimising compilers for homogeneous multi-cores consisting of identical cores, even when targeting performance alone (i.e. making programs faster). It typically takes several generations of a compiler to start to exploit a processor's potential effectively, by which time a new processor appears and the process starts again. It will be a fundamentally more difficult task to design efficient compiler heuristics for optimising energy (i.e. reducing energy consumption) and performance on heterogeneous many-cores, especially given the subtle interactions of different cores and interconnections. Even if this is achieved, the task of compiler design will likely have to be started again when moving to a newly released processor. This never-ending game of catch-up inevitably delays time to market, meaning that we rarely fully exploit the hardware in its lifetime.
If no solution is found, we will be faced with software stagnation and will be unable to offer scalable computing performance -- a driving force that has dramatically changed our society over the past 50 years. What is needed is an approach that evolves and adapts to future hardware architectural change and delivers scalable performance across hardware generations. This project offers precisely that. It will achieve this by bringing together two distinct areas of computer science -- parallel compiler design and machine learning -- to develop a new paradigm for energy and performance optimisation. Our key insight is that the best optimisation strategies can be learned from similar software/hardware settings, and that the learnt knowledge can be constantly refreshed without human involvement. This project will deliver such a smart, adaptive compilation system. We will use machine learning to acquire knowledge of workloads, applications and the underlying hardware, testing new compilation strategies, learning how each individual program should be optimised for each specific computing environment, and constantly improving the optimisation heuristics over time. As knowledge of the application environment grows, our system will make programs faster and more energy efficient; for example, software will respond more quickly and battery life will last longer on mobile phones. It will reduce time to market for software products and deliver scalable performance as hardware advances. If successful, such a programme of work will help to address the looming software crisis of dark silicon, which will be of benefit to academics, UK industry, and system software researchers and developers worldwide.
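
The idea of learning optimisation strategies from similar software/hardware settings can be caricatured as a nearest-neighbour lookup: characterise a program by a feature vector, then reuse the compiler configuration that worked best on the most similar previously seen program. The sketch below is purely illustrative; the features, flag strings and training data are all invented, and the project's actual learned models are far richer.

```python
# Toy 1-NN "learned compiler heuristic": map hypothetical program
# features (loop count, memory intensity) to the flag set that worked
# best on the nearest previously profiled program.
import math

# Invented (feature vector, best flag set) pairs, as might be collected
# offline from profiling runs on one particular machine.
TRAINING = [
    ((20.0, 0.9), "-O3 -funroll-loops"),
    ((2.0, 0.1), "-O2"),
    ((15.0, 0.8), "-O3 -funroll-loops"),
    ((1.0, 0.3), "-Os"),
]

def predict_flags(features):
    """Return the flags recorded for the nearest training program."""
    nearest = min(TRAINING, key=lambda kv: math.dist(kv[0], features))
    return nearest[1]

if __name__ == "__main__":
    print(predict_flags((18.0, 0.85)))
```

Because the "knowledge" lives in the training data rather than hand-written rules, refreshing the model for a newly released processor is a matter of re-profiling, not redesigning the compiler, which is the adaptivity the paragraph above argues for.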

  • Funder: UK Research and Innovation | Project Code: NE/T013516/1
    Funder Contribution: 356,284 GBP

    The Subpolar North Atlantic (SNA), which is the region of the Atlantic Ocean between 45-65N latitude, is a highly variable region. Surface temperatures and surface salinity here have varied on a range of time-scales, but the changes are dominated by large and slow changes on decadal or longer timescales. This decadal timescale variability appears to form a key component of a larger climate mode, the Atlantic Multidecadal Variability, which has been linked to a broad range of important climate impacts, including rainfall in the North African and south Asian monsoons, floods and droughts over Europe and North America, and the number of hurricanes. The SNA is also one of the most predictable places on Earth at decadal timescales, which suggests the potential for improved predictions of regional climate and high-impact weather years ahead. However, the origins of this variability in the SNA, and the processes controlling its impacts, are far from fully understood. There is significant evidence to suggest that anomalous heat loss from the subpolar North Atlantic Ocean to the atmosphere can instigate a cascade of changes across the North Atlantic basin in both the ocean and atmosphere. For example, changes in the SNA can change the strength of the ocean circulation to the south, affect the northward transport of heat and freshwater in the North Atlantic, and subsequently affect the upper ocean temperatures and salinity across the whole North Atlantic basin, and into the Arctic. Changes in the subpolar North Atlantic surface temperature are also thought to affect the atmospheric circulation - i.e. wind patterns - in both summer and winter. However, observational records are very short, and so there are significant problems with understanding causality, and considerable uncertainty about how well many of the important processes are represented in current climate models. 
WISHBONE will make use of new advanced climate simulations and forecast systems to make progress in understanding the impact of the subpolar North Atlantic on the wider North Atlantic basin. It will also test specific hypotheses related to understanding the specific role of heat loss over the subpolar North Atlantic in driving changes throughout the basin including the role of surface anomalies in driving wind patterns. WISHBONE is a collaboration between the National Centre for Atmospheric Science at the University of Reading, The National Oceanography Centre Southampton, The University of Oxford, and The University of Southampton from the U.K., and The National Center for Atmospheric Research, from the U.S.

  • Funder: UK Research and Innovation | Project Code: NE/R005125/1
    Funder Contribution: 40,419 GBP

    The loss of Arctic sea-ice is one of the most compelling manifestations of man-made climate change. Profound environmental change is already affecting Arctic inhabitants and ecosystems. Increasing scientific evidence, including many key papers by the PI, suggests the impacts of sea-ice loss will be felt way beyond the poles. Linkages between Arctic sea-ice loss and extreme mid-latitude weather have become an area of increasing scholarly enquiry and societal interest. Yet, significant knowledge gaps remain that demand urgent attention; in particular, the robustness of response to sea-ice loss - and its underpinning physical causes - across different climate models. The Polar Amplification Model Intercomparison Project (PA-MIP) will significantly advance the state-of-the-art in understanding and modelling the climate response to Arctic and Antarctic sea-ice loss. It will enable deeper understanding of the causes and global effects of past and future polar change, and the physical mechanisms involved. PA-MIP is a novel and unique collaboration of UK and international scientists. To promote fruitful collaboration and drive research excellence, this proposal supports two key activities: a secondment scheme and a synthesis workshop, both with direct benefit to NERC-funded science.

