
Sorbonne University
2 Projects
**Project: TruBrain (2024–2026)**
Partners: Sorbonne University, QUB, TUBITAK, EPFL
Funder: CHIST-ERA
Project Code: CHIST-ERA-22-SPiDDS-08

**Summary**

Artificial Neural Networks (ANNs) are at the core of an increasing number of applications integrated with critical systems and using sensitive user data, making security and privacy concerns critical. Compromising a classifier for object detection in a robotic application can lead to safety breakdowns, while leakage of sensitive data, such as medical records, raises privacy concerns for users and legal exposure for providers. Federated Learning (FL) has recently emerged as a promising distributed learning approach that enables learning from data belonging to multiple participants without compromising privacy, since user data is never directly exchanged. Yet, while FL has been promoted as a privacy-preserving approach, recent studies show that it is vulnerable to sophisticated attacks that can jeopardise both the integrity and the privacy of these systems, or otherwise disrupt their operation. Existing defences fall short of covering the range of threats facing FL systems, and in some cases defending against one class of attacks increases vulnerability to others. Moreover, state-of-the-art defences incur a power overhead that may be impractical for embedded systems and Edge nodes in an FL system.

While ANNs are the de facto architecture for Machine Learning (ML), neuromorphic architectures such as Spiking Neural Networks (SNNs) have recently emerged as an attractive alternative, owing to their biological plausibility and brain-inspired functionality. Moreover, neuromorphic hardware can exploit the asynchronous behaviour of neurons to achieve very high energy efficiency.

In TruBrain, we propose a research effort towards privacy-preserving, secure, and low-power distributed intelligent systems. Our research objectives are as follows:
- Objective 1: Investigating the security and privacy threats to neuromorphic nodes and characterising their inherent security- and privacy-preserving properties.
- Objective 2: Building a secure brain-inspired FL architecture: we leverage brain-inspired architectures to develop provably secure, practical neuromorphic FL systems.
- Objective 3: Bridging the gap between theory and practice in the security of distributed neuromorphic learning systems through a hardware-aware theoretical study.
- Objective 4: Designing and implementing a hardware platform for neuromorphic FL nodes on FPGA and integrating it into a RISC-V architecture.
- Objective 5: Demonstrating our neuromorphic FL paradigm in a medical application use case, and validating its trustworthiness from a security and privacy perspective.

**Relevance**

TruBrain is highly relevant to SPiDDS: we address the problem of security and privacy in decentralised and distributed learning platforms. First, we aim to rigorously analyse the security of the novel neuromorphic distributed learning paradigm using the formal framework of Byzantine resilience, and its privacy using the notion of differential privacy. Second, in parallel with building these theoretical foundations, we propose a comprehensive design that offers security and privacy while remaining suitable for low-power hardware platforms.
Our proposal is relevant to the following topics specified in the call:
• Design of hybrid software-hardware security and privacy solutions;
• Design of verification models for real-world applications of privacy and security solutions;
• Production of advanced use cases;
• Development of self-aware systems to identify and adapt to new threat vectors;
• Development of user-friendly federated learning techniques that deliver enhanced security or privacy.

Lastly, we strive for the following objectives specified in the call:
• Develop methods to improve the security and/or privacy of distributed and decentralised systems whilst maintaining system performance;
• Lead to decentralised and distributed systems that can be adapted and reconfigured end-to-end in order to maintain security and privacy for users.
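The federated workflow at the heart of the abstract can be made concrete with a short sketch. The Python toy below is our illustration, not the project's design: it assumes a plain linear model, NumPy only, coordinate-wise-median aggregation as a stand-in for the Byzantine-resilience mechanisms named above, and Gaussian noising of clipped updates as a stand-in for differential privacy; the clipping bound and noise multiplier are arbitrary. It shows one round in which clients share only model updates, never raw data:

```python
# Minimal sketch of one "secure" federated-learning round (illustrative only;
# the project's actual neuromorphic/SNN design is not reproduced here).
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, clip=1.0):
    """One gradient step on a client's private data; only the clipped
    update (never the data) leaves the client."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)  # least-squares gradient
    update = -lr * grad
    norm = np.linalg.norm(update)
    if norm > clip:                              # clip to bound DP sensitivity
        update *= clip / norm
    return update

def aggregate(updates, clip=1.0, noise_mult=0.5):
    """Byzantine-robust aggregation (coordinate-wise median), then Gaussian
    noise calibrated to the clipping bound, in the style of DP mechanisms."""
    robust = np.median(np.stack(updates), axis=0)
    return robust + rng.normal(0.0, noise_mult * clip, size=robust.shape)

# Toy run: 5 clients, each holding its own private data.
d, w = 3, np.zeros(3)
clients = [(rng.normal(size=(20, d)), rng.normal(size=20)) for _ in range(5)]
for _ in range(10):
    updates = [local_update(w, X, y) for X, y in clients]
    w = w + aggregate(updates)
```

The median aggregator bounds the influence any single (possibly malicious) client can exert on the global model, while the clipping bound fixes the sensitivity to which the added noise is calibrated, which is why the two defences are typically designed together.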
**Project: CHIST-ERA-22-ORD-06 (2024–2027)**
Partners: Jagiellonian University, Durham University, University College London, University of Glasgow, KNU, CNRS-LPSC, Sorbonne University, Boğaziçi University
Funder: CHIST-ERA
Project Code: CHIST-ERA-22-ORD-06

**Summary**

The Large Hadron Collider (LHC), and other major particle-physics experiments past, present, and future, are vast public investments in fundamental science. However, while the data-analysis and publication mechanisms of such experiments are sufficient for well-defined targets such as the discovery of the Higgs boson in 2012 (and of the W, Z, and gluon bosons before it), they limit the power of the experimental data to explore more subtle phenomena. In the ten years since the Higgs-boson discovery, the LHC has published many analyses testing the limits of the Standard Model (SM), the established but suspected-incomplete central paradigm of particle physics. Each direct-search paper has statistically disproven some simplified models of physics beyond the SM, but such models are a priori no more likely than more complex ones: the latter feature a mixture of the simplified models' new phenomena, at lower intensity rather than concentrated into a single characteristic. Studying such "dispersed signal" models requires a change in how LHC results are interpreted: the emphasis must shift to combining measurements of many different event types and characteristics into holistic meta-analyses. Only such a global, maximum-information approach can optimally exploit the LHC results.

This project will provide a step towards building the infrastructure needed to make this change. It will enable experiments to provide fast, re-runnable versions of their data-analysis logic through enhancements of a domain-specific language and of event-analysis toolkits (a toy illustration of such preserved, replayable logic is sketched below). It will join up the network of such toolkits with the public repositories of research data and metadata. It will provide common interfaces for controlling preserved analyses across the multiple toolkits, for statistically combining the thousands of measurements, and for assessing which combinations provide the most powerful scientific statement about any beyond-SM theory. At the start of the third major data-taking run of the LHC, the time is now ripe to put this machinery and culture in place, so that the LHC legacy is publicly preserved for all to reuse.

The project specifically aims to enhance the extent to which public analysis data from particle-physics experiments (in a general sense, but particularly summary results such as those used in publication plots and statistical inference, rather than raw collider events) can be combined and reused to test theories of new physics. These tests, pursued by theorists and experimentalists alike, can also reach beyond particle physics, connecting to astrophysics, cosmology, and nuclear-physics direct searches for dark matter. The value of combining information from different individual analyses was made clear early in the LHC programme, as early experimental data proved crucial for improving models of SM physics.
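As a toy illustration of the re-runnable analysis logic described above (our sketch, not the project's actual tooling; real preservation efforts build on dedicated event-analysis toolkits, and every cut, distribution, and number below is invented), the key idea is that a published event selection is captured as a pure function that can later be replayed unchanged on any new event sample:

```python
# Minimal sketch of "preserved, re-runnable analysis logic": the selection is
# a frozen function, replayable on any later sample (e.g. a new BSM model).
import numpy as np

rng = np.random.default_rng(1)

def analysis(events):
    """Preserved analysis logic: apply the published cuts and return the
    histogram that appeared in the paper."""
    mask = (events["pt"] > 25.0) & (np.abs(events["eta"]) < 2.5)  # cuts
    return np.histogram(events["mass"][mask], bins=10, range=(0, 200))[0]

def simulate(n, mass_peak):
    """Toy event generator standing in for a model's simulated events."""
    return {"pt": rng.exponential(40.0, n),
            "eta": rng.uniform(-4.0, 4.0, n),
            "mass": rng.normal(mass_peak, 15.0, n)}

# The same frozen logic is replayed on an SM-like sample and on a
# hypothetical new-physics sample, making the two directly comparable.
sm_hist = analysis(simulate(10_000, mass_peak=91.0))
bsm_hist = analysis(simulate(10_000, mass_peak=125.0))
```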
The huge scientific volume, greater model complexity, and increased precision of the full LHC programme require pursuing this approach in a more systematic and scalable manner, open to the whole community and including the use of reference datasets to ensure validity far into the future. The time is right for this step, as the key technologies (DOI minting and tracking, RESTful Web APIs, version-control hosting with continuous integration, containerisation) have matured over the past five or so years. Particle physics already has established open-data and publication repositories, but the crucial link connecting these to scalable preservations of the analysis logic still needs to be made, as does normalising the culture of providing such preservations and engaging with the FAIR principles for open science data. Individual physicists are generally enthusiastic about such ideals, as evidenced by the uptake of open-data policies at particle-physics laboratories and the preservation of full collider software workflows. But an explicit, funded effort is required to eliminate the technical barriers and make these desirable behaviours more accessible and better rewarded.
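The statistical combination the project targets can likewise be sketched in miniature. The following toy (our illustration under strong simplifying assumptions: independent measurements treated as Gaussian likelihoods, a single invented beyond-SM coupling, and made-up numbers throughout; real combinations must handle correlations and far richer likelihood shapes) scans a parameter against the summed chi-square of several "published" results:

```python
# Minimal sketch of a global statistical combination of published
# measurements (illustrative only; all values are invented).
import numpy as np

# Hypothetical published measurements: (observed value, uncertainty).
measurements = [(1.02, 0.05), (0.31, 0.04), (2.10, 0.20)]

def prediction(theta, i):
    """Toy model: each observable shifts linearly with the BSM coupling
    theta; theta = 0 recovers the Standard Model expectation."""
    sm_expectation = [1.00, 0.30, 2.00]
    bsm_slope = [0.50, -0.20, 1.50]
    return sm_expectation[i] + bsm_slope[i] * theta

def global_chi2(theta):
    """Sum of per-measurement chi-squares: valid only for independent,
    Gaussian-distributed results."""
    return sum(((obs - prediction(theta, i)) / sigma) ** 2
               for i, (obs, sigma) in enumerate(measurements))

# Scan the coupling and read off the combined 95% CL interval
# (delta chi-square < 3.84 for one free parameter).
thetas = np.linspace(-0.5, 0.5, 1001)
chi2 = np.array([global_chi2(t) for t in thetas])
best = thetas[chi2.argmin()]
allowed = thetas[chi2 - chi2.min() < 3.84]
print(f"best-fit theta = {best:.3f}, "
      f"95% CL interval = [{allowed.min():.3f}, {allowed.max():.3f}]")
```

Even in this toy, the combined interval is tighter than any single measurement would give, which is the "maximum-information" point the abstract makes: weak, dispersed signals only become decisive when many results are confronted with a theory jointly.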
For further information contact us at helpdesk@openaire.eu