
LLR

Leprince-Ringuet Laboratory
13 Projects, page 1 of 3
  • Funder: French National Research Agency (ANR) Project Code: ANR-13-JS05-0002
    Funder Contribution: 263,624 EUR

    The recent discovery of a new boson X by the two experiments (ATLAS and CMS) of the Large Hadron Collider (LHC) is a major breakthrough in the understanding of the fundamental interactions. It could help elucidate the nature of the spontaneous electroweak symmetry breaking (EWSB) mechanism presumed responsible for the appearance of the masses of the elementary particles in the early moments of the Universe. The decay channels with charged leptons in the final state offer the best opportunities for an exhaustive study of the properties of the new boson. The X -> ZZ* -> 4l channel (where l is an electron or a muon) is the "golden" mode for the discovery. On its own it allows a precise determination of the mass mX, and it can moreover provide a determination of the spin-parity state (S^CP) by exploiting the decay kinematics in the centre-of-mass frame of the resonance. The X -> 2 taus channel is essential to establish the existence of a direct coupling to leptons. The coupling to tau leptons is the only lepton coupling likely to be accessible at the LHC within the timescale of this project. It can moreover provide good sensitivity to S^CP, through the study of the polarization of the tau leptons, while offering some sensitivity to the boson mass. The X boson production modes can be distinguished experimentally in a rather clean manner in both the X -> ZZ* -> 4l and X -> 2 taus channels, through the presence of additional leptons or jets. In particular, the distinction between production by gluon fusion (through virtual loops of top quarks), associated production with a vector boson W or Z, and vector boson fusion (ZZH and WWH) is particularly important for constraining the couplings. The aim of this very challenging project is to develop and deploy innovative and powerful analysis techniques to provide the best measurements of the properties of the new boson, via leptonic final states, at the LHC at higher energy and luminosity.
The project will provide, for the first time at the LHC, an analysis chain in which new lepton reconstruction techniques are incorporated into a global description of the events, interleaved with a full interpretation via event weighting based on a novel next-to-leading-order Matrix Element approach. It will benefit from the unique expertise in Europe at the Laboratoire Leprince-Ringuet (LLR) of the Ecole Polytechnique, together with partners from the Fakultet elektrotehnike strojarstva i brodogradnje (FESB) in Split (Croatia), on leptons (reconstruction, identification and isolation of charged leptons), as well as on the X -> ZZ* -> 4l and X -> 2 taus analyses in the CMS experiment at the LHC.
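    The precise mass determination in the 4l channel rests on the invariant mass of the four reconstructed leptons. A minimal sketch of that computation (the four-vectors below are an invented toy event, not CMS data):

    ```python
    import math

    def invariant_mass(leptons):
        """Invariant mass of a list of (E, px, py, pz) four-vectors (GeV)."""
        E = sum(l[0] for l in leptons)
        px = sum(l[1] for l in leptons)
        py = sum(l[2] for l in leptons)
        pz = sum(l[3] for l in leptons)
        m2 = E * E - px * px - py * py - pz * pz
        return math.sqrt(max(m2, 0.0))  # guard against rounding below zero

    # Toy event: four massless leptons whose momenta cancel pairwise,
    # corresponding to a 125 GeV resonance decaying at rest.
    leptons = [(31.25, 31.25, 0.0, 0.0), (31.25, -31.25, 0.0, 0.0),
               (31.25, 0.0, 31.25, 0.0), (31.25, 0.0, -31.25, 0.0)]
    ```

    For this balanced toy event `invariant_mass(leptons)` returns 125.0; in a real analysis the four-vectors come from the reconstructed electrons and muons after identification and isolation.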

  • Funder: French National Research Agency (ANR) Project Code: ANR-18-CE31-0007
    Funder Contribution: 295,777 EUR

    Major discoveries in high-energy physics have always relied on the development of innovative detectors and data acquisition techniques, and these technologies have also found numerous applications in various domains (e.g. health, energy). New calorimetry techniques developed for future high-energy and high-luminosity accelerators are providing more granular detectors that give access to a 3D view of particle showers. This is in particular the case for the calorimeters being developed for the very high luminosities of the LHC (HL-LHC). The fine segmentation of these calorimeters is a powerful tool to reconstruct the very busy collision events produced in such colliders, made of the products of more than a hundred proton-proton interactions (pile-up). But the amount of data produced by such detectors is enormous and raises new challenges for the trigger systems, which need to transfer and process these data in the most effective way. The architecture of these systems and the algorithmic techniques they implement need to be completely redesigned to make use of this unprecedented data flow.
In addition, more global pictures of the collision events are also necessary at trigger level to maintain an efficient selection of interesting physics events at high luminosities. This is why the information from the inner trackers will be included in the level-1 (L1) trigger systems of the ATLAS and CMS experiments for the HL-LHC, providing the possibility to develop so-called Particle Flow algorithms already at the electronics level of the trigger system. The topologies of interesting collision events need to be identified rapidly despite the extremely harsh environment induced by the pile-up of more than a hundred collisions. There are currently no algorithms that can identify electrons, photons, tau leptons and hadron jets in 3D calorimeters and trackers within the time window available in L1 trigger systems. The objective of this project is to develop and implement innovative event reconstruction techniques for L1 trigger systems, based on highly granular calorimeters coupled with trackers. These techniques will make it possible to exploit the full potential of the upgraded detectors for the HL-LHC, such as the new CMS highly granular endcap calorimeters (HGCal) and track trigger. This relies on resolving technological obstacles at several points of the trigger chain in order to ensure that the best trigger decisions are made.
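    The kind of fast, local trigger-primitive building described here can be illustrated with a deliberately simplified toy: seed cells are local energy maxima above a threshold, and a cluster energy is the sum over a small neighbourhood. This is not the actual HGCal algorithm; the grid geometry, threshold and window size are invented for illustration.

    ```python
    def trigger_clusters(cells, seed_thr=5.0):
        """Toy trigger-primitive builder.

        `cells` maps (ix, iy) grid coordinates to an energy in GeV.
        A seed is a cell above `seed_thr` that is a local maximum of its
        3x3 neighbourhood; the cluster energy is the 3x3 sum.
        """
        clusters = []
        for (ix, iy), e in cells.items():
            if e < seed_thr:
                continue
            neighbours = [cells.get((ix + dx, iy + dy), 0.0)
                          for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                          if (dx, dy) != (0, 0)]
            if all(e >= n for n in neighbours):  # local maximum
                clusters.append((ix, iy, e + sum(neighbours)))
        return clusters
    ```

    The real challenge at L1 is doing this (and much more) in fixed-latency electronics on millions of channels, which is why the algorithms have to be co-designed with the hardware.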

  • Funder: French National Research Agency (ANR) Project Code: ANR-21-CE31-0030
    Funder Contribution: 215,338 EUR

    Most of the recent discoveries in particle physics are linked to increases in detector volume and/or granularity, to observe complex phenomena that were previously inaccessible due to a lack of precision. This approach increases the available statistics and precision by multiple orders of magnitude, which facilitates the detection of rare events, at the price of a significant increase in the number of channels. The challenge is that most of the standard techniques for reconstruction and triggering are not operable in such a context. For example, energy-threshold-based triggers fail to handle the complexity of high pile-up collisions. Neural-network methods are known to handle noisy and complex data inputs well, delivering high-level classification and regression. In particular, convolution techniques have enabled outstanding improvements in the field of computer vision. Unfortunately, they do not cope with the very peculiar topologies of particle detectors and the irregular distribution of their sensors. Alternatives have been developed to obtain the same classification power in that kind of non-Euclidean environment, for example the spatial graph convolution, which applies adapted convolution kernels to the data represented as an undirected graph labelled by the sensor measurements. These techniques have proven to give excellent results on particle detector data at the Large Hadron Collider but also for neutrino experiments. They allow particle identification and continuous parameter regression, but also segmentation of entangled data, which is a typical concern in secondary particle showers. The operations that transform the data into a graph are often very computationally expensive.
In particular, all the techniques in which this operation is based on learned parameters (in the machine-learning sense) prevent the system from being used in contexts where the computational time or latency is constrained (any triggering electronics, real-time data monitoring systems, or even offline systems with too large a data volume). For example, in the Super-Kamiokande neutrino experiment, a complex shape identifier would advantageously replace the current energy cut during the reconstruction phase, which rejects many low-energy events despite their physical interest. Another example is the future high-granularity endcap calorimeter (HGCal) of CMS, for which it becomes crucial to extract high-level trigger primitives directly from the electronics in order to handle the complexity of high-luminosity collisions and take accurate triggering decisions. This is why it is of utmost importance to design high-performance versions of these algorithms, which can increase the performance in all these constrained situations and allow their deployment in the detectors. The objective of this project is to develop and implement new efficient selection algorithms for constrained computational environments by combining three main ideas:
    • Reducing the graph-construction complexity by developing algorithms based on pre-calculated graph connectivity, which would yield an almost linear complexity for the online part by exploiting the intrinsic parallelism of the problem. This is made possible by the fixed positioning of the sensors in particle detectors.
    • Developing a segmented version of graph convolution, allowing it to be distributed over multiple computational units.
    • Optimizing the size and nature of the convolutional networks with advanced derivative-free optimization techniques, adapted to the electronics implementation.
These objectives will be pursued in three experimental contexts: offline HGCal reconstruction, online HGCal level-1 triggering, and Super-Kamiokande reconstruction of the Diffuse Supernova Neutrino Background (DSNB).
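    The first idea, precomputing the graph connectivity offline from the fixed sensor layout so that only cheap neighbour aggregation remains in the online step, can be sketched as follows. This is a toy: the mean-aggregation convolution and the scalar weights stand in for learned kernels, and the 2D positions are invented.

    ```python
    def precompute_knn(positions, k=2):
        """Offline step: k-nearest-neighbour adjacency from the fixed
        sensor positions. Possible because the detector geometry never
        changes, so this cost is paid once, not per event."""
        adj = []
        for i, p in enumerate(positions):
            dists = sorted((sum((a - b) ** 2 for a, b in zip(p, q)), j)
                           for j, q in enumerate(positions) if j != i)
            adj.append([j for _, j in dists[:k]])
        return adj

    def graph_conv(features, adj, w_self=0.5, w_neigh=0.5):
        """Online step: one mean-aggregation graph convolution.
        With the adjacency fixed in advance, the per-event cost is
        linear in the number of edges and trivially parallel per node."""
        out = []
        for i, f in enumerate(features):
            neigh = adj[i]
            mean = sum(features[j] for j in neigh) / len(neigh) if neigh else 0.0
            out.append(w_self * f + w_neigh * mean)
        return out
    ```

    In a real detector the per-node loop maps naturally onto parallel hardware, which is the intrinsic parallelism the project proposes to exploit.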

  • Funder: French National Research Agency (ANR) Project Code: ANR-23-CE31-0017
    Funder Contribution: 386,312 EUR

    The project aims to study the impact of the time accuracy of calorimeters on the reconstruction quality of Particle Flow Algorithms (PFA), and in particular to determine what time accuracy is necessary to improve the separation of close-by hadronic and electromagnetic showers. To do so, the timing response of the prototype calorimeters SiWECAL (electromagnetic) and SDHCAL (hadronic), equipped with MGRPC detectors (Multi-layer Glass Resistive Plate Chambers), will be modelled and included as input to the ARBOR and APRIL PFA algorithms.

  • Funder: French National Research Agency (ANR) Project Code: ANR-16-CE31-0002
    Funder Contribution: 429,937 EUR

    This project entails the study of a novel state of matter, the Quark-Gluon Plasma (QGP), via collisions of heavy ions at the Large Hadron Collider (LHC) with the Compact Muon Solenoid (CMS). More specifically, the project investigates the jet quenching phenomenon, whereby partons lose energy as they traverse the QGP. Run 1 of the LHC allowed for the first clean measurement of fully reconstructed jets in heavy ions. The large dijet transverse momentum asymmetries observed in the first Pb-Pb data remain among the most acclaimed results from the LHC thus far and have stimulated a wealth of theoretical developments. Despite these advances, an accurate modelling of parton energy loss, to the extent that it can be reliably encoded in Monte Carlo generators, has not yet been achieved. This would require more differential measurements to address remaining ambiguities on the theory side. One of the most important open questions is the dependence of energy loss on the flavor of the initiating parton. This proposal aims to address this important open question using jets initiated by massive quarks, whose interest is twofold: 1) Based on the non-Abelian nature of QCD, quark jets are widely expected to lose less energy than gluon jets. This flavor dependence, which has not yet been directly observed, is predicted to depend on the details of the model (e.g., strong vs. weak coupling models). 2) Radiation from massive quarks is known to be damped in the direction of propagation. This should lead to a reduction of radiative energy loss in the QGP, particularly for energies not much larger than the heavy quark mass. The ongoing Run 2 of the LHC enables precision studies of heavy quark jets in heavy-ion collisions for the first time.
While such jets have long been a standard tool of the high-energy physicist, the first proof-of-principle measurement of such jets in heavy ions was only recently performed, namely a measurement of the b-jet nuclear modification factor in Pb-Pb collisions. While this measurement has already ruled out a dramatic flavor dependence at large transverse momentum, it is limited by the sizable systematic uncertainties inherent to jet spectrum measurements, as well as by an irreducible background arising from collinear splitting of gluons to b-quark pairs in the final state. HotShowers aims to address these limitations using back-to-back correlations of b-jets, which a) greatly reduce sensitivity to systematic effects such as the uncertainty from the jet energy scale, and b) largely eliminate the contribution from gluon splitting. As jet quenching measurements in Pb-Pb collisions become increasingly precise, it becomes correspondingly necessary to constrain cold nuclear matter effects, accessible through p-Pb collisions, which are scheduled to be delivered at the end of this year. Pairs of b-jets will provide increased sensitivity to the nuclear gluon distribution, as they are produced almost exclusively through gluon fusion. I further plan a first measurement of heavy flavor jets correlated to prompt photons. This measurement sets the stage for a precision measurement in this channel in Pb-Pb collisions in LHC Run 3 that will allow for a more direct estimate of heavy quark energy loss for both charm and bottom. To facilitate our physics objectives, the team will lead the effort to develop tracking algorithms for heavy ions for the upgraded pixel detector that will be installed for the 2018 Pb-Pb run. Finally, the interpretation of our measurements will be facilitated by phenomenological studies that will investigate the sensitivity of the heavy flavor observables to cold and hot nuclear matter effects.
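    The dijet transverse-momentum asymmetry mentioned above is a simple observable; a minimal sketch (the momenta in the example are illustrative, not measured values):

    ```python
    def dijet_asymmetry(pt_lead, pt_sublead):
        """A_J = (pT1 - pT2) / (pT1 + pT2): 0 for a balanced dijet pair,
        approaching 1 as the subleading jet loses energy in the medium."""
        return (pt_lead - pt_sublead) / (pt_lead + pt_sublead)
    ```

    For instance, a 120 GeV leading jet recoiling against an 80 GeV subleading jet gives A_J = 0.2; quenching shifts the measured A_J distribution toward larger values relative to proton-proton collisions.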

3 Organizations, page 1 of 1

