Powered by OpenAIRE graph
510 Projects, page 1 of 102
  • Funder: European Commission Project Code: 209797
  • Funder: European Commission Project Code: 945878
    Overall Budget: 1,420,000 EUR; Funder Contribution: 1,420,000 EUR

    Almost ten years into the highly successful physics programs of both ATLAS and CMS, our understanding of the Standard Model (SM) of particle physics has deepened. Nonetheless, what lies beyond the SM remains one of the most urgent questions of physics in the 21st century. To move forward, one must think outside the box and leap into uncharted waters. Searches today aim at the high-energy frontier, while low-mass resonances are mostly overlooked at the Large Hadron Collider (LHC). Consequently, far-reaching hints of new physics may silently hide in the data. Motivated by numerous New Physics (NP) scenarios that often predict light states, such as extended Higgs sectors, axion physics, and dark sector models, among others, the PI will develop new techniques to search for low-mass resonances decaying into two collimated, low-pT hadronic τ leptons. τ leptons, being the heaviest, third-generation leptons, provide a unique experimental opportunity to search for low-lying states that would otherwise go undetected.
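    The collimation argument in this abstract can be made concrete with a back-of-the-envelope sketch: the opening angle of a two-body decay scales roughly as ΔR ≈ 2m/pT, so a light resonance produced with a sizeable boost yields a τ pair that falls inside a single typical isolation cone. The following toy illustration is ours, not the project's; the function names, masses, and cone size are illustrative assumptions.

    ```python
    import math

    def dimass(pt1, eta1, phi1, pt2, eta2, phi2):
        """Invariant mass of two approximately massless particles:
        m^2 = 2 * pT1 * pT2 * (cosh(d_eta) - cos(d_phi))."""
        return math.sqrt(2.0 * pt1 * pt2 *
                         (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2)))

    def delta_r(eta1, phi1, eta2, phi2):
        """Angular separation in (eta, phi), with d_phi wrapped to (-pi, pi]."""
        dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
        return math.hypot(eta1 - eta2, dphi)

    # Rough opening angle of a light resonance decaying to two taus:
    # dR ~ 2 m / pT.  For an assumed m = 20 GeV produced at pT = 200 GeV,
    # the pair separation is ~0.2, well inside a typical 0.4 isolation cone,
    # so the two hadronic taus overlap and standard reconstruction struggles.
    approx_dr = 2.0 * 20.0 / 200.0
    ```

    As a sanity check, two back-to-back 10 GeV massless objects reconstruct to a 20 GeV invariant mass, while the collimated configuration above has ΔR ≈ 0.2.
    
    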

  • Funder: European Commission Project Code: 101116258
    Overall Budget: 1,419,380 EUR; Funder Contribution: 1,419,380 EUR

    Arguably, the most crucial objective of Learning Theory is to understand the basic notion of generalization: how can a learning agent infer from a finite amount of data to the whole population? Today's learning algorithms are poorly understood from that perspective. In particular, best practices, such as using highly overparameterized models to fit relatively few data points, seem almost to contradict common wisdom, and classical models of learning seem incapable of explaining the impressive success of such algorithms. The objective of this proposal is to understand generalization in overparameterized models and to understand the role of algorithms in learning. Toward this task, I will consider two mathematical models of learning that shed light on this fundamental problem. The first model is the well-studied, yet only seemingly well-understood, model of Stochastic Convex Optimization. My investigations so far have provided a new picture that is much more complex than was previously known or assumed, regarding fundamental notions such as regularization, inductive bias, and stability. These works show that even in this simplistic setup of learning, understanding such fundamental principles may be a highly ambitious task. On the other hand, given the simplicity of the model, such an understanding seems to be a prerequisite for any future model that would explain modern Machine Learning algorithms. The second model considers the modern task of synthetic data generation, which serves as an ideal setting in which to further study the tension between concepts such as generalization and memorization. Here we face the challenge of modeling the question of generalization and answering fundamental questions such as: when is synthetic data original, and when is it a copy of the empirical data?
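    The tension this abstract describes, overparameterized models that exactly fit few data points yet can still behave sensibly off the training set, can be seen in a few lines. The sketch below is our illustration, not part of the proposal: it fits 15 polynomial coefficients to 5 samples of a linear target, and `numpy.linalg.lstsq` on the underdetermined system returns the minimum-norm interpolant, a simple stand-in for the implicit inductive bias of a learning algorithm.

    ```python
    import numpy as np

    # Five training samples of a simple linear target y = 2x.
    x_train = np.linspace(-1.0, 1.0, 5)
    y_train = 2.0 * x_train

    def features(x, degree=14):
        """Polynomial features 1, x, ..., x^degree (15 columns for degree 14)."""
        return np.vander(x, degree + 1, increasing=True)

    Phi = features(x_train)  # shape (5, 15): far more parameters than data
    # On an underdetermined system, lstsq returns the MINIMUM-NORM solution
    # among the infinitely many coefficient vectors that fit the data exactly.
    coef, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

    train_err = np.max(np.abs(Phi @ coef - y_train))  # interpolation: ~0
    # The "obvious" solution (0, 2, 0, ..., 0) has norm 2, so the
    # minimum-norm interpolant can have norm at most 2.
    coef_norm = np.linalg.norm(coef)
    ```

    Among all exact interpolants, which one the algorithm selects (here, minimum norm) is what determines behavior beyond the training data; this is one toy face of the regularization and inductive-bias questions the abstract raises.
    
    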

  • Funder: European Commission Project Code: 631323
  • Funder: European Commission Project Code: 630974
