
SAFRAN ELECTRONICS & DEFENSE

Country: France

20 Projects, page 1 of 4
  • Funder: European Commission; Project Code: 665044
    Overall Budget: 2,702,400 EUR; Funder Contribution: 2,702,400 EUR

    Future data processing challenges in science will enter the "Big Data" era, involving massive, complex and heterogeneous data. Extracting, with high precision, every bit of information from scientific data requires overcoming fundamental statistical challenges, which mandate the design of dedicated methods that must be both effective enough to capture the intricacy of real-world datasets and robust to the high complexity of instrumental measurements. Moreover, future datasets, such as those provided by the space mission Euclid, will involve at least gigascale data, which will make it mandatory to develop new, physically relevant data models and to implement effective and computationally efficient processing tools. The recent emergence of novel data analysis methods in machine learning should foster a new modeling framework, allowing for a better preservation of the intrinsic physical properties of real data, which generally live on intricate spaces such as signal manifolds. Furthermore, advances in operations research and optimization theory pave the way for effective solutions to large-scale data processing bottlenecks. In this context, the objective of the DEDALE project is threefold: i) introduce new models and methods to analyze and restore complex, multivariate, manifold-based signals; ii) exploit the current knowledge in optimization and operations research to build efficient numerical data processing algorithms in large-scale settings; and iii) show the reliability of the proposed data modeling and analysis technologies to tackle Scientific Big Data challenges in two different applications: one in cosmology, to build the dark matter mass map of the universe, and one in remote sensing, to increase the capabilities of automatic airborne imaging analysis systems.

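    As a rough illustration of the optimisation-based restoration methods this abstract refers to (and not the project's actual algorithms), the sketch below recovers a sparse signal with proximal-gradient iterations (ISTA); the measurement operator, data and regularisation weight are illustrative placeholders.

    # Minimal sketch: sparsity-regularised restoration with ISTA (iterative
    # soft-thresholding). Generic illustration only; the operator A, data y and
    # the weight lam are placeholders, not DEDALE's actual models.
    import numpy as np

    def soft_threshold(x, t):
        """Proximal operator of the l1 norm: shrink x towards zero by t."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A, y, lam, n_iter=200):
        """Minimise 0.5*||A x - y||^2 + lam*||x||_1 by proximal gradient steps."""
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz constant of the gradient
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)                # gradient of the data-fidelity term
            x = soft_threshold(x - step * grad, step * lam)
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = rng.standard_normal((100, 400))         # toy compressive measurement operator
        x_true = np.zeros(400)
        x_true[rng.choice(400, 10, replace=False)] = 1.0
        y = A @ x_true + 0.01 * rng.standard_normal(100)
        x_hat = ista(A, y, lam=0.1)
        print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
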
  • Funder: French National Research Agency (ANR); Project Code: ANR-23-MOXE-0005
    Funder Contribution: 449,244 EUR

    The consortium associates Safran Electronics & Defense (SED) with the robotics laboratory (CAOR) of Mines Paris (Paris Sciences et Lettres). The project responds to the MOBILEX challenge call for proposals and addresses two fundamental issues in autonomous mobile robotics. The first is mobility in unstructured areas, which disrupts autonomous vehicle control: it introduces uncertainty and unknowns into route planning, the feasibility of trajectories, control, and the temporal or spatial uncertainty at different points along the trajectory. The second is the semantic interpretation of the environment by the vehicle (the function equivalent to a human understanding a scene), to which machine learning, the perception functions and the proximal sensors (cameras, LIDAR) contribute: planning and control techniques build on these representations and interpret them to produce intelligent navigation, manoeuvre and mobility behaviour. The scientific approach consists in building and maintaining an interpretable representation of the environment and deriving from it the models needed for localization, computation of navigation plans and control strategies. In terms of machine learning for geolocation, perception and mapping, the flexibility of the models gives the robot the expected level of autonomy, in interaction with an operator supervising the mission progress. Several technological breakthroughs are expected in order to meet the challenges of unstructured environments and of disturbances such as the loss of GNSS or communications. The project builds on existing assets developed by SED and used within the FURIOUS, COHOMA and iMUGS projects; this architecture provides an on-board computing framework and a set of software components. Drawing on CAOR's research activities, the localization, perception and control functions will be enhanced with new methods that hybridize physical models with machine learning.

    The project is structured around five work packages: A, “Advanced Autonomous Functions”, covering the research activities; B, “Engineering of the Autonomous System Architecture”, ensuring the evolution of the SED system; C, “Development and Integration”, covering the development activities; D, “Analysis and Execution of the Challenges”, focusing on the challenge activities; and E, “Project Management”, covering all project management activities. The project has a decisive impact on SED's positioning in France and in Europe on autonomous systems, including drones, by building an offer based on a suite of autonomy modules integrating control/command, perception, navigation, artificial intelligence and dependability. For CAOR, the project enables the launch of an effective team with a concrete research objective linked to the installation of its future unit in Satory.

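    As a rough illustration of the "computation of navigation plans" step mentioned above (and not the project's actual software), the sketch below runs A* over a small traversability grid of the kind a perception stack might produce; the grid contents, costs and coordinates are illustrative placeholders.

    # Minimal sketch: plan a route over a 2D traversability grid with A*.
    # grid[r][c] == 1 marks a non-traversable cell; everything here is a placeholder.
    import heapq

    def astar(grid, start, goal):
        """4-connected A* search; returns the list of cells from start to goal, or None."""
        rows, cols = len(grid), len(grid[0])
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
        frontier = [(h(start), start)]
        came_from = {start: None}
        cost = {start: 0}
        while frontier:
            _, cur = heapq.heappop(frontier)
            if cur == goal:                        # walk the parent links back to the start
                path = []
                while cur is not None:
                    path.append(cur)
                    cur = came_from[cur]
                return path[::-1]
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cur[0] + dr, cur[1] + dc)
                if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                    ng = cost[cur] + 1
                    if ng < cost.get(nxt, float("inf")):
                        cost[nxt] = ng
                        came_from[nxt] = cur
                        heapq.heappush(frontier, (ng + h(nxt), nxt))
        return None                                # no traversable route exists

    if __name__ == "__main__":
        grid = [[0, 0, 0, 0],
                [1, 1, 0, 1],
                [0, 0, 0, 0],
                [0, 1, 1, 0]]
        print(astar(grid, (0, 0), (3, 3)))
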
  • Funder: European Commission; Project Code: 807081
    Overall Budget: 158,178,000 EUR; Funder Contribution: 113,185,000 EUR

    The Systems ITD will develop and build highly integrated, high-TRL demonstrators in major areas such as power management, cockpit, wing and landing gear, to address the needs of future-generation aircraft in terms of maturation, demonstration and innovation.

  • Funder: French National Research Agency (ANR); Project Code: ANR-17-CE33-0011
    Funder Contribution: 611,223 EUR

    MOBI-DEEP addresses the development of technologies for autonomous navigation in unknown environments using low-cost vision sensors. The project relies on the assumption that inferring semantic information (presence of particular structures, identification of objects of interest, obstacles, etc.), depth maps and motion maps describing a scene observed by a monocular camera can be sufficient for guiding a person, a robot, etc. in an open and unfamiliar environment. This departs from the currently dominant approaches, which require good prior knowledge of the environment and the ability to reconstruct its 3D metric structure (SLAM, LIDAR, etc.), and it makes it possible to handle situations where systems must navigate with limited knowledge of their environment and with a perceptual system that is as light as possible. MOBI-DEEP will address these situations through two use cases: guidance of the visually impaired and navigation of mobile robots in open areas. In both cases, the problem studied can be formulated as follows: an on-board camera, roughly localized by GPS, has to move to a specified position given by GPS coordinates. No accurate map is available, and navigation must proceed through a series of local displacements; the image sensor has to extract from the images enough information to make this navigation possible. The carrier can be a robot or a person, and we further assume that the destination can be reached by simply moving towards its direction. The problem studied is therefore that of planning a path in an unknown environment by building over time an egocentric and semantic representation of the navigable space. This raises three main questions, which will be studied in the project for both use cases: what is the minimum semantic/3D/dynamic information required to allow navigation? How can this information be extracted from monocular images? How can the system navigate dynamically through local representations of a geometrically and semantically described environment? Special emphasis will be given to experiments within a Living Lab serving a dual purpose: conducting real-scale experiments and supporting science outreach.

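    As a rough illustration of the navigation-by-local-displacements formulation above (and not the project's actual method), the sketch below computes the bearing towards a GPS goal and selects the closest traversable heading from a coarse per-direction clearance estimate, assumed to come from monocular depth/semantic inference (not implemented here); all names, values and thresholds are illustrative placeholders.

    # Minimal sketch: steer towards a GPS goal using only a per-direction
    # clearance estimate in front of the camera. Placeholders throughout.
    import math

    def goal_bearing(lat, lon, goal_lat, goal_lon):
        """Approximate bearing (radians, 0 = north) from the current GPS fix to the goal."""
        d_lat = math.radians(goal_lat - lat)
        d_lon = math.radians(goal_lon - lon) * math.cos(math.radians(lat))
        return math.atan2(d_lon, d_lat)

    def pick_heading(headings, clearance, goal, min_clearance=2.0):
        """Choose the candidate heading with enough clearance that is closest to the goal bearing."""
        def angle_diff(a, b):                       # wrap the difference to [-pi, pi]
            return math.atan2(math.sin(a - b), math.cos(a - b))
        free = [(abs(angle_diff(h, goal)), h)
                for h, c in zip(headings, clearance) if c >= min_clearance]
        return min(free)[1] if free else None       # None: no safe local motion, stop and replan

    if __name__ == "__main__":
        # Five candidate headings spanning a 90-degree field of view, relative to the camera axis.
        headings = [math.radians(a) for a in (-45.0, -22.5, 0.0, 22.5, 45.0)]
        clearance = [1.0, 3.5, 0.8, 4.2, 5.0]        # e.g. metres of free space per direction
        vehicle_heading = math.radians(30.0)         # assumed known from compass/odometry
        goal = goal_bearing(48.8566, 2.3522, 48.8570, 2.3530) - vehicle_heading
        chosen = pick_heading(headings, clearance, goal)
        print("chosen relative heading (deg):", math.degrees(chosen))
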
  • Funder: European Commission; Project Code: 945535
    Overall Budget: 69,432,000 EUR; Funder Contribution: 47,733,500 EUR

    The Systems ITD will develop and build highly integrated, high-TRL demonstrators in major areas such as power management, cockpit, wing and landing gear, to address the needs of future-generation aircraft in terms of maturation, demonstration and innovation.

    Integrated Cockpit Environment for New Functions & Operations
    - D1: Extended Cockpit
    - D24: Enhanced vision and awareness
    - D25: Integrated Modular Communications

    Innovative Cabin and Cargo technologies
    - D2: Equipment and systems for Cabin & Cargo applications

    Innovative and Integrated Electrical Wing Architecture and Components
    - D3: Smart Integrated Wing Demonstrator
    - D4: Innovative Electrical Wing Demonstrator

    Innovative Technologies and Optimized Architecture for Landing Gears
    - D5: Advanced Landing Gears Systems
    - D6: Electrical Nose Landing Gear System
    - D7: Electrical Rotorcraft Landing Gear System
    - D17: Advanced Landing Gear Sensing & Monitoring System

    High Power Electrical Generation and Conversion Architecture
    - D8.1: Innovative Power Generation and Conversion for large A/C
    - D8.2: Innovative Power Generation and Conversion for small A/C

    Innovative Energy Management Systems Architectures
    - D9: Innovative Electrical and Control/Command Networks for distribution systems
    - D10: HVDC Electrical Power Network Demonstrator

    Innovative Technologies for Environmental Control System
    - D11: Next Generation EECS for Large A/C
    - D12: Next Generation EECS Demonstrator for Regional A/C
    - D13: Next Generation Cooling systems Demonstrators
    - D16: Thermal Management demonstration on AVANT test rig

    Ice protection demonstration
    - D14: Advanced Electro-thermal Wing Ice Protection Demonstrator
    - D15: Ice Detection System

    Small Air Transport (SAT) Innovative Systems Solutions
    - D18, D19, D21: More Electric Aircraft level 0
    - D20: Low power de-ice for SAT
    - D22: Safe and Comfortable Cabin
    - D23: Affordable future avionic solution for small aircraft

    ECO Design
    - T2: Production Lifecycle Optimisation

    Long-term Technologies
    - T1: Power Electronics
    - T3: Modelling and Simulation Tools for System Integration on Aircraft

