
Samsung Electronics Research Institute

Country: United Kingdom

22 Projects
  • Funder: UK Research and Innovation Project Code: EP/V050869/1
    Funder Contribution: 1,131,070 GBP

    Knowledge graphs are graph-structured knowledge resources which are often expressed as triples such as ("UK", "hasCapital", "London") and ("London", "instanceOf", "City"). As well as such basic "facts", knowledge graphs often include structural knowledge about the domain, typically based on a hierarchy of entity types (also known as classes or concepts); e.g., ("City", "subClassOf", "HumanSettlement"). A knowledge graph that consists largely or wholly of structural knowledge is often called an ontology. Some knowledge graphs are general purpose, such as Wikidata and the Google knowledge graph, while others are developed for specific domains such as medicine. They are rapidly gaining in importance and are playing a key role in many applications. For example, Google uses its knowledge graph for search, question answering and Google Assistant, while Amazon and Apple also use knowledge graphs to power their personal assistants, Alexa and Siri, respectively. Knowledge graphs are widely used in the domain of health and wellbeing, e.g., for organising and exchanging information and for powering clinical artificial intelligence (AI). One example is FoodOn, an ontology representing food knowledge such as fine-grained food product categorization, nutrition and allergens, as well as related activities such as agriculture. Knowledge graph construction and maintenance are, however, very challenging, and may require a considerable amount of human effort. Notwithstanding the high cost of knowledge creation, knowledge graphs are often still biased, incomplete or too coarse-grained. Take HeLis, an ontology for health and lifestyle, as an example. Its food knowledge is quite simple and often represents many different variants with a single entity (e.g., "Banana" for all kinds and derivatives of bananas), and its knowledge of health is highly incomplete when compared with dedicated biomedical ontologies. 
In addition, it is hard to avoid errors such as incorrect facts and categorisations in knowledge graphs; e.g., FoodOn categorises soy milk as a kind of milk, but not as a kind of soy product. Such errors may be inherited from the information source or be caused by the construction procedure. These issues significantly impact the usefulness of knowledge graphs and the reliability of the systems that use them; e.g., the categorisation of soy milk could be dangerous if the knowledge graph were used in a food allergen alert system. Therefore, effective knowledge graph construction and curation is urgently required and will play a critical role in exploiting the full value of knowledge graphs. As there are now many available knowledge resources, one possible approach is to use multiple sources to address both coverage and quality issues, e.g., via integration and cross-checking. For example, integrating HeLis with FoodOn would combine fine-grained categorization of food products (including bananas) with lifestyle knowledge. Moreover, cross-checking FoodOn with HeLis would reveal the problem with soy milk, which is correctly categorized as a soy product in HeLis. Automating the integration of knowledge resources is challenging, but combining semantic and learning-based techniques seems to be a very promising approach, and we have already obtained some encouraging preliminary results in this direction. The proposed research will therefore study a range of semantic and machine learning techniques, and how to combine them to support knowledge graph construction and curation. As well as its application to knowledge graph construction and curation, this research will also contribute to the development of new neural-symbolic theories, paradigms and methods, such as deep semantic embedding for learning representations for expressive knowledge, and knowledge-guided learning for addressing sample shortage problems. 
These techniques promise to revolutionize many AI and big data technologies.
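    The triple representation and subclass hierarchy described above can be sketched in plain Python. This is a minimal illustration using the entity and relation names from the abstract; the `superclasses` helper is an illustrative function, not part of any specific knowledge-graph library, and the soy-milk triple reproduces the FoodOn miscategorisation discussed in the text.

    ```python
    # Minimal triple store using the examples from the text.
    triples = {
        ("UK", "hasCapital", "London"),
        ("London", "instanceOf", "City"),
        ("City", "subClassOf", "HumanSettlement"),
        # FoodOn-style miscategorisation (illustrative):
        ("SoyMilk", "subClassOf", "Milk"),
    }

    def superclasses(entity, triples):
        """Transitive closure over subClassOf edges."""
        found, frontier = set(), {entity}
        while frontier:
            nxt = {o for (s, p, o) in triples
                   if p == "subClassOf" and s in frontier}
            nxt -= found
            found |= nxt
            frontier = nxt
        return found

    # City sits under HumanSettlement in the type hierarchy:
    assert "HumanSettlement" in superclasses("City", triples)
    # Cross-checking: SoyMilk is under Milk but not under SoyProduct,
    # which is exactly the kind of error a second source could reveal.
    assert "SoyProduct" not in superclasses("SoyMilk", triples)
    ```

    Cross-checking against a second ontology such as HeLis would amount to comparing the closures computed over each source's triples.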

  • Funder: UK Research and Innovation Project Code: EP/S008101/1
    Funder Contribution: 617,539 GBP

    In recent years there has been a huge explosion in the use of mobile devices such as smartphones, laptop computers and tablets which require a wireless connection to the internet. Numbers are forecast to reach 40 billion worldwide by 2020 as areas as diverse as the home, transport, healthcare, military and infrastructure experience increasing levels of embedded 'smart' functionality and user operability. Major applications such as future 5G communications systems, the Internet of Things and Autonomous Vehicles are driving this technology. At present, wireless systems operate at frequencies up to 6GHz. However, there is a growing realisation that the spectrum below 6GHz cannot support the huge data rates being demanded by future users and applications. The next step is to develop technologies utilising much higher frequencies to give data rates compatible with future demand. Currently, world licensing bodies such as ETSI and ITU have identified millimetre wave frequencies up to 90 GHz as most likely for this expansion in the spectrum. Strategically, the UK must develop wireless technologies to compete on the world stage, particularly with the Far East. Superfast, 5G-level telecoms infrastructure is central to the Industrial Strategy Green Paper, which the UK government has been championing as one of its ten strategic pillars. Two technology bottlenecks in millimetre wave receivers, which are important aspects of future communication systems, are: 1) current receiver architectures are unable to directly digitise millimetre wave signals with acceptable power consumption, and 2) antenna arrays are not sufficiently frequency agile. This project aims to address both bottlenecks using new techniques developed on the FARAD project. 
The proposed research will embrace the co-design of antennas, filters and amplifiers with track-and-hold amplifiers, analogue-to-digital converters and digital down-conversion. This will result in new receiver architectures for fully digital massive MIMO systems. The techniques and architectures developed in this project will enable future high-frequency networks to operate efficiently in the new millimetre wave transmission bands. The research will have far-reaching consequences for solving the wireless capacity bottleneck over the next 20 to 30 years and keeping the UK at the forefront of millimetre wave technology and innovation.
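The bandwidth argument behind the move above 6 GHz can be made concrete with the Shannon capacity formula C = B·log2(1 + SNR): at a fixed signal-to-noise ratio, capacity scales linearly with channel bandwidth. The channel widths and the 20 dB SNR below are illustrative assumptions, not figures from the project.

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon channel capacity C = B * log2(1 + SNR), in bit/s."""
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 10 ** (20 / 10)  # assume 20 dB SNR on both links (illustrative)

sub6 = shannon_capacity_bps(100e6, snr)  # 100 MHz channel below 6 GHz
mmwave = shannon_capacity_bps(2e9, snr)  # 2 GHz channel at mmWave

# A 20x wider channel gives 20x the capacity at the same SNR.
assert round(mmwave / sub6, 6) == 20.0
```

The wide contiguous channels available in the millimetre wave bands are what make the order-of-magnitude capacity gains possible, provided receivers can digitise them efficiently, which is precisely the bottleneck this project targets.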

  • Funder: UK Research and Innovation Project Code: EP/T022574/1
    Funder Contribution: 2,931,660 GBP

    The Future Places Centre will explore how ubiquitous and pervasive technologies, the IoT, and new data science tools can let people reimagine what their future spaces might be. Today, the footprint of such systems extends well beyond the work environments where they first appeared; they are now, quite literally, ubiquitous. Combined with advances in data science, particularly in the general area of AI, they are enabling entirely new forms of applications and expanding our understanding of how we can shape our physical spaces. The result of these trends is that the potential impact of these systems is no longer confined to work settings or the scientific imagination; it points towards all contexts in which the relationship between space and human practice might be altered through digitally-enabled comprehension of the worlds we inhabit. Such change necessitates enriching the public imagination about what future places might be and how they might be understood. In particular, it points towards new ways of using pervasive technologies (such as the IoT) to shape healthy, sustainable living through the creation of appropriate places. To paraphrase Churchill's observation that we make our buildings and our buildings then come to shape us: the Future Places Centre starts from the premise that new understanding of places (enabled by pervasive computing, data science and AI tools) can be combined with a public concern for sustainability and the environment to help shape healthier places and thus make healthier people. It is thus the goal of the centre to reimagine and develop further Mark Weiser's original vision of ubiquitous computing. In doing so, it will draw together Lancaster's pioneering DE projects and create a world-class interdisciplinary research endeavour that binds Lancaster to the local community, to industry and to government, making the North West a test-bed for what might be.

  • Funder: UK Research and Innovation Project Code: EP/P027628/1
    Funder Contribution: 2,031,830 GBP

    Colloidal quantum dots (cQDs) are attracting significant interest as the key components for next-generation smart displays/lighting, photodetectors and image sensors, and solar cells. This is because they show excellent and unique physical properties such as i) high sensitivity and quantum efficiency, ii) excellent colour gamut with narrow emission (absorption) bandwidths, iii) colour tunability/band gap engineering through size control, iv) high photostability and v) high air stability, as they are based on inorganic materials. Since the latest results on cQD LEDs and image sensors/photodetectors have demonstrated the possibility of integrating cQD optoelectronics with current semiconductor technologies, the pace of research in the cQD area has accelerated dramatically, and an increasing number of research groups and companies are currently active in this area worldwide. The investigators expect that cQD LEDs will replace current technologies through: (1) superior reliability of the inorganic structure in an almost barrier-free architecture relative to OLED (WVTR of 10⁻⁶ g/m²/day), (2) lower power consumption and lower product cost, 60% and 50% less than current OLED, respectively, and (3) colour purity of 110% or greater compared to typically 80% for OLED. This project will enhance the current state of the art to achieve cost reduction through continuous (as opposed to batch) cQD synthesis, monolayer resin-free processing, all-inorganic interface materials such as the ETL (electron transport layer) and HTL (hole transport layer), and device integration and packaging for EL cQD LEDs, using Cd-free cQDs for smart lighting and displays. The project proposed builds upon research established in the investigators' groups in Cambridge and Oxford. We are well equipped with facilities for pilot fabrication using technologies which will underpin the commercialisation of cQD LED based lighting/displays. 
The final deliverable will be energy-efficient 4" active devices with predictable lifetimes and sustainable high brightness for flexible smart lighting. The elements of the smart light, which will include colour hue and brightness control based on active-matrix switching of pixels, will also be applicable to displays, but without the same high pixel definition. We shall explore the design and synthesis of Cd-free cQDs with core/shell structures using continuous-flow production methods, which can then be incorporated into active devices. Key to successfully implementing devices is the scalable production of high-quality cQDs with specific surface passivation and functionalisation which limit the effects of impurities and defects and produce high-quality thin films with well-understood interfaces. In this project we will use scalable production techniques that can be transferred to in-line processes for mass production. We shall focus on the manufacturing and processing aspects to create monolayer-controlled cQD films with an entirely close-packed and almost void-free structure using dry-transfer printing methods. This will enhance the efficiency and reliability of the films for the desired device modes. Interface control based on a monolayer-level layer-by-layer transfer process will be employed in order to obtain highly uniform monolayers, which can be extended to multilayer stacked film processing including interface layers. The interface materials for the emissive cQD film, with inorganic HTL and ETL layers for EL devices, will also be designed and fabricated at the device integration step (WP 2-3). Driving electronics using TFTs will be designed for reliable and stable operation. 
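The size-tunability of the band gap mentioned in item (iii) follows from quantum confinement, and can be sketched with the Brus effective-mass approximation. The material parameters below are illustrative CdSe-like values chosen for the example (the project itself targets Cd-free cQDs), and this is a textbook approximation, not the project's own model.

```python
import math

# Physical constants (SI)
HBAR = 1.054571817e-34     # reduced Planck constant, J s
E_CHARGE = 1.602176634e-19 # elementary charge, C
M_E = 9.1093837015e-31     # electron rest mass, kg
EPS0 = 8.8541878128e-12    # vacuum permittivity, F/m

def brus_gap_eV(radius_m, bulk_gap_eV, m_e_eff, m_h_eff, eps_r):
    """Effective band gap of a quantum dot of radius R (Brus model):
    confinement term ~ 1/R^2 raises the gap; Coulomb term ~ 1/R lowers it."""
    confinement = (HBAR**2 * math.pi**2 / (2 * radius_m**2)) * \
                  (1 / (m_e_eff * M_E) + 1 / (m_h_eff * M_E))
    coulomb = 1.8 * E_CHARGE**2 / (4 * math.pi * EPS0 * eps_r * radius_m)
    return bulk_gap_eV + (confinement - coulomb) / E_CHARGE

# Illustrative CdSe-like parameters (assumed, not from the proposal):
# bulk gap 1.74 eV, m_e* = 0.13, m_h* = 0.45, eps_r = 10.6
for r_nm in (2.0, 3.0, 5.0):
    gap = brus_gap_eV(r_nm * 1e-9, 1.74, 0.13, 0.45, 10.6)
    print(f"R = {r_nm} nm -> effective gap {gap:.2f} eV")
```

Smaller dots give a wider effective gap and hence bluer emission, which is the mechanism behind tuning emission colour purely through size control during synthesis.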
Industrial partners in the supply chain for smart flexible lighting production are: CDT Ltd for materials, lighting and metrology; CPI Ltd and DuPont Teijin Films UK for flexible films for lighting; Emberion UK, Dyson, FlexEnable and Samsung UK for device processing and system integration; Aixtron UK for TCF; and Nanoco and Merck as materials suppliers and EAB members.

  • Funder: UK Research and Innovation Project Code: EP/P022723/1
    Funder Contribution: 560,633 GBP

    This proposal starts with the notion that, when considering future visual sensing technologies for next-generation Internet-of-Things surveillance, drone technology, and robotics, it is quickly becoming evident that sampling and processing raw pixels is going to be extremely inefficient in terms of energy consumption and reaction times. After all, the most efficient visual computing systems we know, i.e., biological vision and perception in mammals, do not use pixels and frame-based sampling. Therefore, IOSIRE argues that we need to explore the feasibility of advanced machine-to-machine (M2M) communications systems that directly capture, compress and transmit neuromorphically-sampled visual information to cloud computing services in order to produce content classification or retrieval results with extremely low power and low latency. IOSIRE aims to build on recently-devised hardware for neuromorphic sensing, a.k.a. dynamic vision sensors (DVS) or silicon retinas. Unlike conventional global-shutter (frame) based sensors, DVS cameras capture the on/off triggering corresponding to changes of reflectance in the observed scene. Remarkably, DVS cameras achieve this with (i) 10-fold reduction in power consumption (10-20 mW of power consumption instead of hundreds of milliwatts) and (ii) 100-fold increase in speed (e.g., when the events are rendered as video frames, 700-2000 frames per second can be achieved). In more detail, the IOSIRE project proposes a fundamentally new paradigm where the DVS sensing and processing produces a layered representation that can be used locally to derive actionable responses via edge processing, but select parts can also be transmitted to a server in the cloud in order to derive advanced analytics and services. 
The classes of services considered by IOSIRE require a scalable and hierarchical representation for multipurpose usage of DVS data, rather than a fixed representation suitable for an individual application (such as motion analysis or object detection). Indeed, this is the radical difference of IOSIRE from existing DVS approaches: instead of constraining applications to on-board processing, we propose layered data representations and adaptive M2M transmission frameworks for DVS data, mapped to each application's quality metrics, response times and energy consumption limits, which will enable a wide range of services by selectively offloading the data to the cloud. The targeted breakthrough of IOSIRE is to provide a framework with extreme scalability: in comparison to conventional designs for visual data processing and transmission over M2M networks, and under comparable reconstruction, recognition or retrieval accuracy in applications, up to a 100-fold decrease in energy consumption (and associated transmission/reaction delay) will be pursued. This step change in performance will be pursued via proof-of-concept designs and will influence the design of future commercial systems.
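The asynchronous event stream produced by a DVS, and its rendering into high-rate frames as described above, can be sketched as follows. The event tuple layout and the 1 ms binning (yielding ~1000 frames per second, within the 700-2000 fps range quoted in the text) are illustrative choices, not a specification of any particular sensor's output format.

```python
from collections import namedtuple

# A DVS event: timestamp (microseconds), pixel coordinates, polarity (+1/-1).
Event = namedtuple("Event", "t_us x y polarity")

def events_to_frames(events, width, height, bin_us=1000):
    """Accumulate asynchronous on/off events into 1 ms frames (~1000 fps),
    as when DVS output is rendered as video for visualisation."""
    frames = {}
    for ev in events:
        idx = ev.t_us // bin_us  # which 1 ms bin this event falls into
        frame = frames.setdefault(idx, [[0] * width for _ in range(height)])
        frame[ev.y][ev.x] += ev.polarity
    return frames

# Illustrative stream: two ON events at one pixel, one OFF event at another.
stream = [Event(100, 3, 2, +1), Event(900, 3, 2, +1), Event(1500, 1, 0, -1)]
frames = events_to_frames(stream, width=4, height=4)
assert frames[0][2][3] == 2   # both ON events fall in the first 1 ms bin
assert frames[1][0][1] == -1  # the OFF event lands in the second bin
```

A layered representation of the kind IOSIRE proposes would sit on top of such a raw stream, with edge processing consuming the events directly and only selected layers offloaded to the cloud.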

