
Consequential Robots

6 Projects, page 1 of 2
  • Funder: UK Research and Innovation Project Code: EP/V026682/1
    Funder Contribution: 3,056,750 GBP

    Engineered systems are increasingly being used autonomously, making decisions and taking actions without human intervention. These Autonomous Systems are already being deployed in industrial sectors, but only in controlled scenarios (e.g. static automated production lines, fixed sensors). They get into difficulties when the task increases in complexity, when the environment is uncontrolled (e.g. drones for offshore windfarm inspection), when there is a high degree of interaction with people and entities in the world (e.g. self-driving cars), or when they have to work as a team (e.g. cobots working in a factory).

The EN-TRUST vision is that these systems learn the situations in which trust is typically lost unnecessarily, adapting this prediction to specific people and contexts. Stakeholder trust will be managed through transparent interaction, increasing stakeholders' confidence in using Autonomous Systems so that they can be adopted in scenarios never before thought possible, such as doing the jobs that endanger humans (e.g. first-responder or pandemic-related tasks). The EN-TRUST 'Trust' Node will perform foundational research on how humans and Autonomous Systems (AS) can work together by building a shared reality, based on mutual understanding through trustworthy interaction. The Node will create a UK research centre of excellence for trust that will inform the design of Autonomous Systems going forward, ensuring that they are widely used and accepted in a variety of applications.

This cross-cutting multidisciplinary approach is grounded in Psychology and Cognitive Science and consists of three "pillars of trust": 1) computational models of human trust in AS; 2) adaptation of these models in the face of errors and uncontrolled environments; and 3) user validation and evaluation across a broad range of sectors in realistic scenarios.
This EN-TRUST framework will explore how to best establish, maintain and repair trust by incorporating the subjective view of human trust towards Autonomous Systems, thus maximising their positive societal and economic benefits.

  • Funder: UK Research and Innovation Project Code: EP/W017466/1
    Funder Contribution: 443,263 GBP

    Life in sound occurs in motion. For human listeners, audition, the ability to listen, is shaped by physical interactions between our bodies and the environment. We integrate motion with auditory perception in order to hear better (e.g., by approaching sound sources of interest), to identify objects (e.g., by touching objects and listening to the resulting sound), to detect faults (e.g., by moving objects to listen for anomalous creaks), and to offload thought (e.g., by tapping surfaces to recall musical pieces). The ability to make sense of and exploit sounds in motion is therefore a fundamental prerequisite for embodied Artificial Intelligence (AI).

This project will pioneer the underpinning probabilistic framework for active robot audition that enables embodied agents to control the motion of their own bodies ('ego-motion') for auditory attention in realistic acoustic environments (households, public spaces, and environments involving multiple, competing sound sources). By integrating sound with motion, this project will enable machines to imagine, control and leverage the auditory consequences of physical interactions with the environment. By transforming the ways in which machines make sense of life in sound, the research outcomes will be pivotal for new, emerging markets in which robots augment, rather than rival, humans in order to surpass the limitations of the human body (sensory accuracy, strength, endurance, memory). The proposed research therefore has the potential to transform and disrupt a whole host of industries involving machine listening, ranging from human-robot augmentation (smart prosthetics, assistive listening technology, brain-computer interfaces) to human-robot collaboration (planetary exploration, search-and-rescue, hazardous material removal) and automation (environmental monitoring, autonomous vehicles, AI-assisted diagnosis in healthcare).
This project will consider the specific case study of a collaborative robot ('cobot') that augments the auditory experience of a hearing-impaired human partner. Hearing loss is the second most common disability in the UK, affecting 11M people. The loss of hearing affects situational awareness as well as the ability to communicate, which can impact mental health and, in extreme cases, cognitive function. Nevertheless, for complex reasons that range from discomfort to social stigma, only 2M people choose to wear hearing aids.

The ambition of this project is to develop a cobot that will augment the auditory experience of a hearing-impaired person. The cobot will move autonomously within the human partner's household to assist with everyday tasks. Our research will enable the cobot to exploit ego-motion in order to learn an internal representation of the acoustic scene (children chattering, kettle boiling, spouse calling for help). The cobot will interface with its partner through an on-person smart device (watch, mobile phone). Using the human-cobot interface, the cobot will alert its partner to salient events (a call for help) via vibrating messages, and share its auditory experiences via interactive maps that visualise auditory cues and indicate saliency (e.g., loudness, spontaneity) and valence (positive vs concerning). In contrast to smart devices, the cobot will have the unique capability to actively attend to and explore uncertain events (a thump upstairs), and take action (assist spouse, call ambulance) without the need for permanently installed devices in personal spaces (bathroom, bedroom). The project therefore has the potential to transform the lives of people with hearing impairments by enabling long-term independent living, safeguarding privacy, and fostering inclusivity.
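The alerting behaviour described above can be sketched as a toy routing rule: combine saliency cues into a score and decide between a vibrating alert and a map entry. This is purely illustrative and assumes nothing about the project's actual models; the `AuditoryEvent` schema, the cue weighting, and the alert threshold are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AuditoryEvent:
    """One detected sound event (hypothetical schema)."""
    label: str          # e.g. "call for help", "kettle boiling"
    loudness: float     # saliency cue, 0..1
    spontaneity: float  # saliency cue, 0..1 (how unexpected the event is)
    valence: str        # "positive" or "concerning"

def saliency(event: AuditoryEvent) -> float:
    """Combine cues into a single saliency score (illustrative equal weighting)."""
    return 0.5 * event.loudness + 0.5 * event.spontaneity

def route(event: AuditoryEvent, alert_threshold: float = 0.7) -> str:
    """Push a vibrating alert only for salient, concerning events;
    everything else just becomes an entry on the interactive map."""
    if event.valence == "concerning" and saliency(event) >= alert_threshold:
        return "vibrate_alert"
    return "map_entry"

help_call = AuditoryEvent("call for help", loudness=0.9, spontaneity=0.8, valence="concerning")
kettle = AuditoryEvent("kettle boiling", loudness=0.6, spontaneity=0.2, valence="positive")
print(route(help_call))  # vibrate_alert
print(route(kettle))     # map_entry
```

A real system would learn these scores from the acoustic scene rather than hand-weight two cues, but the routing decision (alert vs map) follows the split described in the abstract.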

  • Funder: UK Research and Innovation Project Code: EP/V026801/1
    Funder Contribution: 2,923,650 GBP

    Autonomous systems promise to improve our lives; driverless trains and robotic cleaners are examples of autonomous systems that are already among us and work well within confined environments. It is time to ensure that developers can design trustworthy autonomous systems for dynamic environments and provide evidence of their trustworthiness. Owing to the complexity of autonomous systems, which typically involve AI components, low-level hardware control, and sophisticated interactions with humans and an uncertain environment, evidence of any nature requires effort from a variety of disciplines. To tackle this challenge, we have gathered a consortium of experts on AI, robotics, human-computer interaction, systems and software engineering, and testing. Together, we will establish the foundations and techniques for verification of properties of autonomous systems to inform designs, provide evidence of key properties, and guide monitoring after deployment.

Currently, verifiability is hampered by several issues: difficulty in understanding how evidence provided by techniques that focus on individual aspects of a system (control engineering, AI, or human interaction, for example) composes to provide evidence for the system as a whole; difficulties of communication between stakeholders who use different languages and practices in their disciplines; difficulties in dealing with advanced concepts in AI, control and hardware design, and software for critical systems; and others. As a consequence, autonomous systems are often developed using advanced engineering techniques but outdated approaches to verification. We propose a creative programme of work that will enable fundamental changes to the current state of the art and of practice. We will define a mathematical framework that enables a common understanding of the diverse practices and concepts involved in verification of autonomy.
Our framework will provide the mathematical underpinning, required by any engineering effort, to accommodate the notations used by the various disciplines. With this common understanding, we will justify translations between languages, compositions of artefacts (engineering models, tests, simulations, and so on) defined in different languages, and system-level inferences from verifications of components. With such a rich foundation and wealth of results, we will transform the state of practice.

Currently, developers build systems from scratch, or reuse components without any evidence of their operational conditions. The resulting systems are deployed under constrained conditions (reduced speed or a contained environment, for example) or offered for deployment at the user's own risk. Instead, we envisage the future availability of a store of verified autonomous systems and components. In such a store, users will find not just system implementations, but also evidence of their operational conditions and expected behaviour (engineering models, mathematical results, tests, and so on). When a developer checks in a product, the store will require all these artefacts, described in well-understood languages, and will automatically verify the evidence of trustworthiness. Developers will also be able to check in components for use by other developers; equally, these will be accompanied by the evidence required to permit confidence in their use. In this changed world, users will buy applications with clear guarantees of their operational requirements and profile, and will be able to ask for verification of adequacy for customised platforms and environments, for example. Verification will no longer be an issue.
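At its simplest, the check-in step of such a store amounts to a manifest check: refuse a submission unless every required evidence artefact is present. The sketch below illustrates only that gating step; the artefact list and the `check_in` interface are hypothetical (the abstract does not specify either), and real verification of the evidence itself would sit behind this check.

```python
# Hypothetical list of evidence artefacts the store demands at check-in,
# drawn from the examples in the abstract.
REQUIRED_ARTEFACTS = {"implementation", "engineering_models", "tests", "operational_conditions"}

def check_in(submission: dict) -> tuple[bool, set]:
    """Accept a product only when every required artefact is supplied;
    otherwise report which pieces of evidence are missing."""
    missing = REQUIRED_ARTEFACTS - submission.keys()
    return (not missing, missing)

complete = {name: "artefact data" for name in REQUIRED_ARTEFACTS}
accepted, missing = check_in(complete)
print(accepted)  # True

partial = {"implementation": "artefact data"}
accepted, missing = check_in(partial)
print(accepted)  # False; `missing` names the three absent artefacts
```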
Working with the EPSRC TAS Hub and other nodes, and our extensive range of academic and industrial partners, we will collaborate to ensure that the notations, verification techniques, and properties that we consider contribute to our common agenda of bringing autonomy to our everyday lives.

  • Funder: UK Research and Innovation Project Code: EP/S005099/1
    Funder Contribution: 1,723,460 GBP

    This fellowship will bring together a variety of people from different walks of life, including academics, industry, civil society, policy makers and members of the public, in order to create new ways of developing and managing technological innovations. There is often a tension between the economic need for increasing technological innovation and the ways in which these innovations may be developed responsibly, that is, in a manner that is societally acceptable and desirable. We will develop an approach that aims to anticipate not only the positive outcomes but also the potentially negative consequences of technological innovations for society. We will draw on this, and on an understanding of people's lived rights and obligations, to provide creative resources and methods for designers to develop responsible and accountable new technologies.

Responsible Innovation lies at the heart of technologies in the Digital Economy (DE) that aim to promote trust, identity, privacy and security. Although it has been drawn on in other scientific domains, as yet we have no complete example of how responsible innovation can be successfully applied in the DE sector. The fellowship will consider a motivating example to develop responsible innovation in action. We will look into one particular domain of technology and develop an agile process that takes account of the views of a wide range of people in a fast-changing context, in order to have some influence over the trajectory of an innovation. We will focus on the domain of social robots: robots which interact with people and make decisions about what to do of their own accord. Because they make their own decisions in order to perform actions, we need to be able to recover what they did and why they did it when things seem to go wrong. We will develop an ethical black box (EBB) through which the social robot will be able to explain its behaviour in simple and understandable ways.
The development of the EBB will itself be an example of responsible innovation. We will test it out through accident investigation as a social process, across three different study domains. In the final stages of the fellowship, we will show the outcomes of the technological development and the investigations through a variety of means, including the web and a final public showcase event, to a variety of audiences including the general public, policy makers, and developers.
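The core idea of the EBB, recovering what a robot did and why it did it, can be illustrated as an append-only decision log that pairs each action with its reason and a sensor snapshot. This is a minimal sketch, not the fellowship's design; the record fields and the explanation format are assumptions made for illustration.

```python
import time

class EthicalBlackBox:
    """Append-only log of a robot's decisions and the reasons behind them,
    so that behaviour can be reconstructed and explained after the fact."""

    def __init__(self):
        self._records = []

    def record(self, action: str, reason: str, sensors: dict) -> None:
        """Log one decision together with the evidence it was based on."""
        self._records.append({
            "timestamp": time.time(),
            "action": action,
            "reason": reason,
            "sensors": sensors,
        })

    def explain(self, index: int = -1) -> str:
        """Render one logged decision (by default the latest) as a plain-language explanation."""
        r = self._records[index]
        return f"I did '{r['action']}' because {r['reason']}."

ebb = EthicalBlackBox()
ebb.record("stopped moving", "an obstacle was detected 0.3 m ahead",
           {"lidar_min_range_m": 0.3})
print(ebb.explain())  # I did 'stopped moving' because an obstacle was detected 0.3 m ahead.
```

The append-only discipline matters for the accident-investigation use described above: records are never edited or deleted, so the log can serve as evidence of what the robot actually did.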

  • Funder: UK Research and Innovation Project Code: EP/V026801/2
    Funder Contribution: 2,621,150 GBP

    Autonomous systems promise to improve our lives; driverless trains and robotic cleaners are examples of autonomous systems that are already among us and work well within confined environments. It is time to ensure that developers can design trustworthy autonomous systems for dynamic environments and provide evidence of their trustworthiness. Owing to the complexity of autonomous systems, which typically involve AI components, low-level hardware control, and sophisticated interactions with humans and an uncertain environment, evidence of any nature requires effort from a variety of disciplines. To tackle this challenge, we have gathered a consortium of experts on AI, robotics, human-computer interaction, systems and software engineering, and testing. Together, we will establish the foundations and techniques for verification of properties of autonomous systems to inform designs, provide evidence of key properties, and guide monitoring after deployment.

Currently, verifiability is hampered by several issues: difficulty in understanding how evidence provided by techniques that focus on individual aspects of a system (control engineering, AI, or human interaction, for example) composes to provide evidence for the system as a whole; difficulties of communication between stakeholders who use different languages and practices in their disciplines; difficulties in dealing with advanced concepts in AI, control and hardware design, and software for critical systems; and others. As a consequence, autonomous systems are often developed using advanced engineering techniques but outdated approaches to verification. We propose a creative programme of work that will enable fundamental changes to the current state of the art and of practice. We will define a mathematical framework that enables a common understanding of the diverse practices and concepts involved in verification of autonomy.
Our framework will provide the mathematical underpinning, required by any engineering effort, to accommodate the notations used by the various disciplines. With this common understanding, we will justify translations between languages, compositions of artefacts (engineering models, tests, simulations, and so on) defined in different languages, and system-level inferences from verifications of components. With such a rich foundation and wealth of results, we will transform the state of practice.

Currently, developers build systems from scratch, or reuse components without any evidence of their operational conditions. The resulting systems are deployed under constrained conditions (reduced speed or a contained environment, for example) or offered for deployment at the user's own risk. Instead, we envisage the future availability of a store of verified autonomous systems and components. In such a store, users will find not just system implementations, but also evidence of their operational conditions and expected behaviour (engineering models, mathematical results, tests, and so on). When a developer checks in a product, the store will require all these artefacts, described in well-understood languages, and will automatically verify the evidence of trustworthiness. Developers will also be able to check in components for use by other developers; equally, these will be accompanied by the evidence required to permit confidence in their use. In this changed world, users will buy applications with clear guarantees of their operational requirements and profile, and will be able to ask for verification of adequacy for customised platforms and environments, for example. Verification will no longer be an issue.
Working with the EPSRC TAS Hub and other nodes, and our extensive range of academic and industrial partners, we will collaborate to ensure that the notations, verification techniques, and properties that we consider contribute to our common agenda of bringing autonomy to our everyday lives.

