Powered by OpenAIRE graph

Privitar

4 Projects
  • Funder: UK Research and Innovation Project Code: EP/S022503/1
    Funder Contribution: 5,733,540 GBP

    Recent reports from the Royal Society, the government's Cyber Security Strategy and the National Cyber Security Centre highlight the importance of cybersecurity in ensuring a safe information society. They highlight the challenges the UK faces in this domain: the need for multidisciplinary expertise to address complex problems spanning high-level policy to detailed engineering, and the need for an integrated approach between government initiatives, private industry and wider civil society to tackle both cybercrime and nation-state interference in national infrastructures, from power grids to election systems. They conclude that expertise is lacking, particularly multidisciplinary experts with a good understanding of effective work in both government and industry. The EPSRC Centre for Doctoral Training in Cybersecurity addresses this challenge, and aims to train multidisciplinary experts in engineering secure IT systems, tackling and interdicting cybercrime, and formulating effective public policy interventions in this domain. The training provides expertise in all those areas through a combination of taught modules and training in conducting original, world-class research in those fields. Graduates will be domain experts in more than one of the subfields of cybersecurity, namely Human, Organizational and Regulatory Aspects; Attacks, Defences and Cybercrime; Systems Security and Cryptography; Program, Software and Platform Security; and Infrastructure Security. They will receive training in using techniques from computing, social sciences, crime science and public policy to find appropriate solutions to problems within those domains. 
Further, they will be trained in responsible research and innovation to ensure that research, technology transfer and policy interventions protect people's rights, remain compatible with democratic institutions, and improve the welfare of the public. Through a programme of industrial internships, all doctoral students will familiarize themselves with the technologies, policies and challenges faced by real-world organizations, large and small, in tackling cybersecurity problems. They will therefore be equipped to assume leadership positions in solving those problems upon graduation.

  • Funder: UK Research and Innovation Project Code: EP/V011189/1
    Funder Contribution: 6,972,600 GBP

    The REsearch centre on Privacy, Harm Reduction and Adversarial INfluence online (REPHRAIN) will bring together the UK's substantial academic, industry, policy and third sector capabilities to address the current tensions and imbalances between the substantial benefits to be gained from full participation in the digital economy and the potential for harm through loss of privacy, insecurity, disinformation and a myriad of other online harms. Combining world-leading experts from the Universities of Bristol, Edinburgh, Bath, King's and UCL, the REPHRAIN Centre will use an interdisciplinary approach - alongside principles of responsible innovation and creative engagement - to develop new insights that allow the socio-economic benefits of a digital economy to be maximised whilst minimising the online harms that emerge alongside them. REPHRAIN's leadership team will pursue these insights through technical, social, behavioural, policy and regulatory research on privacy, privacy-enhancing technologies and online harms, through an initial scoping phase and 25 inaugural projects. The work of REPHRAIN will be focused around three core missions and four engagement and impact objectives. Mission 1 emphasises the requirement to deliver privacy at scale whilst mitigating its misuse to inflict harms. This will focus on reconciling the tension between data privacy and lawful expectations of transparency, not only drawing heavily on advances in privacy-enhancing technologies (PETs) but also leveraging the full range of socio-technical approaches to rethink how best to address potential trade-offs. Mission 2 emphasises the need to minimise harms whilst maximising the benefits of a sharing-driven digital economy, redressing citizens' rights in transactions within the data-driven economic model by transforming the narrative from privacy as confidentiality only to one that also includes agency, control, transparency and ethical and social values. 
Finally, Mission 3 focuses on addressing the balance between individual agency and social good, developing a rigorous understanding of what privacy represents for different sectors and groups in society (including hard-to-reach groups), the different online harms to which they may be exposed, and the cultural and societal nuances impacting the effectiveness of harm-reduction approaches in practice. These missions are supported by four engagement and impact objectives that represent core pillars of REPHRAIN's approach: (1) design and engagement; (2) adoption and adoptability; (3) responsible, inclusive and ethical innovation; and (4) policy and regulation. Combined, these objectives will deliver co-production, co-creation and impact at scale across academia, industry, policy and the third sector. These activities will be complemented by a capability fund, which will ensure that REPHRAIN activities remain flexible and responsive to current issues, addressing emerging capability gaps, maximising impact and cultivating a public space for collaboration. REPHRAIN will be managed by a Strategic Board, supported by an External Advisory Group and the REPHRAIN Ethics Board, and will work with multiple external stakeholders across industry, the public sector and the third sector. Outcomes from the centre will be synthesised into the REPHRAIN Toolbox - a one-stop resource for researchers, practitioners, policy-makers, regulators and citizens - which will contribute to developing a culture of continuous learning, collaboration, open engagement and reflection within the area of online harm reduction. Overall, REPHRAIN combines interdisciplinary leadership provided by a highly experienced team with state-of-the-art facilities to develop and apply scientific expertise, ensuring that the benefits of a digital society can be enjoyed safely and securely by all.

  • Funder: UK Research and Innovation Project Code: EP/Y009800/1
    Funder Contribution: 30,712,000 GBP

    Artificial Intelligence (AI) can have dramatic effects on industrial sectors and societies (e.g., generative AI, facial recognition, autonomous vehicles). AI UK will pioneer a reflective, inclusive approach to responsible AI development that does not ignore AI's potential harms but acknowledges, understands and mitigates them for diverse societies. AI UK adopts a strongly human-centred approach to ensure societies deploy and use AI responsibly, by providing the AI community with a toolkit of technological innovations, case studies, guidelines, policies and frameworks for all key sectors of the economy. To achieve this, AI UK will deliver and drive a collaborative ecosystem of researchers, industry, policymakers and stakeholders that is responsive to the needs of society, led by a team of experienced, well-connected leaders from all four nations of the UK, committed to an inclusive approach to the management of the programme. AI UK will grow an interdisciplinary ecosystem that adopts Equality, Diversity and Inclusivity (EDI), Trusted Research, and Responsible Research and Innovation (RRI) as fundamental principles. AI UK will champion a research culture where everyone is respected, valued and able to contribute and benefit, and will coordinate the UK's AI research networks and programmes, working with key Research Council (and other funding) programmes, The Alan Turing Institute, The Ada Lovelace Institute, the AI Standards Hub, Centres for Doctoral Training, UKRI AI Research Hubs, Public Sector Research Establishments (PSREs), as well as the wider landscape of university-based Responsible/Ethical AI research institutes. AI UK will connect UK research to leading research centres and institutions around the world. Ultimately, through this ecosystem, AI UK will deliver world-leading best practices for the design, evaluation, regulation and operation of AI systems that benefit the nation and society. 
AI UK will invest in the following strands:
  • Ecosystem Creation and Management: to define the portfolio of thematic areas, translational activities and strategic partnerships with academia, business and government, together with associated impact metrics. This will broaden and consolidate the network nationally and internationally, and identify course corrections to national policy (e.g., industrial strategy).
  • Research & Innovation Programmes: to deliver consortia-led research that addresses fundamental challenges from multidisciplinary and industrial perspectives, integrative research projects that link connected and established research teams across the community, and early-stage and industry-led research and innovation projects to expand the UK's ecosystem and develop the next generation of leaders.
  • Skills Programme: to translate research into skills frameworks and training for users, customers and developers of AI, and to contribute to the UK AI Strategy's call for an Online Academy.
  • Public and Policy Engagement: to work with the network of policy makers, regulators and key stakeholders to respond to arising concerns and the need for new standards, build capacity for public accountability, and provide evidence-based advice to the public and policymakers.

  • Funder: UK Research and Innovation Project Code: EP/V056883/1
    Funder Contribution: 3,266,200 GBP

    AI technologies have the potential to unlock significant growth for the UK financial services sector through novel personalised products and services, improved cost-efficiency, increased consumer confidence, and more effective management of financial, systemic and security risks. However, there are currently significant barriers to the adoption of these technologies. These stem from a capability deficit in translating high-level principles (of which there is an abundance) concerning trustworthy design, development and deployment of AI technologies ("trustworthy AI") - including safety, fairness, privacy-awareness, security, transparency, accountability, robustness and resilience - into concrete engineering, governance and commercial practice. In developing an actionable framework for trustworthy AI, the major research challenge lies in resolving the tensions and trade-offs which inevitably arise between all these aspects in specific application settings. For example, reducing systemic risk may require data sharing that creates security risks; testing algorithms for fairness may require gathering more sensitive personal data; increasing the accuracy of predictive models may pose threats to the fair treatment of customers; and improved transparency may open systems up to being "gamed" by adversarial actors, creating vulnerabilities to system-wide risks. This comes with a matching business challenge. Financial service providers that adopt AI approaches will experience a profound transformation in key areas of business such as customer engagement, risk, decisioning and compliance, as these functions transition to largely data-driven, algorithmically mediated processes that involve less and less human oversight. 
Yet adapting current innovation, governance, partnership and stakeholder relation management practice in response to these changes can only be achieved once assurances can be confidently given regarding the trustworthiness of target AI applications. Our research hypothesis recognises the close interplay between these research and business challenges: notions of trustworthiness in AI can only be operationalised sufficiently to provide the necessary assurances in a concrete business setting, and it is that setting which generates the specific requirements needed to drive fundamental research towards practical solutions that balance all of these potentially conflicting requirements simultaneously. Recognising the importance of close industry-academia collaboration to enable responsible innovation in this area, the partnership will embark on a systematic programme of industrially driven, interdisciplinary research, building on the strength of the existing Turing-HSBC partnership. It will achieve a step change in the ability of financial service providers to enable trustworthy data-driven decision making while enhancing their resilience, accountability and operational robustness using AI, by improving our understanding of sequential data-driven decision making, privacy- and security-enhancing technologies, methods to balance ethical, commercial and regulatory requirements, the connection between micro- and macro-level risk, validation and certification methods for AI models, and synthetic data generation. To help drive innovation across the industry in a safe way, the partnership will help establish an appropriate regulatory and governance framework, and a common "sandbox" environment to enable experimentation with emerging solutions and to test their viability in a real-world business context. This will also provide the cornerstone for impact anticipation and continual stakeholder engagement in the spirit of responsible research and innovation.

