
Deutsche Welle
36 Projects, page 1 of 8
  • MONITIO (Funder: European Commission; Project Code: 965576)
    Overall Budget: 2,615,060 EUR; Funder Contribution: 2,050,220 EUR

    Media monitoring is the systematic recording of media output related to a specific target, its activities and topics of interest. With the rapid growth of sources, many media monitoring companies have addressed this issue by adding human resources. However, human expertise that should be focused on advanced analysis is being wasted on time-consuming mechanical tasks. Current commercial solutions tout the use of "Artificial Intelligence", yet they still depend heavily on human experts to filter out irrelevant content. The worldwide market for media monitoring was valued at US$2.23 bn in 2017, with a CAGR of 13.6% projected until 2022. Building on three years of cutting-edge AI research funded by H2020, Priberam is developing a real-time cross-lingual global media monitoring platform that delivers actionable insight beyond human capabilities. The system, based on a scalable SaaS business model, continuously ingests massive multilingual data sources and automatically translates, filters, categorizes and generates reports for media monitoring professionals. MONITIO will be co-created with end users at Deutsche Welle and improved with cutting-edge technology from Cambridge University, with a focus on the GDPR and the new EU copyright legislation, making it the first media monitoring tool that is copyright-compliant by design (unlike most competitor solutions, which are US-based and not centred on these issues). Priberam is a Portuguese SME that provides cutting-edge Natural Language Understanding and Artificial Intelligence technologies to companies in the media, legal and healthcare industries, and exports its technologies to leading international companies such as Microsoft, Amazon and Kobo, as well as the main media publishers in Portugal, Spain and Brazil. Our team of 24 generated a turnover of €1.4M in 2018. We believe that MONITIO will bring media monitoring to a new, disruptive level and place European technology at the forefront of AI-powered media monitoring services.
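
    The ingest, translate, filter and categorize workflow described above can be sketched with off-the-shelf components. The model names, topic labels and relevance threshold below are illustrative assumptions, not the MONITIO stack:

        # Minimal sketch of a multilingual monitoring step: translate an incoming
        # article to English, then assign a topic with zero-shot classification.
        # Checkpoints, topics and the 0.5 threshold are assumptions for illustration.
        from transformers import pipeline

        translator = pipeline("translation", model="Helsinki-NLP/opus-mt-mul-en")
        classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

        TOPICS = ["politics", "economy", "technology", "health"]  # hypothetical watchlist

        def monitor(article_text: str) -> dict:
            english = translator(article_text, max_length=512)[0]["translation_text"]
            scores = classifier(english, candidate_labels=TOPICS)
            # Keep only articles whose best-scoring topic passes a relevance threshold.
            relevant = scores["scores"][0] >= 0.5
            return {"translation": english, "topic": scores["labels"][0], "relevant": relevant}

        print(monitor("Die Bundesregierung hat neue Klimaziele beschlossen."))

    A production system would run such a step continuously over streamed sources and feed the surviving items into report generation.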

  • GoURMET (Funder: European Commission; Project Code: 825299)
    Overall Budget: 2,906,100 EUR; Funder Contribution: 2,906,100 EUR

    Machine translation (MT) is an increasingly important technology for supporting communication in a globalised world. MT quality has gradually improved over the last ten years, but recent advances in neural machine translation (NMT) have generated significant interest in industry and led to very rapid adoption of the new paradigm (e.g. Google, Facebook, the UN, the World Intellectual Property Organization). Although these models have significantly advanced the state of the art, they are data-intensive and require parallel corpora of many millions of human-translated sentences for training. As a result, NMT is currently unable to deliver usable translations for the vast majority of language pairs in the world. This is especially problematic for our user partners, the BBC and DW, who need fast and accurate translation for languages with very few resources. The aim of GoURMET is to significantly improve the robustness and applicability of neural machine translation for low-resource language pairs and domains. GoURMET has five objectives:
    - development of high-quality machine translation for under-resourced language pairs and domains;
    - adaptability to new and emerging languages and domains;
    - development of tools for analysts and journalists;
    - a sustainable, maintainable platform and services;
    - dissemination and communication of project results to stakeholders and the user group.
    The project will focus on two use cases:
    - global content creation: managing content creation in several languages efficiently by providing machine translations for correction by humans;
    - media monitoring for low-resource language pairs: tools to address the challenge of international news monitoring.
    The outputs of the project will be field-tested at partners BBC and DW, and the platform will be further validated through innovation events such as the BBC NewsHack.
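
    As a rough illustration of the kind of pretrained NMT building block such work starts from, the sketch below translates a sentence with a publicly available MarianMT checkpoint for a lower-resourced pair; the checkpoint name and language pair are assumptions for illustration, not the GoURMET system:

        # Illustrative sketch only: translating with a pretrained MarianMT model for a
        # lower-resourced pair (English -> Swahili). The checkpoint is an assumed
        # public OPUS-MT model, not GoURMET's own system.
        from transformers import MarianMTModel, MarianTokenizer

        model_name = "Helsinki-NLP/opus-mt-en-sw"  # assumed available checkpoint
        tokenizer = MarianTokenizer.from_pretrained(model_name)
        model = MarianMTModel.from_pretrained(model_name)

        batch = tokenizer(["The election results were announced this morning."],
                          return_tensors="pt", padding=True)
        generated = model.generate(**batch, num_beams=4, max_length=128)
        print(tokenizer.batch_decode(generated, skip_special_tokens=True))

    The low-resource challenge is precisely that such checkpoints are trained on far less parallel data than high-resource pairs, which is what the project aims to mitigate.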

  • SELMA (Funder: European Commission; Project Code: 957017)
    Overall Budget: 3,452,510 EUR; Funder Contribution: 3,452,510 EUR

    SELMA builds a continuous deep-learning multilingual media platform using extreme analytics. Large amounts of multilingual text and speech data are available on the internet, but the potential to take full advantage of these data has remained largely untapped. Recent advances in deep learning and transfer learning have opened the door to new possibilities, in particular integrating knowledge from these large unannotated datasets into pluggable models for tackling machine learning tasks. The aim of Stream Learning for Multilingual Knowledge Transfer (SELMA) is to address three tasks: ingest large amounts of data and continuously train machine learning models for several natural language tasks; monitor these data streams using such models to improve multilingual Media Monitoring (use case 1); and improve multilingual News Content Production (use case 2), thereby closing the loop between content monitoring and production. SELMA has eight goals:
    1. enable processing of massive video and text data streams in a distributed and scalable fashion;
    2. develop new methods for training unsupervised deep learning language models in 30 languages;
    3. enable knowledge transfer across tasks and languages, supporting low-resourced languages;
    4. develop novel data analytics methods and visualizations to facilitate the media monitoring decision-making process;
    5. develop an open-source platform to optimize multilingual content production in 30 languages;
    6. fine-tune deep learning models from user feedback, reducing recurring errors;
    7. ensure sustainable exploitation of the SELMA platform;
    8. encourage active user involvement in the platform.
    Achieving these aims requires advancing the state of the art in multiple technologies (transfer learning, language modelling, speech recognition, machine translation, summarization, speech synthesis, named entity linking, learning from user feedback), while building upon previous project results and existing services.
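
    The transfer-learning idea behind goals 2 and 3 (pretrain a multilingual model once, fine-tune on labelled data in one language, then apply it to others) can be sketched as follows; the XLM-R checkpoint, label set and toy data are assumptions for illustration, not the SELMA platform:

        # Sketch of cross-lingual transfer: fine-tune a multilingual encoder (here
        # XLM-R as an assumed stand-in) on labelled data in one language, then apply
        # it unchanged to text in another language. Labels and data are toy placeholders.
        import torch
        from transformers import AutoTokenizer, AutoModelForSequenceClassification

        tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
        model = AutoModelForSequenceClassification.from_pretrained(
            "xlm-roberta-base", num_labels=2)  # e.g. relevant vs. not relevant

        # One toy training step on English data (a real setup would use a corpus and Trainer).
        batch = tokenizer(["Central bank raises interest rates"], return_tensors="pt")
        labels = torch.tensor([1])
        loss = model(**batch, labels=labels).loss
        loss.backward()

        # The same model can then score text in another language, e.g. Portuguese.
        with torch.no_grad():
            logits = model(**tokenizer(["Banco central sobe os juros"], return_tensors="pt")).logits
        print(logits.softmax(dim=-1))

    Because the encoder was pretrained on many languages, supervision in one language transfers, imperfectly but usefully, to the others, which is what makes low-resourced languages tractable.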

  • SERMAS (Funder: European Commission; Project Code: 101070351)
    Overall Budget: 4,444,230 EUR; Funder Contribution: 4,444,230 EUR

    The revolutionary opportunities opened by eXtended Reality (XR) technologies will only materialize if concepts, techniques and tools are provided to ensure the social acceptance of XR systems. For that, we need XR systems that are not just innovative and functionally complex, but that also provide an experience that satisfies the goals and needs of the user, complies with the social context in which the system is used, and is transparent, safe, secure, explainable and trusted by the user. However, current generations of XR systems fail to deliver the experience they were envisioned for, since state-of-the-art models and technologies fail to ensure full-fledged social acceptance. A true XR experience requires a major paradigm shift in the way XR systems are designed, implemented, deployed and consumed. The SERMAS project will develop innovative, formal and systematic methodologies and technologies to model, develop, analyze, test and user-study socially acceptable XR systems. This will be achieved by pursuing four main objectives:
    1. follow an interdisciplinary, multi-sectoral, case-study-driven, scientific and technological methodology to implement the SERMAS Toolkit, a set of methods and tools that greatly simplifies the design, development, deployment and management of socially acceptable XR systems;
    2. apply the Toolkit to industrial case studies drawn from real-world application scenarios, paving the way to transferring project results to industrial practice; this will be possible through the active participation in the consortium of developers of mass-use industrial XR applications;
    3. enable innovators to leverage the Toolkit to improve the social acceptance and cut down the time-to-market of their XR systems, thereby enhancing the competitiveness of vendors;
    4. produce the wider SERMAS Methodology to position the use of the Toolkit and enlarge its outreach.

  • WeVerify (Funder: European Commission; Project Code: 825297)
    Overall Budget: 2,931,000 EUR; Funder Contribution: 2,499,450 EUR

    Online disinformation and fake media content have emerged as a serious threat to democracy, the economy and society. Content verification is currently far from trivial, even for experienced journalists, human rights activists or media literacy scholars. Moreover, recent advances in artificial intelligence (deep learning) have enabled the creation of intelligent bots and highly realistic synthetic multimedia content. Consequently, it is extremely challenging for citizens and journalists to assess the credibility of online content and to navigate highly complex online information landscapes. WeVerify aims to address these content verification challenges through a participatory verification approach, open-source algorithms, low-overhead human-in-the-loop machine learning and intuitive visualizations. Social media and web content will be analysed and contextualised within the broader online ecosystem in order to expose fabricated content, through cross-modal content verification, social network analysis, micro-targeted debunking and a blockchain-based public database of known fakes. A key outcome will be the WeVerify platform for collaborative, decentralised content verification, tracking and debunking. The platform will be open source to engage communities and citizen journalists alongside newsroom and freelance journalists. To enable low-overhead integration with in-house content management systems and to support more advanced newsroom needs, a premium version of the platform will also be offered. It will furthermore be supplemented by a digital companion to assist with verification tasks. Results will be validated by professional journalists and debunking specialists from project partners (DW, AFP, DisinfoLab), external participants (e.g. members of the First Draft News network), the community of more than 2,700 users of the InVID verification plugin, and media literacy, human rights and emergency response organisations.
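
    A blockchain-based public database of known fakes is, at its core, a tamper-evident, content-addressed registry. The following minimal sketch illustrates that idea with SHA-256 fingerprints and a hash-chained log; it is a toy under stated assumptions, not WeVerify's implementation:

        # Minimal sketch of a tamper-evident registry of known fakes: each entry stores
        # a SHA-256 fingerprint of the media item plus a debunking verdict, and is
        # chained to the previous entry's hash so past records cannot be silently altered.
        # Illustrative only; not the WeVerify platform.
        import hashlib, json, time

        class KnownFakesLedger:
            def __init__(self):
                self.entries = []

            def _entry_hash(self, entry: dict) -> str:
                return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

            def register(self, media_bytes: bytes, verdict: str) -> dict:
                entry = {
                    "fingerprint": hashlib.sha256(media_bytes).hexdigest(),
                    "verdict": verdict,
                    "timestamp": time.time(),
                    "prev_hash": self._entry_hash(self.entries[-1]) if self.entries else None,
                }
                self.entries.append(entry)
                return entry

            def lookup(self, media_bytes: bytes):
                fp = hashlib.sha256(media_bytes).hexdigest()
                return [e for e in self.entries if e["fingerprint"] == fp]

        ledger = KnownFakesLedger()
        ledger.register(b"<fabricated image bytes>", "manipulated: object spliced in")
        print(ledger.lookup(b"<fabricated image bytes>"))

    A real deployment would use perceptual rather than exact hashes and a distributed ledger, but the lookup-by-fingerprint pattern is the same.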

