
BBC

British Broadcasting Corporation (United Kingdom)
180 Projects, page 1 of 36
  • Funder: UK Research and Innovation | Project Code: AH/P005837/1
    Funder Contribution: 787,310 GBP

    Since 1922 the BBC has been where Britain learns about itself and the world. It is a cultural institution of global significance, and its history is central to our understanding of the 20th and 21st centuries. Yet one vital piece of that history has never been accessible to anyone beyond a tiny circle of BBC staff and official historians: its internal archive of 632 recorded interviews - with key programme-makers and presenters such as David Attenborough, Sydney Newman and John Cole, producers of early television such as Cecil Madden and Grace Wyndham Goldie, pioneering engineers, past directors-general, even Home Secretaries. All were interviewed as they retired and encouraged to speak frankly. Their testimonies offer unique 'ringside' accounts of how the BBC has developed the arts of broadcasting and seen the world of politics and culture. Not only are these interviews inaccessible to all but a select few; they are also unusable - scattered, un-catalogued, and preserved in multiple formats from videotape to crumbling paper.

    BBC CONNECTED HISTORIES brings new digital humanities thinking to bear on this problem. It will digitise these materials to the highest standards and create a digital catalogue of the entire collection. By generating metadata and tagging each interview, it doesn't just make individual testimonies available: it makes the collection as a whole searchable. Single accounts can be related to one another, and themes or events mapped from several angles. Biographies become networked. By publishing this catalogue as 'linked open data' (LOD), the oral histories (OH) become connected to other digitised resources, including those of our Partners - the Science Museum (incl. the National Media Museum), Mass Observation (MO) and the British Entertainment History Project (BEHP) - as well as all the BBC's other collections. So anyone searching for material on, say, 'Mrs Thatcher resigns', 'Diana', 'immigration' or 'comedy' can simultaneously discover relevant passages in the OH collection, the BBC's own vast programme archive, the personal accounts of listeners and viewers in MO, or the interviews of broadcasting technicians in BEHP - or vice versa. This radically expands the ability of any public or academic researcher to connect different sets of evidence - and different perspectives - on BBC history. It also gives programme-makers planning output for the BBC's 2022 Centenary ready access to important though neglected material.

    The project will present highlights from the OH and linked collections on a series of BBC-hosted '100 Voices' websites, each on a broad theme (entertainment, war, national identity, etc.). These act (a) as high-profile shop-windows for research, (b) as public portals through which the OH catalogue and related resources can be searched, and (c) as access-points for the public to upload their own recollections via a 'memory-share' facility, thereby 'crowd-sourcing' a new body of data. 25 new oral history interviews with former BBC staff will be filmed. These will be of individuals not included in the official OH, and will cover their whole lives, not just their BBC careers. Each will be transcribed and tagged, linking them to existing resources. This demonstrates to the BBC the value of adopting a different (deeper, more connected) practice in future archival work, as will be set out in a 'White Paper' presented formally to the BBC. The White Paper will also set out how the BBC's attempt to build a 'Digital Public Space' of shared resources might be improved through new policies on openness and user-engagement. Four journal articles, co-authored by the project team and the research fellow, will explore other methodological insights - in media history, oral history and digital humanities. Finally, the PI, Hendy, is the authorised Centenary historian for the BBC: the new perspectives generated throughout this project will directly inform the single-volume history he publishes in 2022.
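    The catalogue-as-linked-open-data idea is the most technical part of the plan, so a minimal sketch may help. Below is a hypothetical illustration, in Python with the rdflib library, of how a single tagged interview record could be published as LOD and linked into a partner collection; every URI, class name and value here is an invented assumption, not the project's actual schema.

    ```python
    # Hypothetical LOD record for one oral-history interview (illustrative only).
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCTERMS, RDF

    BBC_OH = Namespace("http://example.org/bbc-oral-history/")  # placeholder namespace

    g = Graph()
    interview = BBC_OH["interview/042"]  # invented identifier

    g.add((interview, RDF.type, BBC_OH.OralHistoryInterview))
    g.add((interview, DCTERMS.title, Literal("Interview with a retiring producer")))
    g.add((interview, DCTERMS.subject, Literal("early television drama")))
    # Linking to a record in a partner collection (e.g., Mass Observation) is
    # what lets a single search surface material from several archives at once.
    g.add((interview, DCTERMS.relation,
           URIRef("http://example.org/mass-observation/record/99")))

    print(g.serialize(format="turtle"))
    ```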

  • Funder: UK Research and Innovation | Project Code: EP/T03324X/1
    Funder Contribution: 257,481 GBP

    This proposal addresses an important and under-studied area in the fast-developing field of virtual reality (VR) and immersive technology. Our vision is to give end users an unprecedented sense of full immersion in virtual environments, in any application scenario and at any connectivity level. This requires VR systems that operate at scale, in a personalised manner, remaining bandwidth-tolerant while meeting quality and latency criteria. It can be accomplished only by fundamentally rethinking the coding/streaming/rendering chain to put the interactive user at the heart of the system rather than at the end of the chain. Specifically, bringing this vision to reality requires sustained innovation in VR streaming technology to overcome two main challenges: i) the spherical format of the video content, in a setting where media codecs are optimised for planar content; ii) the interactive behaviour of users, which introduces uncertainty about which parts of the content will actually be viewed. To cope with the first challenge, spherical video is currently projected onto a two-dimensional planar domain, but this introduces noticeable deformation, since the sphere is not a developable surface. To address the uncertainty of user behaviour, prediction models have been investigated recently, but none of them actually operates on the spherical (undeformed) domain.

    Overcoming these challenges requires an efficient tool for analysing navigation patterns in the spherical domain, and leveraging it to predict user behaviour and to build the entire coding-delivery-rendering pipeline as a user- and geometry-centric system. This proposal therefore focuses on developing novel VR delivery strategies that rely on analysing and predicting navigation patterns over omnidirectional content in a user-, content- and application-dependent fashion. Predicting user behaviour will make it possible to deliver only the content that users will actually consume, minimising transmission-resource consumption while maximising the user's Quality of Experience. The key novelty is to address these challenges directly in the spherical domain, rather than in the projected (and thus deformed) two-dimensional domain. The developed VR system will be tested on a real omnidirectional video streaming testbed implemented within the scope of the project. The outcomes of the project will be threefold: i) a user-, content- and application-dependent VR streaming platform, optimised from the source codec to the delivery platform; ii) user-analysis tools able to assess user behaviour (while interacting with the content) in an objective fashion, identifying common navigation patterns and predicting future behaviour; iii) a VR streaming testbed on which to evaluate the proposed technology.
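    The projection deformation described above can be made concrete in a few lines of code. The sketch below, ours rather than the project's, computes the per-row sphere-area weights of an equirectangular frame: rows near the poles cover far less of the sphere than rows at the equator, which is why treating the projected frame as ordinary planar video misallocates bits and distorts quality measurement (sphere-aware metrics such as WS-PSNR apply exactly this reweighting). The frame height is an arbitrary assumption.

    ```python
    # Per-row sphere-area weights of an equirectangular frame (illustrative).
    import numpy as np

    height = 960  # rows of a hypothetical equirectangular frame

    # Latitude at the centre of each pixel row, from +pi/2 (top) to -pi/2 (bottom).
    lat = (0.5 - (np.arange(height) + 0.5) / height) * np.pi
    weights = np.cos(lat)      # relative sphere area covered by each row
    weights /= weights.sum()   # normalise so the weights sum to 1

    print(f"equator row weight:   {weights[height // 2]:.2e}")
    print(f"near-pole row weight: {weights[0]:.2e}")  # tiny sphere area, full pixel width
    ```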

  • Funder: UK Research and Innovation | Project Code: EP/P011586/1
    Funder Contribution: 533,267 GBP

    The cost of producing dynamically-updated media content - such as online video news packages - across multiple languages is very high. Maintaining substantial teams of journalists per language is expensive and inflexible. Modern media organisations like the BBC or the Financial Times need a more agile approach: they must be able to react quickly to changing world events (e.g., breaking news or emerging markets), dynamically allocating their limited resources in response to external demands. Ideally, they would like to create 'pop-up' services and products in previously-unsupported languages, then scale them up or down later. The government has set the BBC a target of reaching a global audience of 500 million people by 2022, compared with today's 308 million. The only way to reach such a huge audience is through new language services and efficient production techniques. Text-to-speech - which automatically produces speech from text - offers an attractive solution to this challenge, and the BBC have identified computer-assisted translation and text-to-speech as key technologies that will provide them with new ways of creating and reversioning their content across many languages.

    This project's objectives are to push text-to-speech technology towards "broadcast quality" computer-generated speech (i.e., good enough for the BBC to broadcast) in many languages, and to make it cheap and easy to add more languages later. We will do this by combining and extending several distinct pieces of our previous basic research on text-to-speech. We will use the latest data-driven machine learning techniques, and extend them to produce much higher-quality output speech. At the same time, we will enable human control over the speech. This will allow the user (e.g., a BBC journalist) to adjust the speech to make sure the quality and the speaking style are right for their purposes (e.g., correcting the pronunciation of a difficult word, or putting emphasis in the right place). The technology we create for the likes of the BBC will also enable smaller companies and other organisations, state bodies, charities and individuals to rapidly create high-quality spoken content, in whatever language or domain they are operating. We will work with other types of organisation during the project, to make sure that the technology we create has broad appeal and will be useful to a wide range of companies and individuals.
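    As a concrete illustration of the kind of human control described above (not necessarily the control interface this project will build), the W3C's Speech Synthesis Markup Language (SSML) already lets an editor pin down a pronunciation or add emphasis. The snippet below constructs such markup in Python; the sentence and the IPA transcription are invented examples.

    ```python
    # Illustrative SSML giving an editor control over pronunciation and emphasis.
    ssml = """\
    <speak>
      The programme was presented by
      <phoneme alphabet="ipa" ph="ˈætənbərə">Attenborough</phoneme>,
      and it went out <emphasis level="strong">live</emphasis>.
    </speak>"""

    print(ssml)  # an SSML-capable text-to-speech engine would render this directly
    ```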

  • Funder: UK Research and Innovation | Project Code: EP/F003420/1
    Funder Contribution: 236,179 GBP

    This research project proposes to advance state-of-the-art image recognition techniques to recognize a large number of scenes and object categories in real, unconstrained indoor and outdoor environments, i.e. traffic scenes (cars, bicycles, pedestrians, human faces, street signs, etc.) and urban and natural scenes (buildings, landscapes, etc.) with various rigid and articulated objects as well as textures.

    Nowadays almost everybody carries a digital camera, and taking a photo or a short video has never been easier. Broadcasting companies receive thousands of pictures from the general public after every major event, and the annotation of those documents is done manually. Crime investigators collect large amounts of visual evidence, and its classification is also done manually. The UK has the largest number of security cameras in Europe, but the data provided by the cameras is very little explored. Furthermore, recognition and interpretation of visual information is one of the major requirements for autonomous intelligent robots. There is therefore a dire need for a reliable recognition system capable of automatic classification and annotation of large amounts of visual documents. Any success towards achieving that goal, e.g., automatic prioritizing of document browsing for experts, will be seen as a clear benefit in improving the efficiency of work.

    To fulfil the objectives of this project, major progress has to be made in the domains of feature extraction, category representation and efficient search. Recent interest-point based approaches demonstrate the capability of dealing with large numbers of categories in the context of visual recognition. These methods show promising directions towards successful scene and object recognition. Based on these results, we propose to develop novel techniques for extracting image features robust to background clutter and viewpoint change, which are currently great challenges in the image-recognition domain. Those features will be suitable for simultaneous representation of scenes and objects at various appearance and structure levels, as well as for segmentation of objects. Mid-level image segmentation methods have the potential to provide such features and can bridge the gap between interest-point detectors and semantic segmentation in the context of category recognition. There has been little overlap between the recognition and segmentation domains, although the goal is to solve both problems simultaneously.

    We also propose to introduce novel hierarchical representations which will exploit the properties of the new features and allow us to deal efficiently with a large number of image categories. The representation will model the categories in multiple hierarchies of various image attributes, i.e., intensity, color and texture, as well as relations between different object parts and views. The multiple hierarchies will allow for coarse-to-fine classification based on image cues relevant to the query. Very little work has been done in this area, and the proposed research can shed new light on image-representation problems. Finally, efficient tree structures and nearest-neighbor search techniques will be employed to handle large amounts of data in multi-category learning.

    Developing novel, efficient and robust techniques which may provide successful solutions to fundamental recognition problems and advance the state of the art in feature extraction, category representation and data exploration makes this project very challenging and adventurous. The project is expected to achieve its objectives within 36 months, and it will involve a research student, a research assistant and the principal investigator.
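    To make the proposal's main ingredients concrete (interest-point features, a quantized category representation, and nearest-neighbor search), here is a minimal bag-of-visual-words sketch. It is a generic textbook pipeline under our own assumptions (ORB features via OpenCV, a 64-word vocabulary, placeholder image paths), not the representation the project itself will develop.

    ```python
    # Minimal bag-of-visual-words pipeline (illustrative, with placeholder data).
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neighbors import KNeighborsClassifier

    def orb_descriptors(image_path):
        """Detect interest points and return their local ORB descriptors."""
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        if img is None:  # missing or unreadable file
            return np.empty((0, 32), dtype=np.uint8)
        _, desc = cv2.ORB_create(nfeatures=500).detectAndCompute(img, None)
        return desc if desc is not None else np.empty((0, 32), dtype=np.uint8)

    def bovw_histogram(desc, vocabulary):
        """Quantize descriptors to visual words and build a normalized histogram."""
        words = vocabulary.predict(desc.astype(np.float32))
        hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)

    # Learn a visual vocabulary from training descriptors, then classify each
    # image by nearest-neighbor search over its word histogram.
    train_paths, train_labels = ["a.jpg", "b.jpg"], [0, 1]   # placeholder data
    all_desc = np.vstack([orb_descriptors(p) for p in train_paths]).astype(np.float32)
    vocabulary = KMeans(n_clusters=64, n_init=10).fit(all_desc)

    X = np.array([bovw_histogram(orb_descriptors(p), vocabulary) for p in train_paths])
    clf = KNeighborsClassifier(n_neighbors=1).fit(X, train_labels)
    ```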

  • Funder: UK Research and Innovation | Project Code: EP/K005952/1
    Funder Contribution: 505,830 GBP

    The human visual system has been fine-tuned over generations of evolution to operate effectively in our particular environment, allowing us to form rich 3D representations of the objects around us. The scenes that we encounter on a daily basis produce 2D retinal images that are complex and ambiguous. From this input, how does the visual system achieve the immensely difficult goal of recovering our surroundings, in such an impressively fast and robust way? To achieve this feat, humans must use two types of information about their environment. First, we must learn the probabilistic relationships between 3D natural scene properties and the 2D image cues these produce. Second, we must learn which scene structures (shapes, distances, orientations) are most common, or probable, in our 3D environment. This statistical knowledge about natural 3D scenes and their projected images allows us to maximize our perceptual performance. To better understand 3D perception, therefore, we must study the environment that we have evolved to process.

    A key goal of our research is to catalogue and evaluate the statistical structure of the environment that guides human depth perception. We will sample the range of scenes that humans frequently encounter (indoor and outdoor environments over different seasons and lighting conditions). For each scene, state-of-the-art ground-based Light Detection and Ranging (LiDAR) technology will be used to measure the physical distance to all objects (trees, ground, etc.) from a single location - a 3D map of the scene. We will also take High Dynamic Range (HDR) photographs of the same scene, from the same vantage point. By collating this paired 3D and 2D data across numerous scenes we will create a comprehensive database of our environment and the 2D images that it produces. By making the database publicly available it will facilitate not just our own work, but research by human and computer vision scientists around the world who are interested in a range of pure and applied visual processes.

    There is great potential for computer vision to learn from the expert processor that is the human visual system: computer vision algorithms are easily out-performed by humans for a range of tasks, particularly when images correspond to more complex, realistic scenes. We are still far from understanding how the human visual system handles the kind of complex natural imagery that defeats computer vision algorithms. However, the robustness of the human visual system appears to hinge on: 1) exploiting the full range of available depth cues and 2) incorporating statistical 'priors': information about typical scene configurations. We will employ psychophysical experiments, guided by our analyses of natural scenes and their images, to develop valid and comprehensive computational models of human depth perception. We will concentrate our analysis and experimentation on key tasks in the process of recovering scene structure - estimating the location, orientation and curvature of surface segments across the environment.

    Our project addresses the need for more complex and ecologically valid models of human perception by studying how the brain implicitly encodes and interprets depth information to guide 3D perception. Virtual 3D environments are now used in a range of settings, such as flight simulation and training systems, rehabilitation technologies, gaming, 3D movies and special effects. Perceptual biases are particularly influential when visual input is degraded, as it is in some of these simulated environments. To evaluate and improve these technologies we require a better understanding of 3D perception. In addition, the statistical models and inferential algorithms developed in the project will facilitate the development of computer vision algorithms for automatic estimation of depth structure in natural scenes. These algorithms have many applications, such as 2D-to-3D film conversion, visual surveillance and biometrics.
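    The cue-and-prior combination described above is commonly formalised as reliability-weighted fusion of Gaussian estimates, in which each cue (and any prior) contributes in proportion to its inverse variance. The sketch below implements that standard model; the cue values and variances are invented for illustration and are not data from this project.

    ```python
    # Reliability-weighted (maximum-likelihood/MAP) fusion of Gaussian estimates.
    import numpy as np

    def combine_gaussian_estimates(means, variances):
        """Fuse independent Gaussian estimates (depth cues and/or a prior)."""
        w = 1.0 / np.asarray(variances, dtype=float)  # reliability = inverse variance
        w /= w.sum()
        mean = float(np.dot(w, means))                # reliability-weighted mean
        var = 1.0 / np.sum(1.0 / np.asarray(variances, dtype=float))
        return mean, var

    # Two hypothetical depth cues plus a prior that surfaces tend to lie near
    # a typical distance; the less reliable sources get less weight.
    mean, var = combine_gaussian_estimates(
        means=[2.1, 2.6, 2.0],         # metres: disparity cue, texture cue, prior
        variances=[0.05, 0.40, 1.00],
    )
    print(f"combined estimate: {mean:.2f} m (variance {var:.3f})")
    ```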

