
FOUNDRY
12 Projects, page 1 of 3
Project, 2021–2026
Partners: FOUNDRY, Sony (Europe), MirriAd, Framestore, To Play For Ltd, Synthesia, BT Group (United Kingdom), SalsaSound, BBC, Imagination Technologies (United Kingdom), Framestore CFC, Telefonica I+D (Spain), Audioscenic, University of Surrey, Dimension Studios, Network Media Communications, Figment Productions, Intel Corporation, Foundry (United Kingdom), Imagineer Systems Ltd, Intel (United States), British Broadcasting Corporation (United Kingdom), Telefonica Research and Development, Mirriad (United Kingdom), Boris FX (United Kingdom)
Funder: UK Research and Innovation
Project Code: EP/V038087/1
Funder Contribution: 3,003,240 GBP

Personalisation of media experiences is vital for engaging audiences young and old, allowing more meaningful encounters tailored to their interests, making them part of the story, and increasing accessibility. The goal of the BBC Prosperity Partnership is to realise a transformation to future personalised content creation and delivery at scale, for the public at home or on the move. Mass-media audio-visual 'broadcast' content (news, sports, music, drama) has moved increasingly towards Internet delivery, which creates exciting potential for hyper-personalised media experiences delivered at scale to mass audiences. This radical new user-centred approach to media creation and delivery has the potential to disrupt the media landscape by placing individuals at the centre of their experience, rather than predefining the content as existing media formats (radio, TV, film) do. It will allow a new form of user-centred media experience which dynamically adapts to the individual, their location, the media content and the producer's storytelling intent, together with the platform/device and the network/compute resources available for rendering the content.

The BBC Prosperity Partnership will position the BBC at the forefront of this 'Personalised Media' revolution, enabling the creation and delivery of new services and positioning the UK creative industry to lead future personalised media creation and intelligent network distribution, rendering personalised experiences for everyone, anywhere. Realising personalised experiences at scale presents three fundamental research challenges: capture of object-based representations of the content to enable dynamic adaptation for personalisation at the point of rendering; production to create personalised experiences which enhance the perceived quality of experience for each user; and delivery at scale with intelligent utilisation of the available network, edge and device resources for mass audiences. The partnership will address the major technical and creative challenges of delivering user-centred personalised audience experiences at scale. Advances in audio-visual AI for machine understanding of captured content will enable the automatic transformation of captured 2D video streams into an object-based media (OBM) representation. OBM will allow adaptation for efficient production, delivery and personalisation of the media experience while maintaining the perceived quality of the captured video content.

Delivering personalised experiences to audiences of millions requires transformation of media processing and distribution architectures into a hybrid, distributed, low-latency computation platform, allowing flexible deployment of compute-intensive tasks across the network. This will achieve efficiency in terms of cost and energy use, while providing optimal quality of experience for the audience within the technical constraints of the system.
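As a rough illustration of the object-based media idea described above, the sketch below (Python) shows how a scene made of independent media objects might be resolved into a per-user selection at the point of rendering. All field names, variant labels and the selection rule are hypothetical; this is not the project's actual data model or code.

# Illustrative only: a minimal sketch of an object-based media (OBM) scene,
# with hypothetical field names. It shows the general idea of composing a
# personalised render from independent media objects at the point of delivery.
from dataclasses import dataclass, field

@dataclass
class MediaObject:
    object_id: str
    kind: str                                      # e.g. "video", "audio", "caption"
    variants: dict = field(default_factory=dict)   # variant name -> asset URI

@dataclass
class UserProfile:
    language: str = "en"
    needs_captions: bool = False
    device_class: str = "mobile"                   # "mobile", "tv", "hmd", ...

def personalise(scene, user):
    """Pick one variant per object for this user; fall back to the 'default' asset."""
    selection = {}
    for obj in scene:
        if obj.kind == "caption" and not user.needs_captions:
            continue                               # drop objects the user does not need
        preferred = f"{user.language}-{user.device_class}"
        selection[obj.object_id] = obj.variants.get(preferred, obj.variants["default"])
    return selection

scene = [
    MediaObject("commentary", "audio",
                {"default": "commentary_en.mp4", "es-mobile": "commentary_es_low.mp4"}),
    MediaObject("subtitles", "caption", {"default": "subs_en.vtt"}),
]
print(personalise(scene, UserProfile(language="es", needs_captions=True)))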
Project, 2013–2016
Partners: Saarland University, FOUNDRY, IMEC, STARGATE GERMANY GMBH, NCAM, STARGATE STUDIOS MALTA LIMITED, FILMAKADEMIE BADEN-WURTTEMBERG GMBH, CREW, IMINDS, IBBT
Funder: European Commission
Project Code: 610005
Project, 2013–2018
Partners: Imagination Technologies (United Kingdom), Cirrus Logic (United Kingdom), FOUNDRY, ARM (United Kingdom), HP Research Laboratories, Oracle Corporation, MICROSOFT RESEARCH LIMITED, Agilent Technologies (United Kingdom), Dyson Appliances Ltd, Samsung Electronics Research Institute, Wolfson Microelectronics, Oracle (United States), Samsung (United Kingdom), ARM Ltd, University of Salford, Foundry (United Kingdom), Imagination Technologies Ltd UK, University of Manchester, Hewlett-Packard (United Kingdom), Microsoft Research (United Kingdom), The University of Manchester, Dyson Limited
Funder: UK Research and Innovation
Project Code: EP/K008730/1
Funder Contribution: 4,135,050 GBP

The last decade has seen a significant shift in the way computers are designed. Up to the turn of the millennium, advances in performance were achieved by making a single processor, which could execute one program at a time, go faster, usually by increasing the frequency of its clock signal. Shortly after the turn of the millennium it became clear that this approach was running into a brick wall: a faster clock meant the processor got hotter, and the amount of heat that can be dissipated in a silicon chip before it fails is limited; that limit was approaching rapidly! Quite suddenly several high-profile projects were cancelled and the industry found a new approach to higher performance. Instead of making one processor go ever faster, the number of processor cores could be increased. Multi-core processors had arrived: first dual-core, then quad-core, and so on. As microchip manufacturing capability continues to increase the number of transistors that can be integrated on a single chip, the number of cores continues to rise, and multi-core is now giving way to many-core systems: processors with tens of cores, running tens of programs at the same time.

This all seems fine at the hardware level (more transistors means more cores), but the change from one to many programs running at the same time has caused many difficulties for the programmers who develop applications for these new systems. Writing a program that runs on a single core is much better understood than writing a program that is really tens of programs running at the same time, interacting with each other in complex and hard-to-predict ways. To make life for the programmer even harder, with many-core systems it is often best not to make all the cores identical; instead, heterogeneous many-core systems offer the promise of much higher efficiency, with specialised cores handling specialised parts of the overall program, but this is even harder for the programmer to manage. The Programme of projects we plan to undertake will bring the most advanced techniques in computer science to bear on this complex problem, focussing particularly on how we can optimise the hardware and software configurations together to address the important application domain of 3D scene understanding. This will enable a future smartphone fitted with a camera to scan a scene and not only store the picture it sees, but also understand that the scene includes a house, a tree and a moving car. In the course of addressing this application we expect to learn a lot about optimising many-core systems that will have wider applicability too, with the prospect of making future electronic products more efficient, more capable and more useful.
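To make the heterogeneous many-core idea concrete, here is a toy Python sketch that places the stages of a hypothetical scene-understanding pipeline onto cores of different types. The core types, stage names and affinity numbers are invented for illustration; the project's hardware/software co-optimisation is far more sophisticated than this greedy scheduler.

# Illustrative only: a toy mapping of pipeline stages onto a heterogeneous
# many-core system. All numbers and names are hypothetical.
from dataclasses import dataclass

@dataclass
class Core:
    core_id: int
    kind: str            # e.g. "big", "little", "gpu", "dsp"
    busy_until: float = 0.0

# How well each stage of a hypothetical scene-understanding pipeline suits each
# core type (higher = faster on that core type).
AFFINITY = {
    "feature_extraction": {"gpu": 8.0, "big": 3.0, "little": 1.0, "dsp": 4.0},
    "object_detection":   {"gpu": 9.0, "big": 2.5, "little": 0.8, "dsp": 2.0},
    "tracking":           {"gpu": 2.0, "big": 4.0, "little": 2.0, "dsp": 1.5},
}

def schedule(stages, cores, base_cost=10.0):
    """Greedily place each stage on the core that would finish it earliest."""
    plan = []
    for stage in stages:
        best = min(cores,
                   key=lambda c: c.busy_until + base_cost / AFFINITY[stage][c.kind])
        finish = best.busy_until + base_cost / AFFINITY[stage][best.kind]
        best.busy_until = finish
        plan.append((stage, best.kind, best.core_id, round(finish, 2)))
    return plan

cores = [Core(0, "big"), Core(1, "little"), Core(2, "gpu"), Core(3, "dsp")]
for row in schedule(["feature_extraction", "object_detection", "tracking"], cores):
    print(row)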
Project, 2013–2017
Partners: FOUNDRY, BBC, DNEG (United Kingdom), Foundry (United Kingdom), Walt Disney (United States), Gobo Games Limited, University of Bath, Crytek Ltd, Walt Disney World Company, British Broadcasting Corporation (United Kingdom), British Broadcasting Corporation - BBC, Bath Spa University
Funder: UK Research and Innovation
Project Code: EP/K02339X/1
Funder Contribution: 959,780 GBP

Imagine being able to take a camera out of doors and use it to capture 3D models of the world around you: the landscape at large, including valleys and hills replete with trees, rivers, waterfalls, fields of grass and clouds; seasides with waves rolling onto shore here and crashing onto rocks over there; urban environments complete with incidentals such as lampposts, balconies and the detritus of modern life. Imagine models that look and move like the real thing, models that you can use to make up new scenes of your own, control as you please, and render however you like; you can zoom in to see details and out to get a wide impression. This is an impressive vision, and one that is well beyond current know-how. Our plan is to take a major step towards meeting it. We will enable users to use video and images to capture large-scale scenes of selected types and populate them with models of trees, fountains, street furniture and the like, again carefully selecting the types of objects. We will provide software that recognises the sort of environment the camera is in, and the objects in that environment, so that moving 3D models can be created automatically. This will prove very useful to our intended user group, the UK creative industries: film, games and broadcast. Modelling outdoor scenes is expensive and time-consuming, and the industry recognises that video and images are excellent sources for making models they can use. To help them further we will develop software that makes use of their current practice of acquiring survey shots of scenes, so that all data is used at many levels of detail. Finally, we will wrap all of our developments into a single system that shows the acquisition, editing and control of complete outdoor environments is one step closer.
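For illustration only, the following Python sketch shows the general shape of the capture pipeline described above: recognise the kind of environment the camera is in, then instantiate a parametric model for each recognised object. The environment categories, object classes and matching rule are hypothetical placeholders, not the project's software.

# Illustrative only: hypothetical environment categories and object classes.
ENVIRONMENT_OBJECTS = {
    "urban":    ["lamppost", "balcony", "street furniture"],
    "woodland": ["tree", "grass", "river"],
    "coast":    ["waves", "rocks"],
}

def classify_environment(tags):
    """Pick the environment whose typical objects best match tags seen in the footage."""
    return max(ENVIRONMENT_OBJECTS,
               key=lambda env: len(set(tags) & set(ENVIRONMENT_OBJECTS[env])))

def build_scene(tags):
    """Attach a (placeholder) parametric, animatable model to each recognised object."""
    env = classify_environment(tags)
    models = [{"class": t, "model": f"{t}_parametric_v1"} for t in tags
              if t in ENVIRONMENT_OBJECTS[env]]
    return env, models

print(build_scene(["tree", "river", "lamppost"]))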
Project, 2018–2022
Partners: FXhome Limited, FOUNDRY, SyncNorwich, FaceMe, Emteq Ltd, Emteq (United Kingdom), Foundry (United Kingdom), UEA
Funder: UK Research and Innovation
Project Code: EP/S001816/1
Funder Contribution: 557,530 GBP

Our bodies move as we speak. Evidently, movement of the jaw, lips and tongue is required to produce coherent speech. Furthermore, additional body gestures both synchronise with the voice and contribute significantly to speech comprehension. For example, a person's eyebrows raise when they are stressing a point, their head shakes when they disagree, and a shrug might express doubt. The goal is to build a computational model that learns the relationship between speech and upper-body motion, so that we can automatically predict face and body posture for any given speech audio. The predicted body pose can be transferred to computer graphics characters, or avatars, to create character animation directly from speech, automatically and on the fly.

A number of approaches have previously been used for mapping from audio to facial motion or head motion, but the limited amount of speech and body motion data that is available has hindered progress. Our research programme will use a field of machine learning called transfer learning to overcome this limitation. Our research will be used to automatically and realistically animate the face and upper body of a graphics character along with a user's voice in real time. This is valuable for (a) controlling the body motion of avatars in multiplayer online gaming, (b) driving a user's digital presence in virtual reality (VR) scenarios, and (c) automating character animation in television and film production. The work will enhance the realism of avatars during live interaction between users in computer games and social VR without the need for full-body tracking. Additionally, we will significantly reduce the time required to produce character animation by removing the need for expensive and time-consuming hand animation or motion capture. We will develop novel artificial intelligence approaches to build a robust speech-to-body-motion model. For this, we will design and collect a video and motion capture dataset of people speaking, which will be made publicly available. The project team comprises Dr. Taylor and a PDRA at the University of East Anglia, Norwich, UK.
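As a minimal sketch of the kind of speech-to-motion mapping described above, the following PyTorch snippet maps a sequence of audio features to per-frame pose parameters. The feature sizes, the architecture, and the idea of initialising the encoder from a pretrained speech model (as a stand-in for the transfer learning mentioned above) are assumptions for illustration, not the project's actual model.

# Illustrative only: a toy audio-to-pose regression model with made-up dimensions.
import torch
import torch.nn as nn

class SpeechToMotion(nn.Module):
    def __init__(self, n_audio_feats=80, n_pose_params=54, hidden=256):
        super().__init__()
        # In practice the encoder could be initialised from a model pretrained on
        # large speech corpora, so the scarce motion-capture data is mainly needed
        # to fit the decoder (the transfer-learning idea mentioned above).
        self.encoder = nn.GRU(n_audio_feats, hidden, num_layers=2, batch_first=True)
        self.decoder = nn.Linear(hidden, n_pose_params)   # per-frame pose parameters

    def forward(self, audio_feats):                       # (batch, frames, n_audio_feats)
        hidden_states, _ = self.encoder(audio_feats)
        return self.decoder(hidden_states)                # (batch, frames, n_pose_params)

model = SpeechToMotion()
dummy_audio = torch.randn(1, 100, 80)      # 100 frames of e.g. mel-spectrogram features
print(model(dummy_audio).shape)            # torch.Size([1, 100, 54])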