'Musically Embodied Machine Learning' is an investigation into the musically expressive potential of machine learning (ML) when embodied within physical musical instruments. It proposes 'tuneable ML', a novel approach to exploring the musicality of ML models when they can be adjusted, personalised and remade, using the instrument itself as the interface. Moving beyond the static preset models used in today's instruments, musicians playing instruments built with a tuneable approach will be able to customise the ML models within their instruments, adapting them to personal needs and varying situations, just as one might change the strings or pickups on an electric guitar, reconfigure modules in a synthesiser, or retune a set of drums for a particular performance.

ML has been highly successful in the broader cultural and technical landscape, allowing us to build novel creative tools for musicians: for example, generative models that bring new approaches to sound design, or models that allow musicians to build complex, nuanced mappings from musical gestures. These instruments offer new forms of creative expression because they are configurable in intuitive ways, using data that can be created by musicians themselves. They can also offer new modes of control through techniques such as latent space manipulation.

Currently, standard practice for training an ML model is to collect data (e.g. sound or sensor data), then create and pre-test the model within a data science environment, before testing it with the instrument. This distributed approach creates a disconnection between the instrument and the machine learning processes. With ML embodied within an instrument, musicians will be able to take a more creative and intuitive approach to making and tuning models, one that is also more inclusive of those without expertise in ML. This embodied approach to ML fits with wider views in the philosophy of artificial intelligence on the need to situate and embody models within the real world in order to improve them. Musicians can get the most value from ML if the whole process of machine learning is accessible: there are many creative possibilities in the training and tuning of models, so it is valuable for the musician to have access to the curation of data, the curation of models, and methods for the ongoing retuning of models over their lifetime.

We have reached the point where ML technology will run on lightweight embedded hardware at rates sufficient for audio and sensor processing. This opens up innumerable additions to our electronic, digital, and hybrid augmented acoustic instruments. Our instruments will contain lightweight embedded computers with ML models that shape key elements of an instrument's behaviour, for example sound modification or gesture processing, responding to sensory input from the player and/or the environment.

This project will demonstrate how tuneable ML creates novel musical possibilities, as it allows the creation of self-contained instruments that can evolve independently of the complex data science tools conventionally used for ML. The project asks how instruments can be designed to make effective and musical use of embedded ML processes, and questions the implications for instrument designers and musicians when tuneable processes are a fundamental driver of an instrument's musical feel and musical behaviour.
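As a rough illustration of the scale of computation involved, the sketch below (in C++, with hypothetical weights, layer sizes and a stand-in sensor frame, none of which are drawn from the project itself) shows a tiny gesture-to-control mapping of the kind that can comfortably run once per sensor frame on a lightweight embedded computer. In a tuneable instrument, the weights would be trained from, and retuned with, the musician's own data rather than fixed as they are here.

// Minimal sketch (hypothetical weights and sensor values): a tiny fixed-size
// neural network mapping a few sensor readings to one synthesis control value.
#include <array>
#include <cmath>
#include <cstdio>

constexpr int kInputs = 3;   // e.g. accelerometer x/y/z
constexpr int kHidden = 4;

// Illustrative weights; in practice these would come from training on the
// musician's own gesture data and could be retuned over the instrument's life.
std::array<std::array<float, kInputs>, kHidden> W1 = {{
    {0.5f, -0.2f, 0.1f}, {-0.3f, 0.8f, 0.4f},
    {0.2f, 0.1f, -0.6f}, {0.7f, -0.5f, 0.3f}}};
std::array<float, kHidden> b1 = {0.0f, 0.1f, -0.1f, 0.05f};
std::array<float, kHidden> W2 = {0.6f, -0.4f, 0.3f, 0.5f};
float b2 = 0.0f;

// One forward pass: sensor frame in, control value (e.g. filter cutoff) out.
float mapGesture(const std::array<float, kInputs>& x) {
    float y = b2;
    for (int h = 0; h < kHidden; ++h) {
        float a = b1[h];
        for (int i = 0; i < kInputs; ++i) a += W1[h][i] * x[i];
        y += W2[h] * std::tanh(a);        // hidden-layer activation
    }
    return 1.0f / (1.0f + std::exp(-y));  // squash to a 0..1 control range
}

int main() {
    std::array<float, kInputs> frame = {0.2f, -0.1f, 0.9f};  // stand-in sensor read
    std::printf("control = %f\n", mapGesture(frame));
    return 0;
}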
Given the speed of modern AI development, it is clear that new instruments with tuneable ML will emerge, and it is vital that we develop a nuanced understanding of them through experimental use in the hands of musicians, developers and designers. In that way, we will support the design of future instruments and offer new insights into our creative workflows with ML tools.