Jess+


January – June 2023

A real-time collaborative musicking robot that interacts with a musical ensemble through drawing. Part of the wider ERC research project ‘The Digital Score’, based at the University of Nottingham Mixed Reality Lab.

About

My second case study as part of the Digital Score project, Jess+ was my first time working with robotics and my first time using Python outside of small practice exercises. I find it extremely rewarding and enjoyable, yet also quite daunting, to dive into a project with little experience, learning as I work. I was supported by a small but great team at the Mixed Reality Lab at the University of Nottingham, UK. Craig Vear, the project lead, and Johann Benerradi, AI and machine learning specialist, were great to work with and fostered a very open, collaborative environment in which we could each make suggestions outside our own areas of expertise.

The project included three major components: the AI factory, the individual robot behaviours, and the decision-making module or ‘brain’. The AI factory, trained on past recordings of improvisational jazz sessions, processed live microphone and physiological data from the musicians. This was Johann’s realm.
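To give a rough idea of how these pieces relate, here is a minimal sketch of the kind of data the AI factory might hand downstream. The field names and structure are purely illustrative assumptions, not the project’s actual interface.

```python
# Hypothetical shape of one frame of live AI factory output.
# Field names are illustrative only.
from dataclasses import dataclass


@dataclass
class FactoryFrame:
    prediction: float   # model output, informed by past jazz improvisations
    amplitude: float    # level of the live microphone signal
    eda: float          # a physiological signal, e.g. electrodermal activity


def on_frame(frame: FactoryFrame) -> None:
    """The brain consumes frames like this and decides which behaviours to trigger."""
    print(f"prediction={frame.prediction:.2f} amplitude={frame.amplitude:.2f}")
```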

My main task in the project was programming the individual robot behaviours, such as drawing shapes and letters, both on and off the page. This involved interfacing directly with the robot APIs in Python, creating paths for the robot to follow, which would then be triggered by the brain in response to certain outputs from the AI factory. I started by implementing basic primitive shapes and, as I gained a better understanding of the API, moved on to more complex compound shapes such as letters and numbers. I also implemented a basic memory so the robot could decide to return to a previously drawn shape and add to it.
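The sketch below shows the general flavour of this kind of behaviour code: a primitive shape expressed as a list of waypoints, and a simple memory of what has already been drawn. The names and the arm interface are assumptions for illustration, not the project’s actual module layout.

```python
# Illustrative behaviour code: a shape as a list of waypoints the arm follows,
# plus a basic memory of previously drawn shapes. The `arm` object and its
# move_to() method are assumed, stand-in interfaces.
import math

PAGE_Z = 0.0    # pen touching the page (assumed coordinate convention)
LIFT_Z = 20.0   # pen lifted off the page


def circle(cx: float, cy: float, radius: float, steps: int = 36):
    """Primitive shape: a circle approximated by short straight segments."""
    return [
        (cx + radius * math.cos(2 * math.pi * i / steps),
         cy + radius * math.sin(2 * math.pi * i / steps),
         PAGE_Z)
        for i in range(steps + 1)
    ]


drawn_shapes = []   # memory of what has been drawn, so shapes can be revisited


def draw(arm, waypoints, label: str) -> None:
    """Send a path to the arm, then lift the pen and remember the shape."""
    for x, y, z in waypoints:
        arm.move_to(x, y, z)
    last_x, last_y, _ = waypoints[-1]
    arm.move_to(last_x, last_y, LIFT_Z)
    drawn_shapes.append({"label": label, "waypoints": waypoints})
```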

Initially we used the Dobot Magician Mini robot arm, a small desktop arm with a roughly 30 cm reach and about 220 degrees of mobility. The API was poorly documented and maintained, with much of the documentation left untranslated from the original Chinese. This led to a lot of experimentation to get simple commands working, for which I found ChatGPT very useful. There was very little on the internet in the way of forum posts or documentation, but ChatGPT was able to point me in the right direction most of the time. The code it produced was by no means copy-and-paste ready, but it certainly helped me understand how the API worked and what direction I needed to head in.
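For a sense of what “getting simple commands working” looks like, here is a minimal sketch using the community pydobot wrapper. This is not necessarily the interface the project used; it just shows the kind of basic move commands that had to be worked out by trial and error.

```python
# Minimal Dobot sketch using the community pydobot wrapper (assumed setup:
# the arm is the first serial device on the machine).
from serial.tools import list_ports
from pydobot import Dobot

port = list_ports.comports()[0].device
arm = Dobot(port=port)

x, y, z, r, *_ = arm.pose()              # read the current pose
arm.move_to(x + 20, y, z, r, wait=True)  # small move, blocking until complete
arm.move_to(x, y, z, r, wait=True)       # return to the starting pose
arm.close()
```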

Eventually we acquired two new xArm robot arms, a drastic upgrade from the Dobot with a 70 cm reach and full 360 degrees of mobility. The xArm was much larger, and its movement looked almost organic, adding to the feeling of it being a real, thinking, collaborative member of the ensemble. Switching arms also meant that the code base had to be made API-agnostic, working with both the Dobot and xArm robots.
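One way to achieve that kind of API-agnostic design is a small common interface with a thin adapter per vendor SDK. The sketch below assumes the community pydobot wrapper and the UFACTORY xArm Python SDK, and omits initialisation such as enabling motion; it illustrates the idea rather than the project’s actual class layout.

```python
# Sketch of an API-agnostic arm interface: behaviour code talks to RobotArm,
# and each vendor SDK sits behind its own adapter.
from abc import ABC, abstractmethod


class RobotArm(ABC):
    """Common interface the behaviour code is written against."""

    @abstractmethod
    def move_to(self, x: float, y: float, z: float, wait: bool = True) -> None:
        ...


class DobotArm(RobotArm):
    def __init__(self, port: str):
        from pydobot import Dobot          # community Dobot wrapper
        self._device = Dobot(port=port)

    def move_to(self, x, y, z, wait=True):
        r = self._device.pose()[3]         # keep the current wrist rotation
        self._device.move_to(x, y, z, r, wait=wait)


class XArm(RobotArm):
    def __init__(self, ip: str):
        from xarm.wrapper import XArmAPI   # UFACTORY xArm SDK
        self._device = XArmAPI(ip)         # (motion enable / mode setup omitted)

    def move_to(self, x, y, z, wait=True):
        self._device.set_position(x=x, y=y, z=z, wait=wait)
```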

The final part of development was the overall logic of the system, which takes the output from the AI factory, evaluates it against a number of rules, and decides which robot behaviours to trigger. This was developed by Craig Vear, in close collaboration with Johann and myself, to ensure that our work was used to its full potential in the final product.
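The snippet below gives the general flavour of that rule-based decision step. The thresholds, field names, and behaviour names are invented for illustration and are not the project’s actual rules.

```python
# Hypothetical decision step: map the latest AI factory output to a behaviour.
import random


def decide(factory_output: dict) -> str:
    prediction = factory_output.get("prediction", 0.0)
    amplitude = factory_output.get("amplitude", 0.0)

    if prediction > 0.8 and amplitude > 0.6:
        return "draw_letter"               # high-energy passages: compound shapes
    if prediction > 0.5:
        return "draw_primitive"            # moderate energy: simple shapes
    if random.random() < 0.1:
        return "revisit_previous_shape"    # occasionally return to an earlier drawing
    return "move_off_page"                 # otherwise gesture in the air


print(decide({"prediction": 0.9, "amplitude": 0.7}))   # -> draw_letter
```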

This was all developed in collaboration with a cohort of musicians from the Royal College of Music, London. We had regular test sessions with the musicians where they would provide feedback from an end-user point of view. They viewed the robot as a co-author and collaborator in the music-making process and saw it almost as a person with a distinct personality.

After finishing the robot code, I used my remaining time on the project to develop a 3D data representation of the session recordings. This can be found on my project page, Jess+ artefact.

Links

GitHub repo - github.com/DigiScore/jess_plus

The Digital Score - digiscore.github.io/
Digiscore project page - digiscore.github.io/pages/jess+/

UoN Mixed Reality Lab - nottingham.ac.uk/research/groups/mixedrealitylab/