Evolving the Living Looper: Artistic Research, Online Learning, and Tentacle Pendula
The Living Looper: Rethinking the Musical Loop as a Machine Action-Perception Loop
I came across the concept of ‘living loops’ while going through the NIME papers and was naturally intrigued to learn more. The Living Looper is a music interaction project that can yield vastly different outputs from the same input, mainly because it is driven by generative machine learning models. The implementation builds on the RAVE autoencoder created at IRCAM: audio is mapped into a compressed latent space, and an autoregressive predictive model, fit using partial least squares regression, learns to continue each loop in that space. The models capture the timbre of the sound rather than the overall musical structure.
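To make the idea concrete, here is a minimal sketch (my own illustration, not the Living Looper's actual code) of fitting a linear autoregressive predictor on a sequence of latent frames and then "looping" by feeding its predictions back in. The paper uses partial least squares regression; plain least squares stands in here for simplicity.

```python
import numpy as np

def fit_autoregressive(latents: np.ndarray) -> np.ndarray:
    """Fit z[t+1] ~ z[t] @ W on a (T, D) sequence of latent frames.

    Plain least squares is used here as a stand-in for the partial
    least squares regression described in the paper.
    """
    X, Y = latents[:-1], latents[1:]           # predict the next frame from the current one
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # (D, D) linear map
    return W

def generate_loop(W: np.ndarray, z0: np.ndarray, steps: int) -> np.ndarray:
    """Autoregressively roll the model forward from a seed frame z0."""
    frames = [z0]
    for _ in range(steps):
        frames.append(frames[-1] @ W)
    return np.stack(frames)

# Toy demo: a latent trajectory that rotates in 2D is exactly linear,
# so the fitted model continues it almost perfectly.
theta = 0.1
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
z = np.stack([np.array([1.0, 0.0]) @ np.linalg.matrix_power(rot, t)
              for t in range(200)])
W = fit_autoregressive(z)
loop = generate_loop(W, z[0], 50)
```

In the real system the latent frames come from RAVE's encoder and the predicted frames are decoded back to audio in real time; this toy only shows the predict-and-feed-back structure.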
It is a unique approach to looping: the artist can steer the loops and train them for real-time audio synthesis, which helps create more distinctive textures and tones. The instruments used with the interface were primarily stringed and wind instruments like the violin and saxophone, so I wonder how the results would vary with percussion. Was it avoided because of percussion's fast attack and short sustain? A thought to ponder, for sure. I am also curious how turntablists and DJs could use this tool in their performances.
The 2025 paper introduces a user-centric improvement over the previous model, which lacked a visual interface. Each loop now gets its own visualisation, a unique ‘tentacle pendulum’ that moves as the loop plays. The design represents the RAVE latent dimensions in order of importance, with the value of each latent dimension determining the angular displacement and hue of each pendulum segment. New incremental algorithms, which spread the computation across audio frames, potentially allow more computational resources to be devoted to each loop.
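A rough sketch of how such a latent-to-pendulum mapping might work (my own guess at the idea, not the paper's implementation; the squashing function and ranges are assumptions):

```python
import colorsys
import math

# Hypothetical sketch of the 'tentacle pendulum' mapping: each latent
# dimension (ordered by importance) drives one segment of a pendulum
# chain, with the latent value setting the segment's angular
# displacement and its hue. MAX_SWING and the tanh squashing are my
# own assumptions, not taken from the paper.

MAX_SWING = math.pi / 3  # assumed maximum angular displacement per segment

def latent_to_segments(z):
    """Map a latent vector to a list of (angle_radians, rgb) per segment."""
    segments = []
    for value in z:
        squashed = math.tanh(value)              # bound unbounded latents to (-1, 1)
        angle = MAX_SWING * squashed             # angular displacement of this segment
        hue = (squashed + 1.0) / 2.0             # map to the [0, 1] hue wheel
        rgb = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
        segments.append((angle, rgb))
    return segments

# A zero latent hangs straight down with a mid-wheel hue; large
# positive/negative values swing toward the extremes.
segments = latent_to_segments([0.0, 1.5, -2.0])
```

Updating these per audio frame would give each loop a continuously wriggling tentacle whose shape and colour trace the latent trajectory.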
One of the other things I really like about this project is its compatibility and open-source accessibility. Any model following the RAVE API can now be packaged into a Living Looper instance using a Python CLI available on PyPI. Since these instances are themselves nn~ models, the core functions can be loaded directly into Pure Data and Max, and the new graphical version installs easily as a SuperCollider extension, making it very accessible to most sound and music enthusiasts.
One issue I assume they would face is latency from the signal processing, which makes the timbre of the sound a crucial cue for the looper. Running effects like delays and reverb in parallel could prove to be quite a challenge as well, and I would be interested to know how they overcome it.
Overall, I thought it was a super interesting project, and especially useful for artists and musicians to experiment, compose, and express themselves. I am excited to see what the next follow-up to this project will be.