TUTORIALS

Challenging Misleading Metaphors: a Hands-On Tutorial on the Fluid Corpus Manipulation Toolkit

1.5-hour tutorial, led by Pierre Alexandre Tremblay

TUTORIAL 1

MONDAY 3 NOV at 11:30 – 13:00
Pool Street Cinema

Important:

Participants will need to download the demo version of Max if they do not already have this installed: https://support.cycling74.com/hc/en-us/articles/32781890060947-Is-there-a-Max-demo.

Description:

This tutorial introduces the Fluid Corpus Manipulation environment. This toolbox enables creative coding, musicking, and musicking-driven research through machine listening and machine learning, within the creative coding environment already mastered by techno-fluent musicians [1]. After contextualising this interdisciplinary research and laying out its biases, the tutorial offers a first hands-on experience with the toolset, presenting some of its possibilities and tackling the hurdles that new users typically encounter first. It is strongly linked to online material that enables further, deeper exploration after the tutorial.
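To give a flavour of the kind of workflow the toolkit supports inside Max (analysing a sound collection with machine listening, then navigating it with machine learning), here is a rough Python analogue of such a corpus pipeline. The file names, the choice of MFCC descriptors, and the libraries (librosa, scikit-learn) are illustrative assumptions, not the tutorial's actual material, which uses the toolkit's native Max objects.

```python
# Illustrative Python analogue of a corpus pipeline of the kind the
# toolkit enables in Max; paths and parameters are hypothetical.
import glob
import numpy as np
import librosa
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def describe(path):
    # Machine listening: summarise one file as a 13-point timbre vector.
    y, sr = librosa.load(path, sr=22050, mono=True)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

paths = sorted(glob.glob("corpus/*.wav"))     # hypothetical sound collection
descriptors = np.stack([describe(p) for p in paths])

pca = PCA(n_components=2).fit(descriptors)    # machine learning: a 2-D map
tree = NearestNeighbors(n_neighbors=1).fit(pca.transform(descriptors))

# Query: which corpus entry sounds closest to a new file?
query = pca.transform(describe("query.wav")[None, :])
_, nearest = tree.kneighbors(query)
print("closest corpus entry:", paths[int(nearest[0, 0])])
```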

We observed that the current AI renaissance, characterised by tools with embedded assumptions and increasing opacity, has shifted algorithmic control from creative coders to specialised techno-scientific domains that often employ oversimplified metaphors and reductive views of musical practice. We posit that, by enabling creative coders to programmatically engage with sound collections through machine learning and listening technologies, the toolset highlights broader political implications and exposes the oversimplification prevalent in music information retrieval and AI music discourse. Ultimately, this work empowers stakeholders in musical creative coding to resist reductive computational approaches to musicking, maintaining the nuanced complexity inherent in human-computer musical interaction.

The coding activities are aimed at creative coders with a good understanding of Max (or, to some extent, Pure Data). Past editions have also proven valuable for observant non-coding attendees, who benefited from the theoretical framing of the design process and from the explanations of the various machine listening and machine learning concepts as they watched the coding participants implement them.

References:

  1. Tremblay, P.A., Roma, G., & Green, O. (2022). Enabling Programmatic Data Mining as Musicking: The Fluid Corpus Manipulation Toolkit. Computer Music Journal, 45(2), 9–23.
  2. Moore, T., Bradbury, J., Tremblay, P.A., & Green, O. (2021). Making Machine Learning Musical: Reflections on a Year of Teaching FluCoMa. Journal SEAMUS, 32(1–2).

Acknowledgments:

This tutorial's material is the fruit of many design iterations by the FluCoMa team (Owen Green, Ted Moore, James Bradbury, and Pierre Alexandre Tremblay) at the end of the funded project. All the material, as well as the thinking behind its design, is available at [2].

TUTORIAL LEADER

PIERRE ALEXANDRE TREMBLAY

Conservatorio della Svizzera italiana

Pierre Alexandre Tremblay (Montréal, 1975) is a composer and a performer on bass guitar and electronic devices, in solo and group settings, working across electroacoustic music, contemporary jazz, mixed music, and improvised music. He has also worked in popular music, and practises creative coding. His music is available on empreintes DIGITALes. He studied composition with Michel Tétreault, Marcelle Deschênes, and Jonty Harrison; bass guitar with Jean-Guy Larin, Sylvain Bolduc, and Michel Donato; analysis with Michel Longtin and Stéphane Roy; and studio technique with Francis Dhomont, Robert Normandeau, and Jean Piché. He was Professor of Composition and Improvisation at the University of Huddersfield (England, UK) from 2005 to 2024. In September 2024, he joined the Conservatorio della Svizzera italiana as a research professor in composition. He likes spending time with his family, reading prose, and going on long walks. As a founding member of the no-tv collective, he does not own a working television set.


Real-time Audiovisual Composition Framework with Unreal Engine and Max/MSP

1.5-hour tutorial, led by Chenghao Xu

TUTORIAL 2

FRIDAY 7 NOV at 11:30 – 13:00
Pool Street, Room 301

Important:

Participants will need to download the demo version of Max if they do not already have this installed: https://support.cycling74.com/hc/en-us/articles/32781890060947-Is-there-a-Max-demo.

They will also need to download and install Unreal Engine 5.3 from https://www.epicgames.com/site/en-US/home.

Please follow this tutorial to install Unreal Engine 5.3: https://www.youtube.com/watch?v=FE202G7fKjM

Description:

This tutorial introduces a real-time audiovisual composition workflow using Unreal Engine, Max/MSP, and Max for Live. Participants will explore the fundamental programming logic behind the system, including how sound is generated and processed in Max/MSP and how visuals are created and controlled in Unreal Engine. 

The session will cover the design principles and operational workflow of the integrated framework, highlighting how OSC (Open Sound Control) communication enables real-time interaction between audio and visual components. By the end of the tutorial, participants will have a foundational understanding of how to use a game engine as a powerful tool for creating interactive audiovisual works. 
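As a concrete sketch of that plumbing, the Python snippet below (using the python-osc package) shows the shape of such an exchange; the port number and the /visual/intensity address pattern are placeholders for whatever the Unreal Engine OSC receiver is configured to expect, not fixed parts of the framework.

```python
# Sketch of the OSC link: an audio-side process (standing in for Max/MSP)
# streams a control value to a visual-side receiver (standing in for
# Unreal Engine's OSC plugin). Port and address pattern are assumptions.
import math
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8000)  # Unreal listening on UDP 8000

for tick in range(200):                      # ~10 s of control messages
    amplitude = 0.5 + 0.5 * math.sin(tick / 10)  # stand-in for an envelope
    client.send_message("/visual/intensity", amplitude)
    time.sleep(0.05)                         # 20 messages per second
```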

The session will include demonstrations of the Unreal Engine, Max/MSP, and Max for Live frameworks, hands-on exercises where participants explore real-time audiovisual interactions, and discussions on design strategies, multimodal perception, and creative applications. Participants will actively experiment with the designed interface and interactive software system to understand the framework’s potential for artistic practice and research. 

This session will benefit artists working in the visual or sound field, composers, sound designers, and researchers interested in real-time interactive audiovisual composition and game environments. It is particularly valuable for those working with Ableton Live, Max/MSP, and Unreal Engine who wish to integrate sound and controllable visuals into their creative practice. Educators and students exploring multimodal interaction, extended reality, or creative coding will also gain practical insights and transferable skills. 

TUTORIAL LEADER

CHENGHAO (HAL) XU

University of Edinburgh

Chenghao (Hal) Xu is a PhD researcher in Creative Music Practice at the University of Edinburgh. His research investigates the development of a real-time audiovisual design framework across performance, installation, and immersive media contexts using Max/MSP and the Unreal Engine.

Procedural audio for video games

2.5-hour tutorial, led by Nelly Garcia & Joshua Reiss

TUTORIAL 3

FRIDAY 7 NOV at 14:00 – 16:30
Pool Street Cinema

Important:

Participants will need to have installed Unity Engine and a DAW (Reaper or Ableton Live are recommended). Unity can be downloaded from https://unity.com. They will need to create an account for Unity and also one for Nemisindo (https://www.nemisindo.com). Please also look out for a tutorial file which will be distributed in advance to registered tutorial participants so they can check their setup.

Description:

As video game technology evolves, sound design becomes increasingly important. A key industry goal is enhancing player immersion—making players feel more connected to the environments they explore and the actions they take [1]. Achieving this often relies heavily on audio, particularly environmental sounds that enrich storytelling and gameplay [2].

Traditionally, sound teams use extensive libraries [3], triggering varied effects whenever players interact with the game. However, this approach can lead to repetition and significant memory usage [4]. Procedural audio, often called “digital foley”, offers an innovative alternative. By generating sounds algorithmically rather than storing them in large libraries, it reduces memory demands while enabling dynamic, responsive, and immersive soundscapes. This emerging technology holds great promise for the future of game development [5][6].
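To illustrate the principle independently of any particular engine, here is a minimal Python sketch that synthesises a wind-like texture from filtered noise; the single gust parameter is a hypothetical control a game could drive in real time, instead of streaming a stored sample.

```python
# Procedural audio in miniature: wind synthesised from low-pass-filtered
# noise. No sample library is used; `gust` is a hypothetical control.
import numpy as np
from scipy.signal import butter, lfilter

def wind(duration=2.0, sr=44100, gust=0.5):
    noise = np.random.randn(int(duration * sr))  # white noise source
    cutoff = 200.0 + 800.0 * gust                # gustier -> brighter (Hz)
    b, a = butter(2, cutoff / (sr / 2), btype="low")
    sig = lfilter(b, a, noise)
    sig /= np.max(np.abs(sig))                   # normalise
    return (0.2 + 0.8 * gust) * sig              # gustier -> louder

samples = wind(gust=0.8)  # re-synthesised on demand, never read from disk
```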

However, artificial intelligence and immersive technologies have spread so quickly that practitioners often do not know how to apply them.

The core activity of the workshop will guide participants in creating their own interactive soundscape within a video game environment using procedural audio tools. This hands-on tutorial will take participants step by step through:

  a. Initializing a Unity project;
  b. Connecting game objects to a procedural audio engine;
  c. Implementing real-time audio behaviours that respond dynamically to player actions (sketched below);
  d. Creating an immersive environment within the game engine.
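As a hint of what step (c) involves, the mapping layer can be as small as a function called every frame that turns player state into synthesis parameters; the names below are illustrative, not taken from the tutorial's Unity project.

```python
# Hypothetical per-frame mapping for step (c): player state in,
# synthesis parameters out (e.g. driving the wind() sketch above).
def player_to_wind_params(speed, max_speed=10.0):
    gust = max(0.0, min(speed / max_speed, 1.0))  # clamp to 0..1
    return {"gust": gust}

print(player_to_wind_params(8.0))  # a sprinting player -> {'gust': 0.8}
```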

Participants will also work with sound design and mixing tools, learning how to integrate these assets into real-time audio engines. Through practical exercises, they will gain experience in designing, implementing, and fine-tuning interactive sound systems, equipping them with essential skills for procedural audio development. 

This workshop is designed for early-career researchers and students in the field of audio processing. 

References:

  1. Haggis-Burridge, Mata. “Four categories for meaningful discussion of immersion in video games.” ResearchGate (2020).
  2. Broderick, James, Jim Duggan, and Sam Redfern. “The importance of spatial audio in modern games and virtual environments.” 2018 IEEE games, entertainment, media conference (GEM). IEEE, 2018.
  3. Özcan, Elif, and Rene Van Egmond. “Product sound design and application: An overview.” Proceedings of the Fifth International Conference on Design and Emotion, Gothenburg. 2006.
  4. Menexopoulos, Dimitris, Pedro Pestana, and Joshua Reiss. “The state of the art in procedural audio.” Journal of the Audio Engineering Society 71.12 (2023): 826-848.
  5. Böttcher, Niels. “Current problems and future possibilities of procedural audio in computer games.” Journal of Gaming & Virtual Worlds 5.3 (2013): 215–234.
  6. Yee-King, Matthew, and Igor Dall’Avanzi. “Procedural Audio in Video Games.” Encyclopedia of Computer Graphics and Games. Cham: Springer International Publishing, 2024. 1483–1487.

TUTORIAL LEADER

NELLY GARCIA

Centre for Digital Music, Queen Mary University of London

Nelly Garcia is a research scientist specializing in sound synthesis and the optimization of procedural audio. Her work investigates the process of sound design, focusing on how sound aesthetics and audience perception intersect to create memorable and emotionally resonant auditory experiences. Through this work, she contributes to the development of new tools for sound designers and advances understanding of how procedural audio can be more effectively perceived and applied. 
