Demos

Singing Without a Larynx

1.5-hour demo, led by Gala Lucia Gonzalez Barrios

DEMO 1

MONDAY 3 NOV at 11:30 – 13:00
UCL Marshgate, Room 118

Description:

This demonstration presents a web-based interactive system designed to support vocal expression for individuals who have undergone a laryngectomy. Using computer vision and vowel classification based on mouth shape detection, the interface enables users to generate real-time vocal synthesis without phonation. The system aims to expand conventional understandings of voice beyond speech recovery, offering a space for sonic and musical experimentation grounded in embodied interaction.
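The mouth-shape-to-vowel mapping can be pictured with a minimal sketch. Everything below is an illustrative assumption — the geometric feature, the thresholds, and the three-vowel set are hypothetical, not the demo's actual classifier:

```python
# Illustrative sketch: classify a coarse vowel from mouth-shape geometry.
# The feature (height/width ratio), thresholds, and vowel set are
# hypothetical placeholders; the demo's real computer-vision pipeline
# is not described here.

def mouth_aspect_ratio(width: float, height: float) -> float:
    """Ratio of lip-opening height to mouth width."""
    return height / width if width else 0.0

def classify_vowel(width: float, height: float) -> str:
    """Map a mouth shape to one of three coarse vowel classes."""
    ratio = mouth_aspect_ratio(width, height)
    if ratio > 0.6:        # tall, open mouth
        return "a"
    if width > 55.0:       # wide, spread lips (width in arbitrary pixel units)
        return "i"
    return "u"             # small, rounded opening

# Each classified vowel could then select a synthesis preset in real time,
# so changing mouth shape changes the output without any phonation.
print(classify_vowel(60.0, 40.0))  # tall opening -> "a"
```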

Through the interactive app we aim to explore how participatory design and critical disability studies can inform the development of assistive vocal technologies that move beyond compensatory or rehabilitative paradigms. Rather than attempting to restore normative vocal function, the system invites a reimagining of voice as spatial, malleable, and co-constructed through interface, gesture, and intention. Anchored in a critique of ableist assumptions within technological design, the project aligns with identity recovery, creative autonomy, and user agency for individuals who have experienced structural silencing.

Attendees will observe the vowel classifier in action, experiment with input-output relationships, and engage in discussion around the system’s conceptual and technical foundations. Fully browser-based, the interface integrates flexibly into live performance or research settings. By centering aural diversity and embodied musical interaction, this demo contributes to the CMMR 2025 theme “Sound, Music: Space, Place” through a redefinition of voice as a site of expression that can be felt, shaped, and reclaimed.

Software/hardware/experience requirements:

Attendees are encouraged to use their own laptops for brief hands-on interaction with the system. Optionally, attendees can use headphones to listen to the system privately.

A sign-up sheet will be available at the registration desk.

DEMO LEADER

GALA LUCIA GONZALEZ BARRIOS

Virginia Polytechnic Institute

Gala is a transdisciplinary researcher and PhD candidate in Human-Centered Design at Virginia Tech. Her work focuses on multi-actor engagement at the intersection of critical disability studies, participatory design, and innovation policy. She investigates how AI design for health applications can be co-created with diverse stakeholders to uphold agency, dignity, and social justice within vulnerable communities. Bringing together researchers, policymakers, practitioners, and citizens, Gala facilitates collaborative approaches to complex societal challenges. Her research challenges dominant health-tech narratives by valuing experiential knowledge and lived experience over market efficiency, demonstrating how inclusive innovation can bridge the divides between science, policy, practice, and society.

Real-time Visualizer for Beats and Scratches of Breaking DJ Performances

1.5-hour demo, led by Masatoshi Hamanaka

DEMO 2

MONDAY 3 NOV at 11:30 – 13:00
UCL Marshgate, Black Box

Description:

This work will demonstrate a system that visualizes the improvised beats and scratches of a breakin’ DJ, developed so that deaf viewers can enjoy breakin’ competitions when they are broadcast by the Japanese broadcaster NHK.

Breakin’ is a dance duel in which two dancers compete to music improvised by a DJ. Since breaking was chosen as an official event at the Paris Olympics, many people have begun to watch competitions on television. When a dancer freezes in time with a beat or scratch, it is called “kill the beat”, and when it succeeds the audience gets very excited. However, TV viewers may have difficulty hearing the music over the announcer’s or commentator’s voice, making it hard to judge whether a kill the beat was successful. In addition, some in the audience may be hard of hearing. We therefore developed a visualizer for the beats and scratches of breaking DJ performances.

Software/hardware/experience requirements:

None.

A sign-up sheet will be available at the registration desk.

DEMO LEADER

MASATOSHI HAMANAKA

Center for Advanced Intelligence, RIKEN

Masatoshi Hamanaka received his PhD from the University of Tsukuba, Japan, in 2003. He is currently the leader of the Music Information Intelligence Team at the Center for Advanced Intelligence Project, RIKEN. His research interests include music information technology, biomedical systems, and unmanned aircraft systems. He received the Journal of New Music Research Distinguished Paper Award in 2005.

Echo Fossils: A Fossil-Inspired AI-Generated Ambient Soundscape Interactive System for Emotional Resonance and Emotional Soothing

1.5-hour demo, led by Jingjing Sun

DEMO 3

FRIDAY 7 NOV at 14:00 – 15:30
UCL Marshgate, Room 118

Description:

This study introduces Echo Fossils, a conceptual digital music instrument that combines natural artefacts with AI-generated soundscapes to support emotional well-being and mindfulness. Inspired by the geometry of ammonite fossils, the design embeds generative music into fossil forms, allowing users to engage through tactile gestures like touch, tap, and twist. These interactions activate ambient audio that evokes oceanic and prehistoric environments, creating a multisensory experience rooted in both physical and emotional resonance. The inspiration was drawn from the acoustic characteristics of deep-sea marine life and the sound of ocean waves interacting with coastal surfaces, intended to symbolically represent the ammonite being carried ashore. The approach adopts a gradual transition from minimal soundscapes to complex and layered sonic textures. The system promotes nonlinear exploration through unstructured input and output, encouraging playful and contemplative engagement. By bridging natural forms with generative audio technologies, Echo Fossils enables a co-creative relationship between human and machine, supporting mindfulness through sound and material. This research explores how integrating symbolic natural geometry with adaptive music may deepen users’ connection to nature and self-awareness. The audio part is primarily informed by auditory elements derived from marine environments and organisms.
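The gradual transition from minimal to layered soundscapes could be sketched as a simple engagement model. The gesture names, layer names, and one-layer-per-gesture rule below are placeholders for illustration, not the system's actual design:

```python
# Illustrative sketch: tactile gestures on the fossil form advance an
# "engagement" level, gradually adding soundscape layers (minimal -> complex).
# Gesture names, layer names, and the layering rule are hypothetical.

LAYERS = ["deep-sea drone", "wave wash", "distant calls", "shoreline texture"]

class EchoFossilSketch:
    def __init__(self):
        self.level = 0  # number of currently active layers

    def gesture(self, kind: str) -> list[str]:
        """Touch, tap, or twist each deepen engagement by one layer."""
        if kind in ("touch", "tap", "twist") and self.level < len(LAYERS):
            self.level += 1
        return LAYERS[: self.level]  # the layers now sounding

fossil = EchoFossilSketch()
print(fossil.gesture("touch"))  # ['deep-sea drone']
print(fossil.gesture("twist"))  # ['deep-sea drone', 'wave wash']
```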

Software/hardware/experience requirements:

None.

A sign-up sheet will be available at the registration desk.

DEMO LEADER

JINGJING SUN

Imperial College London

Jingjing is a PhD student in Design Engineering at Imperial College London.

Tonebank: The architecture of African harmonic loop music

1.5-hour demo, led by Barak Schmool

DEMO 4

FRIDAY 7 NOV at 14:00 – 15:30
UCL Marshgate, Black Box

Description:

TONEBANK is a new DAW plug-in inspired by cross-cultural African musical structures. The demo will explain the hidden mathematical and generative architecture found in Sub-Saharan music traditions and show how the plug-in enables producers to explore and compose within this framework. The session includes a live acoustic performance illustrating these concepts in practice.

Across cultures in Sub-Saharan Africa, from Kenya to Ivory Coast and all places south, there is a hidden level of architecture in the looping structures of much music. It is realised most clearly in music for xylophones, harps, lamellophones and related instruments, but it also connects to vocal polyphony and popular music. The substructure of this stratum of African traditional music is rich in symmetry and self-similarity, and it follows a natural, logical generative process of mathematical and sonic beauty. The term “Tonebank”, coined by ethnomusicologist Peter Cooke, describes the surface of this architecture in the Bugandan xylophone music of Southern Uganda. This demo will explain the hidden governing architecture, how it can be defined, and how you have already heard it lending strength to music, and will show “Tonebank”, a DAW plug-in that allows music producers to access, program, manipulate and control improvisation within this environment.
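One familiar surface feature of this repertoire is the interlocking of two looping parts, as in Bugandan amadinda playing, where two players strike their notes in strict alternation to form a single composite stream. The sketch below assumes that two-part interlocking model; the pitch numbers are placeholders, not a transcription of any piece, and the plug-in's actual generative model is not documented here:

```python
# Illustrative sketch: two isochronous looping parts interlock
# note-for-note, producing a composite loop twice as dense as either
# part. Pitch numbers are arbitrary placeholders, not a transcription.

def interlock(part_a: list[int], part_b: list[int]) -> list[int]:
    """Alternate the notes of two equal-length looping parts."""
    assert len(part_a) == len(part_b), "parts must share a cycle length"
    composite = []
    for a, b in zip(part_a, part_b):
        composite.extend([a, b])
    return composite

okunaga  = [1, 3, 5, 2, 4, 1]   # first player's loop (placeholder pitches)
okwawula = [2, 5, 1, 4, 3, 5]   # second player's loop, struck in the gaps

print(interlock(okunaga, okwawula))
# -> [1, 2, 3, 5, 5, 1, 2, 4, 4, 3, 1, 5]
```

Hearing the dense composite, rather than either part alone, is part of the "hidden" quality of this architecture that the demo unpacks.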

Software/hardware/experience requirements:

The demonstrator will provide all necessary software and hardware for attendees to use.

A sign-up sheet will be available at the registration desk.

DEMO LEADER

BARAK SCHMOOL

Royal Academy of Music, London

Barak is the founder of the F-IRE Collective and a long-time member of Django Bates’ big band, Delightful Precipice. He has also played with Bosco De Oliveira, Nana Tsiboe, Robert Mitchell and Hilaire Penda (sax/percussion). His own bands, Timeline (Afro-rhythmic jazz), Meta Meta (Cuban Bata fusion) and Synergy (14-piece big band), present music often integrated with dance. He also leads the drumming group Rhythms of the City (samba bateria), which grew out of years of education work at City University, Trinity Laban, Goldsmiths College, the Royal Academy of Music and numerous community projects. Within F-IRE, Barak has been responsible for developing the education projects, the record label, festival programming and the collective’s international connections to the continent.

Undine Gloves: A Hybrid CV-MIDI Wireless Gestural Controller for Modular Synth Ecosystems

1.5-hour demo, led by Bruno Newman

DEMO 5

FRIDAY 7 NOV at 14:00 – 15:30
UCL Marshgate, 2nd Floor Breakout

Description:

Undine Gloves are wireless, dual-hand gestural controllers for electronic music that provide six degrees of motion sensing with hybrid CV+MIDI output for modular and DAW-based systems. Each glove streams three axes via ESP-NOW to a receiver that generates 12-bit CV (0–5 V) alongside USB-MIDI, achieving measured end-to-end latency of ≈11 ms. On-glove AMOLED displays show real-time bar-graphs of each axis, supporting calibration, drift compensation and audience comprehension. The CMMR demonstration features a labelled video showing gesture-to-parameter mappings, live performance with a minimal Eurorack voice (VCO, filter, reverb), explicit control mappings and audience hands-on participation. While primarily effective for continuous timbral control and constrained-pitch interaction, the system roadmap includes quantisation modes, haptic feedback and integration with biosensors and environmental modules, positioning Undine Gloves as a Eurorack-native interface for embodied performance in modular/DAW-hybrid contexts.
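The hybrid CV+MIDI output stage implies a scaling from each motion axis to both a 12-bit DAC code and a 7-bit controller value. The sketch below assumes a linear mapping of a normalised axis value onto the stated 0–5 V, 12-bit range; it is an illustration under that assumption, not the gloves' firmware:

```python
# Illustrative sketch: map one normalised motion axis (0.0-1.0) to both
# a 12-bit DAC code for 0-5 V CV and a 7-bit MIDI CC value.
# Linear scaling is an assumption; the actual firmware is not shown here.

def axis_to_cv(axis: float) -> tuple[int, float]:
    """Return (12-bit DAC code, resulting CV in volts)."""
    axis = min(max(axis, 0.0), 1.0)   # clamp to the sensor's range
    code = round(axis * 4095)         # 12-bit resolution: 0..4095
    volts = 5.0 * code / 4095         # 0-5 V output range
    return code, volts

def axis_to_cc(axis: float) -> int:
    """Return a 7-bit MIDI continuous-controller value (0..127)."""
    axis = min(max(axis, 0.0), 1.0)
    return round(axis * 127)

code, volts = axis_to_cv(0.5)
print(code, round(volts, 3), axis_to_cc(0.5))  # 2048 2.501 64
```

At 12 bits the CV step size is about 1.2 mV, which is why the same axis can serve both the coarse 7-bit MIDI path and smooth continuous control of a Eurorack voice.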

Software/hardware/experience requirements:

The demonstrator will provide all necessary software and hardware for attendees to use.

A sign-up sheet will be available at the registration desk.

DEMO LEADER

BRUNO NEWMAN

University of Chichester

Bruno Newman is an award-winning composer, orchestrator, arranger, and conductor whose music has been described as both “fresh and powerful” and “intensely emotional.” His compositions span orchestral, choral, chamber, and electronic forms, often blurring the boundaries between classical instrumentation and contemporary sonic environments. Following over two decades as founder and creative lead at London’s Resident Recording Studios (2001–2023), Bruno now works full-time as a composer, musical educator, and researcher. His orchestrations and productions have featured internationally, recognised for their rich harmonic language, dynamic pacing, and the fusion of acoustic and analogue synthesis. Bruno studied at Magdalen College, Oxford, and later earned an MA in Orchestration for Film, Games and Television (Distinction) and a Licentiate of Trinity College London in Composition. He is currently undertaking a PhD in Composition at the University of Chichester, where his research explores musical liminality through gesture-controlled synthesis and modular systems. He teaches composition, orchestration, harmony, and theory at the University of Chichester and privately. He also remains active as a freelance conductor and music journalist. Bruno serves on the Board of Directors for the Incorporated Society of Musicians (ISM) and is a member of the Ivors Academy and the Association of British Choral Directors.
