
Performance at UNI.T - 8 February 2025

Hybrid Rituals — UdK 50 years celebration @Unit Theater

When

8 February 2025

Where

UNI.T - Theater of UdK Berlin, Fasanenstr. 1 B, 10623 Berlin

Format

Performance at the event "An Experimental Ecology. Art Practice in Dialogue with Disciplines at the UdK Berlin"

InKüLe Project leader

Sabine Huschka

Concept

Eman Safavi Bayat, Sabine Huschka, Marcello Lussana, Anastasia Putsykina, Elisabeth Scholz, Franz Siebler, Fang Tsai, Tristan Wheeler

Organization and Script Writing

Anastasia Putsykina

3D Visual Design

Elisabeth Scholz, Fang Tsai

Live generated Visual

Tristan Wheeler

Hardware Design

Franz Siebler

Interactive Sound Design

Marcello Lussana

Live Audio performance

Eman Safavi Bayat

Choreography and Dance

Javier Blanco

Documentation

Beril Ece Güler, Pia Stelzer

Text

Marcello Lussana, Anastasia Putsykina, Fang Tsai

What story do we want to tell?

Performance at the event "An Experimental Ecology. Art Practice in Dialogue with Disciplines at the UdK Berlin"

With the performance Hybrid Rituals, InKüLe was part of the event An Experimental Ecology. Art Practice in Dialogue with Disciplines at UNI.T (Theater of UdK Berlin) as part of UdK Berlin's 50th anniversary celebration. The performance invited the audience to experience the interplay of sound, light and movement, controlled through innovative technologies that respond to proximity and touch. This creates a dynamic dialogue between physical and digital spaces, based on artistic methodologies and interfaces developed by InKüLe. Through audiovisual interactions, projection mapping and motion tracking, the stage becomes a hybrid space where materiality, images and sounds merge. The scenario between performance and installation offers an immersive experience that the audience can actively shape.

Note: The approximately 20-minute performance took place at 19:15 as part of the event running from 16:00 to 21:00.

Educational approach

Hybrid Rituals is an interdisciplinary performance developed by InKüLe. Since 2021, the project has focused on exploring digital innovations in artistic education. This performance is an attempt to offer deeper insights into the valuable knowledge generated through artistic practices and the interfaces we have developed.

Throughout its development, all processes—including technology integration, technical manuals and key insights—were documented. These resources offer a look behind the scenes and the opportunity to utilize them for other artistic formats.

A Miro board was used for idea development, where the team collected interesting references and the core ideas they wanted to capture.

 

Ineffability of the sensations of nearness and touch

Concept Forming

InKüLe considers itself a bridge between digitality, education and artistic practice. The open call for UdK's 50th anniversary was, for us, a valuable opportunity to activate and reflect on the teaching and learning experiences we've gathered over the past three years. Through collaboration with student assistants and experimentation with digital media, we document the entire creative process and share our insights with other interested communities.

It began with a simple question: what kind of performative format could emerge from our past workshops and experiments? And more importantly, what story do we want to tell?

We started by reflecting on the idea of shifting perspectives in artistic practice. For us, an art project is never just a static object meant to be exhibited — it’s also a process of communication, of exchange. Art is not only created to be seen but to be interpreted, misinterpreted and reshaped through the eyes of others. This led us to the classic parable of The Blind Men and the Elephant — a story about how different people, experiencing only a part of something larger, arrive at different, even conflicting understandings of the whole. In the same way, we recognized that misunderstandings or partial perceptions are not failures, but meaningful components of the artistic experience itself. These fragments of interpretation — sometimes misaligned, sometimes beautifully resonant — became central to how we shaped this performance.

This reflection laid the foundation for our piece: a performance that not only shows but listens, that invites multiple viewpoints and embraces their divergence. The work became a living space of negotiation between body, image, sound and audience — constantly remade in each encounter.

In developing the piece, we collaborated closely with Javier Blanco, a performer and choreographer whose practices embody openness to multiplicity and transformation. His work played a key role in shaping the embodied language of this performance. Blanco holds a Master's in Choreography from HZT Berlin and his unique trajectory — from studying physics in Colombia to creating movement-based performances integrating technology — brings a deep interdisciplinary resonance to the project. In *Hybrid Rituals*, his movement composition bridged the tactile and the virtual, grounding the technical systems in human gesture, rhythm and relational presence.

Our awareness of ourselves and others transforms through proximity and touch.
Anastasia Putsykina moves the screen during the rehearsal.

A dialogue through distance, gesture and presence

The performer starts in front of a blank screen.
The performer begins by exploring a variety of movement patterns around the antenna.
The sound-interactive part with three other performers on stage.
The performer introduces a moving screen.
The screen as a co-performer

What do you see?

Structure of the performance

In the first part of the piece, we incorporated and adapted elements of the former project Sentire, a research-based artistic work that uses interactive technology to trigger and modulate sound based on the proximity and touch between two or more people. We created new sound environments specifically designed for this piece and named the resulting system Proximity and Touch. This work is rooted in the philosophical ideas of Maurice Merleau-Ponty, exploring the ineffability of the sensations of nearness and touch, and draws on Edward T. Hall’s theory of proxemics, which categorizes physical space into public, social, personal and intimate zones. Proximity and Touch explores how our perception shifts depending on how someone approaches or touches us — and how these shifts can be made audible and tangible through sound.

By implementing this system in the performance, we invited both performer and audience to become attuned to subtle, bodily perceptions — how our awareness of ourselves and others transforms through proximity and touch. Using wearable sensors and real-time sound modulation, we turned human interaction into a live instrument. As the performer gradually invited audience members onto the stage and handed over the sensor bracelets, new sonic relationships emerged. A dialogue unfolded — not through words, but through distance, gesture and presence — encouraging a deeper awareness of boundaries, vulnerability and connection.

In the second part of the performance, we ventured into the interplay between virtual space and physical movement, bringing together our work in 3D environments with the embodied language of dance. At the center of this exploration is a single object: a mobile screen fitted with a Vive Tracker paired with base stations. The tracker allows for precise spatial mapping, but instead of using it in the conventional way — tied to a performer’s body — we chose to let the screen itself become the performer.

This screen, in its movement, acts as a portal: framing fragmented glimpses of a digital world constructed from 3D scans and imagined landscapes. As it glides across the stage, it reveals not only new visual perspectives but invites a shifting of perception itself. The screen is no longer a passive surface; it becomes a character with agency, a lens through which reality and imagination blur.

At a certain moment, the performer turned and asked the audience a simple but disarming question: “What do you see?” With this gesture, the performance opened up. Meaning was no longer dictated from the stage; it was co-authored in real time. Audience responses were fed back into the system, shaping the projected visuals through an AI image-generation model. What emerged was a shared narrative: one that accepts ambiguity, celebrates multiplicity and reveals how understanding is always partial, always in motion. The boundary between seeing and being seen, between performer and observer, dissolved. In shifting the frame — both literally and metaphorically — we offered the audience a role not just as viewers, but as participants in the unfolding of meaning. The artwork became a conversation, and its final form was never fixed, but always becoming.

Performance & Technology: Rundown

Part 1: ENGAGE

Exploring proximity and touch interactively through sound, light, dance and audience engagement

At the beginning of the performance, a dancer enters the stage and approaches an antenna positioned at the center. This antenna is equipped with a proximity and touch sensor, developed from the earlier Sentire project (sentire.me), which detects nearness and physical contact to modulate sound in real-time using SuperCollider. The performer begins by exploring a variety of movement patterns around the antenna, gradually uncovering the relationship between spatial positioning and the dynamic behavior of the soundscape. These movements are performed by Javier Blanco, whose dual role as choreographer and dancer brings an acute physical sensitivity to the stage. With each gesture, Blanco tests the space like an instrument, composing live sound through nearness, hesitation and touch. The audience is drawn in to witness a dance where an intimate dialogue emerges between body and system—an evolving relationship marked by sensitivity and responsiveness.

As the interaction continues, the performer extends the experience to the audience by inviting an individual from the seating area onto the stage. The performer hands over a wearable bracelet sensor—connected to the antenna system—to the participant, allowing them to co-create sound through their movements. Over time, three more audience members are invited to join, forming a 2:2 grouping. With two people connected by hand, each pair becomes a joined entity that reacts to the sound together. The space transforms into a collaborative field of embodied interaction, where sound responds dynamically to the evolving choreography of distance, touch and interpersonal presence. This segment foregrounds the emotional and perceptual nuances of closeness, inviting both participants and observers to reflect on the sonic implications of human connection.

Part 2: REVEAL

Blurring the boundaries between the virtual and the real through movement, AI-generated visuals and collective imagination

The second part of the performance begins as the sound fades out and the performer introduces a moving screen. This screen, equipped with a motion tracker, functions as a dynamic window into a virtual environment created in Unreal Engine. As the performer rotates and positions the screen, it reveals a 3D digital world, allowing the audience to explore the space from different angles based on the screen’s movement. The dancer animates the screen with a precise, embodied score. His movements transform the object into a co-performer—one that glides, halts and pivots as if guided by intention. The choreography blurs the line between body and equipment, using dance to question what is seen, and how. This virtual world is one of two distinct visual elements in the performance; the other is an AI-generated projection displayed on a larger screen.

Live visuals from the moving screen’s video feed are streamed in real-time to StreamDiffusion—an AI-powered image generation pipeline integrated into TouchDesigner. During the performance, the dancer engages with the audience, asking them what they see. Their responses serve as prompts for StreamDiffusion, dynamically transforming the projected imagery to reflect the audience’s input. This interactive process brings their collective imagination to life, merging virtual exploration with generative creativity.

Simultaneously, a live-generated soundscape composed with the VCV Rack virtual modular synthesizer underscores the entire experience. The audio design reacts to the dancer’s movements and aims to create a slightly dimmed, atmospheric mood that enhances the sense of uncertainty in a world composed of fragmented, incoherent 3D-scanned objects. Together, the visuals and sound build a layered, immersive environment where reality and imagination blur.

The team sets up the technology for the performance.


Tutorials: backstage of the magic

Proximity and Touch - SuperCollider & Sound interface setup

Proximity and Touch is an interactive system that detects nearness and physical contact to modulate sound in real time via SuperCollider. The distance between the performer and the metal object first, and another human participant afterwards, continuously translates into a three-layered sound texture, while direct contact triggers distinct melodic and rhythmic events. As the distance shifts, so does the sonic landscape, turning human connection into an instrument of expression.
(Tutorial written by Marcello Lussana)

Description of the former Sentire system

Sentire is a cable-based body–machine interface that uses sound to represent proximity and touch between people in real time, turning simple movements into an interactive audio experience. In its wired form, two participants each wear a conductive bracelet that is connected by standard audio cables to a low-noise signal generator, which feeds a 0–3 MHz oscillating, low-voltage signal, and to a differential amplifier. One bracelet acts as a transmitter, injecting the signal into the wearer’s body, while the other acts as a receiver, picking up the signal via capacitive coupling through the air and skin. The amplitude of the signal falls off approximately linearly across the intimate (0–0.5 m), personal (0.5–1.2 m) and social (1.2–3 m) proxemic zones and jumps discretely upon physical touch. The amplified 'raw' voltage is fed through a consumer-grade audio interface into SuperCollider, where a digital signal processing (DSP) chain (band-pass and slew filters → envelope follower → adaptive exponential lag → nonlinear-to-linear scaling) yields a stable proximity control signal and detects touch events.

These gestural parameters are then mapped divergently — one-dimensional proximity controls several synthesis parameters simultaneously — to drive algorithmic sound environments ('Sinus' ambient pad and 'Pulse' percussive synth), modulating amplitude, pitch, pulse rate and harmonic content on approach and triggering randomised chord/melodic sequences on contact. As all the signal processing and mapping routines are specified in open SuperCollider code and rely on standard audio input/output (I/O), conductive electrodes and off-the-shelf amplifiers, the system can be recreated by wiring two bracelet electrodes through an audio interface, implementing the published SuperCollider DSP/mapping patch and arranging simple calibration of amplifier gain versus distance. The whole development of the software, including various scientific studies, was undertaken during the research project Soziale Interaktion durch Klang-Feedback – Sentire, funded by the Bundesministerium für Forschung, Technologie und Raumfahrt.
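
For orientation, here is a schematic Python sketch of the control-signal extraction and the divergent (one-to-many) mapping described above. The real implementation is the published SuperCollider patch; the smoothing factor, touch threshold and parameter ranges below are illustrative assumptions, not calibrated values.

import math

def proximity_from_raw(raw_amplitude, state, lag=0.2, touch_threshold=0.9):
    # Envelope follower with exponential lag: smooth the rectified sensor signal
    # into a stable control value (band-pass/slew filtering assumed done upstream).
    state['env'] += (abs(raw_amplitude) - state['env']) * lag
    # Nonlinear-to-linear scaling into a 0..1 proximity value (closer = higher).
    proximity = min(1.0, math.sqrt(state['env']))
    touched = proximity > touch_threshold   # physical contact shows up as a discrete jump
    return proximity, touched

def map_divergently(proximity):
    # One proximity value drives several synthesis parameters at once.
    return {
        'pad_amp':       proximity,               # 'Sinus' pad gets louder on approach
        'pad_pitch_hz':  110 * (1 + proximity),   # pitch rises with closeness
        'pulse_rate_hz': 1 + 7 * proximity,       # 'Pulse' synth speeds up
        'harmonics':     1 + int(8 * proximity),  # richer spectrum when near
    }

# Usage: state = {'env': 0.0}; proximity, touched = proximity_from_raw(0.4, state)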

All the information related to the *Sentire* system, including the diagram of the signal path, is taken from the following publications: Rizzonelli, Marta, Jin Hyun Kim, Pascal Staudt, and Marcello Lussana. “Fostering Social Interaction Through Sound Feedback: Sentire.” Organised Sound 28, no. 1 (2023): 97–109. https://doi.org/10.1017/S1355771822000024.

Staudt, Pascal, Anton Kogge, Marcello Lussana, Marta Rizzonelli, Benjamin Stahl, and Jin Hyun Kim. “A New Sensor Technology for the Sonification of Proximity and Touch in Closed-loop Auditory Interaction”. Zenodo, 2022. https://doi.org/10.5281/zenodo.6798242.

Adaptation for Hybrid Rituals: the sound environment ‘Smokey’

Taking the former Sentire system as our starting point, we created a specific sound environment. We named the system Proximity and Touch and its sound environment 'Smokey', since the performer begins the show by emerging from a cloud of smoke. The proximity-based interaction is elusive and intangible, similar to the nature of actual smoke.

In this sound environment, the distance between the performer and the metal object first, and another human participant afterwards, continuously translates into a three-layered sound texture, while direct contact triggers distinct melodic and rhythmic events. As the distance decreases, a rhythmic layer composed of detuned sawtooth and pulse wave oscillators gradually increases in amplitude, providing a steady, throbbing foundation. At the same time, a harmonic layer generated by dual-rate ring modulation (at one-and-a-half and half the base frequency) is routed through a resonant low-pass filter. The cutoff and resonance parameters of this filter scale linearly with proximity, resulting in increasing tonal warmth. A third noise layer consisting of filtered noise bursts gated by a slow envelope introduces a gentle, breath-like texture and its trigger rate likewise rises as participants draw nearer. These three layers are summed and smoothed by a low-frequency sweep to ensure a seamless transition from distant to intimate ranges.
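
As a compact illustration of the layer mapping just described, the following Python sketch shows how a single normalized proximity value could drive the three layers. The 'Smokey' environment itself is SuperCollider code; all numeric ranges here are assumptions for illustration only.

def smokey_layer_params(proximity):
    # proximity in 0..1, where 1 means the participants are touching.
    return {
        # Rhythmic layer: detuned sawtooth/pulse oscillators fade in with closeness.
        'rhythm_amp': proximity,
        # Harmonic layer: dual-rate ring modulation (1.5x and 0.5x the base frequency)
        # through a resonant low-pass whose cutoff/resonance scale linearly.
        'ringmod_ratios': (1.5, 0.5),
        'lowpass_cutoff_hz': 200 + 4000 * proximity,
        'lowpass_resonance': 0.1 + 0.6 * proximity,
        # Noise layer: breath-like bursts triggered more often as participants draw nearer.
        'noise_trigger_hz': 0.2 + 2.0 * proximity,
    }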

Physical contact initiates a secondary mapping whereby each touch event resets and reshapes the rhythmic and noise envelopes to produce percussive accents. At the same time, a probabilistic selector draws short note sequences from a predefined minor scale (rooted at A = 110 Hz) with the occasional repetition or stutter to produce brief melodic motifs. The strength of the touch influences the depth of the low-frequency modulation applied to the pad, as well as the probability of pitch variation. This incorporates gestural nuance into each contact response.
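
The touch mapping can be sketched in Python as follows. The minor scale rooted at A = 110 Hz and the occasional repetition follow the description above; the motif length, repetition probability and modulation-depth formula are assumptions.

import random

A_MINOR_STEPS = [0, 2, 3, 5, 7, 8, 10]   # semitone degrees of the natural minor scale

def touch_motif(touch_strength, root_hz=110.0, length=4):
    # Each touch event draws a short melodic motif from the scale,
    # with an occasional repetition ("stutter") of the previous note.
    motif, prev = [], None
    for _ in range(length):
        degree = prev if (prev is not None and random.random() < 0.3) else random.choice(A_MINOR_STEPS)
        motif.append(root_hz * 2 ** (degree / 12))   # scale degree -> frequency in Hz
        prev = degree
    # Stronger touch -> deeper low-frequency modulation on the pad.
    lfo_depth = 0.2 + 0.8 * touch_strength
    return motif, lfo_depth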

To delineate perceptual zones, the system uses a custom envelope with breakpoints at 0%, 40%, 80% and 100% of the maximum sensing distance, which correspond to social, personal and intimate proxemic regions. This mapping ensures that volume, brightness and textural density evolve in clearly defined stages. By integrating continuous proximity control with discrete, touch-triggered events, Hybrid Rituals provides performers with a responsive, layered soundscape that transitions from ambient drones when apart to dynamic, melodic interplay upon contact.
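
The zone envelope can be read as a simple piecewise-linear function. A minimal Python sketch, assuming illustrative output levels at the four breakpoints:

def zone_envelope(distance, max_distance):
    # Breakpoints at 0%, 40%, 80% and 100% of the maximum sensing distance,
    # delimiting the social, personal and intimate proxemic regions described above.
    # The level values (1.0 ... 0.0) are illustrative placeholders.
    x = max(0.0, min(1.0, distance / max_distance))
    points = [(0.0, 1.0), (0.4, 0.7), (0.8, 0.3), (1.0, 0.0)]   # (position, level)
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return 0.0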

AR Screen Effect - VIVE tracker linked to Unreal Engine

The second part of the performance features a hybrid AR setup that transforms a simple screen into a portal between physical movement and digital imagination. A Vive tracker mounted on the screen captures its position and rotation, translating real-world gestures into navigation within a 3D environment built in Unreal Engine.
(Tutorial written by Fang Tsai)

The AR screen is built from a monitor on wheels with a VIVE tracker mounted on top. The VIVE Tracker 3.0 works with the VIVE Base Station 2.0. For a tracker to function, you’ll need a minimum of two base stations; to avoid dead areas or to improve tracking, you can extend the setup to up to four base stations.

The Vive tracker and its base station. With this method an HTC Vive headset is not needed, which is good for projects with a limited budget.

Normally Vive trackers and base stations work with a Vive headset from the following list:

  • HTC VIVE
  • VIVE Pro Series
  • VIVE Pro Eye Series
  • VIVE Cosmos Elite

But in our performance case, we only need the base stations to sense the location of the tracker, so we needed to hack the system. The following video tutorial will guide you through the process of setting up a VIVE tracker without the headset; an installation of SteamVR is required:

[Embedded YouTube video]

In this setup, you’ll need a Windows computer with a good graphics card, with Steam installed (to be able to download SteamVR), SteamVR itself, and the Epic Games Launcher (to download Unreal Engine).

Files directory:

  1. Program Files (x86)\Steam\steamapps\common\SteamVR\drivers\null\resources\settings\default.vrsettings
    1. "enable": true
  2. Program Files (x86)\Steam\steamapps\common\SteamVR\resources\settings\default.vrsettings
    1. "requireHmd": false
    2. "forcedDriver": "null"
    3. "activateMultipleDrivers": true

In the next step, we’ll need to install the Epic Games Launcher.

After installing the launcher, download the newest version of Unreal Engine 5 with it (in our case, we work with version 5.3.2). Enable the following plugins in Unreal:

  • VirtualCamera (beta)
  • Live Link
  • LiveLinkXR
  • OpenXR
  • OpenXRViveTracker (Beta)
  • SteamVR (Disabled)

The following video explains the workflow of installing some of these plugins. The list above reflects the updated versions you should install; the video shows slightly older ones. Nevertheless, the video provides a more detailed description of setting up a virtual camera with a smoothing effect in Blueprint and binding it to your Vive tracker with Live Link.

Open Live Link if it is not shown directly when you start Unreal:

Window → Virtual Production → Live Link

[Embedded YouTube video]

To avoid getting stuck with no signal from Live Link, please follow the explanation in this video step by step to start the project properly:

[Embedded YouTube video]

Open the Windows “Run” app and execute the following line, changing the Unreal path if needed:

C:\Program Files\Epic Games\UE_5.3\Engine\Binaries\Win64\UnrealEditor.exe -xrtrackingonly

After launching your project, you can now go back to the previous video to create the binding with the virtual camera. Mount the Vive tracker securely on your monitor (triangle facing front), and parent the virtual camera underneath the vr_cam blueprint object.

This covers how the AR screen is set up; in the next step we will go through the process of creating the virtual scene and the digital objects inside it.

Workflow each time you open the project:

  1. Open Unreal with the “Run” app
  2. Open Live Link and connect the tracker
  3. Open the vr_cam blueprint → LiveLinkComponentController to link the tracker inside Subject Representation
  4. Parent the Cinema Camera under vr_cam
  5. Scale Actor 2 to 5.0
3D World Building - 3D scanning & Unreal world building

The 3D environment is built in Unreal Engine and composed of various 3D scans collected with the Polycam app. These scans are reassembled and recontextualized into surreal, artificial entities. From gathering raw models to sculpting terrain, this part explains the technical setup for the imagined world.
(Tutorial written by Elisabeth Scholz)

Screenshot of the 3D environment built in Unreal Engine.
Quixel Megascans

For terrain textures and a lot of the 3D assets, especially rocks, trees and plants, we used Quixel Megascans. To import them, you need to add the Fab plugin to your Unreal project and import the assets with it.

Megascans are very handy since they come in several levels of detail, and Unreal will automatically switch between them depending on how close they are to the camera.

www.fab.com

FAB To UE5: How to Add Assets to Your Project

(Since 2025 the Megascans can only be added to Unreal via fab, not Quixel Bridge like previously.)

Terrain and Foliage Tools (native to Unreal)
Sculpting Landscape

With the native Unreal Terrain tools you can sculpt and paint different textures onto the terrain.

We mostly used a combination of Noise and Smooth. We added Noise at different scales on top of each other to create a mountainous landscape and flattened the player area in the center.

Painting Textures

A similar noise technique was used to achieve variation in the texturing.

To be able to paint with different textures, you need to create a landscape material and blend between the textures with a “Landscape Layer Blend” node. Then you need to create a “Landscape Layer Info Object” for each of your texture layers and add them to the landscape in landscape paint mode, as seen in the screenshot.

Painting Foliage

dev.epicgames.com/documentation/en-us/unreal-engine/foliage-tool?application_version=4.27

After you’ve imported the 3D assets for your foliage via Fab, you can paint them directly onto the terrain in foliage mode. These can be rocks, bushes, flowers, trees and any other 3D objects. In foliage mode, you need to add each object as a foliage type to be able to use it with the different brushes.

Lighting

By rotating a given directional light in the scene, you can influence the position of the sun on the skybox and change the time of day. This is also native to Unreal.

dev.epicgames.com/documentation/en-us/unreal-engine/directional-lights-in-unreal-engine

3D Scans

For the world building in Unreal, we are using 3D scans that students and teams made with the app Polycam; the material is licensed under Creative Commons Attribution 4.0: Deed - Attribution 4.0 International - Creative Commons

InKüLe Polycam account: @inkuele3000 | Polycam

Polycam tutorial: franz siebler polycam guide

We made modified adaptations of individual 3D scans: we decreased the face count and removed excess scanned surfaces with the open-source software Blender, then relocated them into artificial object collages/installations.

Decimate Modifier - Blender 4.4 Manual
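
As a minimal Blender Python sketch of that cleanup step, using the Decimate modifier linked above (it assumes the imported Polycam scan is the active object; the decimation ratio is an arbitrary example):

import bpy

# Reduce the face count of the active object (the imported Polycam scan)
# with a Decimate modifier, then apply it. Removing excess scanned surfaces
# is still done by hand in Edit Mode.
obj = bpy.context.active_object
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = 0.2   # keep roughly 20% of the faces (illustrative value)
bpy.ops.object.modifier_apply(modifier=mod.name)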

Blueprints Animation

We added some additional point lights to the scene, connected to blueprints to animate them based on time.

Because of the architecture of the program (mostly the usage of Vive trackers), we didn’t run the scenes in Play Mode but only in the viewport. This is why, in the Event Graph, we didn’t use “Event BeginPlay” or “Event Tick” like you normally would, but “On Live Link Updated”. This event came with the Live Link tracking and was also used in this setup.

This blueprint will animate the position of the Actor on the z axis based on time.

It is possible to expose variables in the blueprint to make them editable from the Scene View.
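
The blueprint itself is visual scripting, but the update it performs on each “On Live Link Updated” call boils down to something like the following Python pseudo-logic. The sine curve, amplitude and speed are assumptions standing in for the exposed variables; the actual blueprint may use a different curve.

import math

def animate_z(base_z, elapsed_seconds, amplitude=50.0, speed=0.5):
    # Offset the actor's z position as a function of time; 'amplitude' and
    # 'speed' play the role of the variables exposed to the Scene View.
    return base_z + amplitude * math.sin(2 * math.pi * speed * elapsed_seconds)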

Live Generated Video - StreamDiffusion in TouchDesigner

The live camera feed from the virtual world is routed into StreamDiffusion, an AI-powered image generation model, where it is transformed in real time based on audience-generated text prompts. Using a custom component by DotSimulate, StreamDiffusion is embedded into the TouchDesigner environment to output a live-generated video.
(Tutorial written by Tristan Wheeler)

StreamDiffusion is a pipeline-level solution designed for real-time interactive generation, leveraging models like Stable Diffusion. Integrating it with TouchDesigner allows for dynamic AI-generated visuals that respond to live inputs, enhancing interactive performances.

Prerequisites

Before beginning the integration, ensure your system meets the following requirements:

- Operating System: Windows 10 or later. (Not compatible with Mac)
- GPU: NVIDIA GPU with CUDA compatibility
- TouchDesigner Version: 2023 or later.

Integration with TouchDesigner

This patch uses a private module developed by DotSimulate in order to use StreamDiffusion within TouchDesigner. It can be accessed by signing up to their Patreon for a small fee:

Get more from DotSimulate on Patreon

While it is possible to manually connect StreamDiffusion and TouchDesigner, this process requires familiarity with Python, REST APIs/WebSockets and handling real-time image generation models. Setting up StreamDiffusion manually involves configuring a local server, managing dependencies like CUDA, ensuring GPU compatibility and handling real-time image streams via HTTP, WebSockets or NDI. For those without sufficient experience in Python and AI models, we recommend using a third-party patch, such as the one provided by DotSimulate, to simplify the setup and ensure smooth integration. The patch provided below includes an easy-to-use GUI for installing all libraries and configuring TouchDesigner to work with PyTorch and CUDA.

For a manual setup follow the instructions here:

github.com/cumulo-autumn/StreamDiffusion

Steps

Open this link to download our (modified) performance patch:
cloud.udk-berlin.de/s/nPE65Dr62q2CEn2

 

  1. Open the patch, find the red box and replace the two “null TOP” operators with the .tox you downloaded from DotSimulate as input and output.
  2. Install StreamDiffusion from the .tox: open “StreamDiffusionTD”, go to the “Install” tab and follow the installation steps.
  3. To link the controlling mechanism, go to the Settings 1 tab and enter in the Prompt Blocks (add prompt 0 & 1 if you don’t have them) the expressions op('PROMPT_WEIGHT_1')['prompt_fader'] and op('PROMPT_WEIGHT_2')['prompt_fader'] in the Weight parameter, and add op('STEP_1')['step_1'] in the Tindex block (see the screenshot below and the sketch after this list).
  4. Once everything is set, press Start Stream, which should open a terminal; the component should then be working.
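
As referenced in step 3, here is a minimal sketch of how the channel values behind those expressions can be driven from TouchDesigner’s Python layer, for example from a CHOP Execute DAT attached to the (Lag-smoothed) MIDI CHOP. It assumes PROMPT_WEIGHT_1, PROMPT_WEIGHT_2 and STEP_1 are Constant CHOPs whose single channels are named to match the expressions above, and that the incoming channels are called 'prompt_fader' and 'step_fader'; adapt the names to your own network.

# CHOP Execute DAT callback: push the incoming fader values into the CHOPs
# referenced by op('PROMPT_WEIGHT_1')['prompt_fader'] etc.
def onValueChange(channel, sampleIndex, val, prev):
    if channel.name == 'prompt_fader':
        # Crossfade between the two prompts: the weights always sum to 1.
        op('PROMPT_WEIGHT_1').par.value0 = val
        op('PROMPT_WEIGHT_2').par.value0 = 1.0 - val
    elif channel.name == 'step_fader':
        # Pan the diffusion step count between 0 and 50 (see the section below).
        op('STEP_1').par.value0 = int(val * 50)
    return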

Why TouchDesigner / Structural Overview

TouchDesigner provides an easy-to-configure interface for manipulating and processing our video feed amongst various data streams. In this project we use the output from the camera in the virtual world as a video input, via a Blackmagic ATEM Mini, into our diffusion model, which gives a compositional structure to the generated images. We have configured a MIDI controller, which allows for more fluid manipulation of parameters compared to using the mouse and keyboard.

There are only two types of parameters we modulate live connected to the diffusion model itself: step sequences and prompts. The step sequencer controls how many levels of processing the diffusion model will execute. During the course of the performance one can see the visuals fluctuate between a direct resemblance to the input visuals and no structural resemblance whatsoever. This is a result of panning between 0 and 50 steps of diffusion.

Prompts are a series of characters that can be input into the model to influence the output image. These can range from descriptive phrases to abstract concepts, allowing the audience or performer to experiment with different stylistic and thematic variations. The system dynamically parses and integrates these text inputs, modifying the generative process in real time by guiding the image generation model towards a certain descriptor. We configured our TouchDesigner patch to interpolate between the two prompts, controlling the ‘weight’ of how much each prompt influences the output generation of the model. By doing this we can slowly fade from the influence of the previous input from the audience to the current input i.e. the visuals fade from resembling ‘seahorses’ towards ‘skyscrapers’.

We also use TouchDesigner to “clean up” some of the inputs and outputs of these signals. We used a chain of Transform, Blur and Feedback TOPs to convert the vertically oriented video feed into something that would work well with the back projections. To make the manual adjustments on the MIDI controller appear less abrupt, we added Lag CHOPs to the MIDI input to smooth out the transitions in values.

Live Soundscapes - VCV Rack generative audio performance

The generative soundscape is composed in VCV Rack, forming a textured auditory layer that mirrors the fragility and uncertainty of the imagined world. Simulating analog modular synthesis, the patch weaves together fragmented sonic elements into an evolving atmosphere that resonates with the visual chaos of the 3D-scanned environment, grounding the performance in a mood of subtle tension and disorientation.
(Tutorial written by Eman Safavi Bayat)

Hybrid Rituals: REVEAL (VCV Rack Generative Performance System)

In this part of the performance, the idea of uncertainty is explored through a generative soundscape crafted in VCV Rack. The patch serves as a sonic reflection of an ever-changing world—a mood subtly dimmed and laden with echoes of incoherent, 3D-scanned fragments of reality. Using VCV’s simulation of analog sound effects, various audio elements are routed to build a multilayered experience that is neither conventional music nor a danceable beat, but rather an evolving atmospheric environment. This soundscape sets the stage for the performance, inviting the audience to feel the tension and fragility of contemporary times through sound.

This documentation details the inner workings of the patch designed exclusively by Eman for the “Hybrid Rituals” performance. It explains how multiple signal threads interweave to create a dynamic, evolving soundscape that supports the performance narrative. Each section outlines the role of the module or thread and its relationship to the live control interface.

Screenshot of VCV Rack; the actual patch file can be found under “Additional Resources”.
I. System Architecture Overview

The patch is organized into six parallel signal threads. Each thread is dedicated to a unique sound-generating process and is later mixed together via a central “MindMeld” mixer. The overall signal is then processed by a master DJ-style filter and shared reverb, ensuring a cohesive sonic output.

Central Mixer & Effects:

  • MindMeld Mixer: Consolidates the six signal threads, ensuring balance and interaction between layers.
  • Shared Reverb Send/Return: Provides a unified spatial effect across threads.
  • Master DJ Filter: A dual-mode filter (highpass/lowpass) that shapes the overall timbre, used dynamically during the performance.
II. Detailed Signal Threads

1. Drone Thread

Purpose: Establishes the foundational ambient texture that evolves over the course of the performance.

  • Sound Generation:
    • Uses three VCOs (Wavetable VCO and Modern VCO from Surge XT, and Basal from Vult) to produce rich harmonic content.
    • Stochastic Modulation: The Caudal module from Vult introduces randomness into oscillator parameters, ensuring the drone never sounds static.
  • Processing Chain:
    • Subtractive Synthesis: A Surge VCF shapes the harmonic spectrum by filtering out unwanted frequencies.
    • Granular Texture: Grayscale Supercell adds granular effects that break up the signal into micro textures.
    • Additional Effects: Phaser and distortion modules introduce movement and edge to the sound.
    • Dynamic Control: Comp II by Squinky Labs provides sidechain compression (ducked by a kick drum signal), allowing the drone to interact rhythmically with percussive elements.
  • Live Interaction:
    • A fader on the MIDI controller (Korg nanoKONTROL2) controls the amount of high-end noise injected at the performance start, setting the sonic atmosphere.

2. Chord Swarm Thread

Purpose: Generates harmonized, clock-synced chordal textures that evolve over time.

  • Sound Generation:
    • Progress Chord Sequencer: Drives the harmonic structure.
    • FM-OP Module: Adds detuned harmonics to enrich the chord texture.
  • Processing Chain:
    • Surge VCF: Shapes the chords through filtering.
    • Dynamic Shaping: ADSR envelopes modulate the chords’ attack and decay, enhancing expressiveness.
    • Delay Processing: Chronoblob2 by Alright Devices provides delay, modulated by Caudal, introducing time-based variations and echoes.
  • Live Interaction:
    • The density of chords is adjusted in real time using a dedicated fader, allowing for a gradual build-up or thinning of the texture.

3. Sequence/Glitch Thread

Purpose: Introduces randomized pluck sequences and glitch textures for added rhythmic and timbral interest.

  • Sound Generation & Logic:
    • Clock-Division Triggers: Provide rhythmic timing for the sequence.
    • 8→1 Switch & Bernoulli Gate: Randomly selects inputs and governs the probability of note generation, creating a stochastic effect.
  • Processing Chain:
    • Quantization: Ensures that the generated notes align rhythmically.
    • Even VCO: Produces a pluck-like timbre.
    • Glitch Effects: Debriatus (bitcrushing) and Vult Bleak (filtering) are used to create sonic irregularities.
    • Delay Modulation: The delay time is modulated by the Caudal module, adding unpredictability.
  • Live Interaction:
    • A rotary knob “freezes” the sequence (setting Bernoulli probability to 0% for a sustained note) or randomizes it, letting the performer choose between consistency and unpredictability.
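
Outside VCV Rack, the probability logic of this thread can be summarized in a few lines of Python; this is a conceptual sketch of the 8→1 switch and Bernoulli gate behaviour, not the patch itself.

import random

def sequence_step(candidate_pitches, bernoulli_probability, last_note):
    # One clock tick: the Bernoulli gate decides whether a new note is generated.
    # 0% "freezes" the sequence on the last note; 100% is fully randomized.
    if random.random() < bernoulli_probability:
        return random.choice(candidate_pitches)   # the 8->1 switch picks a random input
    return last_note                              # gate closed: sustain the held note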

4. Drum Thread

Purpose: Provides the rhythmic backbone with both standard drum hits and textured hi-hat elements.

  • Sound Generation:
    • GateSeq64 Triggers: Control the timing for drum events.
    • Core Drum Modules: Generate kick and snare sounds.
  • Processing Chain:
    • Distortion and Compression: Apply character and punch to the drum hits.
    • Hi-Hat Processing: The hi-hat signal is delayed, bitcrushed and filtered to create a diffused, high-frequency “wash” effect.
  • Live Interaction:
    • A rotary knob adjusts hi-hat decay, allowing the performer to switch between a tight closed hi-hat sound and a more open, expansive feel.
    • Drums are introduced with a filtered sound that gradually becomes unfiltered as the performance intensifies.

5. Bell Cue Thread

Purpose: Acts as an auditory cue for performers, using a distinct FM bell sound.

  • Sound Generation:
    • Wavetable VCO: Generates the bell tone, modulated by a Wavetable LFO for FM-like characteristics.
    • Octave Shifting: Nopskate module shifts the pitch, ensuring the bell stands out in the mix.
  • Processing Chain:
    • Triggering: A manual gate (via nanoKONTROL 2 button) initiates the sound.
    • Dynamic Enveloping: ADSR shapes the bell’s onset and decay, while delay/chorus effects add spatial depth.
  • Live Interaction:
    • The performer triggers the bell cue to signal transitions or key moments within the performance.

6. Sampler Thread

Purpose: Integrates pre-recorded narration and other audio samples, synchronized to the performance timeline.

  • Sound Generation & Control:
    • Sampler Module: Plays back narration and other sound bites.
    • Synchronization: The play/stop function is mapped to a NanoKontrol button, ensuring precise timing during the performance.
III. NanoKontrol2 Mapping & Performance Controls

We use the Korg NanoKontrol2 MIDI controller for easier real-time performance manipulation. Below is a detailed mapping of controls:

  • Faders 1–5: Control volumes for Drone, Chord Swarm, Sequence/Glitch, Drum and Hi-Hat Texture.
  • Rotary Knob 1: Adjusts the Bernoulli Gate probability for the Sequence/Glitch thread (0% = sustained note, 100% = randomized sequence).
  • Rotary Knob 2: Modulates delay glitch intensity by affecting the Caudal-to-delay time relationship.
  • Rotary Knobs 3 & 4: Fine-tune hi-hat decay, balancing between closed and open textures.
  • Rotary Knob 5: Controls the master DJ filter, sweeping from highpass (right) to lowpass (left), neutral when centered.
  • Buttons 1 & 2: Trigger sampler play/stop and bell cue functions.
  • Button 3: Starts and stops the master timer for performance synchronization.
IV. Performance Workflow & Structure

The performance is structured as a 10-minute improvisational set, divided into clear phases that allow for both gradual evolution and dramatic transitions:

Intro (0:00–2:00):

  • Drone Focus: Begins with the Drone thread, emphasizing high-end noise and gradual introduction of harmonic elements.
  • Chord Entry: The Chord Swarm thread is softly introduced via fader control.

Sequence Entry (2:00–4:00):

  • Activation of Pluck Sequence: The Sequence/Glitch thread is engaged, with the rotary knob used to vary the randomness and introduce glitch effects.

Drum Build (4:00–6:00):

  • Rhythmic Foundation: Filtered drum sounds are layered in, with real-time adjustments to hi-hat decay building tension.
  • Cue Transition: A bell cue signals a shift in the performance dynamic.

Climax (6:00–8:00):

  • Full Drum Presence: The drums come in unfiltered, increasing the energy.
  • Sampler Integration: Narration or sampled content is introduced, with the DJ filter sweeping to create dramatic tonal shifts.

Outro (8:00–10:00):

  • Sequence Freeze: The Sequence/Glitch thread is “frozen” via the Bernoulli knob to hold a final sonic state.
  • Textural Dissolution: The DJ filter is swept towards highpass, gradually dissolving the sonic textures and closing the performance.
V. Key Sound Design Concepts

Generative Design:

  • The system uses stochastic modulation (e.g., through Vult Caudal and probability gates) to ensure an ever-evolving sonic landscape that never repeats exactly.

Performance Mixing:

  • Real-time control over multiple layers (via faders and knobs) allows the performer to “orchestrate” the overall sound rather than merely adjusting volume levels.

Glitch as Texture:

  • Rather than being a mere effect, glitch elements (achieved through delay modulation and bitcrushing) are integrated as a key sound design parameter, adding an unpredictable and engaging layer to the performance.

Integration & Interaction:

  • The modular setup encourages interaction not only among sound sources but also between the performer and the soundscape. Each control input is designed to affect both individual threads and the overall mix, creating a dialogue between technology and live performance.
VI. Implementation Notes

Module Selection:

  • The choice of modules (Surge series, Vult series, Grayscale Supercell, etc.) reflects an intention to blend digital precision with analog warmth, key for creating immersive ambient textures.

Live Flexibility:

  • The mapping strategy using the NanoKontrol2 ensures that the performer can seamlessly switch between controlled transitions and spontaneous experimentation, maintaining both structure and improvisatory freedom.

System Reliability:

  • The design emphasizes both robustness (for live performance) and flexibility, with clear signal paths and defined interactions that can be quickly adapted in response to live conditions.
Additional Resources:

Performance Video of the draft:

[Embedded YouTube video]
