Tag: neural audio synthesis

  • Black Latents | Latent Diffusion

    Black Latents | Latent Diffusion is a Gradio application that lets you spawn audio items from Black Latents, a RAVE V2 VAE trained on the Black Plastics series, using RAVE-Latent Diffusion models.

    A demo version is available on Hugging Face. The full application can be downloaded from GitHub for local inference.


    Latent Diffusion with RAVE

    The RAVE architecture makes timbre transfer on audio input possible, but you can also use its decoder as a neural audio synthesizer to generate audio, e.g. in Latent Jamming.

    Another approach to spawning new audio information with RAVE has been provided by Moisés Horta Valenzuela (aka 𝔥𝔢𝔵𝔬𝔯𝔠𝔦𝔰𝔪𝔬𝔰) with his RAVE-Latent Diffusion model.

    Latent diffusion models in general are quite efficient since they operate on highly compressed representations of the original data. The key idea of RAVE-Latent Diffusion is to capture the structural coherence of audio by encoding (longer) audio sequences into their latent representations using a RAVE encoder and then training a denoising diffusion model on these embeddings. The trained model can unconditionally generate new, similar sequences of the same length, which can be decoded back into the audio domain using the RAVE model’s decoder.
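    As a rough Python sketch of this pipeline (assuming a TorchScript-exported RAVE model; the file name, buffer size, and the diffusion model are placeholders, not the actual RAVE-Latent Diffusion code):

    ```python
    import torch

    # Load a TorchScript-exported RAVE model (placeholder file name).
    rave = torch.jit.load("blacklatents.ts").eval()

    with torch.no_grad():
        # 1) Encode a longer audio sequence into its latent representation.
        audio = torch.zeros(1, 1, 2048 * 256)   # placeholder buffer, ~12 s at 44.1 kHz
        z = rave.encode(audio)                   # shape: (batch, latent_dims, frames)

        # 2) A denoising diffusion model (placeholder) is trained on such latent
        #    sequences and later turns pure noise into a new, similar sequence:
        z_new = torch.randn_like(z)
        # for t in reversed(range(num_steps)):
        #     z_new = diffusion_model.denoise(z_new, t)

        # 3) Decode the generated latents back into the audio domain.
        new_audio = rave.decode(z_new)
    ```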

    The original package by 𝔥𝔢𝔵𝔬𝔯𝔠𝔦𝔰𝔪𝔬𝔰 supports latent embedding lengths down to a window size of 2048, which translates to about 95 seconds of audio at 44.1 kHz, suitable for compositional-level information.

    In my fork RAVE-Latent Diffusion (Flex’ed), I extended the code to support a minimum window size of 256, which equals about 12 seconds at 44.1 kHz, and implemented a few other improvements and additional training options.
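    The relation between window size and duration is simple arithmetic; a minimal check, assuming the typical RAVE V2 compression ratio of 2048 audio samples per latent frame (an assumption that matches the durations above):

    ```python
    # Duration covered by a diffusion context window, assuming 2048 audio samples
    # per latent frame (typical RAVE V2 compression ratio) at 44.1 kHz.
    def window_seconds(window_size: int, compression: int = 2048, sr: int = 44100) -> float:
        return window_size * compression / sr

    print(window_seconds(2048))  # ~95.1 s (original minimum)
    print(window_seconds(256))   # ~11.9 s (Flex'ed minimum)
    ```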

    Black Latents: turning Black Plastics into a RAVE model

    The motivation to train Black Latents was to extract dominant characteristics from my Black Plastics series, a compilation of 7 EPs with a total of 28 audio tracks in the genres Experimental Techno, Breakbeats, and Drum & Bass that I released between 2012 and 2020.

    I trained the model using the RAVE V2 architecture with an increased capacity of 128 and submitted it to the RAVE model challenge 2025 hosted by IRCAM, where it was publicly voted into first place. The model is available on the Forum IRCAM website.

    Using Black Latents | Latent Diffusion to spawn audio

    For Black Latents | Latent Diffusion, I trained diffusion models in 7 different configurations and context window lengths, once again using the audio material from the Black Plastics series as the base dataset together with the Black Latents VAE.

    The application itself is a simple Gradio interface to the generate script of RAVE-Latent Diffusion (Flex’ed). In the UI, you can choose from the different diffusion models, define seeds, and set additional parameters like temperature or latent normalization before generating audio items through the Black Latents model decoder.
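    A minimal Gradio sketch of such an interface (the generate_audio wrapper, its parameters, and the model names are placeholders, not the actual generate script):

    ```python
    import gradio as gr
    import numpy as np

    def generate_audio(model_name, seed, temperature, normalize_latents):
        # Placeholder: here the generate script would sample latents with the chosen
        # diffusion model and decode them through the Black Latents RAVE decoder.
        sr = 44100
        audio = np.zeros(sr, dtype=np.float32)  # 1 s of silence as a stand-in
        return sr, audio

    demo = gr.Interface(
        fn=generate_audio,
        inputs=[
            gr.Dropdown(["model_a", "model_b"], label="Diffusion model"),
            gr.Number(value=42, label="Seed"),
            gr.Slider(0.1, 2.0, value=1.0, label="Temperature"),
            gr.Checkbox(label="Latent normalization"),
        ],
        outputs=gr.Audio(label="Generated audio"),
    )
    demo.launch()
    ```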

    Depending on the diffusion model and parameter selection, the resulting output ranges from stumbling rhythmic microstructures to items that resemble the macro-scale structure of the base training data.

    Other examples

    I published earlier experiments with RAVE-Latent Diffusion and a different set of RAVE models in the form of two albums:

    MARTSMÆN – RLDG_0da02c80cb [datamarts/2KOMMA4]: Bandcamp / Nina

    MARTSM^N – RLDG_835770db1c [datamarts/2KOMMA3]: Bandcamp / Nina

  • Reykjavík Sunburn

    This is the most recent framework; it builds on previously proven techniques of Latent Jamming, solidified into abstractions for easy setup. It’s also the first framework to use more than two models in parallel.

    In Reykjavík Sunburn, four different neural audio models are used: two RAVE models and two vschaos2 models.

    • Black Latents: a RAVE V2 model trained on the Black Plastics series – 28 tracks/ 3h of drum- and percussion-heavy electronic music. The resulting model generates mainly percussive output with rough textures and a generally high grittiness. In the composition, this model is used as a leading asset to generate the rhythmic baseline and general percussive structure. 
    • Nobsparse: a RAVE V2 model trained on a hybrid dataset of Tech House and sonically sparse Drum & Bass (about 4h of audio material). The model’s characteristics are relatively clear, sterile, and lightweight sounds, harmonic textures, and an isolated but dominant low end. Depending on how the process develops during the recording session, this model serves as a secondary texture generator but can also take over Black Latents’ role in the composition.
    • VSC2_Nobsparse: this vschaos2 model has been trained on the same dataset as the Nobsparse RAVE model. In the composition, this model is used to generate interchanging pads and drone-like noise textures for transitions or simply to enrich an ongoing section of the recording with a harmonic layer.
    • VSC2_Martha2023: being the only model trained on voice data, courtesy of my daughter, this model adds a layer of rhythmical, pseudo-vocal sound on top of the otherwise “instrumental” generations of the three other models.

    Together, these four models are responsible for 100% of the audio information created. No additional synthesizing techniques or sound sources have been used. 

    Output examples

    Reykjavík Sunburn (Take 1 Redux) received recognition at the AI Song Contest 2025, where it was selected for the finalist shortlist of 10 out of more than 150 submissions.

    A release with multiple recorded versions from the framework is currently in the making.

  • Latent Russando

    Latent Russando is a semi-generative compositional framework written in Pure Data dedicated to exploring musical qualities in working with generative neural nets for audio, conceived both as hybrid instruments and as autonomous actors.

    Practices from generative music and algorithmic composition are used as mediators between human performer and the generative abilities of the neural nets, displacing and circumventing concepts of authorship and genius by empowering multiple independent agents in an improvisation-driven, co-creative process.

    The work is based on Russando. Serenade for six German Sirens, op. 43 by Hallgrímur Vilhjálmsson, a heteronym of conceptual artist Georg Joachim Schmitt. The original piece was composed in 2008 and premiered in the context of the (also fictional) art exhibition cologne contemporary — international art biennale 08 at Asbach-Uralt Werke in Rüdesheim. It is a three-part composition of approx. 33 minutes in length, in which six German emergency and police sirens are alternately sounded together or alone. In consultation with the creator, I trained models based on two neural net architectures (RAVE, vschaos2, both courtesy of IRCAM, Paris) on the original piece.

    Output examples

    For Soundcinema Düsseldorf 2025, I expanded the Latent Russando framework into a multichannel version employing 8 models with their outputs distributed over 7 channels. At the festival, I presented Nebuloso, which stands as an example of a potentially infinite number of musical works that can be generated with the framework; it is the output of a joint creative act of human and artificial agents. This reflects both the conceptual genesis of Russando, with its distributed and fictionalized authorship, and the interplay of control and autonomy in a process that deflects claims of unique authorship and notions of solitary genius.

  • Latent Jamming

    Latent Jamming is an improvisation practice with real-time capable neural audio models that embraces concepts of algorithmic and/or generative composition techniques. It has been one of my main practical research topics since 2023.


    Motivation and background

    Coming from a traditional electronic music background (Drum & Bass, Breaks, Electronica), where deterministically driven production routines in a technologically homogeneous setup are dominant, I have centered my practical research in recent years on two main questions:

    1. How can techniques of generative music and algorithmic composition be injected into electronic music genres that are deterministically driven? (see e.g. Fibonacci Jungle, Risset Rhythms)
    2. How can generative AI be integrated into creative processes in electronic music production holistically, not only as another new tool out of many in existing production routines?

    To home in on these questions, in particular the second one, I train neural nets on the musical material I’ve written and produced in the past and work with the trained models in real-time settings. I apply compositional concepts from generative music and algorithmic composition as mediators between human performer and the generative abilities of the neural nets, displacing and circumventing concepts of authorship and genius by empowering multiple independent agents in an improvisation-driven, co-creative process that leads to musical output, but not necessarily to a fixed recording artifact.

    Sharing agency

    With this approach, I aim to amplify one key quality of neural audio models, which is their unexpected behaviour when generating output. This quality sets the models apart from a perception of conventional musical instruments, where control over the produced sound is usually the objective. My goal when making music powered by neural nets is to share the agency by finding the right equilibrium between establishing control and embracing the lack thereof.

    Creative considerations

    Using deep learning algorithms to interpret and extract key characteristics of particular audio data subsets, my creative intent is an expansion of these characteristics into something genuinely new. 

    Finding a novel approach to music production

    In contrast to similar AI-augmented practices in contemporary music production, where models are often used as a material source for samples or sound items in otherwise conventional production routines, my interest in neural audio synthesis is in generating (electronic) music in a real-time compositional dialogue with single models. Consequently, my training data consists explicitly of self-contained assets (i.e. full tracks), not separated stems of one instrument, synthesizer, or other homogeneous sound samples.

    “back in our day we didn’t have ai we used REAL synthesizers... to sound like drums” – dadabots

    The object of this approach is my own music, written in past years under a traditional electronic music production paradigm. Preselection and categorization form a first creative act in the process: material with a particular sonic character (e.g. sparse, dense, or attributed to a particular genre), from a particular working phase, or from a dedicated output selection (e.g. an album) is separated into various datasets.

    Building hybrid instruments

    Using the open-source audio-to-audio neural network architectures RAVE, vschaos2, MSPrior, and AFTER, I trained various models on these curated selections of my earlier works. Capable of reproducing and respawning the sound characteristics they learned during training, these models become hybrids of instruments and sound machines that partly act autonomously. (For example, RAVE models are known to randomly produce sound on no input/silence when the training data didn’t explicitly contain silence as information.)

    Learning to navigate in latent space

    The compositional setup used to make music with the models requires an experimental approach that embraces this understanding of them both as instruments of a new type and autonomous actors. Interaction with the models happens in latent space, where conventional compositional techniques cannot be applied. Similarities in behaviour between different models hardly exist; each model requires exploration and empirical observation. Therefore, the compositional setup is mainly a boilerplate template combining different techniques that have proven successful in similar use cases, while putting it into action resembles learning an instrument from scratch. 

    Embracing new qualities

    Results of working with this approach can produce high similarities with the musical characteristics of the original material; however, the amalgamation of sounds as performed by the models as well as their unexpected behaviour generally results in a new quality of output that challenges both performer and listener. As such, making music with neural audio models in real-time settings bears a paradigm shift in electronic music production.

    Technical setup

    For the compositional process, I use Pure Data (PD), where RAVE and vschaos2 (as well as MSPrior and AFTER) models can be employed in real time using the nn~ object. In PD, I programmed a set of custom abstractions that allow building frameworks for semi-generative or algorithmic use cases and are tailored explicitly to these model types.

    With these abstractions, I can intervene directly in the latent space of the models, overriding their intended use case of timbre transfer on audio material and injecting latent embedding mimicry instead. This allows me to guide the models’ outputs, comparable to tuning – and to some extent playing – an instrument.
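    Outside of Pure Data, the underlying idea can be sketched in Python (in the actual setup this happens through the nn~ object and custom PD abstractions; the model file name, latent dimensionality, and the LFO-style trajectories are assumptions):

    ```python
    import torch

    rave = torch.jit.load("some_rave_model.ts").eval()   # placeholder model file

    with torch.no_grad():
        # Instead of encoding incoming audio (timbre transfer), synthesize the latent
        # trajectory directly: slow sine "LFOs" per latent dimension, mimicking the
        # overall shape of a real latent embedding.
        n_latents, n_frames = 8, 512                      # assumed latent size
        t = torch.linspace(0.0, 1.0, n_frames)
        z = torch.stack([torch.sin(2 * torch.pi * (i + 1) * t) for i in range(n_latents)])
        z = z.unsqueeze(0) * 0.5                          # add batch dim, scale as "control"

        audio = rave.decode(z)                            # the decoder acts as the instrument
    ```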

    Compositional and performative considerations

    Tuning and setting control thresholds

    The compositional process usually includes a lot of exploratory work until a constellation of parameters is found that leads to musically coherent and/or novel results. Once a parameter constellation (or tuning) for the models has been established, the amount of human influence on a compositional level is determined. This includes defining the range of control-level variation the models can use to create their output. It also implies leveling out the amount of perceivable rhythmic structure or repetition.

    Finding pieces

    While performing, the models’ behaviour can be stabilized, but the actual output is usually not exactly repeatable a second time. For that reason, I call this musical practice Latent Jamming, referring to a co-creative situation in which human and artificial agents interact in an improvisational setting. In terms of compositional or performative practice, the process is therefore hardly deterministic but exploratory: less writing a piece than finding a piece.

    Ethical considerations

    Selecting data

    From an ethical point of view, neural audio model training – like basically all AI model training – requires consideration of dataset provenance, in particular regarding questions of authorship and licensing. Using only my own musical material, excluding remixes and collaborations with other artists, is not only an aesthetically driven decision but also a practical one, since I’m not touching the rights of any other creator.

    Considering bias

    While bias is generally considered problematic in LLMs, it can be highly desirable when training neural audio models; in my use case, it didn’t require any additional consideration.

    Compensating environmental footprint

    Training AI models is broadly known to come at a significant environmental cost. Training RAVE and vschaos2 neural audio models in cloud data centers appears to be comparatively cheap (e.g. 170 GPU hours of training a RAVE model on Kaggle equal around 24.48 kg CO₂, while 12 GPU hours for vschaos2 models equal around 1.73 kg CO₂; these numbers are rough estimations based on an hourly power consumption of 300 W for a Tesla P100 GPU plus infrastructure and a global electricity carbon intensity of 0.48 kg CO₂/kWh).
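    For transparency, the estimate above boils down to a one-line calculation; a minimal sketch, using the assumed 300 W draw and 0.48 kg CO₂/kWh intensity stated above:

    ```python
    # GPU hours x assumed 300 W draw (Tesla P100 + infrastructure) x assumed grid
    # carbon intensity of 0.48 kg CO2 per kWh.
    def co2_kg(gpu_hours: float, watts: float = 300.0, intensity: float = 0.48) -> float:
        return gpu_hours * watts / 1000.0 * intensity

    print(co2_kg(170))  # ~24.48 kg for the RAVE training run
    print(co2_kg(12))   # ~1.73 kg for a vschaos2 training run
    ```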

    In the EU, the most efficient way to compensate CO₂ as a private person is by buying (and retiring) fractions of EU Allowances (EUAs) for CO₂ emissions. I’ve chosen ForTomorrow to compensate for my own environmental footprint in this manner on a yearly basis. 


    Use cases and examples

    In the past years, I’ve developed various frameworks in Pure Data that build on the idea of Latent Jamming in order to explore new ways of music co-creation. You can find these under Works.

  • Saatgut Proxy

    Saatgut Proxy is an experimental generative setup in Pure Data that creates both randomized and repeatable pathways through the latent spaces of two neural audio model architectures (RAVE, vschaos2) simultaneously.

    The framework is based both on generalized abstractions that I developed for the Latent Jamming use case and on additional prototypes of techniques that I later turned into dedicated abstractions.
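    A rough Python sketch of the “randomized and repeatable pathways” idea (in the actual framework this happens inside Pure Data; dimensions, step size, and the model handle are assumptions):

    ```python
    import torch

    def latent_path(seed: int, n_latents: int = 8, n_frames: int = 512, step: float = 0.05):
        """Seeded random walk through latent space: the same seed reproduces the
        same pathway, a different seed yields a new randomized one."""
        g = torch.Generator().manual_seed(seed)
        steps = torch.randn(n_latents, n_frames, generator=g) * step
        return steps.cumsum(dim=1).unsqueeze(0)       # (1, n_latents, n_frames)

    z = latent_path(seed=1)          # repeatable pathway
    # audio = rave.decode(z)         # decode with a RAVE model as in the sketches above
    ```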

    Output examples

    The Saatgut Proxy framework led to the following release artifacts:

    MARTSM=N – VARIA 3L [datamarts/2KOMMA1]: Nina

    MARTSM))N – Saatgut Proxy Reflux [datamarts/2KOMMA0]: Nina

    MARTSM))N – Saatgut Proxy [n/a]: Bandcamp

  • Spoor

    Early prototypes and test setups in latent embedding mimicry and in establishing a control-level baseline in latent space led to Spoor, the name both of a loosely coupled set of Latent Jamming techniques and of two releases:

    MARTSM/\N – Spoor Widen [datamarts/1KOMMA9]: Nina

    MARTSM/\N – Spoor [n/a]: Bandcamp

    The video below shows the setup that led to the tracks Loom and Loom Rewood.

    The track Architects was based on the following patch: