Neural Tides: Oceanic Neural Granular Synthesizer (2024)
<Neural Tides: Oceanic Neural Granular Synthesizer> is a multi-granular sound synthesizer that uses an artificial neural network to analyze and reorganize sound sources from our previous project Wind Traces (2023). The synthesis process mimics the natural degradation of styrofoam, blending natural and artificial elements to reflect the integration of marine debris into the natural world. A custom interface with a touchpad, knobs, and a motion sensor allows users to navigate sound sources, adjust time segmentation, and add effects.
12/6/2024 · 2 min read
<Neural Tides> is a multi-granular synthesizer that uses an artificial neural network trained on sound samples recorded along the coasts of two isolated South Korean islands, Hakrim-do and Ulleung-do. These recordings are mapped as sound particles within a latent space, enabling users to freely explore and listen to the islands' coastal environments.

The synthesis process mimics the natural degradation of styrofoam, blending natural and artificial elements to reflect the integration of marine debris into the natural world. We employed granular synthesis to illustrate how waves and wind break plastic into small pieces that merge with natural materials on the coast: just as plastic fragments over time, granular synthesis divides sound into particles and creates new sounds by adjusting grain size.

The instrument is designed for precise manipulation, allowing users to navigate the latent space and select sound particles. The custom, user-friendly interface features six knobs, a motion sensor, and a touchpad, enabling users to navigate sound sources, adjust time segmentation, and add effects. The instrument's case is 3D printed with algae-based filament, making it biodegradable.

<Neural Tides> transforms the visual experience of the sea into an auditory one. Our aim is to promote environmental awareness in a lighthearted way with this playful instrument while sharing and connecting our ideas with others.
<Neural Tides> applies AI and machine-learning techniques to create an intricate sound analysis and synthesis system. At its core, the project uses an autoencoder architecture implemented in SuperCollider and Max 8. The sound analysis begins with the segmentation of source recordings into grains; the neural network then analyzes each sound fragment using 13 Mel-frequency cepstral coefficients (MFCCs) and performs dimensionality reduction, mapping the sound grains into a navigable 2D latent space.
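The system itself runs in SuperCollider and Max 8; purely as an illustration of the first step, the grain segmentation could be sketched in Python as below. The 50 ms grain size, 25 ms hop, and the noise stand-in signal are assumed values for this sketch, not the project's actual parameters.

```python
import numpy as np

def segment_into_grains(signal, sr, grain_ms=50, hop_ms=25):
    """Slice a mono signal into overlapping grains (hypothetical sizes)."""
    grain = int(sr * grain_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    return np.stack([signal[i:i + grain]
                     for i in range(0, len(signal) - grain + 1, hop)])

sr = 44100
signal = np.random.default_rng(0).standard_normal(sr)  # 1 s stand-in recording
grains = segment_into_grains(signal, sr)
print(grains.shape)  # (number of grains, samples per grain)
```

Each row of `grains` would then be described by its 13 MFCCs before being fed to the autoencoder.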
These sound grains are processed through a series of convolutional layers in the encoder, extracting hierarchical features from the MFCC representations. The bottleneck layer then compresses this information into a 2D latent vector, while the decoder attempts to reconstruct the original grain from this compressed representation.
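As a shape-level sketch of that encoder–bottleneck–decoder data flow: the real system uses convolutional layers, but hypothetical dense layers with random, untrained weights stand in here, and the 8-unit hidden size is an assumption. The point is only the dimensionality: 13 MFCCs in, a 2-D latent vector at the bottleneck, 13 values reconstructed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes: 13 MFCCs -> 8 hidden -> 2-D latent -> 8 -> 13.
W_enc1, W_enc2 = rng.standard_normal((8, 13)), rng.standard_normal((2, 8))
W_dec1, W_dec2 = rng.standard_normal((8, 2)), rng.standard_normal((13, 8))

def relu(x):
    return np.maximum(x, 0.0)

def encode(mfcc):
    return W_enc2 @ relu(W_enc1 @ mfcc)   # compress to a 2-D latent vector

def decode(z):
    return W_dec2 @ relu(W_dec1 @ z)      # reconstruct the 13 MFCCs

mfcc = rng.standard_normal(13)
z = encode(mfcc)
print(z.shape, decode(z).shape)
```

Training would minimize the reconstruction error between `mfcc` and `decode(encode(mfcc))`; the learned 2-D `z` becomes the grain's position in the navigable latent space.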
To create a multi-dimensional sound exploration space, the system utilizes four distinct autoencoder neural networks. Each network is trained on a different subset of sound grains, focusing on specific acoustic characteristics and timbral profiles, and generates its own layer of analyzed and organized sound data.
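One way to picture this four-layer organization, with PCA (via SVD) standing in for each trained autoencoder's dimensionality reduction. The grain count and the even four-way split are assumptions for the sketch; the project trains four separate autoencoders rather than running PCA.

```python
import numpy as np

rng = np.random.default_rng(3)

# 400 grains described by 13 MFCCs, split into four banks (hypothetical split).
mfccs = rng.standard_normal((400, 13))
banks = np.array_split(mfccs, 4)

def embed_2d(features):
    """Stand-in for one autoencoder: project a bank to 2-D via PCA (SVD)."""
    centred = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:2].T

layers = [embed_2d(bank) for bank in banks]  # four 2-D latent layers
print([layer.shape for layer in layers])
```

Each of the four resulting layers is an independent 2-D map of its own subset of grains.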
User interaction is facilitated through an additional Multilayer Perceptron (MLP) neural network for gesture recognition. This model is trained on hand and finger gestures captured by a Leap Motion device. It uses a time-series approach, analyzing sequences of 3D coordinate data from the sensor.
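A minimal sketch of how such a gesture classifier could consume Leap Motion-style time-series data: a sequence of frames, each holding 3D coordinates for several tracked points, flattened and passed through an MLP. The frame count, number of tracked points, gesture count, and the untrained random weights are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

SEQ_LEN, N_POINTS, N_GESTURES = 20, 5, 4   # hypothetical sizes

# One gesture sample: 20 frames of 5 tracked points, each an (x, y, z) coordinate.
sample = rng.standard_normal((SEQ_LEN, N_POINTS, 3))

# Untrained MLP weights, just to show the data flow: flatten -> hidden -> classes.
W1 = rng.standard_normal((64, SEQ_LEN * N_POINTS * 3))
W2 = rng.standard_normal((N_GESTURES, 64))

def classify(seq):
    h = np.tanh(W1 @ seq.ravel())          # hidden layer over the flattened sequence
    logits = W2 @ h
    p = np.exp(logits - logits.max())
    return p / p.sum()                     # softmax over gesture classes

probs = classify(sample)
print(probs.shape)
```

The class with the highest probability would select which combination of sound bank layers to activate.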
The gesture recognition model interprets user movements to activate various subsets of sound bank layers, allowing navigation through different combinations and interpolations of the four latent-space layers and providing an intuitive, engaging interface for sound exploration.
Finally, the synthesis engine creates new sounds from the organized grains and their positions in the latent spaces, applying a nearest-neighbor search to find grains close to the user-defined position. The system interpolates between grains based on their latent-space proximity, resulting in a dynamically evolving granular soundscape.
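A brute-force sketch of this final stage for a single 2-D layer: nearest-neighbor lookup around the cursor position, then inverse-distance weighting to blend the selected grains. The grain count, grain length, and the specific weighting scheme are assumptions for illustration, not the project's published method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical latent map: 200 grains at 2-D positions, each a 50 ms waveform.
positions = rng.uniform(0, 1, size=(200, 2))
grains = rng.standard_normal((200, 2205))   # 2205 samples ~ 50 ms at 44.1 kHz

def nearest_grains(cursor, k=4):
    """k-nearest-neighbor search in the 2-D latent space (brute force)."""
    d = np.linalg.norm(positions - cursor, axis=1)
    return np.argsort(d)[:k], d

def synthesize(cursor, k=4):
    """Blend the k closest grains, weighted by inverse latent-space distance."""
    idx, d = nearest_grains(cursor, k)
    w = 1.0 / (d[idx] + 1e-6)
    w /= w.sum()
    return (w[:, None] * grains[idx]).sum(axis=0)

out = synthesize(np.array([0.5, 0.5]))
print(out.shape)  # one interpolated grain
```

As the cursor moves through the latent space, the neighbor set and weights shift continuously, which is what makes the resulting soundscape evolve rather than jump between fixed samples.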
https://a-nam-san-hot-pot.vev.site/collective_nshp_kr/neural-tides-2024_en/
This project premiered at the Sónar+D festival in 2024, sponsored by the Ministry of Culture, Sports and Tourism, the Arts Management Support Center, and Arts Korea Lab.