sound spatialisation
Hello everyone,
I'm working on a physical computing project and would love some advice on spatialising sound in Max.
I've built a device with 4 load cells under a plate. The cells measure the force at each corner, which lets me calculate the centre of force; the readings are sent as live OSC data via a Raspberry Pi Pico W, which then communicates with Max.
In Max, I'm calculating xpos, ypos, and total force from the plate. Now, I'd like to use a piezo mic attached to the plate as a live audio input, capturing whatever physical interaction is happening (touches, taps, movement).
My goal is to spatialise that live audio based on the calculated centre of force — ideally using stereo panning (binaural-style, as it's being demoed over headphones).
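One common starting point for that kind of panning is an equal-power (constant-power) pan law: it keeps the perceived loudness steady as the source sweeps from left to right. A small Python sketch of the maths, assuming xpos is normalised to 0..1 (the function name is hypothetical):

```python
import math

def equal_power_pan(xpos):
    """Map xpos in 0..1 to (left_gain, right_gain).

    Equal-power law: gains trace a quarter circle, so
    left^2 + right^2 == 1 for any position.
    """
    theta = xpos * math.pi / 2
    return math.cos(theta), math.sin(theta)
```

In Max you could feed the two gain values into a pair of `*~` objects on the piezo signal; spat5 (mentioned below in the thread) handles the full binaural version for you.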
Has anyone done something similar, or could point me to examples/tutorials for dynamic panning or gesture-based spatialisation in Max?
Many thanks in advance
Fede
It's not clear whether you are running RNBO on the Raspberry, or whether the Raspberry talks to a remote computer running Max. If it's the latter, it may be overkill, but the spat5 package from Ircam has everything you need for binaural (or any other configuration) spatialisation. You do need to create a (free) account on the Ircam forum in order to download it, though.
Thank you RFL, this is great - I'm downloading the spat5 package now and will look there for what I need. I'm not running RNBO on the Raspberry; the Raspberry is talking via Wi-Fi to my computer running Max. I'm a beginner trying things out, happy to fail and learn along the way. Let's see where this all brings me.