Sonic Cubes is a set of instruments intended for a live performance exploring a random music generation system. It involves the motion and spatial arrangement of 6 cubes that trigger and manipulate sound in a 17-channel audio setup, with listeners surrounded by a circle of speakers at ear level.
The tools of choice are Max/MSP and reacTIVision, a computer-vision framework that tracks the motion of tagged objects through a webcam. Detailed technical documentation can be found in my previous blog post. Having established the connection between the physical objects and Max through reacTIVision, I started my composition process.

The affordance of 6 cubes suggests the action of tossing them around, so that thousands of different sound combinations can be generated. With the goal of a random music generation system in mind, I experimented with different sounds: melodic, percussive, and experimental. I was very inspired by Terry Riley's "In C", where the entire 40-minute composition is based on very simple repeating note patterns. I began by attaching a short note sequence to each side of a cube and sending MIDI through noteout to play piano notes. I started with fewer cubes, each representing a type of 7th chord (fully diminished, half diminished, minor 7th, etc.). I then took certain notes out of each chord (e.g. the 3rd and 7th, or the root), leaving sparser sequences, and started to notice interesting patterns and unexpected combinations.

However, the longer it ran, the more robotic and dry it sounded because of the uniform piano timbre, so I considered other program/synth options. I tried different ways to route MIDI to dac~ through different sound sources, like VST~ and X.FM~, but I found fluidSynth~ to be the handiest: it loads any SoundFont file, which opens up tons of possibilities. I explored different combinations of synths, drones, and percussion to make the result more musical, and then applied effect modulation to certain sounds, triggered by the rotation of the cubes. After much experimentation, the piece came together.
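To make the pipeline concrete, here is a minimal sketch of the same idea outside Max, in Python. reacTIVision broadcasts TUIO messages over OSC (UDP port 3333 by default), and each fiducial ID can be mapped to a short note sequence rendered through a SoundFont, much as fluidsynth~ does in the patch. This is purely illustrative, not my actual patch: it assumes the python-osc and pyfluidsynth libraries, and the note sequences, SoundFont path, and timing are made up for the example.

```python
import time
import fluidsynth
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Hypothetical mapping from fiducial IDs (cube faces) to note sequences,
# echoing the 7th-chord fragments described above.
SEQUENCES = {
    0: [60, 63, 66, 69],  # fully diminished 7th on C
    1: [60, 63, 66, 70],  # half diminished 7th
    2: [60, 63, 70],      # minor 7th with the 5th omitted
}

fs = fluidsynth.Synth()
fs.start()                        # default audio driver
sfid = fs.sfload("piano.sf2")     # hypothetical SoundFont file
fs.program_select(0, sfid, 0, 0)  # channel 0, bank 0, preset 0

def on_2dobj(address, *args):
    # A /tuio/2Dobj "set" message carries: session ID, fiducial ID,
    # x, y, rotation angle, then velocities and accelerations.
    if not args or args[0] != "set":
        return  # ignore "alive" and "fseq" bookkeeping messages
    fiducial_id, x, y, angle = args[2], args[3], args[4], args[5]
    # The rotation angle is what drove effect modulation in the piece;
    # here we simply play the face's note sequence.
    for note in SEQUENCES.get(fiducial_id, []):
        fs.noteon(0, note, 80)
        time.sleep(0.15)          # a real patch would run on a shared clock
        fs.noteoff(0, note)

dispatcher = Dispatcher()
dispatcher.map("/tuio/2Dobj", on_2dobj)

# reacTIVision sends TUIO over UDP on port 3333 by default.
BlockingOSCUDPServer(("127.0.0.1", 3333), dispatcher).serve_forever()
```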
I achieved my goals in that all the technical aspects worked exactly the way I wanted: moving the cubes under the camera affects the direction of the sound source, and every cube locks to the right beat whenever it is added or removed. What didn't work as expected: when all the cubes are thrown under the camera at once and too many things are changing at the same time, the system gets confused, and when certain cubes are removed, some of the clips do not stop playing. Through this project I learned about ambisonic encoding, decoding, and automation using the ICST Ambisonics package, as well as reacTIVision, a unique object-tracking framework. If I were to revise the work, I would make triggering and stopping more reliable, use a translucent table surface with the camera hidden underneath, and perhaps explore less musical, more experimental sounds, attaching the fiducial IDs to other objects so that the interactions imply different, closer relationships between action and sound.
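For the curious, the core idea behind the spatialization can be shown in a small sketch: a mono source is encoded into horizontal first-order ambisonics from an azimuth derived from the cube's position, then decoded onto a ring of speakers. This is not the ICST package's implementation, just an illustration of the principle; the 16-speaker ring, the FuMa-style W weighting, and the position-to-azimuth mapping are all my assumptions.

```python
import math

NUM_SPEAKERS = 16  # hypothetical ring size within the 17-channel setup

def encode(sample, azimuth):
    """Encode a mono sample into first-order B-format (W, X, Y)."""
    w = sample / math.sqrt(2)        # FuMa-style W weighting
    x = sample * math.cos(azimuth)
    y = sample * math.sin(azimuth)
    return w, x, y

def decode(w, x, y):
    """Basic decode of (W, X, Y) onto an equally spaced speaker ring."""
    signals = []
    for i in range(NUM_SPEAKERS):
        theta = 2 * math.pi * i / NUM_SPEAKERS
        signals.append((math.sqrt(2) * w
                        + 2 * (x * math.cos(theta) + y * math.sin(theta)))
                       / NUM_SPEAKERS)
    return signals

def cube_azimuth(cam_x, cam_y):
    """Map a cube's normalized camera position (0..1 on both axes) to an
    azimuth, treating the center of the table as the listener position."""
    return math.atan2(cam_y - 0.5, cam_x - 0.5)
```

Moving a cube across the table changes its azimuth, which re-encodes the source and shifts its apparent direction around the speaker ring, which is essentially what the audience hears.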