Sound In Space Rubric

Rubric for evaluating my assignments and progress:

In my projects I aim to explore how our perception of music and sound has evolved over time with the influence and affordances of technology. My projects will examine the relationships between organic forms of sound, whether from natural sources or acoustic instruments, and the agency of synthesized sound generated from electrical or computer signals.

I’d like people to listen for:

  • Rhythm

  • Layers and transitions

  • Use of effects and modulation

  • Manipulation of natural sounds

  • Balance between the organic and the mechanical 

  • Sensibility and rationality


Week 2 Piano Keyboard Lock Screen App

Originally I wanted to create a drum pad lock screen, where the user would tap a beat pattern that has to match a predetermined pattern to unlock the phone. To do that, however, I would need a metronome to keep track of timing and rhythm, and I would need to analyze note duration values (half note, quarter note, rest, etc.) and record the resulting pattern. Since rhythm is a time-based medium, working with timing in Xcode is still a little tricky for me at this point, so I modified the interface from a drum pad to a piano keyboard with one octave of eight playable C major keys. If the melody/note sequence matches the predetermined melody, the phone unlocks. The piano keys act just like lock screen buttons, except that instead of displaying numbers, each key plays a different pitch when pressed.
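The unlock check itself is just sequence matching. The app lives in Xcode, but since the rest of my posts use JavaScript, here is a rough sketch of the logic in JavaScript; the melody, key names, and unlock() are placeholders, not the app's actual code.

```javascript
// Hypothetical sketch of the unlock logic (not the app's actual Swift code).
const secretMelody = ['C', 'E', 'G', 'C']; // placeholder secret melody
let tappedNotes = [];

// Called whenever a piano key is pressed
function onKeyPressed(note) {
  tappedNotes.push(note);
  // Keep only as many recent notes as the secret melody is long
  if (tappedNotes.length > secretMelody.length) {
    tappedNotes.shift();
  }
  // Unlock when the most recent taps match the secret sequence
  if (tappedNotes.join() === secretMelody.join()) {
    unlock(); // placeholder for the actual unlock action
  }
}
```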


Week 1-2

In the first week of class, we re-examined the definition of new media art and looked at some of the important artists and works that represent radical shifts in how we evaluate and appreciate art and its forms. One of them was Marcel Duchamp, a leader of the Dada movement in the early 20th century. His use of found objects in pieces like Fountain and Bicycle Wheel, instead of creating something from scratch, opened up conversations around what art is and how it is made and perceived. Soon after, the boundary between how art "should" be made and the idea that everyone can make art started to collapse. Following Duchamp were artists who kept challenging conventional art-making practice: Robert Rauschenberg, Martha Rosler, Felix Gonzalez-Torres, and Mark Hansen and Ben Rubin.


Week 5 - 7 Unreal 3D Animation - Pretty Girl Sneezes

From weeks 5 to 7, we learned how to create our own 3D characters in Fuse, export and auto-rig them, and create animations in Mixamo. We also learned the basics of Unreal Engine: creating a scene, importing 3D objects, uploading a T-pose skeletal mesh, applying multiple animations to the mesh, and creating a sequencer for the animations. I teamed up with Yves, and our inspiration came from this joke - A Pretty Woman Sneezes At A Restaurant.

We created a girl and a guy character in Fuse and exported some animations from Mixamo.

In our first version of the animation, for the girl we found "sitting yell" on Mixamo, which looks similar to sneezing, "sitting disbelief," which looks similar to pushing an eyeball back into the head, and a picking-up-object animation. For the guy, we had walking and catching. We spent an entire afternoon trying to figure out how to combine two animations of a character smoothly without having the character jump back to the start position of the animation. Exporting the animations from Mixamo with the "in place" option checked and then manually transforming the position in Unreal would solve the problem. However, a lot of the animations we found do not have that option, and we couldn't find any good substitutes for the actions we wanted to use. We followed this tutorial on how to blend animations, and we got something like this.

We then decided that instead of one long shot where viewers see all the action at the same time, we could cut between cameras and focus on one animation of one character/object at a time, avoiding the blending issue altogether. So we focused on the girl sneezing and her eyeball shooting out.

Our first scene is the camera rotating and zooming in on the face of the girl as she sneezes. The second scene is a close-up of the eyeball flying through the air. The last scene is the girl struggling after her eyeball is gone, with a zoom-in to her face, where a dark hole sits where the eyeball used to be.

Another challenge: when we first animated the eyeball, it shot out in a straight line, which did not look realistic. Later I located the curve editor in the sequencer, which is similar to the graph editor in After Effects, and had the eyeball follow a parabolic curve instead.

To make it seem like the girl had lost her eyeball at the end of the animation, when she lifts her head up, we figured out we could mask out her eye in Premiere as we combined the scenes. Under Opacity in Effect Controls, we masked the eye contour and motion-tracked the mask as she lifts her head.

Our final rendering:

Week 2 - 4 After Effects Animation - Robot in Space

For this animation project in After Effects, James, Fenfen, and I discussed an idea James had for the story. It is about a robot wandering on a planet that finds an empty spaceship. Looking longingly at the "moon" not far from the planet, the robot decides to get on the spaceship and embark on a journey to the "moon." The first time the robot attempts to launch the spaceship, the engine falls off. After fixing the engine, the robot attempts to launch it again, but this time the wings fall off. The persistent robot fixes it once more and finally takes off. However, as the spaceship approaches the "moon," it hits it and bounces off. The "moon" was just an illusion. James drew a storyboard: Storyboard v001

We discussed what we wanted the robot to look like and Fenfen also created a robot in Illustrator:

 

 

I created a spaceship in Illustrator, with the body, engine, flame, and wings on separate layers. I split each wing into two pieces along a jagged line, each piece on its own layer, with patches on top.


In weeks 3 and 4 we learned how to import our assets into After Effects as footage and as compositions so we could animate individual parts. Some useful tools I learned were the puppet pin tool, for pinning the joints of characters to move body parts like arms and legs, and parenting or the pick whip, for following the animation parameters of another object. We also learned how to animate objects in 3D space: placing the camera, the different types of lights, how to cast shadows, and how to use multiple views from different directions or compositions to see changes reflected while animating. We also talked about how to slowly reveal an object using the pen tool, have objects follow the path of a null object, add a slider effect to a null object to link to different parameters, and add expressions to parameters with hand-coded values.

My part of the animation was the spaceship launching and failing. During both launch attempts, I added a wiggle expression to the positions of all the spaceship parts, created a null object with a slider effect, and pick-whipped all the wiggle expressions to that slider so I could keyframe the wiggle amount and have all the spaceship layers shake for a few seconds. When the engine falls, I tried to simulate the motion of an object falling freely and bouncing back. I used the graph editor to animate the y position according to this graph to achieve a realistic falling-and-bouncing effect. I did something similar for the falling-wings composition.

[Reference image: position-time graph of an object falling and bouncing]
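After Effects expressions are written in JavaScript, so the slider-driven wiggle from the launch attempts looks roughly like the sketch below. "Wiggle Control" and the slider name are placeholders for whatever the null object and its effect are actually called in our comp.

```javascript
// Expression on the Position property of each spaceship layer (AE expressions are JavaScript).
// "Wiggle Control" / "Slider Control" are placeholder names for the null and its slider effect.
amt = thisComp.layer("Wiggle Control").effect("Slider Control")("Slider");
wiggle(10, amt); // 10 wiggles per second; amplitude keyframed on the slider
```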

While James was working on the intro scene, I also learned that I could create a starry space background and bright glowing planets with a fractal noise effect.

After we put all the animations together, James added a background soundtrack he composed to match the pace of the story, and we added some sound effects to bring the whole animation to life.

Here is the final render:

https://youtu.be/4mGDK5Js1sY

W9 Music Exhibition Design

Adi, Camilla, and I teamed up this week to build a virtual interactive museum exhibition explaining techno music. First, we analyzed some of the musical concepts that define the genre:

The central rhythmic component is most often in common time (4/4), where a single measure is divided into four beats marked by a bass drum on each quarter-note pulse. Each beat can be further divided, with eighth notes dividing each quarter note in half and sixteenth notes dividing each quarter note into quarters. Famously called "four-on-the-floor," the bass drum is the automated heartbeat or metronome around which all other sounds fall into place: a backbeat played by snare or clap on the second and fourth beats of the measure, and an open hi-hat sounding every second eighth note. The tempo tends to vary between approximately 120 and 150 beats per minute (bpm), depending on the style of techno. Techno focuses more on sophisticated rhythm and repetition (cross-rhythm, syncopation) than on melody or chord progression. The development is gradual, with many sounds layered and altered over time. Together, these characteristics create something immediately recognizable as techno.
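To make that grid concrete, here is a minimal Tone.js sketch of the basic pattern described above: kick on every quarter note, clap backbeat on beats 2 and 4, and an open hi-hat on the off-beat eighth notes. The sample paths are placeholders, and the Tone.js v13-era API (toMaster, Tone.Transport) is assumed.

```javascript
// Minimal four-on-the-floor grid in Tone.js (v13-era API; sample paths are placeholders).
const kick = new Tone.Player('samples/kick.wav').toMaster();
const clap = new Tone.Player('samples/clap.wav').toMaster();
const hat  = new Tone.Player('samples/openhat.wav').toMaster();

// Kick on every quarter note: beats 1, 2, 3, 4
new Tone.Loop((time) => kick.start(time), '4n').start(0);

// Backbeat: clap on beats 2 and 4 of every measure
const backbeat = new Tone.Part((time) => clap.start(time), [{ time: '0:1:0' }, { time: '0:3:0' }]);
backbeat.loop = true;
backbeat.loopEnd = '1m';
backbeat.start(0);

// Open hi-hat on every second eighth note (the off-beats)
new Tone.Loop((time) => hat.start(time), '4n').start('8n');

Tone.Transport.bpm.value = 128; // a typical techno tempo within the 120-150 bpm range
Tone.Transport.start();
```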

Techno is a landscape. It's a reaction and artistic statement about the automated world we actually live in. It's half machine and half human! Techno's highly repetitive and mechanical rhythmic structure, set against human expressivity in sampling, modulating, remixing, and looping, shapes its unique art form.

From here, we identified three elements of modern techno music:

  1. Repetition - "four on the floor" as the basic unit

  2. Instruments being layered/omitted one by one over the course of a song

  3. Altering/modulating a sound texture gradually over time

With these elements in mind, we think the best way to explain how techno works is to deconstruct an existing techno song into individual instrument layers as building blocks. Users rebuild their own version of the track by adding and subtracting layers and playing with different combinations on top of the 4/4 rhythmic structure, with expressive control over multiple parameters of certain layers.

Inspired by Valentina's presentation of the different musicians for the fantasy blues band, we came up with the idea of presenting different options for each of the elements, to give the users in our museum a sense of agency in choosing their own sounds and controllers within the constraints of a techno infrastructure.

Our interface is inspired by the session view of the popular DAW Ableton Live, but without the intimidating panels and knobs, reduced to the bare core of triggering and stopping clips. Each instrument (in our case: a bass drum, multiple hi-hats, clicky percussion, rim, bass, brass hooks, pad drone, and a sequence) is a column of clips represented by blocks that can be clicked to enable or disable them. Since session view is non-linear, we decided to add a visual indicator showing where in the 4/4 timeline the sound is -- all following the bass drum as the heart of the song. Users can also modulate some parameters of certain instruments over time by dragging the sliders underneath the instrument columns. Some visual guides for the interface: https://www.are.na/adi-dahiya/techno-is-a-landscape

I started off by searching for stem files of existing techno tracks available for download, so that I could have separate files for individual instruments. I found Solace by Pan-Pot to be a good example of a techno song to use. I opened the files in Ableton, reduced each instrument track to its smallest unit, and exported all the clips to be mapped onto buttons in the p5 sketch. I loaded the clips into p5 and looped all the percussive instruments every measure with Tone.Event.

However, I ran into trouble with the longer instrument clips (10-30 measures per loop) when looping them with Tone.Event: the clips don't start or stop until the current loop has ended. In other words, if a button is pressed at a random time, instead of the loop starting/stopping at the following measure, it wouldn't start/stop until the beginning of the next loop, which can take a long time. So I decided not to use Tone.Event for those and just had the button control start() and stop() on the Tone.Player. But then the clips would be out of sync with the percussive clips. After some research I found that I can sync a Tone.Player to the Transport with sync(). The clips would still start/stop at an arbitrary time on mouse click, but I fixed that by passing the transport time of the next measure to start(), so that no matter when the user turns a clip on, it still lands in place with the others. I found the process of splitting Tone.Transport.position to get the measure, and converting back and forth between a number and a string, a little tedious, but I haven't figured out a better way to enable/disable a long looping clip at the start of the next measure.
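A minimal sketch of that quantize-to-the-next-measure idea, assuming a Tone.js v13-era API and a placeholder stem path (my actual sketch is linked at the end of this post):

```javascript
// Start/stop a long, synced clip on the downbeat of the next measure.
const bass = new Tone.Player('stems/bass.wav').toMaster(); // placeholder stem path
bass.loop = true;
bass.sync(); // start/stop times are now scheduled along Tone.Transport

function nextMeasure() {
  // Transport.position is a "bars:beats:sixteenths" string, e.g. "3:2:1"
  const bars = parseInt(Tone.Transport.position.split(':')[0], 10);
  return (bars + 1) + ':0:0'; // downbeat of the following measure
}

function toggleBass(on) {
  if (on) {
    bass.start(nextMeasure()); // lands exactly on the next downbeat
  } else {
    bass.stop(nextMeasure());
  }
}
```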

Another big challenge for this project was figuring out the best parameters to modulate for the instrument tracks. Listening to the original clip, it was very noticeable that certain parameters of the sound (high-pass filter frequency? resonance?) change over time, but when I tried to recreate that modulation with Tone, it was hard to pinpoint which effects were being used or which parameters were being altered. It felt a little overwhelming because there are tons of parameters I could experiment with, and tons of variables and value ranges I could test.
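For experimentation, one simple guess at the effect chain is to route a stem through a Tone.Filter and ramp its cutoff from a slider; this is only a sketch of how I tried to approximate the modulation, not what the original track actually does:

```javascript
// Hypothetical modulation experiment: sweep a high-pass filter over a looping stem.
const filter = new Tone.Filter(20, 'highpass');
filter.Q.value = 2;                                           // a little resonance
const pad = new Tone.Player('stems/pad.wav').connect(filter); // placeholder stem path
filter.toMaster();
pad.loop = true;

// Called from a p5 slider: glide the cutoff to the new value over two seconds
function onFilterSlider(freqHz) {
  filter.frequency.rampTo(freqHz, 2);
}
```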

Ultimately, for a museum environment, we imagine our installation could be shown as it is now, on a screen, for visitors to play with using headphones or speakers, with instructions popping up from time to time to prompt them to enable or modulate clips, similar to the Jazz Computer project. The instructions would suggest what a proper progression of a techno song could be, but it would be up to visitors whether to follow them. Alternatively, it could be installed in a more immersive environment, such as a room with surround sound, featuring tactile buttons that trigger each sound, with visual feedback on the walls around the user that lines up with the instruments they have selected and where they are in the timeline.

 

Final project proposal 2

Idea: an interactive storytelling installation inspired by a scene from the movie 2001: A Space Odyssey, in which the astronaut Dave shuts down HAL 9000, the AI that controls the spacecraft. "In the film, HAL's central core is depicted as a crawlspace full of brightly lit computer modules mounted in arrays from which they can be inserted or removed." In the scene, Dave removes the modules to terminate HAL, and as each module is pulled out the AI starts to deteriorate. Its voice gets slower and deeper, and it begs Dave to stop as it is dying. Finally it sings the song "Daisy," the first song ever sung by a computer.


Week 8 Pixelated Motion Detection camera

After watching the Brightness Mirror tutorial on processing pixels from video capture into a lower-resolution, brightness-only image (no RGB), I decided to modify the example into a pixelated motion detection camera: if any pixel's value changes, that pixel is filled with red. I approached this by adding a new variable, pbright, at the end of the nested for loop, hoping to save each pixel's brightness from the previous frame and compare it with the current frame. However, it turned out that my camera seemed to be turning pixels with brightness above a certain threshold red and filling the darker pixels with their brightness values. Code: https://editor.p5js.org/ada10086/sketches/r1IiME9nm


I had Dan look at my code, and it turned out that inside the nested for loop I was comparing the brightness of the current pixel with the brightness of the previous pixel in the pixel array, instead of with the same pixel in the previous frame.

Instead of comparing neighboring pixels, I had to save all the pixels from the last frame as pFrame and compare the pixels in pFrame with the pixels in the current frame to detect pixel change (motion). I referred to this MotionPixels code example (tutorial) and combined it with the Brightness Mirror code.
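The core of that previous-frame comparison looks roughly like this simplified sketch (not my exact final code, which is linked below):

```javascript
// Simplified previous-frame motion detection in p5.js (not the exact final code).
let video;
let pFrame = []; // brightness of each pixel from the previous frame

function setup() {
  createCanvas(640, 480);
  noStroke();
  video = createCapture(VIDEO);
  video.size(80, 60); // low resolution for chunky pixels
  video.hide();
}

function draw() {
  video.loadPixels();
  const w = width / video.width;
  const h = height / video.height;
  for (let y = 0; y < video.height; y++) {
    for (let x = 0; x < video.width; x++) {
      const i = (x + y * video.width) * 4;
      const bright = (video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2]) / 3;
      // Compare against the SAME pixel from the previous frame, not the previous pixel
      if (pFrame[i] !== undefined && abs(bright - pFrame[i]) > 20) {
        fill(255, 0, 0); // motion detected: fill red
      } else {
        fill(bright);
      }
      rect(x * w, y * h, w, h);
      pFrame[i] = bright; // store for the next frame
    }
  }
}
```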

My biggest issue, though, was that when I called video.pixels[], it wasn't returning anything: in console.log it was an undefined object. If I changed it to just pixels[], it did what I wanted (accessing video pixels), but pixels[] is supposed to access canvas pixels, not video pixels. video.pixels[] was working in the tutorial but not in my sketch, and I was told it was a version issue. So inside index.html I changed the p5 library version back to 0.6.1, and it worked.

My final code: https://editor.p5js.org/ada10086/sketches/HJfXU_j37

 

Week 1 Stop Motion Animation - The Catch up

In the week 1 class we talked about a brief history of sequential art and animation and some principles of animation. I loved how devices like the thaumatrope, phenakistoscope, and zoetrope generated moving images back in the day and created the illusion of characters coming to life. The principles of animation were really helpful for understanding how to animate characters and objects so they look less mechanical and more lively, following real-world physics, and how to achieve effects that can only exist in animation through exaggeration. The prompt for this week's assignment was to tell a short story in 30 seconds or less with stop motion, which made me think about what kind of story can be told within 30 seconds. I instantly thought about jokes, especially really bad jokes that are easy to grasp. I have a whole mental database of bad fruit and vegetable jokes I used to walk around telling people, so I shared this tomato joke with my group. We all loved it, so we decided to shoot a stop motion with real vegetables as the characters in the joke.

Before shooting, we collected all the materials: colored paper for the street and sky setting, a variety of vegetables, and some plastic eye pieces to stick on the vegetables.

 

Following the in-class demonstration of how to use Dragonframe and the camera, we set up all the hardware (one Canon 5D Mark III camera on a tripod, three lights pointing in different directions) and software (one production laptop running Dragonframe).


During shooting, we learned to switch between capture mode and video mode and to do a focus check every time we switched to a new shot. We chose Large Fine JPG files for our photos, took two pictures per frame, and exported the video in the H.264 codec at a frame rate of 23.97 and a resolution of 1620 × 1080. We edited the raw Dragonframe export in Premiere to adjust the duration of each shot and added some background music. Finally, we recorded our own voices for the characters and distorted them using the pitch shifter audio effect in Premiere.

One problem we encountered during shooting was that it was evening, so there wasn't any natural light in the room and we had to borrow a lot of lights to light our scene. We fixed one light on a tripod, and the other two were clipped to chairs. We noticed that however we positioned the lights, there were always some weird shadows projected onto the sky backdrop, which shouldn't happen in real life. So we ended up holding the lights during shooting, which probably wasn't a great idea for stop motion, because the shadows move around as the hands holding the lights shift between frames. However, it turned out that the shadows just made it seem like there was a breeze and the trees were swaying, so given the time constraint, we settled for it. We also had some trouble manually adjusting the white balance in the beginning, so the picture color came out a bit warmer than expected.

Our final video:

https://youtu.be/ne7z2TUaghY

Final project proposal 1

Concept option 1: Since I just finished my Code of Music midterm project, a live audio-visual generator, I am thinking about continuing to refine it, not just sound-wise but also interface-wise. As I played with my own web-based instrument, I was pretty happy with the overall sound and visual reactivity and complexity, but I was still struggling to make my live composition more expressive. Having the whole interface live in the browser imposes certain limitations: I was only able to control the audio and visuals with three on-screen sliders, a mouse drag function, and a key press function, and I had trouble controlling multiple parameters at the same time because there is only one mouse. So I thought, why not make it a physical interface -- not necessarily physical buttons and sliders like a MIDI controller, which has already been made probably thousands of times in other ITP projects. I want to create something that makes people "dance": either some sort of game similar to DDR, with visuals indicating what actions users should take to match the sound in order to score points, or something playing with the idea of "body as cursor," using different user actions as inputs for different parameters to generate sound and manipulate visuals accordingly -- the human body as an instrument.


W7 Bandsintown API artist tracker visualization

For this API assignment, I wanted to create some sort of artist follower visualization. I looked up the API documentation for SoundCloud, Spotify, Songkick, and Bandsintown, requested API keys, and only Bandsintown got back to me. I was able to get artist information and artist events as JSON from their public API. However, when I tried to access the same JSON URL (var url = 'https://rest.bandsintown.com/artists/stimming?app_id=09f313e072cd1b192f200fb70df19ea5') that I had opened in the browser from the p5 editor, it kept giving me "Script error. (: line 0)". I changed editors and the problem persisted. I couldn't figure out the issue, so I decided to manually import a selection of artist JSON files into my editor.

My original goal was to have the user input artist names, use the input to look up the artists' tracker counts in the Bandsintown database, and then have the tracker count draw little stick figures on the canvas -- the higher the tracker count, the more figures. I uploaded a few JSON files of artist information (drake, sia, joji), and in preload I had drake = loadJSON("json/drake.json"); joji = loadJSON("json/joji.json"); and so on. I wanted to access an artist's tracker count with drake.tracker_count, but I couldn't figure out how to convert a user input string like "drake" back into the variable drake. So I combined the individual artist files into one big artists JSON file, with each artist object labeled by the artist's name as a string ("drake", "sia", ...), and accessed the tracker count through artists.artists[input.value()].tracker_count, an expression I saw in this documentation.


Basically, I can access object values using dot (.) notation as well as bracket ([]) notation, which solved my problem:

myObj = { "name": "John", "age": 30, "car": null };
x = myObj.name;      // dot notation
x = myObj["name"];   // bracket notation, equivalent, and the key can come from a string variable
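Applied to my sketch, the lookup looks roughly like the following. This is a simplified sketch assuming a combined json/artists.json shaped like { "artists": { "drake": { "tracker_count": ... }, ... } }; my actual code is linked below.

```javascript
// Sketch of looking up a tracker count from a user-typed artist name.
let artists;
let input;

function preload() {
  artists = loadJSON('json/artists.json'); // hypothetical combined file
}

function setup() {
  createCanvas(600, 400);
  input = createInput('drake');
}

function draw() {
  background(255);
  const name = input.value().toLowerCase();
  const artist = artists.artists[name]; // bracket notation: a string key picks the object
  if (artist) {
    fill(0);
    text(name + ': ' + artist.tracker_count + ' trackers', 20, 20);
    // ...then draw one stick figure per N trackers
  }
}
```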

 

My code: https://editor.p5js.org/ada10086/sketches/HyeGPQI2X

To interact: https://editor.p5js.org/ada10086/full/HyeGPQI2X

 

 

Live audio-visual techno generator

Among all the music elements we have discussed in Code of Music up through week 6, I found the topic of synthesis the most interesting and, at the same time, the most challenging for me. Having studied music theory and played classical piano for a long time, I found rhythm, melody, and harmony rather old and familiar concepts, whereas synthesis, the technique of generating sound from scratch using electronic hardware and software, was never taught in my previous theory classes. Even with a little knowledge of synthesis from my Intro to MIDI class, I still found it hard to wrap my head around the code that applies all these new concepts, even after I finished my synth assignment. So I decided to challenge myself and refine that assignment by focusing on and experimenting further with some synthesis techniques. In my previous synth assignment, I drew inspiration from the Roland TB-303 bass line synthesizer and created a simple audio-visual instrument that manipulates the filter Q and the filter envelope decay. The TB-303 has a very distinct chirping sound that became the signature of acid house and a lot of techno music. Having listened to countless songs in those genres, I have grasped, at least on the surface, certain patterns in their subtle, gradual, layer-by-layer changes. After discussing with Luisa and resident Dominic, I thought: why not expand my assignment into an audio-visual techno instrument with multiple control parameters, for myself and other amateur fans to play with and experience the joy of creating and manipulating our own techno track on the go.

In the beginning, I was completely lost and didn't know where to start. Nevertheless, I began experimenting with all the types of synths and effects in the Tone.js docs, and some ideas started to emerge.

I used a MembraneSynth for the kick drum and a Loop event to trigger the kick every "4n", to generate a four-on-the-floor pattern: a steady accented beat in 4/4 time where the kick drum hits on every beat (1, 2, 3, 4). I also connected a compressor to the kick to control the volume. I asked how I could manipulate the bass without changing the synth, and I remembered that a common trick when DJs mix tracks is to apply a high-pass filter to sweep out the bass during a transition. So I added a high-pass filter to my MembraneSynth, controlled by a kick slider, and also had the slider change the brightness of the entire visual by inverting the black and white colors. I referred to the article "Low end theory: EQing kicks and bass" to set my filter slider values: "put a similarly harsh high pass filter on the bass sound and set the frequency threshold at 180Hz again, so that this time you're scooping out tons of bass."
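The kick chain described above, roughly sketched (placeholder values, not a copy of my code, which is linked at the end of the post):

```javascript
// Four-on-the-floor kick: MembraneSynth -> high-pass filter -> compressor -> master.
const kickFilter = new Tone.Filter(0, 'highpass'); // 0 Hz: filter effectively open
const kickComp = new Tone.Compressor(-30, 3);
const kick = new Tone.MembraneSynth().chain(kickFilter, kickComp, Tone.Master);

// Trigger the kick on every quarter note
const kickLoop = new Tone.Loop((time) => {
  kick.triggerAttackRelease('C1', '8n', time);
}, '4n').start(0);

// Called from the kick slider (value in [0, 1]): sweep the bass out up to ~180 Hz
function onKickSlider(value) {
  kickFilter.frequency.value = value * 180;
}
```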

In my original assignment, I had only one fixed set of notes generated by a MonoSynth, looped over and over: four beats of four 16th notes each (4/4 time). After running the code over and over during and after the assignment, the melody got stuck in my head and I started to get annoyed by the same notes. I knew that if I were to create an instrument not just for myself, the main melody line would need more variety for users to choose from, so it could feel more customized. Again inspired by the virtual TB-303, I really enjoyed its feature of randomizing the notes in the editor, and I decided to recreate that randomize function in my instrument, so that if users don't like a note patch, they can keep randomizing until they find one that satisfies them.

So I added a randomize function tied to a randomize button, and looped the MonoSynth notes every "1m" (sixteen 16th notes) with a Tone.Part(function(time, note)) event that plays the randomly generated notes each time the user clicks the button. I initialized sixteen 16th notes in a timeNotes array, where each item contains a time and a note, e.g. ["0:0:0", "Bb2"], and created another synthNotesSet array with all 13 notes in the octave C2-C3 plus some 0s as rests for the computer to pick from. In the randomize function, I access each 16th MonoSynth note with synthPart.events[i].value to overwrite the previously generated notes, and I print out the generated notes so users can see what is being played. I also added a compressor to the MonoSynth to control its volume.

As in the previous assignment, mouseX is tied to Q and the circle size, and mouseY to decay and circle density, but instead of mouse movement I changed the control to mouse drag, because I later created sliders for other parameters and moving the mouse would interfere with Q and decay, since the whole window is my sketch. I then noticed that dragging the mouse over a slider would still change my Q and decay values. I wanted each parameter to be controlled independently, so my solution was to confine the mouse drag for Q and decay to a square frame with the same center as the circle.
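The randomize step, roughly sketched under the assumptions above (it pokes at the Part's internal event list the same way the post describes; note values and ranges are illustrative):

```javascript
// Randomize the looping MonoSynth pattern by overwriting the Part's events.
// The note set mixes the C2-C3 octave with 0s used as rests (values are illustrative).
const synthNotesSet = ['C2', 'Db2', 'D2', 'Eb2', 'E2', 'F2', 'Gb2',
                       'G2', 'Ab2', 'A2', 'Bb2', 'B2', 'C3', 0, 0, 0];

const mono = new Tone.MonoSynth().toMaster();

// Sixteen 16th-note slots over one measure, initialized to a placeholder note
const timeNotes = [];
for (let i = 0; i < 16; i++) {
  timeNotes.push(['0:' + Math.floor(i / 4) + ':' + (i % 4), 'C2']);
}

const synthPart = new Tone.Part((time, note) => {
  if (note !== 0) {
    mono.triggerAttackRelease(note, '16n', time); // 0 means rest: play nothing
  }
}, timeNotes);
synthPart.loop = true;
synthPart.loopEnd = '1m';
synthPart.start(0);

// Tied to the randomize button: overwrite each slot with a random pick
function randomize() {
  for (let i = 0; i < 16; i++) {
    synthPart._events[i].value = random(synthNotesSet); // p5's random() picks one element
  }
}
```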

I kept the square as a visual reference for manipulating the MonoSynth, but it was static, and I wanted to animate it so it reacts to some sound parameter. So I thought about adding a hi-hat layer to my composition to make it sound richer. I tried to create a NoiseSynth hi-hat, but however I changed the ADSR values, I could never achieve the effect I wanted: sometimes I would hear the trigger, sometimes it was straight-up noise, and it sounded very mechanical, not like the hi-hat I wanted. So instead of a NoiseSynth, I decided to use a hi-hat sample and trigger it every "4n" with a Tone.Event and a playHihat callback function. I started the event at "0:0:1" so the hi-hat lands on the backbeat.

I told my classmate Max that I wasn't sure whether there was a way to animate the square rhythmically with the hi-hat without manually calibrating values, which was how I had matched the BPM to the pumping size of the circle in my original assignment. Max showed me his Tone.js project 33 Null, where he used Part.progress to read the progression of an event as a value between 0 and 1, which he then used to sync his animation with the sound the user triggers. I was so excited to learn this trick. I mapped my circle scaler and square scaler to kickLoop.progress and hihatEvent.progress, so the pumping circle and square are precisely driven by the BPM/kick drum/hi-hat, creating a more compelling visual effect.

I also wanted to be able to turn the closed hi-hat into something like an open hi-hat with another slider. I tried different effects on the hi-hat sample and ended up using a JCReverb. I wanted to recreate the tracing-circle effect from Max's project on my square frame when the reverb is applied, but for some reason I couldn't get it to work in my sketch. Instead, I came across Shiffman's tutorial on drawing object trails and applied it to the square frame: I created a Frame class, mapped rectTrails to the hi-hat slider value (which also controls the reverb), and used rectTrails to constrain the number of trails generated (frame.history.length).
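The progress trick in a nutshell (a sketch of the idea, with kickLoop standing in for the Tone.Loop driving the kick):

```javascript
// Drive the pulsing visuals from the loop's progress (0 -> 1 within each cycle)
// instead of hand-calibrated timing values.
function draw() {
  background(0);
  const p = kickLoop.progress; // how far we are through the current "4n" cycle
  const circleScale = map(1 - p, 0, 1, 0.8, 1.2); // biggest right on the beat, then shrinking
  ellipse(width / 2, height / 2, 200 * circleScale);
}
```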

During my experimentation with synths, I liked the FMSynth sound a lot, and decided to add another layer of low-pitched synth to fill out the depth of the overall composition. I wanted keyPressed to play a random FMSynth note drawn from the randomized MonoSynth note patch, so it doesn't sound too dissonant, but I also wanted to exclude the rests ("0") from synthPart._events[].value, so that every key press plays a note instead of silence. I did some research on how to pick a random index from an array while excluding one value, and I saw this example. So I created a function pickFmNote() that calls itself again until the note it picks is not "0".

I also wanted to use FFT/waveform analysis to visualize the FMSynth going across the canvas. I referred to the source code for this analyser example, but I had a lot of trouble connecting my FMSynth to Tone.Waveform. I then referred to another envelope example that draws a kick wave live from kickEnvelope, took the block of code from the source that draws the kick wave, and replaced kickEnvelope with fmSynth.envelope.value. The result did not look the same as the example, but I was quite pleasantly surprised by the effect: the dots disperse when the FMSynth is triggered and converge back into a straight line when it is released, so I decided to go with this accidental outcome.
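pickFmNote() is essentially rejection sampling over the current pattern. A simplified sketch of the idea, reusing the synthPart from the randomize sketch above and a hypothetical fmSynth:

```javascript
// Pick a random note from the current MonoSynth pattern, retrying on rests (0).
const fmSynth = new Tone.FMSynth().toMaster();

function pickFmNote() {
  const i = floor(random(synthPart._events.length)); // p5's random()/floor()
  const note = synthPart._events[i].value;
  if (note === 0) {
    return pickFmNote(); // picked a rest: try again
  }
  return note;
}

function keyPressed() {
  if (key === 'a') {
    fmSynth.triggerAttackRelease(pickFmNote(), '2n');
  }
}
```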

There is only a limited amount I could achieve in a week. The more I expand this project, the more ideas emerge and the more I feel compelled to improve it and add features. Right now I have a finished audio-visual instrument where users can create their own composition. To take it to the next step, I would refine the sound quality, experiment more with the envelope parameters and effects to make the layers sit better with each other (and sound less cheesy and mechanical overall), and add more controls to turn each layer on and off to allow a greater variety of layer combinations.

my code: https://editor.p5js.org/ada10086/sketches/rJ2k48z3X

to play: https://editor.p5js.org/ada10086/full/rJ2k48z3X

instructions:

click the randomize button to generate a randomized MonoSynth melody

drag bpm slider to change the speed of composition

drag the hihat slider to add reverb to hihat

drag the kick slider to filter out kick drum

press the 'a' key to play an FM synth note picked randomly from the MonoSynth note set.

 

Demo1:

https://youtu.be/zmlSP3IMHuk

 

Demo 2:

https://youtu.be/eZYGB9h9isY

Haunted Photobooth - Midterm Project documentation

The midterm falls around Halloween, so my partner Shivani had the idea of a spooky photo booth with some sensor (TBD) that triggers the webcam to take spooky, distorted pictures. We started brainstorming by asking ourselves what kinds of activities we would want our users to engage in at the photo booth without explicit instructions, and without them knowing exactly what they need to do, in order to create surprise and fun. I told Shivani about my experience playing with a mood ring the other day: as I put on the ring, the color of the stone changes depending on my finger temperature, and each color represents a mood. So I thought of implementing a wearable that outputs an individualized message -- a fortune or a curse -- depending on attributes of the person wearing it. Shivani instantly thought of the Sorting Hat from Harry Potter. That was a moment of excitement, when we realized we could have a fancy fortune-telling witch hat as the prop that triggers the camera at the photo booth.


W6 Interaction response

I had a lot of fun playing with the Keezy Classic recorder/looper and the Sampulator keyboard sampler. I like the Sampulator interface: it looks very neat and straightforward, visually mapping all the samples to keys on the keyboard, with percussion samples on the first two rows, melody on the third row, and human voice samples on the last row. There are a lot of possibilities in the sounds users can create, and the result can sound very rich because there are so many keys to play with. It also comes with a metronome/time signature and a recorder.

Similar to Sampulator, Keezy Classic triggers sound samples, but it runs on smartphones: instead of pressing keyboard keys, Keezy divides the phone screen into eight grid cells, and tapping each cell triggers a sample. In terms of sample size, there is a variety of sample packs, but each one can only hold eight samples at a time, as opposed to the greater range of sounds available from a full keyboard. An advantage of this, however, is that users can easily locate which cell triggers which sample and hit the samples they want with higher accuracy. Keezy also lets users record their own samples and map them onto the grid, so they can create their own simplified MIDI-controller-like pad for eight samples of their own, and make as many sample packs as they want -- a cool feature Sampulator doesn't have.

W6 Singing animal sampler

My inspiration for a singing animal sampler came from the YouTube/Vine sensation of animal voices singing pop songs with accurate pitches. The most typical and popular example is Gabe the Dog, who barks along to tons of hit songs, commercials, and movie soundtracks. In the same spirit, I decided to focus on sampling in music production and create an animal sampler, with a selection of different animal sounds pitched to a major scale. I found some very recognizable animal samples on freesound.org (dog, cat, cow, sheep, chicken, and crow) and trimmed the original clips so each represents one note (F) near the center of the C major scale, so that the animal sounds don't get distorted as much when the pitch shifts higher.
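One common way to get a scale out of a single sampled note in p5.sound is to change the playback rate (each semitone is a factor of 2^(1/12)). This is only a sketch of that idea with a placeholder sample path, not necessarily how my linked sketch does it:

```javascript
// Pitch a single sampled note (assumed to be F) across a major scale by changing
// the playback rate: each semitone up multiplies the rate by 2^(1/12).
let dogSample;

function preload() {
  dogSample = loadSound('samples/dog.wav'); // placeholder path; p5.sound assumed
}

// Semitone offsets of a major scale relative to the sampled note
const majorScale = [0, 2, 4, 5, 7, 9, 11, 12];

function playScaleDegree(degree) {
  const semitones = majorScale[degree];
  dogSample.rate(pow(2, semitones / 12)); // shift pitch by changing playback speed
  dogSample.play();
}
```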

I added a dropdown menu with createSelect() in the DOM to switch between the different animal sounds.

My biggest challenge, however, is to get the DOM element to interact with animal animation and sound, which I elaborated more on my ICM blog post.

https://youtu.be/m6F_yL2m_m8

To play: https://editor.p5js.org/ada10086/full/SJ8-14tj7

My code: https://editor.p5js.org/ada10086/sketches/SJ8-14tj7