
Saturday, March 28, 2015

"This Is Just One Voice" Performed March 27th 2015 at Studio 7

After many days of programming and practicing (50+ hours altogether), I designed and learned to play a new instrument in Max/MSP. Its debut performance took place on March 27th 2015 at Studio 7. Thank you to all who came out to the performance. In this post, I will share the recording of the performance and discuss the design of the instrument.

First of all, here is the performance:


The instrument's design is relatively straightforward from a macroscopic perspective. My voice enters the computer through a microphone, is processed by three main sections, and is then distributed to four speakers which surround the audience.
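For readers who like to think in code, here is a rough sketch of that routing, written in Python rather than Max/MSP. The function names, block size, parallel topology, and fixed per-speaker gains are my own simplifications for illustration; they are not the actual patch.

```python
import numpy as np

N_SPEAKERS = 4  # the audience sits inside this square of speakers

def process_block(mic_block, stages, speaker_gains):
    """Route one block of microphone samples through the three stages and
    fan the mix out to four channels.  `stages` holds placeholder callables
    standing in for the real processors; the per-grain speaker choice of the
    granular delay is simplified here to fixed per-channel gains."""
    loops   = stages["loops"](mic_block)        # stage 1: live recording / loop playback
    shifted = stages["shift_delay"](mic_block)  # stage 2: frequency shifted delay with feedback
    grains  = stages["granular"](mic_block)     # stage 3: granular delay
    mix = loops + shifted + grains
    return [gain * mix for gain in speaker_gains]  # one output array per speaker

# e.g. with trivial stand-ins for the processors:
# out = process_block(np.zeros(512),
#                     {"loops": lambda b: b,
#                      "shift_delay": lambda b: 0.5 * b,
#                      "granular": lambda b: b},
#                     speaker_gains=[0.25, 0.25, 0.25, 0.25])
```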



The first stage, labelled "Live Recording and Loop Playback", is an Ableton Live set controlled by a grid controller. At this stage I am able to record loops such as the opening "this is just one voice", the singing in the middle, and the final "I invited people to abuse my gravity". This is the only processor using premade software. When I started the project, I imagined that I would be recording and rearranging words and sentences on the fly to make new poetry from one recorded original. As I started practicing with the instrument, however, I realized that this was fairly uninspiring: simply speaking the derived poem would carry the same semantic content, with the added prosody of my live voice, and it was nightmarishly difficult to keep track of which word was associated with which button on my grid controller. Ultimately, I think the performance was improved by my decision not to use this stage of the instrument extensively.
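Just to give a flavour of what this stage does, here is a minimal loop record/playback class in Python. It is emphatically not how Ableton Live works internally, and the names are mine; it only illustrates the record-then-loop idea that the grid controller triggers.

```python
import numpy as np

class Looper:
    """Minimal loop record/playback: hold a button to record, release to
    close the loop and start it playing back repeatedly."""
    def __init__(self):
        self.loop = None       # the finished loop, once recorded
        self.recording = []    # blocks captured while recording
        self.pos = 0           # playback position within the loop

    def record(self, block):
        """Append an incoming block while the record button is held."""
        self.recording.append(block)

    def stop_recording(self):
        """Close the loop and rewind playback to the top."""
        self.loop = np.concatenate(self.recording)
        self.recording = []
        self.pos = 0

    def play(self, block_size):
        """Return the next block of the loop, wrapping around at the end."""
        if self.loop is None:
            return np.zeros(block_size)
        idx = (self.pos + np.arange(block_size)) % len(self.loop)
        self.pos = (self.pos + block_size) % len(self.loop)
        return self.loop[idx]
```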

The second stage is labelled "Frequency Shifted Delay with Feedback". This processor has a very dynamic, gestural sound, and adds an inharmonic electronic quality to my voice. It's controlled by a QuNexus keyboard, which sends three streams of data to the computer: which note I press, how much pressure I am applying to each note, and the position of my finger on the key. By varying the way I touch a key, I can elicit sharp attacks or steady drones from this stage. As with all stages of the instrument, however, it doesn't make any sound unless I speak or sing into the microphone.
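For the curious, here is a rough offline sketch of one common way to build this kind of processor: a single-sideband frequency shifter sitting inside a delay's feedback path, so each repeat is shifted a little further and spirals away from the original pitch. It's written in Python with NumPy and SciPy rather than Max/MSP, and the shift amount, delay time, and feedback value are invented placeholders; in the instrument itself, parameters like these are the sort of thing the QuNexus data streams get mapped to.

```python
import numpy as np
from scipy.signal import hilbert

def freq_shift(x, shift_hz, sr):
    """Single-sideband frequency shift of a 1-D NumPy array of samples:
    every partial moves by the same number of Hz, which is what gives the
    inharmonic 'electronic' colour."""
    t = np.arange(len(x)) / sr
    return np.real(hilbert(x) * np.exp(2j * np.pi * shift_hz * t))

def shifted_delay(x, sr, delay_s=0.3, shift_hz=90.0, feedback=0.6, repeats=8):
    """Unrolled frequency-shifted delay with feedback: each echo is delayed,
    attenuated, and shifted again, so successive repeats drift further from
    the original pitch.  All parameter values are placeholders."""
    d = int(delay_s * sr)
    out = np.zeros(len(x) + repeats * d)
    out[:len(x)] += x
    echo = x.copy()
    for i in range(1, repeats + 1):
        echo = feedback * freq_shift(echo, shift_hz, sr)
        out[i * d : i * d + len(echo)] += echo
    return out
```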

The third stage is the granular delay. This complex processor chops the incoming stream of vocal sound from the microphone into tiny fragments, called grains, each of which has slightly different properties that are stochastically generated within ranges set on my tablet computer. I used this processor in the performance to generate a variety of textures from my voice. The sound of the line "when many voices speak at once" near the beginning of the performance is a typical granular sound effect. One of the parameters of each grain is which speaker to play it from; with four speakers to choose from, the output of the granular delay really enveloped the audience, who sat in the middle of them.
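Here is a toy version of the idea in Python. Each grain's length, delay, loudness, and output speaker are drawn at random from ranges, echoing the ranges set on the tablet; the particular parameters and ranges below are invented for illustration and do not correspond to the actual patch.

```python
import numpy as np

def granular_delay(x, sr, n_grains=200, grain_ms=(20, 120),
                   max_delay_s=1.0, n_speakers=4, seed=None):
    """Toy granular delay: scatter short, windowed fragments ('grains') of
    the input across time and across four output channels.  Assumes x is a
    1-D NumPy array longer than the longest grain."""
    rng = np.random.default_rng(seed)
    out = np.zeros((n_speakers, len(x) + int(max_delay_s * sr)))
    for _ in range(n_grains):
        length = int(rng.uniform(*grain_ms) * sr / 1000)   # grain duration in samples
        start  = rng.integers(0, len(x) - length)          # where in the input to read
        delay  = int(rng.uniform(0, max_delay_s) * sr)     # how long to wait before playback
        amp    = rng.uniform(0.2, 1.0)                     # grain loudness
        ch     = rng.integers(0, n_speakers)               # which speaker it plays from
        grain  = x[start:start + length] * np.hanning(length) * amp
        out[ch, start + delay : start + delay + length] += grain
    return out
```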


Finally, stages two and three form a feedback loop, each one feeding into the other by an amount set on my tablet computer. I didn't use this feature of my instrument much in the performance, but it offers some interesting possibilities for iteratively processing the output of one processor with the other.
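In code terms, that cross-feedback might look something like the block-by-block sketch below, again in Python with placeholder processors; the fb_* amounts stand in for the two values set on the tablet, and the one-block delay in the loop is simply what keeps the feedback computable in this sketch.

```python
import numpy as np

def run_with_cross_feedback(mic_blocks, stage2, stage3,
                            fb_2_to_3=0.3, fb_3_to_2=0.3):
    """Mix a fraction of each processor's previous output block into the
    other processor's next input block.  `stage2` and `stage3` are
    placeholder callables standing in for the two processors."""
    prev2 = prev3 = None
    outputs = []
    for mic in mic_blocks:
        in2 = mic if prev3 is None else mic + fb_3_to_2 * prev3
        in3 = mic if prev2 is None else mic + fb_2_to_3 * prev2
        prev2, prev3 = stage2(in2), stage3(in3)
        outputs.append(prev2 + prev3)
    return outputs

# e.g. with trivial stand-ins for the processors:
# out = run_with_cross_feedback([np.zeros(512)] * 10,
#                               stage2=lambda b: 0.5 * b,
#                               stage3=lambda b: b[::-1])
```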


The first performance with this instrument was a success, and I look forward to continuing its development. Both of the processors I programmed have room for improvement without changing their function considerably, and there is also the possibility of adding more processors in the future. Programming aside, I have a lot of room to hone my playing technique with this instrument. I also hope to use it in collaboration with other vocalists and poets, and potentially with instrumentalists and other laptop musicians. While I developed the instrument intending to use it with my own voice, its design is such that it could operate with any audio signal, be it a mic'd acoustic source or any other electronic source.

Thank you to Thomas Christie for sharing the March blog with me, thank you to Studio 7 for welcoming me into such a great performance space, and thank you so much to the FARR for the opportunity to work on this project and document it on this blog. It has been a joy to discover Max/MSP during this project, which I was only able to afford thanks to the residency honorarium. It was also fascinating to interact with the Reading Room users to develop the poetry, which was one of the most important aspects of the performance at Studio 7. I couldn't have accomplished any of this without the Reading Room.

My gratitude,

Travis West