Composer, electronic musician, improviser

Virtual Matrix Mixer (yes, in SuperCollider)

Screenshot of the Waz Matrix Mixer in action

Kane and I recently dropped $170 at Jameco on potentiometers, switches, diodes, project boards, and more in anticipation of several MuCo projects we have been planning.  The main project now, after some op-amp FAIL last night (the FAIL being Mimms’s op-amp.  Yes, there is a free version on the nets.  No, we will not help you torrent it illegally), is a classic 3×3 matrix mixer which we intend to use à la David Tudor to make feedback music of the most splendiferous nature.  As some of you may have noticed, I have been slightly obsessed with feedback of late, and for good reason: feedback, like Frosted Flakes, is better than good, it’s great.  It’s great to make, great to listen to, great to cover up the drunk, sleeping neighbor’s DVD menu music that runs for hours and hours after he’s passed out on the couch.

I plan to post a little “Fun with Feedback” post this weekend (maybe tomorrow), but I will jump the gun and get to results before I do.  In anticipation of the analog matrix mixer, I decided to spit in the eye of convention and model the analog device digitally first (make an analog of an analog, if you will) because I wanted to see what my results with the hardware might be.

This was an interesting experiment because it highlighted the reality that, while creating digital analogs of analog equipment may be useful on a basic, conceptual level, it breaks down completely when it comes to the actual implementation/realization of the object.  This may seem obvious to some of you (congratulations), but one wouldn’t suspect it given the tradition of modelling analog equipment in electronic music studios the world over.  Not to mention all of the digital synthesis software that models even its appearance.  (Yes, Reason, I’m looking at you… with disdain.)  I’ll make a long story short and say that approximately 2 minutes after I sat down with the idea of the matrix mixer in my head to start coding it up, I was conceptually far enough away from the analog instrument that, looking at my notes, one might not even guess it was supposed to be a simple 3×3 summing mixer.  This is partly because of the nature of programming itself, and partly because of the idiosyncrasies of any programming language.  If one were to mock up the 3×3 in Csound, SuperCollider, and ChucK, it would become very clear very quickly that one cannot think the same way about the same object when coding in different languages.  I now digest.  (yes, digest.)

After some headbanging and with some help from Kane and HJH (on Nabble) the Waz Matrix Mixer V.1 was realized last night.  The SC3 code is below so you can see how it is constructed.  The mixer is simple: it routes 3 input sources (in this case, either the built-in microphone or a sine oscillator) to 3 outputs each.   At the outputs is some processing, a delay line, distortion, etc.  The output from the processing can then be routed back into any of the inputs including itself, thus the feedback.
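The routing described above can be modeled as a weighted sum per output plus a feedback term per input.  Here is a conceptual sketch in plain Python (not the SC3 code below, and all the names are illustrative): the blue knobs are the 3×3 gain matrix, the red knobs scale the inputs, and the yellow knobs feed the previous pass’s outputs back in.

```python
# Conceptual model of a 3x3 summing matrix mixer (illustrative names,
# not the original SC3 code).  Each output is a weighted sum of the
# three inputs; each output can be fed back into its input next pass.
def mix_block(inputs, in_gain, routing, fb_gain, prev_outputs):
    # red knobs (in_gain) scale inputs; yellow knobs (fb_gain) add feedback
    fed = [g * x + f * p for g, x, f, p in
           zip(in_gain, inputs, fb_gain, prev_outputs)]
    # blue knobs: routing[i][j] is the gain from input j to output i
    return [sum(routing[i][j] * fed[j] for j in range(3)) for i in range(3)]

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
ins = [1.0, 0.5, 0.25]
# Unity input gain, identity routing, no feedback: a pass-through.
outs = mix_block(ins, [1, 1, 1], identity, [0, 0, 0], [0, 0, 0])
```

With any feedback gain above zero, each pass folds the previous outputs back into the inputs, which is the whole point of the instrument.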

In the following picture, the blue knobs represent the 3×3 matrix.  Each row routes its respective input to outs 1, 2, and 3 individually.  The red knobs control the input volume, and the yellow knobs control the amount of outs 1, 2, and 3 that are fed back into the chain.

janky gui -- needs work, but works...

As promised, here is the code (provided Scribd ever finishes processing it…)

Here is a recording with the mic as the input source.  I’m not actually doing anything with the mic; I’m just letting it hum and collect room noise and the output from the speaker, which is right next to it in the laptop.  The delay line’s delay-time parameter is being dynamically changed using the mouse position (x axis), which results in pitch-shifting.  This is responsible for the “glitching.”  Additionally, I am using the mouse position’s y axis to control the decay time (in seconds).  When the decay time is over 3, the processing synths begin a sometimes irreversible pattern of self-destruction.
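The runaway behavior has a tidy explanation.  SuperCollider’s comb filters define decay time as the time for echoes to fall by 60 dB, so the per-pass feedback coefficient is 0.001 raised to the power (delay ÷ decay).  Long decay times push that coefficient toward 1, and any extra gain elsewhere in the loop (the mixer’s feedback routing, for instance) multiplies it.  A quick sketch of the stability condition, with hypothetical helper names:

```python
# Per-pass feedback coefficient of a comb filter, using SC's convention
# that "decay time" is the time for echoes to decay by 60 dB (x1/1000).
def comb_coefficient(delay_s, decay_s):
    return 0.001 ** (delay_s / decay_s)

# The loop only dies away if the total per-pass gain stays below 1.
def loop_is_stable(delay_s, decay_s, extra_gain=1.0):
    return comb_coefficient(delay_s, decay_s) * extra_gain < 1.0

# A 0.2 s delay with a 3 s decay already keeps ~63% of the signal per
# pass; a little extra loop gain tips it into growth.
c = comb_coefficient(0.2, 3.0)
```

This is only a model of the isolated filter, not of the whole patch, but it shows why nudging the decay past a threshold can flip the system from decaying to self-destructing.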

Here is a recording of the sine oscillator inputs.  There are three sine tones around 440, 1000, and 1400 Hz respectively.  The rest of the processing is as described in the example above.

Filed under: Code, Current Projects, El MuCo, Music, SC3 - Code - Music - More


seventeen:thirteen is my doctoral dissertation from the Eastman School of Music. It is a composition for fifteen players to be performed outdoors in an urban environment for 30 minutes each day for 3 days. It employs coordinated movement and sound with audio and video recording, playback, and projection. The work employs different compositional styles for each of the three days, but all of the material is derived or informed by algorithms written in SuperCollider, a music programming language. More information can be found in the score. Please click full-screen for best viewing. If you would like a pdf copy, please feel free to contact me.

Filed under: Music

Mixer Feedback Music: 1204FX Improvisation 2


First, a warning (every good post should begin with one!):

Following any of the steps below to create feedback loops with mixers can harm your gear and more detrimentally, your ears.  The results are often unpredictable and almost always extremely loud.  The pulse waves created by these kinds of setups and heard in the recording below are very hard on the ear mechanism (as you will be able to tell by listening.)  Please take all precautions to limit the amplitude of your speakers and, if listening on headphones, to start with the volume very low and turn it up as needed.  If you plan to attempt the following setup or one like it, start with all volumes at the minimum and raise them once you know what your results are going to be.

Note: the piece begins very quietly, the first loud sound is around 1:26.


The following is a list of equipment used in the above improvisation.

  • Dell Latitude D620 (1.6 GHz, 1 GB RAM) running the latest PureDyne distribution
  • JACK and Ardour to record the improvisation
  • Behringer XENYX1204FX mixer for all sound generation
  • 4 1/4″ TS cables
  • 4 RCA cables
  • Headphones

Kane recently played a few recordings for me of experiments he had done with feedback systems created using his 1204 mixer.  The sounds were appealing and I thought it would be fun to see what it was like to make music with only a mixer for an instrument.  My 1204FX has on-board DSP that Kane’s model does not.  Normally, I do not use the processor at all, but for this exercise it was useful in adding variation to the signal flow and achieving a variety of sonic results.

Last night I experimented for about 2 hours with different routing schemes to get used to controlling the mixer as a sound-generator.  I recorded 8-10 tests and ended up with about 45 minutes of pretty good material which I may use at some point in the future.  I then recorded 1204-10-29 in one take, using only the 2-channel output from the mixer.  There is no additional material in the recording, nor any post-processing aside from normalization.  The following is the routing recipe I used.

Routing the 1204FX

The first pair of feedback loops was connected as follows:

Alt 3 output –> channel 1 input (trim at +60) –> sent to Alt 3-4
Alt 4 output –> channel 2 input (trim at +60) –> sent to Alt 3-4

The second pair of loops was connected like so:

Aux Send 2 –> channel 5/6 L (+4) –> Main Mix (no Alt 3-4) –> Aux Sends 1-2 alternately as desired
Aux Send 1 –> channel 7/8 R (+4) –> Main Mix (no Alt 3-4) –> Aux Sends 1-2 alternately as desired

Aux Sends 1-2 at +15
Aux Returns at +5 to +10
Aux Return 1 to Aux Send 1 at +5

The reverberation heard is the built-in “Chapel” reverberation, program 19 on the mixer.  I used the Control Room R & L output channels to route the audio to my laptop for recording.  I monitored the sound using the headphone jack on the mixer with the volume as near to zero as I could get it.  (At some points this was not enough and I had to quickly pull the phones off.)

Useful parameters for making music

There are many ways to achieve sonic variation within the mixer.  The controls I used were the “pre” buttons for each channel, which control signal flow to the main mix and the aux sends, the faders for each channel plus the ALT 3-4 and Main Mix stereo faders, the “ALT 3-4” buttons, the AUX 1-2 faders, the pan controls, and the 3-band EQ for each channel.  (Is that everything, you say?  Almost; I didn’t touch the trims, the low cuts, or the aux send knobs below the DSP area.)  The controls I used the most were the volume faders and the 3-band EQs.  All of the frequency variation (thumping lows to screaming frequencies around 12 kHz) was accomplished by turning down two of the three EQ bands, and playing with the remaining band while simultaneously working the volume fader for that channel.

If you are interested in experimenting with a mixer like this, trial and error will be your best guide.  Try making the channel settings similar for all channels and then changing them one by one to clearly hear the results.  Or try using only 1 or 2 of the channels and later adding the rest one by one.  Most of all, play with the levels a lot: I noticed that in several instances minute changes to a single channel produced startling results.  Also get to know your routing: changing the ALT 3-4 stereo faders will affect all of the channels using the ALT 3-4 pair, while playing with the gain of an individual channel will only affect other channels that share its signal path.  By bypassing the aux sends (the DSP) you can have two layers of sound, one processed and the other dry (you can hear this clearly in my piece), so experiment with foreground and background layers.

Here, again, for your edification is my improvisation… I know you don’t want to scroll all the way back to the top of the page.


Filed under: Current Projects, Music, Phase 1

Lost Voices of Blasphemous Friends

In 2006 I both completed recording for, and abandoned, a sound installation for 8 speakers.  The work, originally titled Voices of Blasphemous Friends, was intended for installation at a festival to which I was never invited.  I recently stumbled upon the recordings, mostly in a state of discombobulation, and the 5 movements/excerpts I used to submit to the festival.  Perhaps it is the magic of the passing of time, perhaps it was hearing the voices of far away friends, but I was immediately drawn to these recordings.  I here present them unedited and in the state in which I left them 4 years ago.  I will now refer to the work collectively as the Lost Voices of Blasphemous Friends (for obvious reasons).

To create the piece, I tricked eight friends into writing two questions to me via email under the assumption that at some point that year I would invite them up to Rochester (where I was living at the time) to record them asking me the questions.  I then collected the questions they sent me and put them all together, compiling a list of 18 questions (including two of my own) and a few monologues I asked select people to write.  When the “interviewer” showed up to quiz me, I informed them that in reality they would be the ones answering the questions, that there were not two but 18 questions, and that additionally they would be required to sing, hum, and perform other assorted tasks as required by the questions.  After I recorded them answering their own, and others’, questions, I had each one ask me the questions as well.  I did not prepare any answers.

The questions themselves ranged from “Did you have anything to do with the bombing?” to the classy question “who do you think says the word ‘fuck’ the best?”  There is also a question of epic proportion that takes nearly a minute to ask and can be heard below in the piece Question #9.  Originally, the work was to take the form of “a whole and then parts.”  By this I mean that the questions and answers were to be presented unedited, one set per speaker, followed by “movements”: edited portions of the set of whole recordings, which would take the form of canons, dance music, and more, using only material from the recordings, with no alteration of pitch or duration, nor processing that would mask the original sound.

For much more detail into the philosophical underpinnings of the work I will have to dig (provided something good doesn’t come on the TV.)  Until then, please have a listen to the short pieces/excerpts below.  I hope you will enjoy them.


Lost Voices of Blasphemous Friends

excerpt of 1+ hour of questions sounding simultaneously, spatialized here to imitate 8 speakers in a rectangular arrangement



monologue written and read by Marc Bollmann


Question #9

question written by Solomon Guhl-Miller, asked by Scott Petersen (8 different times)



monologue written and read by Matt Barber


Breakdown : or I am the Destructicon

voices, beat-boxing, and knee-slapping by Ethan Borshansky, Gabriela Ponce, Scott Petersen, and others

Filed under: Music, Phase 1

Audio/Video: Uncertainty Music – April 24th

Here is some low-quality video from our show at The Big Room in New Haven. More details will follow. Below is audio only.

Filed under: El MuCo, Music, Phase 1, SC3 - Code - Music - More

Audio/Video: Hartford Artspace, April 25th

The above is some low-quality video from our most recent show at Hartford’s Artspace. Below is audio only.

The performance space, as you will be able to tell in the recording, was cavernous.  I estimate the ceiling to be between 16 and 20 feet.  The room was exceptionally live and reverberant.  Our approach is always to improvise, in the most genuine sense.  We do not discuss what we are going to do beforehand, nor do we really “practice.”  Once we were in the space, we knew that we had to incorporate a lot of space (silence) into our improv to keep the texture from becoming too muddy.  The work has a natural ramp shape as we move from sporadic and spacious to dense and thick textures at the end.

Hardware used: mostly the Casio SA-2 with a little bit of the growler.  We set up the mixer so that the mic input was routed to both computers, and our computer audio was routed to each other as well.  (This creates great cross-talk possibilities, and is mostly how we work now.)  Kane was running his granular patch and, at the beginning, it is easy to hear him grabbing bits of what I am doing with the mic.  I played the Casio a bit, banged on the mic a lot, as usual, but started with cups, forks, and my hands.  The household objects were provided by Juraj Kojs, who performed in a later set.

My code was primarily made up of recursive Ndefs (sorry, SC3-specific stuff here…) which take the incoming sound, delay it using comb filters, feed a certain amount back into the signal, and send a certain amount out to a bank of “effect” Ndefs.  By dynamically changing the delay length of the comb filters while the synth runs, pitch-shifting occurs, which varies the resulting feedback loop’s overall spectrum.
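The core recursion inside each of those Ndefs is a feedback comb filter: y[n] = x[n] + g · y[n − d].  A minimal sketch in plain Python (not the performance code; names are illustrative):

```python
# Feedback comb filter: y[n] = x[n] + g * y[n - d].  In the live patch,
# sweeping the delay d while the synth runs moves the read point through
# the buffer, which is what produces the Doppler-style pitch-shifting.
def comb(x, d, g):
    y = [0.0] * len(x)
    for n in range(len(x)):
        fb = y[n - d] if n >= d else 0.0
        y[n] = x[n] + g * fb
    return y

# A single impulse yields echoes of g, g^2, g^3, ... every d samples.
out = comb([1.0] + [0.0] * 9, 3, 0.5)
```

With |g| < 1 the echoes die away; route some of the output back to the input of another such filter (as the mic-to-speaker path did on stage) and the whole system becomes one big loop.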

Filed under: El MuCo, Music, Phase 1, SC3 - Code - Music - More

G O I N G S O N : L O C A L (ISH)

Art of Fritz Horstman
Music of Brian Kane
Hartford Phase Shift
Hartford Sound Alliance
Art of Philip Lique
Music of Matt Sargeant
Art of Heather Strycharz
Uncertainty Music Series

My Other Awesome Sites [•_•]

375 Aural Assaults!
About me!
MySpace!
Google+!
My (soon-to-be) Company!



P O S T E R S !







INI new haven

Handmade instruments by Scott Petersen and Brian Kane at Artspace New Haven