Sway - live electronic processing environment
Sway is the name I have given to my live processing environment for any number of musicians. It is an example of an interactive music system, like George Lewis's Voyager or Pauline Oliveros's Expanded Instrument System. It differs from those systems in that it is an autonomous system focused on processing live audio input. Any number of inputs can be processed; the only limitations are the host computer's processor speed, memory, and number of available microphone inputs. The goal of the system is to create a dynamic, constantly shifting electro-acoustic environment around a group of musicians, one that embodies my compositional aesthetics around live electronic processing. The practical result is that I can return to performing on string bass and leave the processing role entirely to the computer. In addition, future notated works of mine will use this system to generate the electronic components of the pieces.
The system is written in the SuperCollider programming environment, which I've been using for the past 14 years (as of 2018). The system conducts audio analysis on each channel to determine the type of processing to be used as well as the exact moment-to-moment parameters of the selected processing type. Each musician's amplitude, density, and pitch clarity are tracked and determine where that musician is assigned on a grid of four processing possibilities. For example, at the beginning of a performance each musician is assigned to the center of the grid, which denotes no processing. If the musician's playing increases in density with high pitch clarity, that moves the musician into the upper right quadrant of the grid (quadrant I) and initiates a delay processing effect. The average amplitude, density, and pitch clarity over the past second determine the values of parameters such as the delay time and feedback level. The result is rapid, responsive change to the effects parameters in real time. Expanded to several musicians, the sheer number of control signals generated would be far more than a single performer controlling the live processing could manage. This analysis-driven processing technique allows massive processing collages to remain musically dynamic.
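The grid-assignment and parameter-mapping logic described above can be sketched roughly as follows. This is an illustrative Python model, not the actual SuperCollider implementation; the axis orientation (density vertical, pitch clarity horizontal), the thresholds, and the parameter ranges are all assumptions for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class Features:
    """Averaged analysis data for one input channel over the past second.
    Values are normalized to 0..1; the real system derives them from
    SuperCollider's audio-analysis tools."""
    amplitude: float
    density: float
    pitch_clarity: float

def assign_quadrant(f: Features, center_band: float = 0.1) -> str:
    """Map a musician onto the four-quadrant processing grid.
    Quadrant I (upper right) = high density, high pitch clarity,
    per the description above; the numbering of II-IV and the size
    of the neutral center zone are hypothetical."""
    # Neutral playing keeps the musician at the grid's center: no processing.
    if abs(f.density - 0.5) < center_band and abs(f.pitch_clarity - 0.5) < center_band:
        return "center"
    if f.density >= 0.5:
        return "I" if f.pitch_clarity >= 0.5 else "II"
    return "IV" if f.pitch_clarity >= 0.5 else "III"

def delay_params(f: Features) -> dict:
    """Derive quadrant-I delay parameters from the averaged features.
    These mappings and ranges are invented for illustration."""
    return {
        "delay_time": 0.05 + 1.95 * (1.0 - f.density),  # denser playing -> shorter delays
        "feedback": min(0.9, f.amplitude * 0.9),        # louder playing -> more feedback
    }
```

Because the features are re-averaged every second, calling `assign_quadrant` and `delay_params` on each update yields the moment-to-moment parameter changes the text describes.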
Over the course of the performance, the amount of time each musician spends in each quadrant is tracked; once the time spent in one area reaches a threshold, a change is triggered in that quadrant's processing. This ensures that each musician's processing grid stays fresh and that each grid changes individually over the course of the performance. The system also tracks what is happening across all of the channels as a whole. For example, it can tell whether only one person is playing at the moment or whether all the musicians are playing above their amplitude thresholds. When the system catches one of these instances, it enacts changes across the whole system: mapping all channels to process a single musician, changing each musician's processing grid to a new setting, or turning off processing altogether. These are a few of a number of possibilities, and they are an attempt to ensure a unique set of processing results for each performance.
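The dwell-time tracking and ensemble-level checks might be modeled like this. Again, this is a hypothetical Python sketch: the class, the threshold values, and the state names are invented for illustration, and the real system runs in SuperCollider.

```python
from collections import defaultdict

QUADRANT_DWELL_THRESHOLD = 60.0  # seconds before a quadrant's processing changes; illustrative
AMP_THRESHOLD = 0.1              # "is playing" amplitude gate; illustrative

class PerformanceTracker:
    """Accumulates per-quadrant dwell time for each musician and
    classifies the ensemble as a whole."""

    def __init__(self, n_musicians: int):
        # One dwell-time table per musician, keyed by quadrant name.
        self.dwell = [defaultdict(float) for _ in range(n_musicians)]

    def tick(self, musician: int, quadrant: str, dt: float) -> bool:
        """Add dt seconds spent in `quadrant`; return True when that
        quadrant's processing should be swapped for a fresh setting."""
        self.dwell[musician][quadrant] += dt
        if self.dwell[musician][quadrant] >= QUADRANT_DWELL_THRESHOLD:
            self.dwell[musician][quadrant] = 0.0  # reset after triggering a change
            return True
        return False

    @staticmethod
    def ensemble_state(amplitudes: list[float]) -> str:
        """Classify the whole group: one player soloing, everyone
        playing, or the default mixed state."""
        active = [a > AMP_THRESHOLD for a in amplitudes]
        if sum(active) == 1:
            return "solo"   # e.g. map all channels to process that one musician
        if all(active):
            return "tutti"  # e.g. reassign every grid, or turn processing off entirely
        return "mixed"
```

A "solo" or "tutti" result from `ensemble_state` is the kind of global condition that, in the text above, triggers system-wide changes such as processing a single musician on all channels.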
This project has been made possible in part with the support of the CT Department of Economic and Community Development and the CT Office of the Arts.