diaphragma for chaosflöte is a solo improvisation and my song of the breath, one that undulates between uncertainty, persistence, and vulnerability, while musing upon Ernest Becker's frank observation of the human condition: "This is the paradox: he is out of nature and hopelessly in it; he is dual, up in the stars and yet housed in a heart-pumping, breath-gasping body that once belonged to a fish and still carries the gill-marks to prove it." (The Denial of Death).

The main impetus behind this 28-minute improvisation set was to reflect upon the breathing complications I experienced over the course of six months and to somehow express the "meditative" aspect I found in the breathing exercises I practiced every evening, juxtaposed with a lurking paranoia and uncertainty about when this slow-developing illness/recovery would end. I wanted to explore the 'translation' of this breathing experience, which was somehow a stark reminder of my human mortality, into a machine-reinterpretation (in the form of visuals/electronic sound), which at times breathes with me and at other times continues to play without breathing at all.

sound/visuals/performance by Melody Chua

3.6 diaphragma (Sep. 2020)

Similar to A->B, I created diaphragma with primarily live-generated elements, and the framework was not constrained to any fixed length. I took this piece as a chance to completely rethink my paradigm of the improvisation machine. I rewrote the program so that all audio would be managed and generated in Max, as opposed to sending OSC instructions to SuperCollider. The number of "voices" AIYA had control over was reduced to 5, as the previous version with 8 voices tended to make the texture of AIYA too dense, which diminished the ability to discern AIYA's actual outputs. Furthermore, the size of each real-time recorded buffer assigned to each voice was made to be of variable length (as opposed to the fixed 10 seconds from previous versions) to allow for a potentially greater variety of source material for the machine to choose from. I also organized the machine patch into modular components (bpatchers) that could be reused and rearranged depending on the performance setting.
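The restructured voice architecture described above (five voices instead of eight, each with a variable-length buffer) can be sketched in code. Since the actual system is a chain of Max bpatchers rather than text-based code, the following Python sketch is only a hypothetical rendering of the logic; the class and method names (`Voice`, `VoiceManager`, `record`, `choose_source`) are illustrative and not taken from the patch itself.

```python
import random

class Voice:
    """One of AIYA's playback voices, holding a variable-length buffer.

    Hypothetical sketch: in the real system each voice is a Max
    bpatcher module; names here are illustrative only.
    """
    def __init__(self, max_seconds=30.0):
        self.max_seconds = max_seconds  # assumed upper bound, not from the text
        self.buffer = []                # placeholder for recorded audio data
        self.length = 0.0               # current buffer length in seconds

    def record(self, samples, seconds):
        # Variable-length recording, unlike the fixed 10-second
        # buffers of the earlier versions.
        self.length = min(seconds, self.max_seconds)
        self.buffer = samples

class VoiceManager:
    """Manages the reduced set of 5 voices (down from 8)."""
    def __init__(self, n_voices=5):
        self.voices = [Voice() for _ in range(n_voices)]

    def choose_source(self):
        # Pick among voices that actually hold material, giving the
        # machine a varied pool of source material to play back.
        recorded = [v for v in self.voices if v.length > 0]
        return random.choice(recorded) if recorded else None
```

The sparser five-voice texture and variable buffer lengths together address the problem named above: with fewer, more varied voices, AIYA's actual outputs remain discernible.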

Framing: Between composition and improvisation
Although the sonic and visual content of this work was generated live and reacted to my actions on the stage, I still felt a lingering sense of “composition” in the way I had prepared the elements for the improvisation, and the work itself felt like something in between the two. The sonic and visual elements of the work, although reacting in real-time to my actions on the stage, were still, in their fundamental state, designed before ever reaching the stage. Decisions about the interaction between my actions on the stage and their effect on the real-time audiovisual elements had to be taken in advance, and I extensively tested and refined these interactions before the performance. This process of creating the improvisation system, on both the technical and artistic level, speaks to me as a form of composition. Indeed, composition and improvisation have long shared the same space in this creation process. Musician and educator Richard Dudas addresses this exact preoccupation: “But just how close to being completely improvised can interactive electronic music become when we are by nature dealing with the finite limits inherent in our hardware and computational tools?” (Dudas, 2010, p. 29).

Dudas makes a slight exit from the composition-improvisation dichotomy and prefers to describe the process of creating an interactive performance work as “composing an ‘instrument’ in the form of a pre-designed and pre-defined interactive musical system,” which is “designed to evolve or metamorphose in the hands of a competent performer, in the way that a performer of an acoustic instrument can coax a multitude of seemingly different sounds out of their instrument” (Dudas, 2010, p. 29). I find this a useful perspective when describing the nature of my interactive performance works. Yet it also raises an additional question: what delineates composition, improvisation, and instrument?

I revisited my previous implementation of an on/off switch for the improvisation machine and thought about how I could address this mechanism in a different way. Instead of a switch with only two settings (on/off), I created a slider that controlled the number of “voices” active in the improvisation machine at a given time. In some sense it was still a switch, since it could turn the improvisation machine on and off entirely, but it also gave me much finer control over the machine's presence on the stage.
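The slider-versus-switch idea amounts to mapping a continuous control value onto a count of active voices. The following is a minimal sketch of that mapping, assuming a normalized 0.0–1.0 slider and five voices; the function names and the linear mapping are my own illustrative assumptions, not details taken from the Max patch.

```python
def active_voice_count(slider, n_voices=5):
    """Map a normalized slider position (0.0-1.0) to a number of
    active voices. Hypothetical linear mapping; the actual patch
    may scale the control differently."""
    slider = max(0.0, min(1.0, slider))   # clamp to the slider's range
    return round(slider * n_voices)

def voice_states(slider, n_voices=5):
    # One on/off flag per voice. The slider generalizes the old
    # binary switch: 0.0 silences the machine entirely, 1.0
    # activates all five voices, and values in between give the
    # finer gradations of presence described in the text.
    n = active_voice_count(slider, n_voices)
    return [i < n for i in range(n_voices)]
```

At its extremes this behaves exactly like the earlier on/off switch, which is why the slider can be understood as a superset of it rather than a replacement.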

Because of this finer control, I felt even less like the improvisation machine was actually an improvising partner, especially when compared to the very first prototype I made for the MIDI keyboard. The machine, while structurally organized more cleanly, functioned more as an elaborate sound-effect process than anything else. I even contemplated characterizing this work as solely for Chaosflöte and electronics, without the improvisation machine. However, I felt it was important to address this part of the improvisation machine’s evolution and illustrate what an impact the obsession over control would have on the perceived identity and function of the machine within the performance.

I found this obsession over control somewhat contradictory, even ironic, when juxtaposed with my original narrative intention for the piece: expressing the vulnerability of the breathing issues I had experienced over the course of the previous summer. Even though I was well on the road to recovery, it was still exhausting to perform for 28 minutes non-stop. I wanted to show this vulnerability through the timbrally labored qualities of my breath and the quivering of my warbled voice as musical motifs in the improvisation. I suppose I was not fully vulnerable though, because I brought the machine with me on the stage to counterbalance this vulnerability. With its microphone and sonic "voices" whose activation I could control with the aforementioned slider, it was something that could fill the silence and the space where I play.

(above) signal flow of this version of AIYA, as used in bad decisions, A->B, and diaphragma. The main technical difference between this version and the previous versions is the restructuring of the AIYA Max patch.
(above) the new "voice" manager for AIYA, where instead of 8 different voices, there are only 5. Each module in this chain is in the form of a bpatcher, ready to be duplicated or reused as needed, making it fairly easy to tweak the patch between performances.