3.2 AIYA for MIDI keyboard (April 2019)

I created the first version of AIYA for a MIDI keyboard. Its software processed the following information:
- MIDI note number/pitch
- key velocity
- amplitude
- spectral centroid
- spectral flatness
- the average value of these parameters over a given time period.
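
In the prototype this analysis lived entirely in the Max patch; purely as an illustration, the sketch below shows how comparable measurements could be made in SuperCollider, the environment AIYA later uses for playback. The names, rates, and window sizes are illustrative assumptions rather than values from the actual patch, and the sketch assumes the keyboard's audio reaches the computer's first input.

(
// Illustrative sketch only -- assumes the server is booted (s.boot) and the
// keyboard's audio is available on hardware input 0.
SynthDef(\aiyaAnalysis, {
    var in = SoundIn.ar(0);
    var fft = FFT(LocalBuf(2048), in);
    var amp = Amplitude.kr(in);
    var centroid = SpecCentroid.kr(fft);      // spectral centroid in Hz
    var flatness = SpecFlatness.kr(fft);      // 0 = pure tone, ~1 = noise
    // report the audio features to the language ten times per second
    SendReply.kr(Impulse.kr(10), '/aiya/audioFeatures', [amp, centroid, flatness]);
}).add;

// receive and print the audio features
OSCdef(\aiyaAudio, { |msg|
    var amp = msg[3], centroid = msg[4], flatness = msg[5];
    [amp, centroid, flatness].postln;
}, '/aiya/audioFeatures');

// track MIDI pitch and velocity, keeping a short history for the running average
~recentVelocities = List[];
MIDIClient.init;
MIDIIn.connectAll;
MIDIdef.noteOn(\aiyaNotes, { |vel, note|
    ~recentVelocities.add(vel);
    if(~recentVelocities.size > 32) { ~recentVelocities.removeAt(0) };
    [note, vel, ~recentVelocities.mean].postln;   // pitch, velocity, average keystroke force
});
)

Synth(\aiyaAnalysis);   // start the analysis once the SynthDef has reached the server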

This form of AIYA functioned first as a sound analysis tool: a way for me to organize incoming performance data and reflect on the parameters I would be working with in improvisations with electronics. Below is an overview of the signal flow for this version of AIYA.

Is the improvisation machine more than the sum of its parts?
Because the programmed interaction rules were rather explicit, AIYA functioned more like a program used for live sound processing than like an "intelligent" improvisation partner. For me to consider the machine an improvisation partner, it must recreate, to some extent, the mechanisms involved in human-to-human improvised music performance, as well as the relationships that human-to-human improvisation affords. In my opinion, hard-coded improvisation rules are not in themselves enough to make the machine resemble an improvisation partner.

Yet perhaps it was the sheer number of these interaction rules, or their combination, that contributed to my impression that working with AIYA was no longer like working with any ordinary patch I would create in MaxMSP. Although I had programmed these explicit improvisation rules myself, I was not actively trying to remember them when playing with the machine. I was simply reacting to and improvising with the sounds it generated, which made many of its actions feel unpredictable to me. This unpredictability contributed to my perception of the machine as more autonomous and musically interesting than a MaxMSP patch in which I would consciously control every single parameter, and to my perception of it as having agency in the musical improvisation.

AIYA eventually evolved into a program that would also output sound back to me, in the form of "voices" that derived their sound from real-time recordings of the connected MIDI keyboard.

As seen on the left, this version of AIYA was created as a Max patch that would control the playback of 8 processed sound buffers running in SuperCollider.

Ten trigger rules can be seen, which triggered the playback of the 8 voices (p1, p2, etc.); a sketch of how such rules could be wired up follows the list:

Trigger p1: Play really slowly
Trigger p2: Play nonviolently
Trigger p3: Play fast
Trigger p4: Play slowly
Trigger p5: Play fast and violently (hard MIDI keystrokes), or with a dirty sound or with chords
Trigger p6: Play high notes or with a clean sound
Trigger p7: Play really violently
Trigger p8: Play quietly
Trigger all OFF: Don't play for a few seconds
Trigger p7, p8 off: This is automated at fixed intervals
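
The actual rule logic lives in the Max patch; the sketch below only illustrates how a couple of these rules could be wired to the SuperCollider buffer-playback voices. The OSC address, argument order, threshold values, and names (\aiyaVoice, ~voiceBuffers) are illustrative assumptions rather than the ones used in the patch, which is assumed here to forward its running statistics to sclang (default port 57120) as "/aiya/stats <noteRate> <avgVelocity>".

(
// Illustrative sketch only -- assumes the server is booted (s.boot).
// one "voice": a one-shot playback of a processed recording of the keyboard
SynthDef(\aiyaVoice, { |buf, rate = 1, amp = 0.5|
    var sig = PlayBuf.ar(1, buf, rate * BufRateScale.kr(buf), doneAction: 2);
    Out.ar(0, sig * amp ! 2);
}).add;

// 8 buffers assumed to hold real-time recordings of the keyboard (recording not shown)
~voiceBuffers = 8.collect { Buffer.alloc(s, (s.sampleRate * 4).asInteger) };

OSCdef(\aiyaTriggers, { |msg|
    var noteRate = msg[1];      // notes per second over a short window
    var avgVelocity = msg[2];   // the "average violence" of recent keystrokes
    // Trigger p3, "Play fast": fire voice 3 when the note density is high
    if(noteRate > 6) { Synth(\aiyaVoice, [\buf, ~voiceBuffers[2]]) };
    // Trigger p7, "Play really violently": fire voice 7 on very hard keystrokes
    if(avgVelocity > 110) { Synth(\aiyaVoice, [\buf, ~voiceBuffers[6], \rate, 0.5]) };
}, '/aiya/stats');
)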

(above) A general overview of the main components of the very first prototype of AIYA, for MIDI keyboard. "sound stats," "pitch collector," "cent/flat," and "hit monitor" all refer to the sound analysis functions of this version of AIYA (I regret the name choice of "Average Violence"... this metric was only meant to track the average force of the musician's MIDI keystrokes).

Making Connections
I had my colleagues in the M.A. Transdisciplinary Studies program try out this version of AIYA in a non-performance setting. I noticed two interesting aspects:

1. For the participants, the experience of “playing” with the machine was less about actually playing and more about figuring out how their actions were mapped to the output of the machine.
2. There was a general anxiety about not being able to control the machine (not being able to stop it).

I would see these two issues reappear throughout my experience with creating subsequent versions of the improvisation machine.
