The improvisation machine is an elusive figure. It has no universally agreed-upon form; instead it exists in continuous fluidity, shifting between configurations, versions, functions, and perceptions, a perpetual flux that, as Hayles argues, all mindbodies adopt, whether of flesh or metal (Hayles 2002). But what is unique about the machine improviser is that, in its interactions with others, it can be symbolically treated as a chamber musician and an artistic companion, becoming "personified" in this sense: a personified entity without a fixed, personified form to refer to. It exists in constant tension between being recognized as an encapsulated entity and, at the same time, as a synergy of independent entities, in which the terminologies of "mind" and "body" make no sense.

Perhaps the improvisation machine can serve as a way to release ourselves from our traditional, humanoid perceptions of mind and body. The form of the improvisation machine – fragmented through spatially separated electronic devices and mediums – is inherently discombobulated. Indeed, the fragmentation of the improvisation machine reflects a mindbody conceptualization offered by Blackman: "We are never a singular body, but are multiple bodies that are brought into being and held together through complex practices of self-production" (Blackman 2008).

black box fading reflects my experience developing and working with a musical improvisation machine, and it can be seen as a transient imagination of this machine’s memories and dreams. The sonic and visual material present in this event has been derived from improvisations I made with the machine between December 2020 and March 2021. Between performance and installation, and between physical and virtual spaces, the material plays between shifting perceptions of human-machine agency, interaction, and (dis)embodiment. Getting lost in the space of each other becomes the game of representation, and there is no better way to describe this than through actions rather than words. So welcome to this black box. It is a pleasure having you here.


Melody Chua: (project lead)/concept, sound design/composition, performance, machine programming, visual projections, video composition, installation design
Eric Larrieux: sound recording engineer
Sébastien Schiesser: technical manager
Valentin Huber: cinematographer
Kristina Jungic: I.A. space producer, event guide
Benjamin D. Whiting: sound design, audio restoration
Martin Fröhlich: projection mapping assistance
Patrick Müller, Matthias Ziegler: project mentors

Special Thanks

ZHdK Immersive Arts Space
ZHdK M.A. Transdisciplinary Studies
Sara Stühlinger

(above) a snapshot from the 360° video black box fading
(above) the entrance to the ZHdK Immersive Arts Space, the literal black box, decorated with square handouts/program notes each of the participants received.
(right) a view of the rehearsal space and the three computers used to run the production (one for motion capture, one for audio, and one for the visual projections). These computers were present on the set for the filming, but removed from the set for the live event.
(above) the handouts for each of the participants (each handout features the QR code on the front and one of the three quotes on the back). Scanning the QR code takes you to the online program notes, where there is also a form to leave feedback on the event.
   (above) another snapshot from the 360° video black box fading
4. black box fading (nov. 2020 - may 2021)

black box fading represents the culmination of my work with AIYA and a cheeky metaphor for the imperceptibility of the improvisation machine, which the black box symbolizes. Taking the form of a fixed media spatial audio work, an interactive audiovisual installation, and a 360° video in VR, I attempted to represent the improvisation machine from three different perspectives of increasing [visual] immersion. Each of the three works shown was derived from audio and video recordings I made with the Chaosflöte and AIYA in December 2020 and March 2021.

The final event for the master's project took place on May 17th, 2021 at the ZHdK Immersive Arts Space (IAS), itself a literal black box, where participants were guided through a three-part exhibition of the following works:

(1) black box forming (6 minute fixed media spatial audio work)
(2) black box finding (8-10 minute interactive audiovisual installation)
(3) black box fading (22 minute 360° video in VR)

Recordings of these three works can be seen below:

black box forming
(fixed spatial audio work decoded for the 42 speakers of the IAS).
The recording on the right is the 2-channel binaural version of the work.

black box finding
(interactive audiovisual installation)

black box fading
(22 minute 360° video in VR). The recording on the right is the 2-channel binaural version of the work.

This 360° video is meant to be experienced with headphones on a VR headset or with cardboard VR and (mobile) YouTube VR (make sure 4K is selected). It can be streamed via the link or the embedded 360° video below:

watch black box fading on YouTube

A brief introduction and explanation of this work can also be seen in the 360° video below:

These works were created in the reverse chronological order of when they were shown in the event, so I will describe their development as such (Note: black box fading is the title for both the 360° work in VR and the entire event itself...boxes within boxes...).

With the new abundance of data I now had access to from the position and rotation values of the rigid bodies, I had to think about how this would affect the audio programming of AIYA. I still retained the basic architecture of AIYA as a buffer-based improvisation machine with five voices of differing qualities, but now I was able to more viscerally connect my movements with AIYA's sound character.

First, I reprogrammed AIYA to record the entirety of the improvisation from the moment I would begin playing. This meant that, instead of having access to only the last 10-20 seconds of my playing as before, it had access to the entire history of the improvisation. From here, it had the option to choose material from any point in the past as the starting point for its sound generation.
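As an illustrative sketch (AIYA itself is a MaxMSP patch; the class and method names here are hypothetical), the shift from a short rolling window to a grow-only buffer of the whole improvisation might look like this:

```python
class ImprovisationBuffer:
    """Grow-only audio buffer holding the entire improvisation so far.

    A sketch of the idea only: unlike a fixed 10-20 second window,
    every recorded block stays available for later recall.
    """

    def __init__(self, samplerate=44100):
        self.samplerate = samplerate
        self.samples = []  # the full history, from the first note onward

    def record(self, block):
        """Append a new block of incoming audio samples."""
        self.samples.extend(block)

    def read_from(self, seconds, length):
        """Return `length` samples starting `seconds` into the history."""
        start = int(seconds * self.samplerate)
        return self.samples[start:start + length]
```

The machine improviser can then pick any past moment as source material, rather than only the most recent seconds.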

Then, the question became, how should the machine choose which sound material to work with? I reflected back on my movements in A->B, this meandering walk from one side of the stage to the other that somehow created a potent spatial connection between myself and the machine (due to the feedback this walk would produce). This spatial connection, already palpable in the shifting grid projections, became another important concept to develop in the audio of the improvisation machine.

Since my position within the IAS was tracked, I created a virtual map of my position in the IAS and connected it to the buffer selection process of the machine improviser. The x-axis of my position was mapped to the point in time in the past from which the machine improviser could choose material (e.g., the far left of the x-axis would make it choose buffer material from the very beginning of the improvisation, and the far right would make it choose material from the present). The y-axis was mapped to a deviation value: the higher the y value, the more widely the machine could randomly choose buffer material from the surrounding time periods (e.g., a high y value with a low x value could increase the chance that the machine would choose material from the middle of the live-recorded improvisation, instead of exclusively from the very beginning). The beauty of this paradigm is that in the beginning it is easy to "steer" the improvisation machine's choice of source audio material, but as the improvisation grows longer, it becomes more difficult to be precise about this selection, because the x-axis parameter spans an ever-growing amount of sound material.
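A minimal Python sketch of this x/y mapping (the actual patch runs in MaxMSP; the function name and the 0.0-1.0 position normalization are assumptions for illustration):

```python
import random

def choose_buffer_time(x, y, elapsed):
    """Pick a point in the recorded improvisation to source material from.

    x, y    -- performer position in the space, normalized to 0.0-1.0
    elapsed -- seconds of improvisation recorded so far

    x selects a base point in the past (0 = start, 1 = present);
    y sets how far the choice may randomly deviate from that point.
    """
    base = x * elapsed                  # far left -> beginning, far right -> now
    deviation = y * elapsed             # higher y -> wider random spread
    t = base + random.uniform(-deviation, deviation)
    return min(max(t, 0.0), elapsed)    # clamp to the recorded history
```

Because `elapsed` keeps growing, the same physical step covers more and more recorded material as the improvisation continues, which is exactly why precise steering becomes harder over time.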

Aside from the improvisation machine's sound generation, I also mapped my body movements directly to live effects processes in Ableton, which would transform my flute sound in real-time, according to the rotation of my joints. I used Reaper to spatialize the sound with the ZHdK ICST (Institute for Computer Music and Sound Technology) VST Ambisonic plugins.
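The rotation-to-effect mapping can be sketched as a simple normalization; the -90°/+90° range and the function name are illustrative assumptions (in practice the sensor data flowed from MaxMSP into Ableton's effect parameters):

```python
def rotation_to_param(angle_deg, lo=-90.0, hi=90.0):
    """Map a joint rotation angle (degrees) to a 0.0-1.0 effect parameter.

    Angles outside the assumed lo..hi range are clamped, so extreme
    movements simply pin the effect at its minimum or maximum.
    """
    clamped = min(max(angle_deg, lo), hi)
    return (clamped - lo) / (hi - lo)
```

The resulting 0.0-1.0 value can drive any continuous effect control (filter cutoff, reverb mix, etc.) in real time.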

Technically, the signal flow from the human performer to Chaosflöte and improvisation machine can be summarized in the diagram below:

(2) black box finding (interactive installation)
I reimagined the black box I constructed for the 360° video recordings as an interactive installation for up to six participants. Each of the rigid bodies originally worn on my body in the 360° recording was distributed to an audience member, somewhat of a metaphor for the disembodiment, or distributed embodiment, of my body (as perceived by the machine) across the six participants. Three of the rigid bodies controlled the live visual projections and three controlled the sound generation. A view of the installation space can be seen on the right.

I staggered the entrances of the participants into the installation space, two at a time, for a gradual increase in the density and activity of the installation. What resulted was a gradual loss of transparency in the human-machine interaction, and accordingly a general decrease in feelings of extendedness with the machine, as the number of participants in the space increased.

Aesthetically, I would have preferred to keep the transparency of the installation clearer and more consistent than it was at the actual event. However, it was interesting to observe the machine fading between these extremes and to try to observe the point at which the machine shifts from appearing as an extension of the participants to being a separate entity and its own space. This point differed from group to group, but generally the shift occurred the moment the third and fourth participants entered the space and/or when participants moved or rotated the rigid bodies more drastically than normal.

(above) the installation setting of black box finding. Three rigid bodies allowed participants to steer the virtual camera within the virtual [2] 3D space of the scene (altogether the rigid bodies controlled XYZ position, rotation, and zoom). As the virtual camera was moved and rotated, one could "walk" within the virtual scene towards text snippets from the prologue that were distributed across the virtual 3D environment. Sometimes it would be necessary to rotate the virtual camera or zoom into the scene in order to clearly read the text.

(1) black box forming (fixed media, spatial audio work)
The purpose of this work was to introduce the participants into the space of the black box, with complete immersion on the auditory level and minimal immersion (or perhaps "complexity" is the more suitable term) on the visual level. Played in nearly complete darkness, the work featured transformations of the recordings made in December 2020 and March 2021 of my improvisations with the machine.

I incorporated audio from the outtakes of the recordings, with phrases such as “(let’s) make sure this is working” and “maybe you could do a (stomp).” These phrases are a subtle reference to the technical coordination process of setting up the 360° video and Ambisonic recordings, using parts of the technical process as aesthetic elements for the piece.

Motifs of control (and the symbolic "on/off" switch), orientation/disorientation, mirror/reversing, and the hybridity of concert and interactive installation permeate all three of the works in black box fading and are described in greater detail below:

The return of the switch
The on/off switch makes a thematic appearance throughout black box fading in the form of somewhat abrupt cuts in the audio and video. In the beginning section of the 360° video work, I use the cuts to reinforce the spatialization of the audio, where one side of the audiovisual space would play at a time, cutting into and out of each other until their spatialized sounds eventually overlap each other. This can be observed on the left.

Other examples within the 360° video include the following:

10:54-10:57 - The growling noise sequence crescendos in intensity and abruptly cuts to black/silence
15:05-15:14 - the machine steadily repeats a flute sequence and abruptly cuts to black/silence
18:14-18:20 - "...and the machine continues to make actions even though it knows it can be turned off." (again, the video cuts to black/silence)
18:37-18:40 - The abrupt change from the darkness of the video to the blinding white scene symbolizes the turning "on" of the machine. This is also the point where the video reveals the IAS with the lights on, removing any previous illusions that the projection visuals may have created (as the light renders them practically unperceivable).
21:36-21:52 - Matthias Ziegler plays one last note with the machine before it "then goes off." In the recording, the machine "goes off" by continuing to play even when Matthias has exited the box; however, "off" also refers to the turning off of the machine. In this case, this was represented by the subsequent cut to black/silence that marks the end of the video work.

This motif also appears in the spatial audio work, where the cuts are represented metaphorically by the outtakes of the recording sessions and the prevalence of the foot stomp that was made to signal the start of each recording take (This foot stomp was amplified and reverberated throughout the spatial audio work, prevailing also as a musical motif in the low, pulsing bass synth line from 0:34 to 1:42).

Between concert and interactive installation
black box fading as an event spanned both fixed media “concert” elements as well as an interactive installation, making it already difficult to describe beyond simply calling it “an event.” The spatial audio work functioned as a typical “concert” fixture where one listened in a linear format, the installation functioned as an indeterminate composition of sorts with flexible duration, and the 360° video itself exhibited hybrid elements between concert and interactive installation:

- The physical scene of the recordings used in the 360° video already behaved like an installation to me, especially with regard to my relationship to that space. As the OptiTrack system tracked the position and rotation of my body and the Intel RealSense camera captured an infrared representation of my body, every movement or sound elicited a degree of response from the projections and electronically produced sounds. During breaks, it was interesting to see people working at the IAS walking around within the tracked performance setting and behaving as if it were a type of interactive installation (even though their bodies were not tracked via OptiTrack, the RealSense camera still picked up their movements, serving as their interaction point).
- My representation within the 360° video also played between the attitude of "playing a concert" and the atmosphere of observing an abstraction of an installation space. At times I am seen playing the flute (allusion to the concert setting), but at other times I could be seen sitting in the IAS, peacefully observing the dynamic projection visuals surrounding me (allusion to the installation setting).
- Furthermore, the nature of the 360° video format is that one cannot fully guarantee the field of vision the audience will choose to experience. One can only influence it by predicting what elements the audience will find most interesting at a given point in time and using that information to organize the dramaturgy and narrative of the video, while acknowledging that there will be variations on that narrative depending on how the audience chooses to interact with it. This is, to me, much like an interactive installation, where one has a general idea of what the audience might choose to perceive, but there is still an element of indeterminacy.

Juxtaposition of orientation/disorientation
black box fading uses the balance of orientation and disorientation as a metaphor for my personal experience of improvising with the improvisation machine. At times, being the programmer of the machine gives me a certain "orientation" when performing with it, in that I can discern its general behavior and character. On the other hand, I am not actively thinking about the interaction rules I have programmed for the machine, and there are also surprises produced by its glitches. These result in moments of "disorientation," or, in terms of Lösel's model of improvisation, the point at which "an improviser realizes that his mental model diverges from others'" (Lösel, 2018, p. 190).

Navigating the balance between orientation and disorientation also speaks to the negotiation of control and vulnerability, issues that I have continued to grapple with throughout AIYA's development. Some disorientation can be very stimulating and propels the dramaturgy of the narrative work. With the machine, I am testing the limits of my vulnerability and asking, "how much control do I really need?" I try to represent this disorientation/orientation balance, literally speaking, through the immersive aspects of the work in black box fading, as outlined below:

One of the first subtle moments of disorientation occurs in the spatial audio work, where there are no visual cues to orient oneself within the work in the traditional sense. The participants adapt to this by instead directing their orientation sense on the audio level, where the sounds within the work have been carefully spatialized across the 42 speakers of the IAS Ambisonics system.

Once the spatial audio work is finished, the projections of the immersive interactive installation space gradually appear. The image is mostly static until the first participants are led into the installation space and pick up their rigid bodies. From here, the mapping on the visual level is fairly straightforward: Move in the physical space to move the camera in the virtual space. Stop moving in the physical space to stop moving in the virtual space. The same is true for the audio control, where moving the rigid bodies creates sound and stillness stops it. The disorientation creeps in when more people are added to the installation. Who is controlling what? And who plays when? If all six participants are moving at the same time, it is no longer easy to discern the mappings made between motion and audiovisual activity, and one becomes disoriented in their relationship with the machine.

When the participants arrive at the VR portion of the event, a further disorientation is introduced through the complete darkening of the IAS and the "takeover" of the visual field with the 360° video shown on the VR headsets worn by the participants. Within the VR video, participants are confronted with the following juxtapositions of orientation/disorientation:

- A clear left-right orientation is outlined through the repeated appearance and disappearance of two rectangles (their appearance and disappearance is always occurring in the same locations). However, within the rectangles themselves is a scene of complete disorientation: the equirectangular 360° recordings shown in these rectangles have been decoded for playback in the cube map format. This mismatch of source and destination video format has a slightly destabilizing effect. On the audio level, the initial appearances of the rectangle correspond to the same location in the spatial audio map, but as the section progresses, the locations are reversed and eventually overlap each other to create auditory disorientation.
2:51-3:54 - A grid forms into view (white lines on black), to create a uniform canvas for the viewers to orient themselves.
3:55-5:29 - This sense of orientation is once again decomposed as the grid disintegrates and morphs into a new, shifting texture. This texture fades between sharpness (orientation) and blurriness (disorientation).
5:30-5:43 - The scene finally composes into the view of the IAS stage, where one can clearly see me playing the flute and have a general sense of "which way is up." My flute sound can be localized to the visual position of myself on the stage. There is, however, the amplified flute signal that sounds on the opposite side of my physical location, which in the spatial audio creates the perception that I am in two places at once, aurally speaking.
7:07-10:58 - This sense of orientation is, again, undermined by the morphing and overlapping of multiple 360° videos. The 360° camera alternates between stationary positions and moving recordings to create further cycles of orientation/disorientation until the end of the section.
11:05-16:26 - The opening of this section outlines a faded box around myself playing the flute to serve as a starting visual reference point for the work. Throughout the piece, this box expands, contracts, changes shape, and even rotates (13:32) in an alternation of orientation and disorientation.
17:05-18:19 - 360° footage captured with the camera in a stationary position (orientation) is juxtaposed with volatile visual noise that at times obfuscates the 360° footage (disorientation).
18:48-19:38 - The whole IAS stage is revealed in bright light (orientation)...
19:41-20:16 - ...only to be faded into a series of continually shifting lines and boxes reminiscent of the video segment from 3:55-5:29.
20:32-20:37 - The texture morphs one last time, back to the original, uniform grid seen in the beginning of the work (but this time with black lines on white: orientation).

At the end of the video, the participants, upon removing their headsets, find themselves in yet another enclosed black box, whose curtains are slowly opened to (again) reveal the full view of the Immersive Arts Space, which is now illuminated to establish the last cycle from disorientation to orientation [3].

The reflection as a metaphor for failed human-machine emancipation
The use of mirroring and reflecting (or reversing) serves as a metaphor for the machine's relationship to my actions. Although not an absolute mirror in itself, the machine is still difficult to emancipate from me and my actions. It has until this point always relied on human presence in order to "perform," just as the mirror image depends on the original source image to exist. The metaphor of the mirror and reverse image is an exaggeration of how the machine actually behaves (it does not repeat my actions verbatim), but its use of live-recorded buffers of my flute playing as its source sound material makes it a mirror of sorts.

The spatial audio work comprises many audio clips taken from the 360° video, but played in reverse. Examples include the following:
0:26-2:10 - The "alto flute"-sounding melody is actually the original melody played on the C flute in the 360° video at 5:30 - only that it is reversed and pitch shifted downwards.
1:15-1:40 - The voice snippets heard in this section (and throughout the piece) are the reversed forms of the voiceover heard in 16:46-19:46 of the 360° video.

The sound of the interactive installation plays the recorded buffer samples forwards when the movement of the rigid bodies accelerates and backwards when it decelerates.
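In sketch form (Python for illustration; the installation itself ran in MaxMSP/Ableton, and this function name is hypothetical), the direction decision might compare consecutive speed samples of a rigid body:

```python
def playback_direction(speeds):
    """Decide buffer playback direction from recent rigid-body speeds.

    Accelerating motion -> play the recorded buffer forwards (+1);
    decelerating motion -> play it backwards (-1);
    steady or insufficient data -> hold (0).
    """
    if len(speeds) < 2:
        return 0
    delta = speeds[-1] - speeds[-2]  # change in speed between samples
    if delta > 0:
        return 1
    if delta < 0:
        return -1
    return 0
```

Stillness (no speed change) maps naturally to silence, matching the installation's "move to make sound, stop to stop it" logic.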

The VR video incorporates several instances of the mirroring/reversing motif, as indicated below:

- The videos contained within the two opposing rectangles actually come from the same source video; only one is playing in reverse.
5:30-5:43 - As indicated earlier, my flute sound is spatially split into two: the "pure" tone is localized where my body is seen, and the amplified flute signal sounds on the opposite side of the virtual spatial audio map.
9:20-9:27 - My body is mirrored via two virtual projections of my body whose movement is separated by a variable delay value (seen on the right).
11:05-16:26 - The opening of this section shows a reverse image of myself playing the flute (one can see that it is incorrectly held on the left side of my body instead of my right). Gradually, I walk across the scene to meet the mirror image of myself (the original recorded image). Throughout the work, one can see an additional mirror image of myself (upside-down) on the top portion of the 360° video.
19:18-19:48 - Sandwiched between two copies of the same computer, one sees myself physically on the stage and also "virtually" in the software window of TouchDesigner on the computers.
19:48-20:37 - The majority of this section is actually the reverse form of the shifting grid segment from 3:55-5:29 of the video (except in white with black lines)

(above) both real-time and delayed, "mirror" outlines of my body  (via the Intel RealSense infrared camera) projected onto the curtains of the space

At the end of the video, the participants find themselves in the same black box they have seen in the video, symbolizing the mirroring of the virtual space with the physical space.

As previously mentioned, the machine's use of recorded buffers for its source sound material can also be seen as a transformation of the mirror motif. The improvisation machine recalls snippets of my playing from various points in the past and replays them according to the programmed interaction rules (which, as mentioned previously, are not something that I, as the performer, actively keep track of during the performance). This reinforces the machine as a distinct improvisation partner. Because this sound material literally consists of samples of my flute, it can be difficult in the recording, from the audience's perspective, to distinguish which flute sounds come from me and which come from the machine. Both sound as if they were mirrors of each other. This ambiguity paradoxically reinforces the concept of external embodiment for the listener (the performer's body is perceived as being augmented by the machine) while, from the perspective of the human performer in the live setting, reinforcing the machine as a separate improvisation partner.

(above) signal flow of this version of AIYA, using the OptiTrack motion capture system in the ZHdK Immersive Arts Space. The addition of the RealSense camera allowed me to capture a live infrared camera feed of the scene, which I used for some of the source material for the visuals in the TouchDesigner patch; the virtual outlines/reflections of my physical body are one example, as seen in the later subsection "The reflection as a metaphor for failed human-machine emancipation."

The Chaosflöte and the motion tracking afforded by the OptiTrack system provided the main sensor inputs used by the improvisation machine to read human behavior and act according to its pre-programmed interaction rules. Five rigid bodies were affixed to the human performer (myself) to track the global position and local rotation of each major limb of my body. AIYA (programmed in MaxMSP) would take this data and control the output of sounds and visuals generated live in the programs Ableton (sound) and TouchDesigner (visuals). On the audio level, the signal goes through another program, Reaper, to be decoded for the Immersive Arts Space Ambisonics speaker array. On the visual level, the TouchDesigner visuals get transferred to the projection mapping program S.P.A.R.C.K. [1] and finally distributed to four projectors that cover three walls and the floor of a giant black box lined with black curtains.

(above) circled are the motion tracking cameras distributed throughout the IAS.
(above) 5 rigid bodies are attached to my body, and a sixth is attached to the 360° camera filming me.
(left and above) the grids I constructed, inspired by the calibration grids. Grids are also a prominent motif in cyberpunk culture, of which I have been a fan since early childhood.

(3) black box fading (360° video in VR):

I began the development of my final master's project with the improvisation machine knowing only that I had wanted to work with immersion as a loose concept for the improvisation machine's representation and that the project would take place at the Immersive Arts Space. Much of the work came together in constant dialogue between my original ideas and the ideas the space and technology themselves gave to me. The result is a work of hybrid ideas, where I use the limitations of the space and technology in an intentional way that informs the overall aesthetic of the work.

With the idea of immersion, I knew I wanted to create some sort of box with interactive visual projections on each of its sides; however, the original idea was to construct the box out of white screen material for better color contrast. In the end, it was only possible to hang black curtains as a projection surface, so I adapted my visual concept to a "black box" with white projections.

Even then, I imagined a dense visual identity for the improvisation machine, with thick geometric forms to encompass my black box. However, the moment I arrived in the space and saw the constructed box for the first time, it was clear that the nature of the space called for a sparser scenography in the visual projections. Too much white in the projections would create too much light in the room and diminish the seemingly infinite void of the black box (as the materiality of the black curtains would be blindingly exposed). The technical manager of my project, Sébastien Schiesser, showed me the calibration of the four projectors used for the visuals on the black curtains, where a set of grids were projected onto each surface as a way to check the alignment and stitching of the four images that would comprise the unified box texture. Although the grids themselves were fairly simple, I was fascinated by how they were able to create such powerful illusions of space and scale. Shifting the grids in the calibration itself already induced a slight sense of disorientation and unstable architecture. I somehow felt a connection between the tension of orientation/disorientation of the shifting grids and the tension between the control/loss-of-control that I sometimes had with my improvisation machine. The grids became an important motif throughout the entirety of black box fading.

I created my own grids in TouchDesigner and experimented with different ways to shift them in real-time. Using the OptiTrack motion capture system of the IAS, I was able to map "rigid bodies" (special objects whose position and rotation were made trackable by the motion capture system) to the position and rotation of the virtual camera in my sea of grids. I wore five rigid bodies (one on the head, one on each hand, and one on each foot) that allowed me to shift my virtual view of the grid projections with my body.

Between improvisation and composition
Among the shifting grids of the black box, I wanted to create a live concert with the improvisation machine. After all, one of the key attributes of improvisation is its “liveness,” which communicates a certain authenticity and energy difficult to replicate in a documentation format. However, due to the second wave of COVID-19 in the fall of 2020, I had to make the decision early in the creation process of the project to adapt the final project into a series of documentations (representations) instead.

This led me to the idea of creating an immersive documentation of my improvisations with the machine/black box, using a 360° camera and an Ambisonics microphone (seen on the right).

(above) the 360° camera + Zylia Ambisonics microphone in action.

(above) the revised AIYA Max patch, which includes a new management system for the "voices" of the machine, and mapping rigid body parameters to the machine's buffer selection, sound effects processes in Ableton, and shifting grids in TouchDesigner.
(above) the recording rig used to hold and move the 360° camera and Ambisonics microphone.
Somehow I feel conflicted over creating a work about improvisation in a composed setting… Pandemic aside, perhaps this format illustrates precisely my blurred perception of improvisation versus composition… If you watch recordings of an improvisation, does it still feel improvised to you? If you edit and combine recordings of improvisations together, does the work already become a composition?

Listening to the fingerprint of the machine
Recording the material used in black box fading proved to be a challenge due to the low-light setting of the scene. The footage from the 360° camera appeared dark, with low contrast and an excessive amount of noise. Imperfections such as these began to inform the aesthetic direction of the work. At first, I wanted to reduce the imperfections as much as possible, and much of the footage had indeed been “denoised” by the cinematographer (Valentin Huber). However, I soon realized that in order to more authentically embrace the aesthetic of the machine, I should instead work with the low-res, imperfect nature of the recorded footage and treat it as an inevitable aesthetic motif of the work, a fingerprint from the machine.

Throughout the 360° video, I worked with the poor signal-to-noise ratio of the original recordings more emphatically, at times exaggerating the graininess (the extreme of which culminated in “pixel dissolve” video effects) and at other times reducing the graininess to the point where the footage morphed into its own blurriness and no longer resembled its original form. This resulted in an interesting aesthetic for me: on the one hand, the subject of the documentation was rather futuristic, founded on state-of-the-art motion tracking, projection systems, and an Ambisonics sound system. It carried a certain dystopian vibe as one sees me in the video wearing and interacting with this technology. On the other hand, the fidelity of the documentation seemed nostalgic in character, almost as if it were a reimagination of film noir; it carried a wear and age to it. It was as if one were capturing a memory of the future.
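The two opposing treatments of the footage can be thought of in very simple terms: exaggerating grain means injecting noise into pixel values, and reducing it means averaging them away until the image dissolves into blur. The sketch below illustrates both operations on a single row of 8-bit pixel values; it is an illustrative simplification, not the actual video-processing pipeline used for the work.

```python
import random

def exaggerate_grain(row, amount, seed=0):
    """Add uniform noise of +/- `amount` to each 8-bit pixel value,
    clamping to the valid 0..255 range."""
    rng = random.Random(seed)
    return [min(255, max(0, px + rng.randint(-amount, amount))) for px in row]

def reduce_grain(row, radius=1):
    """Box-blur a row of pixels; large radii dissolve the image
    into its own blurriness."""
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        out.append(sum(row[lo:hi]) // (hi - lo))
    return out
```

Pushed far enough in either direction, both operations destroy the image's resemblance to its original form, which is precisely the tension the work plays with.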

This concept of embracing the imperfections and at times the low fidelity of the technology evolved into the general motif I attempted to weave throughout the work: to expose, rather than hide, the technicalities and production elements involved in creating it, as a counterweight to the otherwise opaque nature of how the black box was working. In the 360°/VR video, one is led from the dark curtains of the black box to the array of computers used to run the performance. Instead of cropping the computers out of the scene for the sake of a “clean” image, they serve aesthetically and metaphorically as “windows” to another space, working towards the overall narrative of the piece. Indeed, the computers are an important thematic element in the last scene of the 360°/VR work, where one must look closely at the computers in order to see the human performer (me) exiting the stage in the final dramatic motion of the scene. Other elements that exhibit this motif include the following:

- The cinematographer, Valentin Huber, also appears in some parts of the video—an inevitably “seen” figure, given that the 360° camera captures everything around it, including the camera operator.  
- I chose to leave the illumination of the motion capture cameras on, to create a surreal perception that they are somehow “LED stars” in the mysterious black box. They are always seen, just as the elements within the black box are always seen by them.
- In the recorded audio of the improvisations with the machine, the Ambisonics microphone picks up a great deal of the fan noise emitted by the 360° camera and the three desktop computers running the system. I paired this noise with the graininess of the 360° video at the beginning of the video work.

The shifting grid
One of the most prominent representations of the improvisation machine in black box fading is the shifting grid displayed on the immersive projections of the black box. The oscillation type and frequency of the grid projection are programmed to rudimentarily reflect the state of the improvisation (from least agitated to most agitated), and to signal when the machine chooses a new audio buffer excerpt for its source sound material (via a flash of white). The machine gains a perceived level of agency through this ability to reflect its analysis of the performance scenario and to communicate to the human performer anticipations of future musical material.
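Conceptually, this behavior is a small mapping: one continuous value (agitation) drives the grid's oscillation frequency, and one discrete event (buffer selection) triggers the white flash. The sketch below illustrates the idea; the actual mapping lives in the Max/TouchDesigner patches, and the numeric ranges here are my own illustrative choices, not the patch's values.

```python
def grid_state(agitation, new_buffer_chosen):
    """Map agitation (0.0 = calm .. 1.0 = most agitated) to a grid
    oscillation frequency in Hz, and flash the grid white whenever
    the machine selects a new audio buffer excerpt."""
    min_hz, max_hz = 0.1, 4.0  # illustrative range, not the patch's values
    freq = min_hz + max(0.0, min(1.0, agitation)) * (max_hz - min_hz)
    brightness = 1.0 if new_buffer_chosen else 0.2
    return freq, brightness
```

A calm improvisation thus yields a slow, barely breathing grid, while an agitated one makes the architecture of the box visibly tremble, punctuated by flashes whenever the machine's source material is about to change.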

The grid also serves as a structural reference in the darkness of the recording, giving form and architecture to the space. In this sense, the perceived embodiment of the machine is also modulating, as it creates varying perceptions of the form of the space.

(above) an early version of the shifting grid, made in December 2020.
(above) the oscillation of the grid reflects the "agitation" (or lack thereof) of the improvisation machine.
(above) the improvisation machine flashes white when it chooses a new audio buffer excerpt for its source sound material. This communicates to me an anticipation that the sonic output of the machine will change.

[1] Developed by I.A. Space researcher Martin Fröhlich
[2] What counts as "virtual" and "real" is a discussion outside the scope of this thesis... For the purposes of comprehension, when I refer to "virtual," I am referring to the digitally constructed sounds and visuals originating from my software patches.
[3] One piece of feedback a participant gave after this event was, "That was Melody Chua pure: disorienting but somehow comforting. Like being inside a warm computer on a cold winter day." And from another participant (regarding the 360°/VR video): "It was strange how I got really scared at times while being super fascinated. It was definitely something I had never experienced before..." Somehow these comments really speak to how I personally feel about this balance between orientation and disorientation, control and vulnerability. It is a representation of how I feel with the machine, whether improvising with feedback, navigating its glitching behavior, or simply when it surprises me. It is the balance between slight uncertainty and its attractiveness.