Chapter 2. Original Works

2.1. Project One
2.1.1. Untitled 10 (pre-study)
Video 2.1: Untitled 10

While searching for case studies in dance and technology collaboration, I found that the choreographer Wayne McGregor had worked with cognitive scientists to seek connections between creativity, choreography, and the scientific study of movement and the mind (deLahunta, 2006). The research developed from a 2001 workshop organised by Scott deLahunta that aimed to use motion-tracking technology to expand brainstorming for choreography. The scientists sought ways to design software as a “multimedia notebook that could be used in rehearsal for recording, notating, and playing back information, rather than an interface to trigger media on stage” (Salter, 2010: 266). Over several projects, they observed how McGregor and his dancers developed choreography from mental imagery and other derived sensory stimuli. These observations and the recorded movement trajectories were used to help break conventional movement habits by shifting the perspectives from which this imagery was approached. [30] What interested me about this collaboration was that the choreography itself came about as the result of technological adaptation, rather than as a representational event such as a sound or image created with real-time movement data. One outcome of the collaboration was Mind and Movement, a choreographic composition tool for choreographers and teachers, which includes image cards with related movement tasks to stimulate the development and structuring of movement material (Figure 2.1).

Figure 2.1: Mind and Movement, a practical resource for choreographic composition by Wayne McGregor and Random Dance Company.

I composed the audiovisual work Untitled 10 (Video 2.1) as a pre-study for my first choreographic sound composition, Locus. The work combines real-time video processing in vvvv with post-produced stereo sound.

[30] The collaborative process is explained in a series of videos from the Wellcome Collection. Available at: https://www.youtube.com/watch?v=bd1nZDMLRgM and https://www.youtube.com/watch?v=ggjy7rNg4oY (Accessed 14 April 2018).

Inspired by this use of imagery for choreographic composition, I decided to create a real-time video composition that could serve both as a choreographic stimulus and as a resource for an interactive performance. Because it was my first time creating a real-time video composition, instead of creating images from scratch I decided to use photographs and video recordings as source materials and to manipulate them within the graphical programming environment vvvv.[31] vvvv was the most approachable software for me to learn because its programming environment is similar to that of Max. I went out to Manchester city centre, and later to north-west Wales, to take photographs and videos of interesting shapes and patterns. When I returned, I sorted the pictures by shape, texture, and colour, and thought about how the images could be transformed into one another (Figure 2.2).

Figure 2.2: Some of the photographs taken in Manchester city centre and north-west Wales.

In search of inspiration for a video composition using these photographs, I had an opportunity to meet the artist Patxi Araujo at the University of the Basque Country as part of my Santander Study Visits scholarship programme in 2015. He showed me a sketch of his new project and explained that the computer-generated visual work, a complex moving mesh, was the product of an ‘organic’ system he had built in vvvv (Figure 2.3). His work was not a representation of a real object; rather, it blurred the dichotomy between analogue and digital aesthetics.

Figure 2.3: The artist Patxi Araujo presents a sketch of his new visual work created in vvvv.

I found a similar approach in the audiovisual artwork Rheo: 5 Horizons (2010)[32] by Ryoichi Kurokawa. The work is projected onto five aligned vertical plasma displays and moves across the screens in time with a tightly synchronised sound composition. The visuals mix human and computer vision: high-quality representational images of nature are juxtaposed with computer processing of those images in Euclidean space (Vandsø, 2014). In addition, Kurokawa’s music uses organic sound materials, which in turn add another texture to the digitally produced images. For example, squeaking noises are juxtaposed with glitch-like flickering visuals, as if the flickering images were caused by the action of twisting something; and when the visuals move like a wave across the screens, he adds field recordings that recall the sounds of waves, wind, and seagulls.

[32] An excerpt is available at: https://vimeo.com/31319154 (Accessed 5 May 2018).

Similarly, the artist Herman Kolgen creates visual work with high-quality images and glitches. At the Digital Québec event at the British Film Institute in London in 2015, he performed Seismik and Aftershock with live electronics. [33] Although Kolgen’s visuals looked neat and slick, he used tactile, physical objects to create the noises for the sound performance. For instance, he created spark-like noises by touching two wires together, and moved antennas around to receive, via the internet, “a cluster of seismic readings and terrestrial frequencies culled from a variety of locations” that “impact the live performance in random and unexpected ways” on stage (Kolgen, n.d.). Kolgen’s live performance thus brought together his physical movement and the digitally produced images.

[33] A preview of Seismik is available at: https://vimeo.com/90652292 (Accessed 5 May 2018).

Kim Cascone (2000) explains that the use of glitch stems from an aesthetic of “failure” in contemporary computer music. Both Kurokawa and Kolgen use glitch in images and sounds in their post-digital approach. However, their glitches are deliberately composed to reveal an interplay between realistic images and high-end computer technology, rather than arising as accidentally generated digital noise. This approach seemed similar to mine. Although I used the vintage game controller Gametrak in this project, I chose it to reveal an intrinsic physicality through the controller itself rather than for its ability to retrieve real-time human body movement data. In other words, my approach was post-digital. Equally, I did not see the lower specifications of the Gametrak as a technological failure in comparison with the newest motion-tracking devices; I wanted to combine its tactile appearance and body movement with my digital audiovisual compositions. [34]

[34] Later, this idea led me to curate an art and technology exhibition, Artificial Retirement, on the related theme of ‘failure’. I also published an article on the subject, “Redefining Failure in a Technologically Aided Era”, in Clot Magazine [online]. Available at: http://www.clotmag.com/redefining-failure (Accessed: 8 June 2018).

Motivated by these works, I decided to manipulate my photographs digitally while preserving their organic textures. First of all, I named the sorted photographs (see Figure 2.2) Untitled 1, Untitled 2, and so on, and sent them to my collaborating dancer, Katerina Foti, to see whether any of them inspired her choreographically. She chose a series of photographs named Untitled 10. I later used these photographs for Locus, but I retained the name Untitled 10 for this project since it was a pre-study for Locus. I tried out various methods of processing my photographs in vvvv, primarily to find the most suitable aesthetic direction for Locus.

The composition Untitled 10 is divided into seven sections (Figure 2.4). [35] The first was a simple study in generating multiple vertical lines using spread and LFO objects (see Video 2.1 from 00:00 to 01:00). These were the most basic and crucial objects to learn for generating real-time visuals in vvvv. I used several spread objects, such as LinearSpread and RandomSpread, which replicate an image across the screen at either linearly spaced or random values, while an LFO supplied a cycling signal that added movement to the generated images. For the second section, I programmed the three photographs in Figure 2.5 to transition from one to the next and added a circular movement (Figure 2.6). The manipulated image was no longer a realistic image of nature, yet the fluid movement added another texture to the static image and abstracted its representational reality (see Video 2.1 from 01:00 to 03:00). For the third section, I used a mask to overlay a video recording of a mountain in Wales, contrasting the static image of a metal structure with the moving image of the mountain as a texture (see Video 2.1 from 03:00 to 03:19). For the fourth section, I divided the screen into four parts and played the same recording of the mountain at different loop speeds (see Video 2.1 from 03:20 to 03:41). For the fifth section, I added a glitch effect to the renderer, which introduced another kind of movement: it divided the screen into vertical strips and moved each strip up and down (see Video 2.1 from 03:42 to 04:00). For the sixth section, I used masks to overlay two video recordings of rocks covered with colourful mosses in interesting patterns (see Video 2.1 from 04:00 to 04:30). Although the videos show my struggle to hold the camera still while capturing the details of these patterns with a zoom lens, without a tripod, in strong wind, the resulting shake provided interesting movement when the recordings were overlaid onto the masks’ tree shape: the surface of the tree appeared to start moving. For the seventh section, I used a mask again to overlay a goat’s eye onto a tree (see Video 2.1 from 04:30 to 05:03). I had photographed that tree because its winding branches had grown over another big tree, creating a contrast between vibrant and static qualities (Figure 2.7). The winding branches looked very alive compared with the big old tree behind them, and I wondered how it would look if I overlaid a partial image of another living creature onto this one. I added a wavy movement to the background texture to make the winding branches look more vibrant, and I made a list of photographs whose background texture I would change in order to see how the wavy movement worked with the other images (see Video 2.1 from 05:03 to 05:27). Finally, I connected my MIDI controller to control these effects in real time and captured my performance as a video, in order to see how this video composition would work in vvvv once it received input data from the Gametrak controllers.

[35] A detailed explanation of the sub-patches in vvvv is given in Appendix A.
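For readers unfamiliar with vvvv, the following Python sketch approximates the logic of that first section. It is not vvvv code, and the function names, line counts, and timing values are my own illustrative assumptions: a LinearSpread-style function distributes the vertical lines evenly, a RandomSpread-style function scatters them, and an LFO-style phase ramp drives their movement.

```python
import math
import random

def linear_spread(count, width=2.0):
    # Evenly spaced values centred on zero, analogous to vvvv's LinearSpread.
    return [width * ((i + 0.5) / count - 0.5) for i in range(count)]

def random_spread(count, width=2.0, seed=0):
    # Random values over the same range, analogous to vvvv's RandomSpread.
    rng = random.Random(seed)
    return [width * (rng.random() - 0.5) for _ in range(count)]

def lfo(time_s, period_s=4.0):
    # A phase ramp cycling from 0 to 1, analogous to vvvv's LFO node.
    return (time_s / period_s) % 1.0

def line_positions(count, time_s):
    # Each vertical line drifts sinusoidally around its evenly spread position.
    phase = lfo(time_s)
    return [x + 0.1 * math.sin(2 * math.pi * (phase + i / count))
            for i, x in enumerate(linear_spread(count))]

print(line_positions(8, 1.5))   # eight lines, 1.5 seconds into the animation
print(random_spread(8))         # a randomly scattered alternative
```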

Figure 2.4: The seven subsections of Untitled 10 in vvvv.
Figure 2.5: Three photographs used for the second part of Untitled 10.
Figure 2.6: Addition of a circular movement in vvvv.
Figure 2.7: Several trees grown over each other, creating a complex entanglement.

While performing with the MIDI controller, my vvvv project froze because of my poor skill at programming graphics within the capacity of my graphics card. Although I could no longer control the patch with the MIDI controller, the project went on creating glitches and very fast transitions between the different photographs and layers by itself, in ways that went beyond my imagination (Figure 2.8). It resized some of the photographs, masked areas I had not intended, and applied effects in unpredictable patterns. It reminded me of Anette Vandsø’s analysis of Kurokawa’s Rheo: 5 Horizons, which argues that the work “combines or connects human experiences” through the computer (Vandsø, 2014: 143):

[When watching Rheo: 5 Horizons] our attention is being swirled around in a roller-coaster ride: from the imagery to the processuality of software codings, from the represented space in the image to movements on the screen, to movements in the actual space in which the large panels are exhibited. (Vandsø, 2014: 144)

Figure 2.8: My vvvv patch started creating glitches by itself and froze my computer because of my poor usage of the graphics card’s capacity. [36]

This moment made me realise that what makes something appear digital is not only its digitally processed look but also its movements, which could not be recreated with analogue tools. As a consequence, the fast transitions and the glitch movement seemed, ironically, ‘organic’ to the computer. I therefore decided to include this glitch movement as a part of my composition. In fact, the glitch-like transitions in the final part of Untitled 10 were not entirely genuine glitches. I was no longer able to run my vvvv patch as it had been, because it froze the entire operating system of my computer, and I had to rebuild the patch more efficiently. But, inspired by the patterns the crash had produced, I decided to recreate the glitch-like moment with an automated quick transition control that jumped between different sections of the composition. The rebuilt patch was more stable than the previous one, but it still produced clunky, erratic transitional moments under the fast transition control.

[36] I also recorded the movement with my mobile phone video camera. Available at: https://vimeo.com/267820495/0a73a02344.
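In spirit, the automated control worked something like the sketch below. This is a minimal Python illustration rather than the vvvv patch itself, and the section names, hold times, and seed are assumptions made for the example: during the glitch-like passage, the patch cuts between sections at short random intervals instead of waiting for manual input.

```python
import random

SECTIONS = [f"section_{i}" for i in range(1, 8)]  # the seven subsections

def glitch_transitions(duration_s=10.0, min_hold=0.05, max_hold=0.4, seed=None):
    # Yield (timestamp, section) pairs, cutting to a randomly chosen
    # section after each short, randomly chosen hold time.
    rng = random.Random(seed)
    t = 0.0
    while t < duration_s:
        yield t, rng.choice(SECTIONS)
        t += rng.uniform(min_hold, max_hold)

for t, section in glitch_transitions(duration_s=2.0, seed=1):
    print(f"{t:5.2f}s -> {section}")
```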

I created the sound composition for the captured video in Logic Pro X[37], as a way of thinking about what kind of interactive sound synthesis I would need to build in Max for the next project. I had originally practised film sound design and theory during my undergraduate degree, and this had led me to work with individual filmmakers and with architecture companies that used video content in their exhibitions or research projects. For this reason, I feel more comfortable sketching sound composition ideas in sound editing software, where I can watch the image I am working with, before creating sound synthesis in Max. Having a visual stimulus is crucial for me when I create a sound composition.

I wanted to use sound materials related to the image to create a cinematic audio experience. My purpose was not to create a realistic representation of the objects shown in the captured video, but to provide aural abstractions of the texture or movement of the images. This kind of relationship between sound and image also appears in Rheo: 5 Horizons. For instance, when the complex grids and grains of an image moved like a wave in slow motion, Kurokawa’s grainy, white-noise-like sound moving from left to right evoked the image of an approaching wave. Likewise, Kolgen’s use of physical action to create spark-like noises (described above), together with the accompanying shaking images, created an abstract cinematic experience. Kolgen calls himself an “audiocinematic sculptor”, as his purpose is to create an intimate relationship between the image and the live sound performance on stage, blending “the real and the virtual” (Palop, 2013).

For the first part of my visual composition, I used 60Hz hums and other electric noises, and synchronised them with the movement of the white vertical lines. As the next image faded in and moved in its circular motion, I added a field recording of a water stream in the background, as the image reminded me of the fluid movement of water. I also used the sound of squeezing a woven basket and synchronised it with the circular movement. For the footage of the snowy mountain, I wanted to use the sound originally recorded with the camera, as it already had the vibrant feeling of the windy mountain; I processed this sound with various effects simply to take away the feeling of the camera’s movement. For the video footage of mosses overlaid on the tree image, I used some grainy electric noise, because the video’s movement reminded me of white noise on a TV screen. I then changed it to the sound of cicadas, which continued until the next scene, the overlaid images of the tree and the goat’s eye.
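For readers curious about the hum material, the sketch below shows one plausible way to synthesise a 60Hz mains-style hum: a 60Hz sine with a few odd harmonics plus faint noise, written to a WAV file with Python’s standard library. It illustrates the kind of sound described above; the harmonic mix, levels, and file name are assumptions for the example, not the actual materials of the piece.

```python
import math
import random
import struct
import wave

SR = 44100           # sample rate in Hz
DUR = 3.0            # duration in seconds
HARMONICS = [(60, 1.0), (180, 0.4), (300, 0.2)]  # (frequency Hz, amplitude)

samples = []
for n in range(int(SR * DUR)):
    t = n / SR
    # Sum the 60 Hz fundamental and its odd harmonics.
    s = sum(a * math.sin(2 * math.pi * f * t) for f, a in HARMONICS)
    s += 0.05 * (random.random() * 2 - 1)   # faint electric noise
    samples.append(int(max(-1.0, min(1.0, 0.4 * s)) * 32767))

with wave.open("hum_60hz.wav", "w") as w:
    w.setnchannels(1)     # mono
    w.setsampwidth(2)     # 16-bit samples
    w.setframerate(SR)
    w.writeframes(struct.pack("<" + "h" * len(samples), *samples))
```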

Audio 2.1: Sound of cicadas recorded at the botanical garden in Trang, Thailand.

I used the sound of cicadas to create an abstract drama based on a bizarre experience I had when visiting a botanical garden in Trang, Thailand. Compared to other places in Thailand, Trang is not a touristic town, and I stayed there for two nights in transit between Bangkok and the island of Koh Muk. When I arrived at the botanical garden, I saw no tourists other than my partner. The garden was huge and preserved the wildness of nature, with very tall trees like a jungle, so that it was hard to see what was ahead of me. At one point I sensed a high-pitched, almost electrical noise surrounding me, which made me wonder whether it really was cicadas rather than electrical noise from nearby telephone poles. Listening carefully, I found that the cicadas were producing different pitches, like a low-cut-filtered white noise and then a unified, sharp, high-pitched sound in oscillation (Audio 2.1). This was a very different sound from that of the cicadas I had heard in Korea in my childhood. The deeper we walked into the garden, following a very narrow path between huge plants, the more the sound of the cicadas built up a tension in my mind. Suddenly my partner was stung by a bee, and we saw a sign for the swamp forest with the warning ‘Beware of poisonous animals’. Since there was no one else around besides us, I was afraid to go deeper into the forest, and we turned back the way we had come. The sound of the cicadas made me imagine far more dangerous things than could possibly have been waiting for me beyond the warning sign in this artificial garden. Based on this memory, I wanted to recreate that strange, almost surreal moment with the image of the tree and the goat’s eye.

As the background image changed, I made the sound suggest being under water: because I had added an effect that moved the image like the surface of wavy water, the photographs of rocks appeared blurred, as if submerged. Finally, for the glitch part, I added sounds I had used sporadically in the previous sections, as well as a swirling sound for the fast transitional movements on screen.
