
Chapter 1. Study for Choreographic Sound Composition

As a consequence, debates and criticisms followed regarding the usage of technology. How could its use “enlarge dance as a historical and cultural practice”, and what kind of aesthetics could arise from gesture-driven computer music in dance (Salter, 2010: 263)? Scott deLahunta (2001) argues that in the field of computer music, learning a new musical instrument has been assumed to be a form of dance training. Julie Wilson-Bokowiec and Mark Alexander Bokowiec (2006: 48) point out that mapping sound to bodily movement has been described in utilitarian terms: “what the technology is doing and not what the body is experiencing”. According to Johannes Birringer (2008), developing interactive systems from this utilitarian perspective creates a “disjuncture” between movement data and the resulting media, whether image or sound. This is because the system requires performers to learn “specific physical techniques to play the instruments of the medium”, which dancers find hard to regard as part of the “intuitive vocabulary” they have gained through their physical and kinaesthetic practice (Birringer, 2008: 119). Discussions about creating musical instruments remain valuable to the development of interactive systems. However, I find that this narrow focus on the gestural or postural articulation of technology misses the aesthetic concerns of creating choreography with dancers.

Wilson-Bokowiec and Bokowiec (2006) provide honest insights into their Bodycoder System (Figure 1.1), a musical interface with sixteen bend sensors that can be placed on any flexing area of the body and a pair of gloves designed as switches. Similar glove-based interfaces had been used previously in Mattel’s Power Glove (1989), built for the Nintendo Entertainment System, and in the Lady’s Glove (1994) by the composer Laetitia Sonami, which captured sophisticated finger movement. Winkler (1997) also began his research into movement by observing hand and finger gestures to inform the design of musical instruments. Wilson-Bokowiec and Bokowiec (2006: 50) write that their initial idea of adopting physical techniques from contemporary dance seemed logical, but they abandoned it soon after realising that the system was associated with “specific economic movements”, like playing an instrument. In interactive dance and music collaborations, the dominant compositional approach has been to translate gestures into sonic results. This translation is usually initiated by composers and computer scientists, with their own interpretations of movement qualities, and then realised by dancers. Unfortunately, given the limits of time and budget, it is rarely possible to collaborate with dancers throughout the entire composition process to find out which sounds feel most suitable for controlling synthesis parameters across the dancers’ diverse range of movements. Composers have therefore mostly sought to capture the most natural and precise movements, preserving dancers’ free motion for movement analysis. However, I believe this effort ironically caused a disjuncture in the sonification of movement for some dancers, because the assumed mapping scenarios and interpretations were related not to their dance vocabularies but to an engineering perspective.

Figure 1.1: The Bodycoder System (Wilson-Bokowiec and Bokowiec, 2006: 50)

Here, two research questions arise:


1) How can my interactive sound system aid collaboration by encouraging dancers to use their intuitive vocabulary, rather than simply demanding that they learn the technological and musical functions of the interface?

2) Once I have decided on the sounds to be used in a piece, how should I direct dancers to create choreography, as well as sound composition, with my interactive system?

I decided to adopt a more rigorous approach to integrating interactive systems into the creative processes of sound and dance, rather than merely receiving movement data to control my sound synthesis. The resulting performances investigate ways to carefully structure the relationship between music and dance when interactive systems are involved in the creative and performance processes. To situate my work within a research perspective, I undertook a literature review of papers focusing on dance or choreography from the International Conference on New Interfaces for Musical Expression (NIME), the International Computer Music Conference (ICMC), and Sound and Music Computing (SMC) from 2001 to 2016 [2] to find out what other approaches have evolved since the interactive dance scene of the 1990s. When I found interesting approaches in these conference proceedings, I used the papers’ bibliographies to follow up ideas in the cited publications.

1.1. Reviewing recent interactive dance and sound collaboration

Composer Todor Todoroff writes electroacoustic music for dance and theatre, and his research focuses on developing sensors and gesture-based interactions to control sound synthesis. His research also started in the 1990s and has been presented in the NIME, ICMC, and SMC communities. In recent years, he and his research team have developed wireless motion-tracking sensors and algorithms for stereoscopic cameras, which enable a fast setup for dancers on stage (Todoroff, 2011). The systems were used in the project FireTraSe to control the patent-pending fire ramps designed by the pyrotechnician Pierre D’haenens. Twenty fire ramps were placed in a row and were activated as a dancer moved around behind them. Later, Todoroff combined the project with his Dancing Viola project, in which the viola player controlled the fire while playing the instrument. The mapping of sound and movement was relatively simple: higher notes were placed towards the left, lower notes towards the right, and the amplitude of the sound determined the height of the flames. The project proved that the stereoscopic camera algorithm and the wireless sensors were stable, fast, and easy to set up. The system also seemed to work well aesthetically with the viola player because her movement was already constrained by holding and playing the viola. However, looking at the test with the dancer, [3] it is hard to see how the system and the fire ramps are integrated into the creation of choreography, rather than acting simply as an additive effect on stage.
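To make the simplicity of this mapping concrete, the following minimal Python sketch reimplements the described logic. It is an illustration under my own assumptions, not Todoroff’s actual code: the ramp count comes from the project description, but the MIDI note range and amplitude scaling are hypothetical.

# A minimal sketch of the Dancing Viola mapping described above.
# The note range and amplitude scaling are assumptions for illustration.

NUM_RAMPS = 20  # fire ramps arranged in a row, left to right

def note_to_ramp(midi_note, low=48, high=88):
    """Map pitch to a ramp index: higher notes towards the left (index 0)."""
    pos = (midi_note - low) / (high - low)   # normalise pitch into 0..1
    pos = min(max(pos, 0.0), 1.0)
    return int(round((1.0 - pos) * (NUM_RAMPS - 1)))

def amplitude_to_height(rms, max_rms=0.5):
    """Map signal amplitude to a flame height between 0 and 1."""
    return min(rms / max_rms, 1.0)

# Example: a loud, high note raises a tall flame near the far left ramp.
print(note_to_ramp(86), amplitude_to_height(0.4))  # -> 1 0.8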

[2] I chose this period because the survey was conducted in 2016 and I restricted the search strictly to papers published in the twenty-first century.

Based on his research into the choreographer Doris Humphrey’s classification of rhythms in dance, Carlos Guedes (2007) created Max objects that can extract rhythmic information from dance movement captured with a video camera. Capturing data and analysing patterns to create art became an established method as art research combined with Human-Computer Interaction (HCI) (Polotti, 2011). Within this rather scientific approach to human movement, I noticed that some researchers tried to capture even more sophisticated data from dancers using physiological sensing. For example, Jeong-seob Lee and Woon Seung Yeo (2012) captured dancers’ respiration patterns to improve the correspondence between music and dance, and Javier Jaimovich (2016) used electrocardiography and electromyography to reflect the biology of emotion in music. Nevertheless, these analytical approaches to evaluating the relationships between music and dance still caused me to ask where choreographers might place their aesthetic decisions during the compositional process.
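To clarify the kind of analysis at stake, the sketch below shows, in Python rather than as Max objects, one common way of deriving rhythmic information from camera input: frame-differencing yields a motion-energy signal whose autocorrelation peak suggests a movement tempo. This is a hedged illustration of the general technique, not a reconstruction of Guedes’s objects; the frame rate and tempo range are assumptions.

# A minimal sketch of rhythm extraction from video, assuming frames is a
# NumPy array of greyscale frames with shape (time, height, width).
import numpy as np

def motion_energy(frames):
    """Sum of absolute pixel differences between consecutive frames."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return diffs.reshape(len(diffs), -1).sum(axis=1)

def estimate_tempo(energy, fps=30, min_bpm=40, max_bpm=200):
    """Pick the autocorrelation peak within a plausible tempo range."""
    x = energy - energy.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..len-1
    lo = int(fps * 60 / max_bpm)  # shortest lag considered
    hi = int(fps * 60 / min_bpm)  # longest lag considered
    lag = lo + int(np.argmax(ac[lo:hi]))
    return 60.0 * fps / lag       # estimated beats per minute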

The research I found most interesting was the empirical study by Anna Källblad et al. (2008) for their interactive dance installation for children, which was developed in several steps. First, they observed children’s movement in a free space with different types of music. Second, contemporary dancers choreographed 23 minutes of dance based on the children’s movement. The interesting part of this study was that the movement analysis was based not on theories of gesture but on a concrete resource captured in advance. The analysis of the children’s movement became the choreographic challenge; the researchers found that there was “no expression of anticipation, planning or judging” in the children’s movement, whereas the adult dancers found it very hard to move with the same intent (Källblad et al., 2008: 129). Third, the composer analysed the finished choreography to find rhythmical and spatial patterns and themes, and then composed music in six sections with different characters. Fourth, the music was decomposed based on those sections. Each section was further broken down according to the temporal and spatial analysis and installed as an interactive installation; in this way, the decomposed pieces were “choreographed” into the room (Källblad et al., 2008: 130).

The term interactive dance typically refers to dance works created with an interactive system that perceives movement data from the dancer in real-time to produce events in other media, such as sound or visuals. In turn, the sonic or visual results affect the creation of the choreography. The term has been in frequent use since the genre of dance and technology, or dance tech, emerged at the end of the 1990s, seeking to use newly developed tools “to reinvent the perceptual and ontological role of dance in the context of a digital zeitgeist” (Salter, 2010: 261). The origin of interactive dance can be traced back to John Cage and Merce Cunningham’s collaboration Variations V in 1965, yet vigorous research on developing wearable or camera-based motion-tracking sensors has only been conducted by a larger number of composers since the 1990s. For instance, Todd Winkler created interactive dance works with Max using the analysis of gestural movement and music (Winkler, 1995a) and published a pedagogical book on interactive composition (Winkler, 1998), and Wayne Siegel developed a wearable motion-tracking interface using flex sensors in collaboration with contemporary dancers (Siegel and Jacobsen, 1998). Because of its use of technology, interactive dance has also attracted scientific, engineering, and computing research centres looking for artistic and real-world applications (Salter, 2010: 262–263). One example is the EyesWeb system, which uses gestural analysis of emotional and expressive values and was developed by Antonio Camurri and his research team at InfoMus, University of Genoa, within the European Union-sponsored MEGA project (Camurri, 1997). The excitement around the genre became obvious when the entire Dialogue section of the Spring 1998 volume of the Dance Research Journal was dedicated to the discussion of dance and technology, with both Richard Povall and Robert Wechsler writing on the subject.

[3] A video clip of the project development is available at: https://vimeo.com/198146154 (Accessed: 12th February 2018).


[4] Available at: http://www.music.mcgill.ca/~mallochj/media/gestes_promo_vimeo.mp4?_=1 (Accessed: 11th April 2018)

[12] An excerpt of the first act of Eidos : Telos is available at: https://www.youtube.com/watch?v=Q237dffzzxo (Accessed: 17th October 2018).


[13] See 1:08–1:45 of the first act excerpt: https://www.youtube.com/watch?v=Q237dffzzxo (Accessed: 17th October 2018).

[14] See 6:57–7:47 of the first act excerpt: https://www.youtube.com/watch?v=Q237dffzzxo (Accessed: 17th October 2018).

Another interesting work involves the prosthetic instruments designed by Ian Hattwick and Joseph Malloch (2014). Although the dominant perspective of Malloch’s (2013) thesis was an engineering one, as its purpose was to design instruments usable by professional dancers, the design process was carried out through frequent workshops with the choreographer Isabelle Van Grimde and her dance troupe Van Grimde Corps Secrets. They were aware that dancers, unlike musicians, predominantly create movement within a visual domain, and they took the dancers’ advice when deciding on the appearance and materials of their instruments (Malloch, 2013). I found their Spine instrument [4] (Figure 1.2) for the performance Les Gestes (2011–2013) remarkable because it provoked the dancers to create choreography in terms of the relational movement between their head and lower back, which in turn played the instrument. This way of triggering an interactive system with wearable motion-tracking sensors is uncommon, as sensors are usually placed on the limbs or their joints to receive more natural movement and so preserve the dancers’ freedom of motion (Malloch, 2013).
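The relational principle can be stated compactly: the control value is derived from the relation between two body points rather than from a single limb. The sketch below is a hypothetical Python rendering of that principle; the orientation vectors and the bend-to-pitch mapping are entirely my own assumptions, not the Spine’s actual implementation.

# A minimal sketch of a relational mapping: sound is driven by the angle
# between two body points (head and lower back), not by one limb's position.
# All vectors and the pitch mapping are hypothetical.
import math

def angle_between(u, v):
    """Angle in radians between two 3D orientation vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(a * a for a in v))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def bend_to_pitch(head_vec, back_vec, base_hz=110.0, span_octaves=2.0):
    """Map the head/lower-back bend angle onto a pitch in Hz."""
    bend = angle_between(head_vec, back_vec) / math.pi  # normalise to 0..1
    return base_hz * 2.0 ** (bend * span_octaves)

# Example: an upright spine (aligned vectors) gives the base pitch;
# curling the head forward relative to the lower back raises it.
print(bend_to_pitch((0.0, 1.0, 0.0), (0.1, 0.9, 0.1)))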

I also searched for interactive dance works led by composers outside the NIME, ICMC, and SMC communities. One trend in media art in the twenty-first century is projection mapping. This can be seen in the interactive dance works made by the composer and visual artist Klaus Obermaier with Ars Electronica Futurelab. Obermaier’s work Apparition, [5] premiered in 2004, showed two dancers dancing with a massive projection landscape and used a camera-based motion-tracking system to project visuals onto the dancers’ bodies in real-time. The project’s distinctive approach was that the projection did not behave independently but was influenced by the movement in conjunction with the properties of the dancer and the system (deLahunta, no date). Another collective creating interactive dance works with projection mapping is the collaboration between the musician Daito Manabe and Rhizomatiks Research, whose artistic ideas are realised commercially for the mainstream market. Manabe used ideas developed by Obermaier (Dauerer, 2014), as can be seen in the work Cube (2013), [6] choreographed by MIKIKO. In the work Border (2015), [7] the dancers performed with 3D virtual dancers created with “massive amounts of movement data [collected] using motion capture, Kinect controllers and sensors to track dance movements” (Dauerer, 2014). Their audiovisual work also showed another trend: tight synchronisation between computer-generated visuals and sound, recalling the aesthetics of Ryoji Ikeda and Ryoichi Kurokawa, combined with the visuals of augmented reality.

Both collectives present mesmerising, large-scale dance works with technology, but the choreographies were made to perform “with” the technology as a potential partner (deLahunta, no date). I found a more interesting approach to provoking new choreographic materials in Obermaier’s earlier non-interactive works D.A.V.E. (1998–2000) [8] and VIVISECTOR (2002), [9] in which the combination of precise choreography and images projected onto the moving dancers’ bodies created an uncanny visual experience. The non-interactive biosignal technology from Manabe’s earlier work Electric stimulus to face (2009) [10] was reused in Rhizomatiks Circle, [11] a promotional video for both Rhizomatiks Research and Nike trainers, where the creepy look of the electric wires attached to the hip-hop musician’s face worked well with the narrative of the music video.

Amongst interactive musical instrument and dance collaborations, I find the work Eidos : Telos (1995) [12] by the choreographer William Forsythe and the Studio for Electro-Instrumental Music (STEIM) composer Joel Ryan the most interesting, even though it was developed at the very beginning of the period of experimentation in interactive musical synthesis with computers in the 1990s. Across the stage, a net of massive steel cables is amplified by contact microphones, becoming a large-scale sonic instrument when plucked by the dancers (Figure 1.3). The choreography was composed around the steel cables; at one moment, one dancer danced in front of the cables while a group of dancers danced in lines behind them. [13] The stage lighting was set to become dimmer when the dancers stood behind the cables. Later, the group of dancers came in front of the cables and joined the solo dancer. At another point, one dancer danced in front of the steel cables while another, in a black costume, held a panel and scratched the cables while moving across the stage from side to side. [14] The instrument was “audio scenography: the replacement of visual scenography with a continually transforming audio landscape” and showed “the shifting of dance music composition in Forsythe’s work towards the design of total acoustic environments” (Salter, 2011: 57–58). Unfortunately, Ryan’s initial idea of using wearable acceleration sensors to control both the signal processing applied to a violin and the lights in the Frankfurt Opera House auditorium was not realised, because of unstable communication between the STEIM-built sensor device and the house lighting console (Salter, 2001: 71). However, the instrument created the kind of simple, modern-looking scenography, free of superfluous technological aesthetics, that Forsythe usually seeks in his works, and it acted as the work’s core compositional as well as dramaturgical strategy.

[5] Available at: https://www.youtube.com/watch?v=EjzzgoJRlag&t=57s (Accessed: 10th April 2018)


[6] Available at: https://www.youtube.com/watch?v=zBm3mJiJzh8 (Accessed: 10th April 2018)


[7] Available at: https://www.youtube.com/watch?v=gpE20khn8R0 (Accessed: 10th April 2018)

Figure 1.2: The Spine instrument designed by Joseph Malloch and Ian Hattwick.

The common aspect of the two projects by Källblad et al. and Malloch et al. was that their collaborating dancers provided significant creative input towards the completion of the practical works, rather than simply trying out the interactive systems. The main technical focus was not to extract more accurate motion-tracking data or movement patterns in order to produce generative music through gesture analysis, but to integrate their collaborators’ choreographic ideas with their research outcomes.

[8] Available at: https://www.youtube.com/watch?list=FLbnYNVoSbjcUoxinI4Jyxug&v=1bhNjYTQFQY (Accessed: 10th April 2018)


[9] Available at: https://www.youtube.com/watch?v=VtY-Ymval8M (Accessed: 10th April 2018)


[10] Available at: https://www.youtube.com/watch?v=pLAma-lrJRM&t=120s (Accessed: 10th April 2018)


[11] Available at: https://www.youtube.com/watch?v=mnX6xU2EwJY (Accessed: 10th April 2018)

Figure 1.3: From the first act, Self Meant to Govern, of Eidos : Telos.