Why animated, graphic notation? (Part 1: Reading a line as music notation)
Fig 1. Diagram showing the representation of sound and the performance moment in my notation, with the image moving right to left through time.
Fig 2. A diagram showing a similar arrangement, but with the image moving from top to bottom rather than from right to left.
Fig 3. A screenshot of a section of the score for The Majority of One (2016) (for four sustaining instruments and room feedback) in the Decibel ScorePlayer. This is part of the section between 8’28” and 9’10” where the score is moving but seems static, like a photograph: this still image looks the same as the score does in motion.
I thought it would be useful to write about how my compositional tools speak to my core compositional interests, given how intertwined, and ongoing, they are. Since my first graphically notated composition, Kingdom Come, was written and performed 17 years ago, I have engaged fundamentally the same tools - animated, graphic notation - to write music.
I have written elsewhere about why I use graphic notation - notating glissandi, making the notation look more like the music should sound, providing access to musicians from a wide range of styles, notating electronic sounds, and providing an openness that enables more performer contributions. But why put it in motion? Putting an image in motion began as a solution for coordinating musicians, but it has now gone beyond that. It has become a quest to understand the relationship of sound to drawing, colour and time.
This need for coordination began with the desire to read a line as music notation. A line seemed to me the perfect expression of a clear, sustained, long tone on any instrument - a core interest in my early works. How do I show the duration of a line, whilst providing some approximation of its pitch and dynamics?
Pitch was easy - I didn’t want to provide an exact, tempered note, as I wanted harmony (or the lack of it) to evolve from performer choice. But I did want some degree of structure in pitch relationships, without relying on grids or similar techniques that break space into equidistant parts. Drawing on early sketches of musical ideas by Krzysztof Penderecki, Iannis Xenakis and György Ligeti, as well as some of their published scores and the Free Music notation of Percy Grainger, the proportional score format was ideal. The vertical plane represents pitch; the horizontal plane is time. This comes a little undone with regard to volume, represented in my scores by line width, which I try to overcome by treating the centre of the line as the core pitch. I arrived at colour to identify the instrument after trying a few different approaches. A vertical “playhead,” like on a tape machine, indicates the point of performance through time. I have tried to illustrate these concepts in Fig 1. Sometimes this format can be rotated 90 degrees to accommodate a more spatial approach to performance, as seen in Fig 2 - the biggest change being that the horizontal parameter can then denote instruments (e.g. in a percussion set-up) or a spatial approach to pitch (as relates to the piano keyboard, for example). I note how a change of volume on the line creates a shape that is, effectively, no longer a line - more on this in part II.
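To make this mapping concrete, here is a minimal sketch of how a drawn line might be read as sound parameters. It is purely illustrative - not how the Decibel ScorePlayer actually works - and it assumes a linear vertical-position-to-frequency mapping and an arbitrary width-to-amplitude scale, both of which are my own stand-ins:

```python
# A hypothetical sketch of reading a proportional score line as sound.
# Assumptions (not from the actual software): vertical position maps
# linearly onto a frequency range, and line width maps onto amplitude.

def line_to_sound(points, canvas_height, f_low=50.0, f_high=2000.0):
    """Convert a drawn line into (time, frequency, amplitude) triples.

    points: list of (time_sec, y_px, width_px) samples along the line,
            where y_px = 0 is the top of the canvas.
    """
    events = []
    for t, y, width in points:
        # The centre of the line gives the core pitch; invert y so "up" = higher.
        position = 1.0 - y / canvas_height
        freq = f_low + position * (f_high - f_low)
        # A wider line is louder; normalise width to a 0..1 amplitude.
        amp = min(width / 40.0, 1.0)
        events.append((t, freq, amp))
    return events

# A line that holds its pitch while swelling in width (i.e. a crescendo):
events = line_to_sound([(0.0, 300, 10), (1.0, 300, 20)], canvas_height=600)
```

The untempered, continuous frequency mapping is the point: nothing in the canvas snaps to a grid, so pitch relationships stay proportional rather than equidistant.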
Movement, or animation, provided a duration for the line in this pitch/time space I was using as the notation canvas. Not all instruments can play a plain, straight line. A synth or sine tone generator can, perfectly; any string or wind instrument can attempt to, if the player can hide the sound of bow changes or circular breathing and avoid colouring techniques like vibrato. Pianists and percussionists have to radically change their approach to the instrument to do so. Electric guitarists can use effects pedals. I am often called on to help performers achieve the effect of a line in my work.
After a while, I became sonically fascinated by what lines mean when they look still but are actually moving through time - when the start or end is not visible. This first happened in The Earth Defeats Me (2014) for two instruments with partition concrète - but the interest of that electronic part masked the stasis, and the dotted reference line reminded the performers there was movement (Fig 2). I tried it again in The Majority of One (2016), which has a feedback tone in the electronics that also remains static (Fig 3). Can sound ‘seem’ still - liberated from the need to travel through time? How long does it take before it feels like that? These are questions that have been asked by composers such as La Monte Young (see Fig 4) and many other Western composers since. Joanna Demers describes this effect as ‘maximal’, because it tests the physical limitations of listeners through what she calls “excessive durations and volumes” (Demers, 2010, p. 91). But what if the combination of pitches doesn’t create beating (as the La Monte Young example likely will) or other sonic ‘effects’ for the listener (and performer) to follow? It requires a different kind of listening, one that accepts long-form stasis as a way to consider time.
I sometimes use lines as action notation, where the line describes when to start and end something that stays the same in the middle. This is a particular feature of my recent piano music, where I use colour ranges (i.e. reds and blues) for fingers on different hands, and the line simply shows when to put the finger on the key and when to take it off. The line is no longer an accurate description of the sound - the fade of the piano after the hammer attack is not drawn in. This became a utilitarian solution that I first used in Kaps Freed (2014), as seen in Fig 5, and is also used in We Have Become Kin (2019) and The Quiet Friend (2023). It grew out of ideas I explored to create sustained effects on the piano in Miss Fortune X (2012). Other piano music, such as Chunk (2010), uses shapes that are performed linearly. This forms the focus of part II of these blogs, where I will look at how putting images in motion can transmute pitch through time, and how that changes when lines become shapes.
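To illustrate this action reading of a line, here is a minimal sketch - again illustrative only, with an assumed colour-to-hand convention that is my own stand-in, not the notation’s actual rule set:

```python
# A hypothetical sketch of line-as-action notation for piano:
# each line marks only when a finger goes down on a key and when it
# comes off, with colour (assumed here: red = right, blue = left)
# distinguishing the hands. The resulting sound is not encoded.

def lines_to_key_actions(lines):
    """lines: list of (start_sec, end_sec, key, colour) tuples.
    Returns a time-ordered list of (time, action, key, hand) events."""
    actions = []
    for start, end, key, colour in lines:
        hand = "right" if colour == "red" else "left"  # assumed convention
        actions.append((start, "press", key, hand))
        actions.append((end, "release", key, hand))
    actions.sort(key=lambda a: a[0])
    return actions

# Two overlapping held keys, one per hand:
score = [(0.0, 4.0, "C4", "red"), (1.0, 3.0, "G3", "blue")]
actions = lines_to_key_actions(score)
```

Note what is deliberately missing: there is no amplitude envelope, because the piano’s decay after the attack belongs to the instrument, not to the drawn line.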
A quick note about my notation system - it was created in Decibel new music ensemble rehearsals, when I came in with graphic scores that I couldn’t work out how to coordinate in performance. The members of the ensemble worked with me to find a way - first using a kind of prototype in Max/MSP, a solution I used for some 20 pieces, before we moved to the Decibel ScorePlayer on tablets, which plays files made in the Decibel Score Creator. I have written over 80 works in this format. I get pushback for using ‘proprietary’ software, but it is just a fancy video player that can highlight different elements on the fly - a musician’s score-movie player. Most of my scores could be read as a film, though some parts would be hard to see, and it is hard to coordinate the start accurately.