News

ISEA2011 PANEL
Chasing Ghosts: Reactive Notation and Extreme Sight Reading

Chair: Arthur Clay
2nd Chair: Jason Freeman

Over the last decade, a growing number of composers have begun to use what is known as real-time notation in their work, and many have developed diverse systems to facilitate its use in all types of performative situations.

Real-time music notation includes any notation, either traditional or graphic, which is formed or created during the actual performance. Other terms such as dynamic music notation, live scoring, virtual scoring, and reactive notation have also been used to describe the same process.

This panel event seeks to convey the excitement of current real-time notation practice to the public by presenting work done in the area by prominent composers, musicians, and researchers. The presenters will explore key issues behind virtual scoring and real-time notation from technical, musical and design perspectives and provide an overview of the various approaches, their systems, and the styles of music that have emerged from them.

Relevant works from the past and present will be discussed to show how real-time notation relates to earlier experimental methods in open-form and malleable musical scores and in computer-assisted composition, and to explore the connections and boundaries among composers, performers, and the audience.

Participants from the planned accompanying workshop Interactive Music Basics & RealTime Scoring will join the panel to discuss their experiences using software and hardware tools to create real-time notation systems, as well as the challenges they face as interpreters of extreme sight reading.

Above all, the organizers of the panel hope that the events will spark interest and discussion, furthering the development of a community of practice around virtual scoring and real-time playing, raising awareness of this new area within contemporary music circles, and attracting new people to this exciting field.

Contact emails: arthur.clay@inf.ethz.ch
jason@jasonfreeman.net
Papers
Aspects of Realtime Scoring & Extreme Sight Reading
by Art Clay

The relationship between the composer of a work and its interpreter can often be deduced from the type of score at hand and how it is notated. Over the history of Western music, various degrees of freedom have been given to and taken away from the interpreter. In early music styles, interpreters embellished melodies with simple to elaborate ornamentations and improvised cadenzas; in contemporary music the amount of freedom an interpreter is given varies, but is all too often very restricted. The following paper introduces, in a step-by-step manner, the concept of malleability in score making and reading. Key concepts are illustrated with examples of various score types, ranging from malleable paper scores to interactive screen-based ones.
Real-time Notation, Text-based Collaboration, and Laptop Ensembles
by Jason Freeman

For me, dynamically generated music notation is a powerful mechanism for connecting people to each other through interactive music systems. Much of my artistic work is concerned with exploring new relationships among composers, performers, and listeners (often blurring those categories beyond recognition). Over the last six years, I have incorporated dynamic music notation into my compositional practice in three ways. In live performance, I use real-time notation to link the creative activities of audience members to the music performed by instrumental musicians in real time, creating a continuous feedback loop between performers and audience members throughout the performance. On the Internet, I have used dynamically generated scores to incorporate the ideas of web-site visitors into future concert performances of works. And with laptop orchestras, I have used real-time notation to link the activities of improvising laptop musicians to the music played by instrumental musicians and to share this process with the audience. I bring considerable expertise to the panel in terms of design challenges and potential solutions, technical platforms and implementations of real-time notation, and aesthetic and historical perspectives on its use.
Music, Typography, Semiotics
by Andrea Valle

The proposed contribution discusses the design and implementation of the “Dispacci” (“Dispatches”) project, a musical performance. Most examples of real-time notation for live use rely on monitors or laptops as devices for visually displaying instructions to the performers. This is indeed an optimal solution, as GUI elements are integrated into software environments and the generation of graphic elements is a straightforward process. From a hardware perspective, laptop networks are also a common solution and can easily implement interactive music stands. In Dispacci, the idea is to take a step back, or at least to slow down the pace of real-time generation. A set of printers is connected through a network, and the performers are distributed over a larger space than is usual for an ensemble. Each performer is associated with a printer. During the performance, players are sent “dispatches”, i.e. instructions for music playing (generated following different algorithmic strategies) that pop up as sheets printed by the printers. In this way, real-time notation is coupled with the old medium of paper and with a theatrical dimension that goes beyond laptop performing. The paper describes both technical and aesthetic aspects of the project.
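The abstract above describes the dispatch architecture only in prose. As a hedged illustration, the sketch below shows one generic way algorithmically generated playing instructions could be routed to networked printers over the common raw-printing port 9100; the printer addresses, the performer-to-printer mapping, and the generation rule are assumptions for illustration, not the project's actual implementation.

```python
# Illustrative sketch only: one way "dispatches" could be generated and sent
# to networked printers. Addresses, port use, and the generation strategy are
# assumptions, not the Dispacci project's actual code.
import random
import socket

# Hypothetical mapping of performers to networked printers (raw printing, port 9100).
PRINTERS = {
    "violin": ("192.168.0.11", 9100),
    "clarinet": ("192.168.0.12", 9100),
}

PITCHES = ["C4", "E4", "G4", "Bb4", "D5"]
DYNAMICS = ["pp", "mp", "f"]


def generate_dispatch(performer: str) -> str:
    """Generate a short playing instruction using a simple random strategy."""
    pitch = random.choice(PITCHES)
    dyn = random.choice(DYNAMICS)
    dur = random.choice([2, 4, 8])
    # Trailing form feed asks the printer to eject the page in raw text mode.
    return f"DISPATCH for {performer}:\nplay {pitch}, {dyn}, hold {dur} seconds\n\f"


def send_to_printer(performer: str, text: str) -> None:
    """Send plain text to the performer's printer over a raw TCP socket."""
    host, port = PRINTERS[performer]
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(text.encode("utf-8"))


if __name__ == "__main__":
    for performer in PRINTERS:
        send_to_printer(performer, generate_dispatch(performer))
```

In a real setting the generation strategy would of course be more elaborate, and the same routing idea could drive any number of printers distributed around the performance space.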
Elements and Design for Real-Time, Interactive Performance
by Justin Yang

The advent of real-time animated notation has introduced compellingly novel possibilities for composers and performers, and, not unlike Cage's use of the I Ching or Nancarrow's explorations of the player piano, these possibilities have moved beyond the field of notation to create new paradigms for composition and new modes of performance practice. In this article I will outline several areas that I have explored in my work: the real-time creation of musical material; the crafting of sonic elements through collaboration among performers, the performance context (data and connective infrastructure), composers (architects), and the composer(s)/score and performer(s); hyper-interactivity, that is, augmenting the interactive context and exploring the ramifications of real-time interaction; and graphic/animation design for performance, outlining a framework of graphic and animation properties and how they translate into musical performance.
Interpreting Reactive Notation and Extreme Sight Reading
by Basak Dilara Ozdemir

One can approach sight reading and real-time score reading from different points of view, but the basic idea behind both is actually very similar. When we examine sight reading, it is as important a level of inquiry as harmony, structure, and orchestration, and the tempi of the pieces may vary depending on the century in which the music was written. This paper's contribution to the panel will focus on the reading of notated and non-notated music. Real-time music includes surprises and, as in aleatoric music, these occur automatically, since the outcome of the interpretation may vary between performances depending on the parameters given at the moment. Real-time scores may embrace diverse styles, as there is no obligation to make only contemporary or classical music; there are no boundaries for writing the music and interpreting it.
Computer Music, Music Languages, Live Coding
by Thor Magnusson

My contribution to the panel will be to present live coding as a new path in the evolution of the musical score. I will argue that live coding practice accentuates the score and, whilst being the perfect vehicle for the performance of algorithmic music, also transforms the compositional process itself into a live event. As a continuation of 20th-century artistic developments of the musical score, live coding systems often embrace graphical elements and language syntaxes foreign to standard programming languages. I will show how live coding, as a highly technologized artistic practice, is at its core still a scoring practice, one that is able to shed light on how non-linearity, play, and generativity will become prominent in future creative media productions. I might demonstrate some live coding systems and show videos of key performers at play.
Gesture Control – Score Translator
by Marek Choloniewski

The project Passage combines a series of interactive score actions that resulted in a parallel series of sight-reading performances. In Passage, the choice of sound material is made by means of pre-defined methods of selection (i.e. algorithms), which in fact deprive the author (in the positive sense of the word deprive) of the possibility of making his own individual decisions concerning the specific elements of the musical work. The choice thus depends on the method of selection which, if applied strictly and consistently, will form well-defined musical structures. In this way, the author partly frees himself from his own individual preferences and enters the realm of experimentation, which, as in the case of Passage, was subjected to rigorous control. The composition itself is a sum of my various experiences collected during many years of work on algorithmic computer compositions in which the division between instrumental and computer music was abolished, as were the boundaries of style and sound material with reference to the principal elements of the musical work.
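As a minimal sketch of the general idea, and assuming a hypothetical pitch pool and selection rule (not the algorithms actually used in Passage), the following shows how a pre-defined selection method, applied strictly and consistently, yields a well-defined structure without moment-to-moment decisions by the author.

```python
# Minimal sketch: a pre-defined, strictly applied selection method chooses the
# sound material instead of the composer. Pool and rule are hypothetical.

PITCH_POOL = ["C", "D", "Eb", "F#", "G", "A", "Bb"]


def select(step: int) -> str:
    """Deterministic selection rule: the index is derived only from the step number."""
    index = (step * 3 + 1) % len(PITCH_POOL)  # fixed rule, applied consistently
    return PITCH_POOL[index]


# Applied consistently over a series of steps, the rule forms a well-defined structure:
structure = [select(step) for step in range(8)]
print(structure)  # ['D', 'G', 'C', 'F#', 'Bb', 'Eb', 'A', 'D']
```

The point of the sketch is simply that once the rule is fixed, the resulting structure follows from the method of selection rather than from the author's individual preferences.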
Real-time Composition, Real-time notation, Spectral Composition
by Georg Hajdu

Schwer…unheimlich schwer (difficult…incredibly difficult) is a piece for bass clarinet, viola, piano and percussion about German Red Army Faction member Ulrike Meinhof. It zooms in on the moment when, in a TV interview, she talks publicly about the fate of politically active women and expresses the possibility of leaving her children in order to pursue her interests. In this piece, based on the transcription of Meinhof's speech and composed in real time by a computer (which sends parts onto computer screens), the difficulty of extreme sight-reading, including microtones and large leaps for viola and clarinet as well as complex harmonies for marimba and piano, conveys a sense of the dilemma that Meinhof is clearly experiencing. The real-time composition was done with MaxScore, a composition and notation environment developed by Nick Didkovsky and the author.
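MaxScore itself runs inside Max and its API is not shown here; as a hedged sketch of the general workflow, the following illustrates how a real-time composition process could push generated note events (including quarter-tone pitches) to notation displays on the players' screens over OSC. The display addresses, ports, and message format are assumptions for illustration, not the piece's actual implementation.

```python
# Hedged sketch, not MaxScore itself: pushing generated note events to
# per-performer notation displays over OSC. Requires the python-osc package.
import random
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical displays, one per performer's screen.
DISPLAYS = {
    "viola": SimpleUDPClient("127.0.0.1", 9001),
    "bass_clarinet": SimpleUDPClient("127.0.0.1", 9002),
}


def generate_note() -> list:
    """Generate one microtonal note event: pitch in MIDI cents, duration, dynamic."""
    midi_cents = random.randint(48, 84) * 100 + random.choice([0, 50])  # quarter-tones
    duration_ms = random.choice([250, 500, 1000])
    dynamic = random.choice(["pp", "mf", "ff"])
    return [midi_cents, duration_ms, dynamic]


for performer, client in DISPLAYS.items():
    # Each display would render the incoming event as notation for sight reading.
    client.send_message("/score/note", generate_note())
```

In a piece like the one described above, the generation step would follow the compositional logic derived from the speech transcription rather than random choices; the sketch only shows the delivery of parts to screens in real time.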
Animated Graphic Notations
by Shane Mc Kenna

Since the 1950s, composers such as John Cage, Cornelius Cardew, Morton Feldman, and many since have been putting ideas of notational reform into practice. These ideas not only challenge the deterministic nature of traditional notation but also reflect an alternative philosophy behind the creation of music. The use of graphic notation requires a change in the composer-performer relationship and questions the traditional concept of musicality, creating opportunities for more accessible music making for amateur musicians. This essay discusses the author's use of animated graphic notation to encourage collaborative music making for a wide range of performers with different musical backgrounds and levels of experience. This includes an examination of research carried out by the author through an interactive installation that gives an insight into immediate vocal interpretations of moving shapes and symbols by a range of professional and amateur musicians. Understanding the common human associations between visual parameters and musical sounds is an important factor in creating animated graphic notation that is both accessible and engaging. This use of moving shapes, colors, visual rhythms, and textures to encourage individuals and groups in creative musical collaborations will be discussed with reference to large ensemble performances and installation works by the author.
Bios of the Participants
Art Clay, a sound artist and curator, was born in New York and lives in Basel. He is a specialist in the performance of self-created works using intermedia and has appeared at international festivals and on radio and television in Europe, Asia, and North America. His recent work focuses on media-based works and large performative works and spectacles using mobile devices. He has won prizes for performance, theatre, new media art, and curation. He has taught media and interactive arts at various art schools and universities in Europe and North America, including the University of the Arts in Zurich.

Jason Freeman’s works break down conventional barriers between composers, performers, and listeners, using cutting-edge technology and unconventional notation to turn audiences and musicians into compositional collaborators. His music has been performed by groups such as the American Composers Orchestra and the Rova Saxophone Quartet and featured in the New York Times and on National Public Radio. Freeman studied at Yale University and Columbia University. He is currently an assistant professor in the School of Music at Georgia Tech in Atlanta.

Georg Hajdu was born in Göttingen, Germany in 1960. He is among the first composers of his generation dedicated to the combination of music, science and computer technology. After studies in Cologne and at the Center for New Music and Audio Technologies (CNMAT), he received his PhD from UC Berkeley. In 1996, following residencies at IRCAM and the ZKM, Karlsruhe, he co-founded the ensemble WireWorks with his wife Jennifer Hymer, a group specializing in the performance of electro-acoustic music. Georg Hajdu has published articles on several topics on the borderline of music and science. His areas of interest include multimedia, microtonality, algorithmic, interactive and networked composition. Currently, Georg Hajdu is professor of multimedia composition at the Hamburg School of Music and Theater.

Basak Dilara Ozdemir, pianist and composer, started to study piano at the age of seven at Istanbul University State Conservatory; afterwards she obtained her diploma and master's degree at the Liszt Ferenc Music Academy in Budapest. She also studied at the Paris Conservatory and completed a year-long composition and electronic music course at IRCAM in Paris. Currently, she is pursuing her PhD and working as a lecturer at the music academy in Istanbul.

Andrea Valle earned a PhD in Semiotics at the University of Bologna and is a researcher at the Department of Fine Arts, Music and Performative Arts, University of Torino, where he teaches Theory of Audiovision and Computer Music at the School of Multimedia. He is active as a composer of both electronic and instrumental music, and as a player in different free improvisational contexts.

Justin Yang is a composer, improviser, music technologist, and theorist. He has taken degrees at Wesleyan University and Stanford University and is currently completing a PhD in Sonic Arts at the Sonic Arts Research Centre in Belfast, Northern Ireland, where his research focuses on real-time interactive performance systems.

Marek Choloniewski is a composer, sound artist, performer, and the director and coordinator of many international projects. He is currently Director of the Electroacoustic Music Studios at the Academy of Music in Krakow, President of the Muzyka Centrum Art Society and the Polish Society for Electroacoustic Music, Secretary of the International Confederation for Electroacoustic Music (CIME/ICEM), and Director of the Audio Art Festival and Bridges. He is the founder of many ensembles and new music/live arts groups.

Shane Mc Kenna is a music teacher and musician based in Dublin. He has completed both a Bachelor of Music Education, specialising in performance, and a Masters in Music and Media Technology at Trinity College Dublin. His work seeks to encourage collaborative music making for professional and amateur musicians through graphic notation.

Thor Magnusson is a musician/writer/programmer working in the fields of music and generative art. He is a Senior Lecturer in the School of Art and Media at the University of Brighton and teaches courses on computer music and algorithmic and interactive systems. He is a co-founder of the ixi audio collective.

 

Posted by:  Ebru Surek