Sustainability, that is, a computer music work's chance of surviving far into the future even after the composer and the performers at the première have stopped supporting the work and the implementation technology has become obsolete or unavailable, is central to the formal study of the computer music repertoire. At present, it is difficult for musicologists to research computer music with any of the methodologies used on the more sustainable, symbolically represented music of the acoustic domain. Many real-time interactive works that rely on live processing cannot easily be preserved and recreated for future performance or study, because the technological platforms on which they are created become obsolete so quickly.
The solution is to eschew the widespread practice of bundling a set of musical intents and ideas within a single technological implementation in the hope that the technology will never be rendered obsolete. Instead, the composer's musical gestures should be preserved as a precise (within the limits of appropriate summary and detail), well-documented, and platform-independent set of intents and ideas that can be replicated accurately with any current or future technology.
A composer should divide a computer music work into two portions: a "composition" (or "music parameter control") portion and an "instrument" (or "sound generation") portion. The two portions are coupled at the première, but at any time afterward each can be modified and updated independently of the other, eliminating the complexity of revising both at once. The "composition" portion consists of the composer's intents and ideas, communicated to the performer through platform-independent descriptors that convey the desired musical gesture regardless of technical implementation. The "instrument" portion is the sound generation module, together with a control interface appropriate to that module and to the performer.

This mirrors the decoupling of composition and instrument in the centuries before computer-mediated music: the composer's intents and ideas are preserved regardless of updates to the implementation (the instrument). Bach's Well-Tempered Clavier survives to this day partly because the platform-independent notation of Bach's intents and ideas was preserved independently of the instruments of Bach's day and is realized today on modern instruments. In short, decoupling composition and instrument follows what composers, performers, and instrument designers have done for centuries: the composers of keyboard music were not the keyboard makers, and neither were the keyboardists.

Central to sustainability is the notation of the composition. An effective approach is to use high-level, platform-independent descriptors to communicate a musical gesture regardless of the technology used in a specific performance. In a laptop, phone, or alternative-controller composition, for example, the performers would be completely responsible for implementing, on their chosen technology, the gesture communicated by the composer. One of the primary obstacles in composing for most laptop orchestras and mobile device orchestras is that the members show up with a variety of platforms. A composer could write software for each device in the ensemble, but that approach is not only time-consuming given the range of operating systems and hardware; eventually the composer tires of the continual updating, and the piece dies inside the technology once the technology can no longer be supported.

The solution is to shift the burden of implementing a musical idea to the performer. The composer need not implement the musical gesture directly. Instead, the gesture is communicated as specifically as possible to the performer (graphically, with text, with standard notation, or with other non-traditional notation), and the performer implements it on their chosen technology. For example, the composer could instruct the performer to create a sine wave vibrating at 100 Hz that changes to 200 Hz over 10 seconds, rather than telling the performer to press the yellow button that produces that gesture in software the composer wrote. With the implementation duty placed on the performer, the performer must work out how to produce the 100 Hz sine wave (whether in Max, Csound, Pure Data, Ableton Live, or something else) and how to trigger the modulation in musical time. One possible realization is sketched below.
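To make the division of labor concrete, here is a minimal sketch of how one performer who happens to work in Python with NumPy might realize that notated glide. The function name, sample rate, and rendering approach are this sketch's assumptions, not part of the composer's notation; only the musical result is prescribed.

```python
import numpy as np

SAMPLE_RATE = 44100  # one performer's choice; the score does not specify it

def sine_glide(f_start_hz, f_end_hz, duration_s, sr=SAMPLE_RATE):
    """Render a sine tone whose frequency moves linearly from
    f_start_hz to f_end_hz over duration_s seconds."""
    n = int(duration_s * sr)
    freq = np.linspace(f_start_hz, f_end_hz, n)   # instantaneous frequency
    phase = 2 * np.pi * np.cumsum(freq) / sr      # integrate frequency to get phase
    return np.sin(phase)

# The composer's platform-independent gesture: 100 Hz rising to 200 Hz over 10 s.
samples = sine_glide(100.0, 200.0, 10.0)
```

A performer working in Max, Csound, or Pure Data would realize the same descriptor with entirely different tools; the notation survives because it names the gesture, not the implementation.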
In a more concise storyboard example, it is the difference between "Do a 3 Hz LFO for pitch vibrato on the fundamental frequency" and "Click the LFO button on the Max patch I gave you, and I hope the patch still works on your Windows 98 machine running Max 4." In other words, the composer's focus is on the musical output rather than on the technology that produces it. More importantly, the work then stands a better chance of future study by musicologists and preservation by librarians.
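Again as a sketch rather than a prescription, the vibrato instruction could be realized in the same assumed Python/NumPy environment; the fundamental, vibrato depth, and duration used here are illustrative values that the performer or the score would supply.

```python
import numpy as np

SAMPLE_RATE = 44100

def vibrato_tone(f0_hz, lfo_rate_hz, depth_hz, duration_s, sr=SAMPLE_RATE):
    """Sine tone at fundamental f0_hz whose pitch is modulated by a
    low-frequency oscillator at lfo_rate_hz, deviating +/- depth_hz."""
    t = np.arange(int(duration_s * sr)) / sr
    freq = f0_hz + depth_hz * np.sin(2 * np.pi * lfo_rate_hz * t)  # modulated frequency
    phase = 2 * np.pi * np.cumsum(freq) / sr   # integrate frequency to get phase
    return np.sin(phase)

# "Do a 3 Hz LFO for pitch vibrato on the fundamental frequency":
# the fundamental (220 Hz), depth (5 Hz), and duration (4 s) are assumptions.
samples = vibrato_tone(220.0, 3.0, 5.0, 4.0)
```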
Outside of traditional notation, there is currently no universally accepted notation scheme for such gestures; what exists instead is a very precise oral tradition requiring strict adherence to grammar, syntax, and protocol. As with other oral musical traditions, no universal written standard yet mediates the computer music performance tradition. Until a notational tradition emerges that effectively communicates computer music gestures, text instructions (in a language that most people can understand) allow for precise mediation of musical gesture between composer and performer. Graphical depiction, less precise than text but more intuitive, has also been implemented successfully. Text and graphics in tandem, representing the temporal relationships of the musical gesture (like traditional sheet music), have proven most effective. What must be avoided is the notation of a knob turn, a button trigger, a menu selection, a radio button selection, a slider push, or a number entry that has no obvious connection to a musical structure and no meaning to the performer or researcher outside the immediate implementation platform. A descriptor-style sketch of such a notation appears below.
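One hypothetical shape such a text-based notation could take, with every field naming a musical quantity rather than a widget on a particular patch (the field names are illustrative, not a proposed standard):

```python
# A hypothetical platform-independent gesture descriptor. It notates
# the musical structure, not the knob, button, or slider of any one patch.
gesture = {
    "id": "m12-glide",
    "instruction": "sine tone gliding from 100 Hz to 200 Hz over 10 seconds",
    "waveform": "sine",
    "freq_start_hz": 100.0,
    "freq_end_hz": 200.0,
    "duration_s": 10.0,
    "onset": "cue 4, conductor downbeat",
}

# A Python performer might feed it to the sine_glide sketch above; a Max or
# Csound performer would realize the same fields with entirely different tools.
# samples = sine_glide(gesture["freq_start_hz"], gesture["freq_end_hz"],
#                      gesture["duration_s"])
```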
These approaches would facilitate the preservation and study of the computer music repertoire. More importantly, they would move that repertoire away from the ephemeral fringe and foster the inclusion of interactive computer music in the canon of the Western art music tradition.