DP_Research

21st Century Instruments

David Plakon

The design of acoustic instruments has evolved over centuries, each dimension finely tuned to achieve the best possible timbre and playability. The issues those historic inventors faced are rising again. Digital technology has rooted itself in nearly every facet of the music industry except the instruments themselves. Synthesizers have given us wondrous new sounds, but now we must develop a new way to interact with sound, using digital-age technology as our instrument. The antiquated instruments of the acoustic era can now be replicated note for note on a computer, along with any other sound imaginable, yet there are only a few ways to play them. Standard keyboard controllers lack the expressiveness to tap the infinite possibilities of synthesis. There is hope: the task has been recognized, and many companies and artists are working on the solution.

The challenge is that the interface must be designed in a way that reflects the energy of the sound it produces. Toshio Iwai is a Japanese digital media artist who is working with Yamaha to create an instrument known as the “TENORI-ON.” He says, “I think these tools cannot catch people’s detailed expression yet, like an unplugged instrument” (“Toshio Iwai”). What will need to occur for digital interface devices to plug in and turn up the volume?

The same problem Iwai addressed also appears in an essay on pedagogy for new instruments. “An Elementary Method for Tablet” was written by Michael Zbyszynski at UC Berkeley’s Center for New Music and Audio Technologies (CNMAT); in it, he attempts to develop a practice regimen and curriculum for tablet-based musical interfaces. He opens the piece by addressing the lack of expression available in tablet interfaces and presenting two reasons it has not been achieved. For one, he says that “the possibility for expressive nuance is constrained by the sensitivity of the interface/instrument (how to practice, how to hold the instrument, etc)” (Zbyszynski). The inability of the technology to capture the minute details of a performance is due in part to the hardware, but also to the youth of the concept. Zbyszynski says, “It is difficult to be expressive on a new instrument because the fact of its newness means that the performer has not had the time to learn it… traditional instrumentalists are aided by centuries of pedagogical materials and methods” (Zbyszynski). To address this, Zbyszynski offers etudes for the tablet that attempt to develop methods of practice for the pen interface.

At the 2006 international New Interfaces for Musical Expression (NIME) conference in Paris, Christopher Dobrian presented a paper titled “The ‘E’ in NIME: Musical Expression with New Computer Interfaces.” He looks further into the problem of “expressive nuance” that Zbyszynski outlines. He uses the term “virtuosity” to describe this phenomenon and attributes the inability to achieve virtuosity on digital instruments to a problem he calls “mapping.” He says, “crucially, there is the question of how to map data from the interface onto specific changes in the sound, i.e., to map the relationship of interface to sound generator” (Dobrian). Even once a relationship between action and sound is created, it is difficult to use such a small set of data to recreate the complexity of a gesture on an acoustic instrument. Dobrian argues that we must perform “second- and third-order analyses of the input data—recognizing not only the input value, but also the speed and direction of change, acceleration of change, etc” (Dobrian). These factors must be taken into account when considering the future of digital instruments.
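The second- and third-order analysis Dobrian describes can be sketched with simple finite differences: besides each raw controller reading, derive its rate of change (velocity) and the change of that rate (acceleration), giving a mapping far more gestural data to work with. The sample values and function names below are illustrative assumptions, not Dobrian’s implementation.

```python
# A minimal sketch of higher-order input analysis: from a stream of
# controller readings, recover velocity and acceleration so a mapping
# can respond to *how* a value was reached, not just what it is.

def analyze_gesture(samples, dt=0.01):
    """Return (value, velocity, acceleration) triples for controller
    readings sampled every dt seconds."""
    analyses = []
    for i, value in enumerate(samples):
        velocity = (samples[i] - samples[i - 1]) / dt if i >= 1 else 0.0
        prev_velocity = (samples[i - 1] - samples[i - 2]) / dt if i >= 2 else 0.0
        acceleration = (velocity - prev_velocity) / dt if i >= 2 else 0.0
        analyses.append((value, velocity, acceleration))
    return analyses

# A steady press and a sudden flick can end at the same value, but
# their velocity and acceleration profiles differ -- data a synthesizer
# could map onto timbre rather than just pitch or volume.
steady = analyze_gesture([0.0, 0.1, 0.2, 0.3])
flick = analyze_gesture([0.0, 0.0, 0.1, 0.3])
```

On the final sample both gestures reach 0.3, but the flick arrives with roughly double the velocity and a large positive acceleration, which is exactly the kind of nuance a first-order mapping discards.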

There is a distinction to be made between “instrument” and “controller.” Acoustic instruments have a complexity of gesture and timbre that makes each unique, able to produce a variety of sounds and notes within a unifying timbre that identifies the instrument. New interface instruments have great difficulty achieving this unity because the sound attributed to a given gesture is more or less arbitrary. Physics defines the timbre of an acoustic instrument, while programmers decide the tonality of a controller. This is a serious challenge for new instruments and computer music in general, because there is no constant relationship between the performed action or gesture and the resulting sound. It could be argued that this is the beauty of electronic instruments: the endless possible mappings between gesture and sound. However, just as Zbyszynski calls for rudimentary interface practice, digital instruments should begin to narrow their sonic possibilities to allow for more focused training.

For example, suppose we put a classically trained pianist at the bench of a keyboard-controlled synthesizer and give it a timbre very different in envelope and timbral quality from a piano. As the pianist begins to play, they will quickly realize that traditional playing techniques will not produce a desirable result. Every preset of a synthesizer can be played in entirely different ways, and it is therefore difficult to become a virtuoso at any of them, especially since new ones are always being created. A specific sound must be developed for new instruments to allow for the “virtuosity” that Dobrian speaks of. This suggestion is not ungrounded, as many other fields are moving in a similar direction: digital technology gives us access to endless amounts of information, and the challenge of “post-digital” society is searching and organizing that information in an efficient and relevant manner. The same philosophy should begin to find its way into new instrument designs.

There are several categories of new controllers emerging: matrix-design interfaces, multi-touch interfaces, and gesture- or motion-based tablets. How does each of these address the problems presented by Dobrian, Iwai, and Zbyszynski?

Matrix-design controllers such as the Monome and Tenori-On allow users to interact in a visual way, which helps address the challenge of virtuosity. Sequencing on a 2D grid adds a literal dimension to traditional 16-step sequencers. However, virtuosity on these instruments still requires a great degree of programming, and they are more controllers than instruments. The Monome has no inherent sound, while the Tenori-On does, which helps define it as an instrument. However, the only physical control one has is pressing a button on or off, which limits the possibility of achieving virtuoso status. These are not simple devices by any means, but they do not adequately address the aforementioned problem.
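The matrix idea can be reduced to a very small sketch: columns are time steps, rows are notes, and each cell is simply on or off, the button press being the only physical input. The grid size and helper names here are illustrative assumptions, not the Monome or Tenori-On firmware.

```python
# A toy matrix sequencer in the Monome/Tenori-On spirit: an 8x16 grid of
# on/off cells. A playhead sweeps across the columns; every lit cell in
# the current column sounds its row's note.

STEPS, NOTES = 16, 8

# grid[row][col] is True when that cell is lit
grid = [[False] * STEPS for _ in range(NOTES)]

def toggle(row, col):
    """The single physical gesture these interfaces offer: flip a cell."""
    grid[row][col] = not grid[row][col]

def notes_at_step(col):
    """Rows that sound when the playhead reaches column col."""
    return [row for row in range(NOTES) if grid[row][col]]

toggle(0, 0)   # note 0 on the first step
toggle(3, 0)   # note 3 on the same step -- a two-note chord
```

Note how little expressive data the gesture carries: a toggle has no velocity, pressure, or continuous contour, which is exactly the limitation described above.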

Multi-touch interfaces add another degree of complexity, controlling variables on a more intuitive scale than simple on and off. Products such as JazzMutant’s Lemur and the Reactable give users incredible depth of control over a sound. The Lemur even includes software that lets you design custom interfaces to suit any need. However, this very flexibility also limits the virtuosity available, because the instrument’s design can be changed at any time. It is impossible to master every possible interface design because they are unlimited.

Tablets come closest to overcoming the setbacks of new instrument design. The Korg Kaossilator is one such instrument, and it is revolutionizing commercial electronic instrument design. It is not a controller: there is no MIDI port, and the user is limited to the banks of sounds and effects on the small unit, which are controlled by a small touch pad in the center of the device. The performer can set the pad to control any number of variables, from tempo to key signature, and loop them live to create multiple layered tracks. The touch pad is an intuitive interface, allowing for a scalable degree of control, and it has the potential to be mastered. One improvement that would really help the device is pressure sensitivity, which would add more data for sound control. I believe a hybrid between the Kaossilator and a Wacom tablet, which has pressure sensitivity, would be an excellent new design.
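The appeal of the XY pad is how naturally its two axes map onto musical variables. A minimal sketch of such a mapping, assuming a pentatonic scale quantization on the x axis and a filter-cutoff sweep on the y axis (the scale intervals, ranges, and function names are my own illustrations, not Korg’s specification):

```python
# Sketch of an XY-pad mapping: x snaps to a scale so the pad stays
# "in key"; y sweeps a timbre parameter such as filter cutoff.

MINOR_PENTATONIC = [0, 3, 5, 7, 10]  # semitone offsets within one octave

def pad_to_note(x, root=57, octaves=2, scale=MINOR_PENTATONIC):
    """Map x in [0, 1) to a MIDI note number, snapped to the scale.
    root=57 is the MIDI number for A3 (an assumed default)."""
    degrees = [octave * 12 + step for octave in range(octaves) for step in scale]
    index = min(int(x * len(degrees)), len(degrees) - 1)
    return root + degrees[index]

def pad_to_cutoff(y, low=200.0, high=8000.0):
    """Map y in [0, 1] linearly to a filter cutoff in Hz."""
    return low + y * (high - low)

# Leftmost touch plays the root; far right reaches the top of the scale.
low_note = pad_to_note(0.0)    # root of the scale
high_note = pad_to_note(0.99)  # highest available degree
```

A pressure-sensitive pad of the kind suggested above would simply add a third input, `z`, mapped to yet another parameter such as amplitude or distortion.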

Variation is an essential part of the complexity of a given instrument’s sound. Instruments must shift away from providing as many different tonalities as possible and toward supplying as many different ways as possible to manipulate a specific timbre in a direct and noticeable fashion. One suggestion is that designers should stop building controllers that must send information to a computer that determines their tone, and start designing individual instruments with gestures and tones unique to each particular device. Look at the success of the Theremin: a relatively simple electronic instrument with the most basic tone, yet the complexity of its gestures and their resulting effects on that tone make it invigorating to watch and play. However, the Theremin is significantly outdated; there must be a way to take its gestural and tonal complexities and bring them into the 21st century.

Works Cited

Dobrian, Christopher, and Daniel Koppelman. “The ‘E’ in NIME: Musical Expression with New Computer Interfaces.” NIME. University of California.
<acweb.furman.edu/~dkoppelman/TheEinNIME.pdf>.
“Toshio Iwai.” Interview with Alex Wiltshire. Pixel Surgeon.
<http://www.pixelsurgeon.com/interviews/interview.php?id=239>.
Zbyszynski, Michael. “An Elementary Method for Tablet.” Center for New Music and Audio Technologies, UC Berkeley.
<http://homepage.mac.com/mikezed/text/zbyszynski-tablet.pdf>.
