Thursday, September 29, 2005
HAL: I am sorry, Dave. I am afraid.
PART ONE – IT’S ALIVE! (tip o’ the hat to Steve Layton)
Why are people so quick to dismiss electronic music as not “human”? Are acoustic instruments “human”? I listen to a lot of electronic music, and there is plenty of bad electronic music out there (especially at the Hotel Cadillac), but there is also plenty of great electronic music – music made by humans via computers and electronic instruments. I sometimes worry that people who make proclamations about electronic music simply don’t have a wide range of experience with it. You wouldn’t base your opinion of the viola on only a few performers’ or composers’ use of it, would you? So if you’re doing that with electronic music, please stop. (I'm not accusing anyone; I'm just saying...)
There is plenty of very dramatic, moving, sentimental, emotional, and heartbreaking electronic music out there, if that’s what is meant by “human.”
PART TWO – ELECTRONIC EVOLVES TO ACOUSTIC?
When composers first started working with electronics, they used sounds that were completely foreign to many listeners. John Cage predicted that electronic sound-making would eventually be used primarily to replicate acoustic sounds. In some ways, it is sad that his prediction is panning out.
It goes without saying that creating sound electronically opens up possibilities that are not available with acoustic instruments. You can create new sounds from scratch. You can alter recorded acoustic sounds in thoroughly un-acoustic ways. You can adjust virtually any parameter of a sound with as high (or low) a degree of precision as you choose. I am interested in the differences between electronic and acoustic sounds. I appreciate and enjoy the inadequacy of the synthesized cello that doesn’t sound like a real cello. I hope there will always be a place for the triangle waves and oscillators and distortion generators and all of their robot cousins. I am not particularly looking forward to the electronic re-invention of acoustic instruments.
posted by Corey Dargel
5:40 PM
Relevance
I’m curious to know what S21 readers think of this quote:
“I regard all popular music as irrelevant in the sense that people in 200 years won't be listening to what is being written and played today. I think they will be listening to Beethoven. That's one of the reasons I don't take myself seriously.” -- Elton John
Raises a lot of questions in my mind. First of all, what do you think people will be listening to 200 years from now? Beethoven? Elton John? Something else?
Secondly, do you agree that popular music is irrelevant, by which little Sir John seems to imply that it only speaks to its own time? Is that an odd prerequisite for relevance?
And finally, do you think it’s a good thing that he doesn’t take himself seriously?
posted by Lawrence Dillon
5:05 PM
Monday, September 26, 2005
Faking It
A discussion arose in the comments section for the "Is the Symphony Dead?" posting on the front page, around my suggestion that synthesized orchestras will soon be indistinguishable from real orchestras in recordings, and that the future of the symphonic tradition will turn to artificial orchestras as actual orchestras disappear and perform less and less new music. In the interest of muddying the waters as little as possible, I'll save the future of the orchestra and its performance of new music for another day and turn my attention exclusively to the technology question.
First is the question of constructing a real-sounding virtual instrument. Back in the early days the only options were building sounds up with oscillators -- additive synthesis -- or filtering white noise down -- subtractive synthesis. The next major breakthrough was FM (frequency modulation) synthesis, developed by John Chowning and brought to the mass market by the Yamaha DX7, and FM remained one of the standard tools up until the late 80s and early 90s. These synthesis techniques were far better at creating interesting new sounds than at imitating traditional instruments. Many early computer sound cards and videogame consoles used FM chips, which is why early computer games have such a distinctive sound.

The next phase was sampling: recordings of actual instruments are processed (the sustain portion of a sound is often set up to loop to save disk space, for example) and played back by the synthesizer. The best known of the early sample-based instruments was the Mellotron keyboard, which used strips of tape activated by the keyboard keys. One of the first major digital sampling keyboards was the Synclavier, and it became the must-have for film composers and big music studios. The Kurzweil K2000 was the next big thing, and for the last 10 or 15 years the sampler has reigned supreme, with bigger and better sample libraries for more and more robust hardware and software samplers coming out every year. GigaStudio is probably the current industry standard -- it can play sample libraries that are many gigabytes in size, which allows for many different samples for any given instrument and eliminates the need for looping. The Vitous and Garritan orchestra sample libraries have been used in countless films and other projects.

Sounds of full orchestral sections have become realistic enough that most listeners cannot tell whether a film score was made with samplers or recorded live. Solo instruments, characterized as they are by very subtle variations and changes (especially the differences in attack from one note to the next), are less convincing. And sampling, ultimately, will probably not get us to perfect emulation -- the need for ever larger sample libraries and ever more cunning programming to fuse them together will eventually be overtaken by a better approach.

"Physical modeling" is, in my opinion, that better approach and the way of the future. Rather than relying on a limited number of recordings of a real instrument, why not build in the computer a mathematical model of the physics of the instrument itself? The only limitations are the amount of processing time required to render the performed sounds and the cleverness of the coders who build the instrument. Given how close CGI has come to realism (often indistinguishable from the real thing), I have no reason to think that physical models of instruments won't be able to achieve complete realism. (Furthermore, I can think of no physical limitation that would prevent it.) Our ears and eyes and brains have limits to the subtleties we can perceive, and I suspect that both CGI and physical modeling of musical instruments are rapidly approaching those boundaries.
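To make these two ideas concrete, here are a pair of toy sketches in Python with NumPy -- my own illustrations, not code from the DX7 or any of the products mentioned above. First, FM synthesis: one sine oscillator wobbles the frequency of another, producing rich, often inharmonic spectra that additive synthesis would need dozens of oscillators to approximate.

```python
# Toy two-operator FM synthesis, the technique popularized by the Yamaha DX7.
# A modulator oscillator varies the instantaneous frequency of a carrier.
import numpy as np

def fm_tone(carrier_hz, ratio, index, duration_s, sample_rate=44100):
    """ratio = modulator/carrier frequency ratio; index = modulation depth."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    envelope = np.exp(-3.0 * t)  # simple exponential decay
    modulator = index * np.sin(2 * np.pi * carrier_hz * ratio * t)
    return envelope * np.sin(2 * np.pi * carrier_hz * t + modulator)

# A non-integer ratio gives the clangorous, bell-like tone FM is famous for --
# exactly the kind of "interesting new sound" these techniques excelled at.
bell = fm_tone(220.0, ratio=3.5, index=5.0, duration_s=2.0)
```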
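And second, physical modeling at its simplest: the classic Karplus-Strong plucked string, in which a burst of noise circulates through a delay line whose length sets the pitch while an averaging filter stands in for the string's energy loss. Real modeling engines are vastly more sophisticated, but the principle -- simulate the physics, don't replay a recording -- is the same.

```python
# A minimal physical model: the Karplus-Strong plucked string.
# Requires only the standard library plus NumPy; writes a 2-second WAV file.
import wave
import numpy as np

def pluck(freq_hz, duration_s, sample_rate=44100):
    """Synthesize a plucked-string tone with the Karplus-Strong algorithm."""
    n_samples = int(duration_s * sample_rate)
    delay = int(sample_rate / freq_hz)      # delay-line length sets the pitch
    buf = np.random.uniform(-1, 1, delay)   # the "pluck" is a burst of noise
    out = np.empty(n_samples)
    for i in range(n_samples):
        out[i] = buf[i % delay]
        # Averaging adjacent samples is a crude low-pass filter that models
        # the string losing energy; 0.996 controls how fast the note decays.
        buf[i % delay] = 0.996 * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

tone = pluck(220.0, 2.0)  # two seconds of A3
pcm = (tone / np.max(np.abs(tone)) * 32767).astype(np.int16)
with wave.open("pluck.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(44100)
    f.writeframes(pcm.tobytes())
```

Crude as it is, the result behaves like a string rather than a recording of one: change the decay constant or the delay length and the "instrument" responds the way physics says it should.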
Of course, once you have your virtual Stradivarius, you have to play it. Starting with raw, quantized MIDI data and trying to turn it into a realistic performance is quite a challenge, and the perfect computerized virtual instruments I'm predicting will have many more variables. One key element of human performance is imperfection, and a small amount of randomization of attack times and note velocities will already do wonders for your MIDI performances (a sketch of this trick appears below); you can always do the tedious work of drawing in controller data by hand to get the envelope of each note just right. MIDI input devices (keyboards, wind controllers, etc.) already generate fairly convincing performances for today's instruments, and better MIDI controllers will allow for better performances on the physical-modeling instruments of the future. Instead of hiring the whole orchestra, you might hire a violinist to play the string parts and a clarinetist to play all of the wind parts, and map the performances to the corresponding virtual instrument groups.

But a little bit farther down this path we arrive at Artificial Intelligence. Philosophy of Mind is a vast and fascinating subject area, and given a whole lot more space I could take you through my argument that there is no reason an AI (although it might not be "A" anymore) couldn't be programmed to perform a virtual instrument with the same level of artistry as a Yo-Yo Ma. But we don't have to go that far out to reach the point of an AI that performs at a normal professional level. The algorithm for reading a score is doubtless extraordinarily complicated, and every performer has his or her own variant on it, but the algorithm and the ranges of its variables should theoretically all be findable. (We know that the variables have limits because we can tell when a performance isn't working and instruct the performer how to adjust it.) Expect AI performers who can be set to perform in particular styles, and whose technique and performance choices can be adjusted to match your taste.
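Returning to the randomization trick for a moment, here is a hedged sketch of what "humanizing" quantized data amounts to. The Note tuple is a hypothetical stand-in for whatever event format a real sequencer or MIDI library uses; the idea carries over directly.

```python
# Sketch: "humanize" perfectly quantized MIDI-style notes with small random
# offsets to attack time and velocity. Note is a hypothetical event format.
import random
from typing import List, NamedTuple

class Note(NamedTuple):
    start: float   # onset time in seconds, perfectly on the grid
    pitch: int     # MIDI note number, 0-127
    velocity: int  # MIDI velocity, 1-127

def humanize(notes: List[Note], timing_jitter=0.010, velocity_jitter=8,
             seed=None) -> List[Note]:
    """Nudge each note's start time (seconds) and velocity at random."""
    rng = random.Random(seed)
    out = []
    for n in notes:
        start = max(0.0, n.start + rng.uniform(-timing_jitter, timing_jitter))
        vel = min(127, max(1, n.velocity + rng.randint(-velocity_jitter,
                                                       velocity_jitter)))
        out.append(Note(start, n.pitch, vel))
    return out

# A quantized C-major arpeggio, then a subtly imperfect "performance" of it.
arpeggio = [Note(i * 0.25, p, 80) for i, p in enumerate([60, 64, 67, 72])]
print(humanize(arpeggio, seed=1))
```

Ten milliseconds of timing jitter is already enough to take the mechanical edge off; the interesting question, as suggested above, is how much farther structured (rather than random) deviation can go.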
I don't know what the timeframe will be for these developments, although I will casually predict that Physical Modeling will supplant sampling within the next 10 to 15 years. Convincing AI performance will probably be farther off, but I might be wrong. How composers will use these tools is an entirely separate matter, and will be fascinating to watch.
posted by Galen H. Brown
2:30 PM