Stephen Oberbeck

Fellowship Title:

“Other Voices, Other Rooms”—Computer Music

Stephen Oberbeck
October 12, 1968

Fellowship Year

This Stockholm composer is making music. Program music, you might say. His orchestra: Computers. The tempo is in real-time, not 4/4. For more input (Yes? No?), go to step two…

Stockholm—To the uninitiated ear, electronic music and musique concrete still tend to sound like a mixture of Surrealist train-wreck, avalanching glaciers of glass or the chamber music of spastic chimpanzees. The sonic squawks and howls, whiplash static and celestial twitterings comprise a language as familiar to the average music listener as spoken Sanskrit.

Even some people who have nibbled at, say, Webern or Schoenberg shy from progressive compositions that seem to be simulating a complete nervous breakdown of some radio station. But this electronically bent, folded and spindled music was never really meant for the average ear, of course. Synthetic musicians have long played to their own crowd. Cage, Boulez, Stockhausen and Babbitt are weathered veterans; the late Edgard Varèse and Pierre Schaeffer are grand-old-men of musique concrete. Now, composers and groups (La Monte Young, for example, and Musica Elettronica Viva) are writing compositions of one sustained tone and playing on amplified rubber bands and panes of glass.

New instruments are not new, though. An oldster such as the American Harry Partch has for years been performing concerts with instruments of his own design—some of which are trundled around the concert stage like kiddie cars by their players. Cage has employed a battery of noisy household appliances as “instruments,” amplifying and transforming their sounds. French sculptors François and Bernard Baschet produce musical sculptures called “structures sonores” which, in collaboration with composer Jacques Lasry, have been featured in concerts of shimmering, otherworldly music. It is normal that the followings of such musicians are not swollen with large numbers of the unspecialized public.

Jet Whoosh

But the public ear is changing, I think—and the changes are being wrought, ironically enough, not so much by progressive experimentalists in “serious” music as by current changes in commercial musical taste and production technology. Now, it is commonplace on pop-rock radio stations everywhere to hear the jet whoosh of an electronic organ, the electric guitar’s high-decibel Doppler effect or the sinuous whine of the sitar and its droning backup instrument, the tamboura. It has been pop music, tailored largely to the multi-million dollar youth market, that has given the general public a taste for new sounds and a new dimension of musical space, a glimpse of acoustical possibilities.

Record producers resort to ever more daring electronic pyrotechnics and arrangers are increasingly introducing highly sophisticated (or erstwhile highbrow) musicology into the mass market. One is no longer surprised to hear Doric modes, fugue forms or flatted fifths on teenage radio programs of pop music. New latitudes of musical space are suggested in a kind of psychedelic sleight-of-sound, and the decibels are near-deafening, in the penultimate offering of that Juggernaut of a group, the Rolling Stones, for example. Or take the Beatles’ “A Day in the Life.” This “song” (and all the others in the genre it represents) has, with its LSD-laced acoustics and spatiality and its witty, ironic lyric, doubtless turned on several million young people, whetting their aural appetites for new sounds and musical conceptions.

In those scores of records that appeal to adults as well as youngsters, one cannot help but remark the atmosphere of experimentation and musical transformation—a joint product of changing tastes and production equipment and process. As the well-tooled commercial apparat borrows voraciously from many of the concepts developed by pioneering synthetic musicians, today’s composers are turning to newer modes of composition and conception—towards the computer, for example. The computer not only offers a practical tool to solve many of the modern musician’s problems; it also promises access to new areas of acoustical research.

Brain Drain

As a problem solver, the computer has already proved itself. Some of the major problems which have plagued the composer of synthetic or electronic music have been the drastically time-consuming burdens of setting up sound-generating and processing equipment, physically splicing together quantities of recording tape and maintaining proper recording levels and sound quality. Early practitioners of the “new” music had to develop not only facile fingers but also a facile aural memory, for it was usually a relatively long period between the laborious cut-and-paste process and the actual hearing of a finished portion of work. The critical time-lag between tinkered production and broadcast creation, full of mechanical busy-work, was a significant drain on inspiration. Imagine Beethoven having to walk around tuning all the stringed instruments in an orchestra before hearing a movement he created.

Baschet “Structure”—Wet fingers on glass rods resonate the metal elephant-ear: shimmering

But even an orchestra, considered as an automated compositional instrument, often does not suffice for the seeker after new sounds. For him, the computer’s ability to store information and accurately generate sounds has alleviated the time-consuming chores once faced by new music composers. A perfect example of this is the new Electronic Music Studio (EMS) which will soon begin operation in a Swedish Broadcasting Corporation building on Stockholm’s busy Kungsgatan. There, studio chief Knut Wiggen (at right and on Page 1), a soft-spoken, intense Swede of 41, is overseeing the finishing touches on what he describes as perhaps “the most advanced electronic music studio in the world.” It is the product of the dreams of Wiggen and several contemporaries. They dream expensively. The EMS will cost, completed, about half a million dollars, $300,000 of that in machine costs alone. The bill is funded primarily out of a research fund Swedish radio maintains.

In its physical layout, the EMS is a study in “Other Voices, Other Rooms”: a complex of consoles and circuitry linked to a hybrid computer system which gives the composer an astonishing latitude in sound production. “This is probably the optimal way to use computers in composition,” says Wiggen, “and you need a computer for accuracy and control. The potentialities here are so great you can’t possibly comprehend them all. The first years…” He pauses, smiling with sheepish condescension: “Well, it will take us some years to understand the musical possibilities.” He leaves it at that, which is all right with me. The EMS is a little overwhelming at first sight, hard enough to assimilate on its own.

Banker’s Blue

The main elements of the studio are housed in three separate rooms, with others on the drawing board to be integrated into the overall plan. The console-room, furbished in snowy, almost clinical neutrality, is dominated by a wrap-around console system of gleaming metal and polished wood surfaces—radiating that richly appointed mechanical atmosphere more suited to banker’s blue or an IBM showroom than the imagined creative clutter of a “music studio.” The back of the console system was still undressed, revealing its imposing electronic nervous system of neatly grouped wiring and connections. On the console’s inclined frontal panels are the various “command posts” available to the composer—ranked grids of white plastic squares, about postage-stamp size, each bearing a number. Representing a breakdown of quantities, or designations, in wave-form contour or sound intensity, each grouping of white chips activates one of the 24 sound generators, which Wiggen says produce a repertory of 16,000 sounds.

But a better understanding of the EMS console’s function can be had from the specifications for its construction which Wiggen outlined several years ago in an article (which, in the computeristic spirit of time and labor-saving he handed me). The most important of various factors, said Wiggen, was that the EMS operate on a real time basis, with the sound being produced audibly as soon (relatively) as the instructions for its creation were transmitted into the console. He also wished to eliminate manual tape editing, abolishing the time spent formerly coordinating different pieces of equipment. He hoped to do away with conventional copying techniques “in order not to ruin the quality of the sound.” With the real-time factor, Wiggen’s other crucial requirement was that “the composer must have the possibility of using his imagination at the control table in order to avoid having to know in advance the music that the studio was going to help him to discover.” Other requisite features Wiggen envisioned were computer control of sound production and recording and that the studio be adaptable to an additive “plug in” expansion system.

Wiggen At one of the EMS console’s command posts: Touch control

Below: The EMS brain center, with panel wiring lower foreground

No New Toy

All of these specifications derived from recognized needs in the field of synthetic composition. Watching Wiggen demonstrate the components of the EMS is not in the least like watching the proverbial kid with a new toy. It is more like observing someone who has long known exactly what mechanical aids would facilitate his labors (as scientists ache for processing equipment not yet available to them for cost reasons) and who has finally satisfied his desires. It is most likely in the realm of music, more than in any other of the arts, that “technology”—in terms of hardware and the conception intrinsic in its production—can actually lead to new art forms and methods. Wiggen regards his electronic storehouse of precious marvels, however, not only as a fantastic problem-solver but also as an area of exploration and research which slightly overwhelms him in its potential.

New Notation: A block diagram for chromatic writing, indicating program steps used in a section of “Illiac suite for string quartet,” a computer work by Lejaren A. Hiller, Jr. and Leonard M. Isaacson.

No Sounding Brass: Racks of circuit boards and switching gear in room adjacent to console through which sound signals pass via music production route.

His enthusiasm is both operational and visionary. For the EMS represents not just a technological labor-saving device but also the changes in conception and interpretation which music has undergone in the past decades. Says Wiggen, “We cannot express our thoughts with the old machinery; we must use the new to learn ourselves about the possibilities of artistic creation.” And he adds: “As McLuhan says, we have loudspeakers and we will make music for loudspeakers and not for concert halls.” This notion of not using the old content in the new medium implies some of the new directions music has been taking. Musical theory has almost kept pace with information theory: central to computer use in composition is the basic fact that music can be accurately represented as a sequence of numbers (the sound waves can be plotted) and that these numerical values can be converted into the sounds they designate.

Sequence to Wave

In a New Scientist article published a year before Wiggen’s proposal, Dr. J. R. Pierce wrote that “20,000 three-digit numbers a second can describe not only any music that has ever been played, but any music that ever could be played, if only we are able to choose the right numbers…A digital computer as a source of a sequence of numbers, together with not very complicated equipment for turning this sequence of numbers into an electric wave that can drive a loudspeaker, is truly the universal musical instrument—the instrument which can, in principle, create any sound that can be created, any sound that can be heard by the ear.” Beyond its explanatory value, Pierce’s use of the conditional demonstrates the theoretical sense that permeates much of modern music.
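Pierce’s “sequence of numbers” can be made concrete in a brief modern sketch (Python, purely for illustration; the rate of 20,000 numbers a second and the three-digit values come from his quotation, while the 440 Hz pitch is an assumed, conventional A):

```python
import math

# Pierce's point in miniature: a tone is only a sequence of numbers.
# One second of a 440 Hz sine wave, described at his rate of 20,000
# numbers a second, each rounded to a three-digit value (-999..999).
SAMPLE_RATE = 20_000   # numbers per second, per Pierce
FREQ = 440.0           # pitch in Hz (an assumption, not from the article)

samples = [
    round(999 * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
    for n in range(SAMPLE_RATE)   # one second of sound
]

# Fed through a digital-to-analogue converter, this list of integers
# becomes an electric wave that can drive a loudspeaker.
print(samples[:5])
```

Choose different numbers and a different sound results; that is the whole of Pierce’s “universal musical instrument.”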

 

In The Groove

Lejaren A. Hiller Jr., the University of Illinois computer authority who with Leonard M. Isaacson has produced musical compositions on the Illiac computer system (block diagram Page 4), amplifies the numbers game: “In recent years,” he wrote in a Scientific American article of 1959, “the ‘physics of music’ has disclosed much that is mathematical in music. It reveals how sound waves are formed and propagated, how strings, membranes and air columns vibrate and how timbre depends upon complex wave-structure; it has provided universal standards of frequency and intensity…” Then, continuing, he approaches music as a study of acoustics: “In its most compact form, acoustics reduces the definition of musical sounds to a plot of wave-form amplitude versus time. The groove of a phonograph record, for example, contains only this information and yet builds a believable reconstruction of an original musical sound.”

The reduction of the creative impulse to numerical values brings up a host of other problems for the composer—but primarily if he is dealing in the sounds of a conventional orchestra. Pierce speaks, for instance, of the difficulty of electronically achieving the richness of centuries-old instruments such as the violin or horn. But this is not a problem for Wiggen and many of his contemporaries. “New music has another function that cannot be connected with the concert hall or its social situation or traditional playing techniques,” he says. “In this new music, you’ll not find symmetry or hierarchy, not a symphonically developing plan…What will we find? That’s the question. That’s our work.”

“400 Hz-10 dB”

That work, for Wiggen at least, will be done on apparatus which consists of the console, a roomful of circuit boards and memory-bank elements and an alcove housing two Ampex T-11 computers. The digital elements came from the Swedish firm of Data System, and Swedish radio was responsible for the analogue elements. In the EMS process, digital signals are transmitted to various tone-generators, pass through a network of modifying apparatus and are recorded on an analogue tape recorder. But, says Wiggen, “the digital signals are the pilot signals which control the sound sources, connections and modifying apparatus and analogous tape recorder. Thus the composer has in front of him a control-table on which he can, by pressing various buttons, create digital pilot signals which control a tone-generator to produce a tone with, for instance, a frequency of 400 Hz, a level of -10 dB, sine waveform, in order to direct this tone later by pressing other buttons through the connecting network to a loudspeaker, or even to a modulator before the loudspeaker.”
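The pilot-signal instruction Wiggen cites (a frequency of 400 Hz, a level of -10 dB, sine waveform) reduces to a little arithmetic. A sketch in Python, in which the sample rate and the full-scale decibel reference are illustrative assumptions rather than EMS specifications:

```python
import math

# A sketch of the pilot-signal idea: a digital instruction
# ("400 Hz, -10 dB, sine") is turned into sample values for a tone.
SAMPLE_RATE = 8_000   # samples per second (assumed, not an EMS figure)

def tone(freq_hz, level_db, seconds):
    """Generate sine samples at a given frequency and level in dB below full scale."""
    amplitude = 10 ** (level_db / 20)   # -10 dB is about 0.316 of full scale
    n_samples = int(seconds * SAMPLE_RATE)
    return [amplitude * math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)
            for n in range(n_samples)]

samples = tone(400.0, -10.0, 0.5)   # half a second of Wiggen's example tone
print(len(samples), round(max(samples), 3))
```

Pressing a different button amounts to calling the same routine with different numbers; the connecting network and modulators Wiggen mentions would be further stages applied to such a list of samples.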

The beauty of the EMS apparatus is that it allows a composer to do a number of erstwhile laborious jobs almost simultaneously—and to shape his sounds through knowledge of the EMS components.

No players are present to heed the age-old gesture of the raised hand. At the console, the only “conductor” needed is that which activates the electronic impulses on the panels. Posture persists.

Ten-Finger Exercise

Thus, the console-composer can formulate, designate and store in rapid sequence a series of numerically expressed tones simply by activating the buttons in front of him. Actually, the buttons on the EMS console are simply metal brads, the round head of which can be activated to transmit an electronic impulse by the touch of a stylus or of a finger (on which a conductor is put). The point, driven through the console panel, is soldered to a wire leading to the computer bank. The point Wiggen makes about the digital signals also being the pilot signals, I take to mean that the initial signals representing numerical values sent from the various metal “buttons” on the panel are simultaneously converted into an analogue phase, into voltage values which activate the loudspeakers and produce the desired tones. [Note: As of Oct. 18, Wiggen informs me that there is yet no analogue converter, “but when we get one it will also work in real time; that is the output of the converter will directly create the sound…”]. Other controls on the console make it possible to govern duration of tones, bypassing the problem of a manual edit. The astonishing quality of the EMS, according to Wiggen, is that it can be operated by the composer alone. No engineer is required.

“If it is wanted to record a series of sine tones,” Wiggen explains further, “the pilot signals are first recorded on the digital tape recorder while one listens to the tone, then [one] rewinds the digital tape to the first tone of the series, depresses the ‘record’ button, after which both the digital and analogous tape recorders are set in motion; the first produces the pilot signals which make the tones ‘resound’ and the second records the tones well without needing any editing. Since numbers of tones can be stored together in the digital recorder, no copying is needed.”

Probability Plug-In

If this information is not difficult enough to take in and process, Wiggen also speaks of a “probability generator” which can be plugged in to take over the job of frequency production as well as tone amplitude and duration. He proposes a “drawing pen” by which the computer-based composer can manually alter, or design, the wave-form of his sine tones, giving them his own choice of dynamics. Speaking of the possibility of composers studying each other’s computerized works, he remarks, “A detailed indication system is an essential part of this studio. When a composition ‘exists’ also on a digital tape in the form of pilot signals, an interested composer can borrow his colleague’s digital tape from the archive, and play back a certain part of it in order to study at the control table just how this sound has been achieved.”
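Wiggen’s “probability generator” can be suggested in miniature: chance, rather than the composer’s finger, selects the frequency, amplitude and duration of each tone. The ranges and distributions below are invented for illustration, not drawn from the EMS design:

```python
import random

# A toy "probability generator": chance takes over the choice of
# frequency, amplitude and duration for each tone.
random.seed(1968)   # fixed seed so the chance score is repeatable

def random_tone():
    return {
        "freq_hz": random.uniform(100.0, 2000.0),    # pitch chosen by chance
        "level_db": random.choice([-20, -10, -6]),   # a few loudness steps
        "duration_s": random.uniform(0.1, 2.0),      # length of the tone
    }

# Eight chance-composed pilot-signal instructions.
score = [random_tone() for _ in range(8)]
for instruction in score:
    print(instruction)
```

Each dictionary here plays the role of one pilot-signal instruction; in Wiggen’s studio such instructions would drive the tone generators directly.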

Even to the layman, the Swedish radio EMS can be seen to possess tremendous potential. The basic aspect to understand is the reduction of musical sound to numerical functions. One last comment, by composer Gerald Strang in the “Cybernetic Serendipity” catalogue of London’s Institute of Contemporary Arts, is thankfully clarifying: “If I write a series of notes and I say this passage is to be played mezzo forte by the oboe, with certain particular kinds of attacks and certain phrasing patterns and certain expressive devices, I am really saying that certain raw data shall be processed and modified through this instrument, coming out as acoustical sound waves.” As with a microphone used for amplification or recording, Strang illustrated, music is broken down from acoustical sound waves into electrical voltages and reconverted back into acoustical energy. This job, the computer is extraordinarily qualified to perform.

What’s The Score? This one is visual, from a computer composition by British composer Peter Zinovieff. Horizontal forms refer to pitch sequences, timbre, waveform, loudness and crescendo. The work is entitled “Four Sacred April Rounds.”

The EMS in Stockholm, with its ability to structure sound, intensity and duration of tones, and to store and record what it is fed, obviously represents a valuable tool to the Swedish composer and implies new directions for Swedish music. It also implies an immense amount of knowledge and intimacy with the EMS apparatus and process. But “computer music” seems a logical product of technological advances brushing up against the composer’s ancient limitations of traditional instruments and performer’s skill. Says Pierce: “As long as the maker of electronic music simply creates new but limited instruments, he suffers the limitations of all particular instruments, whether they be mechanized or electronic…” Pierce seems to have been pushing for a major breakthrough such as the EMS represents. But the problem does not end here, for “unlike the composer who has used a symphony orchestra as one huge instrument, [the electronic composer] has neither the years of tradition nor the skill of the instrumentalists to rely on.”

Wiggen, and those who will work with the EMS system, seem to be patently avoiding the pitfalls Pierce describes. Like many modern men who approach science for aid and inspiration, they are getting more than they bargained for—and looking forward to the windfall.

How will the public react to any broad new influx of these sine-wave symphonies? Probably much as it always has, despite the aural changes in its environment. In “Our New Music,” composer Aaron Copland writes, “During the very critical years of change that followed the death of [x], composers had come to take it for granted that their works could be of interest only to the most forward-looking among their audience. How could the ordinary music lover, comparatively unaware of the separate steps that brought on gradual changes in musical methods and ideals, be expected to understand music that sounded as if it came from some other planet?”

“X” Factor

The “X” factor is interesting: it was the death of Wagner that Copland referred to, and the year his book was published was 1941. But all that was written before the epoch of space exploration, before the physical theories of the universe had really penetrated the modern artist’s consciousness—before even something as blatant and commercial as pop music thrust into new realms of acoustical space.

We have, many of us, already assimilated that music “from some other planet.” Bolder explorers now are careening off into the stratosphere in search of other, more distant planets. With the technical aids and electronic instruments such as Wiggen’s EMS, they are likely to find them.

Picture Credits: Block Diagram and Circuit Boards courtesy of Institute of Contemporary Arts, London, from the “Cybernetic Serendipity” catalogue published by Studio International. All others: SKO.

Received in New York October 21, 1968.

Mr. Oberbeck is an Alicia Patterson Fund award winner, on leave from Newsweek, Inc. This article may be published, with credit to S. K. Oberbeck and the Alicia Patterson Fund.