

Williams Mix (John Cage, 1952; fv = 5'42")

Dripsody (Hugh le Caine, 1955; fv)

Idle Chatter (Paul Lansky, 1985; fv = 9'26")

The difference between Electronic music and Musique Concrète
Electronic music uses sounds that are synthesised directly from sound waves, whereas musique concrète begins with pre-existing sound elements.

Musique concrète: Etude aux chemins de fer (Pierre Schaeffer, 1948, fv = 2'53")
Electronic music: Kontakte (Karlheinz Stockhausen, 1960, fv = 35')

Since many compositions use elements from both musique concrète and synthesised music, the umbrella term 'electroacoustic music' is used to refer to music that uses electronics in any way to produce or manipulate sound material.


Sound Techniques

Various techniques are used to produce and manipulate sound.

The timbres of some sounds, especially the attack, can change dramatically when they are reversed. Long fades become gradual crescendi. Words become an abstract series of sounds. Different sonic qualities emerge. Although a very simple process, reversing a sound can help prepare for more effective use of other types of transformation. 
(Note also that this is a stereo file: there are two channels, left and right. A mono file has only one channel. Try reversing a file that has slightly different waves in each channel.)
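The idea can be sketched in a few lines of Python. This is an illustrative example only (the lesson's audio files are not available here); plain lists of floats stand in for the samples of each channel:

```python
# Reversing audio, sketched with plain Python lists standing in for the
# samples of a sound file (illustrative only).
def reverse_sound(samples):
    """Play the samples back to front: attacks become swells."""
    return samples[::-1]

# A stereo file has two channels, left and right; reverse each one.
left  = [0.0, 0.2, 0.9, 0.1]   # a sharp attack followed by a decay
right = [0.0, 0.1, 0.8, 0.2]
rev_left, rev_right = reverse_sound(left), reverse_sound(right)
```

Reversing each channel separately keeps the stereo image intact while the attack becomes a swell.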

(Creating a loop)
A certain amount of unity can be attained through repetition of an idea. This can also create rhythmic and metrical patterns and even a hypnotic effect.
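As a sketch (again with plain lists standing in for audio samples; the fragment here is invented):

```python
# A loop is a short fragment repeated end-to-end, which is what creates
# the rhythmic, metrical, even hypnotic effect described above.
def make_loop(fragment, repeats):
    return fragment * repeats

beat = [0.9, 0.4, 0.1, 0.0]      # a tiny, made-up one-beat fragment
pattern = make_loop(beat, 4)     # four identical beats in a row
```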

In a performance, do all the musicians sit on the same chair at the same time? 
So, should all the sounds come from the same direction?

Create a feeling of movement by panning from left to right speakers 
Look at this screen recording to see how it's done.
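For readers without access to the recording, a minimal sketch of a linear left-to-right pan, assuming a mono list of samples (illustrative only):

```python
# A linear pan: cross-fade the gain of the two channels so a mono sound
# appears to travel from the left speaker to the right.
def pan_left_to_right(samples):
    n = len(samples)
    left, right = [], []
    for i, s in enumerate(samples):
        pos = i / max(n - 1, 1)        # 0.0 = hard left, 1.0 = hard right
        left.append(s * (1.0 - pos))   # left channel fades out...
        right.append(s * pos)          # ...as the right channel fades in
    return left, right

left_ch, right_ch = pan_left_to_right([1.0] * 5)
```

Real pans often use an equal-power curve instead of a straight line, but the principle is the same.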

Reverb is, in essence, a dense accumulation of delays. Add reverb to a sound to give a sense of space. Listen to a 'dry' oboe signal compared to one with reverberation added. Adjust the settings and preview each result until you are satisfied.
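Since reverb is built from delays, a single echo is a good first sketch (illustrative; real reverbs stack many such delayed copies with filtering):

```python
# Mix a signal with a delayed, quieter copy of itself - one echo.
# Reverb is, roughly, a dense accumulation of such reflections.
def add_echo(samples, delay, decay):
    out = list(samples) + [0.0] * delay     # room for the echo tail
    for i, s in enumerate(samples):
        out[i + delay] += s * decay         # delayed, attenuated copy
    return out

wet = add_echo([1.0, 0.0, 0.0], delay=2, decay=0.5)
```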

(very useful effects)
Some sounds react more dramatically than others. Try to find out what they have in common.

How can time affect pitch?
When the speed of a sound is changed, its pitch is affected. Think of the sound of a tape when the battery is running out. Straightforward key changes can be carried out here.


For example:

This is jauntyguitar. This is jauntyguitarpitchshift. As you can hear, the key has been lowered by a tone.
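The arithmetic behind such a shift can be sketched in Python. Equal-tempered semitones are a factor of 2^(1/12) apart, so changing the playback speed by that factor shifts the pitch by one semitone (the function name is illustrative):

```python
# Speed and pitch are linked: doubling playback speed raises the pitch
# an octave. A shift of n semitones needs a speed factor of 2 ** (n / 12).
def semitone_speed_factor(semitones):
    return 2 ** (semitones / 12)

# Lowering by a tone (two semitones), as in the guitar example above,
# means playing back at roughly 0.89 times the original speed:
factor = semitone_speed_factor(-2)
```

Note that simple resampling, like the dying tape, changes duration along with pitch; keeping the duration constant requires a separate time-stretching step.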

Sound is made up of harmonics

These are also known as partials or overtones. They can be viewed in the spectral window. The bottom note in this overtone series is C, two octaves below middle C.
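The overtone series is easy to compute: partials sit at whole-number multiples of the fundamental. A small sketch, taking C two octaves below middle C as roughly 65.41 Hz:

```python
# Partials (overtones) lie at integer multiples of the fundamental.
def overtone_series(fundamental_hz, count):
    return [fundamental_hz * n for n in range(1, count + 1)]

# C two octaves below middle C is about 65.41 Hz:
partials = overtone_series(65.41, 8)
# The 2nd partial is the octave, the 3rd a twelfth (G), the 4th two
# octaves, and so on up the familiar overtone series.
```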


Filtering before | after 
Filters can be applied to boost or reject certain frequencies. Examine the full range of the church organ. Then listen to it when everything except the note A is filtered out (audio).

Composers of electronic music often use noise as a source of sound waves. They use Subtractive Synthesis to filter out some harmonics and build their own timbres by boosting or rejecting frequencies.
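Subtractive synthesis can be illustrated with a toy example: generate a harmonically rich sawtooth wave, then filter some of its harmonics away. A minimal Python sketch, with a one-pole low-pass standing in for a studio filter bank (all values illustrative):

```python
# Subtractive synthesis in miniature: start from a bright sawtooth wave
# (rich in harmonics) and filter harmonics away to darken the timbre.
def sawtooth(freq, sample_rate, n_samples):
    return [2.0 * ((i * freq / sample_rate) % 1.0) - 1.0
            for i in range(n_samples)]

def low_pass(samples, alpha):
    """alpha in (0, 1]: smaller values reject more high frequencies."""
    out, prev = [], 0.0
    for s in samples:
        prev = prev + alpha * (s - prev)   # smooth out fast changes
        out.append(prev)
    return out

bright = sawtooth(110.0, 44100, 1000)
dark = low_pass(bright, 0.05)              # same pitch, duller timbre
```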

Attack - Decay - Sustain - Release
The sound envelope describes how the intensity or strength of a sound varies with time. The attack portion of a sound determines many of its characteristics.
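A linear envelope generator makes the idea concrete. The segment lengths and sustain level below are arbitrary illustrative values:

```python
# Build an Attack-Decay-Sustain-Release envelope as a list of gains in
# [0, 1]; segment lengths are in samples.
def adsr(attack, decay, sustain_len, release, sustain_level=0.6):
    env  = [i / attack for i in range(attack)]                         # 0 -> 1
    env += [1 - (1 - sustain_level) * i / decay for i in range(decay)] # 1 -> sustain
    env += [sustain_level] * sustain_len                               # hold
    env += [sustain_level * (1 - i / release) for i in range(release)] # fade out
    return env

# Multiplying the envelope into raw samples shapes the sound's intensity:
tone = [1.0] * 40
shaped = [s * e for s, e in zip(tone, adsr(10, 10, 10, 10))]
```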

Match the following sound production methods with the sounds you hear.

(a) Bowing (b) Blowing (c) Shaking (d) Plucking (e) Striking

Sound 1 | Sound 2 | Sound 3 | Sound 4 | Sound 5

Stringing together a series of sound effects without any musical aim will produce a mish-mash of aural events that have no relationship to each other.

Try to help the listener find a meaning in the piece, recognise a plan or hear the connection between the 'gestures'. It gives that person a reason to keep listening.

You'll find out how to organize your sounds here:

  • Finding contrasting elements
    • Variety means Contrast
      Contrast is a basic ingredient in literature, cinema, the media, art, fashion, cookery... life. Opposing or contradictory ideas in a work can become more interesting when they are presented near each other. In conventional music contrast is achieved by introducing a new tune or harmony, by changing the dynamics or the instruments or by having rhythmic variety. The challenge is to have a balance between unity and variety.
    • Unity and variety are not the same as monotony and incoherence
      You are going to compose a piece of musique concrète as part of this project. Recurrence of key elements and the use of a drone can create unity. Start listening out for elements that can provide contrast in a piece.
    • Listen to the following 41" extract by Roger Doyle and decide if the two sounds used are:
      • a) lawnmower and guitar
        (b) 'moo-ing' cattle and oboe 
        (c) distorted typewriter and uilleann (Irish) pipes
        (d) thunder and bagpipes
    • Apart from the contrasting timbres, how does he achieve contrast with these two very different sounds? Do they play at the same time from the very start? (C)
  • Placing events in time
    • The problem with electroacoustic music...
      In electroacoustic music there may not be a melody or rhythm or harmony or even instruments so it can be difficult to know what to listen out for. Each sound has been placed in a certain position because the composer felt that it was appropriate.
    • What do listeners have to do?
      As listeners it is our job to figure out what the relationship is between the various elements in a piece. Consider every event or 'gesture' you hear as a musical happening.
    • Listen to Silence versus Loudness (Susan Wilkinson, 2004, 59")
      • Does the composer introduce all of her sounds immediately?
      • Is there suspense? Is there a climax?
      • How far apart, in seconds, are the main events? Can you say what the events are?
      • Does the composer keep our attention?
      • Is there unity in the piece?
    • How important is repetition? How else can unity be achieved? 
      Other musical elements contribute to the overall structure.
    • One of these elements is silence.
  • Using silence
    • The role(s) of silence
      Consider the use of the silent sections in music. Are they used just as brief rests during a piece? Has silence any other function?
    • Now that you've seen how repetition, position of musical elements and silence all contribute towards the structure of a piece, you may move on to the next section. You will find out about tension and release.
  • Releasing tension
    • Why is the climax important?
      Without a high point - a climax - somewhere during a piece of music the ideas would probably ramble. In an hour long television drama, for example, at what point does the crisis occur? What are the events that lead up to the climax? How do they cause the tension? What happens afterwards to resolve the tension?
  • Building texture
    • Sounds happening at the same time
      Here is a piece which is built entirely on the sound of a drop of water. Digital technology was not available for the composer, but would have made the task of exploring and using the sound a lot easier.

Check list about Structure
1. Contrast, Variety (and Unity)
2. Positioning of events in time and in relation to each other
3. Silence - more than just a 'rest' 
4. Climax - focal point 
5. Texture, Layers, Richness and Sparseness


"Electroacoustic refers, not to a hypothetical style, but to an assortment of various techniques by which composers are freed to work in whatever style they choose."
Electroacoustic describes music in which the use of an electronic component is vital to the piece. It is not a style of music. It simply refers to the range of tools and devices used by a composer to achieve what cannot be achieved with regular instruments. Here is an extract from Crosstalk by Mike Vaughan. It is an example of electroacoustic music. 
To claim that Schaeffer, Stockhausen, Wishart and Dennehy all write in 'an electroacoustic STYLE' would be as outrageous as claiming that Bach, Mozart, Beethoven, Bernstein and Deane all write in 'a violin STYLE' simply because all have written for the instrument. An attempt will be made in the following paragraphs to highlight some of the very many different experiments and achievements that have taken place in the area of electronically-assisted musical composition. By referring to some of the work undertaken in this area, it is hoped that the reader will be dissuaded from placing electroacoustic composers under an imaginary umbrella of a non-existent style.

Pierre Schaeffer was a pioneer of electronic music. An engineer with Radiodiffusion-Télévision Française in the early 1940s, he persuaded them to initiate research into musical acoustics. He discovered that he could lock-groove records (i.e. instead of spiralling towards the centre of the record, the needle could be made to stay in one groove, creating a loop). He examined the properties of percussion sounds and was drawn to the possibility of isolating naturally produced sounds, which led to the term Musique Concrète: the sounds were based on natural sounds recorded and played back in a musical context. Our mental apparatus is predisposed to allocate sounds to their sources, but Schaeffer worked at detaching sound-objects from any association with their original context. He experimented with bell tones, succeeding in eliminating the attack portion of an event. His first official composition, Etude aux chemins de fer, was a montage of sounds, including six steam locomotives whistling, trains accelerating and wagons passing over the joints in the tracks. This was a significant experiment because it was an act of musical composition accomplished by a technological process, the elements were ‘concrete’ (as distinct from ‘abstract’) and replaying was not dependent on human performers. Schaeffer then began playing records at different speeds, which affected not only pitch and duration but also the amplitude envelopes of the sounds. For his collaborative work with Pierre Henry entitled Symphonie pour un homme seul he used repeated patterns of spoken words mixed with other sounds such as prepared piano. After the first public performance of Musique Concrète in Paris in 1950, Schaeffer devised a system of notation using categories such as voices, noises, prepared instruments and conventional instruments. In 1951 he began using tape recorders to achieve echo and a pseudo-reverb, and he played pre-recorded loops at different speeds.
Stereo was still in development so Schaeffer’s spatial experimentation with sounds consisted of playing up to five separate tracks with five separate speakers, one of which was hung from the ceiling. This brief resumé of Schaeffer’s attempts to modify real sounds while working with a limited technology puts him in the category of electroacoustic composer but it certainly does not pigeon-hole him into a so-called ‘style’.

Meanwhile in Cologne a mathematician, Werner Meyer-Eppler; an inventor, Robert Beyer, a researcher at Bell Labs; and a composer, Herbert Eimert (Klangstudie II), approached the whole idea of electronic composition from a different angle. All saw the acoustic limitations of available instruments and they set about generating sound waves electronically. They analyzed and synthesized speech with a Vocoder. They tried to exercise control over every aspect of musical composition. Electronic synthesis by means of sine-wave oscillators eliminated the innate characteristics of natural sound sources. Filters were applied to highlight certain harmonic components of sawtooth, triangle and square waves and attenuate others. Karlheinz Stockhausen continued working on synthesis technology, creating sounds from harmonics and deriving timbres from tables of frequencies. The French complained that the sounds in Stockhausen's Studie I and Studie II lacked a human element. The Germans accused their 'rivals' of being unable to dissociate sounds from their context. These electroacoustic composers differed in their views and in their philosophy concerning composition. Ideological warfare ensued. Their equipment, their techniques and their results placed them at opposite sides of the electroacoustic fence: real sounds versus synthesized sounds. How could they share the label of 'style' when their methods and approach were so different?

Stockhausen's Kontakte is probably the first four-channel music production ever. He invented a special rotating loudspeaker with a horn that was picked up by four microphones in a circle. (The Dolby Surround System, in the cinema, is a logical continuation of this idea.)

Other groundbreakers of the early years included Luciano Berio, Luigi Nono and Bruno Maderna, who belonged to the Milan school and used a combination of techniques. Maderna produced Musica su due dimensioni in Darmstadt in 1952. Taped tones were projected through a loudspeaker alongside flute and percussion, revealing a softening in the hard-line principles of pure electronic music. Berio used speech (or ‘treated phonèmes’) as well as sine-wave generators, an Ondes Martenot, four-track recorders and filtered noise, thus breaking down the barriers between the Paris and Cologne schools of thought. Even Eimert eventually used spoken text, manipulated beyond recognition, in Selektion (1959). A major turning-point was Stockhausen’s Gesang der Jünglinge (1955/6). It was structured around recordings of a boy’s voice, treated and integrated with electronic sounds.

In 1953 Edgard Varèse recorded iron mills, saw mills and other factories in Philadelphia, making a library of sound material for use in Déserts. He used electronic elements too. His 8-minute-long Poème électronique was produced for the Philips Radio Corporation pavilion at the Brussels Exposition of 1958. It was played through 450 speakers and was a multi-media event: projected images accompanied the music. Varèse rejected the term Musique Concrète as a description of the work. It was a montage of both 'found' sounds and synthetic creation, i.e. 'organized sound'. At last the so-called barriers were breaking down.

The one principle that all these electroacoustic composers share is a refusal to submit themselves, in Varèse's words, "...only to sounds that have already been heard." The RCA synthesizer at the Columbia-Princeton Electronic Music Centre allowed low- and high-pass filtering along with noise, glissando, tremolo and patchable resonance and attenuation sections. Milton Babbitt used the Mark II extensively - for example in Philomel, for soprano and synthesized sound. In an essay entitled "Who cares if you listen?" (High Fidelity magazine, 1958) Babbitt famously said that non-experts would not understand his music and rejected their presence at any concert of his music. But how can we say that he wrote in an 'electroacoustic style'? His repertoire includes non-electronic serialist compositions as well as jazz compositions.

Meanwhile, Steve Reich was doing important work with tapes - speeding up, slowing down, reversing, splicing and looping. In It's Gonna Rain (1965) and Come Out (to show them) (1966) all the original meaning in the repeated speech disappears. The mechanical yet rhythmical progression in each work is hypnotic. Reich's Different Trains (1988) marked a new compositional method in which speech recordings generated the musical material for musical instruments. He won a Grammy for this in 1990. Speech has been treated in original ways by many composers, depending on the available tools of the time and the personal style of the composers. Reich chose minimalism. Others did not.

When computers and other technology offered greater possibilities for manipulating sound, composers hailed the new resources as a release from the bondage of acoustic instruments. Max Mathews, a Bell engineer, had already begun exploring the use of the computer itself as a means of calculating and generating sound samples as far back as 1957. John Chowning worked successfully, at Stanford, on the hugely important FM synthesis in the 1970s. Barry Vercoe developed Csound in 1986. It is a unit-generator-based synthesis language that enables the user to create new sounds. These are tools of composition for all electroacoustic musicians, who are free to use them in their own personal way. Composers have never been so unrestricted in their work. It is no wonder that there is no one common style in the music.

In the mid-1970s the composers Iannis Xenakis (Hibiki-Hana-Ma) and Curtis Roads suggested granular synthesis as a technique for producing complex sounds. It is based on the production of a high density of small acoustic events called ‘grains’ that are less than 50 milliseconds in duration. This method is difficult to work with because of the large amount of calculation required so, until recently, few composers have experimented with it. Barry Truax, working at Simon Fraser University in Vancouver, developed a real-time implementation in 1986 with a programmable digital signal processor as part of the PODX system. Granular synthesis is now a viable compositional tool.
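To make the idea concrete, here is a toy granular-synthesis sketch in Python. It is purely illustrative and bears no relation to the PODX system's actual implementation: each grain is a short, windowed slice of a source sound, scattered at random into an output buffer.

```python
import math
import random

def granulate(source, grain_len, n_grains, out_len, seed=0):
    """Scatter short windowed grains of `source` into an output buffer.
    At 44.1 kHz a grain under 50 ms is under about 2205 samples."""
    rng = random.Random(seed)
    out = [0.0] * out_len
    for _ in range(n_grains):
        start = rng.randrange(len(source) - grain_len)  # where to read
        dest = rng.randrange(out_len - grain_len)       # where to write
        for i in range(grain_len):
            # Hann window: fade each grain in and out to avoid clicks.
            w = 0.5 - 0.5 * math.cos(2 * math.pi * i / grain_len)
            out[dest + i] += source[start + i] * w
    return out

cloud = granulate([1.0] * 1000, grain_len=100, n_grains=50, out_len=4000)
```

The character of the result depends on grain density: sparse grains give a pointillist texture, dense overlapping grains a continuous cloud of sound.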

Paul Lansky created Cmix in 1982. It is implemented as a library of signal-processing functions embedded in the C programming language. Signal-processing functions in Cmix allow users to filter, reverberate and spatially process sound files. In his four Idle chatter pieces Lansky creates banks of formant filters. He isolates words, flattens the pitch contours, transposes them while using granular synthesis for sustained sounds. He also uses algorithmic composition and random probability.

He was inspired by rap music in Idle chatter and by "swaying background singers" in just_more_idle_chatter. He used fourth species counterpoint in Notjustmoreidlechatter and now invites browsers to download idlechatterjunior from his site on the Internet. Do other composers share his style? No. Katharine Norman, a former student of his at Princeton University, uses Cmix, Csound, comb filters and other software packages in London E17 and Hard Cash, in which the original sounds are recorded real-world sounds, e.g. a spinning coin in the latter. Norman deals in digital soundscape and writes radiophonic, documentary-style pieces, combining real sounds from London with synthesized sounds. Like other electroacoustic composers, her style is unique.

Riverrun (1986), by Truax, belongs to the abstract world of pure sound. It is not programmatic. It is designed to sound different spatially on headphones and in the concert hall. There are small binaural time-delays that localize the sounds outside the head when headphones are used. Spatialization is a feature of Roger Doyle's Babel Project too, but his compositional methods do not fall into the same category as Truax's. He, too, processes sound in a personal way. Doyle sifts through hours of recorded concrete sounds and uses just a small amount of the material. In Under the Green Time (1995) the sound of the typewriter has been processed beyond recognition. It becomes an acousmatic accompaniment to the uilleann pipes.

The composer Morton Subotnick does the opposite. He uses sounds of electronic origin to evoke strong real-world images in Wild Bull (1968). He is an innovator and does not follow the narrow path implied by the term 'style'. His place in electronic music history is guaranteed by his series of firsts: Silver Apples of the Moon (1967) was the first ever commission of a composer for electronic music and the first piece to be composed expressly for LP. Touch (1970) was Columbia Records' first quadraphonic piece written for home listening. (Subotnick considered the living room to be the listening area for 20th Century chamber music.) All my Hummingbirds have Alibis (1992) was the first work released exclusively on CD-ROM. In this piece the computer listens and reacts to the MIDI-controlled piano and mallet instruments. There is a great deal of computer-generated material, and random music 'composed' by the computer in response to the performance of live musicians. Subotnick's influence is in the creation of new performer-to-computer interaction. He is an electroacoustic composer who does not write in the same style as, say, Hildegard Westerkamp, a founding member of the World Forum for Acoustic Ecology.

The majority of her compositions deal with aspects of the acoustic environment. Nowadays, DAT machines are favoured over analogue recorders for capturing sounds cleanly. She focuses our attention on details in urban and rural soundscapes and has composed film soundtracks and ‘sound documents’ for radio. In her Fantasie for Horns II (1979) she uses a French horn and tape. The sound sources on the tape are Canadian train horns, foghorns, factory horns, boat horns and an alphorn. She explores the echo effects and the pitches of the horns, which she found were affected by the landscape. Active in the World Soundscape Project, Westerkamp tries to retain the context of original sounds in Beneath the forest floor (1992). She makes transformations but keeps the links to the original setting. There is some time-stretching and some use of filters.

Trevor Wishart, a Sonic Artist, says that "... the sophisticated control of this dimension of our sonic experience has only become possible with the development of sound recording and synthesis and the control of virtual acoustic space via sound projection from loudspeakers." Wishart's Vox cycle (1980–88) shows a huge range of ideas and techniques. Fascinated by the formant structure of the human voice, he has created six pieces based on it, using abstracted concrete and electronic sound sources. He says himself that many of the rhythms used in Vox 3 would have been impossible to carry out without the use of computer-generated sync tracks. The ongoing invention of powerful equipment and the availability of the Composers Desktop Project, for example, allow Wishart and other 21st Century composers the freedom to work in their own style.

Digital technology gives composers and studio engineers more power to manipulate the inner make-up of a sound. Compared to the early years of the 20th Century, it is now easier to produce one’s ideas. The composer does not have to be an engineer, mathematician or inventor, for one thing.

During the past few years concerts in Ireland have included electroacoustic works in their programmes: George Crumb's Black Angels (1970) for amplified string quartet; Donnacha Dennehy's derailed (1999) for harpsichord, flute, clarinet, violin, cello, double bass and tape and his Metropolis Mutabilis (1996) for tape and video; Zack Browning's Banjaxed (2000) for soprano, piano, violin, drum-kit and tape; Katharine Norman's Fuga Interna II - Sequence (2000) for piano, flute, clarinet, cello and electronic sounds; Arne Nordheim's Partita for Paul (1985) for violin solo with digital delay; Iannis Xenakis's S.709 (1994); Rhona Clarke's Pied Piper (1994) for flute, tape and live electronics; Benjamin Dwyer's Crow (1999) for amplified tenor recorder or flute and tape; and Steve Reich's Vermont Counterpoint (1982) for flute and tape - to name but a few. These composers come from different countries, and their knowledge of electronics and computers as well as their ambitions in composition are determined by the academic institutions they have attended, the people they have come into contact with and their own experience of listening. They do not share an electroacoustic style. Their works spring from different personalities and philosophies.

Unlike Bach and Handel, who wrote in a Baroque style, Mozart and Haydn, who wrote in a Classical style, and Beethoven and Chopin, who wrote in a Romantic style, there is no common thread among electroacoustic compositions apart from the continuing invention and manipulation of sonic resources in an artistic way. We may refer to Westerkamp and Subotnick, Stockhausen and Schaeffer, Wishart and Doyle, Dennehy and Krivet, Otte and Vercoe, Ferrari and Corcoran as electroacoustic composers, but that is the only label they all have in common.

Varèse said: "I have been waiting a long time for electronics to free music from the tempered scale and the limitations of musical instruments. Electronic instruments are the first step towards the liberation of music." (And the liberation of composers too, it would seem.)

Essay by A M Higgins

