About the Pieces
Program notes provided by the composers.
Micrófono Abierto/Open Mic[2017]
for voice and live processing
Guzmán Calzada (Uruguay, 1995), Sofía Scheps (Uruguay, 1987)
The piece proposes to attend to the acoustic qualities of a specific architectural space as an invisible, dynamic layer that can be activated or rebuilt using voice, sound synthesis and amplification. With his or her voice, a singer activates a system that responds by artificially recreating (through sound synthesis) the acoustic response of the space, given by its architectural dimensions. Following a score structured as an extended descending glissando, the interpreter sings in front of a microphone. The frequency information captured by the microphone is analyzed by the system, which compares the input with the resonance frequencies of the space. When there is a match, the system triggers an envelope applied to an oscillator at that frequency. This recreation/reconstruction is superimposed on the "natural" response of the space, giving rise to the idea of "dubbing".
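The matching logic described above — a sung pitch compared against the room's resonance frequencies, with a match triggering an oscillator at that frequency — can be sketched roughly as follows. This is an illustrative Python sketch, not the piece's actual code; the rectangular-room mode formula, the example dimensions and the matching tolerance are all assumptions:

```python
import math

C = 343.0  # speed of sound in air, m/s

def room_modes(lx, ly, lz, max_order=4):
    """Resonance (mode) frequencies of an idealized rectangular room."""
    modes = set()
    for nx in range(max_order + 1):
        for ny in range(max_order + 1):
            for nz in range(max_order + 1):
                if nx == ny == nz == 0:
                    continue
                f = (C / 2.0) * math.sqrt(
                    (nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2)
                modes.add(round(f, 1))
    return sorted(modes)

def match_resonance(detected_hz, modes, tolerance_hz=3.0):
    """Return the room mode closest to the sung pitch, or None if no match."""
    nearest = min(modes, key=lambda m: abs(m - detected_hz))
    return nearest if abs(nearest - detected_hz) <= tolerance_hz else None
```

In the piece the matched frequency would then drive an enveloped oscillator, superimposing the synthetic resonance on the room's natural one.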
CLUSTER Vfm[2017]
version for fixed media
Enrico Francioni (Italy, 1959)
CLUSTER Vfm (stereo version on fixed medium) can be considered one of the possible versions of CLUSTER. The composition consists of sixteen sequences, and each sequence ends, or opens, with a cluster. Its overall shape could be defined as speculative; this element becomes the criterion extended to the details of the material as well as to the substance of the processing principles and the spatialization/localization of the material. The whole work is, in the first part, the construction and, in the second, the decomposition (in time and space) of textures and sound agglomerations, all dominated by the interval of a fourth.
CLUSTER Vfm is the stereo version of CLUSTER V (for percussion, live electronics and support sounds). The material is of instrumental origin (vibraphone) and appears in three forms: original, elaborated, or continuum. Csound's role: events enter a Csound algorithm and are then re-proposed, or elaborated, after a time lag. Elsewhere, the processed events are re-proposed in elaborated form after a delay. The original material (processed or not) is output together with the processed material. Types of signal processing: original; delayed but unprocessed; elaborated; delayed and elaborated. The continuum is derived from a single instrumental sound [B4] processed with analysis and resynthesis (phase vocoder) to create sound bands, up to clusters. Effects made with Csound and applied to the signal: distortion, flanger, harmonizer, randomization, remodulation, reverse reading, analysis/resynthesis, delay.
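The "re-proposal after a time lag" described above — the original material output together with a delayed copy of itself — can be illustrated with a minimal sketch. The delay length and mix levels here are arbitrary illustrative values, not those of the piece:

```python
def repropose(signal, delay_samples, dry=1.0, wet=0.7):
    """Mix the original signal with a delayed (re-proposed) copy of itself."""
    out = [0.0] * (len(signal) + delay_samples)
    for i, x in enumerate(signal):
        out[i] += dry * x                  # original material
        out[i + delay_samples] += wet * x  # same event, re-proposed later
    return out
```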
THREE MINIATURES[2016]
I. Minus one
II. Sonom
III. Ordi ventilo
Massimo Vito Avantaggiato (Italy, 1974)
In these pieces I used only sounds derived from wind, industrial or domestic fans/ventilators, and water droplets. In these acousmatic miniatures the intervention of man upon nature is represented by strong computational modelling of the original sounds, while wild nature is represented by natural sounds.
Carnyx[2017]
Chris Arrell (USA, 1970)
A ceremonial and military trumpet, the Carnyx features prominently in contemporary depictions of the Iron Age Celts. Standing 10-12 feet in the air when the mouthpiece is brought to the performer's lips, the Carnyx places the bell and mouthpiece on opposite ends of an elongated "S" shape. The bell, typically designed as the head of a wild animal or mythical monster, comes to life with fiery red eyes, wagging tongue, and cacophonous bellows. One can imagine the terror of first encountering an army of Carnyces, the towering heads slowly emerging from the early morning mist with "cries so loud and piercing, that the noise seemed to come not from human voices and trumpets, but from the whole countryside at once". (Polybius, Histories, II, 29).
I used Ircam's OpenMusic to build algorithms that generate .orc and .sco parameters (Csound instruments, note onsets, durations, intensities and panning). I then imported the files into Csound for audio synthesis. All sounds were synthesized with Csound.
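The workflow of algorithmically generating score parameters and handing them to Csound can be sketched generically. This Python stand-in for the OpenMusic patch is purely hypothetical: the instrument number, parameter ranges and pitch set are invented, and only the score-line format (i-statement with onset, duration, amplitude, frequency, pan) follows Csound convention:

```python
import random

def make_score(n_events, seed=1):
    """Generate Csound score lines: i <instr> <onset> <dur> <amp> <freq> <pan>."""
    random.seed(seed)
    lines, onset = [], 0.0
    for _ in range(n_events):
        dur = random.uniform(0.2, 2.0)
        amp = random.uniform(0.1, 0.8)
        freq = random.choice([220.0, 275.0, 330.0, 440.0])
        pan = random.random()
        lines.append(f"i1 {onset:.3f} {dur:.3f} {amp:.3f} {freq:.1f} {pan:.3f}")
        onset += random.uniform(0.1, 1.0)  # gap before the next event
    return "\n".join(lines)
```

The resulting text would be written to a .sco file and rendered by the Csound orchestra.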
Cycle Etude[2017]
Tong Chen (China, 1992)
The inspiration for this piece is that the world is based on cycles. The universe, the solar system, the seasons, history, and so on all develop in cycles. In this piece I hoped to search for this rule of the world. In 1948, Pierre Schaeffer recorded the noises made by trains running along railroad tracks and, using analogue recording and editing techniques, composed the famous piece "Étude aux chemins de fer"; musique concrète became an important style of electronic music. The word "cycle" also suggests the bicycle, so I chose sounds made by a bicycle. Seventy years later, composing this piece with modern digital audio technology, I would like to pay my respects to Pierre Schaeffer.
The samples of the piece came from different sounds of a bicycle. I edited the samples in Logic and used Cabbage plugins to modify them, then organized all the materials and composed the piece. I also want to show that life and art run in cycles.
Shell[2017]
Michael Gogins (USA, 1950)
"Shell" is composed and rendered using csound.node, which embeds Csound in the JavaScript context of NW.js, an application that runs Web pages in the Chrome browser without need of a Web server. The piece is composed using my JavaScript algorithmic composition library Silencio. The piece is based on a parametric Lindenmayer system, which not only generates notes, but also voice–leading transformations that are used to fit the notes into chords. In other words, the Lindenmayer system generates not only the notes, but also the harmony. The score is rendered to audio using Csound. The piece also features graphical sliders that are used to adjust instrument parameters and levels, and a 3-dimensional piano roll display of the generated score.
Csound is used to generate and process all audio. The piano sound is synthesized by the Pianoteq VST plugin, which is a high–quality physical model of a grand piano. The plugin is run in Csound using the vst4cs opcodes. All other instruments are synthesized completely by Csound. There are no samples in the piece. Instrumental sounds are routed to effects and outputs using the signal flow graph opcodes.
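The parametric Lindenmayer system described above can be illustrated in miniature. The rules, symbols and the fixed "+7" transposition below are invented stand-ins, far simpler than the voice-leading transformations in the Silencio library, but they show the principle of rewriting a string and interpreting the result as notes:

```python
def expand(axiom, rules, depth):
    """Rewrite an L-system string `depth` times."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

def interpret(s, start_pitch=60):
    """Turn symbols into MIDI pitches: N = emit note, +/- = step, T = transform."""
    pitch, notes = start_pitch, []
    for ch in s:
        if ch == "N":
            notes.append(pitch)
        elif ch == "+":
            pitch += 2
        elif ch == "-":
            pitch -= 2
        elif ch == "T":
            pitch += 7  # simple stand-in for a voice-leading transformation
    return notes
```

In the piece, the same generative string drives both the notes and the chord progression they are fitted into.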
Nakx[2017]
Marcelo Carneiro (Brazil, 1971)
This work deals with attack–resonance profiles and the different allures and forms of continuation imposed on them. Each attack triggers multiple events that gradually develop through time. Accumulations and rarefactions are part of a compositional strategy that deals with a process of unfolding components during the continuum.
All sound objects were created using CsoundQt, and some additional processes were done with the Composer Desktop Project (CDP). The mixing was done in the Reaper environment.
Dark Path #6[2016]
Anna Terzaroli (Italy)
This piece is part of the "Dark Path" series. The music focuses on the soundmarks of a sonic landscape; it embodies a sense of history beyond itself. Beyond the analysis of the recordings used, there is a personal history. The piece aims to examine and explore the transformative possibilities of computer music.
The sound material comes from a soundscape located in Italy. The sounds were recorded, then processed with Csound (AM, RM, FM, granular synthesis, etc.), and finally mixed with Audacity and Csound.
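Of the processes listed, ring modulation (RM) is simple enough to sketch: the input is multiplied by a sinusoidal carrier, producing sum and difference sidebands. The sample rate and modulator frequency here are arbitrary examples, not values from the piece:

```python
import math

def ring_mod(signal, mod_freq, sr=44100):
    """Ring modulation: multiply each input sample by a sinusoidal carrier."""
    return [x * math.sin(2 * math.pi * mod_freq * i / sr)
            for i, x in enumerate(signal)]
```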
911[2017]
Wanjun Yang (China, 1977)
911 is the emergency number of many countries. Anyone may encounter an emergency and need the help of the police; they need to dial the number on a phone. The sound of 911 may be the hope of people in need. In this piece I generated the dialing sounds of 911 in Csound; in order to simulate an analogue phone call, I generated them as DTMF (dual-tone multi-frequency) signals. I believe the sound of 911 is important and unforgettable to anyone in danger, and that it does not always feel the same throughout an emergency; the feeling changes at different stages. In this piece I used doppler, reverse, pvsBlur, pitchShift and other Cabbage plugins to sketch a scene of a building on fire, with people in danger being rescued and brought back to life. The form of the piece is A+B: part A is the building on fire, part B is the rescue.
I programmed a simple patch in Csound to generate the DTMF sounds of an analogue telephone and exported the sounds of the digits "9" and "1" as wave files, the source materials of this piece. I chose some FX in Cabbage and exported them as VST plugins. I copied the Cabbage VST plugin DLL files that worked correctly into the VST plugin folder and indexed them in Cubase. I imported the material files into Cubase, processed the sounds with the Cabbage VST plugins, and chained different Cabbage FX to obtain new sounds. After getting all the sounds, I arranged them on the timeline, inserted Cabbage FX on the tracks, and used envelopes to modify the FX parameters of each track. In the Cubase console, I used automation of pan, fader and other parameters to mix all the tracks, then used a compressor on the main out and mastered the piece. Finally, I exported the piece as FLAC.
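DTMF itself is standardized: each key is the sum of one low-group and one high-group sine (e.g. "9" = 852 Hz + 1477 Hz, "1" = 697 Hz + 1209 Hz). A sketch of the generation — in Python rather than the composer's Csound patch; tone and gap durations are arbitrary:

```python
import math

# Standard DTMF keypad frequency pairs: (row Hz, column Hz)
DTMF = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

def dtmf_tone(digit, dur=0.2, sr=44100):
    """Sum of the two DTMF sinusoids for one key press."""
    f_lo, f_hi = DTMF[digit]
    n = int(dur * sr)
    return [0.5 * (math.sin(2 * math.pi * f_lo * i / sr)
                   + math.sin(2 * math.pi * f_hi * i / sr)) for i in range(n)]

def dial(number):
    """Concatenate key tones with short silences, e.g. dial('911')."""
    silence = [0.0] * 2205  # 50 ms gap at 44.1 kHz
    out = []
    for d in number:
        out += dtmf_tone(d) + silence
    return out
```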
Early morning. I can see three planets in the sky[2016]
Anton Kholomiov (Russia, 1985)
The music describes the fragile mood of the early morning: the state of mind when you are not fully awake but no longer sleeping. It is a fragile moment when something unexpected can happen, like the unusual sight of three planets. There are rare moments when they can be seen. There are actually even four of them if we count the Earth, but it obviously cannot be seen in the sky. There are too many very good emulators that create an ideal piano sound, and the living piano seems to be on its way to the Red List of Threatened Species. That is why I am very happy to record the live piano and hear its alive and slightly out-of-tune sound, and to present it to the audience of the conference.
The whole production cycle was made with Csound, but there are no synthesized sounds! I recorded live improvisations on grand piano and santoor, then layered the piano in many ways with lots of reverses and random panning. The piece was also made by happy accident. Initially I was planning to record a completely predefined piano piece. I recorded it, but at the last moment I decided to record some random passages to fill possible gaps in the piece. When I came home, I realized that the random passages were the most interesting thing in the whole session :) Then, with the magic wand of Csound, I scattered them in random ways and, through a lot of relistening, found the parameters that generated the most interesting sound to me. Then came the birds-in-the-jungle sample and some random natural noises. At the last moment I recorded the santoor to add a touch of a psychedelic state of mind to the piece.
Fábula[2014]
Marcelo Machado Conduru (Brazil, 1955)
Starting from the point that sounds are carriers of expression, the next step could make us think about how to tell a story without words. Or better, to build a scenario where dialogues and other types of speech rule out words to develop a wordless conversation. This is our sensation when birds share their chants or when dogs start to bark through the night: dialogues, on some level. Inflections, intonations, rhythm and other vocal details could be summed up in one word: utterance. Sense and utterance is a hard and subjective theme to dig into, so the focus here is oriented simply to music and utterance.
The piece presents types of "speakers" in sections named: 1 – Terrestres; 2 – A Fala das Coisas; 3 – Aéreos; 4 – Humanos (Terrestrial; The Speech of Things; Aerial; Humans).
The basis of the technical procedure was to keep the sounds raw, so that gesture and spectral change remain clear.
Source Code: zip file.
Videojuegos, zombies y otros[2017]
Lucía Chamorro (Uruguay, 1991)
This experimental composition was programmed using Csound. It resulted from the working process at the Electroacoustic Music Studio of the University School of Music (Montevideo, Uruguay), in the context of the Csound workshops.
This composition was made mostly using Csound. I made six different sections with Csound and then used a sound editor only for the final assembly.
Principal opcodes: fmvoice, pan2, rand, moogvcf, vlowres, fmmetal, reson, tambourine, diskin2, nreverb, foscil, lowpass2
Short audio files:
- twopeaks.aiff (for the fmmetal opcode)
- crunch.wav (from a sound bank) – (for diskin2)
- oua.wav (recorded by me) – (for diskin2)
Split[2016]
3 Movements and Live Audience Participation (First and Second Movement)
Shijie Wang (China, )
This piece is based on the author's own experience. One day he was doing a boring, mechanical and repetitive job for quite a long time. At the beginning he was fully concentrated on the job. As time went on, he started to lose focus; his mind began to wander to irrelevant, logic-less things, rather like a dream, while he was still physically doing the work. Later, he even heard acousmatic voices talking to him. This piece intends to recreate this mental journey. There are three movements. The first, "Distracting", describes the transition from fully focused to distracted. The second, "Overhearing", means that mind and spirit were traveling even further from reality than in the first movement. The final movement, "Splitting", insinuates that mind and spirit were completely split apart from reality.
First Movement: Sound materials used: Sound of an old–school needle printer, Sound of someone writing, and a Piano.
Second Movement: Composed and performed using Csound. Seven modules were defined for making sound: instr 2 and instr 20 use the pluck UGen for a percussion-like sound; instr 3 and instr 30 use the oscil UGen with three different tables; instr 4, instr 41 and instr 42 use diskin2 for sample playback. Five modules (instr 1, instr 10, instr 101, instr 102 and instr 98) manipulate the parameters of all the sound-producing instruments and effects; instr 99 is a delay effect; and, finally, the score part does the job its name suggests.
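A delay effect like instr 99 can be sketched generically as a circular buffer with feedback. The delay length, feedback and mix values below are arbitrary illustrative choices, not those of the piece:

```python
def feedback_delay(signal, delay_samples, feedback=0.5, mix=0.5):
    """Simple feedback delay line, analogous to a Csound delay instrument."""
    buf = [0.0] * delay_samples  # circular delay buffer
    out, idx = [], 0
    for x in signal:
        delayed = buf[idx]
        buf[idx] = x + feedback * delayed  # write input plus recirculated echo
        idx = (idx + 1) % delay_samples
        out.append((1 - mix) * x + mix * delayed)
    return out
```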
Third Movement (not included): This movement includes Live Audience Participation, implemented using Web Audio API, p5.js, node.js and Max/MSP
Source Code: https://github.com/Rexhits/Split (Second movement).
SaxGui[2004]
Basilio Del Boca (Argentina, 1976)
SaxGui is an acousmatic piece composed from the combined, processed sounds of alto saxophone and guitar. It does not present an extramusical program; it is simply a work conceived to obtain tensions and relaxations in its textures and sonic densities, produced by superpositions of sounds and by more or less animated rhythmic structures.
The work was mixed using Adobe Audition. The original saxophone and guitar samples were processed using Adobe Audition tools, SMS tools (by Xavier Serra), and especially Csound: soundin, sndwarp, Butterworth filters, clfilt, and hrtfer. In addition, the application Clusters (a score maker for algorithmic composition, written in Visual Basic by Basilio Del Boca) was used in combination with Csound to create specific sections of the piece.
Pulso Interior [2017]
Leonardo Secco (Uruguay–Canada, 1966)
The 450 bpm pulse acts as a "binder" sustaining a continuously transforming polyrhythm. This background pulse guides the piece and acts as "hidden" support, allowing complex control of all the parameters that define a multi-level rhythmic fabric. The internal pulse is based on a generative algorithm that allows control of the temporal, dynamic, timbral and spatial parameters of the piece by way of human interaction. The result is an immersive electroacoustic performance in which percussive sounds seem to interact to configure a continuously transforming virtual space.
Occupazione dello spazio [2017]
Michele Del Prete (Italy, 1974)
Occupazione dello spazio (The occupation of space) is a piece working on three modalities of the taking place of space as sounding space. The piece is composed for a minimum of 8 loudspeakers, after the Renaissance polychoral music of the Venetian school. The movements and positions of the sound events in space determine the nature of the musical material (impulses, granulation, filtered white noise), and not vice versa. Changes on the spatial scale are more important than those on the temporal scale.
The entire piece is the result of sound synthesis processes made entirely with Csound. Four instruments are used: 1) impulse trains with variable filtering and reverberation; 2) a granulation with variable filtering; 3) and 4) two distinct filtered noise sources. Several global variables ensure interdependence and cross-relations among the sounds generated by these instruments. Space is an essential component of Occupazione dello spazio (8 tracks): its elements are spatialized according to three distinct patterns (a "stable–static" figuration for the impulses, an evolving form for the granulation, a double-choir constellation for the two noise sources).
Source Code: zip file.
[stereo downmix]
Ad Infinitum: A tribute to Jean–Claude Risset[2016]
Georg Boenn (Germany, 1965)
'Ad Infinitum' is a tribute to Jean-Claude Risset, composer and pioneer of Computer Music, who died in November 2016. One of his many contributions was an acoustic illusion, the famous infinite glissando. My composition explores his glissando in a polyphonic way by using various transformations. First, the speed of the glissando is continuously changed. Secondly, the harmonic series is used to transpose the glissando and to transform its sound via granular synthesis. Finally, several voices are composed together and projected in circular movements around the auditorium.
The piece uses an implementation of Risset's glissando for Csound. The usual sinusoidal oscillators were replaced by square-wave oscillators. The classic algorithm is further modified to allow precise control over the tempo and direction of the glissando in real time by means of an LFO. Several versions were synthesized with real-time interaction. The sound files were further processed by granular synthesis modules in Cecilia (Csound and pyo versions). Spatialisation to a 6.1-channel output was synthesized by Csound using the vbap opcodes. Furthermore, a trigonometric formula was developed for generating accelerandi and decelerandi of two pulse streams towards the end of the piece, which signify a translation of Risset's pitch-based glissando into a 'glissando' in the rhythmic domain.
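The core of Risset's endless glissando is a bank of octave-spaced partials whose loudness follows a fixed bell-shaped envelope over log-frequency while every partial slides upward; as each partial fades out at the top, another fades in at the bottom. A frame-based sketch of that structure (sinusoidal weights only; the base frequency, octave count and raised-cosine envelope are common textbook choices, not the piece's parameters, and the piece replaces the sinusoids with square waves):

```python
import math

def shepard_partials(phase, n_octaves=6, base_hz=27.5):
    """Frequencies and loudness weights for one frame of an endless glissando.

    `phase` in [0, 1) slides every partial up one octave; a raised-cosine
    envelope over log-frequency fades partials in at the bottom and out at
    the top, so the ensemble seems to rise forever.
    """
    partials = []
    for k in range(n_octaves):
        pos = (k + phase) / n_octaves  # normalized log-frequency position
        freq = base_hz * 2.0 ** (n_octaves * pos)
        amp = 0.5 * (1.0 - math.cos(2.0 * math.pi * pos))  # raised cosine
        partials.append((freq, amp))
    return partials
```

Sweeping `phase` from 0 to 1 and wrapping produces the illusion; varying the sweep rate with an LFO gives the tempo and direction control described above.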
[stereo downmix]
Vocalisense [2015]
Oscar Pablo Di Liscia (Argentina, 1955)
The title is a portmanteau of Vocalise and Sense. A vocalise is a well-known kind of musical piece, and Sense stands for the name of the vocal ensemble Nonsense (whose members recorded the source sounds of the work). The source sounds are both speech (normal, whispered, shouted) and singing (sustained eight-note chords with different vowels). However, these two categories are combined, or transformed into one another, through several synthesis and processing techniques. Since the work was conceived for a two-dimensional surround sound system using the Ambisonics spatialisation technique, the exploration of several "spatial paradoxes" (such as the spatial dissociation of sounds from a single vocal source, the spatial projection of the constituent elements of a word or a syllable, and the spatial dissociation of the direct sound and its room effect) is central to its structure.
The piece was entirely synthesized using Csound, basically with a set of instruments designed by the author that perform granulation, spectrum-based synthesis and spatialisation. The code and the source sound files are available on request. Recording engineer of the source voice sounds: Natalia Perelman. The final render of the piece is first-order Ambisonic B-format and must be decoded for the appropriate set of loudspeakers. The spatialisation treatment is basically 2D (except for the early echoes and dense reverberation).
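First-order horizontal B-format encoding, as used for the final render, reduces to three simple gain laws per source. A sketch, using the conventional 1/√2 weighting on the omnidirectional W channel (this is the generic textbook encoding, not the author's instrument):

```python
import math

def encode_bformat(sample, azimuth_deg):
    """First-order B-format (W, X, Y) encode of a mono sample, horizontal plane."""
    az = math.radians(azimuth_deg)
    w = sample / math.sqrt(2.0)  # omnidirectional component (conventional -3 dB)
    x = sample * math.cos(az)    # front-back figure-of-eight
    y = sample * math.sin(az)    # left-right figure-of-eight
    return w, x, y
```

A decoder then forms loudspeaker feeds as weighted sums of W, X and Y for the given array geometry.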
[stereo downmix]
Al disopra dimorano gli uccelli del cielo, cantano tra le fronde [2007]
Fabio De Sanctis De Benedictis (Italy, 1963)
"Al disopra dimorano gli uccelli del cielo, cantano tra le fronde" is a quadraphonic electroacoustical piece. The title is desumed from a Psalm, and the sound material is taken from recording of bird songs, an explicit reference to Messiaen. The form wants to be a trip towards highness, as if the public was inside an aviary, a resemblance from composer's memory from childhood age. This trip is a metaphysical one, too, because in sciamanism birds are Gods messengers.
Csound and Cmask were used to edit and spatialize the sounds. Snd, the CCRMA software, was used to create artificial bird songs, for an interplay between the natural and the cultural/artificial. The audio files were finally edited in Ardour, with final mastering in JAMin, all Linux software.
Source Code: zip file.
[stereo downmix]
Second Body Awareness[2017]
Michael Rhoades (USA, 1956)
Second Body Awareness was composed using Csound, Cmask and AbSynth. The latter was used to create the four sound files of which the piece is comprised. It was originally intended to be diffused in a 3D cuboid sound environment and is the basis for a 3D/360 visual music piece, currently in progress. Rather than acting as a description of second body awareness, it is meant as an expression of it.
[stereo downmix]
SCP[2016–2017]
Joachim Heintz (Germany, 1961)
Two spaces in alternation. In the first, "real" space: something wants to get out. Direct, beating energy, obviously without recognizable result. In the second, "unreal?" space: resonance of the hits and clashes, resonances which go their own way, more and more extended — to which end? SCP ... escape ... scape.
Some recorded samples of different snare drum hits are the main sound source of this piece. All is done in Csound, without any other application:
- the rhythms and the accents
- the spatialization (vbap for the direct sound)
- the resonator, which is built up from a combination of a filter (clfilt) and six variable comb filters, with different chord structures and glissandi for each unit, then placed in an ambisonic setup
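The comb-filter resonator described above can be approximated in miniature: each feedback comb resonates at sr/delay and its harmonics, and a "chord" is simply a set of delay lengths. The delay values and feedback coefficient below are arbitrary, not those of the piece, and the clfilt pre-filtering stage is omitted:

```python
def comb_filter(signal, delay_samples, feedback=0.85):
    """Feedback comb filter: resonates at sr/delay_samples and its harmonics."""
    buf = [0.0] * delay_samples
    out, idx = [], 0
    for x in signal:
        y = x + feedback * buf[idx]  # input plus recirculated buffer sample
        buf[idx] = y
        out.append(y)
        idx = (idx + 1) % delay_samples
    return out

def resonator_bank(signal, delays, feedback=0.85):
    """Sum several comb filters tuned to a 'chord' of delay lengths."""
    outs = [comb_filter(signal, d, feedback) for d in delays]
    return [sum(vals) / len(delays) for vals in zip(*outs)]
```

Glissandi of the units would correspond to smoothly varying the delay lengths over time, which this fixed-delay sketch does not attempt.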
Source Code: zip file.
[stereo downmix]
Definierte Lastbedingung [2016]
Clemens von Reusner (Germany, 1957)
"Definierte Lastbedingung" (Engl.: defined load condition) is based upon the sounds of electromagnetic fields as they arise when electric devices are used. Numerous recordings of electromagnetic landscapes were made with a special microphone at the "Institute for Electrical Machines, Traction and Drives" (IMAB) of the Technical University of Braunschweig (Germany). This sound material has little of what intrinsically makes a "musical" sound: there is no depth and no momentum. In their noisiness these sounds are static, though inwardly moved. They usually seem bulky, harsh and repellent, even hermetic, like the well-known electrical hum. "Definierte Lastbedingung" (a technical term used when testing electrical machines) works with these sounds, which are explored in their structure, reshaped and musically dramatized by the means of the electronic studio. The mains frequency of electrical current in Europe is 50 hertz, and hence 50 and its multiples are also the numerical key upon which this composition is based in a variety of ways. Spatialization: 3rd-order ambisonics. "Definierte Lastbedingung" is also the German contribution to the ISCM "World New Music Days" 2017 in Vancouver, Canada.