Keynotes ICSC2019
Øyvind BRANDTSEGG
Norwegian University of Science and Technology, Norway
Abstract - "21 years of live performance and installations with Csound" - How Csound has always been there for me
I started using Csound in the late 1990s, when it was just becoming possible to use it in real time. I had some musical desires that prompted me to look into it, even if the learning curve was pretty steep at the time. In the beginning I would combine Csound with hardware synthesizers and samplers, and also used Max to interface with external sensors and for overall control. Over the years, as the available processing power increased and Csound developed, it became possible to build a live setup based exclusively on Csound. With the advent of the Csound API it became easier to interface with other languages and technologies. This opened possibilities for writing a realtime algorithmic composition system for use in live performance and sound art installations. Building an audio system for installations running for several years required extra attention to issues of stability and maintenance. As these systems became more complex, the need for modularization grew stronger. Some of the tasks involved could also be identified as being of a more general nature, allowing integration with off-the-shelf tools. This again enhances Csound's strong points as a development tool for customized audio processing, isolating the components that actually need to be new and implementing them as opcodes or instruments. The talk will be illustrated with projects done in Csound over the last 21 years, including recent efforts in crossadaptive processing and live convolution.
Richard BOULANGER
Professor of Electronic Production and Design, Berklee College of Music, Boston, Massachusetts, USA
Abstract - Dedicating My Musical Life to the Mastery of a Virtual Instrument – Csound
In memory of a brilliant, passionate, and truly gifted young csounder – Shengzheng Zhang (a.k.a. John Towse)
I am truly honored to have been invited to present one of the keynote addresses at the 5th International Csound Conference - ICSC 2019 in Cagli (Pesaro-Urbino), Italy. Thank you so very much for this wonderful invitation to share some of my more recent thoughts, to perform some of my newest music, and most importantly, to publicly express my gratitude to so many, here at this conference, and in the international Csound community, whose instruments, code, research, and music have been constant sources of inspiration. And it is just these many “sources of inspiration” that are brought to mind in today’s keynote, especially the beautiful instruments and music of my student Shengzheng Zhang (a.k.a. John Towse), in whose memory this keynote is humbly dedicated.
Leonardo GABRIELLI
Università Politecnica delle Marche, Italy
Abstract - From Le Marche to MARS: a journey through accordions, synthesizers and computer music
The region where ICSC 2019 takes place, Le Marche, is known worldwide for its long tradition of musical instrument manufacturing, which dates back to 1863, when, according to tradition, Paolo Soprani built his first accordion. Since then, however, electronic pioneers and DSP developers have joined traditional accordion craftsmen to cyclically renew the industry, in an effort to keep up with global standards. In 1988, the Bontempi-Farfisa group founded the IRIS lab, led by Giuseppe di Giugno and run by several outstanding developers and computer music researchers. The MARS workstation was one of its most prominent outcomes, and it was employed for several computer music works of the 1990s. It was programmed using ARES, a rich computer music platform based on graphical patching. All this material and history is now coming back to light after the accidental discovery of machines and documents long forgotten in an abandoned factory. After reactivating and restoring the computers and their software, thanks to the effort of the Acusmatiq-MATME association, we are now able to run ARES and its patches. This software will be described and linked to other existing computer music languages, including Csound and Max. The talk includes footage and documents produced by the Acusmatiq-MATME association.
Joachim HEINTZ
HMTM Hannover, Germany; Head of the Electronic Studio FMSBW
Abstract - Learning Csound, Learning Music
Whatever music may be, it is based on listening. Composing can be considered as listening to sounds and investigating their tendencies. Can learning Csound be considered as learning music by learning to listen? And how well is Csound suited to materializing the composer's ideas about sounds and structures? The keynote will float around these questions, certainly not with a final answer at its end, but hopefully with some inspiration for the listeners.
Victor LAZZARINI
Professor of Music, Maynooth University, Ireland
Abstract - Csound - Notes on an Ecosystem
This talk discusses Csound as a sound and music computing system at the centre of an ecosystem of applications. For about fifteen years now, the software has developed a formidable array of connections to other programs, at various levels of user interaction, from high to low. Since its first release, Csound has provided an ideal studio platform for research and production, offering means for extensions and connections to other systems. In time, this ecosystem was widened as part of a calculated development strategy that placed Csound at the centre of a variety of applications. In this talk, we will explore the Csound ecosystem, with some illustrated examples. As part of this, we will also critically evaluate these developments, proposing some thoughts for the road ahead towards Csound 7.
Steven YI
Assistant Professor, Interactive Games and Media, Rochester Institute of Technology, USA
Abstract - Tomorrow's Csound
What might Csound look like in the future and how do we get there? In this talk, I will assess the state of Csound today, both the good and the bad, and propose a roadmap to guide us through the next generations of Csound.
Session 1
Processing Nature: Recordings, Random Number Generators and Real-Intrinsic-Extrinsic Perceptual Threads
Mark Ferguson
Abstract
In this paper, I propose the concept of the real-intrinsic-extrinsic perceptual thread in acousmatic composition, which has become deeply intertwined with my wildlife sound recording practice and non-realtime use of Csound as a processing tool in the studio. The concept draws heavily from Denis Smalley's spectromorphological discourse regarding intrinsic-extrinsic threads and source-bonding (referenced throughout). Following a brief introduction (and in an attempt to articulate my thoughts from a practical perspective), I discuss processing approaches for two recently completed acousmatic works, in which Csound's random number generating opcodes were employed to break apart natural source recordings and create complex, secondary source materials. I then proceed to break down and describe the real, intrinsic, and extrinsic thread components separately. The paper concludes with a brief summary of the proposed concept and the role of Csound in its development, followed by a consideration of its apparent linear aspect and recent influence on my technical recording methodologies.
Keywords
wildlife sound recording, nature, random number generator, non-realtime processing, acousmatic composition, methodology, spectromorphology
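By way of illustration only (this is not taken from the paper), the sketch below shows one way Csound's random opcodes can scatter short fragments of a field recording: each note picks a random read point and playback speed at init time. The file name field.wav, its assumed mono format, and all parameter ranges are assumptions made for the example.

    <CsoundSynthesizer>
    <CsOptions>
    -o dac
    </CsOptions>
    <CsInstruments>
    sr = 44100
    ksmps = 64
    nchnls = 2
    0dbfs = 1

    instr 1
      ; pick a random read point and playback speed once per fragment
      iskip  random  0, 10              ; skip into the (assumed mono) source file, in seconds
      ispeed random  0.5, 2             ; random transposition, roughly -1 to +1 octave
      aenv   linen   0.8, 0.05, p3, 0.1
      asig   diskin2 "field.wav", ispeed, iskip
             outs    asig*aenv, asig*aenv
    endin
    </CsInstruments>
    <CsScore>
    ; a scatter of short fragments
    i1 0.0 0.5
    i1 0.3 0.7
    i1 0.6 0.4
    e
    </CsScore>
    </CsoundSynthesizer>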
A Musical Score for Csound with Abjad
Gianni Della Vittoria
Abstract
This paper presents the advantages of using a traditional musical score to make music with Csound. After illustrating some alternative approaches, it examines Abjad, a Python library for printing music through Lilypond, and explains the technique for linking Abjad to Csound. Since composing a musical score for synthesizers requires managing complex envelopes, particular attention is paid to how to represent the envelope profiles of the various parameters on a score and how to interpret them in Csound. As the system is open to several possibilities, the choice proposed is the one that seems simplest and allows the user to see the envelope as a musical element to be set on a staff, providing a better overview of the whole composition.
Keywords
Score, Abjad, Python, Algorithmic composition, Lilypond
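As a purely illustrative sketch, and not necessarily the representation proposed in the paper, envelope breakpoints generated by a score environment such as Abjad can be passed to Csound as extra p-fields and realized with linseg. The p-field layout below (attack time, sustain fraction, release time) is an assumption.

    <CsoundSynthesizer>
    <CsOptions>
    -o dac
    </CsOptions>
    <CsInstruments>
    sr = 44100
    ksmps = 64
    nchnls = 2
    0dbfs = 1

    instr 1
      ; p4 = frequency, p5 = peak amplitude
      ; p6 = attack time, p7 = sustain level (fraction of peak), p8 = release time
      kenv linseg 0, p6, p5, p3 - p6 - p8, p5 * p7, p8, 0
      asig poscil kenv, p4
           outs   asig, asig
    endin
    </CsInstruments>
    <CsScore>
    ; a generated note carrying its envelope profile as p-fields
    i1 0 2 440 0.4 0.1 0.8 0.5
    e
    </CsScore>
    </CsoundSynthesizer>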
Modeling of Yamaha TX81Z FM Synthesizer in Csound
Gleb G. Rogozinsky and Nickolay Goryachev
Abstract
The paper presents the authors' original method for modeling hardware synthesizers and sound-processing devices, focusing on the Yamaha TX81Z FM synthesizer as an example. Csound 6 is used for the software simulation of the original TX81Z, a 4-operator FM synthesizer from 1987, well known for its peculiar C15 preset called Lately Bass and for its total of 8 waveforms. The paper gives a review of the most prominent FM synthesizers, considering both hardware and software implementations, a brief description of the Yamaha TX81Z's features, a review of the modeling method used by the authors, and an analysis of the modeling results. During the modeling, we measured and modeled the DAC unit of the TX81Z to achieve the same waveforms. This was done using the MATLAB Filter Design Tool, prior to coding the corresponding pair of LP and HP filters in Csound. After that step, we modeled the oscillators and envelopes. The figures show a comparison between sound samples recorded from the original TX81Z and our Csound-based model.
Keywords
sound synthesis systems, modeling, Csound, FM synthesis
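For orientation only, and in no way a substitute for the authors' measured 4-operator model, a single FM operator pair with a decaying modulation index can be sketched in Csound as follows; the 1:1 ratio, envelope times and table size are assumptions.

    <CsoundSynthesizer>
    <CsOptions>
    -o dac
    </CsOptions>
    <CsInstruments>
    sr = 44100
    ksmps = 64
    nchnls = 2
    0dbfs = 1

    giSine ftgen 0, 0, 8192, 10, 1                    ; sine wave

    instr 1
      icps = p4
      iamp = p5
      kndx expseg 6, p3, 0.3                          ; decaying modulation index
      kenv linen  iamp, 0.005, p3, 0.2                ; amplitude envelope
      asig foscili kenv, icps, 1, 1, kndx, giSine     ; carrier:modulator ratio 1:1
           outs asig, asig
    endin
    </CsInstruments>
    <CsScore>
    i1 0 2 110 0.5
    e
    </CsScore>
    </CsoundSynthesizer>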
MIUP Portable User Interface for Music
Example of jo tracker - a tracker interface for Csound
Johann Philippe
Abstract
This article presents graphical tools designed to work with Csound. The first section introduces the context in which those tools were built. The second part presents MIUP, an open-source graphical library designed for building audio software. Finally, the last part describes jo tracker, a tracker software for Csound built with MIUP.
Keywords
MIUP, IUP, jo tracker, Csound, User Interface, Lua, C++
Red-Tratos. Visual Art and Sound Art for the Web
Emiliano del Cerro
Abstract
"RED-TRATOS" is a work made for the web and is hosted by the CVC (Cervantes Virtual Center) belonging to the Cervantes Institute, an institution dependent on the Spanish Government. RED-TRATOS was designed as a mix of visual poetry and as Sound Art. The central part is dedicated to Cervantes and has audio files attached to the visual poem and plays with the name of Cervantes and with phonemes and syllables derived from his name.The work was a pioneer in the field of interactive sound art and visual art and was a key piece in the combination of both worlds for the net (net art).This paper will explain how the project was developed with information on the technology used in the digital signal process as well as the software needed to carry out the work. The main audio application used for audio was Csound, as well VRML, CORTONA, and Softimage for the visual aspects of the work.
Keywords
synthesis, random distribution, net art, sampling
Implementing Arcade by Günter Steinke in Csound
Daria Cheikh-Sarraf, Marijana Janevska, Shadi Kassaee, and Philipp Henke
Abstract
This paper is about the process of implementing the live electronics of "Arcade", a piece for solo cello and electronics by Günter Steinke. We discuss problems that occurred during the process of implementation, and how we approached the transfer of the electronic procedures, originally realized on large hardware machines, to the Csound programming environment. The paper also focuses on the possibilities of the Csound frontend CsoundQt, which we mainly used for the performance because of its GUI capabilities.
Keywords
CsoundQt, Live-electronic, Instrument, Hannover, Incontri, FMSBW, Günter Steinke, Arcade
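As a minimal, generic illustration of that GUI control path (an orchestra fragment, not an excerpt from the Arcade implementation), a CsoundQt widget whose channel name matches a chnget call can steer a simple live treatment of the cello input; the channel name "modfreq" and the ring-modulation treatment are assumptions.

    instr 1
      kfreq chnget "modfreq"       ; value of a CsoundQt knob/slider widget
      ain   inch   1               ; live cello signal
      amod  poscil 1, kfreq
      aring = ain * amod           ; ring modulation, a typical live-electronics treatment
            outs aring, aring
    endin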
Improving Csound’s Ambisonics decoders
Pablo Zinemanas, Martín Rocamora and Luis Jure
Abstract
This paper describes the efforts we devoted to improving Ambisonics decoders in Csound. The current version of the existing opcode, bformdec1, has some limitations that should be overcome so that the decoders better fulfil the Ambisonics criteria. In particular, the implemented decoders have no near-field compensation and do not use a different decoding matrix for low and high frequencies. These issues are addressed in a new implementation of the opcode, bformdec2, which also adds some features, such as additional loudspeaker array configurations (rectangle, hexagon) and a binaural output for headphones.
Keywords
Ambisonics decoder, HOA, Csound
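As a usage sketch of the existing first-order opcode discussed above (the features of the new bformdec2 are described in the paper itself), the example below encodes a rotating source to B-format with bformenc1 and decodes it to stereo with bformdec1; the test source and rotation rate are arbitrary.

    <CsoundSynthesizer>
    <CsOptions>
    -o dac
    </CsOptions>
    <CsInstruments>
    sr = 44100
    ksmps = 64
    nchnls = 2
    0dbfs = 1

    instr 1
      asrc poscil 0.3, 440                        ; test source
      kaz  line   0, p3, 360                      ; one full rotation around the listener
      aw, ax, ay, az bformenc1 asrc, kaz, 0       ; first-order B-format encoding
      aL, aR         bformdec1 1, aw, ax, ay, az  ; isetup 1 = stereo decode
           outs aL, aR
    endin
    </CsInstruments>
    <CsScore>
    i1 0 8
    e
    </CsScore>
    </CsoundSynthesizer>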
Session 2
Preliminary study for a chorus opcode
Daniele Cucchi and Stefano Cucchi
Abstract
In this paper we put forward the proposal of a new “chorus” opcode in Csound. We wrote a simple program in the Octave language which generates random values that are read by Csound in a further step. These values are used to modify the original playback speed of an audio file. A good choice is to use a white sequence of uniformly distributed samples filtered by a 1-pole system to obtain a low-pass behaviour. Some considerations about the amplitude distribution of the results, and proposals for managing it, are also given.
Keywords
Chorus, New opcode, Asynchronous playback, Variable delay, Random speed, Random, Noise
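The following sketch is assembled from the description above rather than taken from the authors' Octave/Csound code: white noise is smoothed by a 1-pole low-pass filter and used to modulate an interpolating delay line. The oscillator stand-in for the audio file, the 2 Hz cutoff, the depth scaling and the delay times are all assumptions.

    <CsoundSynthesizer>
    <CsOptions>
    -o dac
    </CsOptions>
    <CsInstruments>
    sr = 44100
    ksmps = 64
    nchnls = 2
    0dbfs = 1

    instr 1
      asig   vco2  0.3, 220            ; stand-in for the audio file to be chorused
      anoise rand  1                   ; white noise
      aslow  tone  anoise, 2           ; 1-pole low-pass at 2 Hz: a slow random drift
      adel   =     15 + aslow * 200    ; delay time in ms; the scale factor sets the depth
      awet   vdelay3 asig, adel, 50    ; cubic-interpolating variable delay, 50 ms maximum
             outs  (asig + awet) * 0.5, (asig + awet) * 0.5
    endin
    </CsInstruments>
    <CsScore>
    i1 0 10
    e
    </CsScore>
    </CsoundSynthesizer>

Lowering the tone cutoff slows the drift, while the scale factor on aslow controls how far the playback point wanders.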
Digital Signal Processing Techniques Used to Model the Ibanez Tube Screamer Guitar Pedal
Rory Walsh and Conor Walsh
Abstract
This paper aims to provide a basic overview of methods used in replicating analogue distortion units, using the Csound audio programming language. A key focus will be on the TS-9 Tube Screamer (TS) analogue overdrive guitar pedal by Ibanez. Although lacking the complex theoretical analysis seen in typical digital audio effects papers, it is hoped that enough information is provided for beginners who wish to begin their own journey into the world of digital emulations of hardware devices.
Keywords
Csound, Analogue Distortion, Emulation
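As a generic starting point rather than the modelling approach taken in the paper, a static tanh nonlinearity followed by a simple tone filter gives the basic drive/tone/level structure of an overdrive pedal. The test oscillator, drive and cutoff values are assumptions, and no oversampling or anti-aliasing is applied here.

    <CsoundSynthesizer>
    <CsOptions>
    -o dac
    </CsOptions>
    <CsInstruments>
    sr = 44100
    ksmps = 64
    nchnls = 2
    0dbfs = 1

    instr 1
      idrive = p4                       ; pre-gain ("drive" knob)
      itone  = p5                       ; low-pass cutoff in Hz ("tone" knob)
      ilevel = p6                       ; output level
      ain    vco2  0.2, 110             ; stand-in for a guitar signal
      aclip  = tanh(ain * idrive)       ; static soft-clipping nonlinearity
      aout   tone  aclip, itone         ; simple tone stage
             outs  aout * ilevel, aout * ilevel
    endin
    </CsInstruments>
    <CsScore>
    i1 0 4 20 3000 0.4
    e
    </CsScore>
    </CsoundSynthesizer>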
Synthesis by Parametric Design
Simone Scarazza
Abstract
As a composer, I started my research aiming at developing a relationship between graphic elements and sound elements. In particular, I focused on the possibility of employing in sound synthesis processes and concepts belonging to the parametric design used in computer graphics. Thanks to this study I built up a kind of library consisting of several models, useful for composition and as a source of stimulus to steer the research towards a graphic approach. In this paper an example will be shown through Csound.
iVCS3 Programming & The Repurposing of Audio Files To Carry Control Voltage Levels
James Edward Cosby
Abstract
1. To show how CV signals can be encoded using audio samples and stored within a standard “.wav” file, to be used to expand the sonic possibilities of iVCS3 and equivalent hardware.
2. To show examples of iVCS3 programming, including the transmission of CV signals over audio busses within iOS, showing the control of another iOS synth using APEMatrix for the audio bus connection.
3. To show the use of CV audio files to add highly programmable enveloping to iVCS3 patches.
Keywords
iVCS3, Control Voltages, Audio Files, Sound Design, Modular Synthesis.
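One simple way to produce such a control-voltage file with Csound, offered here as an illustration rather than as the author's workflow, is to render an audio-rate envelope straight to a .wav file, which can then be loaded wherever an audio sample is expected. The file name, envelope shape and sample rate are arbitrary.

    <CsoundSynthesizer>
    <CsOptions>
    -o cv_envelope.wav -W
    </CsOptions>
    <CsInstruments>
    sr = 48000
    ksmps = 16
    nchnls = 1
    0dbfs = 1

    instr 1
      ; an ADSR-like shape written out as an audio-rate signal;
      ; played back, the file acts as a stored "control voltage" curve
      aenv linseg 0, 0.05, 1, 0.2, 0.6, p3 - 0.75, 0.6, 0.5, 0
           out    aenv
    endin
    </CsInstruments>
    <CsScore>
    i1 0 4
    e
    </CsScore>
    </CsoundSynthesizer>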
MVerb: A Modified Waveguide Mesh Reverb Plugin
Jon Christopher Nelson
Abstract
MVerb is a plugin based on a modified five-by-five 2D waveguide mesh, developed in Csound within the Cabbage framework. MVerb is highly flexible and can generate compelling and unique reverberation effects, ranging from traditional spaces to infinite morphing spaces or the simulation of metallic plates or cymbals. The plugin incorporates a 10-band parametric EQ for timbral control and delay randomization to create more unusual effects.
Keywords
Reverberation, Effects Plugin, Physical Modeling, Waveguide, Scattering Junction, Csound, Cabbage
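For reference, and independent of MVerb's specific modifications, the standard lossless scattering relation at a junction of N waveguides of equal impedance, with incoming waves p_i^+ and outgoing waves p_i^-, is:

    p_J = \frac{2}{N}\sum_{i=1}^{N} p_i^{+}, \qquad p_i^{-} = p_J - p_i^{+}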
Session 3
Kairos - a Haskell Library for Live Coding Csound Performances
Leonardo Foletto
Abstract
Kairos [1] is a library for the Haskell programming language designed for live coding patterns of Csound score instructions, which are sent to a running UDP server [2] of a pre-prepared Csound orchestra.
Keywords
Live Coding, Csound, Haskell
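On the Csound side, a pre-prepared orchestra of this kind can be kept running and listening for incoming events by starting it with the UDP server option. The port number and the instrument below are arbitrary examples, and the exact message format sent by the client is defined by Kairos and the Csound UDP documentation rather than by this sketch.

    <CsoundSynthesizer>
    <CsOptions>
    -o dac --port=10000
    </CsOptions>
    <CsInstruments>
    sr = 44100
    ksmps = 64
    nchnls = 2
    0dbfs = 1

    instr 1
      kenv linen  p5, 0.01, p3, 0.1
      asig poscil kenv, p4
           outs   asig, asig
    endin
    </CsInstruments>
    <CsScore>
    f 0 z     ; keep the performance open indefinitely, waiting for incoming events
    </CsScore>
    </CsoundSynthesizer>

Score events sent from the live-coding client to that port then trigger instr 1 in the running orchestra.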
The Hex System: a Csound-based Augmentation of Hexaphonic Guitar Signal
Tobias Bercu
Abstract
The impetus behind the Hex system was a desire to create new guitar effects using Csound processing of hexaphonic guitar audio, and to present these effects to the user in a format that allows playing the instrument to meld with playing the effects. Processing one’s guitar signal with a laptop or a desktop opens many doors, but can also be cumbersome. The Hex system is meant to provide guitarists a smaller and more liberated DSP apparatus that feels more like an augmentation of the instrument itself than a separate module. The Hex system processes audio via a Raspberry Pi running Csound. Using the Pi’s onboard wifi, the system accepts control from TouchOSC, so that parameters can be adjusted in real-time from a nearby smartphone. It is intended for this smartphone to be attached to the guitar adjacent to the pickup and tone controls. The Raspberry Pi and its audio hat are housed in a small box, and this container is roofed by footswitches used to engage and disengage effects.
Keywords
Csound, real-time guitar effects, DSP, hexaphonic, Raspberry Pi
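As a stripped-down illustration of the control path only (the actual Hex system processes six pickup channels), the sketch below reads one audio input and lets a TouchOSC fader set the drive of a simple distortion. The OSC port, the address "/hex/drive", and the distortion treatment are assumptions made for the example.

    <CsoundSynthesizer>
    <CsOptions>
    -iadc -odac
    </CsOptions>
    <CsInstruments>
    sr = 44100
    ksmps = 64
    nchnls = 2
    0dbfs = 1

    giOSC OSCinit 8000                                    ; port TouchOSC is assumed to send to

    instr 1
      kdrive init 0.2
      knew   OSClisten giOSC, "/hex/drive", "f", kdrive   ; fader on the phone updates kdrive
      ain    inch  1                                      ; one pickup/string input
      aout   = tanh(ain * (1 + kdrive * 20))              ; simple drive stage
             outs aout, aout
    endin
    </CsInstruments>
    <CsScore>
    i1 0 3600
    e
    </CsScore>
    </CsoundSynthesizer>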
Algorithmic Composition with Open Music and Csound: two examples
Fabio De Sanctis De Benedictis
Abstract
In this paper, after a concise and not exhaustive review of GUI software related to Csound, and brief notes about algorithmic composition, two examples of Open Music patches will be illustrated, taken from the pre-compositional work for some of the author's compositions. These patches are used for sound generation and spatialization, with Csound as the synthesis engine. Very specific and thorough Csound programming examples will not be discussed here, even if automatically generated .csd file examples will be shown, nor will it be possible to explain the Open Music patches in detail; however, we believe that what will be described can stimulate the reader towards further study.
Keywords
Csound, Algorithmic Composition, Open Music, Electronic Music, Sound Spatialization, Music Composition
An opcode implementation of a finite difference viscothermal time-domain model of a tube resonator for wind instrument simulations
Alex Hofmann, Sebastian Schmutzhard, Montserrat Pàmies-Vilà, Gökberk Erdoğan, and Vasileios Chatziioannou
Abstract
This paper presents an opcode for Csound that is based on a physical time-domain model of a closed-open tube resonator, capable of simulating wind instruments like clarinets or saxophones. The tube model considers sound radiation parameters as well as viscothermal losses that occur inside the tube. The model was implemented in C++ using the Csound Plugin Opcode Framework. The resontube opcode allows users to provide complex geometries for the model construction at k-time, together with arguments for sound radiation and pick-up position. The opcode is published together with its source code as a git repository, including documentation and examples.
Keywords
csound, opcode, physical model, resonator, tube
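For background only (the opcode's viscothermal losses, radiation terms and variable bore geometry go well beyond this), the lossless core of such a finite-difference time-domain tube model is the 1D wave equation and its standard explicit update, where p is pressure, c the speed of sound, l the spatial index, n the time index, h the grid spacing, k the time step and lambda the Courant number:

    \frac{\partial^2 p}{\partial t^2} = c^2 \frac{\partial^2 p}{\partial x^2},
    \qquad
    p_l^{n+1} = 2p_l^{n} - p_l^{n-1} + \lambda^2\left(p_{l+1}^{n} - 2p_l^{n} + p_{l-1}^{n}\right),
    \qquad \lambda = \frac{ck}{h} \le 1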