Keynotes


Øyvind Brandtsegg

Norwegian University of Science and Technology

"21 years of live performance and installations with Csound"

   - How Csound has always been there for me

 

ABSTRACT

I started using Csound in the late 1990s, when it was just becoming possible to use it in realtime. I had some musical desires that prompted me to look into it, even though the learning curve was rather steep at the time. In the beginning I combined Csound with hardware synthesizers and samplers, and used Max to interface with external sensors and for overall control. Over the years, as the available processing power increased and Csound developed, it became possible to build a live setup based exclusively on Csound. With the advent of the Csound API it became easier to interface with other languages and technologies. This opened possibilities for writing a realtime algorithmic composition system for use in live performance and sound art installations. Building an audio system for installations running for several years required extra attention to issues of stability and maintenance. As these systems became more complex, the need for modularization grew stronger. Some of the tasks involved could also be identified as being of a more general nature, allowing integration with off-the-shelf tools. This, in turn, plays to Csound's strong points as a development tool for customized audio processing: the components that actually need to be new can be isolated and implemented as opcodes or instruments.

The talk will be illustrated with projects done in Csound over the last 21 years, including recent efforts into crossadaptive processing and live convolution.

 

BIO

Øyvind Brandtsegg is a composer and performer working in the fields of algorithmic improvisation and sound installations. His main instruments as a musician are the Hadron Particle Synthesizer, ImproSculpt, and Marimba Lumina. Hadron is an extremely flexible realtime granular synthesizer, widely used in experimental sound design, with over 200,000 downloads of the VST/AU version. Brandtsegg uses it for live processing of the acoustic sound of other musicians. In this context he has also developed tools for live convolution and crossadaptive processing. As a musician and composer he has collaborated with a number of excellent artists, e.g. Motorpsycho and Maja Ratkje, and he runs the ensemble Trondheim Electroacoustic Performance (T-EMP). Recent writings include Csound: A Sound and Music Computing System (Springer, 2016, with V. Lazzarini, S. Yi, J. ffitch, J. Heintz, and I. McCurdy).

In 2008, Brandtsegg finished his PhD-equivalent artistic research project on musical improvisation with computers. He has given lectures and workshops on these themes in the USA, Germany, Ireland, and of course in Norway. Since 2010 he has been a professor of music technology at NTNU, Trondheim, Norway.

 


Steven Yi

Assistant Professor, Interactive Games and Media

Rochester Institute of Technology, USA

Tomorrow's Csound

 

 

ABSTRACT

What might Csound look like in the future and how do we get there? In this talk, I will assess the state of Csound today, both the good and the bad, and propose a roadmap to guide us through the next generations of Csound.

 

 

BIO

Steven Yi is a composer and programmer. He is an Assistant Professor in the School of Interactive Games and Media at the Rochester Institute of Technology. He is the author of the Blue music composition environment, author of the Pink and Score music libraries, and a core developer of Csound. He has contributed work on Csound’s parser and compiler, helped to develop Csound’s language design, developed opcodes for Csound, and worked on moving Csound to mobile platforms (Android, iOS) and the web. He also served as co-editor of the Csound Journal from 2005 to 2015, and is co-author of the book Csound: A Sound and Music Computing System, published by Springer International.

 

Steven is a long-time supporter of free and open source software for music. He has presented at the International Computer Music Conference, the Linux Audio Conference, and the Csound Conferences. In 2016, Steven received his PhD from Maynooth University for his thesis work on “Extensible Computer Music Systems.”

 


Victor Lazzarini

Professor of Music

Maynooth University, Ireland

Csound + _: Notes on an Ecosystem

 

ABSTRACT

This talk discusses Csound as a sound and music computing system at the centre of an ecosystem of applications. For about fifteen years now, the software has developed a formidable array of connections to other programs, at various levels of user interaction, from high to low. Since its first release, Csound has provided an ideal studio platform for research and production, providing means for extensions and connections to other systems. In time, this ecosystem was widened as part of a calculated development strategy that placed Csound at the centre of a variety of applications. In this talk, we will explore the Csound ecosystem, with some illustrated examples. As part of this, we will also critically evaluate these developments, proposing some thoughts for the road ahead towards Csound 7.

 

BIO

Prof. Lazzarini is a graduate of the Universidade Estadual de Campinas (UNICAMP) in Brazil, where he was awarded a BMus in Composition. He completed his doctorate at the University of Nottingham, UK, where he received the Heyman scholarship for research progress and the Hallward composition prize for one of his works, Magnificat. His interests include musical signal processing and sound synthesis; computer music languages; and electroacoustic and instrumental composition. Dr Lazzarini received the NUI New Researcher Award in 2002 and the Ireland Canada University Foundation scholarship in 2006. He currently leads the Sound and Digital Music Research Group at Maynooth University and has authored over one hundred articles in peer-reviewed publications in his various specialist research areas. He is the author of Aulib, an object-oriented library for audio signal processing, and is one of the project leaders for Csound.

 

Prof. Lazzarini has also forged links with industry, providing consultancy and research support to Irish companies in the area of computer music. In addition to these activities, he is active as a composer of computer and instrumental music, having won the AIC/IMRO International Composition prize in 2006. His music is regularly performed in Ireland and abroad, and has been released on CD by FarPoint Recordings. Recent publications include Csound: A Sound and Music Computing System (Springer, 2016, with J. ffitch, S. Yi, J. Heintz, O. Brandtsegg, and I. McCurdy), 'Ecologically Grounded Creative Practices in Ubiquitous Music' (Organised Sound, 22, 2017, with D. Keller), 'Supporting an Object-Oriented Approach to Unit Generator Development: The Csound Plugin Opcode Framework' (Applied Sciences 7 (10), 2017), Computer Music Instruments: Foundations, Design and Applications (Springer, 2017), 'Live Convolution with Time-Varying Filters' (Applied Sciences 8 (1), 2018, with O. Brandtsegg and S. Saue), and Computer Music Instruments II: Realtime and Object-Oriented Audio (Springer, 2019). Prof. Lazzarini is also currently leading an Enterprise Ireland-funded project in the area of headphone music playback enhancement.

 


Richard Boulanger

Professor of Electronic Production and Design, Berklee College of Music

Boston, Massachusetts, USA

Dedicating My Musical Life to the Mastery of a Virtual Instrument – Csound

A Keynote Speech and Presentation to The 5th International Csound Conference - ICSC2019

 

In memory of a brilliant, passionate, and truly gifted young csounder – Shengzheng Zhang (a.k.a. John Towse)

 

 

ABSTRACT

I am truly honored to have been invited to present one of the keynote addresses at The 5th International Csound Conference - ICSC 2019 in Cagli (Pesaro-Urbino), Italy. Thank you so very much for this wonderful invitation to share some of my more recent thoughts, to perform some of my newest music, and, most importantly, to publicly express my gratitude to so many, here at this conference and in the international Csound community, whose instruments, code, research, and music have been constant sources of inspiration. And it is just these many “sources of inspiration” that are brought to mind in today’s keynote, especially the beautiful instruments and music of my student Shengzheng Zhang (a.k.a. John Towse), in whose memory this keynote is humbly dedicated.

 

BIO

Richard Boulanger (b. 1956), Professor of Electronic Production and Design at the Berklee College of Music, is an internationally recognized performer, composer, researcher, developer, programmer, and the founder of Boulanger Labs. He holds a Ph.D. in Computer Music from the University of California, San Diego, where he worked at the Center for Music Experiment’s Computer Audio Research Lab (CARL). He has continued his computer music research at Bell Labs, Stanford's CCRMA, the MIT Media Lab, Interval Research, IBM, and for OLPC. Throughout his career, Dr. B. has been a close collaborator with "The Father of Csound" - Professor Barry Vercoe at MIT and "The Father of Computer Music" – Dr. Max Mathews at Bell Labs. Boulanger has premiered his original interactive works at the Kennedy Center, and appeared on stage performing his Radio Baton and PowerGlove Concerto with the Krakow and Moscow Symphonies. Dr. Boulanger has published articles on sound design, production, and composition in all the major electronic music and music technology magazines; and for MIT Press, Boulanger authored and edited two canonical computer music textbooks: The Csound Book and The Audio Programming Book. His company’s Csound-based iOS apps include csJam, csGrain, and csSpectral. For the Leap Motion controller, Boulanger Labs published an app called MUSE, and he recently composed and performed a major symphonic work built around these apps, Symphonic Muse. Among his many grants and honors, Boulanger was a Fulbright Professor at the Krakow Academy of Music. And at Berklee, where he has been teaching for over 33 years, Boulanger has been honored with both the “Faculty of the Year Award” and the “President’s Award.” Boulanger's Philosophy: “For me, music is a medium through which the inner spiritual essence of all things is revealed and shared. Compositionally, I am interested in extending the voice of the traditional performer through technological means to produce a music which connects with the past, lives in the present and speaks to the future. Educationally, I am interested in helping students see technology as the most powerful instrument for the exploration, discovery, and realization of their essential musical nature – their inner voice.”

 


Joachim Heintz

HMTM Hannover, Germany

Head of Electronic Studio FMSBW

Learning Csound, Learning Music

 

ABSTRACT

Whatever music may be, it is based on listening. Composing can be considered as listening to sounds and investigating their tendencies. Can learning Csound be considered as learning music by learning to listen? And how well is Csound suited to materialize the composer's ideas about sounds and structures? The keynote will float around these questions — certainly not with a final answer at its end, but hopefully with some inspiration for the listeners.

 

BIO

Joachim Heintz, composer and writer, first studied literature, then composition with the Korean composer Younghi Pagh-Paan. His membership in a performance group, the Theater of Assemblage Bremen, was also important. For Csound, he collaborated with others on the Csound FLOSS Manual and has contributed in other areas, in particular to CsoundQt. Lately he has focused on connections between music and the spoken word, for instance in improvisations with his live-electronic program ALMA.

 


Leonardo Gabrielli

Università Politecnica delle Marche, Italy

From Le Marche to MARS: a journey through accordions, synthesizers and computer music

 

ABSTRACT

The region where ICSC 2019 takes place, Le Marche, is known worldwide for its long tradition of musical instrument manufacturing, which dates back to 1863, when, according to tradition, Paolo Soprani built his first accordion. Since then, however, electronic pioneers and DSP developers have joined traditional accordion craftsmen to cyclically renew the industry, in an effort to keep up with global standards.

In 1988, the Bontempi-Farfisa group founded the IRIS lab, led by Giuseppe di Giugno and run by several outstanding developers and computer music researchers. The MARS workstation was one of its most prominent outcomes, and it was employed for several computer music works of the 1990s. It was programmed using ARES, a rich computer music platform based on graphical patching.

All this material and history is now coming back to light after the accidental discovery of machines and documents long forgotten in an abandoned factory. After reactivating and restoring the computers and their software, thanks to the efforts of the Acusmatiq-MATME association, we are now able to run ARES and its patches. This software will be described and linked to other existing computer music languages, including Csound and Max. The talk includes footage and documents produced by the Acusmatiq-MATME association.

 

BIO

Leonardo Gabrielli, PhD, was born in Recanati, Italy, in 1986. He received his BSc after an internship at Intel Germany and his MSc after a visiting period at Aalto University; his BSc, MSc, and PhD were all awarded by the Università Politecnica delle Marche. He is currently a research fellow at the Dept. of Information Engineering of the same university, where he conducts research and gives lectures in Music Production. His research interests include deep learning for sound synthesis and analysis, physical modelling, and networked music performance. In 2012 he co-founded DowSee srl, a spin-off company developing solutions for water, gas, and electricity grids, which received prizes from Telecom Italia and eCapital. He also conducts development activities for the music industry and leads the scientific activities of the Acusmatiq festival and the Acusmatiq-MATME association.