The conference proceedings are available in PDF format here



Session 1

Working with pch2csd — Clavia NM G2 to Csound Converter

Gleb Rogozinsky, Eugene Cherny and Michael Chesnokov pdf

The paper presents a detailed review of the pch2csd application, developed for converting the patch format (pch2) of the popular Clavia Nord Modular G2 synthesizer into a Csound-based metalanguage. The Nord Modular G2 was one of the most remarkable synthesizers of the late 90s. The considerable number of patches available makes the Nord Modular G2 a desirable target for software emulation. In this paper we describe the pch2csd workflow, including the modeling approach, so that developers may use the paper as a starting point for further experiments. Each model of a Nord Modular unit is implemented as a User-Defined Opcode. The paper presents the modeling approach, including a description of the ancillary files needed for correct operation. First presented at the International Csound Conference 2015 in St. Petersburg, the pch2csd project continues to develop. Some directions for future development and strategic plans are suggested. An example transformation of a Nord Modular G2 patch into Csound code concludes the paper.
Nord Modular G2, converter, metalanguage.
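As a purely illustrative sketch of this modeling approach (the module names, numeric IDs and UDO interfaces below are invented for the example; they are not the real pch2 format or the actual pch2csd code), a converter of this kind maps each module type found in a patch to a Csound User-Defined Opcode and emits an instrument that chains the corresponding UDOs:

```python
# Hypothetical sketch: each Nord Modular G2 module type maps to a Csound
# User-Defined Opcode (UDO), and a patch is rendered as an instrument that
# chains those UDOs together. IDs and names are illustrative only.

MODULE_TO_UDO = {
    7: "OscA",       # oscillator module -> UDO "OscA"
    12: "FltLP",     # low-pass filter module -> UDO "FltLP"
    23: "OutStereo", # output module -> UDO "OutStereo"
}

def patch_to_csound(module_ids):
    """Render a list of module-type IDs as a skeleton Csound instrument."""
    lines = ["instr 1"]
    for i, mid in enumerate(module_ids, start=1):
        udo = MODULE_TO_UDO.get(mid, "Unknown")
        # Feed the previous module's signal into the next one in the chain.
        lines.append(f"  asig{i} {udo} asig{i-1}" if i > 1
                     else f"  asig{i} {udo}")
    lines.append(f"  out asig{len(module_ids)}")
    lines.append("endin")
    return "\n".join(lines)

print(patch_to_csound([7, 12, 23]))
```

The real converter must of course also resolve the patch's cable routing and parameter pages, which is where the ancillary files mentioned above come in.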

Daria: A New Framework for Composing, Rehearsing and Performing Mixed Media Music

Guillermo Senna and Juan Nava Aroza pdf

In this paper we present a new modular software framework for composing, rehearsing and performing mixed media music. By combining and extending existing open-source software we were able to synchronize the playback of the free MuseScore music notation editor with three VST audio effects exported using the Csound frontend Cabbage. The JACK Audio Connection Kit sound server was used to provide a common clock and a shared virtual timeline to which each component could adhere. Moreover, data contained in the musical score was used to control the relative position of specific Csound events within the aforementioned timeline. We will explain the nature of the plugins that were built and briefly identify the five new Csound opcodes that the development process required. We will also comment on a generic programming pattern that could be used to create new compatible VST audio effects and instruments. Finally, we will conclude by mentioning other related software that can interact out-of-the-box with our framework, how instrument players and computer performers can simulate the performance experience while practicing their parts at home, and our future plans for this software ecosystem.
Mixed media music, MuseScore, Csound, Cabbage, JACK.
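The shared-timeline idea can be sketched as follows (a minimal illustration under assumed parameters, not the framework's actual code: the tempo map and meter are fixed here, whereas the real system reads them from the score and follows the JACK transport):

```python
# Hypothetical sketch: a position notated in the score (bar, beat) is mapped
# onto the common timeline in seconds, so a Csound event can be scheduled at
# the same point the notation editor's playback reaches.

def score_to_seconds(bar, beat, bpm=120.0, beats_per_bar=4):
    """Map a 1-indexed (bar, beat) score position to seconds."""
    total_beats = (bar - 1) * beats_per_bar + (beat - 1)
    return total_beats * 60.0 / bpm

print(score_to_seconds(3, 1))  # bar 3, beat 1 -> 4.0 s at 120 BPM
```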

Session 2

Interactive Csound Coding with Emacs

Hlödver Sigurdsson pdf

This paper will cover the features of the Emacs package csound-mode, a new major mode for coding with Csound. The package is for the most part a typical Emacs major mode, providing indentation rules, completions, docstrings and syntax highlighting, with the extra feature of a REPL based on a running Csound instance through the csound-api. Similar to csound-repl.vim, csound-mode strives to give the Csound user a faster feedback loop by offering a REPL instance inside the text editor, closing the gap between development and the final output through real-time interaction.
Emacs, Csound, REPL.

Vim tools for Coding and Live Coding in Csound

Luis Jure and Steven Yi

Vim is a powerful, free, cross-platform text editor, very popular among programmers and developers. Luis Jure's csound-vim plugin provides a set of tools for editing Csound files with Vim, including syntax recognition and highlighting, folding, autocompletion, on-line reference and templates, as well as macros for compiling Csound orchestras from within Vim. Steven Yi's csound-repl plugin provides functionality for live coding with Csound. In this talk, we will demonstrate the features of each of these plugins and discuss workflows for Vim and Csound.
Vim, Csound, REPL.

Session 3

Chunking: A New Approach to Algorithmic Composition of Rhythm and Metre for Csound

Georg Boenn pdf

A new concept for generating non-isochronous musical metres is introduced, which produces complete rhythmic sequences on the basis of integer partitions and combinatorics. It was realized as a command-line tool called chunking, written in C++ and published under the GPL licence. Chunking produces scores for Csound as well as standard notation output using LilyPond. A new shorthand notation for rhythm is presented as intermediate data that can be sent to different backends. The algorithm uses a musical hierarchy of sentences, phrases, patterns and rhythmic chunks. The design of the algorithms was influenced by recent studies in music phenomenology, and makes reference to psychology and cognition as well.
Rhythm, NI-Metre, Musical Sentence, Algorithmic Composition, Symmetry, Csound Score Generators.
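The combinatorial idea behind chunking can be illustrated with a generic integer-partition generator (this is a sketch of the underlying mathematics, not the actual chunking code; the 7-pulse example and the 2-or-3 chunk-size constraint are assumptions for illustration):

```python
# A bar of n pulses is partitioned into integer chunks; each partition is one
# candidate non-isochronous grouping (e.g. 3+2+2 for a 7-pulse metre).

def partitions(n, max_part=None):
    """Yield all integer partitions of n as descending tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

# All groupings of a 7-pulse bar using only chunks of size 2 or 3:
septuple = [p for p in partitions(7, 3) if all(c >= 2 for c in p)]
print(septuple)  # [(3, 2, 2)]
```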

Interactive Visual Music with Csound and HTML5

Michael Gogins pdf

This paper discusses aspects of writing and performing interactive visual music, where the artist controls, in real time, a computerized process that simultaneously generates both visuals and music. An example piece based on Csound and HTML5 is presented.
Visual music, generative art, algorithmic composition, computer music, Csound, HTML5.

Session 4

Spectral and 3D spatial granular synthesis in Csound

Oscar Pablo Di Liscia pdf

This work presents ongoing research based on the design of an environment for the Spatial Synthesis of Sound using Csound, through granular synthesis, spectral-data-based synthesis and 3D spatialisation. Spatial Synthesis of Sound may be conceived as a particular mode of sonic production in which the composer generates the sound together with its spatial features. Though this type of conception has long been present in the minds and work of most composers (especially in electroacoustic music), some of the strategies applied here were inspired by the work of Gary Kendall. Kendall makes specific mention of both granular synthesis and spectral-data-based synthesis as examples of resources through which the composer may partition the sonic stream in the time domain and the frequency domain, respectively. These procedures allow a detailed spatial treatment of each of the resulting parts of a sound, which in turn may lead to realistic or unusual spatial images. The aim is not to describe granular synthesis, spectral-data-based synthesis or sound spatialisation techniques in detail, but to describe the particular design strategies for the aforementioned purposes.
Spectral data based synthesis, granular synthesis, sound spatialisation.
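The time-domain partitioning that granular synthesis affords can be sketched as follows (a minimal illustration with assumed parameters, not the author's environment: each grain in the stream gets its own onset, duration and spatial position, which a spatialiser can then treat independently):

```python
# Hypothetical sketch: generate a granular "cloud" in which every grain
# carries an independent azimuth, the hook that makes per-grain spatial
# treatment possible.
import random

def grain_stream(total_dur, grain_dur, density, seed=1):
    """Return (onset, dur, azimuth_deg) triples for a granular cloud."""
    rnd = random.Random(seed)
    n = round(total_dur * density)       # number of grains in the cloud
    grains = []
    for i in range(n):
        onset = i / density              # inter-onset time from grain density
        az = rnd.uniform(-180.0, 180.0)  # independent spatial position
        grains.append((onset, grain_dur, round(az, 1)))
    return grains

cloud = grain_stream(total_dur=0.1, grain_dur=0.05, density=50)
print(len(cloud))  # 5 grains in 0.1 s at 50 grains/s
```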

Integrated Tools for Sound Spatialization in Csound

Luis Jure and Martín Rocamora

This talk reports on an ongoing project aiming to develop a series of opcodes providing an integrated solution for sound spatialization in Csound, using state-of-the-art techniques. The specific goals include extending and improving the present Ambisonics opcodes, developing a configurable high-quality 3D Ambisonics FDN reverberator, and an opcode for sound localization with Ambisonics plus distance.
Spatialization, Ambisonics, FDN reverberation.
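As a minimal sketch of the encoding stage such opcodes build on, first-order (B-format) Ambisonics places a mono signal by azimuth and elevation (this example uses the traditional FuMa convention with W scaled by 1/√2; the real opcodes may use other orders and normalizations):

```python
# First-order Ambisonics (B-format) encoding, FuMa convention.
# Azimuth and elevation are in radians.
import math

def encode_foa(sample, azimuth, elevation):
    """Encode a mono sample at (azimuth, elevation) into W, X, Y, Z."""
    w = sample / math.sqrt(2.0)
    x = sample * math.cos(azimuth) * math.cos(elevation)
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    return w, x, y, z

# A source straight ahead on the horizontal plane goes entirely to W and X:
w, x, y, z = encode_foa(1.0, 0.0, 0.0)
print(round(w, 4), round(x, 4), round(y, 4), round(z, 4))  # 0.7071 1.0 0.0 0.0
```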