PROCEEDINGS

Abstracts

Session 1

Developing Cabbage Plugins For Composition and Sound Design and Sharing Them Online

Caio Jiacomini

Abstract
In 2020 I discovered Csound. Using the FLOSS Manual, I started teaching myself about this amazing program. With Cabbage, I realized I could make my own plugins and distribute them on the internet. My goal was to create custom tools for my own composition and sound design work, and to be able to share both my musical creations and the tools I designed to create them. The result of this work was Vendaval, Granulera, and Cristalera. This paper will present a detailed overview of these Cabbage plugins, and share my experience and advice about distributing them through the itch.io storefront.
Keywords
Csound, Cabbage, Plugins, Distribution, Granular

Educational Tools for Csound

Gianni Della Vittoria

Abstract
This paper presents a work platform aimed at simplifying the musical composition process with Csound for students and beginners. The tools developed for this purpose aim to reduce the distraction that comes from having to carry out intermediate tasks that could be automated. They concern the editor's text expansion, automatic GUI and plotting, fast syntax for operations with arrays, Python-style list comprehension, and multichannel expansion for Csound opcodes. Although initially designed for the beginner, these tools may also prove useful for experienced users who wish to evaluate these procedures.
Keywords
workflow, text expansion, compositional tools, fast syntax, GUI, multichannel expansion, arrays, list comprehension

New Arduino Opcodes to Simplify the Streaming of Sensor and Controller Data to Csound

John ffitch and Richard Boulanger

Abstract
An alternative communication mechanism between the Arduino and Csound is proposed and described in detail, with simple examples. Comments on this design and possible developments and enhancements are sought.
Keywords
Csound, Arduino, UNO-R3, sensors, controllers

New Opcodes for MIDI CC Preset Banks and MIDI Note-on Toggles for Csound in the Bela, Csound in the Nebulae, and Csound in General

John ffitch and Richard Boulanger

Abstract
In Csound, designing a MIDI synth that could store and recall MIDI Continuous Controller (CC) settings on the fly (MIDI presets), or turn a specific MIDI note-on message into an on/off toggle for a Reverb, Flanger, or Distortion effect, are quite basic design needs that, until now, have required some pretty ingenious, advanced, and sometimes quite convoluted coding tricks to do the job. Some solutions use widgets and functions in Cabbage, CsoundQt, or Blue, but what if you are not using Cabbage, CsoundQt, or Blue, or what if you are working on an OS, in an application, or on an embedded computing platform that does not support them, such as the Bela, the Qu-bit Nebulae, or Chrome, Safari, and Firefox? To address this general need, a new family of counting opcodes cntCreate, cntCycles, cntDelete, cntRead, cntReset, cntState, and a new family of MIDI controller opcodes ctrlpreset, ctrlprint, ctrlsave, ctrlselect have been added to Csound. In this paper, a discussion of their design, and examples of their use in general Csound and in Csound running on the Bela, will be presented.
Keywords
Csound, MIDI, controller, preset, toggle
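
Before these new opcodes, such a note-on toggle was typically hand-rolled from standard opcodes. A minimal editorial sketch of that traditional approach (not code from the paper, and not the new cnt/ctrl opcodes; MIDI channel 1 and note 36 as the toggle key are arbitrary assumptions):

gkRevOn init 0
massign 1, 1                    ; route all of MIDI channel 1 to instr 1

instr 1                         ; fires on every note-on from channel 1
  inote notnum
  if inote == 36 then           ; note 36 acts as the toggle key
    gkRevOn init 1 - i(gkRevOn) ; flip the global on/off flag at each press
  endif
endin

instr 2                         ; kept running from the score, e.g. "i 2 0 3600"
  asig vco2 0.2, 220
  if gkRevOn == 1 then
    aL, aR reverbsc asig, asig, 0.85, 12000
    outs aL, aR
  else
    outs asig, asig
  endif
endin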

Modeling a ‘Classic’ Hardware Sequencer in Csound: The Design and Use of the sequ Opcode

John ffitch and Richard Boulanger

Abstract
Over the years, there have been many instruments, designed and shared, that model the ‘classic’ analog step-sequencer. Some used the table opcodes, some did it with Gens, some employed score macros and score commands, and others simply copy-pasted lines in the note-list. More recently, impressive sequencer instruments are being built with arrays, schedule, event, and schedkwhen. These designs have ranged from the simple to the sublime and reveal many wonderful and inspiring approaches. All are worthy of study and imitation. Still, beginners always ask, “How can you do sequencing in Csound?” This question often leads into a deeper dive than they are ready for. Or they ask, “Does Csound have a sequencer opcode?” Until recently, the answer to that question was “no”, but now the answer is “yes!”. This paper will introduce the sequ opcode, discuss how it was designed, show how it works, and showcase some of the novel features, and the more esoteric possibilities, associated with its unique design.
Keywords
Csound, sequencer, sequ
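
For readers wondering what the pre-sequ, hand-rolled approach looks like, here is a minimal editorial sketch (not from the paper, and not the sequ opcode itself) of an eight-step sequencer built from metro, table, and event:

giSteps ftgen 0, 0, 8, -2, 60, 62, 64, 67, 69, 67, 64, 62   ; eight MIDI note numbers

instr 1                         ; the sequencer clock
  kTrig metro 4                 ; four steps per second
  kStep init 0
  if kTrig == 1 then
    kPch table kStep, giSteps
    event "i", 2, 0, 0.2, kPch  ; fire one step
    kStep = (kStep + 1) % 8
  endif
endin

instr 2                         ; one short voice per step
  aEnv linen 0.3, 0.01, p3, 0.05
  aSig vco2 aEnv, cpsmidinn(p4)
  outs aSig, aSig
endin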

Session 2

A Just-in-Time Compiler for Csound Opcodes

Victor Lazzarini

Abstract
This paper introduces a work-in-progress project targeting the development of a user-defined opcode just-in-time compiler. It describes the newly introduced module compiler, which can take C or C++ code, compile it, and make it available inside a running instance of Csound. The principles of a C++ opcode object factory and its applications are also discussed. The direction of travel leading to the completion of a UDO compiler is outlined.
Keywords
Just-in-time compilers, extending Csound

The Design and Use of Minimal7: Creating Subsets of Csound for Embedded Applications

John ffitch and Richard Boulanger

Abstract
There have been complaints of “opcode bloat” with Csound, especially when embedding an audio application into a small device. The Minimal7 system offers a solution by automating the process of only including the opcodes and fgens that are actually used in a customised Csound system, and not “all” of Csound. In the following paper, the mechanism for this process is explained, and a simple example of the workflow is presented. This is followed by a description of the limitations of the current version and suggestions of what could be done to generate even smaller customised versions of Csound.
Keywords
Csound, Embedding, Customised

Analysis, DSP and Composition

Emiliano del Cerro Escobar

Abstract
This paper presents the process of a musical composition (Resemblance) based on the analysis of children's songs and the use of the results of this analysis to produce a new work using Neural Nets, Artificial Intelligence, Grammars and Digital Signal Processing. The idea of the composition came from the theory enunciated by Noam Chomsky that indicates how children recognize the timbre and contour of melodic songs before the semantic meaning of language, as well as from the consideration of music as a natural language. The concept of music as natural language allows the use of concepts and algorithms derived from Computer Science and Artificial Intelligence.
Keywords
Musical Analysis, Resynthesis, Audio DSP, Csound Composition

Session 3

Csound and Python: A State of the Art Survey

Marco Gasperini and Giuseppe Ernandez

Abstract
The aim of the authors is to present a brief survey of the state of the art and a short tutorial for musicians on the possibilities offered by the interaction between Csound and the Python programming language in the field of algorithmic composition. Some examples using historical models have been developed, in particular an environment for generating variants of John Cage's Imaginary Landscape No. 5 and another to simulate the grammar of James Tenney's Four Stochastic Studies.
Keywords
Python, Algorithmic Score Generation, Electronic Music Teaching, John Cage, James Tenney.

Using a Waveguide to Model the Pipa in Csound

Ningxin Zhang

Abstract
Csound offers a huge set of tools for composers and sound designers. The author is a sound designer, electroacoustic composer, and classically-trained pipa player who was especially inspired by the sound of a number of Csound’s waveguide opcodes, and many of the physically modeled Csound instruments. Immediately, the author used them in an attempt to imitate the sound of the pipa. However, the result was quite different from the acoustic pipa tone and thus led to the research presented in this paper. What is shown here is how, in order to simulate a more convincing pipa, the author needed to follow physical modeling practice, and hand code the filters mathematically.
Keywords
Csound, Pipa, Waveguide, Karplus-Strong
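
As background for readers unfamiliar with the technique, the generic Karplus-Strong/waveguide starting point referred to above can be sketched in a few lines of plain Csound (an editorial illustration, not the author's pipa model):

instr 1
  iFreq = p4                        ; fundamental in Hz (assumes p4 >= 20)
  ; short noise burst as the excitation
  aGate linseg 1, 0.01, 0
  aExc  rand 0.5
  aExc  = aExc * aGate
  ; delay line ("string") with lowpass-filtered feedback
  aFb   init 0
  aDump delayr 1/20                 ; maximum delay: one period at 20 Hz
  aTap  deltapi 1/iFreq             ; tap one period back
        delayw aExc + aFb
  aFb   tone aTap, 6000             ; damping filter in the loop
  aFb   = aFb * 0.995               ; loop gain below 1 for a natural decay
  ; note: the feedback adds one ksmps block of extra delay, so tuning is approximate
  outs aTap, aTap
endin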

Rhythmic Synchronization of Events based on OSC Data from an External Source

Øyvind Brandtsegg

Abstract
OSC messages are an efficient and versatile means of communication between Csound and other software. An inherent drawback of using network communications is the potential for timing jitter. An event displacement of just a few milliseconds will in many cases be perceived as a problem for music performance. To alleviate these potential problems, one can use different methods for time stamping each OSC message, and the receiving module can use this for rhythmically precise synchronization when playing the events. The current article explores a method for such rhythmical synchronization within Csound.
Keywords
OSC, Python, Rhythm, Timing and Synchronization
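
One simplified way to picture the receiving side (an editorial sketch, not necessarily the article's method; the OSC address, type string, and the meaning of the timestamp field are assumptions) is to read a per-event offset from the message and defer the event by that amount rather than playing it on arrival:

giOSC OSCinit 47120             ; port number is an arbitrary choice

instr 1                         ; kept running from the score as the listener
  kWhen init 0
  kPch  init 0
  kGot  OSClisten giOSC, "/note", "ff", kWhen, kPch
  if kGot == 1 then
    ; kWhen is assumed to carry a sender-side offset in seconds relative
    ; to a shared grid; deferring by it absorbs network jitter
    event "i", 2, kWhen, 0.2, kPch
  endif
endin

instr 2
  aSig vco2 0.2, cpsmidinn(p4)
  outs aSig, aSig
endin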

Session 4

New Utility Classes and Sketches for Developers and Sound Designers in CsoundUnity

Mateo Larrea and Caio Jiacomini

Abstract
The CsoundUnity wrapper brings the realtime synthesis and signal processing power of Csound to applications created with Unity. Despite the various benefits that it provides, the lack of an extended set of examples and/or tutorials presents a challenge for those who are new to the workflow. In the following paper, we present a series of C# utility classes that will facilitate the creation of new models and applications by showing what is possible. These include scripts/classes that assist in the formatting of Channel and ScoreEvent data, getting Transform and RigidBody data, establishing trajectories for the spatialization of Audio Sources, processing spectrum data for audio reactivity, timing events based on mathematical sequences, processing real-time input from a microphone or AudioClip, and incorporating haptics into a list of potential interactions. Essentially, this collection aims to exhibit an accessible interface for those who are not Unity or Csound experts - abstracting repetitive methods and providing a layout for the further development of ‘sketches’ and fully-implemented applications.
Keywords
CsoundUnity, C# Classes, Csound, Unity

Designing VR Applications with CsoundUnity

Pedro Sodre

Abstract
This paper presents two interactive music systems built for the Oculus Quest that use the CsoundUnity package. The first system explores different ways to control Csound instruments and samples by interacting with 3D objects using object collision and the grip buttons of the VR controller. The second system transforms Boulanger’s classic “Trapped In Convert” into an interactive system that allows users to play and spatialize adaptations of the original Csound instruments used in the piece - to remix it, recompose it, and play along with it.
Keywords
Csound, Unity, CsoundUnity, VR

Realtime Audio Raytracing and Occlusion in Csound and Unity

Bilkent Samsurya

Abstract
Extensive research has gone into the development of interactive virtual environment tools, but studies of their audio technology are limited. There are many acoustical modeling tools that can be applied to audio processing for virtual environments, and the integration of Csound and Unity presents an opportunity to explore this area of research. This paper describes an implementation of a realtime ray tracing and sound occlusion system to model acoustics in a virtual environment. The game engine Unity is chosen for the environment and Csound for the audio processing component. In the following paper, the mechanism behind this process is outlined and the workflow is presented. A test to evaluate the effectiveness of the system is conducted. This is followed by a description of its limitations and a proposal for a revised model.
Keywords
Csound, Unity, Raytracing, Occlusion
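
On the Csound side, the simplest form of such a system reduces to reading per-source occlusion values computed in Unity; a minimal editorial sketch (not the paper's implementation; the channel name, the placeholder source, and the mapping are assumptions):

instr 1
  kOcc  chnget "occlusion"          ; 0 = clear path, 1 = fully occluded; written from Unity
  kOcc  port kOcc, 0.05             ; smooth abrupt changes from the game loop
  aSrc  vco2 0.3, 220               ; placeholder source sound
  kCut  = 18000 - kOcc * 16000      ; more occlusion -> darker sound
  aOut  tone aSrc, kCut
  aOut  = aOut * (1 - 0.7 * kOcc)   ; and quieter
  outs aOut, aOut
endin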

Session 5

Implementing Andy Farnell’s ‘bouncing ball’ in Csound

Marijana Janevska, James Anderson and Joachim Heintz

Abstract
This paper discusses the Csound implementation of the bouncing ball model from Andy Farnell's Designing Sound [Farnell 2010]. We will consider Farnell's approach to sound design with Pure Data, then present two possible procedures for bringing this model to Csound and improving it. Finally, we will present creative examples of varying and employing the model in a musical context.
Keywords
Csound, CsoundQt, Pure Data, Hannover, Incontri, FMSBW, Andy Farnell, Designing Sound
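
The core of the model is a self-accelerating trigger: each bounce schedules the next one after a shorter interval and at a lower amplitude. A minimal editorial sketch of that idea in Csound (not the paper's implementation; the impact sound is a placeholder):

instr 1                           ; start from the score, e.g. "i 1 0 0.1 0.5 0.5"
  iGap = p4                       ; time until the next bounce
  iAmp = p5
  if iGap > 0.01 then             ; stop once the bounces become too dense
    schedule p1, iGap, 0.1, iGap * 0.8, iAmp * 0.85
  endif
  aEnv expon iAmp, p3, 0.001      ; short percussive decay
  aSig oscili aEnv, 800           ; placeholder impact sound
  outs aSig, aSig
endin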

HYPERCURVE - A hybrid curve forge in Csound

Johann Phillipe and Jacopo Greco d’Alceo

Abstract
HYPERCURVE is a new library designed to combine different curve algorithms inside one function table. It is conceived as a tool for musicians looking to precisely shape their envelopes and function tables. The library is exposed to several environments, such as Csound, Faust, Lua and C++.
Keywords
Curve, Perception, Csound, Shape, Envelope, Waveform, Control
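
For a sense of the problem being generalised, stock Csound can already combine segments of different curvature in a single table with GEN16; a small editorial sketch (plain Csound, not the HYPERCURVE API):

giEnv ftgen 0, 0, 4096, 16, 0, 1024, 4, 1, 2048, -6, 0.3, 1024, 0, 0   ; three segments, three curvatures

instr 1
  kEnv tablei line:k(0, p3, 1), giEnv, 1   ; read the composite shape over the note
  aSig vco2 kEnv * 0.3, 220
  outs aSig, aSig
endin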

The road to electronic music education and composition in China based on Csound

Wanjun Yang and Jinhao Han

Abstract
Music programming has been an important approach to electronic music research and composition for more than half a century, but how to teach and promote it is a great problem for researchers. This paper will discuss the experiences and achievements of Chinese electronic musicians in electronic music education and composition with Csound. With the Sichuan Conservatory of Music as the object of analysis, we will explore how to combine new technologies with the essence of Chinese culture in electronic music composition and composer training, find new forms, and explore the impact of the intermingling and collision of different cultures on the promotion of music art and technology.
Keywords
Csound, Chinese culture, teaching, composition, The Night of Coding

Session 6

Scanned Synthesis Then & Now – The Design and Enhancement of Csound’s Scanned Opcodes

Richard Boulanger and John ffitch

Abstract
Scanned Synthesis was introduced at the 2000 ICMC in Berlin. The underlying algorithm was developed by Max Mathews, expanded upon and coded into Csound by Paris Smaragdis, and, over the years, enhanced and expanded further by John ffitch. Recently, when working to clarify the wording and improve the examples in the Csound Manual, a number of questions arose, such as: “What are the differences between all the scanned opcodes?”; “How do they interact with each other?”; and “How do the scanned opcodes actually work under the hood?” While answering these questions, and coding new examples, the authors realized that there was room for further improvement, and that some powerful optional arguments could be added to support further sonic exploration. In fact, where the scanned opcodes started, and where they are today, is quite an inspiring story. This paper will tell the story of the birth of scanu and scans in the mind of Max Mathews, and of their current scanu2, scantable and scanmap capabilities today.
Keywords
Csound, scanned, scanu, scanu2, scans, scanmap, gen44

Creating Modulation Matrices for Modern Synthesizers and Effects in Csound and Cabbage

Jonathon Walter

Abstract
When designing synthesizers and effects, often the most important and characteristic component is their modulation capabilities. Looking at commercial hardware such as pedals by Chase Bliss, essentially every parameter can be modulated internally, as well as controlled via MIDI and CV. Alternatively, hardware such as Sequential’s Prophet-6 offers limited customizable modulation, which helps establish the synthesizer’s signature sound. Internal modulation presents an interesting problem, as one doesn’t want to limit the modulation capabilities available to the user while also not sacrificing CPU resources. One solution is to build all-encompassing internal routing matrices into your plugins. The following paper will explain how to create a routing matrix for synthesizers and effects inspired by the one in Ableton’s Wavetable synthesizer.
Keywords
Csound, Cabbage, ntrpol, Modulation, Matrix, Plugins
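
The essence of such a matrix is that every destination sums every source scaled by its own amount cell. A minimal two-source, two-destination editorial sketch in Csound (not the plugin described in the paper; the amounts are hard-coded where a real plugin would expose them as widgets):

instr 1
  ; sources
  kLfo oscili 1, 0.5                  ; slow bipolar LFO
  kEnv linseg 0, 0.5, 1, p3 - 0.5, 0  ; simple envelope (assumes p3 > 0.5)
  ; matrix cells: how much of each source reaches each destination
  iLfoToPitch = 0.3
  iEnvToPitch = 0.0
  iLfoToCut   = 0.0
  iEnvToCut   = 1.0
  ; destinations
  kPitchMod = kLfo * iLfoToPitch + kEnv * iEnvToPitch
  kCutMod   = kLfo * iLfoToCut   + kEnv * iEnvToCut
  aSig vco2 0.3, 220 * (1 + 0.05 * kPitchMod)
  aOut moogladder aSig, 500 + 4000 * kCutMod, 0.4
  outs aOut, aOut
endin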