Attend unique concerts by composers who write their music in Csound. Share your works with the community. All authors are welcome!
Meet the Csound community and share your ideas and personal views on Csound. Exchange fresh ideas about future development.
Visit workshops on Csound programming, sound design, interactive synthesis and embedded systems, delivered by Csounders from all over the world!
Dr. Richard Boulanger was born in 1956 and holds a Ph.D. in Computer Music from the University of California, San Diego where he worked at the Center for Music Experiment’s Computer Audio Research Lab. He continued his computer music research at Bell Labs, CCRMA, the MIT Media Lab, Interval Research, and IBM while working closely with Max Mathews and Barry Vercoe.
For the past 25 years, Dr. Boulanger has been teaching computer music composition, sound design, alternate controllers, and programming at the Berklee College of Music, where he is a Professor of Electronic Production and Design and has been awarded both the Faculty of the Year Award and the President's Award. He has been a driving force behind the spread of Csound since its early days at MIT: he has taught thousands of students, lectured all over the world, and worked to bring Csound to the OLPC project.
His published work includes two seminal electronic production texts from MIT Press: The Csound Book and The Audio Programming Book.
The ICSC2015 begins its work at the Bonch-Bruevich St. Petersburg State University of Telecommunications, the region's biggest training and scientific complex specializing in information technologies and communications. The whole day is filled with interesting papers and talks. Do not miss the meeting with Csounders from all over the world!
This presentation will look at the current state of the upcoming Csound 7. It will cover planned changes to the language, such as explicit types, user-defined data structures, and new-style user-defined opcodes, discussing their design and rationale in detail. It will also look both at how features in Csound 6 made way for Csound 7 and at how new features planned for Csound 7 will make way for goals in Csound 8. The goal of the presentation is both to inform the user community of planned changes and to seek feedback to help shape the future of the system and language.
Patching environments such as Pure Data, AudioMulch and Max/MSP seem to attract a far higher percentage of computer music novices than traditional text-based compilers. So how would Csound fare if presented in such an environment? The latest version of Cabbage attempts to find out. Cabbage Studio is a new Csound-based DAW that provides users with a fully functional patching interface and development environment. Users can edit the audio graph on the fly, where each node in the graph can be either a Csound instrument or a third-party audio plugin.
This presentation will take a closer look at Cabbage and introduce a number of lesser-known Cabbage features, extended techniques and esoteric approaches to instrument design. It will show that Cabbage can offer innovative approaches to user interaction that go beyond those provided by most plug-ins. This extends beyond mere user input to methods of visualising Csound's mechanisms and a redefinition of the scope of a Csound csd.
Computer Music is a field that is now almost sexagenarian. Since the early days of Max Mathews' experiments at Bell Labs, it has evolved a symbiotic relationship between the artistic and the scientific. In a way, it is the former that gives meaning to the latter, and the latter that allows the former to move forward. Because of this, composers who want to engage solidly with the field need to come to terms with both aspects of Computer Music. An emerging set of skills and craft is becoming the basis of their education. These are very different from the ones normally gained in a traditional 'conservatory' musical training, although the two are not mutually exclusive and can coexist very well. In some places, a very common practice has been for composers to employ technicians to create the electronic or computer parts of their music. Although, as a creative-collaborative effort, this might have its merits, it is often the case that the motivation has more to do with the 'composer's' lack of skills than with an inherent desire for a cooperative endeavour. This mode of operation is very dated, and we have every right to expect that a computer musician will learn the tools of his or her trade. Thus, it is high time we face the task of setting out, in a well-defined way, what a Gradus ad Parnassum for Computer Music could be, and how Csound can contribute to it.
The basic concepts and new algorithms used in Csound 6 to enhance performance via parallelism are described in some detail. The hope is that this will assist users in determining when this technology is advantageous, and in writing instruments that are suited to this level of parallelism. The talk ends with a short consideration of how the scheme could be improved.
The “Csound On Stage Musical Operator” (COSMO) project brings the sound processing language Csound to a portable, standalone musical device in a stomp-box guitar pedal format. The pedal is designed for live performances on stage and allows easy integration of customized Csound-based effects into an existing live-electronics setup (Ervik et al., 2013). Inside the COSMO box, a Raspberry Pi micro-computer runs Csound 6.0 (via the Csound Python API) together with a custom-designed, pre-assembled circuit board (Hardware Attached on Top, C-HAT), which is mounted on the Raspberry Pi GPIO socket. The C-HAT allows up to eight freely assignable foot-switches, knobs and LEDs on the front panel of the pedal to be connected to the Raspberry Pi. Though the COSMO pedal is originally designed along the lines of a classic guitar effect pedal, the C-HAT allows connecting any type of analog control input (sensors, expression pedals etc.), as well as an array of LEDs. The C-HAT furthermore supports MIDI-In and MIDI-Out connections, which makes the COSMO device flexible enough for applications such as a standalone Csound-based synthesizer or interactive sound installations. A concept for a beginner workshop has been developed, in which participants build the COSMO device and learn the basics of Csound effect programming over a two-day schedule. Providing the pre-assembled C-HAT board reduces the amount of soldering in a workshop by one day. This presentation will give an overview of the COSMO project and outline the workshop concept.
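As a rough sketch of the kind of glue code such a setup involves (the function name, the 10-bit ADC range and the channel name are assumptions for illustration, not taken from the COSMO sources), a raw knob reading from the board could be scaled to a normalized Csound control value like this:

```python
def knob_to_channel(raw, adc_max=1023, lo=0.0, hi=1.0):
    """Map a raw ADC reading (0..adc_max) to the range [lo, hi]."""
    raw = max(0, min(raw, adc_max))      # clamp out-of-range readings
    return lo + (hi - lo) * raw / adc_max

value = knob_to_channel(512)             # mid position, roughly 0.5
# In a real COSMO patch the result would then be written to a named
# Csound channel (e.g. "knob1") through the Csound Python API.
```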
Auditory displays present information through sound. As part of an auditory display, the process of rendering information and interaction as sound is called sonification. Sonification can take many forms and be applied to many different problems: from understanding radiation through the clicks of a Geiger counter to the complex sound languages that present information in some computer games today. The study of sonification is well developed, and a scientific community with expertise in sound synthesis, big data, user interaction, computer science and cognition (among others!) has gathered around it.
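To make the idea concrete, here is a minimal parameter-mapping sonification sketch (the mapping and frequency range are illustrative assumptions, not taken from the paper): each data value is mapped linearly onto a pitch range, one tone per datum.

```python
def sonify(data, f_lo=220.0, f_hi=880.0):
    """Map each value in `data` linearly onto [f_lo, f_hi] Hz."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0              # avoid division by zero
    return [f_lo + (f_hi - f_lo) * (v - lo) / span for v in data]

freqs = sonify([3.1, 9.7, 5.0, 9.7])
# Each frequency could then drive an oscillator, e.g. via Csound
# score events, so that trends in the data become audible as pitch.
```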
This paper describes the system configuration for the piece that was played at the Csound Conference 2013 and for a revised piece with audio and visual feedback to the audience. The piece played at the Csound Conference 2013 is based on data sent by audience members using their own smartphones or tablets, without having to install any dedicated app. The system uses a WiFi router to accept connections from each participant, and a Linux laptop running Csound and lighttpd. The smartphone browsers perform XMLHttpRequest data transmissions, from which a PHP script generates OSC messages to control Csound. The revised system, which adds HTML5 Web Audio API processing and WebSockets to enhance feedback to each participant, is also described.
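For readers unfamiliar with the wire format in that chain, the following sketch shows what packing a simple OSC 1.0 message looks like: address pattern, type tag string, and a big-endian float argument, with each string zero-padded to a 4-byte boundary. The address and value are invented for illustration; the conference system does this step in PHP.

```python
import struct

def osc_string(s):
    """Encode a string with a null terminator, padded to 4 bytes."""
    b = s.encode() + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address, value):
    """Pack an OSC message carrying one float argument."""
    return osc_string(address) + osc_string(",f") + struct.pack(">f", value)

msg = osc_message("/csound/amp", 0.5)
# These bytes would be sent as a UDP datagram to Csound's OSC
# listener (e.g. the OSClisten opcode).
```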
This paper describes [self.], an open source art installation that embodies artificial intelligence (AI) in order to learn, react, and respond to stimuli from its immediate environment. Biologically inspired models are implemented to achieve this behavior, and Csound is used for most of the audio processing involved in the system. The artificial intelligence is physically represented by a robot head, built on a modified moving head for stage lighting. Everything but the motors of the stage lighting unit was removed, and a projector, camera and microphones were added. No form of knowledge or grammar has been implemented in the AI; the system starts in a 'tabula rasa' state and learns everything via its own sensory channels, forming categories in a bottom-up fashion. The robot recognizes sounds and faces, and is able to recognize similar sounds, link them with the corresponding faces, and use the knowledge of past experiences to form new sentences. Since the utterances of the AI are based solely on audio and video items it has learned from interaction with people, an insight into the learning process (i.e. what it has learned from whom) can be glimpsed. This collage-like composition has guided several design choices regarding the aesthetics of the audio and video output. The paper focuses on the audio processes of the system: audio recording, segmentation, analysis, processing and playback.
The main idea behind Knuth and Alma is the development of a simple live-electronics setup for a speaker. It follows the German children's song "Ich geh mit meiner Laterne" ("I walk with my lantern"), which describes the pairing of a child and a lantern. For the speaker here, the lantern is a small loudspeaker: small enough to be carried and to be put on the table next to the speaker while they read a text, large enough to be a counterpart to the human voice. Although the setup is the same for both, Knuth and Alma focus on quite different aspects of spoken language and output quite different sounds. Knuth analyzes the rhythm of the language and triggers pre-recorded samples of any sound, whereas Alma recalls parts of the speaker's past and uses no sound other than the speaker's live input itself. The author will first describe the two models and their implementation in Csound, then discuss some use cases and future possibilities.
Join us for the global Internet piece!
Our second day is full of events! We start with two paper sessions at the Bonch-Bruevich St. Petersburg State University of Telecommunications, followed by the User-Developer Round Table. Then we have an amazing concert!
MaxMSP is a popular audio programming environment that enables the quick prototyping of complex graphical interfaces which may be distributed as standalone applications for OS X and Windows. It is possible to exchange audio/control signals and table-based functions between MaxMSP and the Csound engine, thus taking full advantage of Csound’s DSP and Max’s GUI. This paper discusses the idiosyncrasies of the intercommunication between Csound and MaxMSP, and presents an application for multichannel granular and spectral sound transformation.
Adaptive audio for games presents a serious challenge for both developers and sound designers. Existing audio middleware offers some solutions, but falls far short of the kind of tools sound designers are used to working with. As game consoles become more and more powerful, it is time for developers to embrace more powerful audio engines. CsoundUnity is a fully integrated audio middleware for the Unity(3D) game engine. It extends previous work on the development of a C# wrapper for the Csound API by creating a Unity-specific C# library that embeds Csound directly into Unity games. With CsoundUnity, users have at their fingertips one of the most powerful synthesis systems in existence. Existing audio middleware forces users to work outside of Unity in some kind of hybrid graphical environment; CsoundUnity lets users script directly in Unity without constantly moving between environments.
This article is about the process of creating an iOS app for the digital emulation of the famous EMS VCS3 electronic synthesiser, using a Csound orchestra as the sound engine. Despite the non-standard features of the original instrument, we have attempted to clone a large number of its details: the sound generation, the connectivity, the look and its specific original ergonomics. We illustrate the general synthesis strategies adopted in creating this synth application and, in particular, focus on the Csound code of the main sound modules and their individual modelling approaches.
ChuckSound is a plugin for ChucK (otherwise known as a "chugin") that allows Csound to be run inside of ChucK. Prior to ChuckSound, a typical setup for getting Csound and ChucK working together was to start them as separate applications and use OSC and/or JACK to communicate. With ChuckSound, Csound is spawned inside of ChucK's audio engine via the Csound API. This approach allows Csound to work seamlessly with ChucK objects, without the latency that OSC would introduce. ChuckSound can evaluate Csound orchestra code inside of ChucK as well as send score events.
Throughout more than 20 years of its history, Csound has always been at the edge of computer music research, implementing novel synthesis methods and spreading them beyond research into the hands of musicians and sound designers. In its present state, Csound provides an efficient environment for agile sound experimentation, allowing users to work with almost any known sound synthesis or processing method via a vast collection of ready-made opcodes. Despite this, the Csound community still lacks quality reproductions of well-known synthesizers with commercial-grade UIs and capabilities, which in compiled form could be used without knowledge of Csound. To fill this gap, the authors implemented two commercial-style synthesizers, one of which has additionally been augmented with semantic synthesis control knobs. The paper describes their architecture, the Csound-specific aspects of full-featured synthesizer development, and a simple solution for the “semantization” of a synthesizer's control space.
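To illustrate what a "semantic" control knob means in practice (the parameter names and curves below are invented for illustration; the paper's actual mappings differ), a single high-level knob can be fanned out to several low-level synthesis parameters:

```python
def brightness(knob):
    """Translate one 'brightness' knob in [0, 1] into low-level settings."""
    return {
        "filter_cutoff": 200.0 + 7800.0 * knob ** 2,   # curved for perception
        "filter_resonance": 0.1 + 0.4 * knob,
        "osc_detune_cents": 5.0 * knob,
    }

params = brightness(0.5)
# Each entry would then be written to a Csound control channel of the
# same name, so the user turns one knob instead of three.
```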
This project aims to harness WebSockets to build networkable interfaces and systems using Csound's Portable Native Client binary (PNaCl) and Socket.io. Two methods are explored in this paper. The first is to create an interface to Csound PNaCl on devices that are incapable of running the Native Client binary: for example, running Csound PNaCl on a desktop and controlling it with a smartphone or a tablet. The second is to create an interactive music environment that allows users to run Csound PNaCl on their computers and play instruments in an orchestra interactively. In this paper, we also address some of the practical problems of a modern interactive performance system: latency, local versus global control, and everyone controlling every instrument versus one person per instrument. The paper culminates in a robust performance system that relies on web technologies to facilitate musical collaboration between people in different parts of the world.
Csound is a powerful synthesis environment that has traditionally been excluded from the modern DAW setup. While it is possible to route MIDI to Csound and audio from Csound, there has never been a solution that fully integrates Csound into a DAW environment. In this paper, we discuss how users can use Csound for Live to create Max for Live devices for their instruments that allow quick editing via a GUI; tempo- and transport-synced operations and automation; control of, and feedback from, Ableton via Ableton's API; and preset saving and recalling. Users will learn best practices for designing instruments that leverage both Max and Live, and will see demonstrations of instruments used in a full song, as well as how to integrate the powerful features of Max and Live into a production workflow.
Starting with ixi lang, an application created by Thor Magnusson, Hloðver learned about the practice of live coding. As a Csound user, he experimented with the on-the-fly evaluation available since Csound 6.0. After a few public performances in Reykjavik with pure Csound, he left the Csound language for Clojure, a LISP running on the JVM, for live coding. The Clojure language is very well suited to live coding, and Hloðver will talk about how Clojure can be used to communicate with Csound, ending with a short performance.
Csound is a very powerful audio engine, but in syntactic ease of use it still lags far behind modern languages like Python, Ruby or Haskell. That is a pity, because the clumsy syntax prevents many users from unlocking the real powers of Csound. In my library csound-expression I take the best parts of Csound (the powerful audio engine and scheduler) and give them the wings of syntax gifted to Haskell. The proposed solution can greatly enhance the productivity of the Csound musician.
On the last day we have ambisonic pieces by Csounders and a beautiful chamber concert. Hear the pieces written for live musicians and Csound. Exact times will be announced later.
The region's biggest training and scientific complex, specializing in information technologies and communications
22/1 Bolshevikov ave., St. Petersburg, Russia
The apartment of the famous Russian composer N. A. Rimsky-Korsakov
28 Zagorodny ave., St. Petersburg, Russia
Pulkovo International Airport is located 23 km south of the city. You can get to the city by bus or taxi.
Most of our guests will need a visa to travel to Russia. For further details about obtaining a visa, please contact us.
We have agreed on a special discount for our guests with the Koffer Residences Hostel (450 RUR/night) and the Ambassador Hotel (4 stars, single room — 4660 RUR/night, double room — 5410 RUR/night). Both places are located near metro stations and the two conference venues, including the Rimsky-Korsakov Museum-Apartment. To get the special rates, please book a hostel/hotel by writing a letter to firstname.lastname@example.org, noting that you are an ICSC2015 participant. Should any issues arise, please do not hesitate to contact us.
For any questions, please contact us by email: email@example.com
For up-to-date news, please follow us on social networks: