[MWS]: Berlin ICMC 2000

useif useif at uni-hamburg.de
Mon May 29 09:00:18 CEST 2000


Some information about the International Computer Music Conference
(ICMC) 2000 in Berlin.


-----------------------
Workshops at ICMC 2000
-----------------------

This year, the International Computer Music Conference (ICMC) will
take place in Berlin, August 27 - September 1. With its motto
"Gateways to Creativity", ICMC 2000 underscores the creative use of
technological developments in the arts and the emergence of a new
generation of human-oriented technologies. A special feature of ICMC
2000 will be the 11 workshops on topics from aesthetics and cognition
research to a number of important tools and techniques in sound
design and computer assisted music creation.

There will be nine short workshops lasting half a day and two long
workshops lasting three days each. Registration will be possible both
on an individual workshop basis and for a workshop group for the
entire duration of the event. The workshop topics at a glance:

- Rhizome Café: Networked Digital Sound in David Tudor's Rainforest
- Soundscape Composition and Multichannel Audio Diffusion
- Computer music studies, aesthetics and intercultural issues
- Cognition and Perception of Computer Music: Principles and Issues
- Notation and Music Information Retrieval in the Computer Age
- Computer Music Programming for the Web with JSyn and JMSL
- Networked Realtime Sound and Graphics Synthesis with SuperCollider
- Collaborative Composition for String Instruments and Live Electronics
- Spatialization Techniques with Multichannel Audio
- Sensors for Interactive Music Performance
- Composing with Algorithmic Processes

In addition, four panel discussions, some with a strongly applied,
workshop-like character, will take place during the ICMC itself. The
topics are:

- Aesthetics of Computer Music
- Analysis-Synthesis Techniques
- Content Retrieval of Music
- Digital Audio Effects

What follows is a summary of the contents of the workshops and
panels. We invite students, practicing artists and researchers from
all related fields to participate. For registration details or
further information, go to:

          http://www.icmc2000.org/




3-Day Project Workshops: August 24th to 26th, 2000

==Rhizome Café: Colliding with Rainforest (Ronald Kuivila, USA)



This workshop will introduce techniques of sound analysis, sound
synthesis, and network control in order to create sound material for
a realization of David Tudor's Rainforest. The workshop should be of
interest to people who wish to familiarize themselves with Rainforest
as a work and SuperCollider as a tool. Naturally, participants
wishing to use other tools in the preparation of sound material are
free to do so. (Please notify us, so we can arrange OSC support for
the networked component of the workshop.) Rainforest is remarkably
effective at engaging musicians and non-musicians alike in the
creation of electroacoustic music, so the workshop should also be of
interest to teachers of electronic and computer music.

What is Rainforest?

In Rainforest, speaker drivers are attached to found objects to
create an orchestra of sounding objects. Rainforest objects are
better imagined as 'filters' than as loudspeakers. Many Rainforest
objects have spatial distribution characteristics ill-suited to
traditional concert presentation. For example, an oil drum hung
upside down a few feet off the ground with a driver attached creates
a reverberant miniature environment available to those who duck their
heads inside it.

Consequently, Rainforest is presented as a walk-through environment
that blurs the distinctions between "workshop", "installation" and
"concert" while remaining centered on musical concerns. This makes
Rainforest a remarkably effective 'pedagogical' work: it is simple
enough in conception for beginners, while making subtle demands that
can keep very experienced musicians quite occupied.

The Workshop

Developing a realization of Rainforest involves choosing an object,
discovering the best way to attach a 'driver' to the object, and
developing sounds for the resultant instrument. Because of the
limited time available, a collection of objects will be developed
before the workshop begins.

The primary focus of the workshop will be on creating sound material
for these objects and incorporating that material into performance
structures based on network communication. A detailed understanding
of the acoustical characteristics of the objects in Rainforest makes
it much easier to develop sound material for them. Consequently, the
workshop will also introduce methods for extracting impulse responses
from the objects and graphing their spectra.
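
As a taste of what that impulse response work involves, here is a
minimal sketch (in Python with numpy/scipy/matplotlib, not the
SuperCollider toolkit itself) of extracting a spectrum from a
recorded tap on an object; the filename and the assumption of a mono
recording are illustrative only:

import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile

rate, ir = wavfile.read("object_tap.wav")  # hypothetical mono recording of a tap
ir = ir.astype(np.float64)
peak = np.max(np.abs(ir))
if peak > 0:
    ir /= peak                             # normalize to +/- 1

spectrum = np.abs(np.fft.rfft(ir))         # magnitude spectrum of the response
freqs = np.fft.rfftfreq(len(ir), d=1.0 / rate)

plt.semilogx(freqs[1:], 20 * np.log10(spectrum[1:] + 1e-12))
plt.xlabel("Frequency (Hz)")
plt.ylabel("Magnitude (dB)")
plt.title("Spectrum of a Rainforest object's impulse response")
plt.show()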

Ideally, workshop participants will develop their own objects and
bring them to the workshop. The 'drivers' (speaker coils, not
software!) needed are available from:

http://www.centuryinter.net/invisiblestereo/index.html

Additional information, assistance, and software (see below) will be
available throughout the summer via email to registered workshop
participants.


Day 1:
The Rainforest Toolkit

The Rainforest Toolkit is a library of classes and examples written
in SuperCollider and designed for use as an extensible environment
for creating sound material for Rainforest. The toolkit can be used
naively as a library of preexisting 'patches' for which users can
save 'presets'. (It is possible to interactively interpolate between
these presets, in a manner similar to that found in GRM Tools.) The
first half of the day will introduce the basic features of the
toolkit (including impulse response extraction) and apply them
directly to Rainforest objects.

The second half of the day will introduce the basics of synthesis in
SuperCollider, how to make synthesis programs, how to use the
Rainforest Toolkit to make GUI controls, and how to add the resultant
'patches' to the toolkit. This segment will stay firmly focused on
the unit generator library, augmented by discussion of lists,
closures, classes, and objects only where necessary.

Day 2:
morning: SuperCollider as a sequential programming language

One of the most challenging aspects of SuperCollider for newcomers is
the interaction between sound synthesis and event-oriented control.
We will introduce the basic syntax of the underlying programming
language and take a process-oriented approach to using this language
to control sound synthesis, using mechanisms provided by the
Rainforest Toolkit that simplify control.

We will then illustrate the Open Sound Control support that
SuperCollider provides with a simple networked spatialization scheme.
(The audio outputs of each computer will be connected to a
sixteen-channel spatialization system. OSC packets from those
computers will control where the sound of each computer appears in
the Rainforest.)
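
For readers unfamiliar with OSC, the following sketch shows what such
a control packet looks like on the wire, following the OSC 1.0
encoding rules (4-byte-aligned, null-padded strings and big-endian
floats). The address pattern, host and port here are placeholders,
not the workshop's actual setup:

import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    packet = osc_pad(address.encode("ascii"))
    packet += osc_pad(("," + "f" * len(floats)).encode("ascii"))
    for f in floats:
        packet += struct.pack(">f", f)  # 32-bit big-endian float
    return packet

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# hypothetical address and server for a position in the speaker array
sock.sendto(osc_message("/rainforest/pan", 0.25, 0.75),
            ("192.168.0.10", 57120))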

afternoon: SuperCollider as an object-oriented programming language

SuperCollider provides a more powerful, object-oriented approach to a
score language. We will provide a brief overview of this approach,
introducing the Stream, Pattern, and Event abstractions and
illustrating their application both to note-oriented Events (as found
in Event.protoEvent in the SuperCollider source) and to non-standard
event abstractions more suited to Rainforest.
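
The following Python generators are a rough conceptual analogue of
the Pattern/Stream idea (SuperCollider's actual Pseq, Pwhite, Stream
and Event classes are far richer than this): a pattern is a reusable
recipe, a stream is one lazy traversal of it, and an event bundles
the parameters for one sonic action:

import itertools
import random

def pseq(values, repeats=1):
    """Like SC's Pseq: yield the list in order, `repeats` times."""
    for _ in range(repeats):
        yield from values

def pwhite(lo, hi):
    """Like SC's Pwhite: an endless stream of uniform random values."""
    while True:
        yield random.uniform(lo, hi)

# Zip parameter streams into a stream of Events.
events = ({"degree": d, "dur": dur}
          for d, dur in zip(pseq([0, 2, 4, 7], repeats=2),
                            pwhite(0.1, 0.5)))

for event in itertools.islice(events, 8):
    print(event)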

Day 3:
Preparing realizations

This day will be devoted to preparing and assembling sound material
for the installation and rehearsing performance structures based on
network intercommunication.

An ensemble structure based on 'flocking' algorithms, applied to the
spatial and temporal distribution of sound material from the
participants, will be a simple first step. Workshop participants are
invited to propose their own network-based structures for
realization. (Please do this early, so there is time to prepare
them.)
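
To make the 'flocking' idea concrete, here is a toy one-dimensional
sketch (an assumption about how such an ensemble structure might
look, not the workshop's actual code) in which each machine's pan
position is drawn toward the group's mean while repelling very close
neighbours:

import random

def flock_step(positions, cohesion=0.05, separation=0.1, min_dist=0.05):
    center = sum(positions) / len(positions)
    new_positions = []
    for i, p in enumerate(positions):
        v = cohesion * (center - p)              # steer toward the flock
        for j, q in enumerate(positions):
            if i != j and abs(p - q) < min_dist:  # too close: repel
                v += separation * (p - q or random.uniform(-0.01, 0.01))
        new_positions.append(min(1.0, max(0.0, p + v)))
    return new_positions

positions = [random.random() for _ in range(8)]   # eight networked machines
for step in range(20):
    positions = flock_step(positions)
print([round(p, 2) for p in positions])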

evening:
Performances with the Rainforest installation will run from 8:00 PM
until midnight. The evening will begin and end with a 'walkthrough'
performance based on the sound material created in the workshop.
Other performances through the evening will include the network-based
structures, together with individual contributions that approach the
collection of objects as a single 'instrument'.

Afterwards:

Rainforest will run as a sound installation throughout the ICMC,
using the sound material prepared during the workshop.







==Soundscape Composition and Multi-channel Audio: Techniques, Issues (Barry Truax, CA)


The workshop will focus on soundscape composition in the context of
multi-channel diffusion. Drawing on the pioneering work of the World
Soundscape Project and contemporary compositions by composers
associated with Simon Fraser University, the workshop will include
presentations on the processing of environmental sound using granular
and other approaches, the psychoacoustics of multi-channel diffusion
(comparing discrete-channel approaches to emerging commercial
formats), as well as an overview of the hardware and software systems
involved. These presentations will be complemented by listening
sessions for octophonic works and hands-on experience with a
multi-channel system.
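
For readers new to granular processing, the following bare-bones
sketch (a generic illustration in Python, not the SFU software)
time-stretches an environmental recording by overlap-adding short
Hann-windowed grains, with the read pointer advancing more slowly
than the write pointer:

import numpy as np
from scipy.io import wavfile

rate, src = wavfile.read("environment.wav")   # hypothetical mono recording
src = src.astype(np.float64)

grain_len = int(0.050 * rate)                 # 50 ms grains
window = np.hanning(grain_len)
stretch = 4.0                                 # output 4x longer than input
hop_out = grain_len // 2                      # 50% overlap in the output
hop_in = max(1, int(hop_out / stretch))       # read pointer advances slower

n_grains = (len(src) - grain_len) // hop_in
out = np.zeros(n_grains * hop_out + grain_len)
for g in range(n_grains):
    grain = src[g * hop_in : g * hop_in + grain_len] * window
    out[g * hop_out : g * hop_out + grain_len] += grain

out /= max(np.max(np.abs(out)), 1e-9)         # normalize
wavfile.write("stretched.wav", rate, (out * 32767).astype(np.int16))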

Soundscape studies began formally with Canadian composer R. Murray
Schafer's call to establish the World Soundscape Project (WSP) in
Vancouver, Canada in the late 1960s. The original WSP culminated in a
number of definitive texts (such as Schafer's The Tuning of the World
[1977] and Barry Truax's Handbook for Acoustic Ecology [1978]), as
well as archival and recording projects (including the European Sound
Diary and the Vancouver Soundscape [1973] recordings). Soundscape
studies and composition continue to have an international influence
(with soundscape projects from New Delhi, Brasilia, Madrid,
Amsterdam, Buenos Aires, and Soundscape Vancouver [1996]). For a
brief account of the WSP see http://www.sfu.ca/~truax/wsp.html.

The history of multi-channel diffusion is equally rich, from the
early experimental works of the 1950s (Schaeffer's Potentiometre
d'espace [1951], Varèse's Poème Electronique [1958], et al.) to the
latest automated, psychoacoustically informed multi-channel systems.
Our own research in designing and composing with automated matrix
mixers began with the development of the DM-8 (first used at the
Banff ICMC [1995], subsequently in the Soundscape Vancouver [1996]
realizations) and continues with the Vancouver-based commercial
development of the AudioBox/ABControl diffusion system. For an
overview of research into automated diffusion at Simon Fraser
University see:
http://cec.concordia.ca/contact/contact101Tru.html
and:
Barry Truax, "Composition and Diffusion: Sound in Space in Sound",
Organised Sound, 3(2), 1998, pp. 141-6.

Soundscape composition and automated, multi-channel diffusion
complement one another. Soundscapes (both real and imaginary) are
inherently immersive and so best conveyed through multiple-speaker
arrays, while the difficult technical problems associated with
controlling complex, multi-channel diffusion are solved by using
flexible, automated systems driven by a host computer.

There are also some cautionary lessons to be drawn from soundscape
studies and our research with discrete-channel systems with regard to
emerging industrial multi-channel audio formats. Public and home
theater surround-sound systems, for example, are being marketed as
new technologies when, in fact, there is already a long-standing
musical aesthetic and practice for multi-channel sound reproduction.
If that practice and accumulated compositional and listening
experience is brushed aside, are the new technologies then likely to
enhance our collective listening experience, or degrade it (as in the
case of so many modern additions to the urban soundscape)?

Additionally, we would hope to extend the minimal scenario for the
workshop to a longer period and include active participation and
composition/diffusion realizations by local and visiting composers.

Repertoire and Links

Over several years we have accumulated a diverse repertoire of musical
material which could be presented in the workshop or concurrent and
subsequent listening sessions. Others are currently being developed. A
brief selection of existing soundscape diffusions includes:

- The Hidden Tune, Sabine Breitsameter (Germany)
- Pendlerdrøm, Barry Truax (Canada)
- Recharting the Senses, Darren Copeland (Canada)
- Sequence of Earlier Heaven, Barry Truax (Canada)
- Vanscape Motion, Hans Ulrich Werner (Germany)
- Vancouver Soundscape Revisited, Claude Schryer (Canada)
- Talking Rain, Hildegard Westerkamp (Canada)
- Toco y me voy, Damián Keller (Argentina)

Further artist information and discographies are available at:
http://www.sfu.ca/~truax/cdlist.html
http://earsay.com

For current concert and workshop information relating to the
AudioBox/ABControl system and additional repertoire (soundscape and
other) see:

http://www.interlog.com/~darcope/adven.html (D. Copeland's Sound Travels site)

For AudioBox manufacturers’ software and hardware technical
documentation see:
http://thirdmonk.com (Third Monk Software, ABControl diffusion software)

http://hfi.com (Harmonic Functions, AudioBox matrix mixer)

========

Workshops / Tutorials: August 26th and 27th, 2000

==Computer Music: For whom is this music intended? (Leigh Landy, UK)
(Computer music studies, aesthetics and intercultural issues)

This workshop focuses on a number of contemporary topics within the
area of computer music studies, in particular new aesthetics and
accessibility issues. Questions presented, which will be subjects for
debate, include:

1. Why is a good deal of computer music marginalised (and what are
the causes)?
2. Are there new aesthetic approaches to computer music?
3. Similarly, are there 'schools' of computer music, and, if so, what
holds them together?
4. How does computer music reflect today's multi-cultural world?
5. How might bridges be built to a broader electroacoustic community?

More specifically, the areas of computer music studies will be
delineated. Recent discoveries of importance and new paradigms will
be introduced. Delicate problem areas, such as the schism between
computer music studies and the study of traditional forms of music,
will be discussed. Similarly, the separation between computer 'art
music' and 'popular music' studies will be criticised. Following a
general introduction, several subjects will be introduced, including
relevant demonstrations, and a number of debates will take place. The
session will be issue- and method-based. The goal of the workshop is
not only to increase understanding of new music, but also to increase
awareness of the issues this electroacoustic music raises in a very
dynamic society.

==Cognition and Perception Issues in Computer Music (Ian Whalley, NZ)


This workshop will give an introduction to current research issues
and methods in music cognition and perception, their application to
the composition of computer music, and their implications for
developing new music theory. A main area of focus will be the use of
systems dynamics modelling to map narratives as a way of approaching
composition, including demonstrations of this principle with computer
software that allows users to develop their own models. Furthermore,
a basic introduction to connectionist (neural network) models for
auditory perception, as well as to auditory scene analysis, will be
given.
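
As a purely illustrative example of systems dynamics in general (not
the workshop's own software), the toy stock-and-flow model below lets
a narrative 'tension' level rise during a conflict episode and decay
toward rest, then maps the trajectory onto note density:

def simulate(steps=100, dt=0.1, decay=0.3):
    tension, history = 0.0, []
    for t in range(steps):
        drive = 1.0 if 20 <= t < 60 else 0.0       # a "conflict" episode
        tension += dt * (drive - decay * tension)  # stock = inflow - outflow
        history.append(tension)
    return history

trajectory = simulate()
peak = max(trajectory)
# map tension onto note density: dense during conflict, sparse at rest
densities = [1 + round(9 * x / peak) for x in trajectory]
print(densities[:10], "...", densities[40:45])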

==Notation and Music Information Retrieval in the Computer Age (Carola Böhm, UK)


The interrelated issues of description, representation and retrieval
of time-based data are subjects of rapidly accelerating interest to
the ever wider community of users of digital resources. One area with
its own technical, cognitive, perceptual, and aesthetic problems is
that of music. The very word 'music' embraces an enormous range of
cultural activities and meanings. Technical aspects such as
"Representation", "Standards", "Storage and Retrieval" and "User
Interfaces" need to be reconsidered in a music-relevant context. Any
or all of these may have importance in the design of systems intended
to handle 'music' in the context of digital service provision.

The workshop addresses the need to redefine certain traditional
computing science methodologies within the context of music, and asks
whether traditional methodologies are still valid or need to be
changed or expanded to fit the special needs of music and
music-relevant use contexts. In doing so, the methodologies addressed
will be briefly explained. These will draw on the results of the
Music Information Retrieval workshops at DRH99 in the UK and SIGIR99
in the US, while exploring current fundamental problems of Music
Information Retrieval. Whenever systems are planned or standards are
proposed, funding bodies and the scientific world in general expect
music technologists to base their development upon existing,
standardised, conventional or proven methodologies coming out of the
computing sciences in general, specifically from the fields of
Information Retrieval, Human-Computer Interaction, signal processing,
object orientation, language design, and similar. The question seldom
asked or answered is: do these methodologies still work within the
context of music? More specifically, the kinds of issues addressed
are:

User interfaces:

- Human-Computer Interaction (HCI): What is a user-friendly interface
  in a musical context?
- HCI: Are traditional evaluation models (Nielsen, Nievergelt,
  conceptual models, the golden rules of HCI) enough to evaluate
  interaction in a musical context? And if not, how do they have to
  be expanded?

Information Retrieval (IR):

- How can a large number of relevant musical search 'hits' be
  represented clearly, and how can this relevance be measured?

Music representation:

- Data structures: music encoding syntaxes vs/and/or (?) music
  programming environments
- Language design: What makes a good music language?

Standardization processes:

- MPEG-7 and SMDL: a promising future, never to be fulfilled?
- The need for interchange file formats

Storage and retrieval:

- Signal processing and metadata: data compression vs. data
  description explosion?
- IR: Can traditional IR evaluation methodologies be used for music
  information retrieval?
- HCI: How to specify what the user wants to find in a
  (content-based) musical search?


In this context the workshop will contribute to the experiences
shared by:

1. academic users pursuing music research or education
2. people involved with music management and retrieval systems
3. the music industry

Specifically, the area of Music Information Retrieval will benefit,
for example:

1. digital libraries using information retrieval technologies for
time-based media
2. applications using synchronisation technologies for the creation
of multiple and different types of time-based media
3. educational applications using dynamic and changing music
representation over wide-area networks


== Computer Music Programming for the Web with JSyn and JMSL (Phil Burk and Nick Didkovsky, USA)


The web can be your concert hall. Using JSyn you can develop complex
interactive computer music pieces that run in a web browser. JSyn is
a Java API that provides real-time synthesis for Java applications
and applets. It is based on a unit generator model and is designed
for real-time interaction. JMSL, the Java Music Specification
Language, is a composition toolkit written in Java that can control
JSyn or JavaSound. JMSL provides hierarchical scheduling tools,
distribution functions and sequence generators based on the tools in
its predecessor HMSL.

The tutorial will cover:

- creating, connecting and controlling unit generators,
- loading and queuing sample data,
- creating and queuing envelope data,
- creating complex patches,
- building hierarchies of composition objects using JMSL,
- using various algorithmic tools of JMSL,
- using the Java network API to make interactive multi-user pieces,
- practical issues involving plugins and browser compatibility,
- how to put an interactive algorithmic computer music piece in a web
  page.


This workshop will explain how to write computer music programs in
Java that can be placed in a web page. We will use the JSyn API to
synthesize audio in real time using a library of unit generators. We
will then use the JMSL API to create high-level compositions that can
include hierarchical scheduling. Because JSyn and JMSL are
Java-based, composers can combine any of the Java APIs (networking,
3D graphics, GUI tools, etc.) with these two powerful music packages.
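
JSyn's API is Java, but the unit generator model it is built on can
be sketched in a few lines of Python; the class names below are
illustrative, not JSyn's. Each unit is an object whose output plugs
into another unit's input, and the patch is pulled one sample at a
time:

import math

class SineOsc:
    def __init__(self, freq=440.0, rate=44100):
        self.freq, self.rate, self.phase = freq, rate, 0.0
    def next(self):
        self.phase += 2 * math.pi * self.freq / self.rate
        return math.sin(self.phase)

class Gain:
    def __init__(self, source, amount=0.5):
        self.source, self.amount = source, amount
    def next(self):
        return self.amount * self.source.next()  # pull from upstream unit

# "Patch": oscillator -> gain, then render one second of samples.
patch = Gain(SineOsc(freq=220.0), amount=0.25)
samples = [patch.next() for _ in range(44100)]
print(f"rendered {len(samples)} samples, peak {max(map(abs, samples)):.3f}")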

JSyn can be downloaded for free at: http://www.softsynth.com/jsyn

Details on the workshop leaders can be found under:
http://www.softsynth.com/philburk.html
http://www.ingress.com/~drnerve/nerve/pages/nick.shtml

== Networked Realtime Sound and Graphics Synthesis with SuperCollider


This is a hands-on tutorial on creating interactive musical
applications with SuperCollider. SuperCollider is an extremely
versatile object-oriented programming environment with a large number
of generators for real-time sound and graphics synthesis. It supports
both MIDI and the Open Sound Control communication protocol. For more
info on SuperCollider see the website:
http://www.audiosynth.com

== Collaborative Composition for String Instruments and Live Electronics (Hugh Livingston, USA)


Advancing techniques of instrumental performance while advancing
sophistication and imagination in the interface are appropriate goals
of interactive music. Up-close and hands-on access to a cellist with
a wide range of extended techniques - including nearly a hundred
pizzicato sounds - establishes a framework for incorporating new
techniques into composition with live electronics. The process of
"extending the extended techniques," suggested by composer Bruce
Bennett, is examined. The objective is to build a library of
techniques, both instrumental and technological, which are available
as a shared resource for music-making. Performances and discussion of
existing collaborative efforts are offered.

Diverse directions in instrumental technology are considered, with
demonstrations of commercial electric instruments contrasted with the
impact of materials technology on the natural instrument. Is there a
palpable difference in cellos with titanium endpins, tungsten
strings, carbon fiber bows, wooden tailpieces, bridges with pickups?
The range of expression of each instrument is demonstrated in order
to draw up a blueprint for the future of instrumental technology.
Much of the seminar will be devoted to exploration of extended string
techniques, their application and versatility, with potential
interface designs. Participants will receive a CD-ROM with audio
examples of many techniques which can be used to test a new
generation of DSP tools. Participants are encouraged to bring sample
sketches and patches (by prior arrangement) for workshop
consideration. Arrangements will be made for subsequent testing of
new patches, with the hope that new compositions will result from the
collaboration.

The workshop is designed to be interactive and to offer considerable
opportunities for audience participation. I wish to bring into close
contact the ideas put forward by performers, composers and engineers,
setting up a framework for continued interaction at future
conferences. The instrument will serve as the essential focus.
Exposure to a new idea about cello sound is guaranteed.

== Spatialization Techniques with Multichannel Audio (Olivier Warusfel, FR)


This workshop will explain principles of sound spatialization, based
on sound spatialization software developed at IRCAM using the
real-time synthesis environments jMax and MAX/MSP, and on ambisonics
techniques. It will show how composers and sound designers can use
these tools to simulate the movement of discrete sound sources in
virtual sound spaces, using multichannel audio.
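
One common building block of such spatialization is pairwise,
equal-power panning across a ring of speakers; the sketch below
illustrates just that idea, and is far simpler than IRCAM's Spat or a
proper ambisonic encoder:

import math

def pan_gains(azimuth_deg, n_speakers=8):
    """Return per-speaker gains for a source at the given azimuth."""
    spacing = 360.0 / n_speakers
    pos = (azimuth_deg % 360.0) / spacing       # fractional speaker index
    lo, frac = int(pos) % n_speakers, pos - int(pos)
    hi = (lo + 1) % n_speakers
    gains = [0.0] * n_speakers
    gains[lo] = math.cos(frac * math.pi / 2)    # equal-power crossfade
    gains[hi] = math.sin(frac * math.pi / 2)
    return gains

print([round(g, 2) for g in pan_gains(100.0)])  # source at 100 degrees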

== Sensors for Interactive Music Performance (Yoichi Nagashima, JP)


This workshop focuses on sensor technology for interactive music
performance. Several different types of sensors serving as interfaces
between humans and computer systems will be demonstrated and
discussed, both from a technical and from an artistic viewpoint, with
examples from multi-media works. An introduction to the design of
sensing systems will be provided, explaining how to create them
without expert knowledge of electronics. The handling of sensor
information to create interactive art will be shown using the MAX
graphical programming environment. Starting with a number of given
sample "patches", or simple programs, we shall give a hands-on
introduction to the treatment of sensors, the programming of new
algorithms and the composition of sample works. Finally, we will
discuss the overall implications of "new human interfaces" and
interactivity in multi-media technology.
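
The usual first step with any sensor, regardless of the environment,
is to read a raw value, smooth it, and scale it into a musical range.
The sketch below shows this in Python (the serial port, 10-bit range,
one-integer-per-line protocol, and use of the pyserial package are
all assumptions; the workshop itself works in MAX):

import serial  # pyserial

PORT, RAW_MAX = "/dev/ttyUSB0", 1023        # placeholder port, 10-bit sensor

def scale(value, lo, hi):
    """Map a 0..RAW_MAX sensor reading into the range [lo, hi]."""
    return lo + (hi - lo) * min(max(value / RAW_MAX, 0.0), 1.0)

with serial.Serial(PORT, 9600, timeout=1) as link:
    smoothed = 0.0
    while True:
        line = link.readline().strip()
        if not line:
            continue
        raw = int(line)                     # sensor sends one integer per line
        smoothed += 0.1 * (raw - smoothed)  # one-pole smoothing against jitter
        cutoff = scale(smoothed, 100.0, 4000.0)  # e.g. drive a filter cutoff
        print(f"raw={raw:4d}  cutoff={cutoff:7.1f} Hz")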

Links to material on this workshop:
http://nagasm.org/ASL/11-11/

== Composing with Algorithmic Processes (Rick Taube, USA)


A hands-on introduction to computer-based music composition
techniques, focusing primarily on random and iterative processes.
Topics include, but are not limited to, the use of noise, discrete
and continuous random selection, weighted randomness, Markov chains,
looping, phasing, state machines, rewrite systems, and dynamical
systems. The workshop presents the theoretical content in a
non-technical manner, together with interactive, graphical demos that
participants can use to explore the concepts presented.
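
Two of the techniques named above, weighted random selection and a
first-order Markov chain, fit in a few lines of Python (a generic
illustration, not the workshop's own environment):

import random

# Weighted randomness: pick durations with unequal probabilities.
durations = random.choices([0.25, 0.5, 1.0], weights=[5, 3, 1], k=12)

# First-order Markov chain: the next pitch depends only on the current one.
transitions = {
    "C": {"E": 0.5, "G": 0.3, "C": 0.2},
    "E": {"G": 0.6, "C": 0.4},
    "G": {"C": 0.7, "E": 0.3},
}

def markov_melody(start="C", length=12):
    pitch, melody = start, [start]
    for _ in range(length - 1):
        options = transitions[pitch]
        pitch = random.choices(list(options), weights=list(options.values()))[0]
        melody.append(pitch)
    return melody

print(list(zip(markov_melody(), durations)))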

=============

Panels

==Aesthetics of Computer Music


The purpose of this panel session is to foster discussion on cultural
aspects of computer musics. The main questions addressed are: Why is
a good deal of computer music marginalised? Are there new aesthetic
approaches to computer music? Are there 'schools' of computer music,
and, if so, what holds them together? How does computer music reflect
today's multi-cultural world? How might bridges be built to a broader
electroacoustic community?

==Analysis-Synthesis Techniques


The aim is to compare different analysis techniques and software
packages running on the same input material, in order to gain
insights into questions such as: What are the subtle differences
between models? Which techniques are better suited for which class of
sounds? How do different analyses trade off mutability for accuracy
of resynthesis? What are typical artifacts and trade-offs? Materials
will be made available in SDIF format, accompanied by generic display
tools and example code, on the web. For more information, see
http://cnmat.CNMAT.Berkeley.EDU/SDIF/ICMC2000

==Content Retrieval of Music


This panel will discuss current issues of content processing and
retrieval of audio and music related to new international standards,
such as the MPEG-7 proposal on timbre description, and their impact
on the media industry, music distribution, and musical creation
practices. This panel is organized in cooperation with the European
research working group CUIDAD. For more information, see:
http://www.ircam.fr/cuidad/

==Digital Audio Effects


This panel will focus on different aspects of digital audio effects:
signal processing, high-level processing (extraction of features,
adaptive effects), control, perceptual and musical aspects, and
implementations for educational and musical use. It will include a
presentation of the first results of the European COST-G6 DAFx action
and a discussion of how information on digital audio effects can be
gathered and distributed (web, book, sounds). For more information
about DAFx see
http://echo.gaps.ssr.upm.es/COSTG6/
http://www.sci.univr.it/~dafx/