[audiality] Re: New scripting engine

  • From: David Olofson <david@xxxxxxxxxxx>
  • To: audiality@xxxxxxxxxxxxx
  • Date: Wed, 19 Jan 2005 17:26:58 +0100

On Wednesday 19 January 2005 15.27, Julien Pauty wrote:
> Hi, I have subscribed to this list because I know your work around
> SDL. Currently I am interested in coding some audio stuff,
> especially virtual synths. I have seen some open source synths
> around, but the code is completely uncommented. Therefore they are
> hardly reusable by someone who is outside of the project.

Yeah... That seems to go for most Free/Open Source projects that don't 
have enough users/contributors, and "one-man hacks" in particular. 
You just don't document your code all that much if you're the only 
one who messes with it.

(Actually, one shouldn't have to document *code*; only interfaces. If 
you have to explain how the code does what it does, it's most 
probably too messy and needs fixing. :-)

> Would 
> Audiality be suited for a soft synth?

It *is* a soft synth. :-)

> What is the current status ?

I've been using its predecessor (that is, an earlier version than the 
one first released under the name Audiality) for sounds and music in 
Kobo Deluxe for quite some time.

That said, it's still rather limited, especially in the real time 
domain. The mixer structure is quite powerful (especially in the 
development version, which allows pretty much arbitrary networks of 
effects), but the real time synth is just a very basic wavetable 
player.

The off-line synth is basically a DSP toolkit of generators and 
filters ("unit generators") that are called from the scripting engine 
and process an entire waveform at a time. Very simple, but quite 
powerful already.
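To make the "whole waveform at a time" idea concrete, here is a minimal sketch in C. The names (ug_dc, ug_lowpass) are illustrative inventions, not Audiality's actual API; the point is only that each unit generator is a plain function handed a complete buffer, rather than a streaming real time object:

```c
#include <stddef.h>

/* Hypothetical off-line unit generators in the style described
 * above: each one processes an entire waveform in a single call.
 * Names are illustrative, not Audiality's actual API. */

/* Simple generator: fill a buffer with a constant ("DC") level. */
static void ug_dc(float *buf, size_t frames, float level)
{
    for (size_t i = 0; i < frames; ++i)
        buf[i] = level;
}

/* One-pole low-pass filter, applied to the whole buffer in place. */
static void ug_lowpass(float *buf, size_t frames, float coeff)
{
    float state = 0.0f;
    for (size_t i = 0; i < frames; ++i) {
        state += coeff * (buf[i] - state);
        buf[i] = state;
    }
}
```

An AGW-style script would then just chain such calls: generate a raw waveform, run it through one or more filters, and store the result as a wavetable sample.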

In future versions, the so called AGW (Algorithmically Generated 
Waveform) scripts will construct networks of unit generators instead 
of actually calling them. These networks will be usable as new unit 
generators, real time or off-line, so you can use them for rendering 
samples for wavetable synthesis (which is all you can do now), or 
"live" as instruments. EEL will also function as the glue between 
events (external, from the host application, from internal sequencers 
etc) and the synth engine.
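One way to picture the planned network idea is a pull-model graph of nodes, where running the network is itself just another unit generator. This is a speculative sketch of the concept, not Audiality's design; all structures and names here are made up for illustration:

```c
#include <stddef.h>

#define BLOCK 64

/* Illustrative node in a unit generator network: each node knows
 * its upstream input (if any) and how to process one block. */
typedef struct ug_node {
    struct ug_node *input;   /* upstream node, or NULL for a source */
    void (*process)(struct ug_node *self, float *buf, size_t frames);
} ug_node;

/* Pull-model evaluation: first run the upstream chain into the
 * buffer, then apply this node's own processing. The whole chain
 * behaves like a single new unit generator. */
static void ug_run(ug_node *n, float *buf, size_t frames)
{
    if (n->input)
        ug_run(n->input, buf, frames);
    n->process(n, buf, frames);
}

/* Two trivial node types: a DC source and a gain stage. */
static void dc_process(ug_node *self, float *buf, size_t frames)
{
    (void)self;
    for (size_t i = 0; i < frames; ++i)
        buf[i] = 1.0f;
}

static void halve_process(ug_node *self, float *buf, size_t frames)
{
    (void)self;
    for (size_t i = 0; i < frames; ++i)
        buf[i] *= 0.5f;
}
```

The same graph could be evaluated block-by-block in real time, or over one long buffer off-line, which is what makes the "usable as new unit generators, real time or off-line" idea work.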

> For example, does it support MIDI for control messages and note
> events?

Yep. I wrote the songs in Kobo Deluxe by using Kobo Deluxe as a synth 
together with a sequencer. :-) The newer demo songs in the Audiality 
packages were made in about the same way, only running the included 
"synth" program instead, getting the MIDI events through the ALSA 
sequencer API.

> I know it relies on SDL and I don't remember the status 
> of SDL for the MIDI part.

The SDL dependency is only because of some legacy stuff from the 
pre-Audiality days. (Kobo Deluxe needs SDL, and Audiality started 
out as a very simple sound FX player in there.)

Audiality can still use SDL for audio output (because that's one of 
the most portable audio APIs in existence), but it should soon build 
without SDL if desired. Or without any audio API at all, for that 
matter. You can still use it for rendering audio files...

> The scripting engine: is it there only to tune some parts of a
> program that exploits Audiality, or is it used to completely
> develop applications?

In current versions, the scripting engine is only used for loading 
data and rendering waveforms off-line. EEL will still "only" be used 
as a scripting engine in future versions, but parts of the C API 
and the current hardwired event handling mechanisms will be phased 
out in favor of real time scripting.

> In fact I am looking for support to enter the audio area. I have a
> good knowledge of how synths are architected and some knowledge of
> signal theory, but I don't want to reinvent the wheel for
> everything. I was wondering if Audiality could help me on this
> way. Maybe it is a bit too young?

Depends on what you want to do. The off-line synth is quite capable of 
rendering interesting sounds and processing samples already, and the 
real time wavetable synth "does the job", but if you want to get 
started right away, doing serious stuff, you should probably look for 
some other solution for now.

Also note that Audiality isn't really intended for low level DSP 
programming. You'll be able to do sample-by-sample processing and low 
level vector operations when the new EEL gets in, but since EEL 
doesn't compile to native code, you'll never get the kind of 
performance you need to use that for anything more than DSP 
prototyping and the occasional "special effect". The focus is on 
controlling higher level unit generators, implemented in C, so the 
"live" scripting will be mostly on the event level in your average 
Audiality song/module. (Well, that's really for users to decide, but 
that's the way I intend to use it myself, so that's what I'm 
designing for. :-)
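"Event level" here means the script deals in timestamped events rather than samples. As a rough sketch (again with invented names, nothing from Audiality's real event system), the C side of such a split might look like this:

```c
#include <stddef.h>

/* Sketch of event-level control: scripting code emits timestamped
 * events, and the engine (in C) applies them to voice state; the
 * per-sample DSP stays in compiled code. All names are illustrative. */

typedef enum { EV_NOTE_ON, EV_NOTE_OFF } ev_type;

typedef struct {
    unsigned frame;   /* when, in sample frames from block start */
    ev_type  type;
    int      pitch;
} event;

typedef struct {
    int gate;         /* 1 while a note is held */
    int pitch;        /* last triggered pitch */
} voice;

/* Apply all events scheduled for one block, in order. A real engine
 * would interleave this with per-sample synthesis at each frame. */
static void apply_events(voice *v, const event *ev, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        switch (ev[i].type) {
        case EV_NOTE_ON:
            v->gate = 1;
            v->pitch = ev[i].pitch;
            break;
        case EV_NOTE_OFF:
            v->gate = 0;
            break;
        }
    }
}
```

The script never touches individual samples; it only decides which events to send and when, which is why an interpreted language is fast enough for this role even though it would be too slow for sample-by-sample DSP.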

Maybe SuperCollider will get your job done? It's been Free/Open Source 
(GPL) for a while now, and its goals seem rather similar to 
those of Audiality, except that Audiality is more focused on 
multimedia and games and is intended to be embedded in the final 
application.

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
|  Free/Open Source audio engine for games and multimedia.  |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---
The Audiality Audio Engine mailing list.
Home: http://audiality.org
Archive: //www.freelists.org/archives/audiality
Unsubscribe: Email audiality-request@xxxxxxxxxxxxx w/ subject "unsubscribe"