[gmpi] Re: Topic 6: Time representation

  • From: David Olofson <david@xxxxxxxxxxx>
  • To: gmpi@xxxxxxxxxxxxx
  • Date: Fri, 2 May 2003 12:17:16 +0200

On Friday 02 May 2003 03.46, Chris Grigg wrote:
[...]
> How event queuing would work is an interesting topic.  Though OT
> for now.

Yes. Either way, I agree with what you wrote on this.


[...]
> >The host has to figure out what to connect where - or rather, let
> > the user decide, one way or another. Next, the host asks the
> > plugin that owns the input for a target spec. Finally, the host
> > tells the source plugin to connect the desired output, passing
> > that target spec as an argument.
>
> Gotcha.  So you're considering the message routing mechanism to be
> something other than the host.  Whereas I had been thinking of it
> as one function of the host.  Either way, no real difference to the
> plug.

Well, you can have it either way with this design. You don't *have* to 
connect plugins directly to each other, just because you can. 
Sort/merge points are the obvious exception to the direct routing 
"rule", but it's really up to the host to decide when making 
connections.


> >The queues, however, are very simple linked lists, and need no
> > real management.
>
> I like that, per above.  You can find your insert position by
> traversing and watching the ABSTIMEs.

Exactly. And unless you "deliberately" generate events out-of-order, 
you don't even have to do that; you just add them at the end of the 
list. Preliminary XAP code (basically ripped from Audiality):

static inline void xap_write(XAP_port *evp, XAP_event *ev)
{
        ev->next = NULL;
        if(!evp->first)
                evp->first = evp->last = ev; /* Oops, empty port! */
        else
        {
                evp->last->next = ev;
                evp->last = ev;
        }
}

(There are ways to eliminate the conditional, of course, but I'd like 
to hack some more real code before thinking about optimizations.)
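
For reference, here's roughly the kind of layout the snippet above 
assumes. Only 'next', 'first' and 'last' are implied by the code; 
the other fields are just illustrative guesses, not actual XAP 
definitions:

typedef struct XAP_event
{
        struct XAP_event *next; /* singly linked list link */
        unsigned when;          /* timestamp (ABSTIME), in frames */
        unsigned type;          /* kind of event (RAMP, ...) */
        float value;            /* payload; meaning depends on type */
} XAP_event;

typedef struct XAP_port
{
        XAP_event *first;       /* oldest event in the queue */
        XAP_event *last;        /* newest event in the queue */
} XAP_port;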


> >Events are allocated from, and returned to, a host
> >managed pool.
>
> Really like that.  Let's avoid the need for dynamic allocation
> during a process() call as much as possible, eh?

In fact, we *have* to do that, unless we assume that every real time 
host provides a full blown real time safe memory manager. (Could put 
one in the host SDK, but the real problem is that such beasts have 
inherent issues that cannot be designed away.)
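
Just to make "pool" concrete: preallocate all events up front and 
hand them out from a free list, so nothing in the real time path 
ever touches malloc(). A minimal single threaded sketch (assumed 
design, not actual GMPI/XAP API):

typedef struct XAP_pool
{
        XAP_event *free_list;   /* unused events, linked via 'next' */
} XAP_pool;

static inline XAP_event *xap_event_alloc(XAP_pool *pool)
{
        XAP_event *ev = pool->free_list;
        if(ev)
                pool->free_list = ev->next;
        /* NULL means the pool is exhausted - the one case where the
         * plugin would have to call the host for more events. */
        return ev;
}

static inline void xap_event_free(XAP_pool *pool, XAP_event *ev)
{
        ev->next = pool->free_list;
        pool->free_list = ev;
}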


> >Each event queue operation is done with a few
> >instructions of inline code. The only time a plugin calls the host
> > is if/when the event pool is exhausted.
>
> Again, don't see any significant downside to the plug calling the
> host -- or a support  lib call -- for list mgmt.

Well, there is the performance impact, but it's probably insignificant 
to all but the simplest plugins.


> Just seems
> cleaner to provide the shared functionality in one place, rather
> than make every single plug carry that stuff.  Also if it's a host
> call, the implementation can be hidden from the plug, giving the
> host writer more freedom.

It's just that the inline code is trivial and smaller than the code 
generated for a function call. I don't really see much point in 
trying to hide that stuff, as hosts can still take over routing if 
they need to. The only difference is that of passing every single 
event through a function call vs passing lists of events via a small 
struct.


[...audio rate controls, curves etc...]
> I think we should be thinking about the multiple-timeslice picture.
> For intra-timeslice interpolation, the host has to take high-level,
> long-duration parameter changes from the sequencer and break them
> into timeslices.  A typical param change move -- probably encoded
> in the seq track as a ramp in most cases, sometimes with a curve
> shape -- has to be -- or can be -- broken down into a series of
> instantaneous values on the timeslice boundaries.  At the plug-in
> level, in each given timeslice, the plug needs to interp between
> the param's value at the start of the timeslice and its value at
> the end of the timeslice.
>
> So the idea is to make parameter interpolation across the timeslice
> as easy as possible for the plug developer, by providing at each
> timeslice, for each parameter, an initial value, a terminal value,
> and a curve type; and a function for getting the interpolated value
> at any sample index in the timeslice.

Sure, but it seems to me that putting the timeslice restriction back 
in is opening a big can of worms. All of a sudden, data is 
"modulated" by the timeslice size, which seems like a very bad idea 
to me, as it makes it impossible to guarantee consistency without 
dictating the timeslice size on a per-project basis, just like the 
audio sample rate.

Besides, I don't see how this makes anything easier. If you support 
sample accurate timing for events (as opposed to quantizing events to 
the start of each block), you already have one internal timeslice per 
"skip" in the event queue. How are these internal timeslices 
different from the process() timeslice in terms of the DSP inner 
loop?
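
Just to illustrate: an event-aware process() typically already 
looks something like this (rough sketch; MyPlugin, my_apply_event() 
and my_run_dsp() are made up, and returning used events to the pool 
is omitted). Whether the sub-slice boundaries come from events or 
from the host's block size makes no difference to the inner loop:

static void my_process(MyPlugin *p, unsigned frames)
{
        unsigned pos = 0;
        XAP_event *ev = p->in_port.first;
        while(pos < frames)
        {
                unsigned end = frames;
                /* Apply all events that occur at 'pos'. */
                while(ev && ev->when <= pos)
                {
                        my_apply_event(p, ev);
                        ev = ev->next;
                }
                /* The next event (if any) ends this internal slice. */
                if(ev && ev->when < end)
                        end = ev->when;
                /* DSP inner loop for [pos, end). */
                my_run_dsp(p, pos, end);
                pos = end;
        }
}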


> This would be an available alternative to using timestamped events
> for parameter changes; it would give the host a way to translate
> seq track high-level long-duration param changes directly into
> something the plug can easily work with, without the work of
> reading event queues.  Provided that's how the plug and the host
> want to handle that parameter.   Plug developers who want to work
> only with events and not use this mechanism would be free to do
> that.

I don't think it's a good idea to drop ramping completely from the 
"event style", as that would effectively force everyone to use curves 
whether they want better than linear or not. As I've explained 
before, RAMP events provide a lot more information than SET events, 
and still need only one extra argument. It's also very easy to ignore 
ramping and just treat these events as SET events.

BTW, Audiality doesn't even have a SET event; controls are driven 
entirely by RAMP events whether you want to do ramping or not. Set is 
just a RAMP with zero duration. My guess is that this will work fine 
for XAP as well.
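
In code, that's about as simple as it gets. A sketch (MyControl and 
its fields are illustrative, not actual XAP definitions):

/* Ramp a control from its current value to 'target' over 'duration'
 * frames. A zero duration ramp degenerates into a plain SET. */
static void handle_ramp(MyControl *c, float target, unsigned duration)
{
        if(!duration)
        {
                c->value = target;      /* "SET": jump immediately */
                c->dvalue = 0.0f;
        }
        else
                c->dvalue = (target - c->value) / (float)duration;
        c->frames_left = duration;
}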


[...curves vs audio rate control data...]
> I don't see what's pointless about that at all, especially in light
> of the following drawbacks to audio-rate param buffers.  Such as:
> It would offload the calculations to the host,

The sender, actually - which might be another plugin. I don't think of 
controls as only meant for low frequency automation data.


> it would consume
> buffer space, the buffers would have to be managed,

Yes, good points.


> and it would
> cause the host 'terp function to be executed once per every single
> sample in the timeslice

Why? If you get the actual data, you don't need to ask the host to 
calculate it - that's the whole point.


> (whereas if accessed from process() via a
> call, the plug would have the option to just sample the curve every
> so often and do its own simpler [probably linear] interp between
> the samples).

Well, you'll need to downsample the data properly if you get it at audio 
rate - but that applies to curves as well, if they're of equivalent 
complexity. The only advantage with the latter is that there *might* 
be mathematical shortcuts to getting an alias free rendition of the 
data at any requested sample rate. Though, considering how synth 
oscillators are implemented most of the time, I'd think these cases 
are very rare in real life.


> OTOH, having pre-digested data values would in many
> cases simplify the per-sample code in process()...

Yes... Though, if you really want control values at audio rate in the 
DSP loop, linear, quadratic or cubic curves aren't too hard to deal 
with. It's just value += dvalue; [dvalue += ddvalue; [ddvalue += 
dddvalue]] in the inner loop.
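
For example, a cubic segment boils down to plain forward 
differencing - three adds per sample, no expensive math. (Sketch; 
the coefficients would be set up once per segment from the curve 
parameters, and 'out' could just as well feed the DSP code directly 
instead of a buffer.)

static void render_cubic(float *out, unsigned frames, float value,
                float dvalue, float ddvalue, float dddvalue)
{
        unsigned i;
        for(i = 0; i < frames; ++i)
        {
                out[i] = value;
                value += dvalue;
                dvalue += ddvalue;
                ddvalue += dddvalue;
        }
}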


> although as you
> say below, in many cases like filters the interp'd param value
> would still have to be translated into low-level params, i.e.
> coefficients, on a per-sample basis.

Yes - and that would be controls that should not accept audio rate 
data. Why ask for something you can't really make use of?

Anyway, I agree that it's nice to allow plugins to ask for as many 
points as they need, and that's part of the reason why I like linear 
ramps with arbitrary timing. It's very easy to render those at 
whatever sample rate you like, and they *can* still carry data all 
the way up to audio rate. As always, it's really up to the plugin 
author to decide what kind of filtering to use if the data is 
downsampled.

More importantly, I don't feel particularly good about anything that 
injects timeslice dependencies and stuff like that into the signal 
path. I really like the idea of audio rate controls for that reason 
in particular, so I would like something that's more similar to audio 
rate controls than to passing parameter changes between process() 
calls.


> BTW, if it helps, consider another alternative: parameter value
> events, but the parameter is 'slew rate limited' so it smooths to
> the new value over a defined time period.

Yeah... I really don't like that. Reminds too much of MIDI control 
interpolation, which is something that's really getting on my nerves. 
If the slew rate is too low, you can only use the control for slow 
automation; not for modulation or beat sync'ed operations of any 
kind. If it's too high, you have to flood the wire with ramps of CCs 
to avoid thumps.

You could have a standardized slew rate control, of course - but then 
you might as well use linear ramp events. Cleaner and less overhead.


[...]
> >Now, for these improved approximations to be possible, the plugins
> >have to *understand* the curve shape in order to approximate it,
> >right? So, just asking the host to calculate a few values won't
> > give you a better approximation than linear ramp events with
> > slightly higher density. The host can render the *control curve*
> > for you, but it can't give you the coefficients you need for your
> > internal approximated curves.
>
> Yeah, of course.  It's just better accuracy at similar efficiency,
> and with better flexibility, if you allow the plug to make its own
> decisions about when to take samples of the 'real' curve.

Yes, but I think you can do that pretty much regardless of what 
interface is used for transmitting the data. It's basically about 
having timeslice size dependencies or a potential need for proper 
bandlimiting of control data. Both are issues, but the latter is a 
pure DSP issue, and not a matter of "My project sounds different on 
different machines. Bug!"


> >  > Don't understand what you mean about output
> >>
> >>  formats, ask again?
> >
> >Plugins have to be able to *generate* control data as well receive
> > it. This shouldn't be a problem, as you just need to make sure
> > that no plugin generates something that the host can make sense
> > of.
>
> I -know- you mean '...that the host -can't- make sense of...'

Yes, sorry.


> and I
> agree, and that can't happen because the curves are predefined in
> an enum in gmpi.h for each released version of gmpi, and the enum
> only grows over time.  Slowly, I would think.

Exactly. The receiving end is the hard part.


> >However, considering the above, I'm not sure there's much point in
> >generating anything that cannot be interpreted directly by the
> >receiving plugin.
>
> That gets into automation design questions, kind of OT, but it
> seems like a plug that generates automation should be using a
> high-level, long-term parameter change description, not a
> per-timeslice description.  Even though plugs respond to low-level,
> intra-timeslice param changes.  Otherwise plugs would be
> second-class automation sources.

I don't get this line of reasoning. Why is automation different from 
say, parameter modulation in a modular synth?

I see serious problems down the road if we make a distinction here. We 
should have *controls*, and they should be usable for both studio 
automation and more modular synth like control modulation.

Just as an example; if I want to do bass ducking (attenuate bass 
tracks when the kick drum comes in), I think I should be able to do it 
manually via automation, as well as automatically, by hooking up a 
filter and an envelope tracker to the gain controls. Should I use 
different gain control plugins for this? Should I run into timing 
accuracy problems, because automation has a dependency on timeslice 
size...? I think not. Maybe that was acceptable ten years ago, but 
not one or two years from now.


> >The big deal with timestamped events; the very reason why anyone
> > would think of using them for controlling DSP code, is that
> > they're cheaper than audio rate controls. Doing something that
> > actually makes the shortcut more expensive than the real thing,
> > seems... well, you get the idea. :-)
>
> Leaving the sampling to the plug developer is more flexible,

Yes, but what are you sampling, actually?


> simpler to program,

Maybe...


> and less compute-expensive.

Only if there's no need for proper bandlimiting when downsampling.


[...]
> >>  I see.  But unless being able to set each plug's tempo &
> >> position independently is a design requirement -- something I
> >> hadn't thought of -- why make tempo & position 'controls' of the
> >> plug?
> >
> >Well, for XAP, it just happened to be the most natural way. It
> > turned out to be dead simple, and it cuts down the number of
> > specialized interfaces. "Everything is a control" - and it's not
> > because we think it sounds cool. It's because we hate APIs that
> > you can't use without having the reference docs handy, even after
> > using it for years.
>
> Right, the clean aspect is nice.  Does the host have controls too?

Sure, if it wants to listen to something some plugins output, or send 
something to some plugins. If the sequencer is part of the host, 
that's how you'd connect plugins to the sequencer. (I prefer to think 
of the sequencer as a real plugin, but even if it's implemented that 
way, it's probably not something your average host would show to the 
user.) Of course, audio interfaces with mixers and MIDI interfaces 
would be wrapped as something with controls as well; maybe real or 
virtual driver plugins.


> >  > I've been
> >>
> >>  thinking of tempo as a sort of global parameter, a property of
> >>  managed by the host, that you go to the host to get, and that
> >> looks the same to all plugs in the graph.
> >>
> >>  Though maybe tempo-as-parameter isn't so well-liked.
> >
> >Either way, I don't really like the idea of considering tempo as
> >something special enough to be host global.
>
> What parameters would you see as special enough to be host-global?

Audio sample rate, maybe... Things that really cannot differ between 
plugins running in the same thread or context.


> Wouldn't it be cool to be able to have a VSO plug-in that can
> wobble the tempo, in any application?

Sure, but that's a different issue entirely. Regardless of the 
interface used to pass tempo to plugins, the timeline generator could 
have a control input that allows you to modulate the tempo.


> >I'm not remotely into
> >experimental music, but I've still found myself working against
> > the tempo map at times. Some things would be so much easier if
> > you could just throw in another tempo map that you can adjust as
> > needed, instead of moving events around.
>
> Can you elaborate?

It happens whenever you want to combine something like a soundscape 
with a song. Intros and endings are where I would usually run into 
this. Soundscapes generally don't have meters and stuff, but they 
still have a timeline, obviously. When you want to fit that with 
something like a traditional 4/4 song, it's a matter of sliding and 
stretching the timing until it sounds right. Why not just manipulate the 
timeline of the soundscape, instead of destructively editing the 
events in the soundscape track(s)?


[...]
> >...It
> >has to be able to play multiple sequences independently. If it
> > can't, you can't crossfade between background songs, you can't
> > play fanfares over the music, and you can't use sequences for
> > soundscapes and sound effects. Such limitations would strike me
> > as ridiculous.
>
> So where's the rhythmically dependent plug-in in those cases?

Anywhere you need them. If you load a song, it'll have to pull its 
processing net in as well, and it may contain such plugins.


>  Your
> examples only have independent players being mixed together.  No
> problem.  Nothing in GMPI requires all player plug-ins to receive
> their sync from the host, does it?

Well, I hope not, because that would be an issue here.


> A MIDI file player could be a
> plug-in even without receiving MUSTIME from the host.  They run on
> their own internal timebases.

Sure, but a player would have to send timeline info to any plugins it 
may be using, or they'll "mysteriously" sync to the host's timeline 
instead of that of the song. To me, that looks like a good reason to 
do it through controls, since then it just goes with the graph, like 
everything else.
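
For illustration (reusing the sketches above; MY_EV_SET_TEMPO and 
MY_EV_SET_POSITION are made up names, not actual XAP controls), a 
player could push timeline info into its net as ordinary control 
events:

enum { MY_EV_SET_TEMPO, MY_EV_SET_POSITION };   /* made up types */

static void send_timeline(XAP_pool *pool, XAP_port *tempo_port,
                XAP_port *pos_port, unsigned when,
                float tempo_bpm, float song_pos_beats)
{
        XAP_event *ev = xap_event_alloc(pool);
        if(ev)
        {
                ev->when = when;
                ev->type = MY_EV_SET_TEMPO;
                ev->value = tempo_bpm;
                xap_write(tempo_port, ev);
        }
        ev = xap_event_alloc(pool);
        if(ev)
        {
                ev->when = when;
                ev->type = MY_EV_SET_POSITION;
                ev->value = song_pos_beats;
                xap_write(pos_port, ev);
        }
}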


> Or, even if you did need a
> rhythmically dependent plug for each player, couldn't you set up
> two GMPI frames each with a playback seq driving whatever
> rhythmically dependent plugs it needs, then mix the two audio
> outputs?

Well, yes, but that seems like a hack to me. It also rules out some 
interesting stuff, like one sequencer driving or modulating another, 
or plugins doing weird stuff based on the interference between two 
timelines. This is about where the experimental music guys should 
take over. :-)

<rant type="completely off-topic">
BTW, in Audiality, the MIDI player plugs into a virtual synth voice 
and responds to the same events as "normal" instruments. Instant 
phrase sequencer! :-) It wasn't something I really planned for, but 
it turned out to be logical and easy enough to do. This way, I don't 
have to make a distinction between plain instruments and sequences; 
both can be played the same way, from the API or the MIDI player.

Next thing to plug into virtual voices is real time scripting, though 
I have to turn my interpreter into a compiler + RT safe VM first. Any 
year now...
</rant>


//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---


----------------------------------------------------------------------
Generalized Music Plugin Interface (GMPI) public discussion list
Participation in this list is contingent upon your abiding by the
following rules:  Please stay on topic.  You are responsible for your own
words.  Please respect your fellow subscribers.  Please do not
redistribute anyone else's words without their permission.

Archive: //www.freelists.org/archives/gmpi
Email gmpi-request@xxxxxxxxxxxxx w/ subject "unsubscribe" to unsubscribe
