[gmpi] Re: Topic 6: Time representation

  • From: Chris Grigg <gmpi-public@xxxxxxxxxxxxxx>
  • To: gmpi@xxxxxxxxxxxxx
  • Date: Wed, 30 Apr 2003 17:13:29 -0700

On Wednesday 30 April 2003 01.35, Chris Grigg wrote:
[...fake timeline...]
 Of course the obvious way to do the fake timeline is to just leave
 the sample clock running every time the user presses Stop.

Well, yes, but this was assuming that we use *musical* time for event timestamps. If we use free running time (seconds, samples or whatever), it's not an issue, because musical time becomes a secondary timeline that doesn't have to be running just to deliver real time events properly.

Yep.



>> Note also that with the union idea, MIDI processors will be able to
>> do the kind of simple time manipulations Todor describes.
>
> Yes, but that has some rather serious implications. Most importantly,
> the host needs to deal with events enqueued for the future...

Well, that's kind of what a sequencer track is anyway, yeah?

Yeah, but the event/control output of a plugin is not a sequencer track, is it? (Well, it *could* be, but then we're not talking about a real time system.)

Interesting point. I don't know where the idea that it's only OK for GMPI to be a real-time system came from. We have input and output queues of timed events, after all.



[...]
 >  A side effect of that is that plugins must
 >(obviously) send output events through the host, rather than
 > directly to receivers.

 I'd assumed it was pretty much going to work that way anyway.
 Hosts would manage event stream routing.  Was there a different
 proposal?

Well, it's probably easier to leave it to the host, but it's not the only way to do it. For XAP, we designed a system where every control output of a plugin has a "target specification", in the form of an event queue pointer and a cookie. These are created by the target plugins upon connection of controls, which means that a plugin gets to decide where the events for each input go, as well as how to generate the cookies. (Object LUT index, multidimensional indices or whatever - pick what does the job fast and easy.)
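
A rough sketch of what such a target specification could look like in C++ (the names and types here are illustrative only, not the actual XAP API):

    // Illustrative only -- not the actual XAP types or names.
    #include <cstdint>
    #include <vector>

    struct ControlEvent {
        uint32_t frame;   // timestamp in sample frames
        uint32_t cookie;  // opaque value chosen by the receiving plugin
        double   value;
    };

    struct EventQueue {
        std::vector<ControlEvent> events;
        void push(const ControlEvent &e) { events.push_back(e); }
    };

    // What a sender stores per connected control output:
    struct TargetSpec {
        EventQueue *queue;   // queue the receiver wants events written to
        uint32_t    cookie;  // how the receiver identifies this input
    };

    void sendControl(const TargetSpec &t, uint32_t frame, double value)
    {
        t.queue->push({ frame, t.cookie, value });
    }

The receiver then decodes the cookie however it likes when it drains its queue.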

Cool. But the host still has to stitch the pins together and manage the queues, yeah?



[...]
 >So, why do ramping at all? Well, if you get an event saying "set
 >control X to Y", what are you going to do? Does the user want a
 >smooth, inaudible change, or is a click or pop actually desired?
 >
 >With an event saying "perform an approximately linear ramp of
 > control X from the current value to Y over N sample frames", you
 > don't have to guess.

 Sure, though I think that there will likely be perceptible quality
 issues beyond just de-clicking large changes.  Remember, the
 timeslice size could be really big in some hosts, and curve shape
 could become more of an issue then.

Why this obsession with time slices? (We all do agree that we should use timestamped events, right?) You can have one curve segment per sample if you like. It doesn't have to be restricted to block boundaries in any way.

Just thinking about making it easy for novice plug developers... since ::process() by definition works on one timeslice at a time, and in a typical timeslice you'll be ramping parameters all the way across the timeslice, not ending in the middle or doing complex shapes. So I'm thinking of making that the default behavior.
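
For instance, a minimal sketch of that default, assuming a single gain parameter with a per-block ramp target (hypothetical names, not a proposed GMPI API):

    // Hypothetical sketch: ramp a parameter linearly from its current
    // value to the block's target value across the whole timeslice.
    struct GainPlug {
        float gain       = 0.0f;  // current parameter value
        float gainTarget = 0.0f;  // value to reach by the end of the block

        void process(float *out, const float *in, int numFrames)
        {
            float step = (gainTarget - gain) / (float)numFrames;
            for (int i = 0; i < numFrames; ++i) {
                gain += step;
                out[i] = in[i] * gain;
            }
            gain = gainTarget;  // snap exactly, so rounding error can't drift
        }
    };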



> And who knows, maybe new forms of synthesis will pop up based on,
> like, phase relationships between multiple parameters varying over
> time. Or something.

This is not an issue if ramps are driven by timestamped events. With "XAP style" ramps you could do lossless conversion from audio into streams of control events if you like. (And vice versa, obviously.)

Yep.



All I'm really saying is that if we do something real simple we can
preserve the possibility for the future. So instead of doing this:

gmpi::setParamRampTarget( myParamID, myRampTarget );

we just do this:

      typedef enum {
          linearRamp = 1
          // Add new ramp shapes here, if we ever need them.
      } gmpiParamRampShapes;

gmpi::setParamRampTarget( myParamID, myRampTarget, myRampShape );

 That's not too horrible.  Then we can add new shapes in future -if-
 we discover the need down the road.

Well, there is one problem: What to do if a plugin doesn't support the shapes that some other plugin, or the host, generates? Do all plugins with control outputs have to support multiple output formats, or is it up to the host to insert some form of converter elements as needed?

The enums would be defined in the headers for a particular GMPI version, so if the host & plugs are the same GMPI version, there's no mismatch on the enum values. I thought that interpolation for a given shape could be provided as a host function, so in process() you could call something like myVal = gmpiHost::getInterpValue( startVal, endVal, samplesInTimeslice, curve, sampIndex ). Something like this will eventually be needed if we ever want to support nonlinear curves. I don't understand what you mean about output formats -- ask again?
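
To make the getInterpValue idea concrete, here is a sketch of what such a host helper might look like, with only the linear shape filled in (getInterpValue is my placeholder name from above, not an existing call):

    // Sketch of a host-side interpolation helper -- hypothetical.
    typedef enum {
        linearRamp = 1
        // Add new ramp shapes here, if we ever need them.
    } gmpiParamRampShapes;

    double getInterpValue(double startVal, double endVal,
                          int samplesInTimeslice, gmpiParamRampShapes curve,
                          int sampIndex)
    {
        double t = (double)sampIndex / (double)samplesInTimeslice;  // 0..1
        switch (curve) {
        // Nonlinear shapes would get their own cases here.
        case linearRamp:
        default:
            return startVal + (endVal - startVal) * t;
        }
    }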



>> Also, to bring ramping back to the topic of time representation,
>> would it be OK/smart to present instantaneous tempo to the
>> plug-in as an automated parameter?
 >
 >IIRC, the XAP team thinks so. :-)
 >
 >Because of potential rounding error build-up, one would have to
 >occasionally resync the plugin's internal song position counters
 > by giving them the current official song position.
 >
 >Either way, what you get is two controls: "tempo" and "position",
 >where "tempo" is (mathematically or actually) applied to
 > "position" once per sample frame.

 I was thinking along the lines of myInstantaneousTempo =
 getParameterValue( theTempoParamID, mySampleTime ) so that if tempo
 was being interpolated across a timeslice and your plug cared a
 whole lot about tracking tempo, you could get the correct high-res
 value at any given sample index in the timeslice.  Or if you didn't
 care that much, you could just call that once at the start of the
 timeslice. Did you mean the same thing?

Well, our idea was to send timestamped control events to these "tempo" (for tempo changes or ramps) and "position" (for loops and jumps) controls. Plugins will have to receive these events, and apply tempo to position. It's trivial to do on a sample-by-sample basis (something like tempo += dtempo; position += tempo;)...

If the curve is anything but linear, this will give a wrong answer.



...and easy enough
if you just want to do the calculations once per block and when you
actually receive the events. The plugin SDK could then provide some
inlines or macros for the latter, along with something similar to the
call you suggest.
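
A minimal sketch of that sample-by-sample version, under the assumption that "tempo" is expressed in beats per sample frame (names are hypothetical):

    // Hypothetical sketch: 'dtempo' would be set by the last tempo-ramp
    // event, 'tempo' and 'position' by plain tempo/position events.
    struct SongClock {
        double position = 0.0;  // musical position, in beats
        double tempo    = 0.0;  // beats per sample frame
        double dtempo   = 0.0;  // per-frame tempo change (linear ramp)

        void advance(int numFrames)
        {
            for (int i = 0; i < numFrames; ++i) {
                tempo    += dtempo;
                position += tempo;
            }
        }
        // Incoming "position" events overwrite 'position' directly, which
        // is also how the host would resync against rounding error build-up.
    };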

I see. But unless being able to set each plug's tempo & position independently is a design requirement -- something I hadn't thought of -- why make tempo & position 'controls' of the plug? I've been thinking of tempo as a sort of global parameter, a property managed by the host, that you go to the host to get, and that looks the same to all plugs in the graph.


Though maybe tempo-as-parameter isn't so well-liked.

-- Chris

----------------------------------------------------------------------
Generalized Music Plugin Interface (GMPI) public discussion list
Participation in this list is contingent upon your abiding by the
following rules:  Please stay on topic.  You are responsible for your own
words.  Please respect your fellow subscribers.  Please do not
redistribute anyone else's words without their permission.

Archive: //www.freelists.org/archives/gmpi
Email gmpi-request@xxxxxxxxxxxxx w/ subject "unsubscribe" to unsubscribe
