On Wednesday 30 April 2003 01.35, Chris Grigg wrote:
> [...fake timeline...]
>
> Of course the obvious way to do the fake timeline is to just leave
> the sample clock running every time the user presses Stop.
Well, yes, but this was assuming that we use *musical* time for event timestamps. If we use free running time (seconds, samples or whatever), it's not an issue, because musical time becomes a secondary timeline that doesn't have to be running just to deliver real time events properly.
> > > Note also that with the union idea, MIDI processors will be able
> > > to do the kind of simple time manipulations Todor describes.
> >
> > Yes, but that has some rather serious implications. Most
> > importantly, the host needs to deal with events enqueued for the
> > future...
Well, that's kind of what a sequencer track is anyway, yeah?
Yeah, but the event/control output of a plugin is not a sequencer track, is it? (Well, it *could* be, but then we're not talking about a real time system.)
[...]
> A side effect of that is that plugins must (obviously) send output
> events through the host, rather than directly to receivers.
I'd assumed it was pretty much going to work that way anyway. Hosts would manage event stream routing. Was there a different proposal?
Well, it's probably easier to leave it to the host, but it's not the only way to do it. For XAP, we designed a system where every control output of a plugin has a "target specification", in the form of an event queue pointer and a cookie. These are created by the target plugins upon connection of controls, which means that a plugin gets to decide where the events for each input go, as well as how to generate the cookies. (Object LUT index, multidimensional indices or whatever - pick what does the job fast and easy.)
[...]
> So, why do ramping at all? Well, if you get an event saying "set
> control X to Y", what are you going to do? Does the user want a
> smooth, inaudible change, or is a click or pop actually desired?
>
> With an event saying "perform an approximately linear ramp of
> control X from the current value to Y over N sample frames", you
> don't have to guess.
Sure, though I think that there will likely be perceptible quality issues beyond just de-clicking large changes. Remember, the timeslice size could be really big in some hosts, and curve shape could become more of an issue then.
Why this obsession with time slices? (We all do agree that we should use timestamped events, right?) You can have one curve segment per sample if you like. It doesn't have to be restricted to block boundaries in any way.
> And who knows, maybe new forms of synthesis will pop up based on,
> like, phase relationships between multiple parameters varying over
> time. Or something.
This is not an issue if ramps are driven by timestamped events. With "XAP style" ramps you could do lossless conversion from audio into streams of control events if you like. (And vice versa, obviously.)
> All I'm really saying is that if we do something real simple we can
> preserve the possibility for the future. So instead of doing this:
>
>     gmpi::setParamRampTarget( myParamID, myRampTarget );
>
> we just do this:
>
>     typedef enum
>     {
>         linearRamp = 1
>         // Add new ramp shapes here, if we ever need them.
>     } gmpiParamRampShapes;
>
>     gmpi::setParamRampTarget( myParamID, myRampTarget, myRampShape );
>
> That's not too horrible. Then we can add new shapes in future -if- we
> discover the need down the road.
Well, there is one problem: What to do if a plugin doesn't support the shapes that some other plugin, or the host, generates? Do all plugins with control outputs have to support multiple output formats, or is it up to the host to insert some form of converter elements as needed?
> > > Also, to bring ramping back to the topic of time
> > > representation, would it be OK/smart to present instantaneous
> > > tempo to the plug-in as an automated parameter?
> >
> > IIRC, the XAP team thinks so. :-)
> >
> > Because of potential rounding error build-up, one would have to
> > occasionally resync the plugin's internal song position counters
> > by giving them the current official song position.
> >
> > Either way, what you get is two controls; "tempo" and "position",
> > where "tempo" is (mathematically or actually) applied to
> > "position" once per sample frame.
I was thinking along the lines of

    myInstantaneousTempo = getParameterValue( theTempoParamID, mySampleTime );

so that if tempo was being interpolated across a timeslice and your plug cared a whole lot about tracking tempo, you could get the correct high-res value at any given sample index in the timeslice. Or if you didn't care that much, you could just call it once at the start of the timeslice. Did you mean the same thing?
Well, our idea was to send timestamped control events to these "tempo" (for tempo changes or ramps) and "position" (for loops and jumps) controls. Plugins will have to receive these events, and apply tempo to position. It's trivial to do on a sample-by-sample basis (something like tempo += dtempo; position += tempo;)...
...and easy enough if you just want to do the calculations once per block and when you actually receive the events. The plugin SDK could then provide some inlines or macros for the latter, along with something similar to the call you suggest.
----------------------------------------------------------------------
Generalized Music Plugin Interface (GMPI) public discussion list
Participation in this list is contingent upon your abiding by the
following rules:
  Please stay on topic.  You are responsible for your own words.
  Please respect your fellow subscribers.  Please do not redistribute
  anyone else's words without their permission.
Archive: //www.freelists.org/archives/gmpi Email gmpi-request@xxxxxxxxxxxxx w/ subject "unsubscribe" to unsubscribe