On Tuesday 04 January 2005 15.00, ben wrote:
> Events with a sample-accurate timestamp get my vote. This way you
> can approach audio-rate in cases where you must. In cases where it
> is an absolute necessity that the plugin apply a perfect-slope DSP
> change (algorithmic signal generation or something?), the host
> could allow the plugin to "look ahead" into the next value(s) to
> generate a higher order curve fit. This was discussed before but
> I don't know if it made it into the spec. I'd leave it out
> personally.

You don't have to look ahead to do that, any more than you have to
look ahead to interpret a buffer of audio rate control data. The
difference is in the data format (structured vs sampled), not in the
semantics.

Anyway, for what it's worth, my vote is for ramp events. I rather
like the idea of audio rate control data with some form of bandwidth
control (i.e. only actually using the control buffers when stuff is
happening) - but I have to vote against it for one main reason: it's
easy to render audio rate control data from a stream of ramp events,
but the reverse is rather hairy, inaccurate (or wasteful) and
expensive.

So, if you actually want to record, edit and store real life projects
with lots of control data, and don't have unlimited hardware
resources, ramp events or similar structured control data seem to be
the only realistic solution. Modular synthesis is another matter,
since most control data tends to be generated live and streamed only
within the graph.

> Of course in real-time, a "ramp" or "slope" value is impractical to
> implement anyway.

No. Pretty much *everything* is impractical to do in real time. We
just have to deal with it as best we can.

> If the user is twiddling a knob, there is no way
> to anticipate what or when the next value will be ...

...and that's true whether you use ramp events, audio rate controls
or point events. Besides, it's also true for any input coming from
anywhere into a GMPI plugin.
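(As an aside on the "easy direction" above: rendering ramp events
into an audio-rate control buffer is little more than a linear
interpolation loop. The sketch below is purely illustrative - the
struct and function names are invented here and not part of any
actual GMPI draft.)

```c
#include <stddef.h>

/* Hypothetical structured control event: at sample 'when' within
 * the current buffer, start moving the control from its current
 * value towards 'target' over 'duration' samples.
 * (All names invented for illustration.) */
typedef struct {
    size_t when;      /* timestamp, in samples, within this buffer */
    float  target;    /* value to ramp towards */
    size_t duration;  /* ramp length in samples; 0 = instant jump */
} RampEvent;

/* Render one ramp event into an audio-rate control buffer,
 * starting from 'current'. Returns the control value after the
 * last rendered sample, so the caller can chain events. */
float render_ramp(float current, const RampEvent *ev,
                  float *buf, size_t frames)
{
    float step = 0.0f;
    for (size_t i = 0; i < frames; ++i) {
        if (i == ev->when) {
            if (ev->duration == 0)
                current = ev->target;           /* instant jump */
            else
                step = (ev->target - current) / (float)ev->duration;
        }
        if (i >= ev->when && i < ev->when + ev->duration)
            current += step;                    /* linear segment */
        buf[i] = current;
    }
    return current;
}
```

The reverse direction - recovering a compact event stream from a
sampled control signal - would need curve fitting and thresholding,
which is exactly the "hairy, inaccurate (or wasteful) and expensive"
part.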
GMPI plugins are not aware of the future; only the current buffer.

> if you try to
> generate a slope based on the last value, you may overshoot when
> the user stops turning a knob.

If the hardware doesn't provide enough data that you can basically
just pass it on (which essentially means audio rate input, structured
or not), you have to filter - and filtering means delay, unless
latency is so critical that you can live with the occasional mistakes
that any predicting filter will make.

[...]

> In my experience, every DSP element works the same way internally:
> give me a new value and I will ramp towards it at some arbitrary
> rate.

Yeah. That's because there's no other way of producing useful results
when all control data is basically crap, no matter where it's coming
from...

> I think some DSP filtering processes would become unstable
> if they were to track an arbitrarily changing value at audio rate.

Right, but it's for the plugin author to decide whether to definitely
prevent that from happening, or to leave it to users to decide which
"side effects" are useful and which aren't. Now, if the plugin API is
designed in such a way that all plugins *have* to filter all control
input to produce usable results at all, there is just no way to give
plugin authors and/or users that choice, as implicit control
bandwidth restrictions are designed into the system.

> The DSP change must necessarily lag behind the operator.

Only when the plugin author explicitly wants the control to behave
that way, or when the control data actually carries garbage
information that the *system* - not the user - puts there.
"Necessarily" applies only to systems that are broken by design.

I don't see how the minor, hardware/driver dependent issues with
obtaining control data from external devices are an excuse to degrade
the whole system to emulate the problems in that area.
Do it properly within the graph, and do any "extreme" filtering only
where it's actually needed, like when you're tracking MIDI input.

> This
> means that every DAW operator in existence is used to hearing the
> effect of a plugin some time after the change shown on our monitor.

Let's leave the graphics subsystems out of this. :-) Your average
display subsystem has a soft real time latency of at least 20 ms,
whereas a serious real time audio system has less than 5 ms. In
addition, the brain adds a great deal more latency to visual
impressions than to sound. Anyway...

> Our brains accomodate this delay and we can make it sound "right"
> without any software intervention.

True; the same logic applies to the latency issue as a whole: the
brain has no major problem with fixed latencies within reasonable
limits. (Or it would be impossible to use anything but headphones
for monitor sound... :-)

BUT, it doesn't matter to users *where* this latency is introduced,
or why. It *does* matter whether every single plugin has hardwired
low frequency LP filters on all control inputs, or whether the
filtering is just a limitation of whatever hardware and protocols
you're using at the time. The latter allows for a lot more freedom
when you're *not* actually recording live data, and it also allows
for future controllers that produce higher quality data.

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
|  Free/Open Source audio engine for games and multimedia.  |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---

----------------------------------------------------------------------
Generalized Music Plugin Interface (GMPI) public discussion list
Participation in this list is contingent upon your abiding by the
following rules:
  Please stay on topic.  You are responsible for your own words.
  Please respect your fellow subscribers.  Please do not redistribute
anyone else's words without their permission.

Archive: //www.freelists.org/archives/gmpi
Email gmpi-request@xxxxxxxxxxxxx w/ subject "unsubscribe" to unsubscribe