[gmpi] Re: Req 76,78

  • From: Sebastien Metrot <meeloo@xxxxxxxxxx>
  • To: gmpi@xxxxxxxxxxxxx
  • Date: Sat, 05 Feb 2005 02:44:43 +0100

Didier Dambrin wrote:



Didier, one of the reasons we don't support FruityLoops in our plugins is just that: VST doesn't allow timestamped automation, so FL calls us with ridiculously small batches of audio samples to process (like 7 or 12 samples...), and this kills all our internal optimisations, so it's not


FL still mixes using a constant PPQ (and therefore variable buffer sizes) because it has 'internal controllers' (which you could call CVs).

There is no way you can have a host processing in big chunks (and anyway, asking for the smallest latency while at the same time complaining that too-small buffers kill processing optimisations IS ridiculous) and at the same time have an audio-to-CV module on one side controlling plugins on another side, if the processing chunks are too big (unless some latency is added, but in that case the CV couldn't control anything in the same mixer slot).
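For concreteness, here is a minimal sketch (in C++, with invented names — nothing below is actual FL or GMPI API) of the slicing Didier describes: the host cuts one large block at every internal-controller change, so the plugin's process callback sees sub-buffers only as long as the gap between changes:

```cpp
#include <cstddef>
#include <vector>

// A controller (CV) change scheduled at a position within the current block.
struct CvChange {
    std::size_t sampleOffset;
    float value;
};

// Hypothetical plugin callback: process `count` samples with the CV held constant.
using ProcessFn = void (*)(float* out, std::size_t count, float cv);

// Host-side slicing: split one big block at every CV change, so the change is
// sample-accurate but the plugin sees many tiny sub-buffers.
// Returns the number of process() calls made (the overhead in question).
std::size_t sliceAndProcess(float* out, std::size_t blockSize,
                            const std::vector<CvChange>& changes,
                            float cv, ProcessFn process) {
    std::size_t calls = 0, pos = 0;
    for (const CvChange& c : changes) {
        if (c.sampleOffset > pos) {
            process(out + pos, c.sampleOffset - pos, cv); // sub-chunk before the change
            ++calls;
            pos = c.sampleOffset;
        }
        cv = c.value; // the change takes effect exactly at its timestamp
    }
    if (pos < blockSize) {
        process(out + pos, blockSize - pos, cv); // remainder of the block
        ++calls;
    }
    return calls;
}
```

With controller changes 7 and 12 samples into a 16-sample block, the plugin gets three calls of 7, 5 and 4 samples — exactly the tiny buffers Sebastien complains about.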


All this really depends on how much latency is acceptable. I believe the proverbial 1-sample latency has absolutely no real-world interest. Latency is important, as is jitter, but there are limits. If you give the plugin a list of events, it has the opportunity to manage those events in whatever way it sees fit, and this gives it the potential to optimize things a LOT.
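One way a plugin could exploit such an event list — a sketch under assumed names, not a real GMPI interface — is to pick its own control period and apply events at period boundaries, trading a few samples of jitter for an inner loop that stays long enough to vectorise:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// A timestamped automation event, as in the event-list model under discussion
// (the layout is illustrative).
struct ParamEvent {
    std::uint32_t sampleOffset; // relative to the start of the block
    std::uint32_t paramId;
    float value;
};

// The plugin's own control period: 16 samples keeps buffers SIMD-friendly,
// and events land at most 15 samples away from their exact timestamp.
constexpr std::size_t kControlPeriod = 16;

std::size_t processBlock(float* out, std::size_t blockSize, float gain,
                         const std::vector<ParamEvent>& events) {
    std::size_t dspCalls = 0, ev = 0;
    for (std::size_t pos = 0; pos < blockSize; pos += kControlPeriod) {
        // Consume every event that falls inside this control period.
        while (ev < events.size() &&
               events[ev].sampleOffset < pos + kControlPeriod) {
            if (events[ev].paramId == 0) gain = events[ev].value;
            ++ev;
        }
        std::size_t n = std::min(kControlPeriod, blockSize - pos);
        for (std::size_t i = 0; i < n; ++i) out[pos + i] = gain; // stand-in DSP
        ++dspCalls;
    }
    return dspCalls;
}
```

The point is that the subdivision is the plugin's choice: a sampler could just as well quantise to 32 or 64 samples per voice, which a host-imposed 7-sample slice makes impossible.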


See it this way: it's like getting timestamped events, except that it's the host that slices the processing for you and sends the events in between. And no, personally I don't believe in that host-controlled ramping bs.

This view totally kills any optimisation opportunity for many kinds of plugins. In that case we have no option but to pre-delay the output, creating artificial latency we could very well live without, given a timestamped event list and a reasonable buffer size. There are a lot of audio processing algorithms that just can't live with such overhead.

Picture this: my main field of work is pro samplers (we do MachFive for MOTU). It is very typical for us to have many hundreds of playing voices at certain points (because we use multi-layered sample programs, and some kinds of orchestral music need a lot of concurrent voices to sound right). In your example I have to manage those many hundreds of voices every 7, 12 or however many samples. Each voice typically uses 2 or 3 ADSRs, 1 or 2 LFOs, filters, routings, FXs, etc. One can assume at least 30 function calls per voice per audio buffer round. Now do the arithmetic: each time the host asks for a bunch of audio data I have to wake all of this machinery PER VOICE. And I'm not even talking about low-latency IR-based reverbs (not everybody can use the Lake algo...).

Then there is the fun game of SSE and AltiVec optimisations, which only work on data that is aligned (AltiVec) and on sizes that are multiples of at least 16 bytes, depending on the algorithm and the architecture.
So I say: your reasoning may seem OK if you only think about effects, but it's completely wrong when you look at the reality of software instruments.
It strikes me as if you were asking a game to render every frame 7 pixels at a time: the bigger the batch of rendering, the bigger the opportunity for optimisation.
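The arithmetic is easy to sketch (the 700-voice figure below is an assumption in the spirit of the "many hundreds" above; 30 calls per voice and 44.1 kHz come from the text):

```cpp
// Function-call rate implied by a given buffer size: each block wakes every
// voice's ADSRs, LFOs, filters, etc., so calls scale with blocks per second.
double callsPerSecond(double voices, double callsPerVoice,
                      double sampleRate, double samplesPerBlock) {
    double blocksPerSecond = sampleRate / samplesPerBlock;
    return voices * callsPerVoice * blocksPerSecond;
}
```

At 44.1 kHz, 7-sample buffers mean 6300 blocks per second; with 700 voices and 30 calls per voice per block that is over 130 million function calls per second, versus roughly 1.8 million with 512-sample buffers — about a 73x difference in pure call overhead before any DSP is done.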




as simple as "just send a command". I'm all in favor of GMPI being a


yes, it's as simple as 'send a command with a timestamp'

One function call host -> plugin per automation event? Are you kidding?
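The objection can be reduced to counting host->plugin boundary crossings — a toy sketch with invented types, contrasting per-event dispatch with a single call carrying the whole timestamped list:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// A 'command with a timestamp', as Didier puts it (illustrative layout).
struct TimedCommand {
    std::uint32_t sampleOffset;
    std::uint32_t paramId;
    float value;
};

struct CallCounter { std::size_t hostToPlugin = 0; };

// Per-event model: one boundary crossing per automation point.
void dispatchPerEvent(const std::vector<TimedCommand>& cmds, CallCounter& c) {
    for (std::size_t i = 0; i < cmds.size(); ++i)
        ++c.hostToPlugin; // a setParameter()-style call for each event
}

// Event-list model: the sorted array rides along with one process() call,
// and the plugin consumes the events internally.
void dispatchBatched(const std::vector<TimedCommand>& cmds, CallCounter& c) {
    (void)cmds;
    ++c.hostToPlugin;
}
```

For a dense automation ramp of 128 points in one block, that is 128 crossings versus 1 — which is Sebastien's point about per-event calls not scaling.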

Sebastien



----------------------------------------------------------------------
Generalized Music Plugin Interface (GMPI) public discussion list
Participation in this list is contingent upon your abiding by the
following rules:  Please stay on topic.  You are responsible for your own
words.  Please respect your fellow subscribers.  Please do not
redistribute anyone else's words without their permission.

Archive: //www.freelists.org/archives/gmpi
Email gmpi-request@xxxxxxxxxxxxx w/ subject "unsubscribe" to unsubscribe
