[gmpi] Re: Time Summary (was *Ping*)

  • From: Paul Davis <paul@xxxxxxxxxxxxxxxxxxxxx>
  • To: gmpi@xxxxxxxxxxxxx
  • Date: Fri, 16 May 2003 19:46:58 -0400

>I honestly have no idea what you are trying to say here. By
>sample-synced I mean that all participants agree that sample N is
>sample N. By buffer-synced I mean that all participants in the graph
>use the same sized buffer and all agree that buffer M is buffer M
>and that it begins on sample N.

sample-synced means more than that, as you allude to next:

>In either case, the parameter change occurring at sample N is bundled
>with the buffer containing sample N and thus gets rendered at sample
>N. No problem. 

but the parameter change happens in the real world, and is delivered
*once* to the system. if there is no agreement on who is processing
what when it is delivered, then you can end up with some graph nodes
having already processed data that covers the time at which the
parameter change is required to take place, while others have not.

worse, if node A is sending data to node B, and node A has computed
the data for time T "ahead" of time, and then a parameter change
intervenes, node B gets "incorrect" data. which takes us back to tim's
original summary: every node has to get the events for the timeslice
that it's processing, and they have to be complete.
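
to make that concrete, here's a minimal sketch (in C, with made-up
names; this is not any actual GMPI proposal) of what "complete
events per timeslice" means: every event carries an absolute sample
timestamp, and the host hands each node every event that falls
inside the span of output time it is about to render.

  /* illustrative only: events are timestamped in samples, and the
     host delivers the complete set of events for the timeslice a
     node is about to render, whatever its internal buffer size. */

  typedef struct {
      unsigned long frame;    /* absolute sample time of the change */
      int           param;    /* which parameter */
      float         value;    /* new value */
  } Event;

  typedef struct {
      unsigned long start;    /* first sample of this timeslice */
      unsigned long nframes;  /* length of this timeslice */
      const Event  *events;   /* every event with
                                 start <= frame < start + nframes */
      int           nevents;
  } TimeSlice;

every node rendering the span [start, start + nframes) sees exactly
this list, so no node can run "ahead" of an event it hasn't seen.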

agreement on buffer size, stuff like that - that's not relevant to
me. the point is that the relationship between events in the real
world and audio processing has to be atomic: whether a node works
sample by sample or with a block of samples, the state of the world
has to appear identical to both kinds of node while they process the
data corresponding to the same period of "output time".
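
one way to see why buffer size doesn't matter: a block-based node
can split its block at each event timestamp and behave exactly like
a sample-by-sample one. a rough sketch, reusing the hypothetical
types above (render_segment() and set_parameter() stand in for
whatever the node actually does):

  typedef struct Node Node;   /* opaque per-node state */
  extern void render_segment (Node *, float *, unsigned long);
  extern void set_parameter  (Node *, int, float);

  /* split the block at each event so that, between events, the
     node's parameter state is constant, identical to what a
     sample-by-sample node would have seen. */
  void
  render_block (Node *n, float *out, const TimeSlice *ts)
  {
      unsigned long pos = 0;
      int e = 0;

      while (pos < ts->nframes) {
          /* render up to the next event, or the end of the block */
          unsigned long end = ts->nframes;
          if (e < ts->nevents)
              end = ts->events[e].frame - ts->start;
          render_segment (n, out + pos, end - pos);
          pos = end;

          /* apply every event that lands exactly at this sample */
          while (e < ts->nevents &&
                 ts->events[e].frame - ts->start == pos) {
              set_parameter (n, ts->events[e].param,
                             ts->events[e].value);
              e++;
          }
      }
  }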

>There is no real need for a graph to be buffer-synced, and real world
>graphs containing multiple hardware units connected via AES/EBU or
>lightpipe are not buffer-synced. Each hardware device uses whatever
>buffer size its designers gave it.

this is a red herring. think hard about it: nobody runs multiple
digital systems without a single sample clock, and these systems do
not share an event stream, only a data stream. they are therefore
synced in the only way that they can be. nobody expects that
tweaking a knob on one device should have any effect on the
processing of any other. tweaking the knee point on a digital
compressor changes its output, and in this way the change is "passed
on" to downstream units. but the knob that changes the knee point
cannot change a parameter on any downstream (or upstream) unit.

if you have a MIDI control, then there is a more complex situation,
because the MIDI data *can* be routed to multiple units. at that
point, you are back in the same situation except for the rather
important issue of being limited by hardware design. MIDI can't be
distributed in the same way as the events, and so the issue is moot.

>This kind of background calculation is also necessary to spread out
>the computational burstiness of block processing plugins such as
>FFT-based filter banks.

what you're describing does not have much to do with event
processing. preparing FFTs for incoming audio is a process that can
happen entirely independently of the event stream. the event stream
only matters once you start *using* the result, and that part of
things had better be synchronous with every other node in the system.
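
in other words (again a sketch, with the same made-up names as
above): the analysis can live in a worker thread that runs ahead of
the graph, but its results only get *consumed* inside the
synchronous process callback, after that slice's events have been
applied.

  typedef struct AnalysisQueue AnalysisQueue;  /* filled ahead of
                                                  time by a worker */
  extern const float *dequeue_spectrum (AnalysisQueue *,
                                        unsigned long frame);
  extern void apply_events (Node *, const TimeSlice *);
  extern void synthesize (Node *, float *, unsigned long,
                          const float *);

  void
  fft_node_process (Node *n, AnalysisQueue *q,
                    float *out, const TimeSlice *ts)
  {
      /* event handling is synchronous with the rest of the graph */
      apply_events (n, ts);

      /* the spectrum was computed "ahead" of output time, but it
         is only used here, at a well-defined sample position */
      const float *spectrum = dequeue_spectrum (q, ts->start);
      synthesize (n, out, ts->nframes, spectrum);
  }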

--p

----------------------------------------------------------------------
Generalized Music Plugin Interface (GMPI) public discussion list
Participation in this list is contingent upon your abiding by the
following rules:  Please stay on topic.  You are responsible for your own
words.  Please respect your fellow subscribers.  Please do not
redistribute anyone else's words without their permission.

Archive: http://www.freelists.org/archives/gmpi
Email gmpi-request@xxxxxxxxxxxxx w/ subject "unsubscribe" to unsubscribe
