[gmpi] Re: Time Summary (was *Ping*)

  • From: "B.J. Buchalter" <bj@xxxxxxxxxx>
  • To: "gmpi@xxxxxxxxxxxxx" <gmpi@xxxxxxxxxxxxx>
  • Date: Sun, 18 May 2003 13:10:11 -0400

on 5/18/03 11:02 AM, Angus F. Hewlett at amulet@xxxxxxxxxxxxx wrote:

> On Sat, 17 May 2003, B.J. Buchalter wrote:
> 
> Generally, that's true, unless the input device is able to perform some
> scheduling on the events before they hit the graph. You will still have
> latency, but some classes of input device will allow you to minimize
> jitter by timestamping the events before they hit the graph.

Understood.

> Yes, but ultimately it makes little difference as to whether it takes time
> for the events to reach the device, or time for the audio from the device
> to get from the device to its destination. Audio and events are two sides
> of the same coin, and in the end your situation is analogous to the DSP
> boards that want audio "early" because of the PCI bus journeys. The graph
> compensation issues are the same.

I don't agree. See the back and forth between me and David yesterday. The
symmetry is broken in the very real case that I am talking about. It may be
beyond the scope of this effort, but it is not an illusion.

> I disagree. Within a single GMPI graph, audio and control are effectively
> the same thing and are not, and cannot be, decoupled.

I agree, with the caveat that none of the objects in the GMPI graph
interface with anything outside of the GMPI graph.

> I do see the point you're making, it applies particularly to devices that
> sit at the boundary of the GMPI and non-GMPI world (e.g. a GMPI-aware
> outboard effects processor processing a non-GMPI audio stream under GMPI
> event control)... however I'm not sure we're even trying to address those
> cases at this stage.

If so, then this discussion is definitely beyond the scope of the current
effort -- which is why I asked a couple of days ago whether we could reach a
consensus on that question. I would like it to be within scope, since I feel
that it would allow a standards-based approach to a very hairy problem, but
I understand that it is on the edge of what most folks may currently be
interested in.


>> This means that for a certain (real) class of plug-ins
>> we need to be able to account for control latency in order to schedule
>> accurately.
> 
> Yes, and for certain classes it would also be nice to be able to account
> for audio latency (both data-latency and time-latency).

I'm not sure that this is a "nice" thing. I think it may be a required
thing.

> 
>> The only way to do this without this concept is for the proxy to add latency
>> to its own processing and to delay the entire local graph. This is the
>> opposite of what we are striving for here.
> 
> Sorry, I don't see how it's possible any other way. We can't know what the
> dependencies are for generating the events that are going to your device.
> For example, there could be an audio->event converter within the GMPI
> graph that might be feeding control events to your device.

This is a very good point, and may be the crux of the biscuit. Although you
could still propagate the sum of latencies through the graph and slip the
audio for such a plug (which is what you would have to do with an audio
latency). I'm still not convinced that you cannot handle this aspect
symmetrically with audio delay, except that my approach allows independent
audio streams proxied by GMPI to operate with minimum latency, and the
"everything is an audio latency" approach does not.

But what about this:

Audio 1 -> AudioToEventConverter -e> EventProcessor -> APWithAudioLatency -> Out

So, the EventProcessor processes the event stream that AudioToEventConverter
generates from Audio 1, but the output of APWithAudioLatency is propagated
with some latency. This means that to time-align the output of
APWithAudioLatency, the latency at the event-processing terminal of
EventProcessor needs to be propagated back to the Audio 1 stream feeding
AudioToEventConverter. This is all within the same GMPI graph, but it is
exactly the same issue as what I am talking about -- the control stream has
some latency to the effect. Either we handle this case, or the above will
not be time-aligned, and the user will have to slip the stream.
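For what it's worth, the arithmetic for that chain can be sketched like
this, treating the event edge exactly like an audio edge; the 128-sample
latency figure is invented for illustration:

```python
# Toy numbers for the chain above; only APWithAudioLatency reports
# latency here, and 128 samples is an invented figure.
chain_latency = {
    "AudioToEventConverter": 0,    # analyses Audio 1, emits events
    "EventProcessor":        0,    # transforms the event stream
    "APWithAudioLatency":    128,  # its audio output is 128 samples late
}

# The accumulated latency at Out is the sum along the whole path,
# regardless of whether a given edge carries audio or events.
total = sum(chain_latency.values())

# Propagating that figure back across the event edge means any stream
# that must stay aligned with Out -- including the Audio 1 stream
# itself, if it is also routed to Out directly -- gets slipped by it.
print(f"Out lags its source by {total} samples")
```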

> So the host now has additional latency on the setting of its loop points.
> You've just added latency to the whole system.

I don't understand what you mean here. Could you amplify?

> 
>> 1) Transport of events where event transport has latency (but the audio does
>>    not or has different latency).
>> 
>> But #1 is the primary case. Advanced MIDI implementations already support
>> this. I don't see why the GMPI metadata system should not.
> 
> "the audio does not or has different latency" implies that the audio is
> outside of GMPI.

Yes. Probably. But not necessarily. For example, with Firewire we can have
dedicated bandwidth for audio transport, but control transport would be
asynchronous and can have scheduling latencies different from the audio.
With large buffers this is probably not an issue, but if the GMPI host is
running with small timeslices, the control latencies could be a significant
fraction of the timeslice. This is different from PCI, and dealing with it
will probably require adding more latency than is absolutely necessary
(unless the control latency and audio latency can be decoupled).
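As a rough illustration of the proportions involved (the 0.5 ms
control-scheduling latency is an assumed figure, not a measurement):

```python
# How an asynchronous control-transport latency compares to the audio
# timeslice at various buffer sizes. The 0.5 ms figure is assumed.
sample_rate = 48_000           # Hz
control_latency_ms = 0.5       # assumed async scheduling latency

fractions = {}
for frames in (2048, 256, 32):
    timeslice_ms = 1000.0 * frames / sample_rate
    fractions[frames] = control_latency_ms / timeslice_ms
    print(f"{frames:5d}-frame slice = {timeslice_ms:6.2f} ms; "
          f"control latency is {fractions[frames]:.0%} of a slice")
```

At 2048 frames the control latency is around 1% of a slice and disappears in
the noise; at 32 frames it is three quarters of a slice, which is exactly
the regime where decoupling control latency from audio latency starts to pay.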

> "m timeslices in advance" is NOT what you're saying.. you're referring to
> a physical link, a physical dependency not a data dependency. Your device
> wants the data "m _milliseconds_ (or microseconds) in advance" because of
> the physical limitations of its datalink. I don't see how this case is
> different to an audio DSP board sitting on the PCI bus.

Actually, it really is what I am saying. I have no problem with the events
arriving in timeslice-long chunks, nor do I have a problem with getting them
ceil(delay/timeslice) timeslices early. I just want them early enough that
the timestamp is still valid when I receive it at the remote device. I think
the scheme you suggest I am describing is probably too complicated for hosts
that will be processing events in blocks.
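The delivery rule I have in mind -- chunked events, handed over
ceil(delay/timeslice) timeslices early -- might be sketched as follows; the
link delay and buffer size are invented for the example:

```python
import math

def slices_early(transport_delay_ms, timeslice_ms):
    """Minimum number of whole timeslices early the host must emit an
    event so its timestamp is still in the future on arrival."""
    return math.ceil(transport_delay_ms / timeslice_ms)

def handoff_slice(event_slice, transport_delay_ms, timeslice_ms):
    """Latest timeslice in which the host may hand over an event whose
    timestamp falls within `event_slice`."""
    return event_slice - slices_early(transport_delay_ms, timeslice_ms)

# Invented example: a 2.5 ms link delay, 64-frame slices at 48 kHz.
ts_ms = 1000.0 * 64 / 48_000         # ~1.33 ms per timeslice
early = slices_early(2.5, ts_ms)     # events must leave 2 slices early
print(early, handoff_slice(10, 2.5, ts_ms))
```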

Best regards,


B.J. Buchalter

Metric Halo
M/S 601 - Building 8
Castle Point Campus
Castle Point, NY 12511-0601 USA
tel +1 845 831-8600
fax +1 603 250-2451

If you haven't heard ChannelStrip yet, you don't know what you're missing!

Check out SpectraFoo, ChannelStrip and Mobile I/O at http://www.mhlabs.com/

Download a 12 day demo from <http://www.mhlabs.com/demo/>





----------------------------------------------------------------------
Generalized Music Plugin Interface (GMPI) public discussion list
Participation in this list is contingent upon your abiding by the
following rules:  Please stay on topic.  You are responsible for your own
words.  Please respect your fellow subscribers.  Please do not
redistribute anyone else's words without their permission.

Archive: //www.freelists.org/archives/gmpi
Email gmpi-request@xxxxxxxxxxxxx w/ subject "unsubscribe" to unsubscribe
