[gmpi] Re: Topic 6: Time representation

  • From: "Vincent Burel" <vincent.burel@xxxxxxxxxx>
  • To: <gmpi@xxxxxxxxxxxxx>
  • Date: Sun, 11 May 2003 22:09:44 +0200

----- Original Message -----
From: "Mike Berry" <mberry@xxxxxxxxx>
To: <gmpi@xxxxxxxxxxxxx>
Sent: Sunday, May 11, 2003 8:09 PM
Subject: [gmpi] Re: Topic 6: Time representation


> Looking at your code, which does indeed do what you say, I see that you
> are doing something which few hosts would actually do on Windows. That
> is, you are simulating all of the processing happening on the driver
> thread at TIME_CRITICAL. This is actually a very unstable design, as
> your program shows.

I would not say unstable, but restrictive: with a 20 ms buffer everything works
better under Windows! :-)
Where I disagree is when you say "few hosts". No! Most hosts do not manage this
priority themselves, since they do their processing inside the event callback of
the driver. The others give at least NORMAL priority to the processing thread.
And if you want to be sure you can timestamp every type of event, then you would
have to set the audio thread to IDLE priority! :-) And that is not serious.

> The more common way to do this is to simply do a buffer copy on the
> TIME_CRITICAL thread. After all, with 500 ms latency, whats a little
> more or less.

That is not the recommendation of the audio board manufacturers. They usually
recommend processing immediately upon receiving the event from the audio driver
(because this event is already an event, not an interrupt). So it makes no sense
to copy the buffer and then set another event to wake another thread to process
that buffer.

> The actual processing is done on a different thread which
> is HIGHEST or lower priority. This thread will now interrupt the
> processing of mouse messages to a much lesser extent.
> Second, this method allows you to process smaller chunks at one time on
> the render thread, which allows the thread to yield more often to
> whatever thread is handling the mouse messages.

I have also tried this method, cutting the big buffer into small ones in order to
spread the processing out over time. It is very hard to synchronize; in fact you
lose time and reliability... This is not the right way, in my opinion.

And there is also disk access, especially if you work with big buffers, say on a
24-track project with at least 500 ms of preload. How can you then guarantee
even 2 ms of precision when timestamping events!?

> This aside, it would be nice if you would assume that everyone else on
> this list knows as much about audio programming as you do

Yes, especially when someone asks "what is an oscilloscope"! :-)) We can be sure
there are only audio signal professionals here! :-)

>(whether or
> not that is the case). Others may disagree with you (certainly they
> occasionally disagree with me), but this does not make them stupid,
> lazy, or any of the other invectives which you seem to cast about freely.

OK. Everything I am talking about has been programmed, tested and often
measured. So I can try to explain my disagreement, to explain why, and finally,
if you still do not understand my point of view, I can try to give you the keys
to check certain points, and invite you to run some experiments and tests. But
if you do not even want to do that, what can I do!? Except say a small word? :-)

Vincent Burel







----------------------------------------------------------------------
Generalized Music Plugin Interface (GMPI) public discussion list
Participation in this list is contingent upon your abiding by the
following rules:  Please stay on topic.  You are responsible for your own
words.  Please respect your fellow subscribers.  Please do not
redistribute anyone else's words without their permission.

Archive: //www.freelists.org/archives/gmpi
Email gmpi-request@xxxxxxxxxxxxx w/ subject "unsubscribe" to unsubscribe
