Tim Hockin wrote:
Well, since I am talking about a vested interest here, let me explain. In an audio/video editor, you always have two frame rates (A and V) in your timeline. To choose a time quantum that evenly divides into both, audio sample rates are not sufficient. For example, take NTSC Digital Video with a video rate of 30000/1001 fps and an audio sample rate of 48000 Hz. My time quantum needs to be a number like 1441440000 quanta per second.
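To see why a rate like this works, one can check that it yields a whole number of quanta per audio sample and per video frame. A minimal sketch (the variable names are mine, not from any proposed API):

```python
from fractions import Fraction

AUDIO_RATE = 48000                  # audio samples per second
VIDEO_RATE = Fraction(30000, 1001)  # NTSC DV frames per second
Q = 1_441_440_000                   # candidate quanta per second

# Q is a valid common time base iff both of these are whole numbers.
quanta_per_sample = Fraction(Q, AUDIO_RATE)  # 30030
quanta_per_frame = Q / VIDEO_RATE            # 48096048

assert quanta_per_sample.denominator == 1
assert quanta_per_frame.denominator == 1
```

Since gcd(1001, 30000) = 1, the smallest rate with this property is lcm(48000, 30000) = 240000; the quoted figure is a multiple of that (6006x), presumably chosen to accommodate other common rates as well.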
Ok, I can buy the need for this. Since this is supposed to be an audio API, I would suggest that we define the audio sample rate and the sub-sample rate as sub-samples per sample. Objections?
I don't get this. How does the host know how to quantize like this? Better, perhaps, to just get a BEAT event and subdivide it internally. But if plugins start relying on this, then we have to assume we send meter events while the transport is stopped, which is weird.
The meter events would only flow while the transport was running. The host knows its timeline, so it knows how to calculate the exact position in quanta of any event after taking tempo changes into account. Every meter event simply contains a duration: the delta in time before the host sends the next meter event.
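As a sketch of what such a stream could look like, and how a plugin could recover absolute positions by accumulating the deltas. The MeterEvent type and its field names here are hypothetical illustrations, not part of any proposed GMPI API:

```python
from dataclasses import dataclass

@dataclass
class MeterEvent:
    numerator: int        # e.g. 3 in 3/4
    denominator: int      # e.g. 4 in 3/4
    duration_quanta: int  # delta before the next meter event arrives

def event_positions(events, start=0):
    """Accumulate durations to recover each event's absolute position
    in quanta -- the way a plugin could reconstruct the host timeline."""
    pos = start
    out = []
    for ev in events:
        out.append((pos, ev))
        pos += ev.duration_quanta
    return out
```

For example, a 4/4 event lasting 1000 quanta followed by a 3/4 event yields positions 0 and 1000; the plugin never needs to quantize anything itself, it just adds up the deltas the host supplies.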
I still don't get the point of this. Or how the host knows to quantize a 3/4 bar into 2+1 and not 1+2.
-- Mike Berry Adobe Systems
----------------------------------------------------------------------
Generalized Music Plugin Interface (GMPI) public discussion list
Participation in this list is contingent upon your abiding by the following rules:
Please stay on topic. You are responsible for your own words.
Please respect your fellow subscribers. Please do not redistribute anyone else's words without their permission.
Archive: //www.freelists.org/archives/gmpi
Email gmpi-request@xxxxxxxxxxxxx w/ subject "unsubscribe" to unsubscribe