[gmpi] Re: Topic 6: Time representation

  • From: Chris Grigg <gmpi-public@xxxxxxxxxxxxxx>
  • To: gmpi@xxxxxxxxxxxxx
  • Date: Tue, 29 Apr 2003 13:59:48 -0700

David Olofson wrote:
        * Using musical time for event timestamps means that
          plugins can't receive or send timestamped events
          while the timeline/sequencer is stopped. This will
          require special case handling, unless you assume
          that stopping the sequencer means effectively
          killing the audio thread.

Well, sort of. In the Beatnik Audio Engine's MIDI event API we have a convention where a timestamp of 0 means 'execute immediately'. So yes, that's a special case, but it's dead easy and computationally cheap as dirt. Or, if we go with the union idea for timestamps, the selector field could have three values: sample-time, musical-time, and execute-immediately. Note also that with the union idea, MIDI processors will be able to do the kind of simple time manipulations Todor describes.
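To make the union idea concrete, here's a minimal sketch of a tagged-union timestamp with the three selector values. All of the type and field names here are hypothetical, for illustration only -- nothing like this exists in any GMPI draft yet:

```c
#include <stdint.h>

/* Hypothetical tagged-union timestamp; names are illustrative only. */
typedef enum {
    GMPI_TIME_SAMPLES,    /* audio time: absolute sample frame count */
    GMPI_TIME_MUSICAL,    /* musical time: e.g. beats (or ticks) */
    GMPI_TIME_IMMEDIATE   /* execute as soon as possible */
} gmpi_time_type;

typedef struct {
    gmpi_time_type type;  /* the selector field */
    union {
        uint64_t samples; /* valid when type == GMPI_TIME_SAMPLES */
        double   beats;   /* valid when type == GMPI_TIME_MUSICAL */
    } t;                  /* unused when type == GMPI_TIME_IMMEDIATE */
} gmpi_timestamp;
```

A MIDI processor that wants to, say, quantize or scale musical times can then check the selector and operate on `t.beats` directly, which is the kind of simple time manipulation Todor describes.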



        * Every audio plugin will have to deal with "audio
          time" (ie sample count), and lots of event
          processing is of the kind "do X at time T"; ie
          no actual processing of the timestamps. For this
          kind of work, audio time for timestamps makes a
          lot of sense, as no conversions are needed.

It does seem that we need to make a choice for the fundamental time unit: clocktime (samples) vs. musical time, which is another way of saying we have to choose which side the system will be optimized for... since our choice will necessarily burden one side or the other to some degree. OTOH, when you consider the number of processor cycles spent processing (or synthesizing) one buffer of audio, the overhead of converting event times will be tiny by comparison (except in passages with pathologically dense events).


Since the host's timeslices will in almost all cases be directly tied to audio system output blocks, the irreducible basic system timeline should probably be based on sample-time. So this would mean optimizing for straight-line audio DSP plugs and burdening MIDI (or other music protocol) plugs -- but again, it's not an overwhelming burden. So the host would furnish efficient and high-resolution conversion services to/from musical-time for plugs that need it. For example, it should be easy to get the sample frame index (into the current timeslice) for a musical event time, so you can (for example) start rendering a note at the correct sample-accurate position.

To follow up on another comment -- though this probably belongs under the 'Automation' topic more than 'Time representation' -- I agree that there should be some API-level support for parameter ramping across a timeslice, and I would like to see it be as easy/intuitive to use as is practical. I would not necessarily assume that hard-limiting the functionality to piecewise linear interpolation will be good enough forever, though... maybe we define linear now and leave room for other curve shapes in future?
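For what piecewise-linear ramping looks like from the plug's side, here's a minimal per-sample sketch (a gain ramp over one timeslice). The function name and signature are made up for illustration; in practice the host or plug would precompute the increment once per block, exactly as shown:

```c
/* Apply a linear gain ramp across one timeslice, hitting gain_end
   exactly on the last frame. Illustrative sketch; names hypothetical. */
void apply_gain_ramp(float *buf, int nframes,
                     float gain_start, float gain_end)
{
    float g = gain_start;
    float step = (nframes > 1)
        ? (gain_end - gain_start) / (float)(nframes - 1)
        : 0.0f;
    for (int i = 0; i < nframes; i++) {
        buf[i] *= g;  /* one multiply-add per sample */
        g += step;
    }
}
```

Other curve shapes (exponential, spline, etc.) could later be defined the same way: endpoints plus a shape selector, with linear as the default.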

Also, to bring ramping back to the topic of time representation, would it be OK/smart to present instantaneous tempo to the plug-in as an automated parameter?

-- Chris




----------------------------------------------------------------------
Generalized Music Plugin Interface (GMPI) public discussion list

Participation in this list is contingent upon your abiding by the following rules: Please stay on topic. You are responsible for your own words. Please respect your fellow subscribers. Please do not redistribute anyone else's words without their permission.

Archive: //www.freelists.org/archives/gmpi
Email gmpi-request@xxxxxxxxxxxxx w/ subject "unsubscribe" to unsubscribe
