[gmpi] Re: Topic 6: Time representation

  • From: Chris Grigg <gmpi-public@xxxxxxxxxxxxxx>
  • To: gmpi@xxxxxxxxxxxxx
  • Date: Wed, 30 Apr 2003 17:55:58 -0700

-----Original Message-----
From: Chris Grigg [mailto:gmpi-public@xxxxxxxxxxxxxx]

That seems like a philosophical argument to me, not a technical one. Would you agree that there is some sufficiently fine stepped resolution beyond which, for a given tempo -- in the context of a sampled-audio DAW/sequencer or a MIDI-like instrument -- the difference between a float representation of a time position and a stepped time position is inaudible to humans?

-- Chris

----------------------------------------------------------------------


The technical argument is that for clarity and simplicity you should represent data with its natural data type in the chosen implementation language, in natural units. By not doing so you are unnecessarily adding extra complications.

Some of these complications are: the need to convert to ordinary units of musical time (e.g. beats) and back; the arbitrary conversion constant involved; rounding issues when performing operations on musical-time values; lack of language support for fixed point; conversion between different tick bases, if they are settable; user interfaces for that settability; whether ticks per second need to evenly divide the sample rate, the video frame rate, or the buffer block rate; what happens if we need to support new rates in the future; whether this limits the possible tempos, or the possible rhythms; and so on.

It is just a lot simpler to represent musical time as musical time, rather than as musical time multiplied by a large arbitrary constant and then quantized to the nearest integer.

-Frederick Umminger

I'm sorry if I put the above at all harshly. It's probably not obvious that, as far as I'm concerned, the sequencer app can use a floating-point musical time representation as its user-visible time representation. I'm really only concerned with what these things look like at the level of the plug-in API that we're designing here. At that level, we've already agreed that the actual event times -- the data that is going to result in everything actually audible -- are nailed down to sample numbers (or higher-resolution subdivisions, per Mike) because that's what the event time stamp encodes. So any advantages of floating point -- including resolution, efficiency of handling, and freedom from conversion math -- are already lost by the time anything audible is going to happen. If the plug-in wants to get some high-resolution musical event info back because of some unusual sort of processing it wants to do -- which is what we've been talking about -- then fine, but to do that with floating point would be kind of artificial. In applied computer music performance, everything gets quantized.


And could you answer my question please?

-- Chris

----------------------------------------------------------------------
Generalized Music Plugin Interface (GMPI) public discussion list
Participation in this list is contingent upon your abiding by the
following rules:  Please stay on topic.  You are responsible for your own
words.  Please respect your fellow subscribers.  Please do not
redistribute anyone else's words without their permission.

Archive: //www.freelists.org/archives/gmpi
Email gmpi-request@xxxxxxxxxxxxx w/ subject "unsubscribe" to unsubscribe
