[gmpi] Re: 3.9 (draft) use cases and stuff

  • From: Chris Grigg <gmpi-public@xxxxxxxxxxxxxx>
  • To: gmpi@xxxxxxxxxxxxx
  • Date: Mon, 23 Feb 2004 15:37:57 +0100

Phew! Not sure we haven't left the tracks here, but I think a couple of responses are called for.

On Fri, Feb 20, 2004 at 04:19:34PM -0800, Chris Grigg wrote:
 >> 8. Sequencer tracks
 >> 9. Audio tracks
 >>       Is it notified of transport position in samples?
 >> <<<
 >> This would be the "score time" (as opposed to the "stream time").  Every
 >> processing frame would know the score time.
 >But each of those works better with a different unit base.  An audio track
 >(without time stretch or beat marking) would not care about the musical
 >unit.  It would have a hard time converting "Bar 40, Beat 3" into a proper
 >sample offset.  A music-oriented sequence track would have a hard time
 >converting "sample 123456" into a musical position.

 Isn't this dependent on the host style?  For a straight DAW-type host,
 the timeline will be linear time (either SMPTE or samples), with no
 musical timeline.  Musical score time would only be relevant in
 musically-oriented hosts (in the Sonar, DP, etc. vein).  We already
 say event times are presented to GMPI plugs (perhaps among other
 things as well) as either sample frame numbers or sample frame
 indexes; I guess the outcome of that decision determines whether the
 transport position at the start of a timeslice is delivered to the
 plug in that same form.

I'm trying to find a use case that either proves or disproves the need for transport-time in any given format (samples or musical). What about a host that is both musically and linearly oriented (isn't that a LOT of them)? Or what about a host that is oriented to neither?

A hypothetical, pathological case which is what I use to contemplate how
things will work:

I don't think this is necessarily so pathological. It *is* a good way to explore the issues involved in allowing GMPI plugs to drive stuff that other plug APIs delegate exclusively to the host, as we've been hoping to enable.


* Assume a host with no sequencer or mixer or anything built in, just GMPI,
  MIDI I/O, and audio I/O.

* Assume a plugin which streams very large audio files (from disk or
  elsewhere).

* Assume a plugin which is a piano-roll of arbitrary length.

This means music events? Is there any assumption about what would be rendering the events? A GMPI plug-in?

* Assume a mixer plugin with some inputs, some outputs, insert and send
  effects, maybe busses, etc.

* Assume a music-master plugin which controls tempo and meter, and maybe has
  SMPTE input.

* Assume a transport-master plugin which controls playback (play/pause/stop).

You can sequence your musical data in one or more instances of the piano-roll
plugin, sending its output to some GMPI instruments (GMPIi?).

Personally I was hoping to avoid having a different API type or plug designation for instruments vs. effects etc., though that may be over-optimistic.

You can load your audio tracks into one or more instances of the streamer
plugin.

You can send the audio output of the instruments and streamers to the mixer
plugin, and the results of the mixer plugin to the host's audio output.

You can route the tempo information from the tempo-master to the
piano-rolls, any effects that care, and maybe the streamers (if they care to
do beat-mapping or whatever).

You can route the playback information to the piano-rolls and streamers (for
the sake of simplicity I will call the piano-rolls and streamers "sequencer
plugins").
The outermost host has set the sample rate, and is process()ing all plugins. When the transport is stopped, the sequencer plugins are doing nothing, so no sound is being created. When the transport is played, the sequencer plugins will start to play. So everything works just like a traditional all-in-one host, right?

Except that in traditional hosts there's frequently only one possible musical timeline.

I think this case demonstrates that the primary clock is the sample clock,
and that the sample clock is all the host needs to worry about.  This model
doesn't include UST usage, but could very easily. So sample time and UST
time are the GMPI times.

So what is music time useful for at all? Read on before you answer..

There is at least one problem with the above rig.  There is no concept of
transport position!

Do you think this is truly a problem, other than for the 'halfway jump' mentioned in the next paragraph?

Suppose I want to jump halfway into the track.  What is halfway?  Halfway
through each sequencer track?  What if two streamers are different lengths?
What if the streamers and piano-rolls are the same length at one tempo, but
not at another?  Further, how does the host know how long each sequencer
plugin's data is?

I thought we covered this case before and concluded it wasn't very useful:
a) In many cases, track length has no meaning;
b) if there are multiple chunks in the session, the end of the session is the end of the last chunk; and
c) does anyone really care about jumping to halfway anyway? Jumping to a known position (offset from the start, or from some other known, meaningful position) is the general case; 'halfway' seems like one of those contrived use cases you're trying to avoid.

Audio file streamers fundamentally operate on sample offsets (unless they do
beat mapping/time stretching, but let's call that the same as a piano-roll,
for now).  They want to be jumped in samples.

Piano-rolls fundamentally operate on musical units.  Given a sample offset
from start, they CAN convert to musical units with some knowledge of tempo
and meter, but they require that extra knowledge.  They really want to be
jumped in musical units.
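The conversion a piano-roll needs can be sketched like this, assuming a single constant tempo (a real host would have a tempo map with changes, which needs a piecewise version; all names here are mine, not GMPI's):

```c
#include <assert.h>

/* Hypothetical sketch: convert between a transport position in samples
 * and a musical position in beats, assuming one constant tempo.  This
 * is the "extra knowledge" a piano-roll needs beyond the sample clock. */
double samples_to_beats(long sample_offset, double sample_rate, double bpm)
{
    double seconds = (double)sample_offset / sample_rate;
    return seconds * (bpm / 60.0);
}

long beats_to_samples(double beats, double sample_rate, double bpm)
{
    double seconds = beats * (60.0 / bpm);
    return (long)(seconds * sample_rate + 0.5); /* round to nearest sample */
}
```

At 44.1 kHz and 120 BPM, one beat is half a second, i.e. 22050 samples; the streamer only ever sees the sample number, while the piano-roll needs the tempo to get the beat.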

Not sure what you're saying. In a mixed musical time + samples environment, musical time is always (in my experience, at least) pulled by the sample time. GMPI isn't going to change that.

Figuring out transport position for multiple independent sequencer plugins
is not fun.

Why would you need to? Other than for the jump-halfway problem, why can't each sequencer plug-in figure out its own position? Just send them all the same locate command, for a sample time you like. Each sequencer will chase to that spot.

I think things are getting mixed up here. Suggest you distinguish between a) transport control (play, pause, stop, etc.) which has no sense of position, and b) locate (position) control, which has no sense of transport control. Then you can broadcast transport control to the sample clock and all musical timeline masters and allow each musical timeline to handle its own location respecting its own tempo & meter changes.

In fact, in principle there can be several kinds of locate control:
- Absolute locate control (jump to a given time offset from the start of the piece; if in sample time this is the same for all listeners, but if in musical time it has to be in reference to some one specific musical timeline, never more than one);
- Relative locate control (a signed offset relative to the current position; if in sample time this is the same for all listeners);
- Proportional locate control (jump to X% of the way through the piece, like your halfway example).

(Actually there are 6 kinds, not 3, because there's a sample time version and a musical time version of each one.)

The only one of these that's ever problematic is proportional control, and since it's an edge case, I suggest dropping it as a use case and deciding we don't care about making this doable in GMPI.
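The three locate kinds could be sketched as a single event type (illustrative names only, not GMPI-defined; transport control would be a separate play/pause/stop event with no position field at all):

```c
#include <assert.h>

/* Sketch of the locate-control kinds described above. */
typedef enum {
    LOCATE_ABSOLUTE,     /* jump to a given offset from the start  */
    LOCATE_RELATIVE,     /* signed offset from the current position */
    LOCATE_PROPORTIONAL  /* jump to X% of the way through the piece */
} LocateKind;

typedef struct {
    LocateKind kind;
    long value;  /* samples for absolute/relative; percent (0-100) for proportional */
} LocateEvent;

/* Resolve a locate event to an absolute sample position.  Note that
 * proportional locate is the only kind needing a known total length. */
long resolve_locate(const LocateEvent *ev, long current_pos, long total_len)
{
    switch (ev->kind) {
    case LOCATE_ABSOLUTE:     return ev->value;
    case LOCATE_RELATIVE:     return current_pos + ev->value;
    case LOCATE_PROPORTIONAL: return total_len * ev->value / 100;
    }
    return current_pos;
}
```

A musical-time version of each kind would carry bar/beat values instead and be resolved against one specific musical timeline, which is exactly why it can never reference more than one.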

Another problem is changes in the meter.  If the meter changes dynamically,
the sequencer plugins that care about meter have a hard problem.  What
happens to data that was sequenced at one meter but is no longer valid?  I'm
content to say that this has to be dealt with by the plugins, and it is
their problem.  But I think that is inelegant.

Unless you've solved time travel, this is insoluble. 8-)

So maybe the level of granularity in this model is too much.  It seems to me
that transport position and time signature really are properties of the
sequencer itself.
But that doesn't fix the time travel problem, does it? Don't you get to a better place by separating transport & locate as suggested above?

If you assume that transport is only applicable to something with a known
length, then you have to know the length of a sequence before you can
transport into it.  So I see two choices:
  1) Allow a transport to control multiple sequencers, in which case we need
     to build some way for the sequencers to report their sequence length to
     the transport controller.  The transport controller can use that to
     pick the longest sequence as its basis.  Converting to/from
     sample/music time is possible.  Either unit COULD be used.  Given that
     (guessing) it will be more likely that people want to align to music
     time than sample time, musical time is what matters here.  Score-based
     music time which pauses with the playback, and jumps with the
     transport.
  2) Make transport be a function of the sequencer.  Having multiple
     sequencers CAN work, but there will be no way to transport within them
     all at the same time.  Straight playback will work just fine.
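Option 1's length-reporting step is simple once the query mechanism exists; the mechanism itself is the open design question, not the arithmetic. A sketch, with the reported lengths stubbed as a plain array (names are mine):

```c
/* Sketch of option 1: the transport controller collects each
 * sequencer's reported length (in samples, here just an array) and
 * uses the longest as its basis for the overall transport. */
long longest_sequence(const long *lengths, int count)
{
    long longest = 0;
    for (int i = 0; i < count; i++)
        if (lengths[i] > longest)
            longest = lengths[i];
    return longest;
}
```

With lengths {100, 500, 300} the transport's basis would be 500 samples; sequencers whose length "has no meaning" could simply report 0 and never influence the result.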

If you consider that time signature fundamentally alters the behavior of a
sequencer plugin, then I see three options:
  1) Allow meter to be changed dynamically, and tell the sequencer plugin
     authors that they have to deal with it.  They can figure out heuristics
     that work for them.
  2) Make meter be a function of the sequencer.  Having multiple sequencers
     can work, but their meters are independent.  In fact, meter would
     probably NOT be something you want to do automation or events on.
     You absolutely CAN change meters, but they are known a priori.  They
     are not realtime.  Changing a meter will change the way the sequencer
     draws its alignments and how it handles sequenced data.
  3) Make meter be external to sequencer plugins but NOT realtime.  A
     meter-master (whether it was built into the host or a plugin) would
     allow you to build a static meter-map which could be shared with
     plugins.  Sequencers could use that static meter map to do their
     alignments, and other plugins that care can do whatever with it.  It
     would NOT change in realtime.
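Option 3's static meter map could look something like this (a sketch under my own naming, not a GMPI structure; entries sorted by starting bar, each in effect until the next one):

```c
/* Sketch of option 3: a static, non-realtime meter map shared with
 * plugins.  Each entry gives the meter in effect from start_bar onward. */
typedef struct {
    int start_bar;   /* first bar this meter applies to (1-based) */
    int numerator;   /* e.g. 3 in 3/4 */
    int denominator; /* e.g. 4 in 3/4 */
} MeterEntry;

/* Return the map entry in effect at the given bar.  Assumes the map is
 * sorted by start_bar and the first entry starts at bar 1. */
const MeterEntry *meter_at_bar(const MeterEntry *map, int count, int bar)
{
    const MeterEntry *cur = &map[0];
    for (int i = 1; i < count; i++)
        if (map[i].start_bar <= bar)
            cur = &map[i];
    return cur;
}
```

A map of {bar 1: 4/4, bar 9: 3/4} answers "what meter is bar 5 in?" with 4/4 and "bar 10?" with 3/4, and because the map never changes in realtime, sequencers can precompute their alignments from it.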

One could consider tempo the same problem as meter, but I disagree.
Tempo is simpler and more likely to change in realtime than meter is.

What this leads me to believe is that sequencer plugins should not be
micro-plugins like I started with, but monolithic plugins.  They sequence
audio and music and whatever they want.  They control the meter, and they
control the transport.

This is mainly predicated on 'If you assume that transport is only applicable to something with a known length,' which I refuted above. Without that predicate, most of your constraints fall away and the conclusion isn't needed. Also, I'm not sure I agree that 'time signature fundamentally alters the behavior of a sequencer plug-in'; see below. Changing the reported musical time for an event occurring at a given time isn't fundamentally altering the sequencer's behavior, IMHO.

> try to fix invalid event times, maybe by doing a modulo against the
> current time sig and bumping the bar number, but on the other hand
> maybe invalid events should never be delivered and the host should
> signal an internal error to the user.

But what does the sequencer DO if it receives a meter-change from 4/4 to 3/4? Drop the last note? Shuffle it to the next bar?

All else being equal, the actual time intervals (scaled by tempo of course) between events stored in the sequence probably need to be preserved. (Time sig is in many important ways just a matter of perception, anyway.) If you start with 4/4 time and a note on beat 1 of bar 2, and then you change the piece's meter to 3/4, the note should happen on beat 2 of bar 2.
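That interval-preserving interpretation amounts to keeping the note's absolute beat offset from the start fixed and recomputing bar/beat under the new meter. A sketch, assuming a single meter from the start of the piece (names mine):

```c
/* Sketch: preserve a note's absolute beat offset across a meter change
 * by converting bar/beat -> absolute beats -> bar/beat in the new meter. */
typedef struct { int bar; int beat; } BarBeat;  /* both 1-based */

int barbeat_to_beats(BarBeat pos, int beats_per_bar)
{
    return (pos.bar - 1) * beats_per_bar + (pos.beat - 1);
}

BarBeat beats_to_barbeat(int abs_beat, int beats_per_bar)
{
    BarBeat pos;
    pos.bar  = abs_beat / beats_per_bar + 1;
    pos.beat = abs_beat % beats_per_bar + 1;
    return pos;
}
```

Running the example above through it: beat 1 of bar 2 in 4/4 is 4 beats from the start, and 4 beats in 3/4 lands on beat 2 of bar 2, exactly as described.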

-- Chris G.

Generalized Music Plugin Interface (GMPI) public discussion list
Participation in this list is contingent upon your abiding by the
following rules:  Please stay on topic.  You are responsible for your own
words.  Please respect your fellow subscribers.  Please do not
redistribute anyone else's words without their permission.

Archive: //www.freelists.org/archives/gmpi
Email gmpi-request@xxxxxxxxxxxxx w/ subject "unsubscribe" to unsubscribe
