[gmpi] Re: 3.9 (draft) use cases and stuff

  • From: Tim Hockin <thockin@xxxxxxxxxx>
  • To: gmpi@xxxxxxxxxxxxx
  • Date: Fri, 20 Feb 2004 17:55:00 -0800

On Fri, Feb 20, 2004 at 04:19:34PM -0800, Chris Grigg wrote:
> >> 8. Sequencer tracks
> >> 9. Audio tracks
> >>    Is it notified of transport position in samples?
> >> <<<
> >> This would be the "score time" (as opposed to the "stream time").  Every
> >> processing frame would know the score time.
> >
> >But each of those works better with a different unit base.  An audio track
> >(without time stretch or beat marking) would not care about the musical
> >unit.  It would have a hard time converting "Bar 40, Beat 3" into a proper
> >sample offset.  A music-oriented sequence track would have a hard time
> >converting "sample 123456" into a musical position.
> Isn't this dependent on the host style?  For a straight DAW-type host 
> timeline will be linear time (either SMPTE or samples), no musical 
> timeline.  Musical score time would only be relevant in 
> musically-oriented hosts (in the Sonar, DP, etc. vein).  We already 
> say events times are presented to GMPI plugs as (perhaps among other 
> things as well) as either sample frame numbers or sample frame 
> indexes, I guess the outcome of that decision determines whether the 
> transport position at start of timeslice is delivered to the plug in 
> samples.

I'm trying to find a use case that either proves or disproves the need for
transport-time in any given format (samples or musical).  What about a host
that is both musically and linearly oriented (isn't that a LOT of them)?
Or what about a host that is oriented to neither?

A hypothetical, pathological case which is what I use to contemplate how
things will work:

* Assume a host with no sequencer or mixer or anything built in, just GMPI,
  MIDI I/O, and audio I/O.

* Assume a plugin which streams very large audio files (from disk or
  wherever).

* Assume a plugin which is a piano-roll of arbitrary length.

* Assume a mixer plugin with some inputs, some outputs, insert and send
  effects, maybe busses, etc.

* Assume a music-master plugin which controls tempo and meter, and maybe has
  SMPTE input.

* Assume a transport-master plugin which controls playback (play/pause/stop).

You can sequence your musical data in one or more instances of the piano-roll
plugin, sending its output to some GMPI instruments (GMPIi?).

You can load your audio tracks into one or more instances of the streamer
plugin.

You can send the audio output of the instruments and streamers to the mixer
plugin, and the results of the mixer plugin to the host's audio output.

You can route the tempo information from the tempo-master to the
piano-rolls, any effects that care, and maybe the streamers (if they care to
do beat-mapping or whatever).

You can route the playback information to the piano-rolls and streamers (for
the sake of simplicity I will call the piano-rolls and streamers "sequencer
plugins").

The outermost host has set the sample rate, and is process()ing all plugins.
When the transport is stopped, the sequencer plugins are doing nothing, so
no sound is being created.  When the transport is played, the sequencer
plugins will start to play.  So everything works just like a traditional
all-in-one host, right?

I think this case demonstrates that the primary clock is the sample clock,
and that the sample clock is all the host needs to worry about.  This model
doesn't include UST usage, but could very easily. So sample time and UST
time are the GMPI times.
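The model above can be sketched in a few lines. This is a toy illustration only (GMPI never defined such an API, and a real host would be C, not Python): the host maintains exactly one clock, the sample clock, and hands each plugin a block of frames stamped with the running sample time.

```python
# Hypothetical sketch: a host whose ONLY clock is the sample clock.
# All names here are invented for illustration.

class Plugin:
    def process(self, start_frame, num_frames):
        raise NotImplementedError

class CountingPlugin(Plugin):
    """Toy 'sequencer' that just tracks how many frames it has seen."""
    def __init__(self):
        self.frames_seen = 0
    def process(self, start_frame, num_frames):
        self.frames_seen += num_frames

class Host:
    def __init__(self, sample_rate, block_size):
        self.sample_rate = sample_rate
        self.block_size = block_size
        self.sample_time = 0          # the one clock the host maintains
        self.plugins = []
    def run_block(self):
        # Every plugin sees the same score-independent sample time.
        for p in self.plugins:
            p.process(self.sample_time, self.block_size)
        self.sample_time += self.block_size

host = Host(sample_rate=48000, block_size=512)
seq = CountingPlugin()
host.plugins.append(seq)
for _ in range(4):
    host.run_block()
print(host.sample_time)   # 2048
print(seq.frames_seen)    # 2048
```

Note that nothing here mentions tempo, meter, or transport; those only enter the picture below.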

So what is music time useful for at all?  Read on before you answer..

There is at least one problem with the above rig.  There is no concept of
transport position!

Suppose I want to jump halfway into the track.  What is halfway?  Halfway
through each sequencer track?  What if two streamers are different lengths?
What if the streamers and piano-rolls are the same length at one tempo, but
not at another?  Further, how does the host know how long each sequencer
plugin's data is?

Audio file streamers fundamentally operate on sample offsets (unless they do
beat mapping/time stretching, but let's call that the same as a piano-roll,
for now).  They want to be jumped in samples.

Piano-rolls fundamentally operate on musical units.  Given a sample offset
from start, they CAN convert to musical units with some knowledge of tempo
and meter, but they require that extra knowledge.  They really want to be
jumped in musical units.
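The "extra knowledge" a piano-roll needs is exactly a tempo/meter map. As a sketch (constant tempo and meter assumed for brevity; function names are invented), the two conversions look like this:

```python
# Converting between musical position and sample offset requires tempo,
# meter, and sample rate -- none of which an audio streamer needs.

def music_to_samples(bar, beat, tempo_bpm, beats_per_bar, sample_rate):
    """(bar, beat), both 1-based -> sample offset from sequence start."""
    total_beats = (bar - 1) * beats_per_bar + (beat - 1)
    seconds = total_beats * 60.0 / tempo_bpm
    return int(round(seconds * sample_rate))

def samples_to_music(sample, tempo_bpm, beats_per_bar, sample_rate):
    """Sample offset -> (bar, beat), both 1-based."""
    total_beats = sample / sample_rate * tempo_bpm / 60.0
    bar = int(total_beats // beats_per_bar) + 1
    beat = int(total_beats % beats_per_bar) + 1
    return bar, beat

# "Bar 40, Beat 3" at 120 BPM, 4/4, 48 kHz:
offset = music_to_samples(40, 3, 120.0, 4, 48000)
print(offset)                                    # 3792000
print(samples_to_music(offset, 120.0, 4, 48000)) # (40, 3)
```

With a dynamic tempo or meter map, each conversion becomes a walk over the map's segments rather than a single multiply, which is exactly the added complexity under discussion.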

Figuring out transport position for multiple independent sequencer plugins
is not fun.

Another problem is changes in the meter.  If the meter changes dynamically,
the sequencer plugins that care about meter have a hard problem.  What
happens to data that was sequenced at one meter but is no longer valid?  I'm
content to say that this has to be dealt with by the plugins, and it is
their problem.  But I think that is inelegant.

So maybe the level of granularity in this model is too much.  It seems to me
that transport position and time signature really are properties of the
sequencer.

If you assume that transport is only applicable to something with a known
length, then you have to know the length of a sequence before you can
transport into it.  So I see two choices:
  1) Allow a transport to control multiple sequencers, in which case we need
     to build some way for the sequencers to report their sequence length to
     the transport controller.  The transport controller can use that to
     pick the longest sequence as its basis.  Converting to/from
     sample/music time is possible.  Either unit COULD be used.  Given that
     (guessing) it will be more likely that people want to align to music
     time than sample time, musical time is what matters here.  Score-based
     music time which pauses with the playback, and jumps with the
     transport.
  2) Make transport be a function of the sequencer.  Having multiple
     sequencers CAN work, but there will be no way to transport within them
     all at the same time.  Straight playback will work just fine.
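Choice 1 can be sketched concretely (hypothetical names; a real GMPI mechanism would be a C-level query, not these classes): each sequencer reports its length, and the transport controller uses the longest one as its basis, so "halfway" is well defined for the whole rig.

```python
# Sketch of choice 1: sequencers report their length; the transport
# controller picks the longest sequence as its basis for seeking.

class Sequencer:
    def __init__(self, length_samples):
        self.length_samples = length_samples
        self.position = 0
    def seek(self, sample):
        # Sequences shorter than the seek point just sit past their end.
        self.position = min(sample, self.length_samples)

class TransportController:
    def __init__(self, sequencers):
        self.sequencers = sequencers
    def total_length(self):
        return max(s.length_samples for s in self.sequencers)
    def seek_fraction(self, fraction):
        target = int(self.total_length() * fraction)
        for s in self.sequencers:
            s.seek(target)
        return target

short, long_ = Sequencer(1_000_000), Sequencer(4_000_000)
t = TransportController([short, long_])
print(t.seek_fraction(0.5))  # 2000000 -- halfway through the LONGEST one
print(short.position)        # 1000000 -- the short one is pinned at its end
```

Even in this toy, "halfway" only means something relative to the longest sequence, which is the design question the text raises.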

If you consider that time signature fundamentally alters the behavior of a
sequencer plugin, then I see three options:
  1) Allow meter to be changed dynamically, and tell the sequencer plugin
     authors that they have to deal with it.  They can figure out heuristics
     that work for them.
  2) Make meter be a function of the sequencer.  Having multiple sequencers
     can work, but their meters are independent.  In fact, meter would
     probably NOT be something you want to do automation or events on.
     You absolutely CAN change meters, but they are known a priori.  They
     are not realtime.  Changing a meter will change the way the sequencer
     draws its alignments and how it handles sequenced data.
  3) Make meter be external to sequencer plugins but NOT realtime.  A
     meter-master (whether it was built into the host or a plugin) would
     allow you to build a static meter-map which could be shared with
     plugins.  Sequencers could use that static meter map to do their
     alignments, and other plugins that care can do whatever with it.  It
     would NOT change in realtime.
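A static meter map in the sense of option 3 might look like the following sketch (structure and names invented for illustration): built once by a meter-master, then shared read-only with any plugin that cares.

```python
# Sketch of option 3: a static, a-priori meter map.  It is built before
# playback and never mutated in realtime; sequencers query it to draw
# their bar alignments.

class MeterMap:
    def __init__(self, changes):
        # changes: list of (start_bar, numerator, denominator), bars
        # 1-based.  Static by design: no realtime edits.
        self.changes = sorted(changes)
    def meter_at(self, bar):
        """Return the (numerator, denominator) in effect at a given bar."""
        num, den = self.changes[0][1], self.changes[0][2]
        for start_bar, n, d in self.changes:
            if start_bar <= bar:
                num, den = n, d
            else:
                break
        return num, den

m = MeterMap([(1, 4, 4), (9, 3, 4)])   # 4/4 until bar 9, then 3/4
print(m.meter_at(1))    # (4, 4)
print(m.meter_at(12))   # (3, 4)
```

Because the map is immutable during playback, every plugin sees the same bar alignments without any realtime meter events, which is the point of option 3.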

One might argue that tempo is the same problem as meter.  I disagree.
Tempo is simpler and more likely to change in realtime than meter is.

What this leads me to believe is that sequencer plugins should not be
micro-plugins like I started with, but monolithic plugins.  They sequence
audio and music and whatever they want.  They control the meter, and they
control the transport.

> >> When does a sequencer plugin cease to become a plugin, and start
> >> becoming a host? <g>
> >
> >A question I have been asking myself more and more.  Having a dynamic tempo
> >is not too bad.  Having a dynamic time signature is a LOT more complicated.
> >Maybe it is not worth it?  Or maybe those FEW plugins that care about meter
> >will figure it out on their own?
> A plug-in format that assumes time signatures never change would be 
> pretty unacceptable IMHO.  Also if the host is doing its job, a 

I didn't mean that meter can't change, just that maybe it is better if it is
a static map.  Maybe.  Just conjecturing :)

> try to fix invalid event times, maybe by doing a modulo against the 
> current time sig and bumping the bar number, but on the other hand 
> maybe invalid events should never be delivered and the host should 
> signal an internal error to the user.

But what does the sequencer DO if it receives a meter-change from 4/4 to
3/4?  Drop the last note?  Shuffle it to the next bar?
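The problem in miniature (illustrative only; both policies below are just the two options named above, not anything GMPI specified): a note sequenced on beat 4 of a 4/4 bar has no legal position once the bar becomes 3/4, and the sequencer must pick a policy.

```python
# A 4/4 -> 3/4 meter change leaves beat-4 events with no valid home.

def revalidate(events, beats_per_bar, policy):
    """events: list of (bar, beat), 1-based.  Returns surviving events."""
    out = []
    for bar, beat in events:
        if beat <= beats_per_bar:
            out.append((bar, beat))
        elif policy == "shuffle":
            # Push the overflowing note to the downbeat of the next bar.
            out.append((bar + 1, 1))
        # policy == "drop": discard it entirely
    return out

notes = [(1, 1), (1, 3), (1, 4)]        # sequenced under 4/4
print(revalidate(notes, 3, "drop"))     # [(1, 1), (1, 3)]
print(revalidate(notes, 3, "shuffle"))  # [(1, 1), (1, 3), (2, 1)]
```

Neither answer is obviously right, which is why pushing this decision into every sequencer plugin feels inelegant.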

Sorry this got to be long.

Generalized Music Plugin Interface (GMPI) public discussion list
Participation in this list is contingent upon your abiding by the
following rules:  Please stay on topic.  You are responsible for your own
words.  Please respect your fellow subscribers.  Please do not
redistribute anyone else's words without their permission.

Archive: //www.freelists.org/archives/gmpi
Email gmpi-request@xxxxxxxxxxxxx w/ subject "unsubscribe" to unsubscribe
