[gmpi] Re: 3.9 (draft) use cases and stuff

  • From: Tim Hockin <thockin@xxxxxxxxxx>
  • To: gmpi@xxxxxxxxxxxxx
  • Date: Mon, 23 Feb 2004 10:53:05 -0800

Chris, I just want to say that I really enjoy getting enormously long emails
in response to my long emails.  Hopefully my reply will be not quite as
long, but who knows :)  I write a lot of details because I want to make
myself clear.  Email is too high latency to be terse :)  I'm also going to
leave enough context for this email to be usable on its own to follow the
discussion ;)

Edit: 
After typing it - sorry it's so long - bear with me, I think we're getting
places!!  We have a couple issues being addressed all at once.   I am
certainly not clear on all the needs or solutions.  I've asked a few times
in here "do we need..." and "do we require...".  Those are the fundamental
questions.  Answering those will enlighten the rest, I think.

On Mon, Feb 23, 2004 at 03:37:57PM +0100, Chris Grigg wrote:
> Phew!   Not sure we haven't left the tracks here, but I think a 
> couple responses are called for.

Well, I think this MOSTLY focuses on the use case, but that requires the
setup cost of explaining my mental model.

> >On Fri, Feb 20, 2004 at 04:19:34PM -0800, Chris Grigg wrote:
> >A hypothetical, pathological case which is what I use to contemplate how
> >things will work:
> 
> I don't think this is necessarily so pathological.  It .is. a good 
> way to explore the issues involved with allowing GMPI plugs to drive 
> stuff that other plug APIs delegate to the host exclusively, as 
> we've been hoping to be able to enable.

That's exactly it.  A lot of things we talk about in GMPI are traditionally
the domain of the host.  Maybe they should stay there.  Maybe not.  That's
what this mental model helps me figure out.

> >* Assume a host with no sequencer or mixer or anything built in, just GMPI,
> >  MIDI I/O, and audio I/O.
> >
> >* Assume a plugin which streams very large audio files (from disk or
> >  memory).
> >
> >* Assume a plugin which is a piano-roll of arbitrary length.
> 
> This means music events?  Is there any assumption about what would 
> be rendering the events?  A GMPI plug-in?

The assumption is GMPI plugins, but the events could be sent outside the
GMPI graph to a MIDI instrument or something.  Keeping it all GMPI is
easier.

> >* Assume a mixer plugin with some inputs, some outputs, insert and send
> >  effects, maybe busses, etc.
> >
> >* Assume a music-master plugin which controls tempo and meter, and maybe 
> >has
> >  SMPTE input.
> >
> >* Assume a transport-master plugin which controls playback 
> >(play/pause/stop).
> >
> >You can sequence your musical data in one or more instances of the 
> >piano-roll
> >plugin, sending its output to some GMPI instruments (GMPIi?).
> 
> Personally I was hoping to avoid having a different API type or plug 
> designation for instruments vs. effects etc., though that may be 
> over-optimistic.

I don't think there is a different interface needed.  Whether you think of
this stuff as MIDI events bound for a specific port:channel (with all the
events in series) or you think of it as a bunch of virtual wires (with all
the events in parallel) the protocol is the same as any plugin->plugin
control.  Was that what you meant?
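
To make that concrete, here's a rough C sketch of what I mean by "the
protocol is the same" (every name here is hypothetical, nothing is actual
GMPI API): the event payload and the per-block delivery don't change, only
the addressing does.

    /* Hypothetical sketch, not real GMPI.  The same timestamped event is
     * delivered whether it is addressed by port:channel (one serialized
     * stream) or arrives on a dedicated "virtual wire" (parallel
     * connections, where the wire itself identifies the target). */
    #include <stdint.h>

    typedef struct {
        uint64_t time;     /* sample-accurate offset into this block    */
        uint32_t type;     /* NOTE_ON, CONTROL_CHANGE, TEMPO, ...       */
        uint32_t target;   /* port:channel in the serialized model;
                              unused in the virtual-wire model          */
        double   value;    /* payload                                   */
    } event_t;

    /* Either way, process() just walks a time-ordered list of these;
     * how they got routed is the host's business. */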

> >You can load your audio tracks into one or more instances of the streamer
> >plugin.
> >
> >You can send the audio output of the instruments and streamers to the mixer
> >plugin, and the results of the mixer plugin to the host's audio output.
> >
> >You can route the tempo information from the tempo-master to the
> >piano-rolls, any effects that care, and maybe the streamers (if they care 
> >to
> >do beat-mapping or whatever).
> >
> >You can route the playback information to the piano-rolls and streamers 
> >(for
> >the sake of simplicity I will call the piano-rolls and streamers "sequencer
> >plugins").
> >
> >
> >The outermost host has set the sample rate, and is process()ing all 
> >plugins.
> >When the transport is stopped, the sequencer plugins are doing nothing, so
> >no sound is being created.  When the transport is played, the sequencer
> >plugins will start to play.  So everything works just like a traditional
> >all-in-one host, right?
> 
> Except that in traditional hosts there's frequently only one possible 
> musical timeline.

As I said above "Assume a host with no sequencer or mixer or anything built
in.." :)  This is not a traditional host we're talking about - it's a
pathologically dumb host :)  If you set things up as I assume above, and
play from the start with no transport jumps, it will "work" like a
traditional host.  The fact that there COULD BE things that are different is
not applicable right now :)

> >There is at least one problem with the above rig.  There is no concept of
> >transport position!
> 
> Do you think this is truly a problem, other than for the 'halfway jump' 
> mentioned in the next paragraph?

Well, I know *I* find it useful to use the transport to jump or scan through
a piece, sometimes.  All I see is a fixed-width transport widget, so when I
click in the middle of it, I expect it to jump to ~50% of the way through
the track (whether that is 50% of the track in seconds or 50% in bars...)

> >Suppose I want to jump halfway into the track.  What is halfway?  Halfway
> >through each sequencer track?  What if two streamers are different lengths?
> >What if the streamers and piano-rolls are the same length at  one tempo, 
> >but
> >not at another?  Further, how does the host know how long each sequencer
> >plugin's data is?
> 
> I thought we covered this case before and thought it wasn't very 
> useful: a) In many cases, track length has no meaning;

For an "effects rack" kind of host or a host that just takes MIDI input and
renders it, there is no track length, and therefore there is no concept of
transport at all, right?

> b) if there 
> are multiple chunks in the session, the end of the session is the end 
> of the last chunk,

Not sure what you mean by chunk?  

>and c) Does anyone really care about jumping to 
> halfway anyway?  Jumping to a known position (offset from start or 
> other known, meaningful, position) is the general case; 'halfway' 
> seems like one of those contrived use cases you're trying to avoid.

Ok, a known position, then.  I want to look at the 4th piano-roll plugin and
jump to the spot right before some note starts playing.  With a multitude of
sequencer plugins and a transport that is independent thereof, how?  If
Transport is a function of a sequencer plugin, we can (maybe) build a simple
cross-plugin transport protocol on top of the simpler GMPI event protocol.

This is just a quick think-up:

Each sequencer plugin has a TRANSPORT output which emits transport-change
events.

The host receives transport-change events and sends them to the other
sequencer plugins.

I can then jump to any point within any sequencer plugin, and all the other
transports will sync.
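
In rough C, that think-up might look like this (names and types are all made
up on the spot, not anything GMPI defines):

    /* Hypothetical sketch of the think-up above -- not a real GMPI API. */
    #include <stdint.h>

    typedef struct {
        uint64_t sample_pos;   /* new locate position, in samples   */
        double   bar, beat;    /* same position, in musical units   */
    } transport_change_t;

    /* Each sequencer plugin has a TRANSPORT output that emits one of
     * these when the user jumps around inside it.  The host forwards
     * the event to every other sequencer so they all chase to the
     * same spot. */
    void route_transport_change(const transport_change_t *ev,
                                int origin,
                                const int *sequencers, int n,
                                void (*send)(int plugin,
                                             const transport_change_t *ev))
    {
        for (int i = 0; i < n; i++)
            if (sequencers[i] != origin)  /* don't echo to the source */
                send(sequencers[i], ev);
    }

Whether the position field should be carried in samples, musical units, or
both runs straight into the sample-time vs. musical-time problem further
down.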

The alternatives seem to be:  a) don't provide cross-sequencer transport sync;
or b) don't support sequencer plugins.

(side note: can a plugin GUI have multiple windows?  If we allow sequencer
plugins, it is almost required..)

> >Audio file streamers fundamentally operate on sample offsets (unless they 
> >do
> >beat mapping/time stretching, but let's call that the same as a piano-roll,
> >for now).  They want to be jumped in samples.
> >
> >Piano-rolls fundamentally operate on musical units.  Given a sample offset
> >from start, they CAN convert to musical units with some knowledge of tempo
> >and meter, but they require that extra knowledge.  They really want to be
> >jumped in musical units.
> 
> Not sure what you're saying.  In a mixed musical time + samples 
> environment, musical time is always (in my experience, at least) 
> pulled by the sample time.  GMPI isn't going to change that.

Let me try to clarify (maybe I am seeing an issue where there is none).

Imagine a dumb wav file player - no beat mapping or tempo sync.  Just plays
a wav file at a constant speed.  It might have time markers at certain
sample offsets.

Imagine a simple piano-roll sequencer.  It obviously lines things up on
musical boundaries.

If I ask them to jump to position "sample #12345", then the wav player can
just jump there, while the piano-roll needs to know the history of tempo
changes to get it right.

If I ask them to jump to position "bar 12, beat 3", then the piano-roll can
just jump there, while the wav player needs to know the history of tempo
changes to get it right.

That is - IFF we want to do transport sync across sequencer plugins.  If you
just have one sequencer (as in a traditional host), then most of this is
moot.
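
To show what "needs to know the history of tempo changes" means in practice,
here's a toy C sketch (the tempo-map structure is mine, not GMPI's): to turn
"bar 12, beat 3" into a sample offset you have to walk every tempo change
before that point.

    /* Hypothetical piecewise-constant tempo map, purely for illustration. */
    #include <stdint.h>

    typedef struct {
        double   beat;     /* musical position where this tempo starts */
        uint64_t sample;   /* the same position, in samples             */
        double   bpm;      /* tempo from here until the next entry      */
    } tempo_entry_t;

    /* Musical position -> samples.  Note that we must scan the history
     * of tempo changes; a single "current tempo" is not enough. */
    uint64_t beats_to_samples(const tempo_entry_t *map, int n,
                              double beat, double sample_rate)
    {
        int i = 0;
        while (i + 1 < n && map[i + 1].beat <= beat)
            i++;                          /* find the governing segment */
        double secs = (beat - map[i].beat) * 60.0 / map[i].bpm;
        return map[i].sample + (uint64_t)(secs * sample_rate);
    }

The samples-to-beats direction is the mirror image, so whichever unit the
locate event carries, one class of plugin ends up needing this map (or
asking something that has it).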

> >Figuring out transport position for multiple independant sequencer plugins
> >is not fun.
> 
> Why would you need to?  Other than for the jump-halfway problem, why 
> can't each sequencer plug-in figure out its own position?  Just send 
> them all the same locate command, for a sample time you like.  Each 
> sequencer will chase to that spot.

Right - they all need to seek to the same position.  Whichever unit we
choose means one type of plugin has to know tempo history, right?

> I think things are getting mixed up here.  Suggest you distinguish 
> between a) transport control (play, pause, stop, etc.) which has no 
> sense of position, and b) locate (position) control, which has no 

Yes - I have been using "transport" to mean the playback position (darn
silly names).  I will from now on use "transport" to mean "playback state
(play/pause/stop)" and "locate" to mean "playback position".  Unless we have
better words?

> sense of transport control.  Then you can broadcast transport control 
> to the sample clock and all musical timeline masters and allow each 
> musical timeline to handle its own location respecting its own tempo 
> & meter changes.

This pre-supposes that tempo and meter are functions of each sequencer
plugin.  Not that I am necessarily against that, I just don't want to
pre-suppose it.  I really am starting to believe that meter and locate
are part of the sequencer (whether the sequencer is the host or is a GMPI
plugin).  I could deal with tempo being the same.

It is still a requirement that a plugin be able to change the tempo.  Is it
a requirement that a plugin be able to change the meter?  And is it a
requirement that a tempo change be enacted or is it optional (as in VST)?
How about meter - optional or not?  How about transport and locate?  Can a
plugin change the transport/locate of its time controller?
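
Just to make those questions concrete, this is roughly the interface surface
I'm asking about (entirely hypothetical names; the question is which of
these calls are mandatory, which are optional, and which are only requests
that the time controller may refuse, VST-style):

    /* Hypothetical -- what a plugin might be allowed to ask of its time
     * controller.  Each returns nonzero if the request was honored. */
    typedef struct time_controller time_controller_t;

    int tc_set_tempo(time_controller_t *tc, double bpm);        /* required?        */
    int tc_set_meter(time_controller_t *tc, int num, int den);  /* required?        */
    int tc_set_transport(time_controller_t *tc, int playing);   /* play/pause/stop? */
    int tc_set_locate(time_controller_t *tc, double beat);      /* jump?            */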

> In fact, in principle there can be several kinds of locate control:
> - Absolute locate control (jump to a given time offset from start of 
> piece; if in sample time this is the same for all listeners, but if 
> in musical time it has to be in reference to some one specific 
> musical timeline, never more than one),
> - Relative locate control (signed offset relative to current 
> position; if in sample time this is the same for all listeners)
> - Proportional locate control (jump to X% of the way through the 
> piece, like your halfway example).
> 
> (Actually there are 6 kinds, not 3, because there's a sample time 
> version and a musical time version of each one.)
> 
> The only one of these that's ever problematic is proportional 
> control, and since it's edge-case, I suggest dropping it as a use 
> case and deciding we don't care about making this doable in GMPI..

I'm fine with dropping percentage.  There is still the problem that converting
between sample time and music time requires temporal history.

Another niggle:  IF we allow multiple sequencer plugins, and IF you want to
have a global locate widget that controls them all, you need some protocol
to expose each sequencer plugin's sequence length.  Otherwise how does the
global locate widget know how to turn "I clicked at 50% of the widget" into
"12345678 samples" or "50 bars" position?

> >Another problem is changes in the meter.  If the meter changes dynamically,
> >the sequencer plugins that care about meter have a hard problem.  What
> >happens to data that was sequenced at one meter but is no longer valid?  
> >I'm
> >content to say that this has to be dealt with by the plugins, and it is
> >their problem.  But I think that is inelegant.
> 
> Unless you've solved time travel, this is insoluble.  8-)

Well, it can be contained by sane limits in the requirements, maybe :)

> >So maybe the level of granularity in this model is too much.  It seems to 
> >me
> >that transport position and time signature really are properties of the
> >sequencer.
> 
> But that doesn't fix the time travel problem, does it?  Don't you get 
> to a better place by separating transport & locate as suggested above?

There are two issues that I came to.

1) Meter changes that are unforeseen cause ugly problems.  Do we really need
to support plugins changing the meter in real time?

2) Transport changes are hard (but not impossible) to sync across sequencer
plugins.  Do we really need to?

> >If you assume that transport is only applicable to something with a known
> >length, then you have to know the length of a sequence before you can
> >transport into it.  So I see two choices:
> >  1) Allow a transport to control multiple sequencers, in which case we 
<snip for brevity>
> >  2) Make transport be a function of the sequencer.  Having multiple
<snip for brevity>

> >If you consider that time signature fundamentally alters the behavior of a
> >sequencer plugin, then I see three options:
> >  1) Allow meter to be changed dynamically, and tell the sequencer plugin
> >     authors that they have to deal with it.  They can figure out 
<snip for brevity>
> >  2) Make meter be a function of the sequencer.  Having multiple sequencers
<snip for brevity>
> >  3) Make meter be external to sequencer plugins but NOT realtime.  A
<snip for brevity>

> >What this leads me to believe is that sequencer plugins should not be
> >micro-plugins like I started with, but monolithic plugins.  They sequence
> >audio and music and whatever they want.  They control the meter, and they
> >control the transport.
> 
> This is mainly predicated on 'If you assume that transport is only 
> applicable to something with a known
> length' which I refuted above.  Without that predicate, most of your 
> constraints fall away and the conclusion isn't needed.  Also I'm not 

You didn't refute it - there are still a lot of problems with it.  Explain
to me what it means to locate in an environment where you don't know the
end of the data?  You either have an end-of-sequence, in which case you can
jump to any point of the sequence, or you don't have a sequence.  Or am I
being dense?  As a practical matter, if you have a sequence, you have an
end-of-sequence.

> sure I agree that 'time signature fundamentally alters the behavior 
> of a sequencer plug-in' is right, see below.

Seeing below.  Find my answer there. :)

> >But what does the sequencer DO if it receives a meter-change from 4/4 to
> >3/4?  Drop the last note?  shuffle it to the next bar?
> 
> All else being equal, the actual time intervals (scaled by tempo of 
> course) between events stored in the sequence probably need to be 
> preserved.  (Time sig is in many important ways just a matter of 
> perception, anyway.)  If you start with 4/4 time and a note on beat 1 
> of bar 2, and then you change the piece's meter to 3/4, the note 
> should happen on beat 2 of bar 2.

That's one answer.  What I am saying is that changing the meter on an
existing sequence is undefined.  There is no right answer.  In fact,
probably ALL answers are wrong for some intents.

If you consider all music events to be aligned on ticks, where ticks are
defined per beat, then changing the number of beats per measure may
invalidate the sequenced data.  If you consider all music events to be
aligned to some % of a measure, then changing the number of beats per
measure may give you incorrect timing. If you change the base note of the
timesig, you may end up with unintended chords.
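
A made-up example of why every answer is wrong for somebody: at 480 ticks
per beat in 4/4, a note sequenced on beat 4 of bar 1 sits at tick 1440.
Keep the ticks and reinterpret the stream as 3/4, and that note is now beat
1 of bar 2.  Keep the "75% of the way through the bar" interpretation
instead, and it lands at beat 3.25 of a 3/4 bar - a position the author
never wrote.  Both readings are defensible; neither is necessarily what was
meant.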

So what is the answer?  It seems that it does make sense to send meter
changes to other plugins.  An auto-drummer would want to know that the meter
changed from 4/4 to 7/8.  I can even believe that a plugin (such as the
plugin that analyzes realtime input for tempo) would send a meter change to
the auto-drummer.  But it just seems wrong to change the meter on sequenced
data.

So maybe the answer is for sequencer plugins to NOT receive meter change
events in realtime, but to make the meter be realtime controllable.

Maybe the answer is a semi-static tempo/meter map that is exposed by some
entity.  Changes to tempo/meter can be sent dynamically, too. Plugins can
access that map if they want a view of the meter that does not change in
realtime.  Plugins can opt to receive events if all they care about is the
current tempo/meter.  Plugins should not do both, but they could.
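
In C, the semi-static map idea might look something like this (again, all
names are invented here, not GMPI):

    /* Hypothetical tempo/meter map owned by one entity (the host or a
     * master plugin).  Sequencers that must not be surprised pull the
     * whole map when they (re)build their view; plugins that only care
     * about "the meter right now" just subscribe to change events. */
    #include <stddef.h>

    typedef struct {
        double beat;       /* position where this entry takes effect */
        double bpm;        /* tempo from this entry onward            */
        int    meter_num;  /* e.g. 7 for 7/8                          */
        int    meter_den;  /* e.g. 8 for 7/8                          */
    } map_entry_t;

    typedef struct {
        const map_entry_t *entries;  /* time-ordered */
        size_t             count;
    } tempo_meter_map_t;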

Tim
