[gmpi] Re: 3.15 MIDI

  • From: Chris Grigg <gmpi-public@xxxxxxxxxxxxxx>
  • To: gmpi@xxxxxxxxxxxxx
  • Date: Mon, 14 Jun 2004 11:48:39 -0700

I think there are some things to say along the lines of Martijn's point that not all criticisms of MIDI are air-tight. It helps to unpack them one at a time. Keeps the heat level down.

Tim said:

Indexing on keys is, IMHO, a limitation.

It's a model. Consider non-fretted stringed and keyboard instruments, e.g. piano, harp, organ. For these, a key index .is. a natural representation of the instrument. It also allows for things like sympathetic resonance on strings that aren't being played, etc.



The semantics of the same "key"
being sent a note-on more than once are not well defined.  Calling it a
"key" index makes that operation nonsensical, when it is really not
nonsensical at all.

For the instruments above, it's nonsensical to have more than one note-process per key. So you have some good cases for the model, and some bad ones.
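
To make that concrete, here's a rough sketch (not GMPI, not any real plugin API, all names invented) of how a key-indexed instrument handles a repeat note-on on the same key: it just re-excites the string that's already there, and the per-key state also makes the resonance behavior easy.

#include <array>

// Rough sketch only: a piano-like instrument keeping one persistent
// voice per key.  A second note-on for the same key re-strikes the
// existing string instead of creating a meaningless duplicate.
struct StringVoice {
    bool  sounding = false;
    float energy   = 0.0f;   // current excitation level
    // ... waveguide / oscillator state would live here
};

class KeyIndexedPiano {
public:
    void noteOn(int key, float velocity) {
        StringVoice& v = voices_[key];   // the key *is* the index
        v.sounding = true;
        v.energy  += velocity;           // re-striking a ringing string adds energy
        exciteSympathetics(velocity);
    }
    void noteOff(int key) { voices_[key].sounding = false; /* start damping */ }

private:
    void exciteSympathetics(float velocity) {
        // Per-key state makes resonance on strings not being played easy:
        // leak a little energy into every undamped string.
        for (StringVoice& v : voices_)
            if (!v.sounding) v.energy += 0.01f * velocity;
    }
    std::array<StringVoice, 128> voices_{};  // MIDI-style key range 0..127
};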



You really want to tell a soft instrument about voices, not keys.

Depends on the kind of instrument you want to have; again, see the examples above. Sometimes what you say is true, other times not. Besides, 'voice' itself is quite a limiting concept if what you're interested in is creating the kind of complex overall behavior that interesting real-world physical instruments tend to exhibit: interaction between notes, body resonance, buzzing strings, etc.
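
For contrast, a voice-oriented interface would look more like the sketch below (again, invented names, nothing from any spec): the caller hands out a voice ID and addresses all later control to that ID, with pitch as a plain continuous value. That buys you per-voice control, but notice that nothing in it expresses the couplings I just mentioned; those still have to live inside the instrument.

#include <cstdint>

// Illustration only -- none of these types come from MIDI or GMPI.
// Notes are addressed by a caller-chosen voice ID rather than a key,
// and pitch is a continuous value rather than a 12-tone key number.
struct VoiceNoteOn {
    uint32_t voiceId;    // handle the caller reuses for later control
    double   pitchHz;    // continuous pitch
    float    amplitude;
};

struct VoiceControl {
    uint32_t voiceId;    // which sounding voice this applies to
    uint32_t paramId;    // e.g. brightness, bow pressure, ...
    float    value;
};

struct VoiceNoteOff {
    uint32_t voiceId;
};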



And
even then, the soft instrument may be set up in ways you can't know a
priori.  Maybe it is set up to be mono - each new voice kills the previous
voice.  Maybe it is set up to not map its sound to traditional 12-tone
"keys" at all.

This is not unique to MIDI; it applies to any conceivable control protocol. So were you trying to get at another point here?



A proper upright bass (or any stringed instrument) does
not really have the notion of a key - it has an infinite number of
pitches, but only one voice per string.

As I said, indexing on keys is one model. Here's a case where it's not a natural model. There are other cases for which it is. All this shows is that no single model does well in all cases. This is one of those 'different is not necessarily the same as better' conversations.



MIDI just does not map to some of these things.  You can force it to map
by futzing with pitch-bend, but even that is not perfect.  I want to be
able to control a number of parameters per-voice.  I just can't do that in
MIDI without setting up a channel for each voice.

So set up a channel for each voice. What's the problem? Your objective isn't prohibited, it's just slightly cumbersome.
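
For what it's worth, the cumbersome version is only a few lines. Something like the sketch below (sendMidi() is a stand-in for whatever output routine you actually have) rotates each new note onto its own channel, and then per-voice pitch bend and controllers fall out for free:

#include <cstdint>
#include <cstdio>

void sendMidi(uint8_t status, uint8_t d1, uint8_t d2) {
    std::printf("%02X %02X %02X\n", status, d1, d2);  // stand-in for real output
}

class ChannelPerVoice {
public:
    // Start a note on its own channel; returns the channel for later control.
    int noteOn(uint8_t key, uint8_t velocity) {
        int ch = next_;
        next_ = (next_ + 1) % kChannels;
        sendMidi(0x90 | ch, key, velocity);
        return ch;
    }
    // Per-voice pitch: 14-bit pitch bend on that voice's private channel.
    void bendVoice(int ch, double semitones, double range = 2.0) {
        int value = 8192 + int(semitones / range * 8192.0);
        if (value < 0)     value = 0;
        if (value > 16383) value = 16383;
        sendMidi(0xE0 | ch, value & 0x7F, (value >> 7) & 0x7F);  // LSB, MSB
    }
    // Per-voice controller, e.g. some CC used as a timbre control.
    void controlVoice(int ch, uint8_t cc, uint8_t value) {
        sendMidi(0xB0 | ch, cc, value);
    }
    void noteOff(int ch, uint8_t key) { sendMidi(0x80 | ch, key, 0); }

private:
    static const int kChannels = 16;
    int next_ = 0;
};

The obvious cost is that each sounding voice eats a channel, so at most 16 voices per port. That's the cumbersome part, not anything impossible.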



> could be supported using some other protocol than MIDI. That doesn't
> mean MIDI should no longer be supported.

Why put MIDI into every plugin? Why not let the hosts be *REALLY* good at MIDI and let the plugins just not worry about it? Every plugin gets simpler by that little bit.

That work has already been done by, what, hundreds of plug-in developers, so for them there's no savings. You'd have to do work to take it out.
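
For reference, the host-side translation you're describing isn't much code either. A minimal sketch, with completely made-up plugin-side names, might be:

#include <cstdint>
#include <vector>

// Hypothetical sketch of "let the host be *REALLY* good at MIDI": the
// host parses raw bytes and calls decoded-event methods, so a plugin
// that doesn't care about MIDI never sees a status byte.  None of these
// names are from the GMPI spec.
struct PluginEvents {
    virtual void noteOn(int key, float velocity) = 0;
    virtual void noteOff(int key) = 0;
    virtual void control(int id, float value) = 0;
    virtual ~PluginEvents() {}
};

void hostTranslateMidi(const std::vector<uint8_t>& msg, PluginEvents& plug) {
    if (msg.size() < 3) return;            // only 3-byte channel messages here
    switch (msg[0] & 0xF0) {
    case 0x90:                             // note-on; velocity 0 means note-off
        if (msg[2] != 0) plug.noteOn(msg[1], msg[2] / 127.0f);
        else             plug.noteOff(msg[1]);
        break;
    case 0x80: plug.noteOff(msg[1]); break;
    case 0xB0: plug.control(msg[1], msg[2] / 127.0f); break;
    default:   break;                      // everything else stays host-side
    }
}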


-- Chris G.

----------------------------------------------------------------------
Generalized Music Plugin Interface (GMPI) public discussion list
Participation in this list is contingent upon your abiding by the
following rules:  Please stay on topic.  You are responsible for your own
words.  Please respect your fellow subscribers.  Please do not
redistribute anyone else's words without their permission.

Archive: //www.freelists.org/archives/gmpi
Email gmpi-request@xxxxxxxxxxxxx w/ subject "unsubscribe" to unsubscribe
