You're losing me, Tim. I can't tell where you mean the messages the
controller is sending vs. where you mean how the plug is interpreting
those messages. And I think focusing so much on the plug side is
obscuring the practical use case. So can you please tell me the
story from the performance side? Assuming the guitar controller
case is addressed by sending each string on a separate channel, then
within any given channel, is the controller supposed to be able to
send in different modes (like 1-VPID-per-note vs. 1-VPID-per-channel)
at different times, or only in one mode? If yes, then how does the
controller know when to go into each possible mode? If no, please
describe the one mode.
Also, I don't understand why it would be good for real voices to
continue to run after being divorced from any VPID; that seems more
like a problem than an opportunity. If it's only being done to allow
the same VPID to refer to 2 separate sequential notes, then that also
sounds like a problem since it seems to defeat the initial purpose of
allowing the controller to explicitly command monophonic behavior for
those 2 notes.
-- Chris G.
On Fri, Dec 02, 2005 at 04:16:13PM -0800, Chris Grigg wrote:
> you mean the VPID allocation scheme (host) or the real voice-process
> allocation scheme (plugin)
> Isn't the plug's internal voice allocation scheme relative to VPIDs
> kind of orthogonal to this? All the plug can do is either obey or
> ignore VPIDs. This is talking about the case where the plug obeys
> them.
You say obey and ignore, but I want to make an aspect of my mental model clearer: it's not about obey or ignore. A VPID is simply a handle that the host uses to address a process (see, new terms :) on the plugin. The plugin chooses how to map VPIDs <-> processes. On to your question.
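To make the handle/process decoupling concrete, here is a quick C++ sketch. The names (Voice, VoiceTable) are mine, not anything from a GMPI draft; it just shows that the host only ever speaks in VPIDs, while the plugin privately owns the mapping to its internal voice processes:

```cpp
#include <array>
#include <cstdint>
#include <unordered_map>

// Sketch only: a plugin-side table mapping host-visible VPID handles
// to internal voice processes.
struct Voice {
    bool running = false;
};

class VoiceTable {
public:
    // Bind a VPID handle to a freshly allocated internal voice.
    Voice* attach(uint32_t vpid) {
        Voice* v = allocate();
        if (v) map_[vpid] = v;
        return v;
    }
    // Resolve a VPID back to its voice process, if any.
    Voice* lookup(uint32_t vpid) {
        auto it = map_.find(vpid);
        return it == map_.end() ? nullptr : it->second;
    }
private:
    Voice* allocate() {
        for (auto& v : pool_)
            if (!v.running) { v.running = true; return &v; }
        return nullptr; // pool exhausted; a real plugin would steal a voice
    }
    std::array<Voice, 8> pool_{};               // fixed polyphony for the sketch
    std::unordered_map<uint32_t, Voice*> map_;  // the VPID <-> process mapping
};
```

The point is that nothing in the host's view constrains how `map_` is maintained; obey/ignore is too coarse a description of what the plugin can do with it.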
> Can you clarify for me?
> Consider an imitative vibes patch (or piano with sustain pedal held
> down, or cymbals), with the controller sending messages using
> 1-voice-per-note addressing/allocation. At the synth end, the
> oscillator of the old note continues to run past the time the next
> note is hit, so it's still occupying a voice. (This is true even if
> the pedal's not held, it's just that the overlap period is shorter.)
> More than one voice is consumed in total, even though they were all
> triggered from the same guitar string on the physical controller
> side. Select an imitative guitar patch and things go all to hell.
Sure. This works in either reuse-VPID or never-reuse-VPID models. In the reuse-VPID model, the voice-process can still keep playing after the VPID has been re-used. You just can't address it anymore; you cannot send it any more events. Consider the voice-process to be detached from the VPID handle, but still running its tail.
The plugin can choose to operate this way when you choose the vibes patch.
If you were using the never-reuse-VPID model, the first VPID would still be attached to the voice-process, and you could send further events.
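In code, the detach might look something like this (again, names are my own invention, just a sketch of the idea):

```cpp
#include <cstdint>
#include <unordered_map>

// Reuse-VPID case: when the host re-uses a VPID, the plugin divorces
// the old voice from the handle. The old voice keeps rendering its
// tail; it just can no longer receive events.
struct TailVoice {
    bool running = false;     // still producing audio
    bool addressable = false; // still reachable through a VPID
};

struct VibesPatch {
    TailVoice voices[8];
    int next = 0;
    std::unordered_map<uint32_t, TailVoice*> byVpid;

    TailVoice* noteOn(uint32_t vpid) {
        auto it = byVpid.find(vpid);
        if (it != byVpid.end())
            it->second->addressable = false; // detach, but let the tail ring
        TailVoice* v = &voices[next++ % 8];  // naive round-robin allocation
        v->running = true;
        v->addressable = true;
        byVpid[vpid] = v;
        return v;
    }
};
```

After two note-ons with the same VPID, the first voice still has `running == true` (its tail keeps sounding) but `addressable == false` (the host's handle now points at the second voice).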
> By contrast, consider an imitative guitar patch, with the controller
> sending messages using 1-voice-per-string addressing/allocation. At
> the synth end, every time a new note is hit, it replaces the
> previous note because by definition it uses the same oscillator.
> Only 1 voice is consumed in total per string, and the behavior of
> each synth voice closely mirrors the acoustic behavior of the
> corresponding string of the physical guitar controller -- good for
> imitative guitars. But select a vibes patch and things again go all
> to hell.
Sure. This works in either reuse-VPID or never-reuse-VPID models. In the reuse-VPID model, the voice-process is stopped when the VPID is re-used.
The plugin can choose to operate this way when you choose the guitar patch.
If you were using the never-reuse-VPID model, the plugin would need to know that the new VPID was intended to replace the old VPID. It could use the floorf() of the pitch, or it could use the channel number or something else. All it needs to do is recognize that VPID(2) is killing VPID(1).
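A sketch of that recognition step, assuming the one-string-per-channel scheme from your guitar scenario (so the channel number is the key; the floorf-of-pitch variant would look the same with a different key):

```cpp
#include <cstdint>
#include <unordered_map>

// Never-reuse-VPID case: the plugin itself decides that a new VPID on
// a given channel (string) replaces whatever VPID was sounding there.
struct StringNote {
    uint32_t vpid = 0;
    bool running = false;
};

struct GuitarPatch {
    std::unordered_map<int, StringNote> byChannel; // one entry per string

    void noteOn(int channel, uint32_t vpid) {
        auto it = byChannel.find(channel);
        if (it != byChannel.end())
            it->second.running = false; // VPID(2) is killing VPID(1)
        byChannel[channel] = StringNote{vpid, true};
    }
};
```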
> MIDI 1.0, but then again I'd suggest the controller might well want
> to know about that, too, since in the GMPI world it has VPID's to
> send.
We *might* want the controller to know the model that the plugin wants, but I don't think it is needed. You can operate in both modes above as well as true mono mode completely internally.
Hosts can always operate in the simplest mode: never re-use VPIDs until you really wrap 32 bits. Plugins can do what is right for them internally, whether that is per-patch or not.
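That simplest host mode is barely any code at all. A sketch (my naming, nothing normative): a monotonically increasing 32-bit counter, where unsigned arithmetic gives you the wrap for free, so a VPID only repeats after 2^32 allocations:

```cpp
#include <cstdint>

// Host-side "simplest mode": never re-use a VPID until the 32-bit
// counter genuinely wraps around.
struct VpidAllocator {
    uint32_t next = 0;
    uint32_t alloc() { return next++; } // wraps at 2^32 by definition
};
```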
Does that make my intentions clearer? It's important to see the decoupling between a VPID and the actual process. What I really need is an animation. :) Maybe I'll whip something up in Flash, if this doesn't clarify.
---------------------------------------------------------------------- Generalized Music Plugin Interface (GMPI) public discussion list Participation in this list is contingent upon your abiding by the following rules: Please stay on topic. You are responsible for your own words. Please respect your fellow subscribers. Please do not redistribute anyone else's words without their permission.
Archive: //www.freelists.org/archives/gmpi Email gmpi-request@xxxxxxxxxxxxx w/ subject "unsubscribe" to unsubscribe