On Thu, 2003-10-09 at 23:43, Tim Hockin wrote:

> > - How are multiple voices handled in a synth plugin?
>
> GMPI: We're not there yet. We're gathering requirements. Do you have a
> requirement in this area?
>
> ME: Voices are not really part of the API. They are internal to the synth.
>
> > - Is there an agreement on what a "channel" is?
>
> GMPI: We're not there yet. We're gathering requirements. Do you have a
> requirement in this area?
>
> ME: 'Channel' is so overloaded. What do you really mean?

I didn't explain very well, sorry. I meant: I know that the word
"channel" is used with different meanings; did you agree on a definition
for it? Also, did you agree on a way to handle polyphony?

> I've envisioned something akin to channels being used for this, too. I
> called them modules. Each plugin can define N module templates. Upon
> request, it can instantiate those modules. You can use a module for each
> MIDI channel. You can make a mixer that has mono, stereo, and 5.1 module
> types, and load as many of each as you want. But we're not NEARLY there
> yet.

If I understood correctly, this is quite similar to what I intended,
though my plugin structure is a little more flexible. The goal of my
definition of channels, states and groups of parameters is to allow
multiple voices for a synth, just like multiple... ehm, channels (the
level controls in a hardware mixer), with the same paradigm. The goal of
such an approach is simplicity, flexibility and consistency. If you
think it could be useful, I could post how I imagine the structure of a
plugin.

> > - Will network plugins be allowed (plugins that contain other plugins
> > in a graph; in that case the host could be a thin layer around a
> > network plugin)?
>
> GMPI: Nothing stops that from happening.

This is different from saying that the API is designed with that goal in
mind. In the latter case the API may need "interfaces" to allow plugins
that are well written from the point of view of usability and the
features they provide.
> ME: It should be easy to take the GMPI SDK and make a plugin that is a
> GMPI host. It should also be easy enough for a host to store a
> sub-graph of plugins as a preset and re-load that. We may decide to
> formalize such a subgraph preset as part of GMPI.

Yes, I agree. And in my view, that is accomplished with state
saving/restoring. This approach seems to me much simpler and more
consistent than defining a special interface for it.

> > - Will sequencer plugins be allowed (on the same graph level as the
> > plugins they control)?
>
> GMPI: Nothing stops that from happening.
>
> ME: That is the 'right' way to do it.

Agreed. For me, an extra requirement is to allow on-the-fly recording
from a generic plugin to a sequencer plugin that is in the same graph.

> > - Will shared resources be allowed (for example a wave, an envelope,
> > a song or a graph configuration shared between many plugins)?
>
> I don't understand the question - explain?

By "resource" I mean something mostly static (though it may be changed
by plugins) that is accessed by a plugin. It's a definition (and an
implementation) that can handle a sample wave for a sampler plugin as
well as envelopes, songs for a sequencer plugin, or graphs for a network
plugin. They could all be part of the state of a plugin, but if you want
them to be shared by many plugins (as waves or envelopes are in most
trackers), they must be a separate entity.

> > - Is the distinction between audio data and control data really
> > needed? (I consider audio and control just *hints* for a parameter)
> > - How will memory be made available to a plugin (a function, an
> > event, or...)?
>
> We hadn't argued this too much yet. It's very paradigmatic. If audio
> and control are essentially indistinguishable, you have some advantages
> and some disadvantages. Ditto for the reverse. I think everyone has
> tentatively agreed that event-driven, just-in-time delivery of atomic
> (and maybe ramped) control changes is the sanest approach TODAY.
Ok, but in some cases you may want a parameter to be controllable also
by a stream of data. What I'd really like is to avoid a proliferation of
plugins that do the same thing but with different input methods. I mean,
in LADSPA, when you have a plugin with a control and want the
corresponding parameter to be controlled by a stream of values, you have
to write another plugin (for example the sine plugins: sine_faaa 1063,
sine_faac 1064, sine_fcaa 1065 and sine_fcac 1066). This complexity
could be internal to the plugin; it could "say": this parameter can be
controlled by an event list, but also by a stream (do you remember the
old days of modular analog synths?). The advantage is that when you want
to change this type of connection (from event list to stream of values)
you don't need to destroy your plugin and create another one instead.
This is also easy to implement. In this sense I much prefer the word
"property" (or another one) to "parameter" (I don't like to call an
audio input a "parameter").

> 2nd: Connection changing
>
> GMPI: We're not there yet. We're gathering requirements. Do you have a
> requirement in this area?
>
> ME: We're not there yet. I personally think you should be able to
> change all that on-the-fly, but it doesn't REALLY matter.

For the purposes I'm thinking of, it matters. Imagine a scenario where
you're playing live and want to add a plugin without stopping the music
(for example in an improvised performance).

Thanks for your answers.

Ciao,
Marco

----------------------------------------------------------------------
Generalized Music Plugin Interface (GMPI) public discussion list
Participation in this list is contingent upon your abiding by the
following rules:
Please stay on topic. You are responsible for your own words.
Please respect your fellow subscribers. Please do not redistribute
anyone else's words without their permission.
Archive: //www.freelists.org/archives/gmpi
Email gmpi-request@xxxxxxxxxxxxx w/ subject "unsubscribe" to unsubscribe