On Wednesday 24 December 2003 19.17, gogins@xxxxxxxxxxxx wrote:
> "Just sufficient" depends on purpose. It also depends on the
> elegance of the interface. Protocols found insufficient: MIDI, VST.
> Protocols long in use yet still not found insufficient: SQL92,
> TCP/IP, XML. A simple interface with a high degree of data
> abstraction is what we need. This is not inconsistent with
> efficiency.
>
> Again, I feel I started this discussion down the wrong path by
> complaining about inadequate implementation assumptions instead of
> incomplete musical use cases and vague requirements.
>
> How do you propose satisfying the use cases I have mentioned:
> plugins that generate accompaniments for recorded sequences with
> "feel",

If they need to see into the future to do their job properly, I'd say
they're in that "time machine" domain, where the normal, real-time
oriented API must be bypassed or extended somehow.

Solution A: Provide access to tempo maps and other data, so that these
plugins can get the information they want. It's possible to see the
whole song at any time, but there is also the possibility of things
changing over time (edits during playback, real-time input affecting
the sequencer, etc.). This interface can only be simulated to some
extent in full real-time systems, so plugins relying on it will most
probably work only with "traditional" sequencers.

Solution B: Let plugins ask for look-ahead input, so they can get some
of their audio and/or event input ahead of time. In a full real-time
system, this will increase the total latency, but with prerecorded or
generated data, it's usually possible to look ahead without affecting
the system latency.

> plugins that process and generate streams of events in
> synchronization with the host,

I'm assuming you mean the musical timeline, since that's the only
local thing a plugin could possibly not be in sync with. For that, you
need a way to get or calculate the musical time for each sample frame.
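To make the frame-to-musical-time mapping concrete, here's a minimal sketch of a piecewise-linear tempo map lookup. All names here (`tempo_segment`, `frame_to_beat`) are hypothetical illustrations, not anything from GMPI or XAP:

```c
#include <stddef.h>

/* Hypothetical sketch: a tempo map as an ordered list of segments,
 * each starting at a known sample-frame / beat position and holding a
 * constant tempo until the next segment. */
typedef struct {
    double start_frame;   /* timeline position in sample frames */
    double start_beat;    /* musical time at start_frame, in beats */
    double bpm;           /* tempo within this segment */
} tempo_segment;

/* Calculate the musical time (in beats) for a given sample frame:
 * find the segment containing 'frame', then interpolate linearly. */
double frame_to_beat(const tempo_segment *map, size_t count,
                     double frame, double sample_rate)
{
    size_t i = 0;
    while (i + 1 < count && map[i + 1].start_frame <= frame)
        ++i;
    return map[i].start_beat +
           (frame - map[i].start_frame) *
           map[i].bpm / (60.0 * sample_rate);
}
```

A host exposing something like this (or per-block tempo/position events from which a plugin can build it) is enough for plugins that only need to stay in sync; the "time machine" cases above additionally need the map to describe the *future*.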
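The latency trade-off in Solution B can be sketched in a few lines: a plugin declares how much input look-ahead it wants, and the host only has to pay for it when the input is live. Again, these names are made up for illustration:

```c
/* Hypothetical sketch of Solution B: a plugin declares its desired
 * input look-ahead; the host decides what that costs. */
typedef struct {
    unsigned lookahead_frames;  /* input wanted ahead of "now" */
} plugin_info;

/* Extra output latency the host must add for this plugin.
 * With prerecorded or generated data, the host can simply read ahead
 * in the source, so the cost is zero; with a live input, the whole
 * look-ahead window turns into added latency. */
unsigned added_latency(const plugin_info *p, int input_is_live)
{
    return input_is_live ? p->lookahead_frames : 0;
}
```

This is why look-ahead mixes badly with full real-time systems but is nearly free in a traditional sequencer playing back recorded material.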
If you need to look into the future for some reason, that's the same
problem as usual; it just doesn't mix with real-time processing. See
above, though; the proposed TEMPO control can be combined with
Solution B, which will work fine as long as the timeline is not
synchronized with an external device, or otherwise unpredictable.

> and controllers that are bound to
> specific notes of specific voices?

Control addressing. Nothing stops us from having both channel and
voice controls. However, it just doesn't make sense to try to force
all controls into being both. When you design a synth, you make some
things per-channel and some things per-voice, to keep a sane balance
between flexibility and system requirements. (And if you have MIDI
somewhere in the system, you probably make most things per-channel,
as you can't control them per-voice anyway... :-/ )

Note that voice addressing gets a bit tricky with dynamic voice
allocation - but that's nothing new, really. You need to be able to
address playing voices in order to stop them anyway, and once that's
solved, you may as well use the same solution for voice control.

Solution? Well, the XAP team has it all figured out, as usual. ;-)
Virtual Voice IDs. That's OT now, though.


//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
|  Free/Open Source audio engine for games and multimedia.  |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---

----------------------------------------------------------------------
Generalized Music Plugin Interface (GMPI) public discussion list
Participation in this list is contingent upon your abiding by the
following rules: Please stay on topic. You are responsible for your
own words. Please respect your fellow subscribers. Please do not
redistribute anyone else's words without their permission.
Archive: //www.freelists.org/archives/gmpi
Email gmpi-request@xxxxxxxxxxxxx w/ subject "unsubscribe" to unsubscribe