> Well ... up to now, people understood very well that the reaction rate of
> their system was depending on the buffering...

who understood this? why did they think that? anyone who thought this
was, i'm afraid, the victim of short-cut software design and/or bad
advice. it's simply not true.

> So why go in the opposite way!? Especially when you know how difficult
> it is to handle and synchronise different data streams on non-real-time
> OSes like Windows / Linux / OS X.

it's tricky, but no more tricky than most audio (host) programming.

> Just an example: the time of a MIDI event coming from a serial device
> cannot be fixed with less than 1ms of precision (just because 1ms is
> the time to transmit 3 bytes at 31250 baud, so one regular MIDI event;

note that you can timestamp each individual byte, and the overall
message, if you choose to do so. there are a few significant single-byte
MIDI messages (MIDI clock, for example), and several 2-byte messages.
their time of arrival is very well-defined.

> 3ms is the time to communicate a 3-note chord). add to this 1ms the
> event handling of the O/S and the higher-priority processes, and
> explain to me how you

i can timestamp MIDI data on my linux system to within about 5 usec of
its arrival at the interface, assuming that interface interrupts the CPU
promptly. with the latest linux kernel (or a patch to 2.4), i can
schedule its delivery to within about 10 usec of a given UST time. from
what i read, OS X is about as good, or better.

--p

----------------------------------------------------------------------
Generalized Music Plugin Interface (GMPI) public discussion list
Participation in this list is contingent upon your abiding by the
following rules: Please stay on topic. You are responsible for your own
words. Please respect your fellow subscribers. Please do not
redistribute anyone else's words without their permission.
Archive: //www.freelists.org/archives/gmpi Email gmpi-request@xxxxxxxxxxxxx w/ subject "unsubscribe" to unsubscribe
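[postscript] the wire-timing figures argued over above can be checked with a short sketch. this is plain arithmetic, assuming the standard MIDI rate of 31250 baud and 10 bits per byte on the wire (1 start + 8 data + 1 stop); the function name is mine, not from any MIDI library:

```python
# MIDI wire timing: 31250 baud, 10 bits per byte (1 start + 8 data + 1 stop).
BAUD = 31250
BITS_PER_BYTE = 10

def transmission_ms(num_bytes: int) -> float:
    """Time in milliseconds to ship num_bytes over a standard MIDI cable."""
    return num_bytes * BITS_PER_BYTE / BAUD * 1000.0

# A regular 3-byte message (e.g. note-on: status + note + velocity).
print(round(transmission_ms(3), 3))  # 0.96 -- the "about 1 ms" quoted above

# A single-byte realtime message such as MIDI clock (0xF8):
# its time of arrival is well-defined to within one byte time.
print(round(transmission_ms(1), 3))  # 0.32

# A 3-note chord sent as three full note-on messages (9 bytes, no running status).
print(round(transmission_ms(9), 3))  # 2.88 -- the "about 3 ms" quoted above
```

note the 1 ms and 3 ms figures bound the *transmission* time only; as argued above, they say nothing about how precisely the host can timestamp each byte on arrival.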