Re: [yoshimi-user] 0.061-pre7, optimization issues perhaps

  • From: cal <cal@xxxxxxxxxxxx>
  • To: yoshimi-user@xxxxxxxxxxxxxxxxxxxxx
  • Date: Thu, 23 Sep 2010 07:20:02 +1000

On 23/09/10 07:03, Will J Godfrey wrote:

[ ... ]
Psssttt!
Just don't *anyone* tell him we're having fun :)

Me too actually :-).

Incidentally, there's been a minor thread running on jack-devel with
someone trying to convince the jack devs that jack should be modified
to solve the jack midi timing issues in zynaddsubfx. I found Paul Davis'
ultimate (and somewhat annoyed) summary interesting. I had to smile,
because it addresses precisely the issues I've been trying to sort out
in the current mess - I quote:

if you deliver the MIDI to the synth in some separate thread then it
has to figure out how (and when) to modify what it's doing in the
synthesis thread. but the two threads are different and not running
synchronously with each other. this is the design that ALSA MIDI is
fundamentally built around, and it's certainly the way that several
ALSA MIDI apps have been written - one thread picks up data from the
ALSA MIDI layer and then, somehow, makes it affect what the audio
synthesis thread does the *next* time it generates audio. when this is
done badly, it's precisely the "somehow" part that leads to the jitter
that so bothers you.
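
(My aside, not part of Paul's mail: purely to make that "somehow" concrete,
here's a rough C sketch of the separate-thread design done with explicit
timestamps - the reader thread stamps each event on arrival and pushes it
across a lock-free ring buffer, so the audio thread knows *when* the event
happened, not just *that* it happened. It isn't how yoshimi or zynaddsubfx
are actually written; the TimedEvent struct and the function names are mine,
only the JACK calls are real.)

/* Hypothetical sketch: a non-realtime MIDI reader thread hands
 * timestamped events to the audio thread through a lock-free FIFO.
 * TimedEvent, midi_rb and the function names are made up; the JACK
 * ringbuffer and jack_frame_time() calls are the real API.          */
#include <stdint.h>
#include <string.h>
#include <jack/jack.h>
#include <jack/ringbuffer.h>

typedef struct {
    jack_nframes_t frame;    /* absolute frame time when the event arrived */
    uint8_t        data[3];  /* raw MIDI bytes (status + two data bytes)   */
    uint8_t        size;
} TimedEvent;

static jack_client_t     *client;   /* opened elsewhere                         */
static jack_ringbuffer_t *midi_rb;  /* created once with jack_ringbuffer_create() */

/* MIDI reader thread (e.g. an ALSA sequencer poll loop): stamp the
 * event at receipt, *before* it crosses the thread boundary.        */
void midi_reader_push(const uint8_t *bytes, uint8_t size)
{
    TimedEvent ev;
    ev.frame = jack_frame_time(client);   /* JACK's running frame clock */
    ev.size  = size > 3 ? 3 : size;
    memcpy(ev.data, bytes, ev.size);
    if (jack_ringbuffer_write_space(midi_rb) >= sizeof ev)
        jack_ringbuffer_write(midi_rb, (const char *)&ev, sizeof ev);
}

/* Audio thread, at the top of each process() callback: drain the FIFO
 * and hand the events to a per-block scheduler (see the later sketch)
 * instead of applying them "whenever".                               */
int midi_audio_pop(TimedEvent *ev)
{
    if (jack_ringbuffer_read_space(midi_rb) < sizeof *ev)
        return 0;
    jack_ringbuffer_read(midi_rb, (char *)ev, sizeof *ev);
    return 1;
}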

it doesn't help if you try to collect MIDI data in the same thread as
audio synthesis and treat it all as if it arrived at the same time.
this also causes jitter, and there are apps that do this too.

no, you need (a) timestamps on every MIDI message and (b) you need to
only change the synthesis thread's behaviour when the timestamp
becomes valid. because of the block-processing nature of current
realtime audio, that generally means delaying for a fixed (preferably
small) interval after the data is actually received.
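
(Another aside of mine, not Paul's: what "only change the synthesis thread's
behaviour when the timestamp becomes valid" can look like inside one audio
block - render up to each event's frame offset, apply the event, carry on.
synth_render() and synth_apply_event() are placeholder names for whatever the
synth really exposes, and the per-block event list is assumed already sorted
by offset.)

/* Hypothetical per-block scheduler: events are applied at their frame
 * offset inside the block, not all at frame 0.                       */

/* placeholders for the synth's own entry points */
void synth_render(float *out, unsigned nframes);
void synth_apply_event(const unsigned char *midi_bytes);

typedef struct {
    unsigned      offset;    /* frame offset within this block */
    unsigned char data[3];   /* raw MIDI bytes                 */
} BlockEvent;

void render_block(float *out, unsigned nframes,
                  const BlockEvent *events, unsigned nevents)
{
    unsigned pos = 0;
    for (unsigned i = 0; i < nevents; ++i) {
        unsigned when = events[i].offset < nframes ? events[i].offset : nframes;
        if (when > pos) {
            synth_render(out + pos, when - pos);  /* audio up to the event */
            pos = when;
        }
        synth_apply_event(events[i].data);        /* note-on, CC, ...      */
    }
    if (pos < nframes)
        synth_render(out + pos, nframes - pos);   /* rest of the block     */
}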

JACK (and ALSA MIDI) provide the timestamps. but the apps that are
bothering you DO NOT use them correctly.
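
(Last aside from me: the JACK side of it, roughly. The calls below are the
real JACK MIDI API, but the surrounding structure is just my assumption of
how you'd wire it up - every event delivered in the process() callback
already carries a sample-accurate frame offset within the current block,
which is exactly what the scheduler sketched above wants.)

/* Sketch: pulling timestamped MIDI events out of a JACK input port.
 * midi_port is assumed registered elsewhere with JackPortIsInput.   */
#include <stdint.h>
#include <jack/jack.h>
#include <jack/midiport.h>

extern jack_port_t *midi_port;

int process(jack_nframes_t nframes, void *arg)
{
    (void)arg;
    void *buf = jack_port_get_buffer(midi_port, nframes);
    uint32_t n = jack_midi_get_event_count(buf);

    for (uint32_t i = 0; i < n; ++i) {
        jack_midi_event_t ev;
        if (jack_midi_event_get(&ev, buf, i) == 0) {
            /* ev.time is the frame offset inside this block:
             * queue (ev.time, ev.buffer, ev.size) for the per-block
             * scheduler rather than acting on it at frame 0.        */
        }
    }
    /* ... render this block's audio, splitting at the queued offsets ... */
    return 0;
}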

> 3. After waiting some time, which depends on the timestamp value, you just
> send the same midi event to proper soft-synth.

JACK (or more precisely, the ALSA/JACK-MIDI bridges) is already doing
this. ALSA doesn't do this.

> Is it so hard to do?

it's not hard to do when it's done in the right place. But the right
thing has to happen *in* the synth. Even you yourself write "you just
send the same midi event to proper soft-synth" ... but where does it
arrive in the synth? what does the synth do with it? how does its
arrival affect the behaviour of the thread that does audio synthesis?
what is the timebase? when applications/clients get this right,
things all work. JACK MIDI is designed to more or less force people to
do it right. ALSA MIDI (like ALSA in general) is designed to make
almost anything possible, with the result being that lots of
developers simply get it wrong.

i'm not going to explain this again. it's very, very irritating to have
someone simultaneously say "i don't know enough to do this myself" and
"i'm going to explain to you how this should be done", especially when
you just don't understand the basic model for realtime audio on any
modern computer system.

cheers!

