1. If I understood the code correctly, the mixer converts each buffer twice: first every incoming buffer is converted to an internal format (float), and then in the mixing thread the result is reconverted to the format the driver needs (int16 in my case; is that the usual one?). This approach is easy to use, but I think it adds unnecessary overhead. Wouldn't it be better if only one conversion were done during mixing? Of course this means one needs a lot of methods for converting every format to every other, but I guess an unoptimized version could be written once using templates, while optimized routines could still be provided as specializations.
I am against premature optimization, but that's just my opinion. The first question is: is it a bottleneck? If not, do not optimize it. If it is, does the optimized version gain enough of a speedup to justify an algorithm that is (probably) more complicated to understand and maintain? Again, just IMHO; that should not keep you from doing it anyway. Regards, Michael