>>> Everybody seems to agree on this and still I'm not sure. Having just vectorized a lot of DSP code lately (mostly Altivec but also MMX/SSE/SSE2) I noticed that often my job could have been a lot easier with interleaved buffers. <<<

I second that point. Especially when dealing with surround formats, there is great benefit to having an interleaved representation, since any SIMD-based code can pull the channel data directly in parallel, without "swizzling."

Furthermore, the choice of whether to interleave or not should consider locality of reference. Do separate mono streams keep the CPU cache warmer than interleaved? (I did some dumb benchmarking with stereo and saw no discernible difference. This may differ with higher channel counts.)

Another potential complication with non-interleaved streams is how to handle plugins that take, say, two 5.1 input streams plus one mono side-chain stream. How can you effectively make heads or tails of that, even with a "hints" structure? Some channels naturally come in groups and may need to be processed that way.

Finally, let's not forget the kinds of driver models out there. ASIO is more amenable to mono streams, since that's the driver API. But wave, DirectSound, and WDM seem to favor interleaved. I'm not a driver guy, but it would be interesting to know which of interleaved vs. not is the more natural representation for hardware.

I'm thinking we should allow interleaved streams, but have the standard GMPI library include standard packing and unpacking components.

----------------------------------------------------------------------
Generalized Music Plugin Interface (GMPI) public discussion list
Participation in this list is contingent upon your abiding by the
following rules:
Please stay on topic.  You are responsible for your own words.
Please respect your fellow subscribers.  Please do not redistribute
anyone else's words without their permission.
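For what "standard packing and unpacking components" might look like, here is a minimal sketch in plain C: converting between interleaved frames (L R L R ...) and separate per-channel (planar) mono buffers. The function names and signatures are hypothetical, not anything from a GMPI draft; a real library version would presumably be SIMD-optimized per platform.

```c
#include <stddef.h>

/* Interleaved -> planar: out[ch][i] = in[i*nch + ch].
   'in' holds nframes frames of nch samples each;
   'out' is an array of nch per-channel buffers. */
static void deinterleave(const float *in, float **out,
                         size_t nframes, size_t nch)
{
    for (size_t i = 0; i < nframes; ++i)
        for (size_t ch = 0; ch < nch; ++ch)
            out[ch][i] = in[i * nch + ch];
}

/* Planar -> interleaved: out[i*nch + ch] = in[ch][i]. */
static void interleave(const float *const *in, float *out,
                       size_t nframes, size_t nch)
{
    for (size_t i = 0; i < nframes; ++i)
        for (size_t ch = 0; ch < nch; ++ch)
            out[i * nch + ch] = in[ch][i];
}
```

With helpers like these in the standard library, a host could hand plugins whichever layout they declare, and pay the repacking cost only at the boundary between mismatched components.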
Archive: //www.freelists.org/archives/gmpi
Email gmpi-request@xxxxxxxxxxxxx w/ subject "unsubscribe" to unsubscribe