yikes!!! I thought of other issues after sending the first message...

If we ditch binary compatibility, don't we lose our ability to test and implement the modules independently? Right now we assume that we have R5 as a standard to test against: you can replace kits piecemeal and test them against the known-good binaries. But without binary compatibility, won't all the new modules rely on each other instead?

You could limit the damage in most of the modules by making only documented API calls -- it's unlikely the existing kernel would have any problems with that. But implementing a new kernel is much more difficult: you can't guarantee the current R5 binaries aren't making any number of undocumented calls, which means the new version will just fail when you try to "plug it in" to an existing R5 base. Even the other modules might have problems; perhaps making only documented calls won't get you the full functionality. Of course, they could make use of new functionality, if any, in the new kernel -- but then you're going down the road of tying all the new modules together in a web of dependency. Eeeek!!!

Is there no way out? Is there a way to keep a solid testing foundation with piecemeal module development that doesn't require binary compatibility? And if you do require binary compatibility, is it doable in a non-ridiculous time frame?

>In a way, source compatibility is a better goal in the sense that it
>lets you implement things in whatever way works best. The programming
>API is the same, but the underlying code is as slick as you are capable
>of making it.
>
>Still, it's a deviation from the original charter. Should we re-think
>this?
>