[openbeos] Re: binary middle ground

  • From: "Daniel Reinhold" <danielr@xxxxxxxxxxxxx>
  • To: openbeos@xxxxxxxxxxxxx
  • Date: Sat, 01 Sep 2001 12:27:14 CDT

Yes, but is this testing matrix worth the burden required to create 
it? If binary compatibility turns into a monstrous effort, then the 
benefits of having it become extremely questionable. I worry about its 
effect on the project timetables and developer morale. You don't want 
version 1.0 pushed back by another 3 to 4 years. You don't want 
programmers dropping off the list because they are tired of spending 
month after month tearing their hair out for little discernible 
progress.

Originally I supported the notion of binary compatibility for the 
obvious reasons. Perhaps it was naive on my part, but I basically 
assumed that Be's published API told us pretty much all we needed to know 
about the system. It didn't matter if the algorithms and data 
structures we used to implement the API matched the original code or 
not (indeed, it's almost certain they would be different), just so long 
as equivalent functionality was provided.

But Eugenia's post on OSNews about her discussions with Be engineers 
spooked me considerably. I have no reason to doubt that she's telling 
the truth. If the internals of the kernel and libraries are riddled 
with undocumented stuff, we have a task on our hands of monumental 
proportions.

Consider what the developers have to deal with. For example, we have to 
implement function 'do_skippy'. Its interface and functionality are well 
documented. No problem -- you implement and test to the spec. Now we 
come across undocumented routine 'do_ufef_a'. What the hell does this 
routine do? Eek, bring in the disassembler. Hmm, it seems to be doing 
X, Y, and Z. Implement this... test... crash. Oh, well there must be 
more to it than that. Oh, and there's another routine 'do_ufef_b'. Is 
it related? Does it share functionality, or is one of them an obsolete 
routine that was never removed? Perhaps they rely on each other and 
share a data buffer. If so, we better figure out the size and format of 
the buffer exactly. We might re-implement the functionality of these 
two routines exactly, but if we fail to properly set the data buffer 
(wrong offsets, slightly different data or misaligned data...) then the 
routines still crash and we're not sure what exactly we are doing 
wrong. Keep hacking away at it... weeks pass... months pass...
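
To make the shared-buffer problem concrete, here's a tiny made-up sketch. 
The struct layouts and field names below are pure invention (we obviously 
have no idea what Be's real buffer looks like); the point is just to show 
how a reimplementation that guesses the layout slightly wrong produces 
garbage even when its own logic is perfectly correct:

/*
 * Invented example only: the real layout of any buffer shared by
 * 'do_ufef_a'/'do_ufef_b' is unknown. These two structs show how a
 * small layout mismatch wrecks an otherwise correct reimplementation.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* The layout we guessed from staring at the disassembly. */
struct UfefBufferGuess {
    uint32_t flags;
    uint16_t count;
    char     name[10];
};

/* The layout the original code actually uses: same fields, different
   order, so every offset after the first field is wrong (and on most
   compilers padding gets inserted before 'flags'). */
struct UfefBufferActual {
    uint16_t count;
    uint32_t flags;
    char     name[10];
};

int main()
{
    unsigned char buffer[32];
    memset(buffer, 0, sizeof(buffer));

    /* Our reimplemented 'do_ufef_a' fills the buffer using the guess... */
    struct UfefBufferGuess guess;
    guess.flags = 0x1;
    guess.count = 42;
    strcpy(guess.name, "skippy");
    memcpy(buffer, &guess, sizeof(guess));

    /* ...but the original 'do_ufef_b' reads it with the real layout.
       Every field comes back as nonsense, and when that nonsense is a
       pointer or a length in the real system, you get the crash. */
    struct UfefBufferActual actual;
    memcpy(&actual, buffer, sizeof(actual));
    printf("count=%u  flags=0x%lx  name='%s'\n",
           (unsigned)actual.count, (unsigned long)actual.flags, actual.name);

    return 0;
}

And that's the easy case, where we at least know the two routines share a 
buffer in the first place.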

This might be an exaggerated example. Maybe things won't be so bad. But 
it doesn't look unreasonable to me. Multiply this scenario by the 
number of undocumented routines we encounter and... well, you get the 
picture.

I like the idea of the testing matrix. But is it sacrosanct? Is it 
more important than anything else? Perhaps we can do just as well with 
another testing method (no, I don't have one right now, but I'll think 
about it). 

Of course, ditching binary compatibility also affects being able to run 
old apps. But I do have a new idea in this regard. I'll send another 
mail describing it as soon as I've gotten it all worked out.


>
>Binary compatability is good for the end users. It allows them to use 
>tons of apps that are no longer being developed. It is good for us (as 
>developers) because we can test.

