(Sorry, sent the previous message before completing it by mistake.) Ingo Weinhold wrote:
The difference *IS* impressive. Before these changes, compiling under Haiku was similar to compiling under ZETA (to be fair, it was faster under Haiku, but not, say, 2 times faster), and now it is similar to compiling under Linux (although, as expected, Linux is still faster).

Well, I won't complain, I just have no clue why. :-) BTW, do you build with -j4 or something?
I was not, but I tried it again with -j4 and the performance increase is mind-blowing. I remember, though, that running with more than one concurrent job eventually fails at the last steps (I don't remember exactly where), apparently because a job gets run that depends on some other job that has not finished yet. I will run it again at work (I also have 4 cores there, but in 2 processors with 2 cores each) and will let you know where it fails.
In any case, you did manage to fix the random mimeset error I usually got (General OS Error), and that is good enough for me. :)

This one puzzles me even more. :-)
When I started digging into it, the bug was deep in the libbe used for the build tools. I would guess that the pattern that triggered the bug you found was also showing up there. It was difficult to track because it would only show up very late in the build process, and if you ran jam again, it would actually get past that point but then fail again very quickly at another point. Always when running mimeset on a file, and always with a "General OS Error" message.
Anyway, take a look at bug #2142, as this is where it is stopping now (I compiled Haiku 4 times in a row just to be sure, and it stopped at the same point every time).

Yep, that's a dup of #1991 and the reason is well understood: Jam thinks our command line limit is 40 KB, but it is only 16 KB ATM. Hence it can produce command lines that get truncated and thus fail. Our bash might have an even stricter limit.
I think it is a bit more involved than that. Our bash seems to be a bit broken. I used the following script to try to figure out the maximum command line size:

#!/bin/sh
i=0
teststring="AAA"
teststring_len=0
echo "Testing maximum command line length ..."
while (test "X"`echo "X$teststring"` = "XX$teststring") >/dev/null 2>&1 &&
      new_result=`expr "X$teststring" : ".*" 2>&1` &&
      teststring_len=$new_result
do
    i=`expr $i + 1`
    teststring=$teststring$teststring
done
echo $teststring_len

When running this under Linux, I get:

bga@librarian:~$ ./testcmdline.sh
Testing maximum command line length ...
393217

When I tried to run it under Haiku, the first time bash crashed, but with a useless stack crawl (huge and with lots of undefined symbols). Then I tried it again and it did not crash, but it did not complete either. It was obviously running (not hung), but it would not print the result even after more than one minute (under Linux it takes seconds). Then I added a line just after the "do" to print the current teststring_len inside the loop, and the output looked like this:

-> ./testcmdline.sh
Testing maximum command line length ...
4
7
13
25
49
97
193
385
769
1537
3073
6145
12289
16383
16383
16383
16383

And it kept repeating like that, each 16383 taking longer to print than the last.
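As a cross-check, on systems that ship the POSIX getconf utility (Linux does; I don't know offhand whether our build has it), you can ask the kernel for the limit directly instead of probing for it:

```shell
#!/bin/sh
# Ask the kernel for the advertised limit on the combined size of
# exec() arguments plus environment (the POSIX ARG_MAX value).
limit=`getconf ARG_MAX`
echo "ARG_MAX: $limit bytes"
# The space usable for actual arguments is smaller, because the
# environment and per-argument pointer overhead count against it.
echo "current environment size: `env | wc -c` bytes"
```

Note that the doubling probe above measures what the shell and exec path actually tolerate, which is the number that matters when it disagrees with what getconf reports.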
So it looks like we accept around 16 KB for the command line, *BUT* we are not actually returning an error when this limit is exceeded, and it seems we should.
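For comparison, here is a sketch of what the expected failure mode looks like on a system that does enforce its limit (assuming roughly Linux-like limits; the exact threshold and the shell's exit code vary):

```shell
#!/bin/sh
# Build a single argument far larger than any plausible limit, then
# pass it to an *external* command, forcing a real exec(). The exec
# should fail with E2BIG ("Argument list too long") instead of
# silently truncating or hanging.
big=`head -c 1000000 /dev/zero | tr '\0' A`   # one ~1 MB argument
if /bin/echo "$big" >/dev/null 2>&1; then
    echo "exec unexpectedly succeeded"
else
    echo "exec failed as it should"
fi
```

An external command is essential here: a builtin echo never goes through exec() and so never hits the kernel's limit.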
Any bash gurus around?
You could try to build a jam with a 16 KB limit (it can be set in jam.h), but then the build will probably fail nevertheless, since we might indeed have rules that require longer lines. We'll have to raise that limit in the kernel and maybe find a better way to deal with the parameters. The syscalls for exec*() and load_image() could, for instance, already be given a flattened representation of the buffer, so that the kernel has less work copying it.
Will try this later today.
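(As an aside, until the limits are sorted out, the classic userland workaround for over-long command lines is batching with xargs, which sizes each invocation to fit under the limit the kernel reports. This is not what jam does internally, just the general technique, and it assumes GNU-ish seq and xargs:)

```shell
#!/bin/sh
# Feed 100000 arguments through xargs; it splits them into as many
# echo invocations as the system's command-line limit requires, so
# the line count below equals the number of batches used.
seq 1 100000 | xargs echo | wc -l
```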
I got another one in net timer. Check bug #2143. I am pretty sure this is related to your recent changes, as, if I am not mistaken, you did change how timers work.

That's only indirectly related. The net stack timer thread uses a semaphore for timing out. The problem could, of course, still be related to my changes, but I wouldn't be so sure.
Got it. -Bruno