[haiku-development] Re: OS & WebKit builds

  • From: Ingo Weinhold <ingo_weinhold@xxxxxx>
  • To: haiku-development@xxxxxxxxxxxxx
  • Date: Tue, 02 Jun 2009 13:41:25 +0200

On 2009-06-02 at 08:27:14 [+0200], David Himelright
<david.himelright@xxxxxxxxx> wrote:
> On Tue, Jun 2, 2009 at 12:59 AM, Ryan Leavengood 
> <leavengood@xxxxxxxxx> wrote:
> 
> > On Tue, Jun 2, 2009 at 12:27 AM, David Himelright
> > <david.himelright@xxxxxxxxx> wrote:
> > >
> > > 1) What's a likely safe minimum memory for building haiku with gcc4
> > > under haiku?
> >
> > Probably at least 512 MB, though that might be pushing it and 768 MB
> > or 1 GB would be better.
> >
> 
> Hrm. Under virtualization the haiku-alpha-gcc4 guest OS had 512mb of real
> memory and was living on a 4gb disk image with an additional 512mb virtual
> memory enabled. I was still seeing process fork errors with jam.

512 MB is fine as long as you have enough virtual memory. It should work
with even less RAM, but at some point performance will seriously degrade.
With 512 MB of RAM it should still be almost as fast as with unlimited RAM.

> The same
> build tools compiled for ubuntu running in the host OS cut through that
> source code like a hot chainsaw through a pile of leaves. I don't think I
> can recompile the linux kernel quicker on this machine, and that's written
> in c for pete's sake.

The Haiku kernel still needs quite a bit of optimization. Even with debug
code disabled, building Haiku from the sources still takes almost 4 times
as long as under Linux. Under virtualization things seem to be dramatically
slower -- at least that's what I see with VMware.

> I just guessed that memory usage and speed must have something to do with
> filesystem performance under haiku because that seemed to be the source of
> earlier performance gripes (offhand comment from the google tech talk a
> couple of years ago, my subjective experience with the emulated os).

BFS is not as fast as ReiserFS, but with enough RAM, caching will mitigate
quite a bit of the difference. I haven't tried in a while, but using a
non-indexed BFS partition for compilation might speed things up a bit.

> I'll try to boot a haiku partition with grub and see if I get better
> results.

Definitely!

> > >   b) and what sorts of profiling tools are Haiku developers using?
> >
> > I don't know much about this myself, though I will probably learn
> > about it soon. I know vaguely that Ingo has been putting interesting
> > tracing and profiling stuff in the kernel, and I know various efforts
> > have been made to profile the app_server. Hopefully others will answer
> > this question in more detail.
> 
> Please do! I normally do apps development in Java and I'm a bit spoiled when
> it comes to easy to use profiling and instrumentation tools, also a bit
> dependent upon them (my write it quick and dirty now, optimize later
> approach).

The only profiling tool I know of is the "profile" command line program. It
uses a sampling-based approach. It can also generate output files in
valgrind format, which can be analyzed in KCachegrind (under Linux, though).
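
To give a rough idea of what "sampling-based" means, here is a toy sketch
of the general technique. It is not how Haiku's profiler is implemented,
and the ucontext details below are the glibc/x86-64 ones, so adjust for
other platforms. A periodic timer interrupts the program and records the
interrupted program counter; hot code then shows up as addresses with many
hits:

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <ucontext.h>

static const int kMaxSamples = 100000;
static void* sSamples[kMaxSamples];
static volatile int sSampleCount = 0;

static void
take_sample(int, siginfo_t*, void* context)
{
	// Record the interrupted program counter. Where it lives in the
	// ucontext is platform specific; REG_RIP is the glibc/x86-64 name.
	ucontext_t* ucontext = (ucontext_t*)context;
	if (sSampleCount < kMaxSamples) {
		sSamples[sSampleCount++]
			= (void*)ucontext->uc_mcontext.gregs[REG_RIP];
	}
}

int
main()
{
	// Deliver SIGPROF periodically while the process consumes CPU time.
	struct sigaction action = {};
	action.sa_sigaction = take_sample;
	action.sa_flags = SA_SIGINFO | SA_RESTART;
	sigaction(SIGPROF, &action, NULL);

	struct itimerval timer = {};
	timer.it_interval.tv_usec = 1000;	// one sample per millisecond
	timer.it_value.tv_usec = 1000;
	setitimer(ITIMER_PROF, &timer, NULL);

	// Some busy work to be sampled.
	volatile double x = 0;
	for (int i = 0; i < 100000000; i++)
		x += i * 0.5;

	struct itimerval stop = {};		// zeroed value disarms the timer
	setitimer(ITIMER_PROF, &stop, NULL);

	printf("collected %d samples\n", sSampleCount);
	// A real profiler would now map the sampled addresses back to
	// function symbols and count hits per function.
	return 0;
}

Mapping the sampled addresses to symbols and writing the result out in
callgrind format is then what makes such data digestible for KCachegrind.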

> > > Has any kind of universal binary package format
> > > situation seen any consideration by planners?
> >
> > Like Apple universal binaries? I know some of us are intrigued by the
> > Apple application bundle idea, mostly in terms of drag and drop
> > installs, though I suppose it has this purpose as well. I can't say
> > much thought has been made here yet though.
> 
> I was referring more to the "son of universal binaries" tools that came out
> around the time of the Intel transition, for packaging x86 and ppc binaries
> into Mach-O library. It seemed to work pretty well, but placed a lot of
> burden on the vendor because a lot of developers cross compile for both
> architectures and don't necessarily bother to apply the same testing
> resources to both. It's a little hairy, but it works okay.
> 
> See: lipo
> http://developer.apple.com/documentation/Darwin/Reference/ManPages/man1/lipo.1.html
> 
> I was thinking that it might be relatively straightforward to write a linker
> that selects for the appropriate compiler architecture on Haiku, though I
> have to admit I don't even know what object format you're using at this
> point or what the specific incompatibility issues are. Uh, ask me again in 5
> minutes...

Haiku uses ELF. ATM there's no real need for any kind of FAT binary
format, since the only fully supported architecture for Haiku is x86. For
the two compiler versions we support, we simply include libraries for both,
so that programs built with either compiler run. When we go 64 bit, we'll
probably use the same approach.
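
For the curious: the architecture tag a lipo-style tool would dispatch on
sits right in the ELF header. Here is a minimal standalone sketch that
reads it -- only the standard ELF header fields, nothing Haiku-specific:

#include <elf.h>
#include <stdio.h>
#include <string.h>

int
main(int argc, char** argv)
{
	if (argc < 2) {
		fprintf(stderr, "usage: %s <binary>\n", argv[0]);
		return 1;
	}

	FILE* file = fopen(argv[1], "rb");
	if (file == NULL) {
		perror("fopen");
		return 1;
	}

	// The 32 and 64 bit ELF headers share the layout of their first
	// fields, so reading the 32 bit header is enough for this check.
	Elf32_Ehdr header;
	if (fread(&header, sizeof(header), 1, file) != 1
		|| memcmp(header.e_ident, ELFMAG, SELFMAG) != 0) {
		fprintf(stderr, "not an ELF file\n");
		fclose(file);
		return 1;
	}

	// e_machine identifies the target architecture (EM_386 for x86,
	// EM_PPC for PowerPC, ...); a fat format would bundle one image per
	// value and let the loader pick the matching one.
	printf("class: %s-bit, machine: %d\n",
		header.e_ident[EI_CLASS] == ELFCLASS64 ? "64" : "32",
		header.e_machine);

	fclose(file);
	return 0;
}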

> > > 4) Any suggestions about an oss IDE for applications development that
> > > runs under a gcc4 build?
> >
> > Paladin seems pretty neat, though I have not used it extensively
> > myself. I'm not sure if it runs under gcc4 yet, or can compile gcc4
> > code. Here is the BeBits page:

I haven't tested Paladin yet, but generally on a gcc2/gcc4 (or vice versa)
hybrid Haiku you can run applications built with either compiler. Moreover,
the "setgcc" script allows you to persistently select the gcc version to be
used for compilation. It was introduced quite recently, so it might need
some more testing.

CU, Ingo
