[haiku-development] Re: Outsourcing more command line apps to optional packages

  • From: Ingo Weinhold <ingo_weinhold@xxxxxx>
  • To: haiku-development@xxxxxxxxxxxxx
  • Date: Wed, 13 Oct 2010 20:41:22 +0200

On 2010-10-13 at 18:46:13 [+0200], scott mc <scottmc2@xxxxxxxxx> wrote:
> 2010/10/13 Jérôme Duval <korli@xxxxxxxxxxxxxxxx>:
> > 2010/10/13 Stephan Assmus <superstippi@xxxxxx>:
> >>> With vi, did it compile out of the box or does it still require
> >>> patches? IMO if packages compile out of the box, then they can be
> >>> moved out of our repository. If they cannot, having them in our svn
> >>> repository is a very safe patch storage mechanism.
> >>
> >> I share this opinion. Also don't forget that it makes the life easier for
> >> whoever works on other target architectures.
> >
> > Same here. From the ones listed above, I know coreutils requires
> > patches which we probably need to maintain in our repository.
> 
> The problem I see with putting the patched files into our tree is that
> the patches are then not so easy to locate.  Then when the package
> gets updated there's no patch file to look at, since the patch was
> integrated into the source in Haiku.  The way patches are handled at
> HaikuPorts is that they are kept as patch files until they are
> accepted and applied upstream, at which point the chances that the
> next released revision/version will "just build" on Haiku are
> greater.

I totally agree. HaikuPorts is much better suited to deal with ports. 
Furthermore, having the ports in the Haiku repository isn't free either. 
Besides significantly increasing the weight of the repository and the build 
times, it is also more work and more error-prone to maintain and update the 
ports, since the integration with our build system and the manually 
maintained config.h files is simply not how the ports are meant to be built. 
That has caused problems more than once in the past.

Moreover, most software packages come with test suites, which are blatantly 
ignored when we just build things in the Haiku build system. When ports are 
updated at HaikuPorts, the tests are actually run and failures are tracked.
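The patch-file workflow described above can be illustrated with a toy 
example (all file names and contents here are made up for illustration):

```shell
# Create a stand-in for a pristine upstream source file.
printf 'hello upstream\n' > source.txt

# The Haiku-specific change is kept as a separate unified diff,
# not merged into the source tree.
cat > haiku.patch <<'EOF'
--- source.txt
+++ source.txt
@@ -1 +1 @@
-hello upstream
+hello haiku
EOF

# At build time the patch is applied on top of the pristine source.
patch source.txt haiku.patch

cat source.txt
```

Once the change is accepted upstream, the .patch file is simply dropped and 
the next upstream release builds unpatched.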

> As for making it easier for those working on other target
> architectures?  Maybe in the longer run it would, but first they need
> to get the core parts to build correctly, and if it's required that
> they have to also build all these extra parts that may slow them down
> in the shorter term wouldn't it?

Well, one fact is that besides x86 there isn't a single port that is 
complete enough to boot to the point where the initial shell is started and 
that has all the support (in kernel, runtime loader, and libroot) for a 
working userland.

As for which solution is easier for people porting to other architectures: 
yes, it is certainly convenient to have everything in the Haiku repository 
and build it for their architecture, but getting it to build is work that 
has to be done in the first place, and keeping things building for every 
obscure architecture is also time that has to be invested.

I believe pretty much all the core components/ports (like bash, the various 
*utils) should have build systems that support cross-building. So the most 
sensible thing is to provide a system to ease the cross-building of those 
components (at HaikuPorts!). I suppose this way one even saves time, since 
the stuff has to be built only once (or again when something changes that 
actually affects a component) and not every time jam detects that an 
included header has been touched.
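For a GNU-style core component, cross-building typically boils down to a 
configure invocation along these lines (the target triplet, prefix, and tool 
name below are assumptions for illustration, not official values):

```shell
# Hypothetical cross-build of a GNU-style package for Haiku.
# Triplet and install paths are illustrative only.
./configure --host=i586-pc-haiku \
            --prefix=/boot/common \
            CC=i586-pc-haiku-gcc
make
# Stage the result into a directory that can be zipped up
# as an optional package.
make DESTDIR="$PWD/dist" install
```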

> I'm not saying we should outsource all of them, just ones where it
> makes sense.  And to start that we'd need to know which ones
> can/cannot be moved out.

I guess most actually can be outsourced. The only one I can think of ATM that 
wouldn't be trivial to outsource is gdb, since it uses private APIs. But I 
was hoping to get rid of it anyway once Debugger is in usable shape.


On 2010-10-13 at 18:54:06 [+0200], Adrien Destugues 
<pulkomandy@xxxxxxxxxxxxxxxxx> wrote:
> 
> > As for making it easier for those working on other target
> > architectures?  Maybe in the longer run it would, but first they need
> > to get the core parts to build correctly, and if it's required that
> > they have to also build all these extra parts that may slow them down
> > in the shorter term wouldn't it?
> ICU was "outsourced" this summer and the optional package was available
> only for x86, leading to PowerPC build not working anymore. It was
> really difficult to build a PowerPC build of ICU, because we have no
> running PowerPC system yet. So someone had to hack around the build
> tools to get some way of cross compiling.
> The more we move things away from trunk, the more problems like this may
> happen and prevent building for a particular platform.

I haven't followed the PPC build issue, but the problem of bootstrapping is 
one that cannot be avoided when porting the system to a new architecture. By 
outsourcing more, we simply move the point at which cross-building has to be 
done earlier. Anyway, as long as none of the non-x86 ports can even boot to 
the point where the initial shell is started, there's little point in 
building anything besides the boot loader and the kernel. It just wastes 
people's time.

> It also complicates changes of compiler (more packages need to be
> rebuilt) for the gcc4 version.
> 
> A solution would be to allow to build the packages automatically from
> source as an option, and not use the binary packages. This needs more
> jamfile work, however.

Actually, how I would envision it is a meta build system at HaikuPorts that 
simply builds all the packages cleanly in the right order with only the 
required dependencies, and that also supports cross-building for the subset 
of "core components".
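The "right order" part is essentially a topological sort of the package 
dependency graph; here is a minimal sketch using the POSIX tsort tool, with 
made-up package names and dependencies:

```shell
# Each line "A B" declares that package A must be built before B.
cat > deps.txt <<'EOF'
zlib coreutils
ncurses bash
zlib bash
EOF

# tsort prints one valid build order, dependencies first;
# a meta build system would iterate over this list.
tsort deps.txt
```

The relative order of independent packages is unspecified, but every 
dependency is guaranteed to appear before its dependents.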

CU, Ingo
