[haiku-development] Re: A modest (FatELF) proposal

  • From: Landon Fuller <landonf@xxxxxxxxxxxxxx>
  • To: haiku-development@xxxxxxxxxxxxx
  • Date: Sun, 18 Nov 2012 19:09:48 -0500

On Nov 18, 2012, at 6:34 PM, Paul Davey wrote:

> 
> Fat binaries make it easy for the OS vendor, user, and software developer to 
> support new architectures through gradual migration, by pushing the cost of 
> doing so onto the development toolchain. Rather than compile and distribute 
> something N times for N different architectures, your build process can run 
> the compiler with the appropriate -arch flags, and compile a binary that runs 
> on all the available architectures. When hosting plugins, the hosting 
> application can automatically 'downgrade' to the lowest common denominator 
> ABI by relaunching itself, thus allowing for gradual transitions between old 
> ABIs and new ABIs -- this was done by Apple in System Preferences when the 
> entire OS switched over to x86-64, allowing System Preferences to switch back 
> to i386 in the case where the user still had legacy plugins installed.
> 
> Does just adding extra -arch flags actually work for any significantly large 
> piece of software?

For native platform software, it tends to work surprisingly well, by virtue of 
being able to rely on a relatively consistent API surface, and on 
compiler/SDK-provided defines for things like endianness and host architecture. 
For my company's Mac OS X and iOS work, for example, adopting new architectures 
hasn't required any changes other than enabling their support in the build -- 
if you discount slow paths taken in cases where, eg, we had targeted NEON and 
the code itself had to be updated for the new architecture.
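
As a rough illustration (not from our actual codebase), this is the kind of 
thing that lets a single source file be compiled once per -arch slice with no 
configure-time probing -- the compiler's predefined macros answer the 
architecture and endianness questions for each slice:

    /* arch_report.c -- minimal sketch; the macros below are just the usual
     * clang/gcc predefines, nothing project-specific. */
    #include <stdio.h>

    int main(void)
    {
        /* Host architecture: answered per -arch pass by the compiler. */
    #if defined(__x86_64__)
        puts("slice: x86_64");
    #elif defined(__i386__)
        puts("slice: i386");
    #elif defined(__arm__)
        puts("slice: arm");
    #elif defined(__ppc__)
        puts("slice: ppc");
    #else
        puts("slice: unknown");
    #endif

        /* Endianness: also a compile-time answer, no configure test needed. */
    #if defined(__BIG_ENDIAN__) || \
        (defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
        puts("byte order: big-endian");
    #else
        puts("byte order: little-endian");
    #endif
        return 0;
    }

On Apple's toolchain, something along the lines of "cc -arch i386 -arch x86_64 
arch_report.c -o arch_report" runs the compiler once per architecture and glues 
the slices together, and each slice gets the right answers without a configure 
step ever looking at the build host.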

For UNIX software, it depends on how much the author relied on autoconf to 
determine information about the host architecture. MacPorts, a Mac OS X ports 
system, deals with tricky cases by automatically running the configure+build 
steps once for each architecture, and then merging the resulting binaries. The 
alternative is simply tweaking config.h directly -- this works fine for 
maintaining software included in the base distribution of an OS (eg, we did 
this on the BSD team at Apple), but for general use, it's handy to be able to 
build unmodified OSS automatically.
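
To make the config.h case concrete, here's a hypothetical fragment (not from 
any real package) of the kind of hand tweak I mean; WORDS_BIGENDIAN and 
SIZEOF_LONG are the usual autoconf outputs, and the tweak just defers to the 
compiler's predefined macros:

    /* config.h, hand-tweaked for a fat build.  A configure run on an x86_64
     * host would have emitted fixed answers ("#define SIZEOF_LONG 8", with
     * WORDS_BIGENDIAN left undefined), which are wrong for the i386/ppc
     * slices.  Deferring to the compiler's predefined macros lets each
     * -arch pass answer for itself. */

    #if defined(__BIG_ENDIAN__)
    # define WORDS_BIGENDIAN 1
    #endif

    #if defined(__LP64__)
    # define SIZEOF_LONG 8
    #else
    # define SIZEOF_LONG 4
    #endif

The MacPorts merge approach avoids the hand edit entirely: each architecture's 
configure run produces its own correct config.h, and only the resulting 
binaries get combined afterwards.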

There are certainly edge cases for UNIX software; for example, emacs used to 
rely on "unexec" to load up its initial state, then dump the process image and 
produce an executable that starts up with that state already available. This 
sort of thing was a hassle to deal with in the Mac OS X build process when we 
were shipping emacs, as it makes cross-compilation a pain, even before you 
factor in the issue of producing a universal build. So far in MacPorts we've 
been able to produce universal x86/x86-64 software reliably by automating the 
configure+build+merge process.

> Most that I know of at the very least do a configure step to find out what 
> arch they are on and use this for building the rest of the application.
> 
> Also how does this interact with anything that needs different libraries on 
> different platforms or existing build systems like cmake?
> It seems to me that while this is a nice feature to have there would be a 
> significant amount of work making some software actually build as fat 
> binaries.

Just from my experience in MacPorts, it was straightforward enough to automate 
for any OSS/UNIXy build system that already supported cross-compilation; native 
platform software even more so.

Cheers,
Landon
