> >> I can't, today, just go build the ARM port for a new ARMv7 ABI or
> >> modified calling conventions -- I have to do a bunch of manual heavy
> >> lifting to bootstrap an OS environment that I can then use to build
> >> the required packages, instead of being able to focus on the actual
> >> problem at hand.
> >
> > We don't encourage custom builds of Haiku for all the different CPU
> > instruction set variants out there. We chose some requirements that
> > make sense, and we make sure to keep binary compatibility as long as
> > it makes sense. So, this doesn't need to be an easy task.
>
> Isn't this beside the point?

It's easy to build Haiku for the supported architectures. If some dev wants
to go beyond that, I can see two cases:

- Adding support for a new CPU architecture. This does not happen very
  often, so I'm ok with letting it be a slightly longer task to perform, if
  we can get packages built with their own buildsystem. The advantages were
  already stated, but let's list them again:
  * It avoids introducing bugs, because our jam build system is not
    identical to the upstream one.
  * It makes it easier for us to upgrade these packages (rebuild, test, zip
    instead of download, merge, fix up jam scripts, fix build issues,
    commit).
  * It makes it possible to post bug reports upstream, as we use unmodified
    sources (forking, even just for build system changes, leaves us on our
    own).
  * Packages can be built "the right way", including pkgconfig files,
    headers, documentation, and other stuff we didn't bother to include in
    the jamfiles.
  * It makes the Haiku codebase itself smaller, faster to check out and
    build, and easier to manage. An example of this is our Coverity scan
    results, where half of the detected issues are from 3rd-party sources.

- The second case is when you want to build packages optimized for your own
  CPU (such as enabling SSE2 in ffmpeg or some similar stuff). The official
  builds of Haiku require just a plain Pentium MMX and should boot on that.
I believe the use of extended CPU instruction sets is still possible, with
runtime detection of the CPU type, as we do for the image scaling code in
ShowImage. Not all of these 3rd-party packages support this, but it only
makes sense for some of them (ffmpeg; can't think of anything else) and
should be upstreamed if we get to do it.

If someone still wants to do a custom build of Haiku for a particular
unsupported CPU or compiler, well, that's unsupported.

> >> This doesn't seem worth it for the 20 or so core dependencies that
> >> are necessary to build+run the OS, especially since most of the work
> >> to integrate them was already done, and now more work is being
> >> invested in lifting them back out of the source tree before genuinely
> >> automated tools to deal with them exist.
> >
> > Even more time is spent answering mails here.
>
> No reason to be rude. It appears to me, Landon is investing some serious
> effort into Haiku and is genuinely interested in how an aspect of the
> work flow can be improved. I would be hoping that the least we draw out
> of this discussion is a possible improvement for his situation, even if
> externalized packages stay that way. All I have heard so far is that
> tools would eventually support his use-case, but the fact remains, they
> don't right now.

Haikuporter already makes it quite easy to rebuild these packages. This is
where I would look for improvements. Adding support for cross-compiling in
haikuporter would benefit a lot of other cases as well.
This can be done with some easy changes to our tools:

- Allowing our build-cross-tools script to put new toolchains in a place
  where setgcc can find them (so you can run an ARM toolchain on an x86
  Haiku easily).
- Allowing haikuporter to build things against something other than the
  running system (i.e., to look for headers and libs on the target system).

Making these changes will also help with building a gcc4 application with
haikuporter running on a gcc2 system, and eventually with building x86
packages on x86_64 systems. I think they will help with the package
management system as well.

--
Adrien.