I actually predicted this about a year ago, and wrote a blog series about how to set one up yourself with generic scripts (later collected into an ebook here http://hunterdavis.com/build-your-own-distributed-compilatio...). It was featured a few times here on HN. Cross compilation can actually be quite speedy, but speed isn't the only reason to use such a machine, especially in a business situation.
I suspect it's because cross compilation is an unusual case and doesn't get as much attention as the native gcc code paths. That is compounded by the need to "compute like an ARM" when evaluating constants, which leads to some emulation.
But the worst part is those damned autoconf scripts. They very cleverly probe the attributes of your x86 by compiling and running code during the build process, and then make decisions about how the code should run on your ARM. They are a never-ending sink of human effort. Best to just build on a machine where they will get the right answer without you fiddling with them.
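A sketch of the failure mode, for the curious. An autoconf check like the one commented below compiles and *runs* a test program, so under cross compilation it probes the x86 build host instead of the ARM target (or falls back to a pessimistic default). The cache variable name `ac_cv_sizeof_long_ok` here is made up for illustration; the workaround pattern of pre-seeding `ac_cv_*` cache variables is the standard one:

```shell
# In configure.ac, a run-time probe like this is what breaks cross builds:
#
#   AC_RUN_IFELSE([AC_LANG_PROGRAM([], [return sizeof(long) != 4;])],
#                 [ac_cv_sizeof_long_ok=yes],
#                 [ac_cv_sizeof_long_ok=no])
#
# When cross compiling, the probe either runs on the wrong machine or
# can't run at all, so you answer the question yourself by seeding the
# configure cache, then point configure at the target triplet:

./configure --build=x86_64-pc-linux-gnu --host=arm-linux-gnueabi \
    ac_cv_sizeof_long_ok=yes
```

Doing this once is easy; doing it for every probe in every package is the "never-ending sink of human effort" part, which is why building natively on the target architecture sidesteps the whole problem.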
The slow part would be fixing the oceans of OSS code that doesn't cleanly configure/compile in a cross-compilation environment, not the compilation process itself.
Not sure, but compiling packages can be I/O bound rather than CPU bound, and is fairly easy to split up.
Rather than one fast multicore server, this solution gets them a lot of separate systems each with dedicated disk, memory, etc. Also, the reboot and wipe each time has security benefits.
The alternative equivalent solution would be a bunch of VMs on a host, which would probably run into memory or I/O bandwidth contention quickly.
It's like calling a bunch of minicomputers sending packets to each other over lines leased from AT&T a "network". Any fool can see that that's just a use of AT&T's network, not a network in itself. Networks are made of long lines interconnected with crossbar switches.
So before building each package the board PXE boots and installs the OS onto the USB-attached hard drive? That seems inefficient. Why not use an overlay filesystem and just throw away the changes after each build? Is there even a need for local storage, or could the nodes run off an NFS export?
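For concreteness, here is roughly what the overlay suggestion looks like: a read-only base OS image with a tmpfs upper layer, so every write the build makes lands in RAM and disappears at unmount. Device names and mount points are illustrative, and this needs root:

```shell
# Read-only pristine OS image as the lower layer
mkdir -p /mnt/base /mnt/scratch /mnt/build
mount -o ro /dev/sda1 /mnt/base

# Throwaway write layer in RAM (upperdir and workdir must share a fs)
mount -t tmpfs tmpfs /mnt/scratch
mkdir -p /mnt/scratch/upper /mnt/scratch/work

# Combine them; the build chroots into /mnt/build
mount -t overlay overlay \
    -o lowerdir=/mnt/base,upperdir=/mnt/scratch/upper,workdir=/mnt/scratch/work \
    /mnt/build

# ...run the package build in /mnt/build...

# Discard everything the build wrote
umount /mnt/build
umount /mnt/scratch
```

This resets the filesystem between builds without a reinstall, though as the reply below the original comment noted, it does nothing about state outside the filesystem.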
For one thing each package needs a different build environment, so the overlay filesystem wouldn't necessarily help much. For another, can you say with confidence that the build process (which typically runs some steps as root) will not affect any system state outside of the filesystem?
Right?
(Disclaimer: I wrote the Beowulf FAQ.)