
Canonical creates a custom 40-processor ARM build machine

68 points | zdw | 15 years ago | thetanktheory.squarespace.com

21 comments

[+] huntergdavis | 15 years ago
I predicted this about a year ago and wrote a blog series on how to set one up yourself with generic scripts (later collected into an ebook here: http://hunterdavis.com/build-your-own-distributed-compilatio...). It was featured a few times here on HN. Cross-compilation can actually be quite speedy, but speed isn't the only reason to use such a machine, especially in a business setting.
[+] sausagefeet | 15 years ago
Why would cross-compiling be seriously slow?
[+] jws | 15 years ago
I suspect it's because it's an unusual case and doesn't get as much attention as the native gcc code paths. That's compounded by the need to "compute like an ARM" for the constants, leading to some emulation.
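A minimal sketch of what "compute like an ARM" means for constants, assuming an x86-64 build host and a 32-bit ARM target (assumptions mine, not from the thread); the cross-compiler has to fold this expression with the target's type sizes, not the host's:

    #include <stdio.h>

    /* The cross-compiler must evaluate constant expressions by the
       target's rules, not the host's: on a 32-bit ARM target this
       folds to 0xffffffff, while naive evaluation on an x86-64 host
       would give 0xffffffffffffffff. */
    static const unsigned long ALL_ONES = (unsigned long)-1;

    int main(void)
    {
        printf("%lx\n", ALL_ONES); /* 8 hex digits on 32-bit ARM, 16 on x86-64 */
        return 0;
    }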

But the worst part is those damned autoconf scripts. They very cleverly probe the attributes of your x86 by compiling and running code during the build process, and then make decisions about how the code should run on your ARM. They are a never-ending sink of human effort. Best to just build on a machine where they will get the right answer without you fiddling with them.
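A sketch of that failure mode (illustrative, in the shape of an AC_RUN_IFELSE-style configure test): the script compiles and executes a small program, so when configure runs on the x86 host the recorded answer describes the host, and that wrong answer is then baked into the ARM build.

    #include <stdio.h>

    /* Typical run-time configure probe: it must be compiled AND run.
       Under cross-compilation it either can't run at all, or, run on
       the x86 host, reports the host's sizeof(long) (8 on x86-64)
       rather than the ARM target's (4 on 32-bit ARM). */
    int main(void)
    {
        printf("%u\n", (unsigned)sizeof(long));
        return 0;
    }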

[+] nupark2 | 15 years ago
The slow part wouldn't be the compilation process itself; it would be fixing the oceans of OSS code that doesn't cleanly configure/compile in a cross-compilation environment.
[+] zdw | 15 years ago
Not sure, but compiling packages can be I/O-bound rather than CPU-bound, and the work is fairly easy to split up.

Rather than one fast multicore server, this solution gets them a lot of separate systems, each with dedicated disk, memory, etc. Also, rebooting and wiping each node every time has security benefits.

The alternative would be a bunch of VMs on a single host, which would probably run into memory or I/O bandwidth contention quickly.

[+] mentat | 15 years ago
Yeah, this made no sense to me. There's no way a native ARM build solution will be comparable to a high-end, many-core, large-cache x86 solution.
[+] mathgladiator | 15 years ago
This is way cool, but mostly because it clued me in on the PandaBoard.
[+] joshu | 15 years ago
Calling this one 40-processor machine is much like calling a 42U rack filled with Dells an 80-processor machine.
[+] kragen | 15 years ago
It's like calling a bunch of minicomputers sending packets to each other over lines leased from AT&T a "network". Any fool can see that that's just a use of AT&T's network, not a network in itself. Networks are made of long lines interconnected with crossbar switches.

Right?

(Disclaimer: I wrote the Beowulf FAQ.)

[+] sciurus | 15 years ago
So before building each package the board PXE boots and installs the OS onto the USB-attached hard drive? That seems inefficient. Why not use an overlay filesystem and just throw away the changes after each build? Is there even a need for local storage, or could the nodes run off an NFS export?
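For concreteness, a hedged sketch of the throwaway-overlay idea via mount(2), using today's Linux overlayfs; the union filesystems actually available at the time (aufs, unionfs) differed in detail, and all paths here are hypothetical:

    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        /* Stack a writable scratch layer over a read-only base image;
           after the build, unmount and delete the upper layer to
           throw away all filesystem changes. */
        if (mount("overlay", "/build/root", "overlay", 0,
                  "lowerdir=/build/base,upperdir=/build/upper,workdir=/build/work") != 0) {
            perror("mount");
            return 1;
        }
        /* ... chroot into /build/root, run the package build, then
           unmount and remove /build/upper ... */
        return 0;
    }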
[+] JoshTriplett | 15 years ago
For one thing, each package needs a different build environment, so the overlay filesystem wouldn't necessarily help much. For another, can you say with confidence that the build process (which typically runs some steps as root) won't affect any system state outside of the filesystem?
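A hypothetical illustration of that second point: a root-level build or test step can flip kernel state through procfs, which an overlay on the root disk never captures (procfs is a separate mount), so only a reboot and reinstall reliably resets the node.

    #include <stdio.h>

    int main(void)
    {
        /* Writing a kernel tunable: this is "state outside the
           filesystem" in the sense that matters. It lives in the
           kernel, persists past the build, and survives discarding
           an overlay's upper layer. */
        FILE *f = fopen("/proc/sys/vm/overcommit_memory", "w");
        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        fputs("1\n", f);
        fclose(f);
        return 0;
    }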