I was amazed to learn that the Linux kernel supports 1,400 distinct 32-bit ARM targets!
That's ... a scary number, and it's easy to see why automated testing might be a good thing there.
I think the combinatorial explosion happens at least in part because, even though there's a limited number of actual ARM cores, the peripherals which Linux needs to support are often vendor-defined and thus different for each system-on-a-chip (or at least different for each device series from a particular manufacturer). I didn't dig through the sources to verify this, but I've heard of the problem before: ARM doesn't define a standard way for the CPU core to learn about its peripherals at run-time.
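For what it's worth, this is the gap that device trees were introduced to fill on ARM: since the core can't enumerate its own peripherals, the kernel is handed a static description of them at boot. A rough sketch of what that looks like — the node names, addresses, and "compatible" strings here are made up for illustration, not taken from any real board file:

```dts
/* Illustrative only: a vendor-specific UART described in a device tree.
   The "compatible" strings tell the kernel which driver to bind;
   "reg" gives the MMIO base and size, since nothing is discoverable. */
/ {
        compatible = "vendor,some-board", "vendor,some-soc";

        serial@101f0000 {
                compatible = "vendor,some-soc-uart";
                reg = <0x101f0000 0x1000>;
                interrupts = <12>;
        };
};
```

Before device trees, all of this lived in per-board C files in the kernel itself, which is a big part of why there are so many distinct targets.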
Keep in mind that's mainly because the ARM world is not standardized the way the PC world is. In practice, the differences between most of these targets will be in the very early initialization code and things like the clock hierarchy; beyond that, most of the code will be shared across many variants.
Imagine having to use a different kernel every time you upgrade your desktop; that's basically how the ARM world works so far.
Personally, I find it amazing that a project the size of Linux has relatively few automated tests, and relatively little testing done by its maintainers, leading projects such as this (and the LTP, etc.) to come about to actually ensure ongoing quality.
How many other major projects the size of Linux have as little upstream testing?
This project could use a downloadable script that automatically compares "some machine you have which runs Linux" against the hardware configurations currently available within the CI platform, to see whether it would be a useful contribution.
That could be really interesting. Grabbing the system configuration from lshw should be relatively simple; what could be more interesting is the backend that would tell you whether your machine is an interesting one or is already covered.
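A minimal sketch of what the client side of that script might look like, assuming `lshw -json` output as input. The `KNOWN_BOARDS` set is a stand-in for whatever the CI service's coverage API would actually return — that backend is hypothetical here:

```python
import json
import shutil
import subprocess

# Hypothetical: in a real script this set would be fetched from the
# CI platform's API rather than hard-coded.
KNOWN_BOARDS = {"BeagleBone Black", "Raspberry Pi 2 Model B"}


def machine_description(lshw_json: str) -> str:
    """Extract a human-readable machine name from `lshw -json` output."""
    info = json.loads(lshw_json)
    # Depending on version, lshw emits either a single top-level node
    # or a list containing one.
    if isinstance(info, list):
        info = info[0]
    vendor = info.get("vendor", "").strip()
    product = info.get("product", "").strip()
    return " ".join(p for p in (vendor, product) if p) or "unknown"


def is_useful_contribution(lshw_json: str) -> bool:
    """True if this machine isn't already represented in the CI farm."""
    return machine_description(lshw_json) not in KNOWN_BOARDS


if __name__ == "__main__" and shutil.which("lshw"):
    out = subprocess.run(["lshw", "-json"], capture_output=True, text=True).stdout
    print(machine_description(out), "| useful to CI:", is_useful_contribution(out))
```

The interesting part is everything this sketch omits: matching on SoC family rather than exact product strings, so that a board that differs only in RAM size doesn't look "new".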
Whack in a build/regression test for the NVIDIA driver against the latest kernel sources. I spent so long today trying to get it to compile on Debian against the latest sources, and I feel dumber as a result. Which is probably fair enough; I'd also had a few beers.
It is all as it was foretold. The mighty Beowulf cluster reawakens, summoned by its true calling. Servants of the Dark File, bring forth your abandoned and dying devices, that they may be blessed with an IP in this new Mecca of logic and crystal.
You can bet pretty much every hardware vendor is running some kind of kernel CI internally.