I don't want to predict the future, but if HP thinks it can win over customers with 100% marketing and no technical details, that's not a good sign.
I make the IT purchasing decisions for my company, and I know what an ARM CPU is and why it's a good idea. That's exactly why HP's approach doesn't make sense to me: ARM hasn't been proven in this market, so customers and vendors (like HP) need to work very closely together, and there needs to be openness about design issues, performance tuning, etc.
HP's starting off on the wrong foot. Maybe there's enough money in ARM that they'll figure it out. But if I have to make a prediction, I'm of the opinion that the ARM server market is going to take off so fast that HP will be left behind.
Thank you, I looked at every link on the HP page, twice, and learned nothing about what they are actually going to sell. I googled "Redstone Server Development" and found some more recent articles: http://www.crn.com/news/data-center/231902061/hp-intros-low-...
It's against HP's immune system to sell machines without Windows. They already try hard not to sell their HP-UX and NonStop boxes, but clients insist on relying on them.
No kidding. Clearly they're using IIS, which has an equivalent of mod_rewrite built in. Why would they keep using such awful URLs? The only other major company I can think of still using cruft like that is IBM. Totally unnecessary.
Most people are overlooking the change in thinking that these servers represent.
Think of servers built from the ARM processors already in the pipeline for cellphones, which will come with 64+ cores. The biggest problem in a datacenter is not space or processing power, it's energy consumption and heat dissipation. Walking into a datacenter, the place looks half empty, with plenty of space to fit 6x more servers. Today that can't be done because there's no air-conditioning capacity to cool more servers in the building.
Also, the way we process data has changed in recent years, for example with MapReduce, which makes many cores way more useful than a single server with one massive 5 GHz core. Many servers today are actually I/O bound, not CPU bound; there's excess CPU capacity.
Think of a server with 64 ARM cores and an array of SSDs. It won't heat up as much as mechanical disks or today's CPUs, it has far fewer I/O constraints thanks to SSD speeds, and it offers far more parallel processing power.
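As a toy illustration of the MapReduce point (plain-vanilla Python, nothing Calxeda- or HP-specific): a word count whose map phase fans out across however many cores the box has, be it 4 or 64.

    # Toy MapReduce-style word count: the map phase runs on every
    # available core; the reduce phase merges the partial counts.
    from collections import Counter
    from functools import reduce
    from multiprocessing import Pool, cpu_count

    def map_count(chunk):
        # "map": count words in one chunk of text
        return Counter(chunk.split())

    def merge(a, b):
        # "reduce": fold two partial counts together
        a.update(b)
        return a

    if __name__ == "__main__":
        docs = ["the quick brown fox", "jumps over the lazy dog"] * 10000
        with Pool(cpu_count()) as pool:          # one worker per core, 4 or 64
            partials = pool.map(map_count, docs)
        totals = reduce(merge, partials, Counter())
        print(totals.most_common(3))

More slow cores help this kind of job roughly linearly, which is why the single fast core matters less here.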
Our solution to infrastructure/performance problems, going back a fair way, has been to simply throw more hardware at them.
Lately we are starting to hit a limit - a power limit. We are actually limited in the data center we use not by space, but by our power consumption, so we now have a pseudo KPI of "reduce our power usage". It's a good goal, but certainly not one I had ever contemplated.
A (future) stumbling block is our reliance on x86. Would love to be able to move to ARM =\
The sales pitch for the Redstone systems, says Santeler, is that a half rack of Redstone machines and their external switches implementing 1,600 server nodes has 41 cables, burns 9.9 kilowatts, and costs $1.2m.
A more traditional x86-based cluster doing the same amount of work would only require 400 two-socket Xeon servers, but it would take up 10 racks of space, have 1,600 cables, burn 91 kilowatts, and cost $3.3m.
Hmm, let's see. That's about 7-8 grand per Xeon server, something like an HP ProLiant DL360R07 (2 x 6-core Xeons at 2.66GHz). That's 3 times as many cores per server as a Redstone node, each clocked at 2.66 times the frequency and doing more instructions per clock tick, too. And that's without hyperthreading.
Am I missing something big, or is the Redstone solution neither cost-effective nor energy-effective?
You assume the application is compute limited and that the extra performance on the Xeon translates into extra performance on a given application. That's probably not a good assumption for this kind of workload.
Even if you triple the number of Redstone machines, you'll still use just ~30% of the energy and 7.5% of the cabling.
And each group of 4 ARM cores has its own memory channels and I/O ports, vs. 6-12 cores sharing them on the Xeon [corrected] (the point being that CPU speed is not the only variable here).
By my calculations the Redstone config has 6400 cores and the traditional one has 4800 cores. But discussing such vague claims is pretty pointless anyway.
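For anyone who wants to sanity-check those claims, here's a quick back-of-the-envelope pass using only the figures quoted from the article plus the core counts already worked out in this thread (12 cores per Xeon server, i.e. the 2 x 6-core DL360-class box mentioned above, is an assumption):

    # Quoted figures: Redstone half rack vs. traditional Xeon cluster
    redstone = {"nodes": 1600, "cores": 1600 * 4, "kw": 9.9, "usd": 1.2e6, "cables": 41}
    xeon     = {"nodes": 400,  "cores": 400 * 12, "kw": 91,  "usd": 3.3e6, "cables": 1600}

    print(xeon["usd"] / xeon["nodes"])                # ~8250 USD per Xeon server ("7-8 grand")
    print(redstone["cores"], xeon["cores"])           # 6400 vs. 4800 cores
    print(1000 * redstone["kw"] / redstone["cores"])  # ~1.5 W per ARM core
    print(1000 * xeon["kw"] / xeon["cores"])          # ~19 W per Xeon core
    print(redstone["usd"] / redstone["cores"])        # ~188 USD per ARM core
    print(xeon["usd"] / xeon["cores"])                # ~688 USD per Xeon core
    # Even tripling the Redstone config to offset its slower cores:
    print(3 * redstone["kw"] / xeon["kw"])            # ~0.33 of the power
    print(3 * redstone["cables"] / xeon["cables"])    # ~0.077 of the cabling

Per core the ARM box is far cheaper and cooler; whether a Xeon core is worth 3-4x an A9 core depends entirely on the workload.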
I find it strange that they're using Cortex-A9 CPUs. I would have expected anyone going for the server market with ARM cores to use Cortex-A15, which has 40 bit addressing with PAE.
I think this is a highly significant move by ARM. It's amazing when you speak to datacentre people and they tell you how much of your server charges go on electricity and cooling. My recent example was £200 extra/year for an additional Opteron 6128 and £400 extra/year for the increased power usage from that processor!
There is an obvious gap in the market for low power, low heat generating, high memory throughput server processors. I'd just like to see a reference Linux distro which supports 16 ARM cores as well as a reference server card...
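For a rough sense of how one processor turns into hundreds of pounds a year, here's a sketch; the TDP, PUE and tariff below are assumptions for illustration, not figures from the thread:

    # Rough plausibility check on the "~£400/year" anecdote (assumed numbers)
    tdp_w = 115            # assumed TDP for an Opteron 6128-class part
    hours_per_year = 24 * 365
    pue = 1.8              # assumed cooling/distribution overhead in the datacentre
    gbp_per_kwh = 0.20     # assumed fully loaded tariff
    kwh = tdp_w / 1000 * hours_per_year * pue
    print(kwh, kwh * gbp_per_kwh)   # ~1800 kWh, roughly £360/year - same ballpark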
No virtualization support. Not being able to address more than 4G is a killer for some apps. CPU horsepower density isn't quite as high as they say: 72 quad-core A9s per rack unit vs. 6.4 Xeons per rack unit in a comparable 10U blade server. A Nehalem clocks about 3x faster and runs about 1.5-2x faster per clock than the A9 for "random server logic" workloads, so the ARM option comes out ahead on density by only a little bit.
Power consumption isn't clear. I see no peak load wattage numbers, which worries me for a product marketed expressly as a low-power option.
One advantage this architecture does have is density of memory bandwidth. They have 72 DDR3 channels per rack unit vs. 25.6 for a blade server filled with 4-channel Westmere EXs (the Intel boards will stack the DIMMs up on the same channel). So you might want to look seriously at it as a hosting platform for a very parallel in-memory data store.
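The density math behind that, spelled out (the cores per Xeon socket is an assumption, taken as 6 like the DL360 config quoted upthread; the 3x clock and 1.5-2x per-clock factors are the parent comment's own estimates):

    # Compute-density comparison per rack unit
    a9_cores_per_u = 72 * 4                    # 288 quad-core Cortex-A9 cores per U
    xeon_sockets_per_u = 64 / 10               # 6.4 sockets per U in a 10U blade chassis
    xeon_cores_per_u = xeon_sockets_per_u * 6  # assuming 6-core Xeons

    for per_core_speedup in (3 * 1.5, 3 * 2.0):     # one Xeon core vs. one A9 core
        print(xeon_cores_per_u * per_core_speedup)  # ~173-230 "A9-equivalent" cores per U
    print(a9_cores_per_u)                           # 288, so ARM is ahead, but not by much

    # Memory-channel density
    print(72 / 25.6)                                # ~2.8x more DDR3 channels per rack unit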
According to
http://www.theregister.co.uk/2010/08/25/arm_server_extension...
there's an extension for 32-bit ARM processors that allows them to address memory with 40-bit physical addresses (1TB of RAM).
That comes before the 64-bit processors, which should arrive in 2014.
Btw, I don't know if this option is available in the Calxeda/HP solutions.
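The address-space arithmetic behind those numbers, for reference:

    GiB = 2 ** 30
    print(2 ** 32 // GiB)   # 4 GiB reachable with plain 32-bit physical addresses
    print(2 ** 40 // GiB)   # 1024 GiB (1 TiB) with the 40-bit extension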
You can get details on the underlying server architecture from Calxeda's pages describing the system on chip, and the quad server boards that are in the initial HP design:
Just yesterday I was dreaming of small server racks composed of Raspberry Pis and BeagleBoards. Wish I had a few million lying around… or a cheap dedicated link at home.
From the Register article: "The hyperscale server effort is known as Project Moonshot, and the first server platform to be created under the project is known as Redstone, after the surface-to-surface missile created for the US Army, which was used to launch America's first satellite in 1958 and Alan Shepard, the country's first astronaut, in 1961."
atlbeer:
http://www.theregister.co.uk/2011/11/01/hp_redstone_calxeda_...
hmottestad:
HP EliteBook 2760p
HP EliteBook 8560w
Hoff:
For this product, http://hp.com/go/moonshot works
yycom:
That URL looks fine to me.
rogk11:
Not huge but if you add multiple of these cards in a server, the power adds up.
Donch:
The specs:
http://www.calxeda.com/products/energycore/ecx1000/techspecs
also only refer to 32-bit memory addressing (i.e. <4GB of memory). Seems like the wait will be for the ARMv8 64-bit processors to be integrated.
Interesting times!
trebor:
I hope it succeeds, just to give Intel a run for their money. I really think that ARM is the future of computing (including the desktop).
stuntprogrammer:
http://calxeda.com/products
shimon_e:
It could be a killer sales point if these servers need no cooling.
wmf:
And wasn't RLX technically a success since it was bought by HP and then canceled?