The Future Google Rackspace Power9 System

136 points | jonbaer | 10 years ago | nextplatform.com

85 comments

[+] nl | 10 years ago
TheNextPlatform is a pretty bad site. They rehash, badly, information available elsewhere and add a hyperactive spin on it all.

Here's the truth:

Google uses lots of compute power (insightful!)

Google isn't shifting to Power.

Google does have an active R&D program looking at Power.

TheNextPlatform misses the whole point here: that Zaius board has 32 DDR4 slots (commercially available servers from, e.g., Dell max out at 24), and it has 2 NVLink slots! (!!)

Those NVLink slots are what Intel should be worried about, because that's where Google is prepared to pay money. They are building computers that lock themselves into Nvidia, and they are doing it gladly.

Intel better find a way to compete with NVidia on deep learning.

[+] voltagex_ | 10 years ago
What's the current state of Power development in the Linux kernel like? I thought it was only IBM holding the fort (via OzLabs), but this could be a big boost.
[+] cpeterso | 10 years ago
Why does Facebook's Open Rack use a nonstandard rack size? That seems like an obvious barrier for adoption of hardware that was designed to be a commodity.
[+] DiabloD3 | 10 years ago
Racks were designed to fit telecom hardware originally. The Open Rack size is designed around common computer hardware sizes.

It isn't a barrier for adoption because swapping racks out of a datacenter is easy, and they fit on standard datacenter floor tiles.

What is a barrier is that damned 48V.

Disclaimer: I run a hosting company.

[+] bluedino | 10 years ago
The Open Rack’s equipment bay has been widened from 19 inches to 21 inches, but it can still accommodate standard 19-inch equipment. A wider 21-inch bay, however, enables some “interesting configurations”, like installing three motherboards or five 3.5-inch disk drives side-by-side in one chassis. The outer width of the rack has remained a standard 24 inches to accommodate standard floor tiles.
[+] ksec | 10 years ago
They are going up against the coming Xeon E5 Broadwell + FPGA. Power9 does offer more memory per rack, but I don't see why Intel can't adapt with a better memory controller.

To put it simply: what is the incentive to switch over to the Power9 platform?

[+] dman | 10 years ago
Having viable options to Intel would be very helpful for Google when negotiating bulk rates for the Xeon processors.
[+] petra | 10 years ago
So this raises all sorts of questions: Can Intel be fast enough in integrating Altera (software + hardware + corporate...)? Which is the better FPGA development environment, with more developer share, etc.? FPGAs can be cannibalistic to Intel's business; will they have an incentive problem? Do some companies (say, in China) prefer an open processor like POWER, and will this create some ecosystem advantage? Are there any advantageous startups to buy, like Kandou Bus (faster interconnects), and who will buy them?

So it's not certain Intel will win.

[+] cm3 | 10 years ago
If one has the financial option to diversify, one would be wise to use x86, ARM and POWER at the same time. There aren't many examples where monoculture has been beneficial to anyone but the artificially selected culture.
[+] bogomipz | 10 years ago
Broadwell + FPGA? Is there an Intel design that has both? Can you clarify?

Also what is the issue with the memory controller?

[+] vegabook | 10 years ago
I would be interested to hear more about this I/O issue. Am I right to assume that, because of the genesis of the x86 architecture in desktop computing, it is not optimized for server-class I/O, and that this permeates the design (i.e., it is difficult to catch up with a ground-up server architecture)? If that's true, then this is a big deal for Power. Certainly my big-data workflows are usually memory-I/O bound, not compute bound.
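The memory-bound vs. compute-bound distinction above can be made concrete with a quick measurement. This is my own illustrative sketch (array sizes and the NumPy workloads are arbitrary choices, not from the thread): a streaming sum touches each byte once and is limited by memory bandwidth, while a matmul does O(m^3) work on O(m^2) data and is limited by arithmetic throughput.

```python
# Sketch: memory-bandwidth-bound vs compute-bound work (illustrative only).
import time
import numpy as np

n = 20_000_000              # ~160 MB of float64, well beyond any CPU cache
a = np.random.rand(n)

t0 = time.perf_counter()
s = a.sum()                 # streams the whole array once: bandwidth-bound
elapsed = time.perf_counter() - t0
print(f"effective read bandwidth: {a.nbytes / 1e9 / elapsed:.1f} GB/s")

m = 2000
b = np.random.rand(m, m)
t0 = time.perf_counter()
c = b @ b                   # 2*m^3 FLOPs on 2*m^2 values: compute-bound
elapsed = time.perf_counter() - t0
print(f"matmul throughput: {2 * m**3 / elapsed / 1e9:.1f} GFLOP/s")
```

If the first number sits near your platform's DRAM bandwidth while the second scales with core count and SIMD width, the workload mix tells you which resource (memory system or ALUs) a Power-style high-bandwidth design would actually help.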
[+] virtuallynathan | 10 years ago
I wonder if the inclusion of NVLink in Power8+ will cause Power to excel in ML applications. It could well be quite a bit faster than x86 just due to the memory/interconnect bandwidth.
[+] PeCaN | 10 years ago
NVLink and CAPI[1] both have huge potential for machine learning. However, a lot of the benefits of NVLink for ML come from GPU-to-GPU NVLink, which doesn't require CPU support.

1. CAPI doesn't seem to get mentioned too much around here, but imagine an FPGA directly accessing some shared system memory. It's neat.
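The bandwidth argument in the two comments above is easy to put numbers on. These are rough published figures, not measurements: PCIe 3.0 x16 tops out around ~16 GB/s per direction, while one NVLink 1.0 link is ~20 GB/s per direction and a P100-class GPU exposes four of them; the 2 GB tensor is a hypothetical example.

```python
# Back-of-the-envelope: time to move a 2 GB tensor host<->GPU.
tensor_gb = 2.0                # hypothetical batch of activations

pcie_gbps = 16.0               # PCIe 3.0 x16, approximate peak per direction
nvlink_gbps = 4 * 20.0         # 4 aggregated NVLink 1.0 links, approximate

t_pcie = tensor_gb / pcie_gbps
t_nvlink = tensor_gb / nvlink_gbps
print(f"PCIe 3.0 x16: {t_pcie * 1000:.0f} ms")     # -> 125 ms
print(f"NVLink x4:    {t_nvlink * 1000:.0f} ms")   # -> 25 ms
print(f"speedup:      {t_pcie / t_nvlink:.0f}x")   # -> 5x
```

Real transfers land below these theoretical peaks, but the ~5x gap is the point: for ML workloads that shuttle large tensors between host memory and GPUs, the interconnect, not the CPU core, is the bottleneck.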

[+] bluedino | 10 years ago
IBM reps love to throw around the "Google is switching to IBM" line. Can they possibly compete with Intel on price? Why isn't AMD trying to reach this market?
[+] rdtsc | 10 years ago
AMD would still have the AMD64 architecture, though? Or are you thinking they should come up with a new competing architecture?
[+] xiaopingguo | 10 years ago
Wasn't Google at one point all about commodity/consumer level hardware for their servers? Seems a huge turnaround.
[+] transfire | 10 years ago
I am surprised. I thought 64-bit ARM was the new thing headed to the server farms.
[+] wyldfire | 10 years ago
The ARM instruction set is pretty mature, but the system architecture for servers is less so, IMO. I think there's little commonality for bootstrapping the various SoCs.
[+] DannyBee | 10 years ago
It will be there eventually. It is definitely not there now, despite what some may have you think :)
[+] bogomipz | 10 years ago
Is there enough juice in ARM chips to power servers and compete with Intel offerings like Skylake/Haswell?
[+] mozumder | 10 years ago
Kinda amazing that they can fit 2 Power9s, 2 FHFL PCIe x16 slots, 15 drives, and 2 TB of memory in 1 rack unit.
[+] nickpeterson | 10 years ago
Hey Google, sell these to other companies :)
[+] crudbug | 10 years ago
It would be great if Dell / HP / Cisco / Lenovo and others started forging some POWER gear.

For a platform to succeed, it should provide a low barrier to entry: maybe a low-capacity P9 system at a low cost, ~$1K? That would be a better strategy for OF.

I know Power is targeting cloud computing applications, but IBM should consider low-cost, entry-level gear to gain some market share at the lower end, which can transition to higher-margin markets.