They've also formed a consortium to promote this processor, of which Google is a flagship member (http://openpowerfoundation.org/). The expectation (or hope, or fear, depending on your point of view) is that Google may be designing their future server infrastructure around this chip. This motherboard is some of the first concrete evidence of this.
The chip is exciting to a lot of people not just because it offers competition to Intel, but because it's the first potentially strong competitor to x86/x64 to appear in the server market in quite a while. By the specs, it's really quite a powerhouse: http://www.extremetech.com/computing/181102-ibm-power8-openp...
I think that rumor also stated something about ARM, so Google may not be done designing its chips yet.
I'm glad they are finally doing this, not so much because I care about what happens in the server world, but because so many product chip decisions at Google have been political (by choosing Intel chips) simply because Otellini was on their board. Hopefully this will signal a change from that.
However, the "Google model" of computation involves a huge number of cheap "light" servers, instead of a few "big" servers (on which the Power model was based).
Well, the Power architecture had some success in Apple products, but that ended with IBM's inability to scale production and produce parts that consumed less power.
So presumably Google will manufacture their own POWER8 CPUs. But who will make them? TSMC? GloFo? Not IBM, since IBM will be exiting the fab business in the near future.
I am going to guess this dual-CPU variant will be aiming at the Intel Xeon E5 v2 series. The 10-12 core versions cost anywhere between $1200 and $2600, although Google does get a huge discount for buying directly from Intel at their volume.
Assuming the cost to make each 12-core POWER8 is $200, that is a potential saving of $1000 per CPU, and $2000 per server.
The last estimates were around 1-1.5 million servers at Google in 2012 and 2M+ in 2013. Maybe they are approaching 3M in 2014/15, even if most of those use low-power CPUs for storage or other needs. One million CPUs made in-house could mean savings of up to a billion dollars.
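Spelled out, the back-of-envelope estimate above looks like this (all figures are the assumptions from this comment, not actual prices):

```python
# Back-of-envelope version of the savings estimate above.
# Every number here is the comment's assumption, not a real quote.
xeon_price = 1200            # low end of the quoted Xeon E5 v2 range, USD
power8_cost = 200            # assumed cost to make a 12-core POWER8, USD
cpus_per_server = 2          # dual-socket board

saving_per_cpu = xeon_price - power8_cost               # $1000
saving_per_server = saving_per_cpu * cpus_per_server    # $2000

cpus_self_made = 1_000_000   # "one million CPUs made in-house"
total_saving = saving_per_cpu * cpus_self_made          # $1 billion

print(saving_per_cpu, saving_per_server, total_saving)
```

At the high end of the quoted Xeon range ($2600) the per-CPU figure would be correspondingly larger, so $1000 is the conservative case.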
Could this kick-start the server and enterprise industry into buying POWER8 CPUs at a much cheaper price? And once there is enough momentum and software optimization (JVM), it could filter down to the web hosting industry as well.
In the best case scenario, this means big trouble for Intel.
This could also mean big trouble for AMD since they're pushing high-density ARM64 in the data center; one could even go so far as to assume that Google isn't taking AMD seriously.
Another possibility, with POWER being open and having a market in servers, is eASIC, which offers chip manufacturing technology suited to lower volumes, with easier customizability, to offer a customizable server processor.
In the places this fits, it could offer substantial improvement. For example, 10-100x performance/(cost+power) for in-memory cache servers.
And they're working on making this tech programmable while still keeping the same cost levels.
And all this in the context of Moore's law grinding to a halt. So Intel will definitely have a hard time ahead.
EDIT: it appears that the POWER8 supports an open extension interface to other chips (CAPI), which means we'll see such accelerators sooner rather than later.
I wonder if POWER8 based servers will be available for the mass market? I'm not sure whether Google is interested in commoditizing POWER8 servers or just participates in the OpenPOWER foundation to ensure that POWER-based servers will suit their needs. The fact that Google is open about their new motherboard hints at the former, but it's not much.
I wonder how a non-Google-scale developer could even potentially get to use POWER-based servers. Will they be available from the regular dedicated server hosting companies? What OS could they run? RHEL does support the POWER platform, but for a hefty price: https://www.redhat.com/apps/store/server/ CentOS doesn't, presumably because all the POWER hardware CentOS developers could get is either very expensive or esoteric. That likely means I don't have to consider using POWER-based servers for at least 3 years, right?
Can someone explain the benefits of POWER8 as compared to Intel? I thought the volume of POWER8 chips being low (as compared to the exceedingly powerful Intel and ARM chips) would mean that innovation in that area would be low as well.
I used to work on Google's indexing system, and I guess for a lot of Google's workloads, total machine performance-per-Watt is the key metric. Many server workloads are largely coordinating the disk controller and the network controller, with maybe some heavy integer and a bit of fp processing in the middle, so your normal pure-CPU performance-per-Watt benchmarks can be misleading. ("A supercomputer is a device for turning compute-bound problems into I/O-bound problems." --Seymour Cray, possibly apocryphal) POWER8 uses a lot of Watts, but it also has high I/O throughput, so for many server workloads it could beat Intel chips in full-system performance-per-Watt. I would be really interested in seeing an I/O-per-Watt CPU benchmark.
Many of Google's workloads are embarrassingly parallel. I used to work on Google's indexing system, and one of the binaries I worked with had a bit over a million threads (across many processes and machines) running at any moment, with most of those threads blocked on I/O. POWER8 has a fair number of hardware threads per core, which should help with the heavily multithreaded style of programming used in many Google projects.
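A toy sketch of why this works: when most threads spend their time blocked on I/O (simulated with `sleep` below), you can usefully run far more threads than you have cores, because the waits overlap rather than add up.

```python
# Toy illustration: 100 threads that each "do I/O" for 0.1s.
# Because they are all blocked concurrently, wall time stays
# near 0.1s rather than the 10s a sequential run would take.
import threading
import time

def fake_io_task():
    time.sleep(0.1)  # stands in for a blocking disk/network call

start = time.time()
threads = [threading.Thread(target=fake_io_task) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

print(f"{elapsed:.2f}s for 100 overlapping 0.1s waits")
```

Hardware threads (SMT) give the same kind of overlap for memory stalls, without the OS-level thread-switch cost.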
Google's datacenters use enormous amounts of power. Several of their locations are former aluminum smelting plants, because aluminum smelting also uses enormous amounts of power, so the power lines are already in place. My team luckily happened to sit next to one of the guys who designs Google's datacenters, and we happened to overhear him say something over the phone about not being able to make sense of enough power to power a small town just disappearing from our usage. One of the guys on my team asked him when this power reduction occurred, and if it had happened before. We worked out that the huge swings in power usage happened when we shut down the prototype for the new indexing system. Most of Google's datacenters are largely empty space because they are limited by the power lines and cooling capacity.
In another instance, we discovered a mistake had been made in measuring the maximum power drawn by one of the generations of Google servers. Google had a program designed to max out the systems, and they plugged a server into a power meter wall-wart and ran this program for a while. The maximum power usage was under-estimated due to a combination of the office where the measurement was made being cooler than a datacenter (electrical resistance of most conductors has a positive temperature coefficient in the range of temperatures found in working servers), the machine not being allowed sufficient time to warm up, and/or the indexing system being more highly tuned than the program designed to maximize server utilization. (I like to think it was mostly the latter, but I suspect the first two were the main contributors.) The end result was that cooling was under-provisioned in one of the datacenters. During a heatwave in the area occupied by the datacenter used for most of the indexing process, the datacenter began to overheat, so one of the guys on my team was getting temperature updates every 10 to 15 minutes from a guy actually in the datacenter, and adjusting the number of processes running the indexing system up and down accordingly in order to match indexing speed to the cooling capacity. When you're really truly maxing out that many machines 24/7, some machines will break every day, so the indexing system (like most Google systems) is tolerant of processes just being killed either by software fault, hardware fault, or the cross-machine scheduler.
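The manual throttling described above amounts to a simple feedback loop. A minimal sketch, with entirely invented thresholds and step sizes (nothing here reflects Google's actual numbers):

```python
# Hypothetical feedback loop: given a temperature reading, choose how
# many indexing processes to run so heat load stays within cooling
# capacity. All thresholds and step sizes are made up for illustration.
def target_process_count(temp_c, current, max_procs=1000, step=50):
    """Shed load when hot, restore it when cool, hold in between."""
    if temp_c > 35:                       # overheating: shed load
        return max(0, current - step)
    if temp_c < 30:                       # headroom: add load back
        return min(max_procs, current + step)
    return current                        # within band: hold steady

procs = 1000
for reading in [36, 37, 34, 31, 28, 27]:  # one reading every ~10-15 min
    procs = target_process_count(reading, procs)

print(procs)
```

This only works because, as noted above, the indexing system tolerates processes being killed at any time; otherwise shedding load would lose work.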
A combination of realizing smartphones were really going to take off (and they were all ARM-powered) and realizing how important total-machine performance-per-Watt is to some major server purchasers led me to invest in ARM Holdings in mid-2009. (This does not constitute investment advice. The rest of the market has now realized what I realized in 2009, and I don't feel I have more insight than the market at this point.)
Ridiculous parallelism. A POWER8 chip has 12 cores, and each core can handle 8 threads. As a result, these chips can keep the pipeline pretty much always full, and provide massive performance boosts to things like database servers.
Chip design is somewhat like software, adding more people or resources doesn't necessarily make your product superior. Look how long tiny AMD has been putting up a fight.
One benefit is that IBM is allowing a direct link to the CPU via their CAPI (Coherent Accelerator Processor Interface). Currently, Intel has frozen everyone out of using their QPI. This resulted in NVIDIA no longer being able to make chipsets like the Ion.
An NVIDIA chipset and GPU would be able to go well beyond what NVIDIA is able to do with Intel chips (limited to PCI hooks).
Two things. First, slightly off topic: is there any way this could be a negotiating position with Intel, on price?
Second: while many CPU cores (with enough I/O) are great for large Borg MapReduce jobs, I am curious to see if Google will develop/use better software technology for running general-purpose jobs more efficiently on many cores. Properly written Java and Haskell (which I think Google uses a bit in house) help, but the area seems ripe for improvement.
A layout like this means you can use both sides of the motherboard for I/O slots. So in a 1/2U box, you can get more than one or two expansion cards in place. The PCI slots themselves seem to be hammock connectors, which I was curious about too, and googling doesn't seem to turn up any info, unless it's too early/late and I'm missing the obvious.
There's a huge difference between a custom-made server board and a mainstream CPU for general-purpose use in desktops or commodity servers.
ARM is definitely taking market share from Intel on the low end, but AMD is just about the only viable competition Intel has right now in the part of the market where they make the bulk of their income and margin.
I think currently there are few big customers that can afford the overhead of porting their code and dependencies to a different architecture.
For example, I don't expect cloud providers to have a huge market soon for non-x86 architectures. Well, there are JVM or other VM users who in theory needn't care, as long as they don't need some native library.
In the past, the battle with Intel had to be fought by providing an alternative implementation of the x86 instruction set, for precisely the same reason: legacy.
The mobile market proved you can achieve good performance with ARM, and especially better performance per watt. I really can't wait to see some more fights in this arena.
250W TDP in a package that size.. as the article correctly states, it's about how many FLOPs you can get inside a rackmount case. that TDP alone is going to mean that you won't be able to put that many in a single case.
a dual socket board, 500W on CPUs, 600W with everything else.. the power supply would have to be something special, but the biggest challenge there would be getting the energy (i.e. heat) back out of the box..
GPUs have similar TDPs and issues - that's why the HSFs on top of them are so massive (and hence GPUs have a bit of an advantage here - they have the entire PCIE board to fit their cooling hardware on)
finally, 4.5GHz? what the hell? in one clock cycle, a beam of light wouldn't even get half way across the board (EDIT: not chip). branch/cache/TLB misses may literally kill any reasonable performance you might hope to get out of it. intel get around this by having years of market-leading research in branch predictors, caching models, etc. and it's going to be no mean feat to match that.
i know IBM aren't exactly new to this game. but AFAIK x86 has always been faster, clock for clock, than POWER.
that said, i hope my concerns are misplaced. i'm hoping intel get some competition in the server room. it will be of benefit to everyone.
Light travels about 66 millimetres in 0.22 nanoseconds, and the chip is about 25 millimetres on a side, so a beam of light could still cross the chip two or three times in one clock cycle. Maybe you wanted to say across the motherboard?
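The per-cycle distance works out as follows (and note this is an upper bound: signals on a real chip propagate a good deal slower than light in vacuum):

```python
# Distance light in vacuum covers during one 4.5 GHz clock cycle.
c = 299_792_458          # speed of light in vacuum, m/s
f = 4.5e9                # clock frequency, Hz

period_ns = (1 / f) * 1e9        # ~0.22 ns per cycle
mm_per_cycle = (c / f) * 1000    # ~66.6 mm per cycle

print(f"{period_ns:.2f} ns per cycle, {mm_per_cycle:.1f} mm per cycle")
```

So one cycle spans a couple of chip widths (~25 mm) but well under a motherboard width, which is why off-chip accesses cost many cycles.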
I don't think 4.5 GHz is somehow ridiculous when 3 GHz is routine (and POWER7 was 4.2 GHz). Hundreds of cycles of latency when accessing anything off the chip is now routine - that's the world we live in now. I think that the biggest problem is that IBM is not able to make the investments (especially in semiconductor manufacturing) to match Intel's rate of bringing technology to market. The current POWER7 is a 45-nm device if I remember correctly, and this 22-nm POWER8 is not yet on the market. Intel has been selling 22-nm Haswells for how long now? And of course the POWER7 chips have been up against next-generation semiconductors for most of their life.
EDIT: I see that IBM started selling POWER8 systems a few days ago. That's close to a year later than Haswell, and what's more, this chip is likely to compete against 14-nm processors for most of its lifetime.
Aren't they unique in using DRAM for the lowest level of onboard cache, and therefore have a lot of it since those are just tiny cells that store a charge for a while?
Yeah, starting with the POWER7. POWER8 has 96 MiB of eDRAM (e for embedded).
The Centaur memory controllers also have 16 MiB of eDRAM each; max them out at 8 and you get 128 MiB total of L4.
Compared to Intel's current offerings, the L1 data cache and L2 unified cache are twice as big. Don't know about timings, though.
The biggest Intel Ivy Bridge Xeon server CPUs have slightly more transistors (about 100 million more), but on a much smaller die, 31% less area. Look at the ones with 12 and 15 native cores: https://en.wikipedia.org/wiki/Ivy_Bridge_(microarchitecture)... they list at $2336 to $6841.
Between people shifting from PCs to ARM-powered phones and major datacenter users doing their best to cut costs, this is shaping up to be a tough decade for Intel.
In all seriousness, I would not want to be leading Intel right now as I can't imagine what they could actually do to escape this.
Hindsight makes Itanium look like even more of a disaster, when the energy of that era should have gone into evolving the x86 platform for the future. Without AMD doing what they did (x86-64), I wonder where Intel would actually stand in the server market today.
IBM was pretty close to cutting off most funding to future development, and they just closed the factory in Minnesota that made the servers. They are pretty much the last survivor of the Unix wars.
There are major customers using this stuff and scaling up, but the industry as a whole is shifting to scale out. There might be a good story here with POWER8, but can you trust that the platform will be around?
So they're saying it's easier to use a brand-new, incompatible little-endian Linux personality, with associated new toolchains and new ports of low-level stuff etc., compared to the standard Linux PPC64 stuff...
Sounds kind of surprising even if IBM did some of the bring-up work ahead of time, but maybe they've got little-endian assumptions baked into many internal protocols/apps.
Linux has supported little-endian POWER for several years. It makes porting userspace software tremendously easier, since the major architectures in use with Linux (x86_64, ARM, MIPS) are frequently LE.
The big news here is official support for KVM on POWER. Use all your existing automation, OpenStack, etc., unchanged.
Would these be too pricey as hypervisors for cloud compute? It seems to me they'd be ideal for CPU-thread-intensive applications like databases or on-demand transcoding.
What are some use cases for a server like this for Google? I'd love to see these available in the IBM Cloud (SoftLayer) but I think they will be too pricey and reserved for enterprise.
You can also logically partition these beasts into multiple real servers. Who needs a hypervisor when you can have 96 "real" servers sharing the same hardware?
I think it's interesting that they didn't include the "traditional" mouse/keyboard/VGA ports. Not particularly surprising since this is a server motherboard, but still interesting. I think I do see an HDMI connector in the lower right next to a tall silver port (possibly a USB connector).
It's actually quite impressive that Google would open up this much of their secret sauce; a lot can be gleaned from looking at this board. You can bet that this is not exactly revision one (and you can bet as well that this is likely not their latest and greatest; no need to show off more than you have to, competitive edges are pretty thin).
When I see stuff like this it is painfully clear that, from a technological perspective, a company like DuckDuckGo has a huge moat to cross before it can begin to be a serious contender. Think about it for a second: the company you're trying to compete with is operating at such economies of scale that it can afford to have its own custom motherboards and non-standard expansion boards made.
For example their System/360 compatible mainframe CPUs are doing 5.5 GHz now. https://en.wikipedia.org/wiki/IBM_zEC12_(microprocessor)
They've masked all the chips with something black. Are they hiding chips they are using, or is this something for thermal dissipation?
I know Google doesn't have a standard rack setup, but still, it would make sense to have all the expansion ports at the end of the board... No?
Edit: Also, it's notable that both Xbox One and PS4 switched to use x64.
AmigaOnes from A-Eon (http://www.a-eon.com/) and A-Cube (http://www.acube-systems.biz/index.php?page=hardware&pid=7), both running AmigaOS 4, and optionally Linux (at least the ones from A-Eon; not sure about the ones from A-Cube).
There are some embedded machines, but not many now.
Apple G5s are still serviceable and supported by modern Linux distros, and are almost free...