
10 petaflop supercomputer in the making, will take US back to 1 on top500

29 points | eerpini | 15 years ago | ncsa.illinois.edu | reply

42 comments

[+] mortenjorck|15 years ago|reply
This is going to sound somewhat shallow, but I hope this new installation gets some decent visual design. Look at some of the classic supercomputers, such as the fantastic Thinking Machines CM-1: http://www.mission-base.com/tamiko/cm/CM-1_500w.gif

Now look at the current reigning champ in 2010: http://www.blogcdn.com/www.engadget.com/media/2010/10/mod-65...

Sure, the visual design of a supercomputing installation doesn't have any bearing on its actual utility. But having a certain presence commands respect—even if the researchers using it know it's just stagecraft, there's an undeniable attraction to working on something that isn't just a great engineering achievement, but also looks the part.

[+] wmf|15 years ago|reply
I don't think IBM does blinkenlights or Cray/SGI-style colors. It's all about the black monoliths.

The insides of the PERCS nodes look pretty cool though, with all the copper pipes and fibers.

[+] brc|15 years ago|reply
I'll put my vote in for it looking something like the computer in Ocean's Thirteen. That looked quite cool as a Hollywood representation of computing power.
[+] zitterbewegung|15 years ago|reply
I went to a presentation at my school about this computer. There are two main goals for the project. One is to be a usable supercomputer, meaning it will be much more user-friendly to actually write software for. The other is reaching 1 petaflop of sustained performance. What's interesting is that they were able to get the building done under budget and ahead of schedule due to the housing crisis. They also want to make the computer much more energy efficient, and they aim to be more efficient than modern datacenters.
[+] eerpini|15 years ago|reply
Yes, they seem to be using IBM's POWER7 architecture, which is quite energy efficient compared to most other architectures, and they are also planning to support UPC (Unified Parallel C) or similar languages for applications running on the system, which is a good step towards better usability. But that matters less when a supercomputer is mostly used by a small group of highly qualified scientists.
[+] coffeenut|15 years ago|reply
You know, they start waking up at around 1 Petaflop.
[+] eerpini|15 years ago|reply
I didn't quite understand what "waking up" means, but it seems that Blue Waters will indeed have a sustained performance of more than a petaflop with adapted applications!
[+] hartror|15 years ago|reply
Blue Waters sounds exactly like a name from a sci-fi novel too . . .
[+] revertts|15 years ago|reply
For anyone wondering about the "back to 1" part: the Chinese very recently (this week?) took the spot with their 2.5 petaflop Tianhe-1A. It's made from Xeons, Fermis, and Feiteng-100s. Right now they're having trouble actually utilizing the machine; very little of their software takes advantage of more than a modest number of cores.
[+] eerpini|15 years ago|reply
Mostly the problems with utilizing the system fully are the same ones an ordinary programmer faces when trying to use GPUs for general-purpose programming: transferring the data to be processed from main memory into the GPU's memory. Though Tianhe-1A has a very high peak performance, its sustained performance seems comparatively low!
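The transfer bottleneck described above is easy to see with a back-of-envelope model. The peak and bandwidth figures below are illustrative assumptions in the ballpark of a 2010 Fermi-class GPU on PCIe 2.0, not measurements of Tianhe-1A:

```python
# Back-of-envelope model of the host-to-GPU transfer bottleneck.
# All figures are illustrative assumptions, not vendor measurements.

def effective_gflops(peak_gflops, data_gb, pcie_gb_per_s, flops_per_byte):
    """Sustained throughput once the PCIe copy is counted against compute time."""
    total_flops = data_gb * 1e9 * flops_per_byte
    compute_s = total_flops / (peak_gflops * 1e9)
    transfer_s = data_gb / pcie_gb_per_s
    return total_flops / ((compute_s + transfer_s) * 1e9)

# Assumed: ~500 GFLOP/s peak, ~6 GB/s over PCIe 2.0 x16, 1 GB of data.
# With low arithmetic intensity (few flops per byte), the copy dominates
# and the effective rate is a tiny fraction of peak.
for flops_per_byte in (1, 10, 100):
    print(flops_per_byte, round(effective_gflops(500, 1.0, 6.0, flops_per_byte), 1))
```

At one flop per byte the GPU delivers only a few GFLOP/s effective; the copy time swamps the compute time unless the kernel reuses each byte many times.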
[+] charlesju|15 years ago|reply
What are some real world applications for this bad boy? Is this a for-profit endeavor where real companies can rent this computer out to do stuff with it?
[+] InclinedPlane|15 years ago|reply
These things are most useful for research via numerical simulation. The uses that would come closest to aiding for-profit endeavors would probably be various simulations of systems still in the design-phase. For example, simulation of a car crash, simulation of aerodynamics of an automobile or airplane, simulation of the fluid dynamics of a jet engine, simulation of hydrodynamics of a new super-tanker design, etc.

Such simulations are fairly common today, although generally super-computers of this magnitude aren't needed for them.

[+] nkassis|15 years ago|reply
Talking about HPC, anyone at SC10? If you are, I'd like to meet up with other HNers at the conference. I'm stuck in a booth until 3pm (3445, Ciena, demoing a brain-imaging app over high-bandwidth links) but it'd be cool to have an HN meetup afterwards in some bar. The only issue will probably be choosing a bar from New Orleans' large selection.
[+] dstein|15 years ago|reply
It is still kind of mind-blowing that the entire Amazon cloud shows up as 231st of the top 500 computing clusters (top500.org). Are all these other systems just sitting around crunching proteins? They should start hooking these bad boys up as EC2 mega-instances.
[+] nostrademons|15 years ago|reply
That list claimed that all of EC2 had only 7000 cores, which seems absolutely ridiculous to me. I'm more inclined to believe that the list is full of shit than to think that EC2 has so much less computing power than these other supercomputers.
[+] eerpini|15 years ago|reply
Well, that is easier said than done. All the machines in the Amazon cluster are commodity machines with conventional interconnects, while most of these supercomputers have a custom architecture (here I mean just the type of processors and the inter-processor connect on the same node) and interconnect. The cost of running ordinary programs on these machines would outweigh their usefulness for the cloud.
[+] Jabbles|15 years ago|reply
Does anyone know why there's a ten-fold difference between peak and sustained performance? I would have thought it'd be more like 50%, but I don't work with HPCs.
[+] eerpini|15 years ago|reply
Performance is measured with LINPACK, which mostly does floating-point operations and is optimized for the architecture. When normal programs are run it is tough to reach full utilization, because a lot of time may be spent on data transfer or I/O rather than computation; various other factors play a role too.
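As a worked example of the gap: efficiency is simply sustained divided by peak. The Tianhe-1A numbers below are the approximate public Top500 figures, and the Blue Waters numbers are the goals mentioned elsewhere in this thread:

```python
# Efficiency = sustained / peak. Figures below are approximate.

def efficiency(sustained_pflops, peak_pflops):
    return sustained_pflops / peak_pflops

# Tianhe-1A: ~2.57 PFLOP/s LINPACK against ~4.70 PFLOP/s peak
print(round(efficiency(2.57, 4.70), 2))  # ~0.55 even on the friendly benchmark

# Blue Waters goal: ~1 PFLOP/s sustained on real applications
# out of ~10 PFLOP/s peak -- the ten-fold gap asked about,
# because real codes stall on memory, I/O, and communication.
print(round(efficiency(1.0, 10.0), 2))
```

So the ten-fold figure compares peak against sustained performance on real applications, not against LINPACK, which typically lands much closer to peak.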
[+] bbgm|15 years ago|reply
Also, with GPUs, because of the additional compute power on the server, the network quickly becomes a bottleneck.
[+] temugen|15 years ago|reply
The buildings on our campus keep their lights off for the majority of the day, probably to make room on the grid for this machine's energy usage.
[+] jmtame|15 years ago|reply
I was really excited when they started construction on the building that houses those machines. It's finished now, but I was able to attend the less-than-glamorous pre-open house* =]

* http://bit.ly/dkJiaZ, http://bit.ly/9mEo2e

[+] sparky|15 years ago|reply
It's much cooler now, but access is more restricted. I think you can still get tours if you arrange ahead of time http://www.ncsa.illinois.edu/AboutUs/tour.html . If you do, if you look really hard in the far corner of the room you might spy the little Top500 machine I worked on this fall.
[+] pjscott|15 years ago|reply
The networking hardware can read and write directly to L3 cache. That's just cool.
[+] jacques_chester|15 years ago|reply

  The key component is the hub/switch chip.
  The four POWER7 chips in a compute node are
  connected to a hub/switch chip, which serves
  as an interconnect gateway as well as a
  switch that routes traffic between other hub
  chips. The system therefore requires no
  external network switches or routers,
  providing considerable savings in switching
  components, cabling, and power.
Sounds a bit like the design SiCortex had for their machines (before cashflow interruption killed them).
[+] sparky|15 years ago|reply
Most supercomputers with multi-socket nodes have had something like this for similar reasons. The earliest reference I can find is here http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefi... , but more modern examples of the idea are the Cray SeaStar http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.150... or even SeaMicro's I/O virtualization ASIC if you squint the right way http://www.seamicro.com/?q=node/38 .

SiCortex had some neat ideas, it was a shame to see them go.

[+] Devilboy|15 years ago|reply
Anyone want to guesstimate the date of the first exaflop computer?