This is going to sound somewhat shallow, but I hope this new installation gets some decent visual design. Look at some of the classic supercomputers, such as the fantastic Thinking Machines CM-1: http://www.mission-base.com/tamiko/cm/CM-1_500w.gif
Sure, the visual design of a supercomputing installation doesn't have any bearing on its actual utility. But having a certain presence commands respect—even if the researchers using it know it's just stagecraft, there's an undeniable attraction to working on something that isn't just a great engineering achievement, but also looks the part.
I'll put my vote in for it looking something like the computer in Ocean's Thirteen. That was quite a cool Hollywood representation of computing power.
I went to a presentation at my school about this computer. The project has two main goals. One is to be a usable supercomputer, meaning it will be much more user-friendly to actually write software for. The other is to reach 1 petaflop of sustained performance. What's interesting is that they were able to get the building done under budget and ahead of schedule thanks to the housing crisis. They also want the computer to be much more energy efficient, aiming to beat modern data centers.
Yes, they seem to be using IBM's POWER7 architecture, which is quite energy efficient compared to most other architectures, and they plan to use UPC (Unified Parallel C) or similar languages for applications running on the system; that's a good step toward better usability. But that matters less than you'd think, since a supercomputer is almost always used by a small group of highly qualified scientists.
I didn't quite understand what "waking up" means, but it seems that Blue Waters will indeed sustain more than a petaflop with adapted applications!
For anyone wondering about the "back to 1" part: the Chinese very recently (this week?) took the top spot with their 2.5-petaflop Tianhe-1A. It's made from Xeons, Fermis, and FeiTeng-1000s. Right now they're having trouble actually utilizing the machine; very little of their software takes advantage of more than a modest number of cores.
Mostly the problems with fully utilizing the system are the same ones an ordinary programmer faces when using GPUs for general-purpose computing: transferring the data to be processed from main memory into the GPU's memory. So although Tianhe-1A has a very high peak performance, its sustained performance seems comparatively low.
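A back-of-the-envelope model of that offload bottleneck (all numbers below are illustrative assumptions, not measurements of Tianhe-1A): even a large kernel speedup collapses once the host-to-device copy sits on the critical path.

```python
def effective_speedup(cpu_seconds, gpu_speedup, transfer_seconds):
    """Net speedup after paying the host<->device copy cost
    (a simple Amdahl-style model: compute shrinks, transfer doesn't)."""
    gpu_time = cpu_seconds / gpu_speedup + transfer_seconds
    return cpu_seconds / gpu_time

# A kernel that takes 10 s on the CPU, runs 50x faster on the GPU,
# but needs 4 s total to move its working set over PCIe:
print(effective_speedup(10.0, 50.0, 4.0))  # ~2.4x, nowhere near 50x
```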
What are some real world applications for this bad boy? Is this a for-profit endeavor where real companies can rent this computer out to do stuff with it?
These things are most useful for research via numerical simulation. The uses that would come closest to aiding for-profit endeavors would probably be various simulations of systems still in the design-phase. For example, simulation of a car crash, simulation of aerodynamics of an automobile or airplane, simulation of the fluid dynamics of a jet engine, simulation of hydrodynamics of a new super-tanker design, etc.
Such simulations are fairly common today, although generally super-computers of this magnitude aren't needed for them.
Talking about HPC: anyone at SC10? If you are, I'd like to meet up with other HNers at the conference. I'm stuck in a booth until 3pm (3445, Ciena, demoing a brain-imaging app over high-bandwidth links), but it'd be cool to have an HN meetup afterwards at some bar. The only issue will probably be choosing a bar from New Orleans' large selection.
It is still kind of mindblowing that the entire Amazon cloud shows up as 231st of the top 500 computing clusters (top500.org). Are all these other systems just sitting around crunching proteins? They should start hooking these bad boys up as EC2 mega-instances.
That list claimed that all of EC2 had only 7000 cores, which seems absolutely ridiculous to me. I'm more inclined to believe that the list is full of shit than to think that EC2 has so much less computing power than these other supercomputers.
Well, that's easier said than done. All the machines in the Amazon cluster use commodity architectures and traditional interconnects, while most of these supercomputers have a custom architecture (meaning the type of processors and the inter-processor connect within a node) and a custom interconnect; the cost of running ordinary programs on these machines would outweigh their usefulness for the cloud.
Does anyone know why there's a ten-fold difference between peak and sustained performance? I would have thought it'd be more like 50%, but I don't work with HPCs.
Performance is measured with LINPACK, which mostly does floating-point operations and is optimized for the architecture. When normal programs run, it's hard to keep the machine fully utilized, because a lot of time may be spent on data transfer or I/O rather than computation, among various other factors.
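To put rough numbers on that, here is the gap using the November 2010 Top500 figures for Tianhe-1A (Rpeak ≈ 4.701 PF, LINPACK Rmax ≈ 2.566 PF); the "typical app" figure below is purely illustrative, not a measurement:

```python
def efficiency(sustained_pf, peak_pf):
    """Fraction of peak performance actually sustained."""
    return sustained_pf / peak_pf

print(f"LINPACK:     {efficiency(2.566, 4.701):.0%}")  # ~55% of peak
# A memory- or I/O-bound science code might sustain far less, e.g. 0.5 PF:
print(f"typical app: {efficiency(0.5, 4.701):.0%}")    # ~11% of peak
```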
I was really excited when they started construction on the building that houses those machines. It's finished now, but I was able to attend the less-than-glamorous pre-open house* =]
It's much cooler now, but access is more restricted. I think you can still get tours if you arrange ahead of time http://www.ncsa.illinois.edu/AboutUs/tour.html . If you do, if you look really hard in the far corner of the room you might spy the little Top500 machine I worked on this fall.
  The key component is the hub/switch chip. The four POWER7 chips in a compute node are connected to a hub/switch chip, which serves as an interconnect gateway as well as a switch that routes traffic between other hub chips. The system therefore requires no external network switches or routers, providing considerable savings in switching components, cabling, and power.
Sounds a bit like the design SiCortex had for their machines (before cashflow interruption killed them).
If you believe that the trend line that has held surprisingly steady for the last 17 years ( http://top500.org/lists/2010/11/performance_development ) will keep going, then by my crude eyeballing it looks like about 2019. Granted, there are a lot of good reasons to think it won't quite hold.
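That eyeballing can be sketched with a crude extrapolation. The start point (Tianhe-1A at ~2.5 PF in late 2010) and a one-year doubling time are my assumptions, chosen to roughly match the historical Top500 curve:

```python
import math

def crossing_year(start_year, start_pf, target_pf, doubling_years=1.0):
    """Year the trend line reaches target_pf, assuming steady exponential growth."""
    doublings = math.log2(target_pf / start_pf)
    return start_year + doublings * doubling_years

# From ~2.5 PF at the end of 2010 to a sustained exaflop (1000 PF):
print(round(crossing_year(2010.9, 2.5, 1000.0), 1))  # ~2019.5
```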
Now look at the current reigning champ in 2010: http://www.blogcdn.com/www.engadget.com/media/2010/10/mod-65...
Stuff like this: http://www.google.com/images?hl=en&q=Marenostrum&gbv... is just really cool.
The inside of the PERCS nodes looks pretty cool though, with all the copper pipes and fibers.
* http://bit.ly/dkJiaZ, http://bit.ly/9mEo2e
SiCortex had some neat ideas, it was a shame to see them go.