
The future of computing: After Moore's law

93 points | martincmartin | 10 years ago | economist.com | reply

64 comments

[+] Animats|10 years ago|reply
The use of highly parallel GPUs as general-purpose compute elements is a major trend. Most graphics boards have more compute power than the main CPU they serve. The Titan and Summit supercomputers have most of their compute power in their GPUs. The limits of GPU parallelism haven't been reached yet. Machine learning can be done on GPUs, and it's the biggest compute-hungry workload in mainstream use right now. That technology isn't near a ceiling.

As machine learning/AI/neuron emulation becomes more useful, we'll see hardware specialized for that. It's not yet clear what shape that hardware will take. Look for "works fine, but is slow on GPUs" results to lead to such hardware, rather than "build it and they will come" projects like the Human Brain Project.

There's still more improvement possible in storage devices. With the 20TB SSD drive expected in 4 years, things are looking good in that area. For compute elements, cooling limits transistor density. Storage tends not to be heat dissipation limited.

[+] pixl97|10 years ago|reply
>With the 20TB SSD drive expected in 4 years,

4? Samsung is shipping (a very few select partners) 16TB SSDs now! I suspect in 4 years we'll be closer to 50TB.

[+] ghaff|10 years ago|reply
It's a pretty good piece. The main thing it does skip over is what the economic implications might be of transistor costs no longer declining. Sure, Google can add a more or less arbitrarily large number of servers, but what are the implications of the likely reality that those servers aren't improving in price/performance to the degree they once were?

At Hot Chips a couple of years back Robert Colwell, who was director of the microsystems technology office at DARPA at the time, had a very interesting presentation on where things were going. One of the things that stuck with me at the time was his contention that there are lots of ways to improve performance etc. over time but CMOS was really pretty special.

"Colwell also points out that from 1980 to 2010, clocks improved 3500X and micro architectural and other improvements contributed about another 50X performance boost. The process shrink marvel expressed by Moore’s Law (Observation) has overshadowed just about everything else."

http://bitmason.blogspot.com/2016/01/beyond-general-purpose-...

[+] jacobr1|10 years ago|reply
Even if we don't produce smaller circuits, we still might produce cheaper ones. There is still plenty of room to reduce fab costs, especially since we basically replace fabs every few years. Consider if Intel solely spent time iterating on cost. We have also made headway on power reduction, and I could see further improvements there. In aggregate, even if we don't see greater chip density, I could imagine, say, AWS compute power per cost continuing Moore's trend for some time.
[+] pedalpete|10 years ago|reply
I was recently thinking about Moore's Law and whether it is truly coming to an end. My initial reaction is that the doubling of transistor counts doesn't matter in itself; what matters is that our compute power is doubling.

This led me to think that maybe Moore's Law is looking at the wrong metric, and that there may be a more fundamental law of increasing capacity. Some evidence for this is the drop in cloud computing prices and the increases in network capacity. If chips stopped getting more powerful, in theory we might not notice, as the capability of the entire network continues to increase at the familiar rate.

However, this raises the question: did this increased capacity actually begin with Moore's Law and computer processing in general? Or is there an overarching law of progress which has always existed? To test this, I'd need to go back and look at the growth of industrialization. I suspect that as the capabilities of the factories slowed, our network speed (rail and sea, then road, then air) increased. As the network speed reached a plateau, we began to find places where we could manufacture more cheaply, thereby increasing the output per cost. Does the same growth exist in agriculture? What major industry does not fit?

There is some evidence that a law similar to Moore's law exists not only for technology. What effect does that have in our understanding of why things grow?

[edit: this was an excerpt from my YC application, answering the question about what you have discovered]

[+] mtdewcmu|10 years ago|reply
I think technological progress is more like a logistic curve, with exponential-like growth at the beginning and then a leveling off. Look at technologies that have already had time to mature. The speed of airplanes grew tremendously while jet engines and wing shapes were undergoing heavy refinement, but then it leveled off and hasn't really budged in decades.
[+] wolfram74|10 years ago|reply
While we're reaching the physical limits of classical chip design, do we have any idea what the limits are on the algorithm side of things? According to a few reports, as much speedup has come from software as from hardware.

http://www.johndcook.com/blog/2015/12/08/algorithms-vs-moore...
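
The linked post's point can be made concrete with a toy comparison (a hypothetical illustration of my own, not taken from the article): computing Fibonacci numbers naively versus with memoization shows an algorithmic speedup that no hardware improvement can match.

```python
def fib_naive(n):
    # Exponential time: the same subproblems are recomputed over and over.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_memo(n, cache={0: 0, 1: 1}):
    # Linear time: each subproblem is solved exactly once.
    # (The mutable default argument is deliberate here: it persists the cache.)
    if n not in cache:
        cache[n] = fib_memo(n - 1) + fib_memo(n - 2)
    return cache[n]
```

fib_naive(40) makes hundreds of millions of recursive calls, while fib_memo(40) makes about 40; the gap grows exponentially with n, dwarfing any constant-factor gain from a faster chip.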

[+] ars|10 years ago|reply
> do we have any ideas what the limits are on the algorithm side of things?

I've always believed that humans do not have the ability to program things smarter than themselves: we do not understand our own intelligence, so we have no way to reproduce it.

At the time, I said the only alternative I can think of is make random permutations and pick the best one, and go from there. But I said this as a ridiculous suggestion that no one would actually do.

But then Google actually did exactly that with their Go machine!

So, that's what I think: The future of computing will be based on randomness and the job of the programmer will be to guide it, but not program it directly. (Can you imagine programming a webpage this way? Or writing a book this way?)
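
That "randomness plus guidance" loop can be sketched in a few lines. A minimal hypothetical example (the names and the toy scoring function are mine, not Google's method): mutate a candidate at random and keep the mutant only when it scores better.

```python
import random

def random_search(score, candidate, steps=20000, seed=0):
    # (1+1)-style search: random mutation, greedy selection.
    rng = random.Random(seed)
    best = candidate
    for _ in range(steps):
        chars = list(best)
        i = rng.randrange(len(chars))
        chars[i] = rng.choice("abcdefghijklmnopqrstuvwxyz ")
        mutant = "".join(chars)
        if score(mutant) > score(best):  # keep only strict improvements
            best = mutant
    return best

target = "hello world"
score = lambda s: sum(a == b for a, b in zip(s, target))
print(random_search(score, "x" * len(target)))
```

The programmer's job here is exactly the "guiding" role described above: choosing the scoring function, not the path to the answer.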

[+] gizmo686|10 years ago|reply
It is worth remembering that we are only reaching the physical limits of "classical" chip design, not of chip design in general. For example, both x86 and ARM date back to the 80s. While I have no doubt that the implementations of these architectures have improved to reflect modern manufacturing capabilities, their age suggests that there is still room for architectural improvements in performance.

Beyond simple architectural improvements, we could still move beyond basic transistor-based computing. The most common example is quantum computing (which offers an asymptotic improvement in some cases), but I can imagine there being other classical devices that can compute certain functions more efficiently than a pure transistor-based solution can.
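
The "asymptotic improvement in some cases" is concrete: for unstructured search, Grover's algorithm needs on the order of √N oracle queries versus N classically. A rough back-of-the-envelope comparison (query counts only, ignoring constant factors and error correction):

```python
import math

def classical_queries(n):
    # Worst case for searching n unstructured items: a linear scan.
    return n

def grover_queries(n):
    # Grover's algorithm: on the order of sqrt(n) oracle queries.
    return math.isqrt(n)

n = 1_000_000
print(classical_queries(n), grover_queries(n))
```

For a million items that is a thousand queries instead of a million, though it is a quadratic speedup, not an exponential one.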

[+] nickpsecurity|10 years ago|reply
A ton of it did. There are many subtopics: algorithms, microarchitecture tricks, hardware accelerators, I/O schemes, improvements to RTL/transistor optimization, and so on. Each has enough papers that it can be hard to find things. Most of the best stuff gets patented and controlled by dominant companies.
[+] dev1n|10 years ago|reply
I like PG's idea [1] of trying to write a compiler that can make code utilize multiple cores as if the cores were running in series, not in parallel (think batteries).

[1]: http://paulgraham.com/ambitious.html

[+] seiji|10 years ago|reply
That's the holy grail of The Cloud: just write a description of what you want and what you want happens. DWIM programmatic casting.

I think pg originated his "sufficiently smart compiler" startup idea in his PyCon talk. You can find it online somewhere. The other takeaway from his talk was: just lie to customers about it being automated, manually farm out the parallelize-all-the-code tasks to workers/interns/turks while saying it's "automatic," then eventually figure out how to automate it yourself later so you don't need pesky humans in the loop.

[+] tim333|10 years ago|reply
While PG's idea is likely too hard to be doable, there is an interesting practical approach in things like Elixir, which makes multicore fairly easy using functional programming:

"""

Other languages skirt these issues by running on a single CPU with multiple processes to achieve concurrency; however, as Moore's Law pushes us towards increasing multicore CPUs, this approach becomes unmanageable. Elixir's immutable state, Actors, and processes produce a concurrency model that is easy to reason about and allows code to be written distributively without extra fanfare.

"""

[+] danjoc|10 years ago|reply
While "cloud" may be a big part of the future of computing, I think the relationship painted between that and the end of Moore's law is tenuous at best.

I believe a more plausible link exists between the end of Moore's law and the rise of open hardware as explained in this article:

http://www.eetimes.com/document.asp?doc_id=1321796

TL;DR

If eight year old hardware is almost as fast as today's hardware, there is ample time to reverse engineer competitive open hardware.

[+] bcrack|10 years ago|reply
The article argues a very interesting point, and there might definitely be an opportunity for more competitive open hardware. At the same time, it feels kind of sad that it would take a technical constraint, rather than a change in culture, for this to happen.
[+] kumarski|10 years ago|reply
Limitless exponential growth doesn't happen in the physical world, it happens on S-curves.

Black phosphorus anyone?

[+] graycat|10 years ago|reply
Research by K. Ebcioglu on very long instruction word (VLIW) architectures shows that 24-way VLIW can get a 9:1 speedup on ordinary code.
[+] kevinnk|10 years ago|reply
Compared to what? What does ordinary code mean?

VLIW has been around for a while (Itanium is probably the most famous "general purpose" example) and has failed to gain traction outside of GPUs and DSPs (i.e., not "ordinary code").

[+] sounds|10 years ago|reply
The article is clickbait. Moore's law may slow down, but its end will not be predicted by the Economist.

For example, a quote from the article:

"Moore’s law was never a physical law, but a self-fulfilling prophecy—a triumph of central planning"

The physics and triumphs of engineering were all about physical law; "The end of Moore's law" was always just around the corner because of physics.

No central planning led Intel to invest their billions in R&D.

Central planning can neither force nor halt additional refinements in transistor density or alternate ways to compute.

[+] Retric|10 years ago|reply
Moore’s law has been slowing down for a long time. In 1965 he wrote a paper predicting a doubling every year. In 1975 he revised it to a doubling roughly every 2 years, with 18 months, as suggested by someone else, becoming the accepted target for quite a while.

Intel failed to keep up with the every-two-years pace back in 2012.

Intel's CEO announced that "our cadence today is closer to two and a half years than two." This is scheduled to hold through the 10 nm node in late 2017.
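
The gap between those cadences compounds quickly. A quick back-of-the-envelope calculation of the transistor-count multiplier over a decade under each doubling period:

```python
def growth_factor(years, doubling_period):
    # Transistor-count multiplier after `years` at the given doubling cadence.
    return 2 ** (years / doubling_period)

for cadence in (1.0, 2.0, 2.5):
    print(f"{cadence}-year doubling over 10 years: {growth_factor(10, cadence):.0f}x")
```

A one-year cadence gives 1024x in a decade, two years gives 32x, and two and a half years only 16x.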