Note - The article is about probabilistic chips. The use of "analog" is incredibly misleading, as can be seen in the interesting but nonetheless unrelated discussions of various types of analog computers.
This type of chip will save a lot of energy and still be correct, in the sense that exact answers are not needed: non-deterministic computations involving sampling, for example. This particular chip is better for many types of Bayesian and/or generative machine learning algorithms, not the multi-layer perceptrons most posts are referring to.
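To make the "exact answers are not needed" point concrete, here is a toy sketch (not anything from the article's chip): a Monte Carlo estimate where every arithmetic result is deliberately perturbed, standing in for an imprecise, energy-saving arithmetic unit. The noise model and parameters are made up for illustration.

```python
import random

def noisy(x, sigma=0.01):
    # Model an imprecise, energy-saving arithmetic unit:
    # every result carries a little Gaussian noise.
    return x + random.gauss(0.0, sigma)

def estimate_pi(samples=100_000):
    hits = 0
    for _ in range(samples):
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        # Even with noisy squaring, the sampling-based estimate
        # stays close to the exact answer.
        if noisy(x * x) + noisy(y * y) <= 1.0:
            hits += 1
    return 4.0 * hits / samples

print(estimate_pi())  # close to 3.14159, despite the noise
```

The per-operation errors wash out in the aggregate, which is exactly why sampling-style workloads tolerate this kind of hardware.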
Interestingly, if you look at it the other way around, "probabilistic" is a good way of describing analog output. Something I read many years ago on computer history claimed that one of the reasons digital technology displaced analog was the inherent lack of "exactness" in analog.
> By definition, a computer is a machine that processes and stores data as ones and zeroes.
No, of course that's only true for digital computers. Otherwise, what would be the point of the headline? Not a great article, but interesting news.
I could imagine specialized computers for artificial neural networks (ANNs) being commercially successful in the future. Not sure if this is what the people in this DARPA project are working on.
As far as I can tell, there are a lot of breakthroughs in ANNs at the moment, especially in the realm of pattern (e.g. image) recognition:
Ng: http://www.youtube.com/watch?v=ZmNOAtZIgIk
Hinton: http://news.ycombinator.com/item?id=4403662
"The only modern, electronic ternary computer Setun was built in the late 1950s in the Soviet Union at the Moscow State University by Nikolay Brusentsov […]
IBM also reports infrequently on ternary computing topics (in its papers), but it is not actively engaged in it."
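For the curious, Setun used balanced ternary, where each digit is -1, 0, or +1. A small illustrative sketch of converting to and from that representation:

```python
def to_balanced_ternary(n):
    # Balanced ternary digits are -1, 0, +1 (often written -, 0, +),
    # most significant digit first.
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:            # write 2 as 3 - 1: emit -1, carry 1
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits[::-1]

def from_balanced_ternary(digits):
    value = 0
    for d in digits:
        value = value * 3 + d
    return value

print(to_balanced_ternary(5))  # [1, -1, -1], i.e. 9 - 3 - 1
```

One elegant property: negating a number just flips the sign of every digit, so no separate sign bit is needed.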
> I could imagine specialized computer for artificial neural networks (ANN) being commercially successful in the future.
There are a few companies already commercializing this; CogniMem[0] is one that comes to mind. However, I haven't heard of any commercial-scale "success" stories despite their first parts being built in 2008.
[0] http://www.cognimem.com/index.html
When I was younger and less of a technophile, I often wished analog computing would somehow overtake digital computing. I suppose from my naïve perspective I found it unsettling that our model of computing is discrete while the space-time we exist in is continuous.
Being much more into technology today than I was 10 years ago (but equally naïve :-), my perspective seems to have flipped. I question whether reality is really continuous or whether it is just our limited perception of it that makes it seem that way. Obviously, my growing interest in digital technology has had a profound impact on how I view the world in which we live.
In any case, I'm glad DARPA is looking more seriously into analog computing. I think there is a lot to be learned from revisiting the issue in a field that is still very young.
"By the end of that summer of 1983, Richard had completed his analysis of the behavior of the router, and much to our surprise and amusement, he presented his answer in the form of a set of partial differential equations. To a physicist this may seem natural, but to a computer designer, treating a set of boolean circuits as a continuous, differentiable system is a bit strange. Feynman's router equations were in terms of variables representing continuous quantities such as "the average number of 1 bits in a message address." I was much more accustomed to seeing analysis in terms of inductive proof and case analysis than taking the derivative of "the number of 1's" with respect to time. Our discrete analysis said we needed seven buffers per chip; Feynman's equations suggested that we only needed five. We decided to play it safe and ignore Feynman."
Funny, I came to the exact opposite conclusion:
It's unsettling that our models of physics are based on continuous functions while the space-time we exist in is obviously discrete. Considering them as continuous is a simplification to make the math easier, but not correct.
I still think it would be cool if analog computing were a thing, but I understand now more than ever why digital computing rules. It's the same as why synchronous computing rules: it is so difficult to design an asynchronous or analog system that you cannot even hope to catch up with, let alone overtake, synchronous digital computing.
This comes up every few years. Please, guys, before you conclude that probabilistic computers are the future, think about what it would be like to debug a program running on one. Speaking as someone who has implemented a probabilistic version of 'git bisect'[1], I think it would be hard.
That's not to say that the idea is completely dumb. A probabilistic GPU would be useful, although it would go against the trend of being able to use them for general computation.
[1] That is, one that looks for probabilistic bugs, not one that runs on a probabilistic CPU. See https://github.com/Ealdwulf/bbchop
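To illustrate why bisecting a probabilistic failure is harder than ordinary bisection, here is a toy Bayesian sketch of the idea (purely illustrative; it is not the actual algorithm of bbchop or any other tool): keep a posterior over which commit introduced the bug, and let a passing test only *discount* hypotheses, since the test might have flaked.

```python
import random

def probabilistic_bisect(n_commits, run_test, miss_rate=0.3, rounds=200):
    # Belief over which commit first introduced the bug; start uniform.
    post = [1.0 / n_commits] * n_commits
    for _ in range(rounds):
        # Probe the commit where the cumulative posterior crosses 1/2.
        acc, probe = 0.0, n_commits - 1
        for i, p in enumerate(post):
            acc += p
            if acc >= 0.5:
                probe = i
                break
        failed = run_test(probe)  # True means the test failed at `probe`
        for b in range(n_commits):
            bad_at_probe = b <= probe  # hypothesis b implies `probe` is bad
            if failed:
                # A failure is hard evidence: the first bad commit <= probe.
                post[b] *= 1.0 if bad_at_probe else 0.0
            else:
                # A pass might be a flaky miss, so it only discounts.
                post[b] *= miss_rate if bad_at_probe else 1.0
        z = sum(post)
        post = [p / z for p in post]
    return max(range(n_commits), key=lambda b: post[b])

# Hypothetical flaky history: bug enters at commit 13, test misses 30%.
random.seed(0)
flaky = lambda i: i >= 13 and random.random() > 0.3
print(probabilistic_bisect(50, flaky))
```

Note how many more test runs this needs than a deterministic `git bisect`, which is exactly the debugging cost being pointed out above.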
"Please, guys, before you conclude that probabilistic computers are the future, think about what it would be like debugging a program running on one."
Most of today's software is already probabilistic. Most bugs occur because the software is run on the wrong hardware. Therefore, once we have probabilistic hardware, most existing software can easily be ported, and most bugs will instantly disappear.
Check out http://pruned.blogspot.com/2012/01/gardens-as-crypto-water-c... to see a Soviet water-based computer. I don't know the details of it, but it appears to be able to solve PDEs by mapping the PDE you're solving to a hydraulic system. Since the flow of water is governed by PDEs, once you've mapped the problem to the computer, you can just make some analog measurements of water flow to get the answer you're looking for. (Is this right?)
I have to wonder, though, whether this can really be called a computer. If you do call it one, don't you also have to consider a simple integrator circuit one?
Error correction must also be somewhere between a total PITA and impossible, right?
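As a digital point of comparison for what the water computer does physically: a common way to solve a steady-state PDE like Laplace's equation is relaxation, where each grid point is repeatedly replaced by the average of its neighbors until the grid settles, much as connected water levels settle into equilibrium. A minimal sketch (grid size and boundary values are arbitrary illustrations):

```python
def solve_laplace(n=20, iters=2000):
    # Square grid with one boundary held "high" (hot / high pressure);
    # Jacobi relaxation: interior cells become the average of their
    # four neighbors until the field stops changing.
    g = [[0.0] * n for _ in range(n)]
    for j in range(n):
        g[0][j] = 1.0
    for _ in range(iters):
        new = [row[:] for row in g]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new[i][j] = 0.25 * (g[i - 1][j] + g[i + 1][j]
                                    + g[i][j - 1] + g[i][j + 1])
        g = new
    return g
```

The hydraulic machine gets the settled state "for free" from physics, while the digital version pays for it in iterations; that trade is the whole appeal of the analog approach.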
> “One of the things that’s happened in the last 10 to 15 years is that power-scaling has stopped,” he says. Moore’s law — the maxim that processing power will double every 18 months or so — continues, but battery lives just haven’t kept up.
Has everyone just given up now on the original Moore's Law about transistor count, and just decided that the law is about computing power (per the David House quote)?
Moore's law is now commonly understood as referring to the best transistor density we can achieve. But in the original paper, Moore was talking about the density of the least-cost process.
With the original meaning, Moore's law doesn't necessarily stop when density stops increasing; it can also stop because the new, finer process remains forever more expensive than the previous one.
With the old definition, some expect Moore's law to stop at 28nm. Some even say it already stopped at 40nm (which is still less expensive than 28nm, so we'll have to see).
The difference matters because of economics. New fabs are always more expensive, but up to now the resulting chips were both better and cheaper, so everybody eventually moved to the newer process, and the addressable market also increased. If the original Moore's law stops due to increasing chip cost, you can still get better performance, but it's now more expensive.
As a result, all the applications where performance is good enough and price matters more (embedded applications, ...) will stay on the now-cheaper larger processes. The new fabs will still be more expensive but will address a shrinking high-performance market. This will have consequences.
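The least-cost argument above is easy to see with numbers. These figures are entirely made up for illustration (they are not real foundry data): density doubles each node while wafer cost climbs, and cost per transistor stops falling once wafer cost grows as fast as density.

```python
# Illustrative, hypothetical numbers only.
nodes = [
    # (process, relative transistor density, relative wafer cost)
    ("65nm", 1.0, 1.0),
    ("40nm", 2.0, 1.6),
    ("28nm", 4.0, 2.9),
    ("20nm", 8.0, 7.0),  # hypothetical node where the economics break
]
for name, density, wafer_cost in nodes:
    # Cost per transistor falls node over node until wafer cost
    # nearly matches the density gain, then it rises again.
    print(f"{name}: relative cost per transistor = {wafer_cost / density:.3f}")
```

In this made-up series the 20nm node is denser but *more* expensive per transistor than 28nm, which is precisely the situation where price-sensitive applications stay on the older process.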
Probabilistic chips will be the next generation of computing; it's great to see initiatives on this front. PCMOS (I assume this is the same thing) is itself patented, so I'm not sure how much that affects progress. Original and more comprehensive article on the PCMOS technology (2009): http://phys.org/news153398964.html
Interesting. I would have to say, though, that I can't really imagine how you begin programming an analog CPU. I'm assuming it is some type of neural network, right (because of the mention that transistor states are not pure on/off)? Any help here? It sounds fascinating, but I have never read anything on the topic or worked on an analog CPU.
Also, how does analog use less energy (i.e., how can anything compute at a reasonable speed without some sort of power charge)? Obviously, my PC has a lot of power dissipation compared to any microcontroller on the market, which according to the article is the main reason they're going analog. In a standard case, I would have a multi-core chip (for less dissipation) and need to program multi-threaded. Are there any paradigms for analog? Or is this something completely new (at least since the '50s)?
I don't have a picture of general analog programming, but a concrete example is devising a strategy to hit some balls in a game of pool. Like in a neural net, this can take the form of turning a few knobs until you get the right result.
The reasons it can require less energy include error tolerance, as mentioned in the article. The foundation of digital computing is error correction, so this trade-off could be the fundamental difference between the two.
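The "turning knobs until you get the right result" style of programming can be sketched as derivative-free search: nudge one knob, keep the change only if the outcome improves. The target values and scoring function below are invented purely for illustration.

```python
import random

def tune_knobs(score, n_knobs=3, steps=500, step_size=0.1):
    # Derivative-free "knob turning": nudge one knob at random and
    # keep the change only if the score improves.
    knobs = [random.uniform(-1.0, 1.0) for _ in range(n_knobs)]
    best = score(knobs)
    for _ in range(steps):
        i = random.randrange(n_knobs)
        old = knobs[i]
        knobs[i] += random.gauss(0.0, step_size)
        trial = score(knobs)
        if trial > best:
            best = trial
        else:
            knobs[i] = old  # that turn made things worse; turn it back
    return knobs, best

# Hypothetical target settings the knobs should converge toward.
target = [0.5, -0.2, 0.8]
knobs, best = tune_knobs(lambda k: -sum((a - t) ** 2 for a, t in zip(k, target)))
```

Nothing here ever needs an exact intermediate value, only a comparison of outcomes, which is why this style of programming maps naturally onto imprecise analog hardware.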
I find myself unable to offer an opinion; let's find out.
NOTICE: maxs points out I missed a x1000 on the barrel count; the following is off by three orders of magnitude:
In 2008 the world consumed 5269 barrels of jet fuel per day. 42 gallons in a barrel, 6.8 pounds per gallon, 0.45kg/pound, kerosene has 43MJ/kg… that is 29 terajoules (or I made a mistake). [1] [2]
Spread 29TJ out over the day and that is 29TJ /24/60/60 and you get 337 megawatts (or I made a mistake, this feels low, but I'll go with the calculation). [3]
Google's data centers drew 260 megawatts in 2011. [4]
So, there you go. Google's data centers alone use three quarters as much energy as all jet airplanes combined.
[1] http://www.indexmundi.com/energy.aspx?product=jet-fuel&g...
[2] http://large.stanford.edu/courses/2010/ph240/glover2/
[3] http://en.wikipedia.org/wiki/Watt
[4] http://www.nytimes.com/2011/09/09/technology/google-details-...
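The arithmetic above can be rechecked with the x1000 correction from the notice applied (the source figure is thousands of barrels per day):

```python
# Same chain of unit conversions as the comment above, corrected.
barrels_per_day = 5269 * 1000
gallons = barrels_per_day * 42
kilograms = gallons * 6.8 * 0.45          # lb per gallon, kg per lb
joules = kilograms * 43e6                 # kerosene: 43 MJ/kg
watts = joules / (24 * 60 * 60)           # spread over one day
print(f"{watts / 1e9:.0f} GW")            # ~337 GW (vs 0.26 GW for Google)
```

With the correction, jet fuel comes out to roughly 337 gigawatts, about a thousand times Google's 260 megawatts, which is what the notice means by "off by three orders of magnitude."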
When I was an EE major, I distinctly remember designing an ALU and thinking what a convoluted cluster f* the whole thing was -- taking electron flows, pouring them over specific transistor arrangements (transistors are not binary, they are amplifiers) that take time to flow through a transistor set for each binary digit, and designing it so it had the right answer at the exact time of the next clock pulse. I wanted to just take two signals, let them superimpose, and let nature do the instant addition. Of course, the conversion between the analog signal and the clocked "digital" electron stream just wasn't possible on the process of the day. Or today. Maybe someday in all-optical computing. Or something else...
This is not what the article was talking about, but back in the 1800s, Charles Babbage designed a completely mechanical computer. There are currently plans to actually build it: http://plan28.org/
As a side note, this machine also doesn't work in binary; it works in decimal. I think this is to reduce vertical space requirements.
It's not really described as the future of computing. It's targeted at applications where a few incorrect bits are OK, like image processing on a drone. What if they could add some redundancy or parity, though, so we could achieve 100% accuracy?
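A toy sketch of the redundancy idea (an illustration, not anything from the article): run the unreliable computation three times and take a majority vote. Note it sharply reduces the error rate rather than eliminating it, and the repeated runs eat back into the energy savings.

```python
import random

def noisy_bit(bit, flip_prob=0.01):
    # A probabilistic gate: returns the wrong value with some probability.
    return bit ^ (random.random() < flip_prob)

def majority3(bit, flip_prob=0.01):
    # Triple modular redundancy: compute three times, majority vote.
    votes = sum(noisy_bit(bit, flip_prob) for _ in range(3))
    return 1 if votes >= 2 else 0

# For small flip probability p, the voted error rate falls to about 3p^2.
trials = 100_000
raw = sum(noisy_bit(1) != 1 for _ in range(trials)) / trials
voted = sum(majority3(1) != 1 for _ in range(trials)) / trials
print(raw, voted)
```

So parity/redundancy buys accuracy at the cost of roughly tripling the work, which is the trade-off the comment is asking about.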
"brand-new way of doing computing without the digital " - Analog computers have been around for 70 years+. They are based on OP-AMPS and, at that time, were made out of tubes.
"By the end of that summer of 1983, Richard had completed his analysis of the behavior of the router, and much to our surprise and amusement, he presented his answer in the form of a set of partial differential equations. To a physicist this may seem natural, but to a computer designer, treating a set of boolean circuits as a continuous, differentiable system is a bit strange. Feynman's router equations were in terms of variables representing continuous quantities such as "the average number of 1 bits in a message address." I was much more accustomed to seeing analysis in terms of inductive proof and case analysis than taking the derivative of "the number of 1's" with respect to time. Our discrete analysis said we needed seven buffers per chip; Feynman's equations suggested that we only needed five. We decided to play it safe and ignore Feynman."
http://longnow.org/essays/richard-feynman-connection-machine...