cr0sh | 8 years ago
https://en.wikipedia.org/wiki/Analog_computer
Takeaway: analog computers are limited in precision by their components and by "analog noise"; the precision of the components used determines the precision of the output. Usually no more than 3 or 4 decimal places are possible, at least with the technology of their heyday, and I would say that is still close to the case even today. Of course, one could resort to measures like cryogenic cooling, but cost becomes a factor at that point.
Something else that wasn't mentioned: the ADALINE/MADALINE memistor technology is an analog component, a hardware equivalent of the McCulloch–Pitts neuron model (perceptron).
The memistor is NOT to be confused with the memristor, which is a different technology; the memistor is a three-terminal device ("memory transistor"):
https://en.wikipedia.org/wiki/ADALINE
...whereas a memristor is a two-terminal device ("memory resistor"):
https://en.wikipedia.org/wiki/Memristor
pjc50 | 8 years ago
The high point of analogue computing for control systems may have been Concorde. It normally operated fly-by-wire through an analogue interconnection: the so-called synchro/resolver system, an AC servo control system. The flight computers (mostly but not entirely analogue) provided autothrottle and autostabilisation.
("Somewhere" in the Concorde megathread is a description of its analogue computers: http://www.pprune.org/tech-log/423988-concorde-question.html)
RachelF | 8 years ago
Other practical limitations of analog computers:
- limited dynamic range, perhaps 30 dB (a factor of 1000 in power)
- it is easy to saturate a signal (there is no overflow bit)
- oscillations are easy to induce, and hard to detect, especially in a circuit in the middle of a calculation chain
- noise gets amplified across the system
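Dynamic range in dB and "decimal places of precision" are two views of the same number, though the conversion depends on whether the dB figure is a power ratio or a voltage ratio. A small sketch (the function name is mine, for illustration):

```python
import math

def effective_digits(db: float, power: bool = False) -> float:
    """Convert a dynamic-range figure in dB to equivalent decimal digits.

    Power ratios use 10*log10 by convention and voltage ratios 20*log10,
    so the same dB figure implies different precision depending on which
    quantity is meant.
    """
    ratio = 10 ** (db / 10) if power else 10 ** (db / 20)
    return math.log10(ratio)

# 30 dB as a power ratio is a factor of 1000, i.e. about 3 digits;
# as a voltage ratio it is only a factor of ~31.6, i.e. about 1.5 digits.
print(effective_digits(30, power=True))   # 3.0
print(effective_digits(30, power=False))  # 1.5
```

This is why the "3 or 4 decimal places" figure upthread and a ~30 dB dynamic range are roughly consistent only if the dB number is read as a power ratio.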
robotresearcher | 8 years ago
> Usually no more than 3 or 4 decimal places are possible
By that do you mean accurate to 1 part in 1,000 (3 dp), 1 part in 10,000 (4 dp), or something else? Since the scale of an analog representation is arbitrary, I'm not sure what decimal places mean here.
wfunction | 8 years ago
I seem to recall there are clever but well-known techniques in analog design for getting higher accuracy than your actual components provide, through negative feedback IIRC. So why is it correct to say that component precision is what limits output precision? Wouldn't such a technique make a difference? (And yes, I know accuracy != precision, but I'm using them loosely; the distinction doesn't seem relevant here.)
ChuckMcM | 8 years ago
I have fond memories of building an analog computer, from a Popular Electronics project, that simulated a lunar lander mission. At reset you had fuel, altitude, and horizontal and vertical velocity. Your inputs were an angle and a thrust knob (two potentiometers), and a comparator latched when altitude reached 0, based on whether your velocities were less than 1 m/s. It was tremendous fun to play, but there were no graphics, just some mA meters to tell you your status.
JKCalhoun | 8 years ago
http://www.americanradiohistory.com/Archive-Elementary-Elect...
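A lander simulator like that Popular Electronics project maps naturally onto chained integrators: acceleration integrates into velocity, velocity into altitude. A minimal digital sketch of the same dynamics (all constants are made up for illustration, not taken from the original project):

```python
import math

# Minimal lunar-lander dynamics: thrust and angle in, velocities and
# altitude out, integrated the way chained analog integrators would be.
G = 1.62   # lunar gravity, m/s^2
DT = 0.1   # integration step, s

def step(state, angle_rad, thrust):
    """Advance the lander one time step. state = (alt, vx, vy, fuel)."""
    alt, vx, vy, fuel = state
    a = thrust if fuel > 0 else 0.0
    ax = a * math.sin(angle_rad)
    ay = a * math.cos(angle_rad) - G
    vx += ax * DT                 # first integrator: acceleration -> velocity
    vy += ay * DT
    alt += vy * DT                # second integrator: velocity -> altitude
    fuel = max(0.0, fuel - a * DT)
    return (alt, vx, vy, fuel)

def landed_safely(state):
    """The 'comparator' from the project: latch at altitude 0 if both
    velocity components are under 1 m/s."""
    alt, vx, vy, _ = state
    return alt <= 0 and abs(vx) < 1.0 and abs(vy) < 1.0

# A short free-fall hop from 0.2 m touches down gently.
s = (0.2, 0.0, 0.0, 10.0)
while s[0] > 0:
    s = step(s, 0.0, 0.0)
print(landed_safely(s))  # True
```

On the analog original, each `+=` line would be an op-amp integrator and the safe-landing check a comparator driving a latch.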
mindcrime | 8 years ago
If any of you are into analog computing, just as an FYI, there's a subreddit dedicated to the topic (full disclosure, I started this particular one): analogcomputing.reddit.com
There's not a ton of content there yet, but please feel free to add anything you come across. I'm a fan of the idea and suspect, like the author of this piece, that there is "something there".
peterburkimsher | 8 years ago
He developed Pertecs, which is a rudimentary analog computer paradigm written in C.
http://tcode.auckland.ac.nz/~mark/Signal%20Processing%3A%20P...
He got me to write some code to compile schematic diagrams into the XML config files. He's also done something similar now to compile from LaTeX, and he ported the controller from a Mac Mini to a Raspberry Pi.
Pertecs is being used to control an artificial lung, which is used for research into obstructive sleep apnoea (snoring).
The problem is, he's retiring, and I'm probably the only other person in the world who knows how to use his program. I would move back there, but the immigration policy got more difficult (minimum salary of $75k), so I'm seriously wondering whether I should stay in Taiwan longer and try to naturalise here.
sleet | 8 years ago
I believe the $75k minimum salary is for non-skilled jobs (the point being: if you're in a non-skilled but 'high'-paying role, we're still interested in you). The minimum salary for skilled jobs is $50k [1][2].
That, and I doubt you would find many people working at F&P Healthcare who earn less than $75k.
[1] https://www.immigration.govt.nz/about-us/media-centre/news-n...
[2] https://www.immigration.govt.nz/employ-migrants/hire-a-candi...
ramgorur | 8 years ago
It's strange that the article does not mention hydraulic macroeconomics or the MONIAC; they were once used to test theories in economics.
https://en.wikipedia.org/wiki/MONIAC
https://en.wikipedia.org/wiki/Hydraulic_macroeconomics
brian-armstrong | 8 years ago
A related topic: running digital logic at subthreshold voltages, where transitions usually (but not always) happen correctly. It can be useful if you're attempting to measure something probabilistically anyway.
https://pdfs.semanticscholar.org/7244/1c8377b1dfde1909d21463...
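A toy illustration of that idea: if a gate produces the correct output only with some probability, repeating the evaluation and taking a majority vote recovers reliability. This is a sketch of the statistical principle, not a model of any real subthreshold circuit (the functions and probabilities are made up):

```python
import random

def flaky_and(a: int, b: int, p_correct: float = 0.9) -> int:
    """An AND gate that is correct only with probability p_correct,
    loosely analogous to logic run at subthreshold voltage."""
    correct = a & b
    return correct if random.random() < p_correct else 1 - correct

def voted_and(a: int, b: int, trials: int = 101) -> int:
    """Recover a reliable answer by majority-voting many flaky evaluations."""
    ones = sum(flaky_and(a, b) for _ in range(trials))
    return 1 if ones > trials // 2 else 0

random.seed(0)  # seeded so the demo is repeatable
print(voted_and(1, 1))  # almost certainly 1
print(voted_and(1, 0))  # almost certainly 0
```

With 101 trials at 90% per-gate reliability, the voted result is wrong with vanishingly small probability, which is the sense in which "usually correct" transitions can still be useful.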
11thEarlOfMar | 8 years ago
[0] http://i.telegraph.co.uk/multimedia/archive/03593/emerson6_3...
kitotik | 8 years ago
And don’t forget, analogue random is the shit. In your face entropy!
11thEarlOfMar | 8 years ago
http://cdm.link/app/uploads/2016/03/dangeroussynth.jpg
hcarvalhoalves | 8 years ago
The author certainly has a point, but I wonder if analog computers (in the sense of DA/AD converters plus ICs that can be plugged into general-purpose systems) suffer from poor economies of scale? Maybe the energy and depreciation costs of general-purpose computers are still lower than ordering a minimum of 10–100k units of some custom analog computer with good enough quality control.
Apparently something similar to an FPGA exists for analog signals [1]; I wonder how popular/practical it is.
[1] https://books.google.com.br/books?id=qjnnBwAAQBAJ&pg=PA93&lp...
anfractuosity | 8 years ago
I've got some FPAA chips, which I assume is what you're referring to; mine are Anadigm ones, and I need to get round to using them. The downside is that the ones I've got simply use switched capacitors to build filters, so I guess there will be some form of discretization in the temporal domain.
adamnemecek | 8 years ago
I'm hoping that analog computing will come back in the form of photonic analog computing. This would be more powerful than quantum digital computing (you know, the thing people are wasting time on).
Fun fact: with analog computing, one could imagine achieving real computation (https://en.wikipedia.org/wiki/Real_computation), which is above and beyond Turing completeness.
Note the fun sentence: "If real computation were physically realizable, one could use it to solve NP-complete problems, and even #P-complete problems, in polynomial time."
We are on a wrong evolutionary branch of computing. Bits are lame-o-rama, whereas differentiable signals are pure unadulterated flavortown.
mgraczyk | 8 years ago
What you're describing is generally accepted to be physically unrealizable. In fact, the sentence that follows the one you quoted cites two well-known physical limitations that rule out your "computational class above and beyond Turing".
Whether there exist physically realizable computations that are not computable by a Turing machine is an open question, but most physicists and computational complexity theorists seem to believe no such class exists.
https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis
dragontamer | 8 years ago
> Luckily, with today's electronic technology it is possible to build integrated circuits containing not only the basic computing elements but also a crossbar that can be programmed from an attached digital computer, thus eliminating the rat's nest of wires altogether.
This is the most important paragraph in the entire article.
Analog computers can be made very small. It'd take an ASIC, but the 741 op-amp was fewer than 100 transistors. A more modern op-amp might be under 1000 transistors, although noise issues would abound.
Bernd Ulmann has developed a methodology that performs non-trivial computations (such as http://analogparadigm.com/downloads/alpaca_4.pdf), but it's still hand-programmed by connecting wires together.
If it were digitally programmed through a crossbar switch built from CMOS analog gates instead, it'd be controllable by a real computer.
I think what Ulmann is arguing for here is to use analog computers as a "differential equation accelerator": perform most computations in the digital world, but if you need to simulate a differential equation, simulate it on an analog circuit instead.
And a large number of interesting mathematical problems are described by differential equations.
-----------------
The main issues, as far as I can see, would be the multiplier, logarithm, and exponential functions. IIRC, these are built around bipolar transistors, and modern manufacturing doesn't really mix BJTs with MOSFETs.
I mean, IGBTs exist, but modern computers are basically MOSFETs all the way down. MOSFETs could still make a lot of things, though: digital potentiometers / variable resistors, the crossbar switch, capacitors, resistors, and op-amps.
And all of those can implement addition, subtraction, differentiation, and integration: more than enough to build the "differential equation accelerator" the author proposes.
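To make the "differential equation accelerator" idea concrete, here is a digital stand-in for the classic analog patch for a damped oscillator, x'' = -kx - cv: a summing junction feeding two chained integrators whose outputs feed back, exactly as op-amp integrators would be wired on a patch panel. The coefficients are illustrative, not from any real machine:

```python
# Digital stand-in for an analog patch solving x'' = -K*x - C*v.
K = 1.0    # "spring" coefficient (a potentiometer setting on the real machine)
C = 0.2    # damping coefficient (another potentiometer)
DT = 1e-3  # time step; an analog machine integrates continuously

def simulate(x0, v0, t_end):
    """Integrate the oscillator from (x0, v0) to time t_end."""
    x, v = x0, v0
    for _ in range(int(t_end / DT)):
        a = -K * x - C * v   # summing junction
        v += a * DT          # first integrator: acceleration -> velocity
        x += v * DT          # second integrator: velocity -> position
    return x, v

# A damped oscillator started at x=1 decays toward rest.
x, v = simulate(1.0, 0.0, 50.0)
print(abs(x) < 0.05, abs(v) < 0.05)
```

On the analog side this whole loop is one wiring diagram and the answer appears continuously in real time; the digital version has to grind through the time steps one by one, which is the efficiency gap the accelerator argument rests on.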
danmaz74 | 8 years ago
This is one of the worst "brain" analogies I've read lately.
supermdguy | 8 years ago
> What causes this difference? First of all, the brain is a specialized computer, so to speak, while systems like TaihuLight and TSUBAME3.0 are much more general-purpose machines, capable of tackling a wide variety of problems.
That's why digital computers have so far been so much more successful than analog computers. While analog computers may have the potential to be "better", they'd require massive global hardware and software changes to become the next big thing. People are lazy, and there's no way they'd port everything over to a completely different paradigm just for better energy efficiency and speed.
janekm | 8 years ago
Analog computing makes a lot of sense in the context of genetic algorithms and "deep learning", and I wouldn't be surprised if there are already ASICs under design using those principles.
One big challenge is that the design kits from the foundries aren't likely to include all the analog computing cells that would be needed (though perhaps, for example, a current mirror feeding a MIM capacitor could make for an integrator?).
smitherfield | 8 years ago
> The human brain is a great example – its processing power is estimated at about 38 petaflops, about two-fifths of that of TaihuLight.
Huh? So we now have computers more powerful than the human brain? I thought that was still some decades off. And how would one even measure such a thing? In an apples-to-apples comparison, a "stupid human trick" floating-point calculation savant might manage 1 flop/s.
oldandtired | 8 years ago
In raw speed, yes; in terms of continuous parallel computing power at low energy levels, technology hasn't come close. In terms of sensory input and processing, not even close.
I find it amusing that there is so much hype about computer systems beating humans in very specialised areas, such as go and chess.
But the missing piece is that the human is doing this while continuously processing all the sensory input occurring to them, dealing with far more than the computer system is. The computer system is dealing with one and only one subject matter, at a speed many orders of magnitude faster, and is only just getting ahead.
nitwit005 | 8 years ago
If you see an estimate of human-brain computational power, it's probably best to assume it's nonsense. Estimates vary wildly, and as computers have gotten faster the estimates seem to have risen, which suggests ego is involved.
I recall checking a few years ago, and it would have taken about 40,000 high-end GPUs to match the common estimates of the brain's computing power. It's no doubt much lower now.
mikeash | 8 years ago
The problem is that it takes far more than raw computing power to make AI. We have sufficient computing power, but we don't know how to use it, not even close.
As for how it's measured: it's basically a matter of guesstimating the computing power of a single neuron from its inputs, its outputs, and the computation it appears to perform to map between them, then multiplying by the number of neurons in the brain. This is horribly imprecise, so estimates vary a lot (quoting "38" gives the estimate far too much credit; it should probably say 10 or 100 instead), but they're probably in the very rough ballpark.
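That estimation recipe is easy to reproduce. With commonly quoted (and very rough) figures, treating each synaptic event as one operation, the arithmetic runs:

```python
# Back-of-envelope brain "FLOPS" estimate in the style described above.
# All figures are rough, commonly quoted values, not measurements.
neurons = 8.6e10           # neurons in the human brain
synapses_per_neuron = 1e4  # average synapses per neuron
firing_rate_hz = 1.0       # average firing rate (estimates range ~0.1-10 Hz)
ops_per_synaptic_event = 1

ops_per_second = (neurons * synapses_per_neuron
                  * firing_rate_hz * ops_per_synaptic_event)
print(f"{ops_per_second:.1e} ops/s")  # 8.6e+14 ops/s, i.e. ~1 petaflop
```

Varying the firing-rate assumption alone swings the answer two orders of magnitude, which is exactly why published estimates disagree so widely.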
jayd16 | 8 years ago
This is the first I've heard of analog computers, but it sounds super interesting. So basically the structure of the system defines the algorithm, and is therefore very specialized and efficient?