
Non-Universality in Computation: The Myth of the Universal Computer

46 points | ewakened | 17 years ago | research.cs.queensu.ca

58 comments

[+] cperciva|17 years ago|reply
Alan Turing was wrong.

Correction: Alan Turing was right, but if we redefine "universal computer" in sufficiently weird ways, Turing's results no longer apply.

In particular, while Turing was concerned with machines computing functions, Akl is looking at machines interacting with a changing environment.

[+] jgrahamc|17 years ago|reply
Thank you for that reply. I started reading his page and it was getting stupid. He seems almost deliberately unwilling to understand the definition of a Universal Computer (by which he must mean a Universal Turing Machine).

It would surprise me if his example functions weren't actually computable. In his paper on the subject he comes up with a scenario in which a machine needs to compute a function of n variables that vary with time. He then imposes the restriction that reading each variable takes one unit of time, so that once you've read the first one the value of the second has already changed, and so you can't do the computation.

Then he goes on to say that if he supposes a computer capable of doing n computations at once he can get round the restriction he's imposed because he's able to read all the variables at the same time.

How is this supposed to make me think that Turing was wrong?

Having worked on software/hardware interactions quite a bit you actually see this sort of thing happen. Lots of data comes in on different ports and you need to read it all at the same time. This is solved by latching the data into a buffer which the CPU can read at its leisure.

But the fact remains that the function you are computing is actually computable (he gives the example of summing the data). He's just produced an artificial restriction on the computer being incapable of reading the data fast enough. His n-processors is a complicated way of solving what we solve with buffers.
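The latch-and-buffer trick can be sketched in a few lines (hypothetical names; Python standing in for the hardware latch): the values keep drifting, but the computation runs over one atomic snapshot.

```python
import threading

# Hypothetical illustration: n "ports" whose values drift over time.
# A latch copies all of them in one consistent snapshot; the CPU then
# sums the snapshot at its leisure, even though the live values keep
# changing underneath.

class Ports:
    def __init__(self, n):
        self._lock = threading.Lock()
        self._values = [0] * n

    def drift(self):
        # Simulate the environment changing every variable at once.
        with self._lock:
            self._values = [v + 1 for v in self._values]

    def latch(self):
        # The buffer: one consistent snapshot of all n variables.
        with self._lock:
            return list(self._values)

ports = Ports(5)
ports.drift()
snapshot = ports.latch()   # frozen copy of all 5 variables
ports.drift()              # live values change again...
total = sum(snapshot)      # ...but the computation uses the snapshot
```

The lock models the hardware guarantee that the latch captures all inputs in the same instant, which is exactly what defeats the "you can only read one variable per time unit" restriction.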

And who said there's a lower bound on the time per instruction of a Turing Machine? Sounds like this guy just needs a faster machine. After all, a Turing Machine is an abstract idea, so let's just speed up its operation by a factor of n and his function becomes computable.

[+] davidmathers|17 years ago|reply
How is this not complete gibberish? A function is a map from a value (set) to a value (set). The identity/definition of a function is the two values it relates. Values don't change, by definition. All he appears to be saying is that a single computer can't compute multiple functions simultaneously, which would be the non-statement of the century.

Definition of function is basic set theory / category theory: http://planetmath.org/encyclopedia/Image2.html

EDIT: let's say the function computes the average age of all living people. So at time t=1 the domain is people alive at time 1 and at time t=2 the domain is people alive at time 2. He's saying that the input to the function is a variable called "people" which keeps changing, but that's not how it works. The input to a function is always a fixed value. The functions defined at times 1 and 2 are different functions because they're defined on different domains.
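The time-indexed point can be made concrete with made-up numbers: each snapshot is a fixed value, so each application is an ordinary computable function, not a function whose input mutates underneath it.

```python
# Hypothetical populations at two instants. Each snapshot is a fixed
# value; "average age at time t" is a plain function of that snapshot.

def average_age(people):
    """Ordinary computable function: fixed input -> fixed output."""
    return sum(people) / len(people)

ages_at_t1 = [30, 40, 50]        # people alive at t=1
ages_at_t2 = [30, 40, 50, 20]    # someone was born by t=2

# Two applications to two different fixed values (equivalently, two
# different functions defined on two different domains).
avg1 = average_age(ages_at_t1)   # 40.0
avg2 = average_age(ages_at_t2)   # 35.0
```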

[+] frig|17 years ago|reply
But even more to the point: the objection is that no particular 'computer' (like some physical device or at least a performance spec) could be a universal computer in his terms, b/c past a point it can't keep up.

The same idea rules out the possibility of a universal computer (barring magic) except in the heads of theoreticians: any particular real-world computer is just a finite state machine, and a Turing machine with, e.g., 32 GB of space on the tape isn't actually universal either (or even a Turing machine).

[+] ewakened|17 years ago|reply
Yes that's a better way of putting it :) Thanks.
[+] frisco|17 years ago|reply
He's saying though that those two can't be separated -- the functions the machines are computing are themselves changing with time.
[+] almost|17 years ago|reply
This appears to me to be a little (or in fact a lot) crackpotish.

A universal machine capable of n operations per time unit can simulate a universal machine capable of n+m operations per time unit. It's just a bit slower.

His argument seems to be that this will make it so slow that it won't be able to keep up. Which is just stupid.

The Church-Turing thesis says that a universal machine can simulate any other computing machine, given unbounded memory. It doesn't say anything about how fast the simulation will be :p
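The slowdown point can be illustrated with a toy model (the names and the 1-op/m-op framing are invented for illustration): a 1-op-per-step machine reproduces an m-op-per-step machine exactly, just m times slower.

```python
# Toy model: a "machine" is a step function applied to state once per op.
# A fast machine does m ops per time unit; a slow (1-op) machine
# simulates it by spending m time units per simulated time unit.

def fast_machine(state, op, m, time_units):
    for _ in range(time_units):
        for _ in range(m):            # m ops per time unit
            state = op(state)
    return state

def slow_simulation(state, op, m, time_units):
    # Same ops in the same order, spread over m * time_units steps.
    for _ in range(time_units * m):
        state = op(state)
    return state

inc = lambda x: x + 1
# Identical final state: only the elapsed "time" differs.
assert fast_machine(0, inc, m=4, time_units=3) == \
       slow_simulation(0, inc, m=4, time_units=3)
```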

Maybe I'm misunderstanding this guy and he's not saying something so completely and utterly stupid. Anyone who wants to correct me please do...

[+] jmatt|17 years ago|reply
He seems to be misunderstood. Which is no surprise since he's into unconventional computation - whatever that is[1]. He's well published[2] and has chaired at least one conference on unconventional computing[3]. So that's more than I can say about my career in theoretical computer science. I suspect having unconventional thoughts alone will create some dissonance in related communities. Similar to what Doron Zeilberger gets for his beliefs in Ultrafinitism.

I agree that Akl is talking about a very specific circumstance of UTMs. How many thousands of mathematicians and computer scientists have read that paper in the last 60 years and verified it? If there were ever going to be any true challenge to UTMs, it had to be obscure and weird like this. That doesn't mean that I believe he's right, but I don't think he's a crackpot, just unconventional.

[1] http://en.wikipedia.org/wiki/Unconventional_computing

[2] http://research.cs.queensu.ca/Parallel//publications.html#Pu...

[3] http://en.wikipedia.org/wiki/Selim_Akl#Conferences

[+] frisco|17 years ago|reply
His argument is deeper than that; he's basically saying that there's no such thing as a universal machine if the function being simulated meets any of the 5 criteria he lists, since the function can't be simulated (that is, you can't compute it in any fewer steps than running the whole thing through, and it could require up to infinitely many variables). I don't know if this makes it non-crackpot-ish, though; I don't have the background in formal theory to tell; but it doesn't seem to be an "it'll take arbitrarily long" argument.
[+] shalmanese|17 years ago|reply
Turing machines are constructed in a closed, static universe. This guy says that if you need to deal with a source of changing data, Turing's results don't apply. This is both obvious and important to state, though it needn't take so much verbiage.
[+] jerf|17 years ago|reply
Expanding on that, it should also be pointed out that this is well-known as being among the list of assumptions that prevent Turing Machines from working in the real world. We speak of "steps", but they do not have any actual relationship to "time". The tape has no obvious relationship to "space", either.

So, right off in the first sentence "I have recently shown that the concept of a Universal Computer cannot be realized.", I thought "well, it's hardly like that's a challenge..."

There's actually some interesting theory here, I think, but it's only obscured by being wrapped in rhetoric about Church-Turing being wrong and such. All mathematical theorems include a set of assumptions, and it's not news that if you change the truth of the assumptions the theorems may stop holding.

[+] trjordan|17 years ago|reply
Look at it this way: imagine you have a server that's totally maxed out (CPU-bound), 24 hours a day. If you add just one more connection to its load, it won't be able to handle it. No matter how long you give it, it'll just fall further and further behind. The requested pages are uncomputable! A faster computer, on the other hand, will keep up just fine, and finish all the computations.

This isn't how computations have been traditionally defined, so that's why it seems so unintuitive. But, he's right nonetheless.

[+] tybris|17 years ago|reply
The server will fall further and further behind, but you cannot give it any piece of work that it will never finish.
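A toy queue makes this precise (the arrival and service rates are invented): the backlog grows without bound, yet every individual job finishes at some finite tick.

```python
from collections import deque

# Invented rates: 3 jobs arrive per tick, the server completes 2 per tick.
ARRIVALS_PER_TICK = 3
SERVED_PER_TICK = 2

finish_tick = {}          # job id -> tick at which it completed
queue = deque()
next_job = 0

for tick in range(100):
    for _ in range(ARRIVALS_PER_TICK):
        queue.append(next_job)
        next_job += 1
    for _ in range(SERVED_PER_TICK):
        if queue:
            finish_tick[queue.popleft()] = tick

backlog = len(queue)      # grows by ~1 per tick: 100 after 100 ticks
# Yet every individual job has a finite finish time: job k is served
# FIFO and completes by roughly tick k // SERVED_PER_TICK.
```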
[+] amalcon|17 years ago|reply
Suppose that my "computing device" were a machine designed to build and program more copies of itself. This device would "compute" a function across n variables in time log n by building n-1 copies of itself and then parallelizing the computation across them. With sufficient resources, this machine could compute the function for any n in finite time.

All the author has done is posit an ever-increasing volume of data, such that the rate at which the volume increases causes it to outpace the speed of the computation. It seems that positing an ever-increasing computational capacity is a reasonable solution to this problem.
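The log n figure presumably comes from a parallel tree reduction: with enough copies working at once, n inputs combine pairwise in ceil(log2 n) rounds. A sketch, counting rounds rather than wall-clock time (the "parallelism" is simulated sequentially here):

```python
def parallel_sum_rounds(values):
    """Tree reduction: each round, disjoint pairs are combined 'in
    parallel'. Returns (total, number_of_rounds)."""
    rounds = 0
    vals = list(values)
    while len(vals) > 1:
        # One parallel round: every adjacent pair is summed at once.
        combined = [vals[i] + vals[i + 1]
                    for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:            # odd element carries over
            combined.append(vals[-1])
        vals = combined
        rounds += 1
    return vals[0], rounds

total, rounds = parallel_sum_rounds(range(16))
# 16 inputs reduce in 4 rounds, matching ceil(log2(16)).
```

As psb217 notes below the log n bound, this only helps when the per-round combination is cheap; the sketch covers the summation example, not arbitrary F.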

[+] psb217|17 years ago|reply
Your time complexity of log n appears from nowhere. It is entirely possible that the complexity of calculating the original function F was worse than polynomial in the number of variables, in which case a polynomial/linear increase in the number of machines could not reduce the complexity by more than a polynomial/linear factor, thus leaving the computational complexity worse than polynomial (e.g. exponential/super-exponential). I've made a comment further down that posits an idea similar to yours, though with a bit more precision.
[+] antipax|17 years ago|reply
Since when was there a minimum amount of time for a theoretical computer to do any number of computations?

I suppose that any computer must take some finite, non-zero amount of time to perform an "instruction", because if the amount of time per instruction is zero, that would violate Turing's proof of the undecidability of the halting problem over Turing machines. His idea does have some merit, but I still don't think that this means there cannot be a Universal Computer -- just that it may be something that operates on a level above a Turing machine. I'd imagine it would involve some sort of recursive structure.

[+] psb217|17 years ago|reply
The way I see it, you could program a Turing Machine MF which has some finite "per-time-step" computation speed _n_, such that MF "knows" the following:

1: The general form/program for solving function F for any arbitrary number, _m_, of variables.

2: How to construct a Turing Machine MF' capable of performing _m_ "operations per-time-step", and programmed with 1.

Even in his crazy-town world of redefined computation, the Turing Machine MF is capable of computing any computable function F, of any number of time-varying variables _m_. I.e. given such a function F, MF simply creates a suitable machine MF' and copies the output of MF' to its own output. Thus, MF computes F, albeit by proxy. QED, etc.

It's possible that I'm now in some TM dual to his time-varying primal, but whatevs, I can do what I want.

ADD: As far as the computability results that I am familiar with are concerned, without loss of generality one could also just assume that all Turing Machines have infinite speed. The necessity of introducing the concept of a "time-step" into the definition of Turing Machines only occurs when computation time is actually of interest. Though, for complexity results one could still assume infinite speed, albeit the number of operations would still need to be counted.
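The MF/MF' construction above can be caricatured in code (purely illustrative names; "m operations per time step" is modeled as one atomic read of all m variables, and F is the summation example):

```python
# Caricature of the MF / MF' construction: a fixed machine that, given
# the number of variables m, builds a machine "wide enough" to read all
# m variables in one step, then copies its output.

def build_MF_prime(m):
    # MF' reads all m variables "simultaneously" (modeled as one call)
    # and computes F on that snapshot.
    def MF_prime(read_all_variables):
        snapshot = read_all_variables()   # one atomic read of m values
        assert len(snapshot) == m
        return sum(snapshot)              # F = summation
    return MF_prime

def MF(m, read_all_variables):
    # The fixed machine: construct a suitable MF', run it, copy output.
    return build_MF_prime(m)(read_all_variables)

result = MF(3, lambda: [1, 2, 3])
```

Whether "build a wider machine at runtime" counts as a single universal machine is, of course, exactly the definitional point under dispute.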

[+] codeodor|17 years ago|reply
How does one go about showing that there is no way to solve such a problem?

What about the argument that perhaps the author was not clever enough to devise an algorithm which solves the problem he stated?

Perhaps when I read the papers I will be enlightened?

[+] cperciva|17 years ago|reply
How does one go about showing that there is no way to solve such a problem?

With great difficulty. Usually the approach is to suppose that you have an algorithm and show that either (a) this algorithm allows you to do something impossible (e.g., solve a different impossible problem, or reach a contradiction); or (b) there are two different problems with different answers which the specified algorithm cannot distinguish between (this is generally only possible where you're proving that it's impossible to compute something in less than some number of steps).

The first approach is used for things like showing that the halting problem is impossible; the second approach is used for things like showing that it's impossible to have a comparison sort which runs in less than O(N log N) time.
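Approach (a) is exactly the shape of the classic halting-problem proof: assume a decider exists, then build a program it must misclassify. A sketch, where `halts` is the assumed (and impossible) decider:

```python
# Sketch of approach (a): suppose a decider `halts(prog, arg)` exists,
# then construct a program it cannot classify correctly.

def halts(prog, arg):
    """Assumed to exist: True iff prog(arg) eventually halts.
    No such total function can exist; this stub marks the assumption."""
    raise NotImplementedError("no such decider can exist")

def paradox(prog):
    # Do the opposite of whatever the decider predicts about
    # running prog on itself.
    if halts(prog, prog):
        while True:        # loop forever if predicted to halt
            pass
    return "halted"        # halt if predicted to loop

# Consider paradox(paradox): if halts says it halts, it loops; if halts
# says it loops, it halts. Either way the decider was wrong, so the
# assumed `halts` cannot exist.
```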

[+] ggchappell|17 years ago|reply
Folks, the only problem with this article is the HN headline. Akl never claims Turing was wrong.

And I think he is demonstrating something rather interesting: that time dependence has a significant effect on the power of a computing device.

[+] stonemetal|17 years ago|reply
Isn't this more or less the same issue as multiple infinities? Whole numbers are infinite but countable, but rationals are infinite and not countable. UTMs are able to compute the infinite, but there are infinities that are not computable.
[+] banned_man|17 years ago|reply
Rationals are countable but dense. You can biject Z * Z <-> N trivially (start from origin, spiral out) and Q can be constructed as a subset of Z * Z. Reals are uncountable.
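The spiral enumeration can be written out (here by square rings around the origin rather than a literal spiral; the order differs but the countability argument is the same): walking outward visits every integer pair exactly once.

```python
# Enumerate Z x Z by expanding square "rings" around the origin; every
# integer pair appears exactly once, giving a bijection with N and
# hence showing Z x Z (and so Q) is countable.

def ring_pairs(limit):
    """Return the first `limit` pairs of the ring-by-ring enumeration."""
    out = [(0, 0)]
    r = 1
    while len(out) < limit:
        # All pairs whose max(|x|, |y|) equals r, in a fixed order.
        ring = [(x, y)
                for x in range(-r, r + 1)
                for y in range(-r, r + 1)
                if max(abs(x), abs(y)) == r]
        out.extend(ring)
        r += 1
    return out[:limit]

pairs = ring_pairs(9)
# The first 9 pairs are exactly the grid with max(|x|, |y|) <= 1,
# each appearing once: the enumeration never skips or repeats a pair.
```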
[+] forinti|17 years ago|reply
I think Peter Wegner made this same point in the paper "Why interaction is more powerful than algorithms" in CACM (Volume 40, Issue 5, May 1997).
[+] dhs|17 years ago|reply
Yes, interactive computation is sometimes shown to be more powerful than algorithmic computation. It's also "less powerful" at other times, and since it's not computable at which times interaction is "better", it's not a general advantage. But Turing never considered interaction in his definition of computation, so Wegner is talking about something else.
[+] milkmandan|17 years ago|reply
Wegner is not the best example for 'not a crackpot' though.
[+] varjag|17 years ago|reply
Stopped reading at mention of "physical variable".
[+] milkmandan|17 years ago|reply
He basically says that the UTM cannot run a TM in real time, so that makes it non-universal. It's pretty inane.