jerf|15 years ago
Do you know information theory, by which I mean the actual mathematical study of information? The definition of information is a wild and woolly beast; if you understand the definition you already know what I mean, and if you don't you'll argue endlessly without ever having understood what you're arguing against.
But what it basically boils down to is a counting argument: if you are a human and you have ten thoughts of various kinds that in your illiteracy all get "translated" (putting it politely) to "lolz that suckz", no computer algorithm, or indeed anything else, can translate that back out to the original ten thoughts. It could pick one of them and use that, and that chosen thought may be extremely eloquently expressed, but that's all it can do. A fixed number of inputs can only go to a fixed number of outputs; there's no magical process whereby anything, not even a human, can take truly useless semantic garbage text and turn it into accurate, informative, well-written text without loss. There are radically more well-formed, coherent thoughts than there are fuzzy thoughts, and there simply isn't any sort of mapping at all from fuzzy to coherent. If someone wishes to communicate precisely, they will always have to put the effort into it; there is no magical shortcut. There's no need to worry about the effects of the magical shortcut on people or on society: no such thing exists.
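To make the counting argument concrete, here's a toy sketch in Python (the "thoughts" and names are made up for illustration; it's just the pigeonhole principle in code):

    # Toy illustration of the counting argument: ten distinct "thoughts"
    # all transcribe to the same string, so no decoder can recover which
    # one was meant -- the forward map isn't injective.
    thoughts = ["thought #%d" % i for i in range(10)]   # hypothetical inputs

    def transcribe(thought):
        return "lolz that suckz"    # every input collapses to one output

    outputs = {transcribe(t) for t in thoughts}
    print(len(thoughts), "inputs ->", len(outputs), "distinct output(s)")
    # A "decoder" seeing the one output can at best guess one of the ten
    # originals; one output can never fan back out to ten inputs losslessly.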
In fact, this is just another form of the following argument in heavy disguise: http://matt.might.net/articles/why-infinite-or-guaranteed-fi... That article argues there is no such thing as a "guaranteed compressor"; I'm making the same argument in the other direction: there cannot exist a decompressor that can take some smallish set of gibberish and map it to the much richer set of interesting thoughts. Information theory is fun because the argument about this hypothetical translator and the argument about a hypothetical perfect decompressor turn out to be the same argument in the end.
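A rough sketch of the counting behind the linked argument, with the numbers small enough to print (n is arbitrary here, nothing special about 8):

    # There are 2**n binary strings of length n but only 2**n - 1 strings of
    # length strictly less than n, so no injective "compressor" can shrink
    # every input.
    n = 8
    inputs = 2 ** n                                   # length-n strings
    shorter_outputs = sum(2 ** k for k in range(n))   # lengths 0 .. n-1
    print(inputs, "inputs vs", shorter_outputs, "possible shorter outputs")
    # 256 vs 255: at least one 8-bit string has no shorter codeword. Read
    # the same count in reverse and you get the "no perfect decompressor"
    # claim: a small set of garbled strings can't map one-to-one onto a
    # larger set of distinct coherent texts.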
The obvious counterargument is that a human might be able to use more context to get a richer sense of the details; a whole stream of poorly written text can still mean more than one bad tweet. The problem is that the source information, context included, is everything a human or anything else has to work with. Anything a human could use, a program could in principle use as well, even if we couldn't write such a program today. Differences between humans and computers are only a matter of how good the translator is; the theoretical limit on both is unaffected by those differences.
hc|15 years ago
The reason there isn't a guaranteed compressor is that a legitimate compressor has to be injective, and there is no injective function from, say, {0,1}^n to {0,1}^m where m < n. In contrast, there are many guaranteed "decompressors," if by a decompressor you mean an invertible function that expands the length of a string.
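A minimal sketch of that asymmetry (toy bit-string maps, names made up): expanding injectively is trivial, while shrinking injectively is ruled out by counting.

    # An injective, length-increasing "decompressor" is easy to build:
    def expand(bits: str) -> str:
        return "".join(b * 2 for b in bits)    # 0 -> 00, 1 -> 11

    def contract(expanded: str) -> str:
        return expanded[::2]                   # inverse: take every other symbol

    s = "10110"
    assert contract(expand(s)) == s            # invertible, and the output is longer
    # The compressor direction has no such luck: {0,1}^n has 2**n elements and
    # {0,1}^m has only 2**m < 2**n when m < n, so some two inputs must collide
    # (pigeonhole) and the map can't be inverted.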
You seem to contradict yourself a bit: first you agree that an algorithm could take "lolz that suckz" and produce "one of" the original thoughts that got transcribed as it, then you say there "simply isn't any sort of mapping" from "fuzzy thoughts" (sentences produced by transcribing thoughts?) to coherent ones. But obviously there are many such mappings; it's just that they aren't guaranteed to produce the same thought the author of the "fuzzy thought" had when he wrote down what he was thinking.
Now, what is the actual cause for fear? Is it the idea that we might someday be able to communicate without taking much effort (the thing you say is impossible), or that people with nothing going on between their ears will be able to produce computer-mediated output that is indistinguishable from coherent thought? I think the latter is the more interesting concern, but I don't think anything you've said really addresses it. To be honest, I also don't think your brief discussion of information theory does much to argue against the former either. (Your argument could be adapted to show that Morse code is impossible, because messages are shorter in Morse code than they would be in a straightforward ternary ASCII encoding. But actually, it's just that the messages we typically wish to send are the ones that come out shorter.)
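A toy sketch of that Morse point (tiny 8-letter alphabet for illustration; the codewords are standard Morse, the 3-symbol fixed-length comparison is just an assumed baseline for an 8-letter alphabet):

    # Frequent letters get short codewords, so typical messages come out
    # shorter, but nothing is compressed for free; rare letters pay with
    # longer codewords.
    MORSE = {"E": ".", "T": "-", "A": ".-", "I": "..",
             "Q": "--.-", "X": "-..-", "Z": "--..", "J": ".---"}
    FIXED = 3   # symbols per letter in a fixed-length code over 8 letters

    def morse_len(word):
        return sum(len(MORSE[c]) for c in word)

    for word in ("EAT", "QXZJ"):
        print(word, morse_len(word), "Morse symbols vs", FIXED * len(word), "fixed-length")
    # EAT: 4 vs 9 (common letters win); QXZJ: 16 vs 12 (rare letters lose).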