Isinlor | 1 year ago
https://chatgpt.com/share/6775c9a6-8cec-8007-b709-3431e7a2b2...
Basically, a single feed-forward pass is not Turing complete, but autoregressive generation (feeding previous output back into the model) is Turing complete.
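A toy sketch of the distinction (not a transformer, just an analogy): a single function application does a fixed amount of work, but feeding its output back into itself lets you run an unbounded loop, which is what the Turing-completeness argument hinges on.

```python
# Hypothetical illustration: one "forward pass" does a fixed,
# bounded amount of computation.
def one_step(state):
    return state - 1 if state > 0 else 0

# Autoregression: keep feeding the output back in as the next
# input until a stopping condition holds. The number of steps
# is unbounded, unlike any single fixed-depth pass.
def autoregress(state):
    steps = 0
    while state > 0:
        state = one_step(state)
        steps += 1
    return steps

print(autoregress(5))  # 5 iterations of the same fixed-step function
print(autoregress(0))  # 0 iterations: already halted
```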
frikskit | 1 year ago
Regardless, I’d love it if you could explain a bit more why the transformer internals make this problem so difficult.
Isinlor | 1 year ago
https://arxiv.org/html/2407.15160v2
The Expressive Power of Transformers with Chain of Thought:
https://arxiv.org/html/2310.07923v5
The transformer needs to retrieve the letters of each token while its internal representation is forced to stay aligned in length with the base tokens (each token has a single finite embedding, yet consists of multiple letters), and then it has to count letters within that misaligned representation.
Autoregressive mode completely alleviates the problem: the model can align its internal representation with the letters and just keep an explicit sequential count.
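The misalignment can be sketched like this (a hypothetical BPE-style segmentation, not a real tokenizer): the model "sees" two token embeddings rather than ten letters, whereas spelling the word out autoregressively gives one step per letter with an explicit running count.

```python
# Hypothetical segmentation of "strawberry" into two subword tokens,
# each represented by a single embedding in a real model.
tokens = ["straw", "berry"]

# Chain-of-thought style counting: emit one step per letter so the
# sequence length aligns with letters, not tokens, and carry an
# explicit sequential count through the steps.
def count_letter_sequentially(tokens, target):
    count = 0
    steps = []  # the "spelled out" intermediate outputs
    for tok in tokens:
        for ch in tok:
            if ch == target:
                count += 1
            steps.append(f"{ch} -> count={count}")
    return count, steps

count, steps = count_letter_sequentially(tokens, "r")
print(count)      # 3 'r's in "strawberry"
print(len(steps)) # 10 steps, one per letter
```

In one forward pass the model would instead have to extract all three 'r's from just two token embeddings in parallel; the sequential version never faces that compression.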
BTW - humans also can't count without resorting to a sequential process.