If the neural network had movable tape heads that could seek between invocations, and the inputs were provided in little-endian format, a fairly small model could implement addition with carry on arbitrarily long numbers, and you'd only need to add a few redundant dimensions to get something that could be trained.
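A minimal sketch of why little-endian order makes this easy: each output digit depends only on the two input digits at the current position plus a single carry bit, so the per-step function the model has to represent is tiny. This is illustrative Python, not a neural implementation; the function name and digit-list representation are my own.

```python
def add_little_endian(a, b):
    """Add two base-10 numbers given as little-endian digit lists.

    The only state carried between steps is the carry bit, which is
    why a small recurrent step function suffices for this task.
    """
    carry = 0
    out = []
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        out.append(s % 10)   # emit the digit at this position
        carry = s // 10      # pass the carry forward to the next step
    if carry:
        out.append(carry)
    return out

# 47 + 85 = 132, digits stored least-significant first
print(add_little_endian([7, 4], [5, 8]))  # [2, 3, 1]
```

With big-endian inputs the first output digit can depend on every input digit (carries propagate from the far end), which is what makes the little-endian framing the natural one for a model that reads left to right.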