I suspect this is because GPT-2 doesn't have any overarching narrative that it is piecing together. Ultimately it is like a super-powerful Markov-based text generator -- predicting what comes next from what has come before. It has a longer "memory" than a Markov model, and a lot more complexity, but where a person often formulates a plan for the next few sentences and the direction they should go, GPT-2 doesn't really work that way. Hence it sounds like dream logic: in dreams your brain is just throwing together "what comes next" without an overall plan. Of course your brain is also back-patching and retconning all sorts of stuff in dreams too, but that's a different matter.
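To make the "Markov-based text generator" analogy concrete, here's a toy word-level Markov chain: each next word is sampled based only on the preceding n-gram, with no plan beyond that window. (This is just an illustration of the analogy -- GPT-2 itself is a neural network, not a lookup table.)

```python
import random
from collections import defaultdict

def build_markov(text, order=1):
    """Map each n-gram of `order` words to the words that follow it."""
    words = text.split()
    table = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        table[key].append(words[i + order])
    return table

def generate(table, length=10, seed=None):
    """Sample a chain: each step looks only at the previous n-gram."""
    rng = random.Random(seed)
    key = rng.choice(list(table.keys()))
    out = list(key)
    for _ in range(length):
        choices = table.get(tuple(out[-len(key):]))
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
table = build_markov(corpus)
print(generate(table, length=8, seed=42))
```

The point of the analogy: nothing in `generate` carries a goal or narrative forward; coherence only extends as far as the context window reaches.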
andai|6 years ago
Beyond that, I am wondering if some sort of logic-based or goal-based AI could be integrated to make it more consistent (or does that still require too much manual fiddling to be useful at scale?).