(no title)
tytso | 3 years ago
The way that people are trying to use ChatGPT is certainly an example of what humans _hope_ the future of human/computer interaction will be. Whether or not Large Language Models such as ChatGPT are the path forward is yet to be seen. Personally, I think the model of "ever-increasing neural network sizes" is a dead end. What is needed is better semantic understanding --- that is, mapping words to abstract concepts, operating on those concepts, and then translating concepts back into words. We don't know how to do this today; all we know how to do is make the neural networks larger and larger.
What we need is a way to have networks of networks: networks which can handle memory, time sense, and reasoning, such that the network of networks has pre-defined structures for these various skills, and ways of training those sub-networks. This is all something that organic brains have, but which neural networks today do not.
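The idea above (encode words into concepts, operate on concepts with specialised sub-modules, decode back into words) could be sketched, very loosely, like this. Everything here is a hypothetical illustration --- the class names, the bag-of-tokens "concepts", and the overlap-based memory lookup are stand-ins, not any real architecture:

```python
# Toy sketch of a "network of networks": specialised sub-modules
# (hypothetical names) operating on a shared concept representation.

class Encoder:
    """Maps words to an abstract 'concept' (here: just a set of tokens)."""
    def encode(self, text):
        return set(text.lower().split())

class MemoryModule:
    """Pre-defined structure for memory: stores and recalls past concepts."""
    def __init__(self):
        self.store = []
    def remember(self, concept):
        self.store.append(concept)
    def recall(self, query):
        # Return the stored concept with the largest overlap with the query.
        return max(self.store, key=lambda c: len(c & query), default=set())

class Decoder:
    """Translates a concept back into words."""
    def decode(self, concept):
        return " ".join(sorted(concept))

# The "network of networks" wires the sub-modules together.
encoder, memory, decoder = Encoder(), MemoryModule(), Decoder()
memory.remember(encoder.encode("cats chase mice"))
memory.remember(encoder.encode("dogs fetch sticks"))

query = encoder.encode("what do cats chase")
answer = decoder.decode(memory.recall(query))
print(answer)  # -> "cats chase mice"
```

In a real system each sub-module would itself be a trained network and the concept space would be learned rather than hand-built; the point of the sketch is only the shape: fixed structure for each skill, with words at the boundary and concepts in between.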
whimsicalism | 3 years ago
It's pretty clear that these LLMs can basically already do this - I mean, they can solve the exact same tasks in a different language, mapping from the concept space they've been trained on in English into other languages. It seems like you are awaiting a time when we explicitly create a concept space with operations performed on it; that will never happen.
kenjackson | 3 years ago
I feel like DNNs do this today. At higher levels of the network they create abstractions, and then the eventual output maps them to something. What you describe seems evolutionary rather than revolutionary to me. This feels more like we finally discovered booster rockets, but still aren't able to fully get out of the atmosphere.
beepbooptheory | 3 years ago