ag315|3 years ago
This is spot on in my opinion, and I wish more people would keep it in mind: it may well be that large language models eventually become functionally very much like AGI in terms of what they can output, but they are not systems that have anything like a mind or intentionality, because they are not designed to have them and cannot form them spontaneously out of their current structure.
bigtex88|3 years ago
Go play around with Conway's Game of Life if you think that things cannot just spontaneously appear out of simple processes. Just because we did not "design" these LLMs to have minds does not mean that we will not end up creating a sentient mind, and for you to claim otherwise is the height of arrogance.
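(A minimal sketch of what "simple processes" means here, assuming Python and a set-of-live-cells representation; the glider seed is the standard one, not anything from this thread. Two fixed local rules, applied uniformly, produce a self-propagating structure nobody wrote into the rules:)

    from collections import Counter

    def step(live):
        """One generation of Life: live is a set of (x, y) cells."""
        counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # Birth on exactly 3 neighbours; survival on 2 or 3.
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    # A glider: five cells that translate diagonally forever.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))  # same shape, shifted one cell diagonally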
It's Pascal's wager. If we put safeguards in place and it turns out there was no reason to, we just wasted a few years; no big deal. If we don't, and AI gets out of our control, say goodbye to human civilization. The risk/reward here falls heavily on the side of keeping extremely tight controls on AI.
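(To make the risk/reward claim concrete, a toy expected-cost comparison; every number below is a hypothetical placeholder, not an estimate from anyone in this thread:)

    # All quantities hypothetical, in arbitrary "cost" units.
    p_loss_of_control = 0.01   # assumed probability AI escapes control
    cost_safeguards   = 1.0    # "a few wasted years"
    cost_catastrophe  = 1e6    # "goodbye to human civilization"

    expected_cost_with    = cost_safeguards
    expected_cost_without = p_loss_of_control * cost_catastrophe
    print(expected_cost_with < expected_cost_without)  # True for any non-tiny p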
ag315|3 years ago
I don't disagree that we should have tight safety controls on AI and in fact I'm open to seriously considering the possibility that we should stop pursuing AI almost entirely (not that enforcing such a thing is likely). But that's not really what my comment was about; LLMs may well present significant dangers, but that's different from asking whether or not they have minds or can produce intentionality.
lstodd|3 years ago
https://news.ycombinator.com/item?id=33978978
mr_toad|3 years ago
Evolution - replication and natural selection. This is completely orthogonal to intelligence.
int_19h|3 years ago
https://thegradient.pub/othello/
So then why do you believe that ChatGPT doesn't have a model of the outside world? There's no doubt that it's a vastly simpler model than a human would have, but if it exists, how is that not "something like a mind"?
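(For what the linked Othello result means by "has a model": if a linear probe can read a world state out of a network's hidden activations, that state is encoded inside the network. A sketch of the probing idea on synthetic stand-in data; `hidden` and `board_bit` are illustrative names, not from the article:)

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    hidden = rng.normal(size=(1000, 64))       # stand-in "activations"
    w = rng.normal(size=64)
    board_bit = (hidden @ w > 0).astype(int)   # a state linearly encoded in them

    # Train a linear probe on part of the data, test on the rest.
    probe = LogisticRegression(max_iter=1000).fit(hidden[:800], board_bit[:800])
    print(probe.score(hidden[800:], board_bit[800:]))  # ~1.0: the state is recoverable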