top | item 34808422


ag315 | 3 years ago

This is spot on in my opinion and I wish more people would keep it in mind--it may well be that large language models can eventually become functionally very much like AGI in terms of what they can output, but they are not systems that have anything like a mind or intentionality because they are not designed to have them, and cannot just form it spontaneously out of their current structure.


bigtex88|3 years ago

This very much seems like a "famous last words" scenario.

Go play around with Conway's Game of Life if you think that things cannot just spontaneously appear out of simple processes. Just because we did not "design" these LLMs to have minds does not mean that we will not end up creating a sentient mind, and for you to claim otherwise is the height of arrogance.
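For anyone who hasn't played with it: the entire "physics" of Life fits in a few lines, and structures like gliders and oscillators emerge from it anyway. A minimal sketch in Python, representing the world as a set of live (x, y) cells:

```python
from collections import Counter

def step(live):
    """Advance Conway's Game of Life one generation.

    `live` is a set of (x, y) coordinates of live cells. A cell is
    alive next generation if it has exactly 3 live neighbours, or
    if it is currently alive and has exactly 2.
    """
    # Count, for every cell adjacent to a live cell, how many live
    # neighbours it has.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))        # vertical: {(1, 0), (1, 1), (1, 2)}
print(step(step(blinker)))  # back to horizontal
```

The rules say nothing about oscillators, gliders, or self-replicating patterns; all of that behavior is emergent.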

It's Pascal's wager. If we build safeguards and they turn out to be unnecessary, we've just wasted a few years, no big deal. If we don't build safeguards and AI gets out of our control, say goodbye to human civilization. The risk / reward here falls overwhelmingly on the side of having extremely tight controls on AI.

ag315|3 years ago

My response to that would be to point out that these LLMs, complex and intricate as they are, are nowhere near as complex as, for example, the nervous system of a grasshopper. The nervous systems of grasshoppers, as far as we know, do not produce anything like what we're looking for in artificial general intelligence, despite being an order of magnitude more complicated than an LLM codebase. Nor is it likely that they suddenly will one day.

I don't disagree that we should have tight safety controls on AI and in fact I'm open to seriously considering the possibility that we should stop pursuing AI almost entirely (not that enforcing such a thing is likely). But that's not really what my comment was about; LLMs may well present significant dangers, but that's different from asking whether or not they have minds or can produce intentionality.

mr_toad|3 years ago

> Go play around with Conway's Game of Life if you think that things cannot just spontaneously appear out of simple processes.

What spontaneously appears in the Game of Life is evolution: replication and natural selection. That is completely orthogonal to intelligence.

int_19h|3 years ago

Just because they aren't "designed" to have them doesn't mean that they actually do not. Here's a GPT model trained on board game moves - from scratch, without knowing the rules of the game or anything else about it - that ended up with an internal representation of the current state of the game board encoded in its layers. In other words, it's actually modelling the game in order to "just predict the next token", and this capability emerged spontaneously from the training.

https://thegradient.pub/othello/
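The article's core technique is a linear probe: take the model's hidden-state vectors, fit a linear map from them to known board states, and see whether held-out states are decodable. A toy illustration of that idea with synthetic data (the array shapes, the fake 3-cell "board", and the mixing matrix are all made up for the sketch; this is not the Othello-GPT code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for probing: pretend each 16-dim "hidden state"
# is a linear mixture of a 3-cell binary "board" plus a little noise,
# the way a transformer layer might linearly encode game state.
board = rng.integers(0, 2, size=(200, 3)).astype(float)  # 200 positions
mix = rng.normal(size=(3, 16))                           # board -> hidden
hidden = board @ mix + 0.01 * rng.normal(size=(200, 16))

# Linear probe: least-squares map from hidden states back to the board,
# fit on the first 150 positions only.
W, *_ = np.linalg.lstsq(hidden[:150], board[:150], rcond=None)

# If board information is linearly decodable from the hidden states,
# rounded predictions on the held-out positions should be accurate.
pred = (hidden[150:] @ W).round()
accuracy = (pred == board[150:]).mean()
print(accuracy)
```

In the article, high probe accuracy on real Othello-GPT activations is the evidence that a board representation exists inside the network even though the model was only ever trained to predict the next move token.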

So then why do you believe that ChatGPT doesn't have a model of the outside world? There's no doubt that it's a vastly simpler model than a human would have, but if it exists, how is that not "something like a mind"?

mr_toad|3 years ago

It was trained to model the game. LLMs are trained to model language. Neither are trained to take over the world.