top | item 34808960

ag315 | 3 years ago

My response to that would be to point out that these LLMs, complex and intricate as they are, are nowhere near as complex as, for example, the nervous system of a grasshopper. The nervous system of a grasshopper, as far as we know, does not produce anything like what we're looking for in artificial general intelligence, despite being an order of magnitude more complicated than an LLM codebase. Nor is it likely that it suddenly will one day.

I don't disagree that we should have tight safety controls on AI and in fact I'm open to seriously considering the possibility that we should stop pursuing AI almost entirely (not that enforcing such a thing is likely). But that's not really what my comment was about; LLMs may well present significant dangers, but that's different from asking whether or not they have minds or can produce intentionality.

int_19h | 3 years ago

You forget that the nervous systems of living beings also have to run the body itself in the first place, which is a very complicated process in its own right (think vision, locomotion, etc.). ChatGPT, on the other hand, is doing language processing alone.

That aside, I also wonder about the source for the "nowhere near as complex" claim. Per Wikipedia, most insects have 100,000 to 1,000,000 neurons; another source gives a figure of 400,000 for grasshoppers specifically. The more interesting figure would be the synapse count, but I couldn't find that.
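The figures above can be put side by side with a quick back-of-envelope calculation. This is only a sketch: the synapses-per-neuron ratio is an assumed illustrative value, not a measured number for grasshoppers, and the GPT-3 parameter count stands in for "an LLM" since ChatGPT's actual size isn't public. A parameter and a synapse are also not comparable units of complexity; this just lines up the raw counts.

```python
# Back-of-envelope comparison of the neuron figures cited above
# against a published LLM parameter count.

grasshopper_neurons = 400_000       # figure cited above for grasshoppers
synapses_per_neuron = 1_000         # ASSUMED ratio for illustration only

estimated_synapses = grasshopper_neurons * synapses_per_neuron
gpt3_parameters = 175_000_000_000   # published GPT-3 parameter count

print(f"Estimated grasshopper synapses: {estimated_synapses:,}")
print(f"GPT-3 parameters:               {gpt3_parameters:,}")
print(f"Parameters per estimated synapse: {gpt3_parameters / estimated_synapses:.1f}")
```

Under these (very rough) assumptions the raw counts come out in the LLM's favor by a few hundred times, which is why the synapse figure matters to the comparison: the conclusion flips or holds depending on numbers nobody in this thread actually has.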

ag315 | 3 years ago

In most cases there are vastly more synapses than there are neurons, and beyond that, neurons and synapses are not simple, rudimentary components but are themselves extremely complex.

It's certainly true that nervous systems do quite a bit more than language processing, but AGI would presumably also have to do quite a bit more than just language processing if we want it to be truly general.