top | item 38389477

muskmusk | 2 years ago

If you ask Ilya Sutskever, he will say your kid's head is full of neurons, and so are LLMs.

LLMs consume training data and can then be asked questions. How different is that from your son watching YouTube and then answering questions?

It's not 1:1 the same, yet, but it's in the neighborhood.


cduzz|2 years ago

Well, my son is a meat robot who's constantly ingesting information from a variety of sources including but not limited to youtube. His firmware includes a sophisticated realtime operating system that models reality in a way that allows interaction with the world symbolically. I don't think his solving the |i+1| question was founded in linguistic similarity but instead in a physical model / visualization similarity.

So -- to a large degree "bucket of neurons == bucket of neurons" but the training data is different and the processing model isn't necessarily identical.

I'm not necessarily disagreeing as much as perhaps questioning the size of the neighborhood...

muskmusk|2 years ago

Heh, I guess it's a matter of perspective. Your son's head is not made of silicon, so in that sense it is a large neighborhood. But if you put them behind a screen and only see the output, then the neighborhood looks smaller. Maybe it looks even smaller a couple of years in the future. It certainly looks smaller than it did a couple of years in the past.

meheleventyone|2 years ago

From the meat robot perspective the structure, operation and organisation of the neurons is also significantly different.

leobg|2 years ago

Maybe Altman should just go have some kids and RLHF them instead.

swatcoder|2 years ago

There are thousands of structures and substances in a human head besides neurons, at all sorts of commingling and overlapping scales, and the neurons in those heads behave much differently and with tremendously more complexity than the metaphorical ones in a neural network.

And in a human, all those structures and substances, along with the tens of thousands more throughout the rest of the body, are collectively readied with millions of years of "pretraining" before processing a continuous, constant, unceasing multimodal training experience for years.

LLMs and related systems are awesome and an amazing innovation that's going to impact a lot of our experiences over the next decades. But they're not even in the same galaxy as almost any living system yet. That they look like they're in the neighborhood is because you're looking at them through a very narrow, very zoomed-in telescope.

xanderlewis|2 years ago

Even if they are very different (less complex at the neuron level?) to us, do you still think they’ll never be able to achieve similar results (‘truly’ understanding and developing pure mathematics, for example)? I agree that LLMs are less impressive than it may initially seem (although still very impressive), but it seems perfectly possible to me that such systems could in principle do our job even if they never think quite like we do.

Davidzheng|2 years ago

True. But a human neuron is more complex than an AI neuron by a constant factor. And we can improve constants. Also, you say years like it's a lot of data--but they can run RL on ChatGPT outputs if they want, isn't it comparable? But anyway, I share your admiration for the biological thinking machines ;)

Davidzheng|2 years ago

To continue on this. LLMs are actually really good at asking questions, even about cutting-edge research. Often, I believe, convincing the listener that it understands more than it does.

gunapologist99|2 years ago

... which ties into Sam's point about persuasiveness before true understanding.