top | item 47162571


callmeal | 4 days ago

"Predict the next word" is to a current LLM what a "transistor" (or gate) is to a modern CPU. I don't understand LLMs well enough to expand on the comparison, but I can see how having layers above that feed the layers below to "predict the next word", using the output to modify the input, leads to what we see today. It is turtles all the way down.


brookst|4 days ago

It’s a good comparison. It’s about abstraction and layers. Modern LLMs aren’t just models; they’re all the infrastructure around prompting, context management, and mixtures of experts.

The next-word bit may sit slightly higher than an individual transistor, closer to the level of functional units.

ejolto|4 days ago

There is a big difference: I understand how those transistors produce a picture on a screen, but I don’t understand how LLMs do what they do. The gap is so big that the comparison is useless.

jcul|4 days ago

I understand how transistors work too, and how they can result in a picture on a screen. But most people outside the software and electronics fields don't, and to them it's just magic.

echelon|4 days ago

Humans are future predictors: our vision systems, our mental models, our careers. People who predict the future well tend to do well financially.

Now the machines are getting better than we are. It's exciting and a little bit terrifying.

We were polymers that evolved intelligence. Now the sand is becoming smart.

qsera|4 days ago

>Now the machines are getting better than we are

Then AI companies should stop looking for investors and instead play the stock market with all that predictive power!