"Predict the next word" is to a current LLM what a transistor (or gate) is to a modern CPU. I don't understand LLMs well enough to expand on that comparison, but I can see how layers stacked above the basic "predict the next word" operation, feeding its output back in to modify the input, could lead to what we see today. It's turtles all the way down.
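A toy sketch of that "output modifies the input" loop: below, the primitive is a made-up bigram lookup standing in for a real model, and generation just appends each prediction back onto the context. This is purely illustrative, not how a transformer actually works.

```python
# Toy "predict the next word" primitive: a hand-made bigram table
# (hypothetical data, standing in for a trained model).
BIGRAMS = {
    "the": "cat",
    "cat": "sat",
    "sat": "on",
    "on": "the",
}

def predict_next(context):
    """The primitive: map the current context to one next word."""
    return BIGRAMS.get(context[-1], "<end>")

def generate(prompt, max_words=5):
    """The loop above the primitive: each output becomes part of the input."""
    words = list(prompt)
    for _ in range(max_words):
        nxt = predict_next(words)
        if nxt == "<end>":
            break
        words.append(nxt)  # feedback: prediction extends the context
    return words

print(generate(["the"]))  # prints ['the', 'cat', 'sat', 'on', 'the', 'cat']
```

The interesting behavior lives in the loop, not the lookup, which is roughly the point of the comparison: simple primitive, emergent structure above it.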
brookst|4 days ago
The next-word bit may be slightly higher-level than an individual transistor; it's possibly closer to a functional unit.
echelon|4 days ago
Now the machines are getting better than we are. It's exciting and a little bit terrifying.
We were polymers that evolved intelligence. Now the sand is becoming smart.
qsera|4 days ago
Then AI companies should stop looking for investors and instead play the stock market with all that predictive power!