karparov | 11 months ago
We know quite well how it does it. It's applying extrapolation to its lossily compressed representation. It's not magic, and the HN crowd of technically proficient folks especially should stop treating it as such.
TeMPOraL | 11 months ago
kazinator | 11 months ago
LLM AI is different in that it does produce helpful results, not only entertaining prose.
It is practical for users today to replace most uses of web search with a query to an LLM.
The way token prediction operates, it uncovers facts and renders them into grammatically correct language.
Which is amazing given that, when the thing is generating a response that will be, say, 500 tokens long, after it has produced 200 of them it has no idea what the remaining 300 will be. Yet it has committed to those 200, and often the whole thing makes sense once the remaining 300 arrive.
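To make the "commitment" point concrete, here is a minimal sketch of an autoregressive decoding loop. The model function and vocabulary are toy stand-ins (a real LLM's forward pass conditions the next-token distribution on the entire context); the point is structural: each token is sampled and appended before any later token exists, and is never revised.

```python
import random

# Hypothetical toy vocabulary; a real model has tens of thousands of tokens.
VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

def next_token_probs(context):
    # Toy stand-in for a model's forward pass: uniform over the vocabulary.
    # A real LLM computes these probabilities from the whole context so far.
    return {tok: 1.0 / len(VOCAB) for tok in VOCAB}

def generate(prompt, max_tokens=10, seed=0):
    rng = random.Random(seed)
    out = list(prompt)
    for _ in range(max_tokens):
        probs = next_token_probs(out)
        # Sample ONE token; it is final the moment it is emitted,
        # even though the rest of the response does not exist yet.
        tok = rng.choices(list(probs), weights=list(probs.values()))[0]
        if tok == "<eos>":
            break
        out.append(tok)
    return out
```

The loop never backtracks: once a token is in `out`, every later probability is conditioned on it, which is exactly why the first 200 tokens are locked in before the remaining 300 are generated.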