berniedurfee | 1 year ago
LLMs are giant word Plinko machines. A million monkeys on a million typewriters.
LLMs are not interns. LLMs are assumption machines.
None of the individual monkeys, nor the collective million, are “reasoning” or capable of knowing.
LLMs are a neat parlor trick and are super powerful, but are not on the path to AGI.
LLMs will change the world, but only in the way that the printing press changed the world. They’re not interns, they’re just tools.
HarHarVeryFunny | 1 year ago
OTOH, maybe pre-trained LLMs could be used as a hardcoded "reptilian brain" that gives some future AGI a set of base capabilities (vs. being sold as a newborn that needs 20 years of parenting to be useful), which the real learning architecture can then override.
famouswaffles | 1 year ago
https://news.ycombinator.com/item?id=41504226
awb | 1 year ago
Maybe some evaluation of the sample size would be helpful? If the LLM has fewer than X samples of an input word or phrase, it could include a cautionary note in its output, or even respond with some variant of “I don’t know”.
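The idea above can be sketched as a simple frequency gate. This is a toy illustration only: real LLMs don't expose per-phrase training counts, so `TRAINING_COUNTS`, `MIN_SAMPLES`, and `answer_with_caution` are all hypothetical names standing in for that missing data.

```python
# Toy sketch of the "fewer than X samples -> caution" idea.
# TRAINING_COUNTS is a stand-in for per-phrase training-data frequencies,
# which real LLMs do not actually expose.

MIN_SAMPLES = 100  # the "X" from the comment; arbitrary threshold

TRAINING_COUNTS = {
    "delve": 50_000,      # common word: plenty of samples
    "anfractuous": 12,    # rare word: below the threshold
}

def answer_with_caution(phrase: str) -> str:
    """Prefix the response with a caution when training data is thin."""
    count = TRAINING_COUNTS.get(phrase.lower(), 0)
    if count < MIN_SAMPLES:
        return f"I don't know much about '{phrase}' (only {count} samples seen)."
    return f"Confident output for '{phrase}'."

print(answer_with_caution("delve"))
print(answer_with_caution("anfractuous"))
```

In practice something like this would have to hook into the tokenizer or training pipeline, since the counts aren't available at inference time.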
ijk | 1 year ago
It gets really obvious when it's repeatedly using clichés, both in repeated phrases and in trying to give every story the same ending.
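One crude way to spot that kind of repetition is n-gram counting over the generated text. A minimal sketch (the function name and thresholds are mine, not anything from the thread):

```python
from collections import Counter

def repeated_ngrams(text: str, n: int = 3, min_repeats: int = 2) -> dict:
    """Return n-grams that recur, a rough proxy for clichéd, repetitive output."""
    words = text.lower().split()
    grams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    counts = Counter(grams)
    return {gram: c for gram, c in counts.items() if c >= min_repeats}

story = ("and they all lived happily ever after . "
         "the end was happy and they all lived happily ever after .")
print(repeated_ngrams(story))
```

This only catches verbatim repeats; the "same ending every time" failure mode would need comparison across multiple generations, not within one.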
freejazz | 1 year ago
The problem space in creative writing is well beyond the problem space for programming or other "falsifiable disciplines".
0xdeadbeefbabe | 1 year ago
Makes me wonder whether medical doctors will ever be able to blame the LLM, rather than other factors, for killing their patients.