top | item 41926127


bigyikes | 1 year ago

I fall apart when responding outside the bounds of my training data, too. Does that imply I’m not thinking?

This idea is often used to argue that LLMs will never be capable of novel idea generation, but I don’t think it’s a good argument.

For one, the LLM has such a large breadth and depth of knowledge that it could conceivably learn relations between concepts in a way that no human has before.

Secondly, novel ideas occur at the margins. Very rare is the case where someone comes up with a fundamentally new idea out of the blue. Instead, novel ideas arise just at the edge of one’s expertise. If you dial up the temperature of an LLM, it will generate novelty, and then it’s just a matter of evaluating merit.
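The temperature knob mentioned here is just a rescaling of the model's next-token logits before sampling: dividing by a temperature above 1 flattens the distribution, so rarer continuations get picked more often. A minimal sketch (the logit values are hypothetical):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw logits into sampling probabilities.

    Higher temperature flattens the distribution, giving
    low-probability (potentially novel) tokens more mass.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 0.5]  # hypothetical next-token scores

cold = softmax_with_temperature(logits, 0.5)  # sharp: top token dominates
hot = softmax_with_temperature(logits, 2.0)   # flat: tail tokens gain mass
```

With temperature 0.5 the top token takes almost all the probability; at 2.0 the tail tokens become genuinely likely, which is the "generate novelty, then evaluate merit" loop the comment describes.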

Iterated inference at the margins of LLM knowledge will lead to novel knowledge synthesis.


janice1999 | 1 year ago

> I fall apart when responding outside the bounds of my training data, too. Does that imply I’m not thinking?

You can use reasoning. Whether LLMs can is a matter of research and debate. I'm not an astrobiologist but if someone claimed that frogs live on Pluto, I would never hallucinate an answer in which I confidently assert that they do.

deafpolygon | 1 year ago

But they will never come up with ideas they've never encountered before. I.e., they are incapable of thinking outside the box.

sega_sai | 1 year ago

I would argue that the absolute majority of people don't come up with really novel ideas either (and I'm speaking of myself too). Most people just develop existing ideas, and maybe apply them in new contexts.

spwa4 | 1 year ago

"Okay Google, tell me what 5 flowers would say discussing shoe sizes with 28 pigs." There, thinking outside the box, delivered. ChatGPT wrote a nice story.