top | item 41742731


surprisetrex | 1 year ago

I'm actually not so sure that LLMs are good at knowledge regurgitation. They're good at generating text that semi-plausibly looks like knowledge regurgitation, and that text may well be incomplete or wrong.

See the recent Google AI Summary mishaps for some good examples of this.

zephyreon | 1 year ago

I’m thinking of knowledge regurgitation in the context of a very structured environment, e.g. a company’s internal knowledge base and policies, as opposed to the entire internet.

A better way to convey this might be that LLMs are good at being conversational, and, given the appropriate context and guardrails, they can regurgitate knowledge from that context with reasonable accuracy.
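The "context and guardrails" idea can be sketched as a retrieval-then-prompt step: pull relevant snippets from the internal knowledge base, then instruct the model to answer only from those snippets. This is a minimal toy sketch (all names and data are hypothetical, and the keyword retrieval stands in for a real embedding/search step):

```python
# Hypothetical internal knowledge base, mapping document IDs to policy text.
KNOWLEDGE_BASE = {
    "pto-policy": "Employees accrue 1.5 days of PTO per month.",
    "wfh-policy": "Remote work requires manager approval.",
}

def retrieve(query: str) -> list[str]:
    # Toy keyword overlap; a real system would use embeddings or a search index.
    words = query.lower().split()
    return [text for text in KNOWLEDGE_BASE.values()
            if any(word in text.lower() for word in words)]

def build_prompt(query: str) -> str:
    # The "guardrail" is the instruction restricting the model to the
    # retrieved context and telling it to admit when the answer is absent.
    context = "\n".join(f"- {s}" for s in retrieve(query))
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("How much PTO do employees accrue?"))
```

The point is that the model never sees the open internet here, only vetted snippets, which is what makes "reasonable accuracy" plausible in this setting.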

Google’s mishaps (eating rocks, etc.) demonstrate that there’s still quite a bit of work to do before this works at scale, but the tech is still pretty good.