pocketarc | 6 months ago

Others have already said it, but it needs to be said again: Good god, stop treating LLMs like oracles.

LLMs are not encyclopedias.

Give an LLM the context you want to explore, and it will do a fantastic job of telling you all about it. Give an LLM access to web search, and it will find things for you and tell you what you want to know. Ask it "what's happening in my town this week?", and it will answer that with the tools it is given. Not out of its oracle mind, but out of web search + natural language processing.
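
To make that concrete, here is a minimal sketch of such a tool loop, assuming the OpenAI Python SDK; search_web() is a hypothetical stand-in for whatever search API you actually have, and the model name is just an example:

    import json
    from openai import OpenAI

    client = OpenAI()

    def search_web(query: str) -> str:
        # Hypothetical stub: swap in a real search backend here.
        return f"(search results for: {query})"

    # Describe the tool so the model knows it can ask for a search.
    tools = [{
        "type": "function",
        "function": {
            "name": "search_web",
            "description": "Search the web and return result snippets.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }]

    messages = [{"role": "user",
                 "content": "What's happening in my town this week?"}]
    reply = client.chat.completions.create(
        model="gpt-4o", messages=messages, tools=tools,
    ).choices[0].message

    # The model requests searches, we run them, and it composes its
    # answer from the results we hand back -- not from memory.
    while reply.tool_calls:
        messages.append(reply)
        for call in reply.tool_calls:
            args = json.loads(call.function.arguments)
            messages.append({"role": "tool", "tool_call_id": call.id,
                             "content": search_web(args["query"])})
        reply = client.chat.completions.create(
            model="gpt-4o", messages=messages, tools=tools,
        ).choices[0].message

    print(reply.content)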

Stop expecting LLMs to -know- things. Treating LLMs like all-knowing oracles is exactly what separates those who can't get anything productive out of them from those who are finding huge productivity gains with them.

saurik | 6 months ago

I am getting huge productivity gains from using models, and I mostly use them as "oracles" (though I am extremely careful about how I handle hallucination, of course): I'd even say their true power--just like a human's--comes from having an ungodly amount of knowledge, not merely intelligence. If I just wanted something intelligent, I already had humans!... but merely intelligent humans, even when given months of time to screw around doing Google searches, fail to make the insights that someone--whether human or model--who actually knows stuff can throw around like it is nothing. I am actually able to use ChatGPT 4.5 as not just an employee, not even just as a coworker, but at times as a mentor or senior advisor: I can tell it what I am trying to do, and it helps me by applying advanced mathematical insights or suggesting things I could use. Using an LLM as a glorified Google-it-for-me monkey seems like such a waste of potential.

pxc | 6 months ago

> I am actually able to use ChatGPT 4.5 as not just an employee, not even just as a coworker, but at times as a mentor or senior advisor: I can tell it what I am trying to do, and it helps me by applying advanced mathematical insights or suggesting things I could use.

You can still do that sort of thing, but just have it perform searches whenever it has to deal with a matter of fact. Just because it's trained for tool use and equipped with search tools doesn't mean you have to change the kinds of things you ask it.
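
One plausible way to wire that up, reusing the hypothetical client, tools, and search_web() from the sketch above: pin the behavior in the system prompt so matters of fact trigger a search while open-ended questions are answered directly.

    # Assumption: same client, tools, and search_web() as the earlier sketch.
    messages = [
        {"role": "system", "content": (
            "Whenever your answer turns on a specific fact -- an event, a "
            "date, a release, a number -- call search_web and rely on what "
            "it returns. For advice or brainstorming, answer directly."
        )},
        {"role": "user", "content":
            "What were the headline features in the latest Postgres release?"},
    ]
    reply = client.chat.completions.create(
        model="gpt-4o", messages=messages, tools=tools,
    ).choices[0].message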

diegocg | 6 months ago

The problem is that even when you give them context, they just hallucinate at another level. I have tried exactly that example of asking about events in my area; they are absolutely awful at it.

dankwizard | 6 months ago

I love how, even with this cutting-edge tech, people still dress up and pretend to be experts. Pleasure to meet you, pocketarc - Senior AI Gamechanger, 2024-2025 (Current)

Salgat | 6 months ago

It's fine to expect it not to know things, but the complaint is that it gives zero indication when it's just making up nonsense, which is the biggest issue with LLMs. They do the same thing when writing code.

dust42 | 6 months ago

Exactly this. And that is why I like this question: the ratio of correct details to nonsense gives a good idea of the quality of the model.

CrackerNews | 6 months ago

LLMs should at least -know- the semantics of the text they analyze, as opposed to just the syntax.

orbital-decay | 6 months ago

To be coherent and useful in general-purpose scenarios, an LLM absolutely has to be large enough and know a lot, even if you aren't using it as an oracle.