
zugi | 1 month ago

> I'm sure you could get an LLM to create a plausible sounding justification for every decision.

That's a great point: funny, sad, and true.

My AI class predated LLMs. The implicit assumption was that the explanation had to be correct and verifiable, which may not be achievable with LLMs.

storystarling | 1 month ago

It seems solvable if you treat it as an architecture problem. I've been using LangGraph to force the model to extract and cite evidence before it runs any scoring logic. That creates an audit trail based on the flow rather than just opaque model outputs.
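
A minimal sketch of what that might look like, assuming LangGraph's StateGraph API. The node bodies are placeholders standing in for real LLM calls, and the state fields (document, evidence, score) are invented for illustration:

    from typing import List, TypedDict

    from langgraph.graph import END, StateGraph

    class ReviewState(TypedDict):
        document: str
        evidence: List[str]
        score: float

    def extract_evidence(state: ReviewState) -> dict:
        # Stand-in for an LLM call prompted to return verbatim quotes only.
        candidates = [ln.strip() for ln in state["document"].splitlines() if ln.strip()]
        # Keep only quotes that literally appear in the source; this check is
        # what makes the trail auditable instead of taking the model's word for it.
        return {"evidence": [q for q in candidates if q in state["document"]]}

    def run_scoring(state: ReviewState) -> dict:
        # Scoring only sees the cited evidence, never the raw document, so
        # every point in the score traces back to a stored quote.
        return {"score": float(len(state["evidence"]))}

    graph = StateGraph(ReviewState)
    graph.add_node("extract_evidence", extract_evidence)
    graph.add_node("run_scoring", run_scoring)
    graph.set_entry_point("extract_evidence")
    graph.add_edge("extract_evidence", "run_scoring")
    graph.add_edge("run_scoring", END)
    app = graph.compile()

    result = app.invoke({"document": "Q3 revenue grew 12%.\nHeadcount was flat.",
                         "evidence": [], "score": 0.0})
    print(result["evidence"], result["score"])

The toy filtering isn't the point; the point is that the graph won't reach the scoring node until the evidence node has written something checkable into state.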

fwip | 1 month ago

It's not. If you actually look at any chain-of-thought stuff long enough, you'll see instances where what it delivers directly contradicts the "thoughts."

If your AI is *ist in effect but told not to be, the bias will just manifest as it highlighting negative things more often for the people it has bad vibes for. Just like people do.