top | item 47125185

kstrauser | 6 days ago

Sure, people die from regular programming. Mistakes happen. That’s not good or ok, but it seems unavoidable given today’s technologies and tools.

However, I think that’s in a different category than giving life advice. How is an LLM to know that God forgives Joe for stealing a loaf of bread to feed his children, but doesn’t forgive Tom for doing the same thing because Tom had money but was saving up to buy cooler shoes and didn’t want to spend it? A priest’s advice might be “Joe, don’t make a habit of it, but you didn’t hurt anyone and your children were hungry. Tom, would you freaking knock it off already?” An LLM might reply “that’s a wonderful idea!” to both.

Again, I’m firmly not anti-AI. I use it every day. I absolutely do not want to hear its advice on how to navigate the complexities of life as a human being.

ben_w | 5 days ago

Yeah, no. What you described here and what I described before are not programming errors, they're data errors. An A* route finder isn't going to know a bridge is out unless it is told, an LLM won't know that case history unless it is told.
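The bridge analogy can be made concrete. A minimal sketch (the graph, place names, and costs are all hypothetical; for brevity this uses Dijkstra's algorithm, i.e. A* with a zero heuristic): the planner happily routes over the bridge until the edge is actually removed from its data.

```python
import heapq

def shortest_path(graph, start, goal):
    # Dijkstra's search (A* with a zero heuristic) over an adjacency dict.
    # The planner only knows what the graph data tells it.
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, step in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(frontier, (cost + step, neighbor, path + [neighbor]))
    return None

# Hypothetical road network: the direct route crosses a bridge.
roads = {
    "home": {"bridge": 1, "detour": 4},
    "bridge": {"office": 1},
    "detour": {"office": 4},
}

# The bridge may be washed out in reality, but the data says it's there,
# so the route goes over it.
print(shortest_path(roads, "home", "office"))
# (2, ['home', 'bridge', 'office'])

# Only once someone updates the data does the planner avoid the bridge.
del roads["home"]["bridge"]
print(shortest_path(roads, "home", "office"))
# (8, ['home', 'detour', 'office'])
```

The failure isn't in the algorithm; it's in what the algorithm was told, which is the distinction between a programming error and a data error.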

I'd say the real problem with using an LLM for this kind of thing is not what the LLM writes, but that the act of writing helps the human understand their community, so when it is skipped that understanding remains absent. It's like cheating on your homework.