bjornstar | 1 year ago

Recently I had an experience where the chatbot gave completely wrong subway instructions over and over again. It confidently told me that it was accurate and it was my "trusty companion". It had no idea what was wrong with the answers and kept prompting me to give it the correct answers.

This was in contrast to when I asked it who had access to my chat logs; it would only tell me to read the privacy policy. When I asked it for specifics from the privacy policy, it refused rather than risk giving wrong answers:

"When it comes to company policies, especially related to privacy and data handling, it's crucial to provide accurate information because these topics are very sensitive and important. I want to ensure you have the most reliable information, and the best way to do that is to refer you directly to the official privacy statement."

It's clear what the priority is for these chatbots: get the public to train them and protect the corporations that run them.

rsynnott | 1 year ago

> Recently I had an experience where the chatbot gave completely wrong subway instructions over and over again.

So, slightly offtopic, but I just... don't understand why anyone would use it for this. This is a solved problem. The operator likely has a planner app. Google and Apple Maps have planners which support most systems. Transit and various other third-party apps have planners. I think even OpenStreetMap may have one!

Quality can vary (I find that Google Maps in particular feels like the people who worked on the trip planner had never in fact used public transport; it's very prone to suggesting absurdly complex routes involving three or four transfers where "walk for ten minutes and no transfers" is viable), but this feels like something an LLM is likely to be _particularly bad at_, unless it just calls Google Maps or whatever, in which case why bother?
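To illustrate the "just calls Google Maps" point: here is a minimal sketch of a chatbot delegating transit questions to a real planner instead of generating routes itself. It assumes an OpenAI-style tool-calling setup and the Google Maps Directions web service; the function name and the wiring around it are hypothetical, not anything the original commenters described.

    import os
    import requests

    def transit_directions(origin: str, destination: str) -> dict:
        # Ask a real trip planner (Google Maps Directions API, transit mode)
        # instead of letting the model invent a route.
        resp = requests.get(
            "https://maps.googleapis.com/maps/api/directions/json",
            params={
                "origin": origin,
                "destination": destination,
                "mode": "transit",
                "key": os.environ["GOOGLE_MAPS_API_KEY"],
            },
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()

    # A chatbot would declare transit_directions as a tool, call it whenever a
    # routing question comes in, and relay the planner's answer rather than
    # generating directions from its own weights.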