LLM accuracy is so bad, especially in summarization, that I now have to fact-check Google search results because they've been repeatedly wrong about things like the hours restaurants are open.
There's a huge difference between summarizing a stable document that was part of the training data or the prompt, and knowing ephemeral facts like restaurant hours.
Technically a true statement. But if you're offering it to imply that the GP bears responsibility for knowing which documents were in the training data and which weren't, I have to quibble with you.
Knowing its shortcomings should be the responsibility of the search app, which is currently designed to give screen real estate to the wrong summary of an ephemeral fact. Otherwise, users will start to lose trust.