You mean the cook who will in the same unendingly patient and helpful manner sometimes confidently suggest putting glue into your dishes and serving your guests rocks for lunch?
What bad product? I'm not as categorical as OP, but acting like this is a solved problem is weird. LLMs generating nonsensical output isn't a one-off blip on the radar in one product that was quickly patched out; it's nigh unavoidable given their probabilistic nature, likely until there's another breakthrough in the field. As far as I know, there's no LLM that will universally refuse to output something it doesn't "know" - instead it produces a response that feels correct but is gibberish. Nor is there one free of rare slip-ups even in familiar territory.
There's a difference between recent frontier coding LLMs and Google doing quick-and-cheap RAG on web results. It's good to understand that difference before posting cheap shots like this.
pegasus|2 months ago
square_usual|2 months ago
tavavex|2 months ago
viraptor|2 months ago