(no title)
mooxie | 2 years ago
This keeps being my argument when people at work daydream about time and cost savings by offloading non-critical business functions to AI. I say, "Great, so it can produce 1000x more work than a person. But then what army of people are we planning to use to check those outputs?"
I'm super-impressed with the current crop of language models for their ability to so accurately simulate correctness, but their inability to recognize what they don't know (because, in fact, they don't 'know' any of it in the sense that we do) makes them like very productive but completely untrustworthy employees. A junior dev who monopolizes his mentor's time through inconsistent performance is not a good hire.
pixl97 | 2 years ago
Have you ever had a dumb/wrong thought in your head? I'm going to go ahead and answer yes for you: you do, all the time. But you don't (hopefully) verbalize that stream of consciousness to the people around you. In general, you think of something and then reflect on whether it is true or false.
This is not what LLMs do; they pitch back the first 'thought' they have, "correct" or not. This is why things like CoT/ToT (chain-of-thought and tree-of-thought prompting) greatly increase the accuracy of LLM output. The problem? They require at least an order of magnitude more processing per answer, and with GPU time already in high demand and expensive, you don't see much of it in practice.
Betting on LLMs commonly being wrong is not a safe bet at this point.
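Purely as a toy sketch of the "first thought vs. reflected answer" distinction described above: the stub functions below stand in for a model (nothing here calls a real LLM), with one deliberately wrong draft answer so the verification pass has something to catch.

```python
# Toy illustration of draft-then-reflect, the idea behind CoT-style prompting.
# first_thought() mimics a model's immediate completion; reflect() mimics a
# second, more expensive pass that checks the draft before answering.

def first_thought(question: str) -> int:
    # A deliberately unreliable "stream of consciousness" answer.
    drafts = {"17 * 24": 398, "2 + 2": 4}  # 398 is wrong on purpose
    return drafts[question]

def reflect(question: str, draft: int) -> int:
    # Verification pass: independently recompute, keep the draft only
    # if it checks out. This is the step a single-shot answer skips.
    a, op, b = question.split()
    checked = int(a) * int(b) if op == "*" else int(a) + int(b)
    return draft if draft == checked else checked

q = "17 * 24"
draft = first_thought(q)   # the wrong first thought: 398
final = reflect(q, draft)  # corrected on reflection: 408
```

The point is the cost asymmetry: the reflective pass does strictly more work than the draft, which is why the accuracy gain from CoT/ToT comes with the extra compute the comment above describes.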
giantrobot | 2 years ago
It's like reviewing an overconfident junior developer's code, except you can't learn their particular weaknesses. If a developer is bad about memory leaks, you know to check their every PR for memory leaks. An LLM won't necessarily produce the same types of errors given similar prompts, or even the same prompt invoked at different times.