top | item 35363911


nicksrose7224 | 2 years ago

I see this argument all the time. Why are you assuming that this technology just "stops" at the LLM level?

If I'm OpenAI or Google or whatever, I'm definitely going to run extra classifiers on top of the LLM's output to assess & improve the accuracy of results.

You can layer on all kinds of interesting models to make a thing that's generally useful & also truthful.
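One way to read this suggestion: sample several candidate answers, run a secondary truthfulness classifier over each, and abstain when nothing scores well. A minimal sketch in Python, where `llm_generate` and `truthfulness_score` are hypothetical stubs standing in for real model calls, not actual APIs:

```python
def llm_generate(prompt, n=3):
    """Hypothetical stub: a real system would sample n candidate
    answers from an LLM API for the given prompt."""
    return [f"candidate answer {i} to: {prompt}" for i in range(n)]

def truthfulness_score(answer):
    """Hypothetical stub: a real system would run a trained
    classifier (perhaps over retrieved evidence too) and return a
    confidence in [0, 1]. Here: a deterministic placeholder."""
    return 1.0 / (1.0 + 0.01 * len(answer))

def answer_with_verification(prompt, threshold=0.5):
    # Sample several candidates, score each with the secondary
    # model, and keep the best one only if it clears the threshold.
    candidates = llm_generate(prompt)
    best = max(candidates, key=truthfulness_score)
    if truthfulness_score(best) >= threshold:
        return best
    return None  # abstain rather than emit a low-confidence answer
```

The interesting design choice is the abstain branch: a layered system can decline to answer, which a bare LLM never does on its own.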

lanstin | 2 years ago

Your last word is a bit of a weasel word. There is no currently known way to get at truth except to try something out and see what happens. Even the marvelous GPS routing takes feedback from people driving down the routes and succeeding or failing. Add as many layers as you like, but without some equivalent of arms and legs, it won't be able to be sure about truth.

The nice thing about the easy to bamboozle GPT4 is that it can’t hurt anything, so its flaws are safe. Giving it these arms and legs is where the risks increase, even as the reward increases.