poyu|1 year ago
This exact scenario is why our company is so afraid to put AI into production without making it completely clear to users that the results could be wrong. But if there’s even a chance that it could be wrong, why are we offering it to the user at all? How much due diligence does the user need to do? Do the benefits outweigh the cons?
joegibbs|1 year ago
That’s where the problem is - the model is citing a Reddit post where someone recommended it as a joke, and then Business Insider, which picked the joke up from the original story: classic citogenesis.
Pure LLMs (no RAG) don’t make this mistake - Claude will tell you it’s a bad idea and that it will taste bad.
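For context, here is a minimal sketch of the RAG failure mode being described - a toy keyword-overlap retriever over a hypothetical document store (all document text and function names are illustrative, not the actual system): the retriever surfaces the joke post because it matches the query, and the prompt passes it to the model as trusted context.

```python
# Toy RAG sketch: retrieval can surface a joke post, and the prompt
# template presents it to the model as authoritative context.
documents = [
    "Reddit post (joke): add non-toxic glue to pizza sauce to make cheese stick.",
    "Food blog: let pizza rest five minutes so the cheese sets.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Stuff the top-ranked documents into the prompt as context."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

# The joke post shares more words with the query than the sensible one,
# so it wins the ranking and lands in the prompt.
prompt = build_prompt("how to make cheese stick to pizza sauce", documents)
```

A pure LLM answers from its training distribution, where "glue on pizza" is overwhelmingly associated with jokes; the RAG prompt instead instructs the model to trust whatever the retriever found.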
moffkalast|1 year ago
Corporations push code written by fresh junior devs into production every day, breaking stuff that could cost them tens of thousands. Do they care? On paper, very much; in practice, they dgaf.