gjadi | 1 month ago
My issue is, LLMs have fooled me more than a couple of times with stupid but difficult-to-notice bugs. At this point, I have a hard time trusting them (but I keep trying with some stuff).
If I asked someone for something and found out several times that the individual was failing, then I'd just stop working with them.
Edit: and to avoid anthropomorphizing LLMs too much, the moment I notice a tool I use has a bug bad enough to lose data, for example, I think really hard about whether to use it again.