(no title)
0x500x79 | 7 months ago
We asked them: "Where is xyz code?" It didn't exist; it was a hallucination. We asked them: "Did you validate abc use cases?" No, they did not.
So we had a PM push a narrative to executives that this feature was simple, that he could do it with AI-generated code, and it didn't solve even 5% of the use cases that would need to be solved in order to ship this feature.
This is the state of things right now: all talk, little results, and other non-technical people being fed the same bullshit from multiple angles.
AdieuToLogic|7 months ago
This is likely because LLMs solve for producing the document that "best" matches the prompt, via statistical consensus over their training data set.
> We asked them: "Where is xyz code?" It didn't exist; it was a hallucination. We asked them: "Did you validate abc use cases?" No, they did not.
So many people mistake the certainty implicit in commercial LLM responses for correctness, largely because of how they interpret similarly confident content from actual people, especially when that content supports their own position. It's a confluence of argument from authority[0] and subjective validation[1].
0 - https://en.wikipedia.org/wiki/Argument_from_authority
1 - https://en.wikipedia.org/wiki/Subjective_validation
escapedmoose|7 months ago
I figured the issue out the old-fashioned way, but it was a little annoying that I had to waste extra time deciphering the hallucinations, and then explaining why they were hallucinations.