kianN | 2 months ago
LLMs consistently misrepresent information in this exact same way in more critical applications. Because they are often employed on datasets that engineers, and potentially end users, are not deeply familiar with, the results often seem exceptional.
Disclaimer via my HN wrapped: “The Anti LLM Manifesto You will write a 5,000-word blog post on why a single Bayesian prior is more 'sentient' than GPT-6, and it will be ignored because the summary was generated by a 3B parameter model.”