
dazilcher | 1 year ago

The most frequent output does not imply correctness; LLMs are often confidently wrong.

They can't even perform basic arithmetic (which is not surprising, since they operate at the syntactic level, oblivious to any semantic rules), yet people seem to think offloading more complex tasks with strict correctness requirements to them is a good idea. Boggles the mind tbh. A minimal sketch of the point follows.
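If a task has a machine-checkable answer, you can verify the model's output in code instead of trusting it. In this sketch, ask_model is a hypothetical stand-in (an assumption, not any real API); it fabricates a near-miss product to mimic the "confidently wrong" failure mode:

    import random
    import re

    def ask_model(prompt: str) -> str:
        # Hypothetical stand-in for an LLM API call -- an assumption for this
        # sketch, not a real client. It returns a near-miss product to mimic
        # a plausible-looking but wrong completion.
        a, b = (int(n) for n in re.findall(r"\d+", prompt))
        return str(a * b + random.randint(1, 9))  # right magnitude, wrong digits

    if __name__ == "__main__":
        # External verification: recompute the ground truth in code and score
        # the model's claims against it, rather than trusting frequent output.
        trials = [(random.randint(1000, 9999), random.randint(1000, 9999))
                  for _ in range(100)]
        correct = sum(int(ask_model(f"What is {a} times {b}?")) == a * b
                      for a, b in trials)
        print(f"model correct on {correct}/{len(trials)} products")

With the fake model above the score is 0/100 by construction; the point is only that a deterministic checker, not the model's confidence or output frequency, is what establishes correctness.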
