
capitalsigma | 3 years ago

These models are very impressive, but the issue (imo) is that lay people without an ML background see how plausibly human the output is and infer that there must be some plausibly human intelligence behind it, with some plausibly human learning mechanism -- if your new hire at work made the kinds of mistakes that ChatGPT does, you'd expect them to be up to speed in a couple of weeks. The problem is that ChatGPT really isn't human-like, and removing inaccurate output isn't just a matter of correcting it a few times -- its learning process is truly different, and it doesn't understand things the way we do.
