amputect | 2 years ago
Obviously, people find some value in some output of some LLMs. I've enjoyed the coding autocomplete stuff we have at work; it's helpful and fun. But "it's not qualified to answer my questions" is still true, even if it occasionally does something interesting or useful anyway.
*- this is a complicated term with a lot of baggage, but fortunately for the length of this comment, I don't think that any sense of it applies here. An LLM doesn't understand its training set any more than the mnemonic "ETA ONIS"** understands the English language.
**- a vaguely name-shaped presentation of the most common letters in the English language, in descending order. Useful if you need to remember those for some reason, like breaking a substitution cypher.
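(A quick sketch of the frequency-analysis use that mnemonic serves: count the letters in a ciphertext and line them up against English frequency order. The helper name and sample ciphertext below are invented for illustration, not from the comment above.)

    from collections import Counter

    # One common ordering of English letters by descending frequency.
    ENGLISH_FREQ_ORDER = "etaoinshrdlcumwfgypbvkjxqz"

    def guess_substitution_mapping(ciphertext):
        """First-pass guess at a substitution-cipher key: rank ciphertext
        letters by frequency and pair them with English frequency order."""
        counts = Counter(c for c in ciphertext.lower() if c.isalpha())
        ranked = [c for c, _ in counts.most_common()]
        # Pad with letters that never appeared so the mapping covers a-z.
        ranked += [c for c in "abcdefghijklmnopqrstuvwxyz" if c not in ranked]
        return dict(zip(ranked, ENGLISH_FREQ_ORDER))

    # On a sample this short the guess will be rough; real use needs a
    # longer ciphertext plus manual refinement of the mapping.
    print(guess_substitution_mapping("Xlmw mw er ibeqtpi qiwweki"))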
CamperBob2 | 2 years ago
Behavior indistinguishable from understanding is understanding. Sorry, but that's how it's going to turn out to work.
zlg_codes | 2 years ago
Why are people so eager to believe that electric rocks can think?
ethbr1 | 2 years ago
LLMs encode some level of understanding of their training set.
Whether that's sufficient for a specific purpose, or sufficiently comprehensive to generate side effects, is an open question.
* Caveat: with regard to introspection, this also assumes the model isn't specifically guarded against it and isn't opaquely lying.
ekianjo | 2 years ago
Exactly like humans don't understand how their brains work.
zlg_codes | 2 years ago
Unlike LLMs, which are built by humans and have literal source code and manuals and SOPs and shit. Their very "body" is a well-documented digital machine. An LLM trying to figure itself out has MUCH less trouble than a human figuring itself out.