cmenge|7 months ago
Long before LLMs, I would talk about classes / functions / modules like "it then does this, decides the epsilon is too low, chops it up and adds it to the list".
The difference, I guess, was that it was only to a technical crowd, and nobody would mistake this for anything it wasn't. Everybody knew that "it" didn't "decide" anything.
With AI being so mainstream, and the math being much more elusive than a simple if..then, I guess it's just too easy to take this simple speaking convention at face value.
EDIT: some clarifications / wording
flir|7 months ago
Maybe it's cog-nition (emphasis on the cog).
LeonardoTolstoy|7 months ago
I think the above poster gets a little distracted by suggesting the models are creative which itself is disputed. Perhaps a better term, like above, would be to just use "model". They are models after all. We don't make up a new portmanteau for submarines. They float, or drive, or submarine around.
So maybe an LLM doesn't "write" a poem, but instead "models a poem", which might indeed take away a little of the sketchy magic and fake humanness they tend to be imbued with.
JimDabell|7 months ago
Why?
A plane is not a fly and does not stay aloft like a fly, yet we describe what it does as flying despite the fact that it does not flap its wings. What are the downsides we encounter that are caused by using the word “fly” to describe a plane travelling through the air?
Atlas667|7 months ago
All imitations require analogous mechanisms, but that is the extent of their similarities: syntax. Thinking requires networks of billions of neurons. And not only that, but words can never exist on a plane because they do not belong to a plane. Words can only be stored on a plane; they are not useful on a plane.
Because of this LLMs have the potential to discover new aspects and implications of language that will be rarely useful to us because language is not useful within a computer, it is useful in the world.
It's like seeing loosely related patterns in a picture and continuing to derive from those patterns, which are real, but loosely related.
LLMs are not intelligence, but it's fine that we use that word to describe them.
intended|7 months ago
The rest of the time it’s generating content.
seanhunter|7 months ago
When we need to speak precisely about a model and how it works, we have a formal language (mathematics) which allows us to be absolutely specific. When we need to empirically observe how the model behaves, we have a completely precise method of doing this (running an eval).
Any other time, we use language in a purposefully intuitive and imprecise way, and that is a deliberate tradeoff which sacrifices precision for expressiveness.
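The "running an eval" step the comment mentions can be sketched in a few lines. Everything here is hypothetical (the task set, the `model` stand-in, and exact-match scoring); a real eval would call an actual LLM and use a metric suited to the task:

```python
# Minimal sketch of an eval harness: score a model's outputs against
# expected answers with exact-match accuracy. `model` is a toy stand-in,
# not a real LLM call.

def model(prompt: str) -> str:
    # Hypothetical stand-in that "answers" arithmetic prompts.
    return str(eval(prompt))  # toy only; never eval untrusted input

def run_eval(cases):
    """Return the fraction of cases the model answers exactly."""
    correct = sum(1 for prompt, expected in cases if model(prompt) == expected)
    return correct / len(cases)

cases = [("1+1", "2"), ("2*3", "6"), ("10-4", "6")]
print(run_eval(cases))  # exact-match accuracy over the task set, here 1.0
```

The point being made is that a score like this is a precise empirical observation of behavior, independent of whatever intuitive language we use to talk about the model.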
delusional|7 months ago
I personally find that description perfect. If you want it shorter you could say that an LLM generates.
loxs|7 months ago
It's much more interesting when we are talking about... say... an ant... Does it "decide"? That I have no idea as it's probably somewhere in between, neither a sentient decision, nor a mathematical one.
stoneyhrm1|7 months ago
I don't think LLMs are sentient or any bullshit like that, but I do think people are too quick to write them off before really thinking about how an nn 'knows things' similarly to how a human 'knows' things: it is trained and reacts to inputs and outputs. The body is just far more complex.
grey-area|7 months ago
These are very different and knowledge is not intelligence.
HelloUsername|7 months ago
This made me think: when will we see LLMs do the same, rereading what they just sent and editing and correcting their output again :P