woeirua | 19 days ago
I can't imagine why someone would want to openly advertise that they're so closed-minded. Everything after this paragraph is just anti-LLM ranting.
km3r|19 days ago
Like look at our brains. We know decently well how a single neuron works. We can simulate a single one with "just a computer program". But clearly with enough layers some form of complexity can emerge, and at some level that complexity becomes intelligence.
jbotz|19 days ago
Not GP, but... the author said explicitly "if you believe X you should stop reading". So I did.
The X here is "that the human mind can be reduced to token regurgitation". I don't believe that exactly, and I don't believe that LLMs are conscious, but I do believe that what the human mind does when it "generates text" (i.e. writes essays, programs, etc.) may not be all that different from what an LLM does. And that means that most of humanity's creations are also "plagiarism" in the same sense the author uses here, which makes his argument meaningless. You can't escape the philosophical discussion he says he's not interested in if you want to talk about ethics.
Edit: I'd like to add that I believe that this also ties in to the heart of the philosophy of Open Source and Open Science... if we acknowledge that our creative output is 1% creative spark and 99% standing on the shoulders of Giants, then "openness" is a fundamental good, and "intellectual property" is at best a somewhat distasteful necessity that should be as limited as possible and at worst is outright theft, the real plagiarism.
Ygg2|19 days ago
Because humans often anthropomorphize completely inert things? E.g. a coffee machine or a bomb disposal robot.
So far, whatever behavior LLMs have shown is basically fueled by sci-fi stories of how a robot should behave in such and such a situation.
acjohnson55|19 days ago
But I agree that it is self-limiting not to bother considering the ways that LLM inference and human thinking might be similar (or not).
To me, they seem to do a pretty reasonable emulation of single-threaded thinking.
palmotea|19 days ago
It's not being closed-minded. It's not wanting to get sea-lioned to death by obnoxious people.
PaulDavisThe1st|19 days ago
Here's what sea-lioned means to me:
I say something.
You accuse me of sea-lioning.
I have two choices: attempt to refute the accusation, which itself looks like sea-lioning, or let your accusation stand unchallenged, which appears to most people as confirmation of some kind that I was sea-lioning.
It is a nuclear weapon launched at discussion. It isn't that it doesn't describe a phenomenon that actually happens in the world. However, it is an accusation to which there is never any way to respond that doesn't confirm it, whether it was true or not.
It is also absolutely rooted in what appears to me to be a generational distinction: it seems that a bunch of younger people consider it a right to speak "in public" (i.e. in any kind of online context where people who do not know you can read what you wrote) and expect to avoid a certain kind of response. Should that response arise, various things will be said about the responder, including accusations of "sea-lioning".
My experience is that people who were online in the 80s and 90s find this expectation somewhere between humorous and ridiculous, and that people who went online somewhere after about 2005 do not.
Technologically, it seems to reflect a desire among many younger people for "private-public spaces". In the absence of any such actual systems really existing (at least from their POV), they believe they ought to be able to use very non-private public spaces (Facebook, Instagram, and everything else under the rubric of "social media") as they wish to, rather than as the systems were designed. They are communicating with their friends, and the fact that their conversations are visible is not significant. Thus, when a random stranger responds to their not-private-public remarks ... sea-lioning.
We used to have more systems that were sort-of-private-public spaces - mailing lists being the most obvious. I sympathize with a generation that clearly wants more of these sorts of spaces to communicate with friends, but I am not sympathetic to their insistence that corporate creations that are not just very-much-non-private-public spaces but also essentially revenue generators should work the way they want them to.
wolrah|19 days ago
I would say the exact same about you. Rejecting an absolutely accurate and factual statement like that as closed-minded strikes me as no different from people who insist that medical science is closed-minded about crystals and magnets.
I can't imagine why someone would want to openly advertise they think LLMs are actual intelligence, unless they were in a position to benefit financially from the LLM hype train of course.
PaulDavisThe1st|19 days ago
I am not ready to say that "LLMs are actual intelligence", and most of their publicly visible uses seem to me to be somewhere between questionable and ridiculous.
Nevertheless, I retain a keen ... shall we call it anti-skepticism? ... that LLMs, by modelling language, may have accidentally modelled/created a much deeper understanding of the world than was ever anticipated.
I do not want LLMs to "succeed"; I think a society in which they are common is a worse society than the one we lived in five years ago (as bad as that was). But my curiosity is not abated by such feelings.