top | item 40766059


madsbuch | 1 year ago

There is an immensely strong dogma that, to my best knowledge, is not founded in any science or philosophy:

        First we must lay down certain axioms (smart word for the common sense/ground rules we all agree upon and accept as true).
        
        One of such would be the fact that currently computers do not really understand words. ...
The author is at least honest about his assumptions, which I can appreciate. Most other people just have it as a latent thing.

For articles like this to be interesting, this cannot be accepted as an axiom. Its justification is what's interesting.



mensetmanusman | 1 year ago

It’s a reasonable axiom, because for many people understanding involves qualia. If you believe LLMs have qualia, you also believe a very large Excel sheet with the right numbers has an experience of consciousness and feels pain or something when the document is closed.

madsbuch | 1 year ago

As I wrote, I appreciate that the author wrote it out as they did. It might be reasonable in the context of the article. But fixing it as an axiom just makes the discussion boring (for me).

> If you believe LLM have qualia, you also believe a ...

You use the word believe twice here. I am actively not talking about beliefs.

I just realised that the author indeed gave themselves an out:

> ... currently computers do not really understand words.

The author might believe that future computers can understand words. This is interesting. The questions being: _what_ needs to be the case in order for them to understand? Could that be an emergent feature of current architectures? That would also contradict large parts of the article.

shkkmo | 1 year ago

Amusingly, the author does not appear to fully understand the meaning of "axiom".

While in practice axioms are often statements that we all agree on and accept as true, that isn't necessarily the case and isn't the core of its meaning.

Axioms are statements we postulate as true, without providing an argument for their truth, for the purposes of making an argument.

In this case, the assertion isn't really used as part of an argument, but to bootstrap an explanation of how words are represented in LLMs.
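(For readers unfamiliar with that representation: what gets "explained" here is roughly a token-to-vector lookup. A toy sketch below; the vocabulary, dimensionality, and vector values are entirely made up for illustration, not anyone's actual model.)

```python
# Toy sketch of how words are represented in an LLM: text is split
# into tokens, each token maps to an integer id, and each id indexes
# a row in a learned embedding matrix. Everything here is invented
# for illustration; real models learn these vectors from data.
import random

vocab = {"the": 0, "cat": 1, "sat": 2}  # token -> integer id
dim = 4                                 # embedding dimensionality
random.seed(0)
embeddings = [[random.uniform(-1, 1) for _ in range(dim)]
              for _ in vocab]           # one vector per token

def embed(sentence):
    """Map a whitespace-tokenised sentence to its token vectors."""
    return [embeddings[vocab[tok]] for tok in sentence.split()]

vectors = embed("the cat sat")
print(len(vectors), len(vectors[0]))    # 3 tokens, 4 dims each
```

The point of the representation is purely geometric: the model manipulates these vectors, and whether that constitutes "understanding" is exactly what the thread is arguing about.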

Edit: I find this so amusing because it is an example of learning a word without understanding it.

LtWorf | 1 year ago

> Axioms are something we postulate as true, without providing an argument for its truth, for the purposes of making an argument.

Uhm… no?

They are literally things that can't be proven but allow us to prove a lot of other things.

matwood | 1 year ago

Yeah, for axioms like the above my next question is: define 'understand'. Does my dog understand words when it completes specific actions because of what I say? I'm also learning a new language; do I understand a word when I attach a meaning (often a bunch of other words) to it? Turns out computers can do this pretty well.

southernplaces7 | 1 year ago

Oh please, enough with the semantics. It reminds me of a postmodernist asking me to define what "is" is. The LLM does not understand words in the way a human understands them, and that's obvious. Even the creators of LLMs implicitly take this as a given and would rarely openly say they think otherwise, no matter how strong the urge to create a more interesting narrative.

Yes, we attach meaning to certain words based on previous experience, but we do so in the context of a conscious awareness of the world around us and our experiences within it. An LLM doesn't even have a notion of self, much less a mechanism for attaching meaning to words and phrases based on conscious reasoning.

Computers can imitate understanding "pretty well" but they have nothing resembling a pretty good or bad or any kind of notion of comprehension about what they're saying.

logicallee | 1 year ago

It's the most incredible coincidence. Three million paying OpenAI customers spend $20 per month (compare: Netflix standard: $15.49/month) thinking they're chatting with something in natural language that actually understands what they're saying, but it's just statistics and they're only getting high-probability responses without any understanding behind it! Can you imagine spending a full year showing up to talk to a brick wall that definitely doesn't understand a word you say? What are the chances of three million people doing that! It's the biggest fraud since Theranos!! We should make this illegal! OpenAI should put at the bottom of every one of the millions of responses it sends each day: "ChatGPT does not actually understand words. When it appears to show understanding, it's just a coincidence."

You have kids talking to this thing asking it to teach them stuff without knowing that it doesn't understand shit! "How did you become a doctor?" "I was scammed. I asked ChatGPT to teach me how to make a Dr Pepper at home and based on simple keyword matching it got me into medical school (based on the word doctor), and when I protested that I just wanted to make a Dr Pepper it taught me how to make salsa (based on the word pepper)! Next thing you know I'm in medical school and it's answering all my organic chemistry questions, my grades are good, the salsa is delicious, but dammit, I still can't make my own Dr Pepper. This thing is useless!"

/s

shkkmo | 1 year ago

Maps are useful, but they don't understand the geography they describe. LLMs are maps of semantic structures and as such, can absolutely be useful without having an understanding of that which they map.

If LLMs were capable of understanding, they wouldn't be so easy to trick on novel problems.

madsbuch | 1 year ago

I am not sure where this comment fits as an answer to my comment.

Firstly, do understand that I am not saying that LLMs (or ChatGPT) understand.

I am merely saying that we don't have any sound frameworks to assess it.

For the rest of your rant: I definitely see that you don't derive any value from ChatGPT. As such, I really hope you are not paying for it, or wasting your time on it. What other people decide to spend their money on is really their business. I don't think any normally functioning person has the expectation that a real person is answering them when they use ChatGPT, so it is hardly a fraud.