madsbuch|1 year ago
> First we must lay down certain axioms (smart word for the common sense/ground rules we all agree upon and accept as true).
> One of such would be the fact that currently computers do not really understand words. ...
The author is at least honest about his assumptions, which I can appreciate. Most other people just have it as a latent thing. For articles like this to be interesting, this cannot be accepted as an axiom. Its justification is what's interesting.
mensetmanusman|1 year ago
madsbuch|1 year ago
> If you believe LLM have qualia, you also believe a ...
You use the word "believe" twice here. I am actively not talking about beliefs.
I just realised that the author indeed gave themselves an out:
> ... currently computers do not really understand words.
The author might believe that future computers can understand words. This is interesting. The questions being: _what_ needs to be in place in order for them to understand? Could that be an emergent feature of current architectures? That would also contradict large parts of the article.
shkkmo|1 year ago
While in practice axioms are often statements that we all agree on and accept as true, that isn't necessarily the case and isn't the core of the word's meaning.
Axioms are statements we postulate as true, without providing an argument for their truth, for the purpose of making a further argument.
In this case, the assertion isn't really used as part of an argument, but to bootstrap an explanation of how words are represented in LLMs.
Edit: I find this so amusing because it is an example of learning a word without understanding it.
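To make that representation concrete, here is a minimal sketch (toy vocabulary, random vectors, and sizes all invented for illustration) of the step the article bootstraps: a word becomes a token ID, and the ID indexes a table of learned vectors.

    import numpy as np

    # Toy vocabulary, invented for illustration; real LLMs use subword
    # tokenizers (e.g. BPE) with vocabularies of tens of thousands of
    # entries and vectors of hundreds to thousands of dimensions.
    vocab = {"the": 0, "cat": 1, "sat": 2, "<unk>": 3}
    dim = 4

    rng = np.random.default_rng(0)
    # One vector per token ID; random values stand in for trained weights.
    embedding_table = rng.normal(size=(len(vocab), dim))

    def embed(sentence: str) -> np.ndarray:
        """Map each word to a token ID, then look up its vector."""
        ids = [vocab.get(w, vocab["<unk>"]) for w in sentence.lower().split()]
        return embedding_table[ids]

    print(embed("the cat sat"))  # a (3, 4) array of numbers, not words

The model only ever manipulates these vectors, never the words themselves; whether that manipulation can amount to understanding is exactly what's being argued about in this thread.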
LtWorf|1 year ago
Uhm… no?
They are literally things that can't be proven but allow us to prove a lot of other things.
matwood|1 year ago
southernplaces7|1 year ago
Yes, we attach meaning to certain words based on previous experience, but we do so in the context of a conscious awareness of the world around us and our experiences within it. An LLM doesn't even have a notion of self, much less a mechanism for attaching meaning to words and phrases based on conscious reasoning.
Computers can imitate understanding "pretty well", but they have nothing resembling any notion of comprehension, good, bad, or otherwise, of what they're saying.
logicallee|1 year ago
You have kids talking to this thing, asking it to teach them stuff, without knowing that it doesn't understand shit! "How did you become a doctor?" "I was scammed. I asked ChatGPT to teach me how to make a doctor pepper at home, and based on simple keyword matching it got me into medical school (based on the word doctor), and when I protested that I just wanted to make a doctor pepper it taught me how to make salsa (based on the word pepper)! Next thing you know I'm in medical school and it's answering all my organic chemistry questions, my grades are good, the salsa is delicious, but dammit, I still can't make my own doctor pepper. This thing is useless!"
/s
shkkmo|1 year ago
If LLMs were capable of understanding, they wouldn't be so easy to trick on novel problems.
madsbuch|1 year ago
Firstly, please note that I am not saying that LLMs (or ChatGPT) do understand.
I am merely saying that we don't have any sound frameworks to assess it.
As for the rest of your rant: I can see that you don't derive any value from ChatGPT. In that case I really hope you are not paying for it - or wasting your time on it. What other people decide to spend their money on is really their business. I don't think any normally functioning person expects a real human to be answering them when they use ChatGPT, so it is hardly a fraud.