
eltoxo | 1 year ago

It is like having Google's MusicLM output an mp3 of saxophone music and then asking what proof there is that MusicLM has not learned to play the saxophone.

In a certain context that judges only the output, the model has achieved what is meant by "play the saxophone".

In another context, that of what is normally meant, the idea that the model has learned to play the saxophone is completely ridiculous and not something anyone would even try to defend.

In the context of LLMs and intelligence/reasoning, I think we are mostly talking about the latter and not the former.

"Maybe you don't have to blow throw a physical tube to make saxophone sounds, you can just train on tons of output of saxophone sounds then it is basically the same thing"

The entire discussion is ridiculous.


ninetyninenine | 1 year ago

Let's limit the discussion to things that can be actually done with an LLM.

Getting one to blow on a saxophone is outside of this context.

An LLM can't blow on a saxophone, period. However, it can write and read English.

>In the context of LLMs and intelligence/reasoning, I think we are mostly talking about the later and not the former.

And I'm saying the latter is completely wrong. I'm also saying the former is irrelevant. Look, this is what you're doing: for the former, you're comparing something humans can do to something LLMs can't do. That's a completely irrelevant comparison.

For the latter, we are comparing things humans and LLMs BOTH can do. Sometimes humans give superior output; sometimes LLMs give superior output. Given similar inputs and outputs, the internal analysis of what's going on, whether it's true intelligence or true reasoning, is NOT ridiculous.

"Ridiculous" is comparing things where no output exists. LLMs do not have saxophone output where they actually blow into an instrument. There's nothing to be compared here.