top | item 45174412


solardev | 5 months ago

I don't think this is the unprovable you think it is?

If LLMs and statistics can't encode semantics, how do chatbots perform long-form translations with appropriate context? How do codebreakers use statistics to break an adversary's communications?

Sometimes the statistics are semantic, like when "orange," "arancia," and a picture of that fruit all mean the same thing, but Orange the wireless carrier and orange the color are different. Those are connections/probabilities humans also learn via repeated exposure in different contexts.

I'm not arguing that LLMs are synthesizing new ideas (or old ones), but that they ARE capable of deriving semantic meaning from statistics. Rather than:

> language, based solely on statistical data, shorn of semantics

Isn't it more like:

> language, based solely on statistical data, with meanings emerging from clusters in the data
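To make "meanings emerging from clusters in the data" concrete, here's a toy sketch. The context words and counts are invented purely for illustration, but the mechanism (distributional similarity over co-occurrence statistics) is the one word embeddings actually use: words used in similar contexts end up with similar vectors, so "orange" and "arancia" land in the same cluster without anyone defining "fruit."

```python
import numpy as np

# Toy co-occurrence vectors, invented for illustration: each entry counts
# how often a word appears near a context word. Real models learn these
# from billions of tokens; the numbers here are made up.
contexts = ["juice", "peel", "sweet", "network", "sim", "plan"]
counts = {
    "orange":   np.array([4.0, 3, 3, 2, 1, 1]),  # polysemous: fruit + carrier uses
    "arancia":  np.array([5.0, 4, 3, 0, 0, 0]),  # Italian for the fruit
    "vodafone": np.array([0.0, 0, 0, 5, 4, 4]),  # a carrier, for contrast
}

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction (same contexts)."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# "orange" sits closer to "arancia" (fruit contexts) than to "vodafone"
# (carrier contexts) -- purely from the statistics of where the words occur.
print(cosine(counts["orange"], counts["arancia"]))
print(cosine(counts["orange"], counts["vodafone"]))
```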

Terr_ | 5 months ago

Fair. Strictly speaking, "semantics" probably isn't the right word here, since it departs from the original "ideas" being discussed.

A system of vectors for man + royalty = king may capture relationships of meaning that we invested into a language, but does it conceive ideas?
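For what it's worth, that vector arithmetic can be reproduced with hand-built toy vectors (the dimensions and values below are invented; real embeddings learn hundreds of dimensions from data, but the nearest-neighbor lookup is the standard analogy trick):

```python
import numpy as np

# Toy 2-D embeddings, dims roughly ("maleness", "royalty").
# Invented values -- chosen so the classic analogy arithmetic works out.
vecs = {
    "man":     np.array([1.0, 0.0]),
    "woman":   np.array([-1.0, 0.0]),
    "royalty": np.array([0.0, 1.0]),
    "king":    np.array([1.0, 1.0]),
    "queen":   np.array([-1.0, 1.0]),
}

def nearest(v, exclude=()):
    """Return the vocabulary word most cosine-similar to vector v."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vecs if w not in exclude),
               key=lambda w: cos(vecs[w], v))

# man + royalty lands nearest to king.
print(nearest(vecs["man"] + vecs["royalty"], exclude=("man", "royalty")))
```

Whether capturing those relationships counts as "conceiving ideas" is exactly the open question above; the arithmetic itself only shows that relational structure humans put into language is recoverable from the geometry.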

solardev | 5 months ago

I don't think so; no disagreement from me there. I was responding more to the parent comment (the child of your post), about statistics and semantics.