solardev | 5 months ago
If LLMs and statistics can't encode semantics, how do chatbots perform long-form translations with appropriate context? How do codebreakers use statistics to break an adversary's communications?
Sometimes the statistics are semantic, like when "orange", "arancia", and the picture of that fruit all mean the same thing, but Orange the wireless carrier and orange the color are different. Those are connections/probabilities humans also learn via repeated exposure in different contexts.
I'm not arguing that LLMs are synthesizing new ideas (or old ones), but that they ARE capable of deriving semantic meaning from statistics. Rather than:
> language, based solely on statistical data, shorn of semantics
Isn't it more like:
> language, based solely on statistical data, with meanings emerging from clusters in the data
Terr_ | 5 months ago
A system of vectors in which man + royalty = king may capture relationships of meaning that we have invested in a language, but does it conceive ideas?
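The vector-arithmetic relationship mentioned above can be sketched with toy embeddings in the style of word2vec. Note that the vectors, dimension labels, and vocabulary here are invented purely for illustration; real embeddings are learned from corpora and have hundreds of dimensions.

```python
from math import sqrt

# Toy 3-dimensional "embeddings" with hand-picked axes
# (roughly: gender, royalty, person-ness).
# These are illustrative values, not real learned vectors.
VECS = {
    "man":     [1.0, 0.0, 1.0],
    "woman":   [-1.0, 0.0, 1.0],
    "royalty": [0.0, 1.0, 0.0],
    "king":    [1.0, 1.0, 1.0],
    "queen":   [-1.0, 1.0, 1.0],
}

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def cosine(a, b):
    # Cosine similarity: dot product over the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(vec, exclude=()):
    # Vocabulary word whose vector is most similar to `vec`,
    # excluding the query words themselves (standard practice
    # in analogy evaluations).
    return max((w for w in VECS if w not in exclude),
               key=lambda w: cosine(vec, VECS[w]))

query = add(VECS["man"], VECS["royalty"])
print(nearest(query, exclude=("man", "royalty")))  # -> king
```

The point of the sketch: "king" falls out of pure arithmetic on vectors whose geometry encodes relational structure, with no step that looks like conceiving an idea, which is exactly the distinction the comment is probing.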
solardev | 5 months ago