item 47163596


WithinReason | 3 days ago

It's called "generalization":

https://en.wikipedia.org/wiki/Generalization_(learning)

selridge | 3 days ago

>"If A and B are separately in the training data, the model can provide a result when A and B occur in the input because the model has made a connection between A and B in the latent space."

This statement (the one I was replying to) is fundamentally unbounded. Practically speaking, there is nothing that can't be explained as a combination of "A" and "B" in "training data", because we can express anything that way; the combination only needs to be convex along some high-dimensional semantic surface. Add to that my scare quotes around "training data": very few people have any practical idea of what is or isn't in there, so claims can be made strategically. Need to explain a success? It was in the training data. A failure? Probably not in the training data. Will anyone call us on this transparent farce? Not usually, no.
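A toy sketch of why that explanation is unbounded (my own illustration, not from the thread, with an assumed random "latent space"): once a set of vectors spans the space, any target whatsoever can be written as a combination of them, so "it was a combination of things in the training data" is satisfiable after the fact for every output.

```python
import numpy as np

# Assumed toy setup: d generic random vectors almost surely span R^d,
# standing in for concepts present in the "training data".
rng = np.random.default_rng(0)
d = 16
training = rng.normal(size=(d, d))   # rows = latent vectors of "known" concepts
target = rng.normal(size=d)          # an arbitrary "new" output embedding

# Solve for coefficients expressing the target as a combination of the rows.
coeffs, _, rank, _ = np.linalg.lstsq(training.T, target, rcond=None)
reconstruction = training.T @ coeffs

print(rank == d)                              # the rows span the space
print(np.allclose(reconstruction, target))    # the target is always "explained"
```

The point of the sketch: because the reconstruction succeeds for any target, the "combination of A and B" story has no failure mode, which is exactly the unfalsifiability being complained about.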

If a statement can, at will, explain everything and nothing, what is it worth?