HappMacDonald | 22 days ago
My factory produces squares, and every square is between 1ft and 3ft in side length.
Now what is the probability that the next square it outputs will be between 1ft and 2ft long?
The probability is zero percent, of course. Because my factory only produces squares with a side length of exactly 2.5ft (to within a micrometer tolerance, hooray!), day in and day out.
And as anyone can easily verify, every single one of those squares is between 1ft and 3ft in side length.
Notice how I didn't have to even begin to talk about areas?
The video's thesis is simply that "talking out of your ass when you have insufficient information can backfire: oh the horror," and I find the subject about as uninteresting as the fact that different interpolation methods (nearest neighbor, bicubic, "ask an AI image generator to fill in the gaps", etc.) can invent completely different false details in an image or dataset.
Though I probably only find them equally uninteresting because the two claims are isomorphic.
When you don't have enough data, guessing at what's missing can be wrong, and different ways of guessing can be wrong in different ways. You have to let that wash out as enough genuine data arrives (which means washing out the differences between interpolation methods), and in the meantime maintain your error bars correctly instead of throwing them away.
So to loop back to the start: the probability that the next square will be between 1ft and 2ft is 50%, plus or minus 50%. Which is just an over-engineered way of saying that not enough information has been offered yet to make a trustworthy guess.
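To make that concrete, here's a quick sketch (function names are my own, not from the video) comparing three factories that all satisfy "every side is between 1ft and 3ft" but assign wildly different probabilities to the 1ft-2ft range, including the uniform-in-area model hinted at by the remark about areas:

```python
import random

def p_side_in_1_2(sample_side, n=100_000):
    """Monte Carlo estimate of P(1 <= side <= 2) under a given sampling model."""
    hits = sum(1 <= sample_side() <= 2 for _ in range(n))
    return hits / n

# Three models, all consistent with "every side is between 1ft and 3ft":
uniform_side = lambda: random.uniform(1, 3)         # uniform in side length
uniform_area = lambda: random.uniform(1, 9) ** 0.5  # uniform in area (1..9 sq ft)
degenerate   = lambda: 2.5                          # the factory in this comment

print(p_side_in_1_2(uniform_side))  # ~0.5
print(p_side_in_1_2(uniform_area))  # ~0.375 (= 3/8, since side in [1,2] iff area in [1,4])
print(p_side_in_1_2(degenerate))    # exactly 0.0
```

Every one of these models is consistent with the stated constraint, so the constraint alone pins the answer down no tighter than 50% plus or minus 50%.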