(no title)
_hark | 1 year ago
The entropy of some data is well-defined with respect to a model, but the model choice is free. I.e. different models will assign different entropy to the same data.
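A tiny illustration of that point (my own sketch, not from the comment; the data and model parameters are made up): the same bit string gets a different ideal code length, i.e. a different "entropy", depending on which Bernoulli model you score it under.

    import math

    def code_length_bits(data, p_one):
        """Ideal code length (bits) of a 0/1 sequence under a Bernoulli(p_one) model."""
        return sum(-math.log2(p_one if bit == 1 else 1.0 - p_one) for bit in data)

    data = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]  # 8 ones, 2 zeros

    print(code_length_bits(data, 0.5))  # fair-coin model: 10.0 bits
    print(code_length_bits(data, 0.8))  # skewed model: about 7.2 bits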
And how do we choose a model...? Well, formally, by minimizing the information needed to describe both the model and the data (the sum of model complexity and data entropy under the model) [1]. A small sketch follows below the footnote.
You might argue that's all too information-theoretic and in physics there simply is an objective count of the state-space, a maximum entropy, and so on. Alas, there is not even general consensus on whether there is a locally finite number of degrees of freedom.
[1]: https://en.wikipedia.org/wiki/Minimum_description_length
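A minimal two-part MDL sketch in the spirit of [1] (again my own illustration; the 4-bit quantization of the parameter is an arbitrary modeling choice, not anything canonical): the description length is the bits needed to state the model plus the bits needed for the data under that model, and we pick the model minimizing the sum.

    import math

    def data_bits(data, p):
        """Code length (bits) of a 0/1 sequence under a Bernoulli(p) model."""
        return sum(-math.log2(p if b == 1 else 1.0 - p) for b in data)

    def two_part_mdl(data, precision_bits=4):
        """Choose the Bernoulli parameter minimizing model bits + data bits.

        The parameter is quantized to precision_bits bits, and stating it is
        charged exactly that many bits -- a deliberately crude model cost.
        """
        levels = 2 ** precision_bits
        best = None
        for k in range(1, levels):  # skip p = 0 and p = 1
            p = k / levels
            total = precision_bits + data_bits(data, p)
            if best is None or total < best[0]:
                best = (total, p)
        return best  # (total description length, chosen p)

    data = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]
    print(two_part_mdl(data))  # roughly (11.2 bits, p = 0.8125) for this string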
beagle3 | 1 year ago
E.g. the KT estimator is, for each individual Bernoulli sequence, as good as the best Bernoulli model for that sequence, with at most a 1/2 bit difference (independent of sequence length); see the sketch after this comment.
Kolmogorov complexity is undecidable/uncomputable and only well-defined up to a constant, but it gives you a "universally universal" model. In that sense, entropy IS an absolute.
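For the curious, a quick sketch of the KT (Krichevsky-Trofimov) estimator mentioned above (my code, not beagle3's; the example data is made up): it predicts the next bit with the add-1/2 rule, and its total code length can be compared against the Bernoulli model fit to the same sequence in hindsight; the gap between the two is the KT redundancy for that sequence.

    import math

    def kt_code_length(data):
        """Code length (bits) under the Krichevsky-Trofimov sequential estimator.

        At each step, P(next bit = 1) = (ones + 1/2) / (ones + zeros + 1).
        """
        ones = zeros = 0
        bits = 0.0
        for b in data:
            p_one = (ones + 0.5) / (ones + zeros + 1.0)
            bits += -math.log2(p_one if b == 1 else 1.0 - p_one)
            if b == 1:
                ones += 1
            else:
                zeros += 1
        return bits

    def best_bernoulli_code_length(data):
        """Code length under the Bernoulli model fit to this exact sequence in hindsight."""
        n, k = len(data), sum(data)
        if k in (0, n):
            return 0.0  # the degenerate model assigns the sequence probability 1
        p = k / n
        return -k * math.log2(p) - (n - k) * math.log2(1 - p)

    data = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]
    print(kt_code_length(data), best_bernoulli_code_length(data))
    # The difference between the two numbers is the KT estimator's redundancy
    # on this particular sequence.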