(no title)
it_does_follow | 4 years ago
One thing I do find a bit surprising is that across the nearly 2000 pages of these two books there is almost no mention of understanding parameter variance. I get that in machine learning we typically don't care, but this is such an essential part of basic statistics that I'm surprised it's not covered at all.
The closest we get is the Inference section, which is mostly interested in prediction variance. It's also surprising that neither the section on the Laplace approximation nor the one on Fisher information calls out the Cramér-Rao lower bound, which seems like a vital piece of information regarding uncertainty estimates.
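To make the point concrete, here's a quick numerical sketch (my own illustration, not anything from the books): for n i.i.d. Bernoulli(p) draws the Fisher information is I(p) = n / (p(1 - p)), so the Cramér-Rao bound on the variance of any unbiased estimator is p(1 - p) / n, and the MLE actually attains it:

    import numpy as np

    # Cramér-Rao bound for the Bernoulli MLE p_hat = mean(x).
    # Fisher information for n i.i.d. draws: I(p) = n / (p * (1 - p)),
    # so Var(p_hat) >= p * (1 - p) / n; the MLE is efficient and attains it.
    rng = np.random.default_rng(0)
    p, n, trials = 0.3, 100, 100_000

    mle = rng.binomial(1, p, size=(trials, n)).mean(axis=1)
    print("empirical Var(p_hat):", mle.var())        # ~0.0021
    print("Cramér-Rao bound:    ", p * (1 - p) / n)  # 0.0021

In general the bound is only a lower limit; it's tight here because the Bernoulli MLE happens to be efficient.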
This is of course a minor critique, since virtually no ML books touch on these topics; it's just unfortunate that in a volume this massive we still see ML ignoring what is arguably the most useful part of what statistics has to offer machine learning.
it_does_follow | 4 years ago
I'm not sure this will ever dominate. As much as I love Bayesian approaches, I sort of feel there is a push to make them ever more byzantine, recreating all of the original critiques of where frequentist stats had gone wrong. So essentially we're just seeing a different orthodoxy dominating thinking, with all of the same trappings of the previous orthodoxy.
barrenko | 4 years ago
Currently I have lined up: Math for Programmers (No Starch Press), Practical Statistics for Data Scientists (O'Reilly, the crab book), and Discovering Statistics Using R.
Basically I'm trying to follow the theory from "Statistical Consequences of Fat Tails" by NNT.
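If it helps, the core fat-tails point from that book is easy to see in a few lines (a toy sketch of my own, not from NNT): a Pareto with tail exponent alpha <= 2 has infinite variance, so its sample standard deviation never settles the way a Gaussian's does:

    import numpy as np

    # Sample std under fat tails: a Pareto with alpha <= 2 has infinite
    # variance, so its sample std keeps growing with n, while the
    # Gaussian sample std converges to 1.
    rng = np.random.default_rng(0)
    alpha = 1.5  # tail exponent; variance is infinite for alpha <= 2

    for n in (10**3, 10**5, 10**7):
        pareto = rng.pareto(alpha, size=n) + 1.0  # classical Pareto on [1, inf)
        normal = rng.normal(size=n)
        print(f"n={n:>9}  pareto std={pareto.std():>10.2f}  "
              f"gaussian std={normal.std():.4f}")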