maxminminmax's comments

maxminminmax | 2 months ago | on: PhDs Can't Find Work as Boston's Biotech Engine Sputters
maxminminmax | 1 year ago | on: Legendre transform, better explained (2017)
In the Legendre transform, what we have (the y variable is a red herring, and I will ignore it; everything happens "pointwise in y") is a curve in the u-x plane, which we lift to u-x-z space in two ways -- that is, we find functions f and g defined for the points on that curve such that: 1) if the curve is parametrized by x, so that f is a function of x, then df/dx = u; 2) if the curve is parametrized by u, so that g is a function of u, then dg/du = x. (Why do we want this? Presumably because when x is velocity and f is energy, u is momentum, and we want g to have the same property going back. And yes, there are conditions under which one can parametrize a curve by one of the coordinates, either locally or globally; one such is that u is a monotone increasing function of x -- that corresponds to convexity of f.) Of course now "derivatives of f and g are inverse" is tautological.
If we already know f(x) but know neither u nor g, we could set u = df/dx and try to compute g. Or we could do it the way Goldstein does it: dg/du = x, so dg = x du (this is an ODE); integrating it "by parts" gives g = int x du = xu - int u dx = xu - f.
(In advanced speak, the u-x curve is a Lagrangian in the u-x plane, which is symplectic, as every sum of a vector space and its dual is; the functions f and g correspond to lifts of this Lagrangian to Legendrians based on the choice of "canonical" 1-forms u dx and x du, respectively, so that df - u dx = 0 and dg - x du = 0.)
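The recipe above can be checked numerically. A minimal sketch, using f(x) = x^2/2 as a worked example (the function name and grids are my own, purely illustrative): set u = df/dx, take g = xu - f from the derivation, and compare against the sup formulation g(u) = max over x of (ux - f(x)), which agrees with it for convex f:

```python
import numpy as np

# Legendre transform of a convex f, following the recipe above:
# set u = df/dx, then g(u) = x*u - f(x), with x recovered from u.
# Worked here for f(x) = x**2 / 2, where u = df/dx = x, so inverting
# gives x = u and hence g(u) = u*u - f(u) = u**2 / 2.
f = lambda x: x**2 / 2

xs = np.linspace(-5, 5, 2001)   # sampling grid for the sup
us = np.linspace(-2, 2, 9)      # a few momenta to check

# sup formulation, evaluated on the grid
g_sup = np.array([np.max(u * xs - f(xs)) for u in us])

# closed form from the g = xu - f derivation
g_closed = us**2 / 2

# the two agree up to grid-discretization error
print(np.max(np.abs(g_sup - g_closed)))
```

The agreement is exactly the convexity condition at work: u = df/dx = x is monotone, so the curve can be parametrized by u and the lift g is well defined.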
maxminminmax | 2 years ago | on: I don't use Bayes factors in my research (2019)
maxminminmax | 3 years ago | on: Algorithms to Live By – The Computer Science of Human Decisions
For what value function? It is basically never the case that my value function is "all choices other than the optimal are equally bad" -- which is what this rule is based on.
As a personal opinion, this drives me up the wall. There is a great problem here, and there is a whole area (several of them, actually!) of applied math dedicated to it (Statistical Decision Theory, Reinforcement Learning, you name it). Instead we get this toy version -- which at best is an oversimplified intro to the subject, and at worst an excuse to bamboozle with math-fairy-dust -- brought out as some kind of rule "to live by". Your algorithm is bad, and you should feel bad.
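For concreteness, assuming the rule in question is the classic 1/e ("37%") secretary-problem cutoff (my assumption; the book covers several such rules, and the function names below are purely illustrative), a quick simulation makes the value-function point: the 1/e cutoff maximizes the probability of landing the single best candidate, but a smaller cutoff gives a higher expected value of the pick.

```python
import math
import random

def run(cutoff_frac, n=100, trials=10000, rng=random.Random(0)):
    """Secretary rule: skip the first cutoff_frac*n candidates, then take
    the first one better than everything seen (or the last, if forced).
    Returns (P(picked the single best), mean value of the pick)."""
    hits, total_value = 0, 0.0
    for _ in range(trials):
        vals = [rng.random() for _ in range(n)]
        cutoff = int(cutoff_frac * n)
        best_seen = max(vals[:cutoff], default=float("-inf"))
        pick = vals[-1]  # forced to take the last if the rule never fires
        for v in vals[cutoff:]:
            if v > best_seen:
                pick = v
                break
        hits += pick == max(vals)
        total_value += pick
    return hits / trials, total_value / trials

for frac in (0.1, 1 / math.e, 0.6):
    p_best, mean_val = run(frac)
    print(f"cutoff {frac:.2f}: P(best)={p_best:.3f}, E[value]={mean_val:.3f}")
```

Running this, the 1/e cutoff wins on P(best) (about 0.37, as advertised), while the 0.1 cutoff loses on P(best) but delivers a higher expected value -- i.e., the "optimal" rule is only optimal for the all-or-nothing value function.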
maxminminmax | 3 years ago | on: Ask HN: Where to Read Proofs?
maxminminmax | 3 years ago | on: Costs of California’s troubled bullet train rise again, by estimated $5B
As an example, an Ivy graduate makes more than a state school graduate on average, but there was a study showing that those offered Ivy admission who decided to go to a state school instead made just as much (that study setup has its own selection bias issues, but hopefully it gives an idea of what I mean).