No, it's a hypothesis I formulated here after reading the article. I did a quick check on Google Scholar but didn't find any results. The more interesting question is: if it's true, what can you do with that information? Maybe it could be a way to evaluate a complete program or a specific heap allocator, as in "how fast does this program reach universality". Maybe this is something very obvious that has been done before, dunno; heap algos are not my area of expertise.
blurbleblurble|1 month ago
I realize what I'm saying is very gestural. The analogous context I'm imagining is deriving blue-noise-distributed points from randomly distributed points: intuitively speaking, it's necessary to inspect the actual positions of the points in order to move them toward the lower-entropy distribution of blue noise, which means "consuming" information about where the points actually are.
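For a concrete toy version of that trade: Mitchell's best-candidate algorithm spends random candidates plus distance checks against every point placed so far to get a more even, blue-noise-ish spread. A minimal Python sketch (the function name `best_candidate_points` and the parameter choices are mine, just for illustration):

```python
import math
import random

def best_candidate_points(n, k=20, seed=0):
    """Mitchell's best-candidate sketch: each new point is the candidate
    farthest from all existing points. The distance checks are exactly the
    'consuming information about where the points are' step."""
    rng = random.Random(seed)
    points = [(rng.random(), rng.random())]
    for _ in range(n - 1):
        best, best_d = None, -1.0
        for _ in range(k):
            c = (rng.random(), rng.random())
            # Distance to the nearest already-placed point.
            d = min(math.dist(c, p) for p in points)
            if d > best_d:
                best, best_d = c, d
        points.append(best)
    return points

pts = best_candidate_points(100)
```

Raising `k` inspects more candidates per point, i.e. consumes more information, and pushes the result further from white noise toward blue noise.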
The "random song" thing is similar: in order to make a shuffle algorithm that doesn't repeat, you need to consume information about the history of the songs that have been played. This requirement for memory allows the shuffle algorithm to produce a lower entropy output than a purely random process would ever be able to produce.
So hearing that a "purely random matrix" can have these nicely distributed eigenvalues threw me off for a bit, until I realized that observing the eigenvalues has some intrinsic computational complexity, and that it requires consuming the information in the matrix.
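This is easy to see numerically: by the circular law, the eigenvalues of an n×n matrix with iid Gaussian entries scaled by 1/sqrt(n) fill the unit disk fairly evenly — but extracting them means reading the whole matrix and doing O(n^3) work. A quick sketch, assuming NumPy is available:

```python
import numpy as np

# Circular law sketch: iid Gaussian entries with variance 1/n.
rng = np.random.default_rng(0)
n = 500
A = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))

# Computing the eigenvalues is the step with intrinsic cost:
# O(n^3) work that consumes the information in the matrix.
eig = np.linalg.eigvals(A)

# Nearly all eigenvalues land inside (or very near) the unit disk,
# spread out far more evenly than iid random points would be.
inside = np.mean(np.abs(eig) < 1.1)
```

So the "nicely distributed" structure isn't free — it only becomes observable after paying that computational cost.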
Again, this is all very hunchy, I hope you see what I'm getting at.
FjordWarden|1 month ago