concreteblock's comments

concreteblock | 4 years ago | on: Kelly Criterion – how to calculate optimal bet sizes

> Again, Kelly gives us terrible advice. If you have a trillion to one payout on a coin flip, you want to bet less, not more! Why would you risk half your money, and have a 1/4 chance of losing all your money, when you can bet 1 cent at a time and just wait to win one time so you can buy half of the stock market with your winnings?

So you're just going to throw away the criterion because you think the results are unintuitive? That's the argument you're making here.

To take your reasoning seriously: the reason you might not want to bet 1 cent at a time is that the Kelly bet is guaranteed to eventually overtake your 1-cent-bet strategy. Furthermore, it is completely incorrect to say that the Kelly bet has a 1/4 chance of losing all your money in the given situation. If you lose your first bet, the Kelly criterion tells you not to bet the whole house on the next bet.
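For reference, a sketch with my own illustrative numbers: for a binary bet that pays b times the stake with probability p, the standard Kelly fraction is f* = p − (1 − p)/b, which is where the "risk half your money" figure in the quote comes from:

```python
# Kelly fraction for a binary bet: win b times your stake with
# probability p, lose the stake with probability 1 - p.
def kelly_fraction(p, b):
    return p - (1 - p) / b

# Fair coin with a trillion-to-one payout: the criterion says to
# stake just under half your bankroll on each flip.
print(kelly_fraction(0.5, 1e12))  # just under 0.5
```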

Nothing you have written so far suggests that you actually understand the sense in which the Kelly criterion is optimal, which I attempted to explain in my other reply to you. You keep writing as though it only maximizes the expectation of log-utility. In fact it's not clear that you even understand what the Kelly criterion is telling you to do.

concreteblock | 4 years ago | on: Kelly Criterion – how to calculate optimal bet sizes

You obviously wouldn't use the Kelly criterion during a poker game, because the assumptions don't fit. But on a larger scale it can be used for 'bankroll management': what proportion of your wealth should you spend on a tournament entry fee? Of course you don't know the exact parameter p, but you can use an estimate to make sure you are not making a grossly over- or under-sized bet.

concreteblock | 4 years ago | on: Kelly Criterion – how to calculate optimal bet sizes

The first sentence of your post, while technically true, misses the point. This misunderstanding undermines many of your other points.

The Kelly criterion happens to be optimal with respect to log wealth, but that's not the main reason it's interesting. Many explanations, including the original post, make this mistake, perhaps because 'maximizing expected utility' is a more familiar idea.

The first sentence of the wikipedia article:

"In probability theory and intertemporal portfolio choice, the Kelly criterion (or Kelly strategy or Kelly bet), also known as the scientific gambling method, is a formula for bet sizing that leads almost surely (under the assumption of known expected returns) to higher wealth compared to any other strategy in the long run".

In other words: pick a strategy. I'll pick the Kelly strategy. There will be some point in time after which I will have more money than you, and you will never overtake me. No logarithms involved. This is something you can easily check by simulation, but it requires some heavier math to formulate precisely and prove.
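That simulation is easy to run. Here's a minimal sketch with my own illustrative parameters (p = 0.6 win probability, even-money payout, so the Kelly fraction is 0.2), comparing a Kelly bettor against a more timid fixed-fraction bettor on the same coin flips:

```python
import math
import random

random.seed(0)

p, rounds = 0.6, 10_000
kelly_f = 0.2    # Kelly fraction for even money: p - (1 - p)
timid_f = 0.05   # an arbitrary smaller fixed fraction

# Track log-wealth so large bankrolls don't overflow floats.
log_kelly = log_timid = 0.0
for _ in range(rounds):
    if random.random() < p:   # both bettors face the same flip
        log_kelly += math.log(1 + kelly_f)
        log_timid += math.log(1 + timid_f)
    else:
        log_kelly += math.log(1 - kelly_f)
        log_timid += math.log(1 - timid_f)

print(log_kelly > log_timid)  # the Kelly bettor is ahead
```

With the log-optimal fraction, the gap in log-wealth grows linearly in the number of rounds, which is the "eventually overtakes any other strategy" phenomenon.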

See also posts by spekcular and rssoconnor elsewhere in this thread.

concreteblock | 4 years ago | on: Elliptic Curve Cryptography Explained (2019)

Elliptic curves are a vast field of study, and ECC is a subfield of it. The name kind of suggests this relationship.

It’s like hearing that someone works at Microsoft and then asking them about features in MS Word.

concreteblock | 4 years ago | on: New proof reveals that graphs with no pentagons are fundamentally different

To clarify, since I can't edit: This is coming from my experience as someone who has taught math to teachers getting their master's degree. It sounds mean but I wanted to emphasize that the state of early math education is not just due to poorly designed curriculum, but because there is little incentive for mathematically competent people to teach children. (Imo, of course).

concreteblock | 4 years ago | on: Announcing the Rule 30 Prizes (2019)

Yes, that's correct.

There are no edges, it's an infinite grid. You start with 1 black cell and all the other cells are white.

Because the triple-white configuration does not produce a black cell, the black region can grow by at most one cell on each side per step, so you can compute up to any finite time step with finite computational power.

(Actually, later in the article he talks about finite grids with periodic boundary conditions. That means that, if you're on the edge, you 'wrap around' to the other side of the grid).

concreteblock | 4 years ago | on: Announcing the Rule 30 Prizes (2019)

The problem seems to have the same flavor as the Collatz conjecture: a simple dynamical system where it's very difficult to tell what happens in the long run.

Perhaps these things are too hard for (human) mathematics. I wonder if anyone has proved any theorems that make this precise. E.g. "Most cellular automata rules cannot be analyzed efficiently".

I don't know enough complexity theory/set theory to formulate this precisely.

concreteblock | 4 years ago | on: Announcing the Rule 30 Prizes (2019)

In case anyone is wondering, the way to interpret the rules is as follows.

* The state of the system is a black-white assignment of colors to the grid.

* The rules tell you how to compute the next state of the system.

* To update the state at a certain cell x, you look at the colors of x-1, x, and x+1 (left, self, and right). Then use the table of rules to determine the new color of the middle cell. For instance, the first rule tells you that if you see three black cells in a row, then in the next time step the middle cell is white.

* This update is done simultaneously over all cells, so you compute all the new cell colors and then update them all at once.
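A minimal sketch of those bullet points in code (my own implementation, using the standard Rule 30 lookup table; 1 is black, 0 is white), run for a few steps from a single black cell:

```python
# Rule 30 lookup: (left, center, right) -> new center color.
RULE_30 = {
    (1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(row):
    # Pad with two white cells per side; since (0,0,0) -> 0,
    # everything further out stays white, so a finite window suffices.
    padded = [0, 0] + row + [0, 0]
    return [RULE_30[tuple(padded[i - 1:i + 2])]
            for i in range(1, len(padded) - 1)]

row = [1]                      # start: one black cell on a white grid
for _ in range(3):
    print(''.join(map(str, row)))
    row = step(row)
print(''.join(map(str, row)))  # prints 1 / 111 / 11001 / 1101111
```

The padding also shows why finite computation suffices: the all-white neighborhood stays white, so the row only needs to grow by one cell per side per step.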

concreteblock | 4 years ago | on: Mathematicians Settle Erdős Coloring Conjecture

Many problems can be reduced to graph coloring problems if you look at them the right way. Try understanding the standard examples (see the applications section in Wikipedia); after that, it is really easy to come up with your own applications.

concreteblock | 5 years ago | on: Ergodicity, What's It Mean

I agree with the first 3 statements.

Just to confirm, I am using 'almost surely' in the technical sense, which means 'with probability 1.'

Consider the following statement:

If you keep flipping a fair coin every day, then almost surely you will eventually get a tails.

This is the same 'almost surely' that I am referring to.
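To make that concrete (my own arithmetic): the probability of having seen no tails after n days is (1/2)^n, which tends to 0, so the complementary event has probability 1:

```python
# P(no tails in the first n flips of a fair coin) = (1/2)^n.
for n in (10, 50, 100):
    print(n, 0.5 ** n)

# This vanishes as n grows, so "you eventually get a tails"
# has probability 1 -- i.e., it happens almost surely.
```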

concreteblock | 5 years ago | on: Ergodicity, What's It Mean

You’re right. I made a mistake - I thought you were trying to contradict the theorem I stated. I’ve just realized you were saying something orthogonal.

As for the proof of my theorem: by taking logarithms, the process becomes an additive random walk with negative drift (log 1.6 + log 0.5 < 0). Such a walk is well known to converge to negative infinity almost surely. Exponentiating to undo the logarithm gives exactly the statement I made.

It does not matter how many test subjects there are (as long as there are finitely many) because, informally speaking, you can just wait for each of them to become irrevocably bankrupt in turn.
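A sketch of that simulation (my own code, with illustrative path counts): each subject's wealth is multiplied by 1.6 on heads and 0.5 on tails, and tracking log-wealth shows every path drifting far below the bankruptcy threshold:

```python
import math
import random

random.seed(0)

subjects, steps = 200, 5_000
threshold = math.log(1e-16)   # the '$0.0000000000000001' bar, in logs

# Per-step drift of log-wealth: 0.5*log(1.6) + 0.5*log(0.5) < 0.
drift = 0.5 * math.log(1.6) + 0.5 * math.log(0.5)

final = []
for _ in range(subjects):
    log_w = 0.0   # log of wealth, starting from $1
    for _ in range(steps):
        log_w += math.log(1.6) if random.random() < 0.5 else math.log(0.5)
    final.append(log_w)

# Every subject's final log-wealth is far below the threshold.
print(drift < 0, max(final) < threshold)
```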

concreteblock | 5 years ago | on: Ergodicity, What's It Mean

The mean converges exponentially to zero with time; it doesn't grow exponentially. So the theorem you cited points in the same direction as my statement.

concreteblock | 5 years ago | on: Ergodicity, What's It Mean

No matter how many test subjects you use, if you run the experiment for a very long time, everyone goes bankrupt and will never recover.

More precisely, there is a finite time after which no one ever passes above $0.0000000000000001.

That is a mathematical theorem.

This doesn’t depend on the number of test subjects, and you can add as many zeroes as you want.

Therefore in the long run the mean outcome is 0.

Forgive me if I have misinterpreted what you are trying to say.

Edit: I’ve just realized that I have indeed missed your point.

concreteblock | 5 years ago | on: The dispassionate developer

> I'd expect many candidates to think that you actually want them to generate all the permutations and then get bogged down in recursion.

Ideally the candidate should realize that this is computationally infeasible and quickly come up with a much more efficient solution, right? Perhaps the interviewer believes that this question filters out people who 1) don't have a feel for what is computationally feasible, or 2) will just dive into coding without spending 10 seconds to see if there's a significant optimization.
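For a sense of the scale involved (my own illustrative arithmetic; the actual interview question isn't given here): the number of permutations of n items is n!, which is astronomically large even for modest n:

```python
import math

# 20 items already have about 2.4 * 10^18 permutations.
perms = math.factorial(20)
print(perms)  # 2432902008176640000

# At a billion permutations checked per second, enumerating them
# all would still take on the order of 77 years.
years = perms / 1e9 / (3600 * 24 * 365)
print(round(years))  # 77
```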
