
Random Walks: the mathematics in 1 dimension

82 points | outputchannel | 10 years ago | mit.edu

17 comments

Xcelerate | 10 years ago
One might ask: what is the probability that you will return to your starting position over the course of an infinite random walk? On a 1- or 2-dimensional lattice, that probability is 1. What's crazy, though, is that for a 3D lattice the probability is not 1: it's about 0.3405.
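A minimal Monte Carlo sketch of the return probabilities quoted above (not from the thread; step and trial counts are arbitrary, and since we can only simulate finite walks the estimates slightly undershoot the true values):

```python
import random

def ever_returns(dim, steps):
    """Run one simple lattice walk; report whether it revisits the origin."""
    pos = [0] * dim
    for _ in range(steps):
        axis = random.randrange(dim)          # pick a coordinate...
        pos[axis] += random.choice((-1, 1))   # ...and step +1 or -1 along it
        if all(c == 0 for c in pos):
            return True
    return False

def estimate_return_probability(dim, steps=2_000, trials=500):
    hits = sum(ever_returns(dim, steps) for _ in range(trials))
    return hits / trials

for dim in (1, 2, 3):
    # theory: 1.0 for dimensions 1 and 2, about 0.3405 for dimension 3
    print(dim, estimate_return_probability(dim))
```

The 1D and 2D estimates creep toward 1 as `steps` grows, while the 3D estimate stalls near 0.34, which is the counter-intuitive part.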
tristanj | 10 years ago
I really love this proof. It's a great example of using maths to prove a counter-intuitive result. The way to prove it is rather clever, and it made me appreciate what mathematicians do a lot more.

Shame I've never seen it shared online. I was actually hoping the submitted article was a proof of this, but you can't have everything in life.

iaw | 10 years ago
Wow, do you have any proofs for this? I'm especially curious about the generalized n-dimensional case.
amelius | 10 years ago
Random walks have also been used to solve differential equations numerically. See e.g. [1]

[1] http://www.jstor.org/stable/3612176 "A Proof of the Random-Walk Method for Solving Laplace's Equation in 2-D"
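The method in [1] can be sketched in a few lines: the solution of Laplace's equation at an interior grid point equals the expected boundary value at the spot where a random walk started from that point first exits. The grid size and boundary condition below are made up for the example; with u = x/N on the boundary the harmonic solution is u(x, y) = x/N, so the exact value at the centre is 0.5:

```python
import random

N = 20  # hypothetical N x N grid

def boundary_value(x, y):
    # assumed Dirichlet data: u = x/N on the boundary
    return x / N

def walk_to_boundary(x, y):
    """Random-walk one path from (x, y) to the boundary; return u there."""
    while 0 < x < N and 0 < y < N:
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
    return boundary_value(x, y)

def solve_at(x, y, trials=5_000):
    # average of boundary values over many walks -> u(x, y)
    return sum(walk_to_boundary(x, y) for _ in range(trials)) / trials

print(solve_at(N // 2, N // 2))  # exact solution at the centre is 0.5
```

No mesh solve, no linear algebra: each point can be evaluated independently, which is the appeal of the method.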

awalGarg | 10 years ago
Here is a related lecture from MIT, https://youtu.be/56iFMY8QW2k, which mathematically shows why it is pretty much impossible to come away "happy" from gambling in a club, even though intuition says otherwise.
Dylan16807 | 10 years ago
What does "happy" mean?

It's not hard to set up a bet that gives you an arbitrarily high chance of gaining money, despite an expected value of less than 1.
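A quick sketch of that point, with a made-up bet: gain 1 with probability 0.99, lose 200 with probability 0.01, so you win almost every time but the expectation is negative:

```python
import random

P_WIN = 0.99  # hypothetical bet: +1 with prob 0.99, -200 with prob 0.01

def play():
    return 1 if random.random() < P_WIN else -200

# theory: 0.99 * 1 + 0.01 * (-200) = -1.01 per bet
expected_value = P_WIN * 1 + (1 - P_WIN) * (-200)
print(expected_value)

outcomes = [play() for _ in range(100_000)]
print(sum(o > 0 for o in outcomes) / len(outcomes))  # chance of a win, near 0.99
print(sum(outcomes) / len(outcomes))                 # average result, near -1.01
```

An arbitrarily high win probability can be bought this way by making the rare loss correspondingly larger.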

zodiac | 10 years ago
Isn't the expected distance (undirected) given by E[|d|], while sqrt(n) is the value of sqrt(E[d^2])?
pash | 10 years ago
They each measure the same thing, more or less, but it's easier to work analytically with squares than absolute values. Similarly, we tend to work with the variance rather than with expected absolute deviations, we calculate sums of squares rather than sums of absolute values, etc.

More fundamentally, root-mean-square is the norm induced by the expectation inner product in the space of random variables. Norms generalize the geometric notion of length, so intuitively RMS is an appropriate measure of the "stochastic distance" from the origin of a random walk after a set number of steps. RMS can likewise be used as an analogue for geometric length for other purposes in a stochastic context, e.g., in calculating the similarity dimension of fractal stochastic processes like Brownian motion.
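To put numbers on the distinction raised above, here is a quick 1-D simulation (sizes chosen arbitrarily): for an n-step simple walk, E[|d|] tends to sqrt(2n/pi), while the RMS displacement sqrt(E[d^2]) is exactly sqrt(n), so the two measures differ by a constant factor of about 0.8:

```python
import math
import random

n, trials = 2_500, 1_000

# final displacement of each of `trials` independent n-step walks
finals = [sum(random.choice((-1, 1)) for _ in range(n)) for _ in range(trials)]

mean_abs = sum(abs(d) for d in finals) / trials
rms = math.sqrt(sum(d * d for d in finals) / trials)

print(mean_abs, math.sqrt(2 * n / math.pi))  # E[|d|], theory ~ 39.9
print(rms, math.sqrt(n))                     # RMS,    theory = 50.0
```

Both grow like sqrt(n), which is why pash says they measure the same thing "more or less"; the RMS is simply the analytically nicer of the two.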