There's a probability model called the Pólya urn where you imagine an urn containing numbered balls (colored balls in the typical example, but to draw the comparison with dice we can say they're numbered 1-6), and every time you draw a ball, you put it back along with more balls according to some rule. A number of probability distributions can be expressed in terms of a Pólya urn; see https://en.wikipedia.org/wiki/P%C3%B3lya_urn_model.
A fair 6-sided die would be an equal number of balls numbered 1-6 and a rule that you simply return the ball you drew. You can get a gambler's-fallacy distribution by, say, adding one of every ball that you didn't draw. I read the code as a Pólya urn starting with one ball of each number 1-N, doing that on each draw, plus reducing the count of the drawn number back to 1.
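To make the urn concrete, here's a minimal sketch of a gambler's-fallacy d6 as a Pólya urn (my reading of the scheme, not the linked code itself):

```python
import random

def make_urn_die(sides=6):
    """Gambler's-fallacy die as a Pólya urn: start with one ball per
    face; after each draw, add one ball of every face that was NOT
    drawn and reset the drawn face back to a single ball."""
    balls = {face: 1 for face in range(1, sides + 1)}

    def roll():
        faces = list(balls)
        drawn = random.choices(faces, weights=[balls[f] for f in faces])[0]
        for f in faces:
            balls[f] = 1 if f == drawn else balls[f] + 1
        return drawn

    return roll
```

Each face still shows up about 1/6 of the time in the long run; what changes is that immediate repeats become much rarer than the 1-in-6 rate you'd get from independent rolls.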
Also related, in 2D space, is the idea of randomly covering the plane in points while getting a spread-out distribution, since true uniformity will produce clusters. (If you're moving a small window in any direction and you haven't seen a point in a while, you're "due" to see another one, and vice versa if you just saw a point.) Mike Bostock did a very nice visualization of that here: https://bost.ocks.org/mike/algorithms/
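One of the techniques in that visualization, Mitchell's best-candidate algorithm, is easy to sketch: for each new point, try several uniform candidates and keep the one farthest from everything placed so far.

```python
import random

def best_candidate_points(n, k=10):
    """Mitchell's best-candidate sampling on the unit square: each new
    point is the best of k uniform candidates, where 'best' means
    farthest from its nearest already-placed neighbor. The result is
    random but evenly spread ('blue noise'), with far fewer tight
    clusters than plain uniform sampling."""
    points = [(random.random(), random.random())]
    for _ in range(n - 1):
        best, best_dist = None, -1.0
        for _ in range(k):
            cx, cy = random.random(), random.random()
            # squared distance to the nearest existing point
            d = min((cx - px) ** 2 + (cy - py) ** 2 for px, py in points)
            if d > best_dist:
                best, best_dist = (cx, cy), d
        points.append(best)
    return points
```

Larger k gives a more even spread at the cost of more distance checks per point.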
A great application for this is in randomizing playlists. My friends, who are also CS grads and should know better, have often complained that their MP3 players, CD carousels, etc. play the same music too often, claiming that the random is broken, when a song repeating within a short period, or other songs never playing, is exactly what you would expect from a truly random selection. Using this algorithm, you'd be sure to hear all of your songs. I'm guessing most music services already do something like this.
A simpler way that's guaranteed not to repeat a song until the entire set has been played is: assuming you have N songs, pick a prime P that does not divide N (so that P and N are coprime). Then pick a random starting index I (0 <= I < N). The index of the next song is I' = MOD(I + P, N); because gcd(P, N) = 1, this visits all N indices before the cycle repeats. This method has the advantage of being O(1) regardless of the size of the playlist.
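A sketch of that scheme, with the coprimality condition made explicit (helper name is mine):

```python
import random
from math import gcd

def stride_order(n_songs, step):
    """Prime-stride playback: random start, then advance by a fixed
    step coprime to the playlist length. Visits every index exactly
    once before the cycle repeats, using O(1) state per step."""
    assert gcd(step, n_songs) == 1, "step must be coprime to playlist length"
    i = random.randrange(n_songs)
    order = []
    for _ in range(n_songs):
        order.append(i)
        i = (i + step) % n_songs
    return order
```

Worth noting the trade-off: no song repeats within a cycle, but consecutive songs are always the same fixed distance apart, so it feels far less random than a true shuffle.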
Another method is to pick a random key K and sort the entries of the playlist based on HMAC(SONG, K) (where SONG is the name or other identifier of the song). This has a number of interesting properties. For starters, the playlist will be randomized (that's good). It will also keep its overall order if a new song is added; the new song will get inserted based upon its HMAC. You can switch things up a bit by having K be based on the current date. That way new songs will maintain their order, and jumping to a track on a given day will continue the chain as before, but day to day you won't always hear the same tracks back to back.
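A sketch of the idea (the key and song names here are made up):

```python
import hashlib
import hmac

def hmac_shuffle(songs, key):
    """Order songs by HMAC(key, song_id). Deterministic for a given
    key, so adding a song slots it into place without disturbing the
    relative order of the others; deriving the key from the date
    gives a fresh order each day."""
    return sorted(songs,
                  key=lambda s: hmac.new(key, s.encode(), hashlib.sha256).digest())

playlist = ["song_a", "song_b", "song_c", "song_d"]
today = hmac_shuffle(playlist, b"2017-02-28")
# Adding a song preserves the relative order of the existing ones:
with_new = hmac_shuffle(playlist + ["song_e"], b"2017-02-28")
assert [s for s in with_new if s != "song_e"] == today
```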
Many music players have done this for a long time. In fact, I believe that's part of the reason they refer to that function as "shuffle" rather than "random".
Several years ago I used to use a music service that had a bunch of sliders, something like:
Favorites: Normal <----------> Very Frequently
Artist: Liked only <-----------> Wide variety
Prefer: Old <----Balanced----> New
Popularity: Hits only <--------> Fringe
It was great to be able to customize stations. I don't remember what service it was, but I think it did get purchased and folded into something else.
This was a great way of tweaking the "randomization", and really gets into understanding what people want to hear. In my case, I had different stations/playlists customized in different ways.
> My friends, who are also CS grads and should know better...
I know this is about being "technically correct" but I'd argue that even if you're pedantic and know the behavior of true randomness, the complaints are justified.
First, "pick a song from a uniform distribution" is an implementation detail, not the use case. The use case is "pick songs in a pleasant, novel order". If the implementer chose to do this using uniform randomness, that's their decision. If that implementation does not fulfill the use case, it's a bug, and also their responsibility.
Second, the function is frequently called "shuffle", so it even gives some hints about the expected implementation. No shuffling algorithm should ever result in duplicates (except across the boundary between the end of one pass through the playlist and the beginning of the next).
This is called "sampling without replacement" (https://en.wikipedia.org/wiki/Simple_random_sample) and lines up with the physical intuition users have for taking a playlist where each song appears once, randomly reordering it, and then playing the entire playlist.
Good implementations actually don't make an independent random choice each time, but do a random sort, play the whole list, then random sort again, play the whole list again, and so on. The only possible problem here is when the last song of one playthrough is the same as the first song of the next. But even that can be overcome by re-shuffling the follow-up pass.
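A sketch of that play-through-then-reshuffle loop, including the guard against a boundary repeat:

```python
import random

def endless_shuffle(songs):
    """Shuffle the full list, play it through, reshuffle, repeat.
    If a reshuffle would start with the song that just finished,
    shuffle again, so no song ever plays twice in a row."""
    order = list(songs)
    last = None
    while True:
        random.shuffle(order)
        while len(order) > 1 and order[0] == last:
            random.shuffle(order)  # avoid a repeat across the boundary
        yield from order
        last = order[-1]
```

Every window of one full pass contains each song exactly once, and adjacent passes never butt the same song together.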
I had a few issues with Spotify doing this. I even saw a post where an engineer claimed it was all psychological, but I was definitely getting repeats when I had queued up a large amount of music (e.g. over 40 hours). I think they only shuffle a certain percentage of the playlist and forget which songs have already been played after a while. Whether this was because they had a botched "smart" randomizer or because they in fact used true random to shuffle, I may never know.
I manually shuffle some playlists now. One nice thing about the desktop client is the clipboard integration. You can press cmd+a to highlight all the songs, cmd+c to copy, and then paste all the song URLs into a text editor. Then I use "Permute Lines" in Sublime Text to shuffle them, cmd+c again, and cmd+v to paste the songs back into Spotify.
Back when I used Winamp as my daily player, I was getting annoyed that its "shuffle" kept playing the same songs, and it would play the same songs in a row. So I would hear the songs C, A, F play; then the next day, again, the sequence would be C, A, F. Over time, I would start hearing the melody of the next song before it even started playing.
When I finally noticed this and realized that it could only mean it's not actually random, I researched it, and found out that once you hit shuffle/random on Winamp, it shuffled the songs once and kept that sequence forever.
And then, as it turned out, Winamp actually had a setting to select a truly random song each time. All I needed to cure my madness was a checkbox!
You can take it too far, though. When I listen to Venetian Snares on Spotify, they only play some songs off of 3 albums, despite the fact that they have about a dozen albums. If you're going to do that, tell people it's not actually random.
Interesting, and at first I was excited about the possibilities in something like D&D, where a series of bad rolls can have you feeling down. "I'm due for a critical hit any swing now..."
Players would love that! Make my hero feel more heroic! The inevitable comeback!
But then I thought about the inverse case -- you are doing really well, and now you are due for a failure. Or series of failures. That would feel awful.
We have a lot of emotions around dice rolling. I wonder what players really want from their dice. Would players prefer dice that are secretly weighted towards good rolls? Would they still want those dice if they knew they were weighted?
I made this specifically for my online D&D sessions. I can tell you that nobody was suspicious when I replaced the standard Math.random() with this version, and the "I think your dice function is broken" comments all of a sudden stopped happening.
In some competitive games like Dota and LoL, the random distribution for critical hits is weighted so that it ramps up over time, resetting after a successful crit.
(http://dota2.gamepedia.com/Random_distribution) This reduces both the chance of two high rolls coming up in a row and the likelihood of long stretches of low rolls.
It doesn't have the "you are doing really well, and now you are due for a failure" since "not a critical" isn't really that negative of an outcome.
For something like D&D, the same kind of "better-than-random" distribution just comes from adding dice. 4d6 is much more consistent than 1d21+3.
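A quick check of that claim: both expressions span 4-24 with a mean of 14, but the single die has roughly three times the variance. A sketch:

```python
from fractions import Fraction

def die_variance(sides):
    """Variance of one fair die with faces 1..sides: (sides^2 - 1) / 12."""
    return Fraction(sides ** 2 - 1, 12)

# Variances of independent dice add; a flat +3 bonus shifts the mean only.
var_4d6 = 4 * die_variance(6)
var_1d21 = die_variance(21)

assert var_4d6 == Fraction(35, 3)    # ~11.7
assert var_1d21 == Fraction(110, 3)  # ~36.7, about 3x as swingy
```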
In hobby time I have been exploring the storytelling possibility space of RPGs using card decks. I love playing cards, I love custom playing card designs, and I wish there were more impetus for players to bring decks with interesting designs to an RPG table, in the same way that dice are so varied and it's fun to see what people bring to a table.
I think there are interesting stories to tell in an RPG where you know the "weight" of your rolls, approximately how many "success" cards you have left, some choice in what you play from your hand.
«But then I thought about the inverse case -- you are doing really well, and now you are due for a failure. Or series of failures. That would feel awful.»
How awful it feels depends on how personally you take it. If the goal is to create an interesting story, a grand sequence of failures can be a very interesting story. Look at Fiasco, for instance, where some of the fun can be very explicitly "how do I make my character more miserable?" In those cases where you know you are about to hit a lot of failures in die rolls/card draws, you can lean into it. Try to explain why your character is having such awful luck, for instance. Maybe try to set up an elaborate Rube Goldberg design of failures that eventually cascades into the weirdest, grandest success, like a "drunken master" kung fu ballet.
Randomness is what makes a game more realistic, but true randomness will not make it more likable. True randomness dictates that in half of your games, your players will have a harder time achieving anything.
It is not uniformity of results that the players are looking for, it is uniformity of successes weighted by the importance of the rolls. Failing a perception check in an empty room is less important than failing a dodge roll in the final fight on your last health point.
Many games suggest that the gamemaster cheat on some crucial rolls. More games propose an alternate system that biases rolls based on a point system. Some even replace the dice entirely with a point system.
I really prefer those. A very simple biased-roll system I often use (Dk2) is that the GM can emphasize the actions of dangerous NPCs by adding special D6s to a D20 roll (they are special in that they give +3 on a 3, +6 on a 6, and +0 on other results). Once used, these dice are added to a pool that the players can use for their own actions.
Anyway, my favorite system is Amber Diceless RPG. It trades randomness for secrets. Every player is a bit of a backstabbing asshole who does not show all their skill on any action. If someone has 30 in strength, they may announce just 10 if they think it is enough for the action.
The winner in a fight is the one with the bigger score. Add a bonus if they use some kind of trick or ambush (recommended). It is surprisingly interesting for such a simple system.
Maybe a hybrid mode, where too many bad rolls bias the dice towards a more "true uniform" (lol) distribution, while a series of good rolls just keeps it actually fair. Anyway, nice work!
Better to have games without dice rolls. Unexpected events could come from other, non-random sources (like the combination of many factors involving other players' actions).
> I made a chatbot that rolled dice, and it was constantly criticized for being "broken" because four 3's would come up in a row.
> These accusations would come up even though they (all being computer science majors) know it's possible (although unlikely) for these events to happen. They just don't trust the black box.
I had the privilege of studying probability from G.-C. Rota. One of my favorite quotes from him was "Randomness is not what we expect", which he used to describe the phenomenon of people disbelieving that random data was actually random. Another great one was "This will become intuitive to you, once you adjust your intuition to the facts."
There was an old Flash video game I worked on a long time ago where I did exactly this. I had a boss with two main attacks, and I didn't want it to be super predictable A/B/A/B, so I had it pick between A and B randomly, then reweight the probabilities: if it picked A, instead of 50% A, 50% B it'd now be 25% A, 75% B. If it picked A again it'd be down to 12.5% A, 87.5% B. If B then got chosen, it'd flip-flop to 75% A, 25% B, etc. The result was that it mostly went back and forth between the two, but would do some attacks 2 or 3 times in a row before switching back to the other. You can actually play it right here and go direct to the Boss Fight if you want to: http://briancable.com/clock-legends/
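My reading of that scheme as code (the exact halving rule is inferred from the percentages given above, so treat it as a sketch):

```python
import random

def boss_attacks():
    """Two-attack picker with self-dampening odds: each time an attack
    is chosen, its probability is halved (dropping from 50% to 25% on
    the first pick), and choosing the other attack flips the bias the
    opposite way. Mostly alternates, with occasional short streaks."""
    p_a = 0.5  # probability of choosing attack "A"
    last = None
    while True:
        pick = "A" if random.random() < p_a else "B"
        if pick == "A":
            p_a = p_a / 2 if last == "A" else 0.25
        else:
            p_a = 1 - (1 - p_a) / 2 if last == "B" else 0.75
        last = pick
        yield pick
```

Each repeated pick halves its own odds again, so long streaks die off geometrically while switching stays the common case.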
A similar, simpler idea is sometimes used in games: you put all choices in a "bag", then draw from the bag until it's empty, then put everything back.
Tetris is the go-to example. Tetris has seven tetrominoes, and in most modern implementations you're guaranteed to see them in sets of seven in random order: http://tetris.wikia.com/wiki/Random_Generator
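The bag scheme fits in a few lines (Tetris's "Random Generator" is exactly this with the seven pieces):

```python
import random

def bag_randomizer(items):
    """Draw-from-a-bag randomness: shuffle one copy of every item,
    deal them all out, then refill and reshuffle. Guarantees every
    item appears exactly once per cycle, so droughts are bounded."""
    while True:
        bag = list(items)
        random.shuffle(bag)
        yield from bag

pieces = "IJLOSTZ"
gen = bag_randomizer(pieces)
first_seven = [next(gen) for _ in range(7)]
# first_seven is a permutation of the seven pieces
```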
This is pretty essential to make competitive play err on the side of skill rather than randomness: pro players can anticipate and plan for this. For fun, here's a recent Tetris head-to-head speed-run from Awesome Games Done Quick, with pretty good narration about the tactics involved: https://www.youtube.com/watch?v=PeNB4w99FiY&t=1h21m15s
Basically, the higher your chance to hit with an attack was, the sooner it would break a streak of misses and force a hit. This is definitely not random, but seems to be more satisfying to the player. Plus, it no doubt cuts down on the number of complaints on the forums about "unfair" RNG.
PlayerUnknown's Battlegrounds currently suffers from item spawns being "too random". You can go through several houses without finding a rifle, then suddenly find 2-3 practically on top of each other.
They could do with learning a little from other games and making it a little less random.
Ultimately you can see it as a bug in human reasoning but we're the ones playing the game.
For board games like "Settlers of Catan", where resources are generated based on rolls of 2d6, one could use an analog version of this with a cup containing 36 chits numbered 2-12 according to the exact 2d6 distribution, drawn _without_ replacement. You would still get the randomness of ordering, but over 36 draws/turns you would get a complete "average" distribution.
If that is a bug or a feature is left as an exercise for the reader.
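A sketch of that chit cup: one chit per ordered pair of die faces gives exactly the 2d6 distribution, thirty-six chits in all.

```python
import random
from collections import Counter

def catan_chit_cup():
    """36 chits, one for each ordered pair of d6 faces, so the values
    2-12 occur in exact 2d6 proportions (one 2, two 3s, ..., six 7s,
    ..., one 12). Shuffle and draw without replacement."""
    chits = [a + b for a in range(1, 7) for b in range(1, 7)]
    random.shuffle(chits)
    return chits

cup = catan_chit_cup()
counts = Counter(cup)
assert counts[7] == 6 and counts[2] == 1 and counts[12] == 1
```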
I often use Google's "flip a coin"[1] for trivial decisions, and the other day I was wondering why it came up heads almost every single time. I started to wonder if there was a browser RNG problem, or if the code was crazy, etc.
[1] https://www.google.com/search?q=google+flip+a+coin
At first I was confused, because statistical models that aren't temporally independent are very common.
But it's very clear from the comments that having dice that aren't independent between rolls is incredibly in demand :o, and having the right words to google can be tricky. (I feel like there's an important lesson there.)
Video games, especially competitive ones, do this to limit the effect of randomness on the outcome of the game, while still keeping the sequence of random events unpredictable enough to "feel" random and preventing simple exploits.
DoTA2 uses a simple distribution based on the number of "rolls" since the last successful one - P(N) = P0 * N, where P0 is the base probability and N is the number of rolls since the last successful one[1].
It keeps both "hot" and "cold" streaks from being too much of an issue, although that doesn't stop players from cursing the RNG gods when they lose.
[1] http://dota2.gamepedia.com/Random_distribution
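A sketch of that pseudo-random distribution (the constant c is the tunable knob; Valve derives it from the advertised probability, which I'm not reproducing here):

```python
import random

def prd_events(c):
    """Pseudo-random distribution: the Nth attempt since the last
    success succeeds with probability min(1, c * N), then N resets.
    Streaks are clamped in both directions: back-to-back successes
    are rarer than iid, and a success is guaranteed once c * N
    reaches 1."""
    n = 1
    while True:
        success = random.random() < min(1.0, c * n)
        n = 1 if success else n + 1
        yield success
```

With c = 0.05, for instance, the 20th attempt after a success is a guaranteed hit, so a drought can never run longer than 19 misses.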
> I made a chatbot that rolled dice, and it was constantly criticized for being "broken" because four 3's would come up in a row.
> These accusations would come up even though they (all being computer science majors) know it's possible (although unlikely) for these events to happen. They just don't trust the black box.
This reminds me of a talk [1] given at the Game Developers Conference (GDC) about the game Civilization, in which Sid Meier, the creator of said game, spent a bit of the time talking about the difference between fairness and perceived fairness. The talk is only an hour long and worth watching.
[1]: https://www.youtube.com/watch?v=AJ-auWfJTts
This reminds me of an article discussing the perceived outcome of RNG decisions in video games. In many types of games, the system will display a percentage chance of success for a given action, which allows the player to make risk assessments about possible choices. Unfortunately, unmodified RNG behavior creates an unpleasant experience because unweighted random outcomes feel "unfair" when failure streaks pop up; thus, game designers almost always introduce some type of magic cushioning to the RNG so that the user never faces too many repeated failures.
I'm probably very wrong, but I still feel there is some undiscovered science when it comes to RNG and the fallacy of the maturity of chances (the gambler's fallacy).
Einstein believed the universe was deterministic. Just because it appears to us that there is no correlation between independent events (the rolls of a die) does not mean that there isn't some underlying variable, one we are unable to measure or perceive, affecting the outcome of the rolls.
I wonder if this could be / has been applied to loot tables in video games in order to keep the player interested in playing.
I've designed a few loot tables and the Gambler's Fallacy is a criticism I often have to deal with when people don't understand why a given item hasn't dropped despite them having killed a mob enough times to statistically warrant it.
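One common response, if you do want to bend the odds, is "bad-luck protection": keep the base drop rate but cap the worst-case streak. A hypothetical sketch (parameter names are mine, not from any particular game):

```python
import random

def loot_drops(p, pity_cap):
    """Loot rolls at base rate p, with a pity timer: after
    pity_cap - 1 consecutive misses, the next kill is a guaranteed
    drop. The long-run rate stays close to p, but the drop a player
    feels 'statistically owed' can no longer be postponed forever."""
    misses = 0
    while True:
        drop = misses >= pity_cap - 1 or random.random() < p
        misses = 0 if drop else misses + 1
        yield drop
```

It doesn't silence gambler's-fallacy complaints, but it bounds how wrong the complainer can be.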
The quote at the end is meant as a joke but it's interesting how often this is true. A lot of magic tricks rely on being prepared for different outcomes, while often trying for the least likely one first. This unlikely outcome happens surprisingly often and therefore makes the effect even more unbelievably amazing.
I had a friend think of a playing card and any number (1-52). She picked the 6 of spades and the number 15 which is exactly the position where the card was located. It was only the third time I had done this trick with anybody.
Obviously, card and number picking is not uniformly random, especially when you influence their choice (e.g. "pick a large number"). But the odds of someone guessing the exact combination should still be extremely low.
A lot of what you see from David Blaine on TV is exactly this. He always has a backup plan but more often than not he doesn't need it.
It's always nagged me that statistical problems are scoped so small. Surely in saying there are 6 outcomes on a die you've obfuscated the billions of interactions between atoms and input possibilities in doing so. Thrower A and thrower B will undoubtedly throw slightly differently, which might actually constrain the outcomes and skew the 1-in-6 percentages?
To me it's similar to condensing 30 characters down to 5 via an algorithm. You can go one direction but not the other, and if your model is centered around the resulting 5 characters, it doesn't really reflect what's actually happening, which may skew the probabilities quite a bit. E.g., if the algorithm were "if the first letter is not q, then the first letter of the output is q", then saying each output has an equal chance of occurring would be flat-out wrong.
* I am not a statistician and have no idea what I'm talking about
For these kinds of exercises about fair dice and fair coins, it's a lot like the kind of jiggery-pokery that happens in Physics classes and stuff. "Under ideal circumstances . . .", "In a perfect vacuum . . .", etc.
Outside of games, for which this was explicitly created, these kinds of things are learning tools to understand distributions and dependent vs. independent events, and it also makes a separate point about assumptions.
In reality, if we were wanting to predict the outcome from a real dice-throwing event, we would either sample the results from actual people throwing dice, or we would simulate the results based on parameter inputs for exactly the types of things you are talking about.
Of course, no one really cares that much about dice, other than people who play board games with dice. :) So substitute any other stochastic method for generating an outcome that does matter, and the statistical approach will generally be the same: either sample or simulate.
Obviously there are exceptions, but that's really the basic idea.
Long ago I asked this related question on stackoverflow: https://stackoverflow.com/questions/5467174/how-to-implement... - it surprises me still that nobody has identified a classic algorithm for solving exactly this problem.
A simpler system is picking a random song, but excluding the last 20 songs played.
> These accusations would come up even though they (all being computer science majors) know it's possible (although unlikely) for these events to happen. They just don't trust the black box.
I am reminded of the approach that GamesByEmail.com used to address this criticism: http://www.gamesbyemail.com/News/DiceOMatic
http://www.gdcvault.com/play/1012186/The-Psychology-of-Game-...
More often than not, true RNG in game design takes a back seat to fun.