Well-known bettors will place bets knowing others will emulate them, moving the odds in favor of the real bet they have yet to place. I wouldn't be surprised if some of the big Vegas books are in on it, since they make their money long term on the vig and don't have to worry about short-term losses (and so enjoy any increase in betting volume those schemes generate).
There is an older 60 Minutes episode on Billy Walters that is a good watch for anyone interested in people trying to beat sports books.
Vegas always gets their cut, as you point out. You need to reliably win more than about 52.4% of your bets (110/210) to overcome the bet-$110-to-win-$100 edge.
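For reference, the break-even rate falls straight out of the payout structure: risking $110 to win $100 breaks even when 100p - 110(1 - p) = 0, i.e. p = 110/210 ≈ 52.38%. A quick sketch in Python (the numbers are just the standard -110 line):

```python
def breakeven_win_rate(risk, win):
    """Win probability at which expected profit per bet is zero."""
    # EV = win * p - risk * (1 - p) = 0  =>  p = risk / (risk + win)
    return risk / (risk + win)

def expected_profit(p, risk=110, win=100):
    """Expected profit per bet for a bettor who wins with probability p."""
    return win * p - risk * (1 - p)

print(round(breakeven_win_rate(110, 100), 4))  # 0.5238
print(round(expected_profit(0.525), 2))        # 0.25 -- a 52.5% bettor barely clears the vig
```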
In general, betting schemes tend to stop working in the long term, but can have large gains in the short term [1]. We created this example to encourage people to think about model tuning in the applications where they are domain experts and how it could benefit them (and hopefully the world in return).
Post author and co-founder of SigOpt (YC W15) here. Thanks for all the great questions and comments, I'll be around all day answering questions. Feel free to ask anything about the post or what we do at SigOpt (or how).
Vegas odds aren't even the most accurate odds to beat, i.e., you can already beat Vegas odds by looking at the lines at Pinnacle, 5Dimes, and 188BET and treating those as the true odds. It's called arbitrage betting. If you have a model that can truly beat the market, then it's those players that you want to beat.
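Treating a sharp book's prices as the "true" odds usually means stripping the overround (the book's built-in margin) first: convert each price to an implied probability, then normalize. A minimal sketch with decimal odds; the quoted prices here are made up:

```python
def implied_probability(decimal_odds):
    # A decimal price of 1.91 implies a 1 / 1.91 ~= 52.4% chance, vig included.
    return 1.0 / decimal_odds

def remove_vig(prices):
    """Normalize implied probabilities so they sum to 1 (strips the overround)."""
    raw = [implied_probability(o) for o in prices]
    total = sum(raw)  # > 1.0 whenever the book charges a margin
    return [p / total for p in raw]

# Hypothetical two-way market quoted at 1.91 / 1.91 (i.e. -110 on both sides):
print([round(p, 3) for p in remove_vig([1.91, 1.91])])  # [0.5, 0.5]
```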
Arbitrage betting is exploiting differences in money-line payoffs to place two or more bets that cannot lose regardless of the result of the sporting event. These opportunities do exist; however, sports books absolutely hate it, and some of the offshore ones will either ban you from playing or, in some cases, simply refuse to pay out your bet.
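For the two-outcome case the condition is easy to state: with decimal odds o1 and o2 from different books, a guaranteed profit exists whenever 1/o1 + 1/o2 < 1, and staking each side in proportion to its implied probability pays out the same amount either way. A sketch with invented prices:

```python
def find_arbitrage(o1, o2, bankroll=1000.0):
    """Return (stake1, stake2, guaranteed profit) for a two-way arb, or None."""
    margin = 1.0 / o1 + 1.0 / o2
    if margin >= 1.0:
        return None  # no arb: the combined prices still carry vig
    # Stake each side proportionally to its implied probability so that
    # both outcomes pay out exactly bankroll / margin.
    stake1 = bankroll * (1.0 / o1) / margin
    stake2 = bankroll * (1.0 / o2) / margin
    profit = bankroll / margin - bankroll  # identical whichever side wins
    return stake1, stake2, profit

# Hypothetical: book A offers 2.10 on the home side, book B 2.10 on the away side.
# 1/2.1 + 1/2.1 ~= 0.952 < 1, so $1000 splits 500/500 and locks in ~$50 either way.
result = find_arbitrage(2.10, 2.10)
```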
Great points. Stay tuned for a post sometime soon where we do something similar with Wall Street, where you can build hedging and adversarial trades into the model itself.
More very old, very dangerous sharks swimming in the same pool. Betting to win long term is not something most people are mathematically or emotionally equipped to do and a toy demonstration showing a handful of results means very little.
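On the "mathematically equipped" point: even a real edge is easy to squander by over-betting. The Kelly criterion gives the bankroll fraction that maximizes long-run growth; a hedged sketch for a single bet with net odds b (the win rates below are purely illustrative):

```python
def kelly_fraction(p, b):
    """Kelly-optimal bankroll fraction: f* = (b*p - q) / b, with q = 1 - p.

    p: probability of winning; b: net odds (profit per unit staked).
    A non-positive result means the bet has no edge and should be skipped.
    """
    q = 1.0 - p
    return (b * p - q) / b

print(round(kelly_fraction(0.55, 1.0), 3))        # 0.1 -- 55% at even money: stake 10%
print(round(kelly_fraction(0.53, 100 / 110), 3))  # ~0.013 -- thin edge at -110: stake ~1.3%
```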
It's not clear from the article what the input data is. I had a look in the repo, it wasn't clear from that either. Is it the normal fare you'd expect?
I've heard of prop betting firms using stuff like "distance travelled by the away team" and other logical things like that.
We give a high level overview in footnote #4 of the post, but more detail can be found in the code [1]. We tried to pick a relatively small set of features (and kept those picks constant) in order to isolate the model tuning gains from pure feature selection. You can fork the code and try any other interesting features you think would make it better (there is a ton of data we didn't have the chance to look at).
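For readers who want the shape of the experiment without opening the repo: hold the feature set fixed and let only the hyperparameters vary, so any gain is attributable to tuning rather than feature selection. The sketch below uses plain random search and an invented objective (it is not the post's model or the SigOpt API, just the idea):

```python
import random

def cross_val_score(params):
    """Stand-in objective; in the real example this would be backtested
    accuracy on held-out games with a FIXED feature set."""
    # Toy surface with its optimum near learning_rate=0.3, depth=5.
    x, d = params["learning_rate"], params["depth"]
    return 1.0 - (x - 0.3) ** 2 - 0.01 * (d - 5) ** 2

def random_search(n_trials=200, seed=0):
    """Baseline tuner: sample hyperparameters at random, keep the best.
    (A Bayesian optimizer is more sample-efficient, but the loop is the same.)"""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {"learning_rate": rng.uniform(0.0, 1.0),
                  "depth": rng.randint(1, 10)}
        score = cross_val_score(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = random_search()
```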
[1]: http://www.wsj.com/articles/a-fantasy-sports-wizards-winning...

All of the code can be found here: https://github.com/sigopt/sigopt-examples
More about how SigOpt works here: https://sigopt.com/research
Other discussions here: https://www.reddit.com/r/MachineLearning/comments/3yy0vp/usi...
and here: https://news.ycombinator.com/item?id=10819170

[1]: https://github.com/sigopt/sigopt-examples/blob/master/sigopt...
joshmn|10 years ago
I'm not salty or anything, and it wouldn't have influenced whether or not I read the article. It just puts a bad taste in my mouth.