top | item 2139000

Twitter Can Predict The Stock Market

80 points | pmorici | 15 years ago | wired.com

53 comments

[+] 3pt14159|15 years ago|reply
Why publish this if it works? One article, a fleeting bit of fame, and a truckload of copycats. It would be more impressive if the article had read: "Researchers, upon discovering tweets predict the stock market, make $100mm before disclosing research to the public." No more need for university grants, and much more believable findings. As an aside, data analysis can be tricky. I'm pretty wary of loosely defined research objectives. For example, why is it 3 days? Why is it those 72 words? Over-fitting is a real problem with prediction-based work.
[+] futuremint|15 years ago|reply
I've always been fascinated by the romantic idea of writing my own trading engine.

So I did some research, and most people who have written them will tell you that in cases like this, training on past data doesn't correlate well with current & future data.

The stock market of 2011 is not the market of 2008.

But what do I know, not like I've actually done it :)

[+] achompas|15 years ago|reply
Correction--it worked. The authors chose a two-year-old sample during which the Dow Jones fell 30.7%. I seriously doubt this will have any predictive power outside of that sample.
[+] charlief|15 years ago|reply
I have seen the study a few times, most recently in http://news.ycombinator.com/item?id=1803505 . I think the big problem is selection bias:

The Dow Industrial Average over the last 10 years

http://www.google.com/finance?chddm=997050&q=INDEXDJX:.D...

* Notice that the end of 2008 was unusual for the index. 2008 had the most herded and fearful stock market in recent history. If at any time the stock market was correlated to mood, it would be then. I am not sure whether a 2008 analysis can be generalized to any year but 2008.

* They have not done an analysis on 2009 or 2010, and they chose to split the analysis and pick December 2008 based on a qualitative assumption: the "stabilization of DJIA values after considerable volatility in previous months and the absence of any unusual or significant sociocultural events". December 2008 was still very much in the midst of the crisis.

* For their December "stable" data set, they only used 30 days. That is a limited sample size. There is a big pool to draw from since 2009, as the market has been relatively stable.

[+] achompas|15 years ago|reply
Right, upvoted. I read the paper, and the authors are very selective in their sample choice. How could someone choose 2008 as a sample? I'd be much more impressed if they used a larger one.

Also, some food for thought: it would be interesting to see someone test Twitter moods as an instrumental variable for a project.

[+] Judson|15 years ago|reply
I may be wrong, but an initial success rate of 73.3% before adding the emotional data seems like overfitting.
[+] wildwood|15 years ago|reply
Depending on how they defined success, 73% might be achievable just by looking at serial correlation. Up days and down days tend to run in streaks. If they defined success as predicting 'up' or 'down' for the day for the DJIA, just going with the most popular result of the last N days could work.

But overfitting is definitely still a concern. Looking at the overall trend for the Dow Jones in 2008, I wonder what the success rate of an indicator that always said 'down' would be.
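A quick sketch of the baselines described above, on synthetic coin-flip data (the real numbers would require actual 2008 DJIA closes; the 45% up-day rate here is just an assumed stand-in for a down-trending year):

```python
import random

random.seed(0)

# Synthetic stand-in for daily directions: True = up day, False = down day.
# In a crash year most days trend down, so weight the coin toward 'down'.
days = [random.random() < 0.45 for _ in range(250)]

# Baseline 1: always predict 'down'.
always_down_acc = sum(1 for d in days if not d) / len(days)

# Baseline 2: predict the majority direction of the last N days.
N = 3
hits = 0
for i in range(N, len(days)):
    window = days[i - N:i]
    prediction = sum(window) > N / 2  # up if most of the recent days were up
    hits += prediction == days[i]
streak_acc = hits / (len(days) - N)

print(f"always-down accuracy: {always_down_acc:.1%}")
print(f"majority-of-last-{N} accuracy: {streak_acc:.1%}")
```

On real, trending data the streak baseline benefits from serial correlation, which is exactly why a headline accuracy number is meaningless without such a comparison.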

[+] iwwr|15 years ago|reply
The trouble with these financial models is that once they become common knowledge, it's too late. The market absorbs these algorithms into its pricing mechanism and leaves no further arbitrage profits.
[+] seb|15 years ago|reply
So can Google. Supposedly Sergey even suggested starting a hedge fund, but it would probably be insider trading if they made their decisions based on user data.
[+] daeken|15 years ago|reply
How could it be insider trading, if they're not doing anything with GOOG?
[+] vannevar|15 years ago|reply
I would need to hear more to be convinced. The fact that they had a large number of signals they were tracking, without a clear rationale for any one of them, is troubling.

Consider a set of random signals; arbitrarily select one as the benchmark. Then from among the rest take the signal that best predicts the daily direction of the benchmark. That signal will likely have much better than 50% accuracy because by definition the worst signal will be around 50% accurate (if it were any less it would have an equally useful inverse correlation).
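That selection effect is easy to simulate. This sketch generates purely random signals and a random benchmark, then reports the best in-sample accuracy after folding inverse correlations, as the comment describes (the counts are assumptions chosen to echo the paper's 72-word list):

```python
import random

random.seed(42)

DAYS = 60        # roughly one quarter of trading days
N_SIGNALS = 72   # same count as the paper's word list, for flavor

# A random benchmark and a pile of equally random candidate signals.
benchmark = [random.random() < 0.5 for _ in range(DAYS)]
signals = [[random.random() < 0.5 for _ in range(DAYS)]
           for _ in range(N_SIGNALS)]

def accuracy(signal, target):
    agree = sum(1 for s, t in zip(signal, target) if s == t)
    frac = agree / len(target)
    return max(frac, 1 - frac)  # an inverted signal is just as 'useful'

best = max(accuracy(s, benchmark) for s in signals)
print(f"best of {N_SIGNALS} random signals: {best:.1%} in-sample accuracy")
```

Even with no real relationship anywhere, the winner of this search comfortably beats a coin flip in-sample, which is the multiple-comparisons trap in a nutshell.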

[+] Dn_Ab|15 years ago|reply
I know very little about trading, but even I can see a whole bunch of red flags here. First, if it has just made it to the news, then it's probably a decade too late to take advantage of. Recently there was a news article about how firms have software that reads news events and trades on them, except that this 'news' was about something that had already been happening for years. Second, Twitter contains a subset of the information in the market, so it's no surprise that there will be some correlation.

Then: predicting the up or down movement of a stock is very vague. At what time scale? What sort of trades, and what response times, are required to execute on it? What are its drawdowns like? Does it account for taxes, commission fees, etc.? Next, the use of a complex nonlinear learning model with lots of parameters raises alarm bells: these tend to be very susceptible to noise, trading data is highly correlated, and typical regularization methods often do not suffice. Then there is the whole issue of over-fitting in general, and of the data used for training (size, survivorship bias, accounting for splits and what not), which makes the whole thing very hand-wavy. Without information as basic as the rate of return, the stated 83% accuracy is meaningless. As with all things, it's easy to get results that work within the limited and safe confines of academic testing, but actually shipping a working product is another story.

There has always been a draw to beating the stock market. And these days there is nothing more romantic than doing so using Artificial Intelligence! But I think the most important part of any trading strategy is to be made up of parts that are constantly being swapped out and replaced based on research. You can't just throw a machine learning algorithm at it and think the job is done; the thing will likely only profit for a couple of microseconds. As an aside, though, I would not be surprised if one of the [anti]spam, antivirus, botnet, or HFT arms races will one day produce AI.
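The noise-susceptibility point above can be made concrete with a toy experiment: fit a maximally flexible model (here 1-nearest-neighbour, used as an assumed stand-in for "complex model with lots of parameters") on pure-noise returns and compare in-sample versus out-of-sample accuracy:

```python
import random

random.seed(7)

# Pure-noise daily 'returns': independent Gaussians, so no real signal exists.
returns = [random.gauss(0, 1) for _ in range(400)]

LAGS = 5  # features: the previous 5 returns; target: today's direction

def make_xy(series):
    X = [series[i - LAGS:i] for i in range(LAGS, len(series))]
    y = [series[i] > 0 for i in range(LAGS, len(series))]
    return X, y

train_X, train_y = make_xy(returns[:300])
test_X, test_y = make_xy(returns[300:])

# 1-nearest-neighbour: the limiting case of a flexible model that
# memorizes its training data instead of learning structure.
def predict(x):
    i = min(range(len(train_X)),
            key=lambda j: sum((a - b) ** 2 for a, b in zip(train_X[j], x)))
    return train_y[i]

train_acc = sum(predict(x) == y for x, y in zip(train_X, train_y)) / len(train_y)
test_acc = sum(predict(x) == y for x, y in zip(test_X, test_y)) / len(test_y)
print(f"in-sample accuracy: {train_acc:.0%}, out-of-sample: {test_acc:.0%}")
```

The model scores perfectly in-sample (each point matches itself) while out-of-sample accuracy sits near a coin flip, which is what an 83% figure without out-of-sample detail should make you suspect.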

[+] hogu|15 years ago|reply
I skimmed the paper, but I couldn't find much information on how they did the cross validation (i.e., which dates they trained on, and which dates they tested the prediction on). Also, I do believe that tweet sentiment can predict the stock market, but not on such a large timescale. I would guess that any analyst reading the news could produce a sentiment estimate at least as good as the Twitter opinion finder's. I think the Twitter opinion finder is useful when you want to measure sentiment at a rate higher than humans can manage.
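For time series, the cross-validation question above matters a lot: ordinary shuffled k-fold leaks future information into training. A minimal sketch of the walk-forward alternative (window lengths are arbitrary assumptions):

```python
# Walk-forward splits: train only on data strictly before the test window,
# so the model never sees the future. 'Dates' here are just day indices.
def walk_forward(n_days, train_len, test_len):
    splits = []
    start = 0
    while start + train_len + test_len <= n_days:
        train = range(start, start + train_len)
        test = range(start + train_len, start + train_len + test_len)
        splits.append((train, test))
        start += test_len  # slide the whole window forward
    return splits

for train, test in walk_forward(n_days=120, train_len=60, test_len=20):
    print(f"train days {train.start}-{train.stop - 1}, "
          f"test days {test.start}-{test.stop - 1}")
```

A paper reporting accuracy without stating which of these windows the number comes from is hard to evaluate, which is the complaint here.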
[+] fbnt|15 years ago|reply
I remember doing some research about this a while ago. Getting some sort of text-based emotional index isn't trivial at all; there are a few barely viable solutions (Google's Prediction API and Bayes-based algorithms), but they aren't really accurate. This has also been tried in the past by startups of TechCrunch fame such as stockmood.com, all of which failed miserably. Props to Twitter or anyone who succeeds at this.
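For reference, the Bayes-based approach mentioned above is usually a word-count naive Bayes classifier. A toy sketch with a hypothetical four-tweet training set (real systems need vastly more data, which is partly why accuracy is so hard):

```python
import math
from collections import Counter

# Hypothetical labeled tweets; a real system needs far more data than this.
train = [
    ("markets rally stocks soar great day", "pos"),
    ("love this rally feeling calm and happy", "pos"),
    ("fear panic stocks crash terrible losses", "neg"),
    ("worried anxious market tanking bad news", "neg"),
]

counts = {"pos": Counter(), "neg": Counter()}
for text, label in train:
    counts[label].update(text.split())

vocab = {w for c in counts.values() for w in c}

def score(text, label):
    # log P(label) + sum of log P(word | label), with add-one smoothing
    total = sum(counts[label].values())
    s = math.log(0.5)  # equal class priors in this toy set
    for w in text.split():
        s += math.log((counts[label][w] + 1) / (total + len(vocab)))
    return s

def classify(text):
    return max(("pos", "neg"), key=lambda lab: score(text, lab))

print(classify("stocks rally on happy news"))    # leans positive
print(classify("panic and fear in the market"))  # leans negative
```

The weakness is visible even here: the model only counts words, so negation, sarcasm, and finance-specific vocabulary ("short", "bearish") are all invisible to it.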
[+] zackattack|15 years ago|reply
I have been doing research on this problem, send me an email if you'd like to connect.
[+] achompas|15 years ago|reply
Edit: removed some points b/c charlief made them more succinctly.

An even better question: is the relationship causal? The researchers use Granger causality analysis to test their hypothesis. Wikipedia tells me this analysis "may produce misleading results when the true relationship involves three or more variables." [2] By definition, Twitter and the DJIA are macro aggregates of a number of factors. How could the researchers apply Granger here?

[1] See Table 1 at http://www.sca.isr.umich.edu/documents.php?c=c

[2] http://en.wikipedia.org/wiki/Granger_causality#Limitations
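For context on the Granger analysis being questioned above, the core idea is just comparing a restricted autoregression against one augmented with the other series' lags. A minimal sketch on synthetic data where x genuinely leads y (coefficients and lengths are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic series: y is partly driven by x's previous value,
# so x should 'Granger-cause' y but not vice versa.
n = 300
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * x[t - 1] + rng.normal()

def rss(target, predictors):
    """Residual sum of squares of an OLS fit with intercept."""
    A = np.column_stack([np.ones(len(target))] + predictors)
    beta, *_ = np.linalg.lstsq(A, target, rcond=None)
    resid = target - A @ beta
    return float(resid @ resid)

# Does adding lagged x improve the prediction of y beyond y's own lag?
target = y[1:]
restricted = rss(target, [y[:-1]])        # y_t ~ y_{t-1}
full = rss(target, [y[:-1], x[:-1]])      # y_t ~ y_{t-1} + x_{t-1}
improvement = (restricted - full) / restricted
print(f"RSS reduction from adding lagged x: {improvement:.1%}")
```

This is only the numerator of the proper F-test (statsmodels' `grangercausalitytests` does the full version), and it inherits exactly the limitation quoted above: an omitted third variable driving both series produces the same RSS reduction.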

[+] grantbachman|15 years ago|reply
I find this interesting because the consensus among the economic community is that markets are highly efficient; that is, information is reflected immediately in stock market prices. This result suggests information exists that is not being reflected. That's why I'm skeptical.
[+] loewenskind|15 years ago|reply
>the consensus among the economic community is that markets are highly efficient

Really? There are some true believers out there under this impression, but I didn't think anyone credible was. It wasn't so long ago that someone showed that perfectly efficient markets would imply P=NP.

EDIT: I'm not the one who downvoted you.

[+] damoncali|15 years ago|reply
Just because information is reflected immediately doesn't mean that it's reflected correctly.
[+] bhickey|15 years ago|reply
Is Twitter predicting the market or are traders moving it based on perception of prediction?
[+] mikeleeorg|15 years ago|reply
Out of curiosity, has anyone used, or does anyone know someone who uses, http://stocktwits.com/ to inform their trading decisions -- and had a positive outcome? I haven't; I'm just curious.
[+] nowarninglabel|15 years ago|reply
Haven't used it, but I'm not sure why most would when tools like this are already built into most online brokerages. Most of the trading advice I have seen falls into two camps: mass-market appeal such as this, or obscure and sometimes secretive forums.
[+] palewery|15 years ago|reply
If you can decode a way to 'beat the market', that means that once you start beating the market, someone else is losing. They will adjust their trading techniques, and your "algorithms" will then be wrong.
[+] adamtmca|15 years ago|reply
I think you misunderstood the efficient market hypothesis.
[+] scrrr|15 years ago|reply
I doubt those HN users who think this is working will say so. ;)
[+] andrewcamel|15 years ago|reply
Can someone please link to a download of the OpinionFinder module?