Another conclusion to draw from this article (which I really enjoyed, by the way) is that Big Data has been turned into one of the most abstract buzzwords ever. You thought "cloud" was vague? "Big Data" is far worse.
I can't count the number of times I'll be talking to some sales rep and they'll describe how they scan the data within whatever application they're demoing and "suggest" items using "big data techniques". In almost all cases they're talking about a few thousand or hundred thousand records, tops.
I've found that when non-hardcore techies talk about Big Data, what they really mean is "we have some data," versus before, when they had zero data.
From the article:
"Consultants urge the data-naive to wise up to the potential of big data. A recent report from the McKinsey Global Institute reckoned that the US healthcare system could save $300bn a year – $1,000 per American – through better integration and analysis of the data produced by everything from clinical trials to health insurance transactions to smart running shoes."
What these consultants mean is that by having just some data, compared to the siloed data that is the norm in US healthcare, they could save a lot, and they're right. My previous company had a large data set (20+ million patients) and we'd find millions of dollars of savings opportunities for every hospital we implemented in, but that's because we had the data, not because we were running some kind of non-causal correlation analysis like the article references. It was just because we could actually run queries on a data set.
-----
Off Topic - how annoying is it that when you copy & paste from the FT, they preface your copy with the following text?
High quality global journalism requires investment. Please share this article with others using the link below, do not cut & paste the article. See our Ts&Cs and Copyright Policy for more detail. Email [email protected] to buy additional rights. http://www.ft.com/cms/s/2/21a6e7d8-b479-11e3-a09a-00144feabd...
Less than a month ago I was at an event about Big Data in marketing. One of the speakers spoke about how they used user data to improve their client's brand experience. It was very effective and I agree they did something extraordinary.
I then asked what tools they used. He responded with a well-known relational database. I then asked the total size of their dataset, with a good idea of what the upper bounds would be. He responded "around 100 million events" since the product started, 6 months ago.
It's really sad because they may end up under fire despite the effectiveness of their work.
Big Data is a lot like teen sex.
But, like "cloud" or "web 2.0" or any similar buzzword, there obviously is some substance to it, however unspecific, un-novel, or abused it may be. It just breaks into unsatisfying mush when you look at it too closely.
Web 2.0 was some sort of shift beyond web 1.0: the line between publisher and consumer melted. The cloud is etherealizing computing and data. There was a thread a few days ago about the film Her. "Where is Samantha?" (the AI) is a borderline nonsensical question; it doesn't even occur to a viewer. That's because people are used to the cloud as an idea now. It doesn't really matter that servers, replication, dumb clients, remote data and the rest were invented a long time ago.
I enjoyed your post, nemesisj.
Within your field of longitudinal patient data, if I understand what you have written, your large datasets have simply been given a new name (parenthetically, Big Data), and you could get what you need to save money without the newfangled algorithms.

Within academic bioscience, I think there is broad consensus on what Big Data is - I have not seen much argument at all - however, it is still very hard to define within that field. The best I can do, over this cup of coffee, is to say that there is a clear distinction between the study of a single gene, or up to a few pathways, and the computational analysis of multiple omics (genomics, metabolomics and proteomics) datasets. I know that definition is terribly lacking, and I am fighting the urge to delete it for the sake of getting the post completed.

Anyway, Big Data is clearly changing the academic biosciences through the funding trend. That is, grants with a computational focus, or sub-focus, certainly seem to be doing comparably well. I mention this because today's academic funding trends influence the direction of tomorrow's startups, as those being trained are disproportionately within the better-funded labs and draw upon that experience when forming companies. So I personally believe this Big Data thing, however it is best defined overall or within a given field, is in some way something new, and will continue to shape the startup sphere for years to come, especially in the areas of genomics, metabolomics and proteomics.

This is my first post :) Thanks!
Out of curiosity, when does it effectively become "big data"?
I ask not to be snarky, but it might be the case that it's "big data" to someone else, but not necessarily to you. I figured it was a relative term for your industry/business, but the hacker crowd definitely seems to peg that amount in the millions of data points before calling it big data at all.
Seems fair, but I'd rather clarify.
Re: copying from FT, if you're using Firefox you can set dom.event.clipboardevents.enabled to false to get around that. Will probably break copying in some web apps.
"But while big data promise much to scientists, entrepreneurs and governments, they are doomed to disappoint us if we ignore some very familiar statistical lessons.
“There are a lot of small data problems that occur in big data,” says Spiegelhalter. “They don’t disappear because you’ve got lots of the stuff. They get worse.”"
This should be the main learning point. Humans can be astonishingly bad at dealing with stats and biases, which can lead to erroneous decisions being made. If you want an example where such decisions by very smart people can have catastrophic consequences, look up the Challenger disaster [1].
I rarely see people stating their assumptions upfront, which doesn't help the problem (I guess it's not cool to admit potential weaknesses). The more people/companies that get into 'big data' (without adequate training) the more false positives we're going to see.
[1] http://en.wikipedia.org/wiki/Space_Shuttle_Challenger_disast...
This article reminds me of the argument [0] between Noam Chomsky [1] and Peter Norvig [2]. TL;DR (paraphrased with hyperbole) Chomsky claims the statistical AI of Norvig is a fancy sideshow that doesn't understand _why_ it is doing a thing. It just throws gigabytes of data at an ensemble and comes out with an answer.
[0] - http://www.theatlantic.com/technology/archive/2012/11/noam-c...
[1] - http://en.wikipedia.org/wiki/Noam_Chomsky
[2] - http://en.wikipedia.org/wiki/Peter_Norvig
Norvig's rebuttal: http://norvig.com/chomsky.html
This analogy is particularly illuminating: "“The quest for ‘artificial flight’ succeeded when the Wright brothers and others stopped imitating birds and started … learning about aerodynamics,” Stuart Russell and Peter Norvig write in their leading textbook, Artificial Intelligence: A Modern Approach. AI started working when it ditched humans as a model. That’s the thrust of the analogy: Airplanes don’t flap their wings; why should computers think?"
While the Norvig-Chomsky debate is about the philosophy of the science of AI, it has practical implications for practitioners, who tend to apply statistical techniques as if they were popping a pill. Engineers applying statistical learning should understand the limitations of the techniques, as outlined by Chomsky in the debate. The outcome of the Chomsky-Norvig (or Hofstadter vs. everyone else in CS) debate is less important than the arguments put forth by both groups.
I think they are both wrong. You need better models than just throwing lots of data at something simple, which is the approach Norvig favors. But they are still statistical models at some level.
> a provocative essay published in Wired in 2008, “with enough data, the numbers speak for themselves”
I think that's indicative of Wired's breathless enthusiasm for technology, which turned me off buying the print version many years ago.
Scrape away some of the hyperbole and it is true that data driven management has made many companies more competitive and, if I dare mention the hobgoblin, efficient.
Hunches and ideas can only get you so far. It is important to visit the data gemba and do the genchi genbutsu.
http://en.wikipedia.org/wiki/Gemba
http://en.wikipedia.org/wiki/Gembutsu
It seems pretty much everything they write about is supposed to change the world in a major paradigm shift.
I'm much more impressed when someone can squeeze information out of small data. W. S. Gosset was extracting tons of information from as few as two observations. I'm very grateful that my advisor guided my cohort to work with two-observation MLE in many contexts. This type of practice focuses the analyst on squeezing out as much information as possible. When applied to big data, this approach can be very useful. Big data comes with data-wrangling challenges, but if you don't carefully squeeze out information, you'll be leaving tons and tons of it on the table.
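To make the two-observation spirit concrete, here is a minimal sketch using Gosset's Student-t interval for the mean from n = 2 points; the `two_point_ci` helper and the sample numbers are my own hypothetical illustration, assuming normally distributed observations:

```python
import math

def two_point_ci(x1, x2, t_crit=12.706):
    """95% CI for the mean from exactly two observations.

    t_crit is the two-sided 97.5% Student-t quantile with df = 1
    (roughly 12.706), the price you pay for having only n = 2.
    """
    mean = (x1 + x2) / 2
    s = abs(x1 - x2) / math.sqrt(2)        # sample std dev with n - 1 = 1
    half_width = t_crit * s / math.sqrt(2)  # t * s / sqrt(n), with n = 2
    return mean - half_width, mean + half_width

lo, hi = two_point_ci(9.8, 10.2)  # two measurements -> a real, exact interval
```

The interval is wide (the df = 1 quantile is huge), but it is a legitimate inference from two data points, which is the point of the comment above.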
The misconceptions about big data are similar to those surrounding the word science.
Many people associate "science" with things: cells, microscopes, the inner workings of the body. But science isn't a set of things; it's a process, a method of thinking, that can be applied to any facet of life.
Big data is similar, in my opinion. It's not so much about the stuff — the size or diversity of a company's datasets. It has more to do with the types of observations you're making and the statistical methods involved.
This distinction is important for two reasons:
1. If Big Data is recognized as a process rather than a circumstance, businesses will be more deliberate in deciding whether to use the methods. They will weigh the benefits of, say, MapReduce against other approaches.
2. The idea that "Big Data" techniques have everything to do with size is somewhat misleading. A comprehensive query of a 50,000 user dataset can be more computationally expensive than a simple operation on a 100,000-record dataset.
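Point 2 can be illustrated with rough operation counts; the numbers below are hypothetical, assuming a pairwise (quadratic) analysis on the smaller dataset versus a single linear scan of the larger one:

```python
# Cost depends on the operation, not just the row count.
n_small, n_large = 50_000, 100_000

# e.g. a user-similarity join that compares every pair of users
pairwise_ops = n_small * (n_small - 1) // 2

# e.g. a single aggregate scan over the bigger table
linear_ops = n_large

# The "small" dataset's comprehensive query dwarfs the "big" one's scan.
ratio = pairwise_ops / linear_ops
```

Under these assumptions the 50,000-row pairwise job costs on the order of a billion comparisons, thousands of times more than scanning 100,000 rows once.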
It's the misconception that the observations you can measure match the real distribution of the underlying events. Even professional data people often get that wrong, and it's not strictly limited to big data.
One of the most obvious examples was this one:
A data set of all known meteorite landings [1] turns into "Every meteorite fall on earth mapped" [2], which looks like a world population map sprinkled with a few deserts known for their meteorite-hunter tourism.
The actual distribution can theoretically be described as a curve falling towards the poles. [3]
While this example is pretty obvious, one could expect similar observation biases in other data sources. A danger lies where data analysts do not bother to investigate what their data actually represents, and then go on to present their conclusions as if they were some kind of universal truth.
[1] http://visualizing.org/datasets/meteorite-landings
[2] http://www.theguardian.com/news/datablog/interactive/2013/fe...
[3] http://articles.adsabs.harvard.edu//full/1964Metic...2..271H...
previous discussion of this: https://news.ycombinator.com/item?id=5240782
Agreed. The buzzword "Big Data" has nothing to do with the actual size of a given dataset; it is about gathering as much data (really, metadata) as possible and finding novel ways to extract value from that data.
I fear that now that SOAP and enterprise service buses have gone their way, they are looking for a new buzzword to sell. More solutions looking for problems...
I find it amusing that the article talks about big mistakes in polling data, when the clear winner of the last two US elections is one Nate Silver, who aggregated polls to get predictions so close to the actual results, one wonders why people actually vote anymore.
Now, just like with every other technological solution, we only learn about the limits of its use by overuse. There's plenty of people out there storing large amounts of data and getting no valuable conclusions out of it. But the fact that many people will fail doesn't mean the concept is not worth pursuing.
Chasing what is cool is a pretty dangerous impulse. The trick is to be able to tell when it can pay off, and to quickly learn when it will not, and cut your losses. Maybe you don't need big data, just like maybe your shiny cutting edge library might not be ready for production.
Nate's approach is based on evaluating the quality of the various polls - which is the thrust of the FT article. In fact he actively weighted each of the polls & corrected for known biases.
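As a rough sketch of what "weighting polls and correcting for biases" can mean mechanically (the weighting scheme, the `aggregate` helper, and the numbers here are my own invention for illustration, not Silver's actual model):

```python
def aggregate(polls):
    """polls: list of (share, sample_size, quality 0..1, house_bias).

    Weight each poll by pollster quality and sqrt(sample size), and
    subtract its known house effect before averaging.
    """
    num = den = 0.0
    for share, n, quality, bias in polls:
        w = quality * n ** 0.5        # bigger, better polls count more
        num += w * (share - bias)     # correct each poll before averaging
        den += w
    return num / den

# Two hypothetical polls: one large and high-quality leaning +1 point,
# one small and mediocre leaning -2 points.
est = aggregate([(0.52, 900, 0.9, 0.01), (0.48, 400, 0.6, -0.02)])
```

The design point is that the aggregate is driven by an explicit model of each poll's reliability, which is exactly the "quality of the various polls" evaluation mentioned above.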
This is my favorite line and the one that damns so many "big data" efforts:
"They cared about correlation rather than causation."
Analytics are a tool to help find correlations and patterns so that humans can do the hard work of determining and testing for causation. Computers are doing their jobs; humans aren't.
The “with enough data, the numbers speak for themselves” statement has several meanings.
In one sense, if you can observe real phenomena directly, you don't have to guess at what is happening. Businesses that collect troves of data may need statistics "less" because the sample size may approach the population size.
But calculating basic (mean, standard deviation, etc.) statistics is hardly the most interesting part. Inferential statistics is often more useful: how does one variable affect another?
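For instance, the simplest version of "how does one variable affect another?" is a one-variable least-squares slope; this small sketch (with hypothetical data) computes it from scratch:

```python
def ols_slope(xs, ys):
    """Slope of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var  # change in y per unit change in x

# Hypothetical: y grows by roughly 2 for every unit of x.
slope = ols_slope([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
```

Even this one number is an inferential claim (an estimated effect), which is already a step beyond the mean and standard deviation.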
As the article points out, the "numbers speak for themselves" statement may also be interpreted as "traditional statistical methods (which you might call theory-driven) are less important as you get more data". I don't want to wade into the theory-driven vs. exploratory argument, because I think they both have their places. Both are important, and anyone who says that only one matters is half blind.
Here is my main point: data -- in the senses that many people care about; e.g. prediction, intuition, or causation -- does not speak for itself. The difficult task of thinking and reasoning about data is, by definition, driven by both the data and the reasoning. So I'm a big proponent of (1) making your model clear and (2) sharing your model along with your interpretations. (This is analogous to sharing your logic when you make a conclusion; hardly a controversial claim.)
"Facebook’s mission is to give people the power to share and make the world more open and connected."
What it actually does... (that will be left to the reader.)
"Big Data" is often sold as one thing by Enterprise software folks. But what value the data, or the processing of it, actually has usually depends much more on the user and their context (like FB!), and usually doesn't fit as nicely onto a PPT slide.
Articles like this usually confuse the PR definition and the analyst definition.
A few other comments have raised this point, but Big Data is basically the new Web 2.0. Aside from being a buzzword, as a term it's so nebulous that half of the articles about it don't really define what it is. When does "data" become "big data"?
Conclusion: "Big Data" is a stupid buzzword and it makes me cringe every time I'm forced to say it to sell some new solution or frame something in a way someone who barely knows anything about computer science can understand.
It's nebulous. I've seen it applied to machine learning, data management, data transfer, etc. These are all things that existed long before the term, but bloggers just won't STFU about it. Businesses, systems, etc. generate data. If you don't analyze that data to test your hypotheses and theories, at the end of the day, you don't understand your own business and are relying on intuition for decision making.
There is definitely value to big data, but isn't it also a form of legitimizing stereotypes, at least in some cases? I mean, the general premise of big data, is to glean conclusions and new knowledge of the world from billions of records. When humans are the source of the data that is being extracted and analyzed, are the conclusions not stereotypes of those individuals, unless the correlation is 100%? This might be ok, and even useful, when trying to optimize clicks on ads, but what about when the government uses it to make policy decisions?
if i work for facebook and i want to figure out something about my users, isn't it safe to say N = All since the data im accessing is all user data from fb?
it's easy to go wrong with big data, and although the article glossed over some fairly important things (assuming the people who work on these datasets are much dumber than they are in reality), they're right on about the idea that the scope and scale of what big data promises may be too grandiose for its capabilities
Whilst, in the example you provide, it might be the case that "N = all", the cautionary tale offered in the article is that you always need to make sure you are asking the right question, and it is pretty easy to confuse yourself.
So you said "if i work for facebook and i want to figure out something about my users", and for whatever you were doing, looking at your existing user base might be the right thing to do. Perhaps, though, you actually want to know something about all your potential users, not just the users you happen to have right now. Whether or not your current user base offers a good model for your potential user base would then be a pretty important question, and one that almost certainly isn't answered by "big data".
I think that, as with most of statistics, the key point is "think about your problem", and that focusing on a set of solutions rather than the problems themselves can get in the way of that.
Even if you have the full population in question and thereby avoid sampling issues, you still have a lot of pitfalls. For example if you just start correlating every variable against every other one and picking out ones that hit some test of statistical significances as "findings", you run into a range of familiar problems generally grouped under the pejorative term "data dredging".
http://en.wikipedia.org/wiki/Multiple_comparisons
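The dredging failure mode is easy to simulate. In this sketch (my own illustration; the cutoff |r| > 0.361 is roughly the two-sided p < 0.05 threshold for n = 30), correlating pure noise against pure noise still produces "significant findings" about 5% of the time:

```python
import random

random.seed(42)

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

n = 30
target = [random.gauss(0, 1) for _ in range(n)]

# Dredge 200 random, unrelated "variables" against the target and count
# how many clear the nominal significance bar by luck alone.
hits = sum(
    abs(corr([random.gauss(0, 1) for _ in range(n)], target)) > 0.361
    for _ in range(200)
)
```

With 200 comparisons you expect around 10 spurious "hits" from noise, which is exactly why significance tests applied variable-by-variable over a huge dataset mislead.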
At first I thought so too. But it's actually easy to come up with cases where N != all. As a radical example, Facebook preserves the accounts of dead users.
Big Data vs. Theory, Java vs. C++, Capitalism vs. Socialism, Industry vs. Nature, Good vs. Bad, etc.
Big Data lets you store a lot of data and provides a means to run some computation on that data. Not more, and not less.