This is not just an issue for neural nets, but also for brains. In the absence of a falsifiable experiment demonstrating otherwise, our interpretations of our own actions should be considered post-hoc rationalizations. Human brains are excellent at creating a coherent story about the world they experience from the data at hand, so we suffer from the same kinds of issues, mitigated only by the fact that we have inherited developmental programs that have been subjected to a huge variety of adverse situations and rigorously tested (by killing anything that failed).
You're hand-waving a real problem by asserting rather misanthropic ideas with scant justification.
Show a child a cat and ask why it's not a crocodile. They will be able to explain it through the differences in shape, color, behavior and other features of those animals. Whether you consider that post-hoc is unimportant. The explanation is still real and relevant.
So people can explain their deliberate reasoning. Also, reason about reasoning. If they truly can't, it's called gut feeling and no one rational trusts those for solving complex problems with important outcomes. Especially at scale.
Neural networks, at least right now, have no capabilities of this sort. There are some attempts at visualization, but they are inherently limited, because they must make assumptions about the domain of the problem.
The pneumonia-asthma example seems to be an instance of Simpson's paradox [1]. The doctors held a strong (accurate) belief about asthma sufferers contracting pneumonia, and acted in a way that made the data obscure an actual causal link (asthma as an aggravating factor in pneumonia). This is the mirror image of the canonical Simpson's paradox, where doctors acted on a strong (inaccurate) belief about severe kidney stones [1a] and again produced lopsided data that hid the best treatment option until the paradox was identified.
Humans have a very hard time uncovering so-called "lurking variables" [2] and identifying such paradoxes. I don't see how a neural network (or other machine learning tool) could do so on their own, but I don't know that much about machine learning. So, I guess I have two questions for the experts out there:
* If all training data is affected by a confounding variable, can a machine learning algorithm identify its existence, or is it limited by only knowing a tainted world?
* Once we have identified such lopsided data and understood its cause, how do you feed that back into your algorithm to correct for it?
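As a toy illustration of the lurking-variable problem, here is the commonly quoted kidney-stone table (the Charig et al. figures popularized via the Wikipedia page in [1a]); aggregating over the confounder flips which treatment looks better:

```python
# Simpson's paradox with the classic kidney-stone figures.
# Counts are (successes, patients) per treatment and stone size.
data = {
    ("A", "small"): (81, 87),
    ("A", "large"): (192, 263),
    ("B", "small"): (234, 270),
    ("B", "large"): (55, 80),
}

def rate(successes, total):
    return successes / total

# Within each stratum, treatment A wins...
for size in ("small", "large"):
    assert rate(*data[("A", size)]) > rate(*data[("B", size)])

# ...but aggregated over the lurking variable (stone size), B looks better,
# because A was preferentially given to the harder large-stone cases.
overall = {}
for treatment in ("A", "B"):
    s = sum(data[(treatment, size)][0] for size in ("small", "large"))
    n = sum(data[(treatment, size)][1] for size in ("small", "large"))
    overall[treatment] = rate(s, n)

print(overall)  # A is better in both strata, yet worse overall
```

Any learner trained only on the aggregated columns inherits the same reversed conclusion, which is the worry behind your first question.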
One method of fixing this would be to have the neural network make 2 predictions. The first would be to predict what decision the doctor would make. The second would be to predict what decision is actually likely to lead to the best outcome.
In cases where it's very likely the doctor would make a different decision, it should flag it for human review.
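A sketch of what that two-prediction setup could look like; the "models" here are stand-in rules, not trained networks, and all names and thresholds are invented for illustration:

```python
# Hypothetical sketch: one model predicts what the clinician would do,
# another predicts the best action; disagreements go to human review.

def predict_doctor_action(patient):
    # Stand-in for a model trained on historical clinician decisions.
    return "admit" if patient["asthma"] or patient["severity"] > 0.7 else "outpatient"

def predict_best_action(patient):
    # Stand-in for an outcome model that has absorbed the confounded data
    # and therefore believes asthma sufferers are low-risk.
    if patient["asthma"]:
        return "outpatient"
    return "admit" if patient["severity"] > 0.7 else "outpatient"

def triage(patients):
    flagged = []
    for p in patients:
        doctor = predict_doctor_action(p)
        best = predict_best_action(p)
        if doctor != best:
            flagged.append((p["id"], doctor, best))  # route to human review
    return flagged

patients = [
    {"id": 1, "asthma": True,  "severity": 0.9},
    {"id": 2, "asthma": False, "severity": 0.3},
    {"id": 3, "asthma": False, "severity": 0.8},
]
print(triage(patients))  # only the asthma case produces a disagreement
```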
The answer is no. The problem is that we don't trim neural networks of their spurious connections, so instead we're stuck staring at these fully (visually) connected layered networks.
Once you trim out the spurious connections, you see that you are left with a logic design made of integration/threshold circuits instead of the straight binary circuits we're used to seeing. There are even certain universal network patterns that emerge to perform particular functions, just as in binary circuit design.
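A minimal numpy sketch of the trimming idea (the weights here are random stand-ins, with the small ones shrunk to mimic a trained layer): zero out the near-zero connections and check that the layer still computes nearly the same function, just with far fewer edges to stare at.

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(size=(8, 8))
W[np.abs(W) < 0.8] *= 0.01        # pretend training left many weights tiny

def layer(x, weights, threshold=0.0):
    # Drop connections whose magnitude is at or below the threshold.
    pruned = np.where(np.abs(weights) > threshold, weights, 0.0)
    return np.tanh(pruned @ x)

x = rng.normal(size=8)
dense = layer(x, W)
sparse = layer(x, W, threshold=0.5)   # prune the spurious connections

kept = np.mean(np.abs(W) > 0.5)
drift = np.max(np.abs(dense - sparse))
print(f"weights kept: {kept:.0%}, max output drift: {drift:.3f}")
```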
I wrote a paper about this in 2008 that's now been cited about 150 times. It uses Artificial Gene Regulatory Networks instead of Artificial Neural Networks, but the math is the same and the principle still holds:
Around 2006 - 2008 I participated in a Research Experiences for Undergraduates program at an AI lab. I used to get into arguments with grad students when I asserted it was intuitively obvious that given a neural network which recognizes two features, it should be possible to extract a trimmed network which recognizes one feature but not the other.
The trick to accurate interpretability is to decouple accuracy from explanations.
Just like an International Master commentator can explain most of the moves of a Super GM, so can an interpretable simple model explain the predictions of a very complex black box model.
The work by Caruana referenced in this article actually culminated in a method to get both very accurate models and still retain interpretability.
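A toy version of that "commentator" idea: fit a simple linear surrogate to a black-box function in the neighborhood of a single prediction. This is only the intuition behind LIME-style methods, not the actual algorithm, and the black box here is an invented function.

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box(X):                    # stand-in for a complex model
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2

x0 = np.array([0.5, 1.0])                   # the prediction to explain
X = x0 + 0.05 * rng.normal(size=(200, 2))   # perturb near that point
y = black_box(X)

# Fit a local linear surrogate y ~ w·x + b by least squares.
A = np.hstack([X, np.ones((200, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

local_err = np.mean((A @ w - y) ** 2)
print("surrogate coefficients:", w[:2], "local MSE:", local_err)
```

The coefficients read off which features drive the black box near this one input, even though the surrogate would be useless globally.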
When I try to explain neural nets (specifically in vision systems) to people, I basically explain how you take inputs in the form of images, label pixels or pixel groups with what you want the network to output in the future, do that thousands of times, and keep testing the results.
Critically, though, I will say something to the effect of: "but if you try to break the net open and see how this specific net came to its result, it will look like spaghetti."
So it's a roundabout way of saying "junk in; junk out." That holds true for any learning system, including human animals. The thought process of humans is inscrutable thus far, and I think that future computing will be similarly inscrutable if we do it correctly.
I think this issue of "Explainable Machine Learning" and interpretability is just going to get more and more important as ML grows. It will also be important for verifying ML-based systems - another problem area.
I really disagree and think the whole "there's no way to gauge results!" meme is low-impact FUD: FUD that isn't particularly dangerous, so people who don't know any better just believe it (like "Python can't be performant because of the GIL" or "Macs are better than PCs" or some other inanity).
> As exciting as their performance gains have been, though, there’s a troubling fact about modern neural networks: Nobody knows quite how they work. And that means no one can predict when they might fail.
Nonsense! Cross validation. Develop hypotheses, develop subsets of data to prove or disprove given hypotheses, observe how the network reacts. All of these people complaining about not being able to understand what's going on are either reporters, bloggers, or machine learning dabblers looking to say something seemingly unconventional.
From your linked article, which gives more specifics as to the argument:
> Because ML systems are opaque, you cannot really reason about what they do.
Yes, it is possible to reason about a system even if it is "opaque"; the discipline is called reverse engineering. Or the scientific method.
> Also, you can’t do modular (as in module-by-module) verification.
You can do "modular verification" in a variety of ways. Start with analyzing the behavior of each layer and how that changes as you incorporate more layers. It's beyond the scope of this comment to go into it beyond surface level, but there are a lot of papers written about it, google "analyzing neural network hidden activations" or something.
> And you can never be sure what they’ll do about a situation never encountered before.
Humans can never be sure what they'll do about a situation they haven't encountered. Or engineers. They can simulate the events that they can think of, but we can also do that with a neural network.
> Finally, when fixing a bug (e.g. by adding the buggy situation + correct output to the learning set), you can never be sure (without a lot of testing) that the system has fixed “the full bug” and not just some manifestations of it.
Fixing the "full bug" is often not something that can be done in traditional software development "without a lot of testing". Machine learning works the same way.
If you want if/then statements, use a decision tree. If you want strong accuracy on predictions, use a neural network. It helps to know what you're doing when verifying results. You will run into trouble if you don't know what you're doing, as per common sense.
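A sketch of the hypothesis-driven testing described above. The "model" and the data slice are invented stand-ins; the point is the workflow of probing targeted subsets and watching where accuracy drops.

```python
import numpy as np

rng = np.random.default_rng(2)

X = rng.uniform(-1, 1, size=(1000, 2))
y = (X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)   # ground truth

def model(X):
    # Stand-in for a trained model with a flaw: it ignores feature 1.
    return (X[:, 0] > 0).astype(int)

def accuracy_on(mask, name):
    acc = np.mean(model(X[mask]) == y[mask])
    print(f"{name}: {acc:.2f} (n={mask.sum()})")
    return acc

overall = accuracy_on(np.ones(len(X), bool), "overall")

# Hypothesis: the model fails where feature 1 actually matters,
# i.e. near the decision boundary in feature 0.
boundary = np.abs(X[:, 0]) < 0.3
acc_boundary = accuracy_on(boundary, "near decision boundary")
```

Overall accuracy looks fine; the targeted slice exposes the weakness without ever opening the black box.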
The human brain also does massive dimensionality reduction on very large amounts of data, and a lot of unconscious processing, with much of it being beyond our capabilities of conscious introspection.
I think eventually, within a couple of decades, we will have AI that correlates well enough with human thought process, and has enough knowledge of the world, to be able to introspect and explain in various levels of detail, in natural language, images and other human readable constructs, why it has reached a certain conclusion. And we will be able to experimentally verify those explanations.
I've been saying that ML is much more like alchemy than science. They've pretty much given up on understanding the underlying mechanism because it's so complex, but that doesn't stop them from experimenting, because they still get something that looks like a result. And hey, they can get paid for it.
Eventually it might grow into a full-fledged science, but it will probably take an awful lot of time.
I disagree about the underlying mechanism being complex. Machine learning algorithms are a class of equation-based systems and strategies for configuring these equations for a specific task. All our math and reasoning tools still apply to this class of algorithms, in principle.
Where we pass into alchemy, though, is the interplay of these basic components with each other and with the parameters they encounter while running; this is where the complexity arises. Part of this lies in the very nature of the tasks we use them for: we basically push a cart of raw data in front of a set of "AI" solvers and expect them to do something with it. When that doesn't work, we start over, tweak parameters, and try again.
I agree that there is no sufficiently useful intellectual framework for creating these artificially intelligent components, and that shows not only in the uneven success rates and performance, but also in the surprising fact that experts in very different AI systems can usually create components with similar performance characteristics for a given problem, despite using very disparate strategies.
I think you mean it's much more like science than math. Not that it's much more like alchemy than science.
Machine learning research is very empirical. With a philosophy of doing lots of experiments and tests, and discovering what methods work best. That's what science is. No scientific field starts off with complete knowledge and understanding, they have to do a lot of experiments to discover the general laws.
Some people dislike empiricism and want pure, provable math. Machine learning isn't a field of mathematics, at least not in practice. But that doesn't mean it's not a science.
They haven't given up at all. And more importantly, they've developed a huge number of "if it looks good it is good" mathematical theorems.
This means that under many circumstances, you can build a model that satisfies certain abstract properties, test it, and have a high probability of generalizability. I.e. we've circumvented the "understanding" stage.
In fact, I'd say that the core of machine learning (as opposed to merely statistics) is exactly this.
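A numeric illustration of one such "if it looks good it is good" guarantee: Hoeffding's inequality bounds how far held-out accuracy can sit from true accuracy, with no understanding of the model's internals required. The true accuracy below is a made-up number for the simulation.

```python
import math
import random

random.seed(0)

true_acc = 0.9          # hypothetical model's true accuracy
n = 2000                # held-out test examples
delta = 0.01            # allowed failure probability

# Hoeffding: with probability >= 1 - delta, |test_acc - true_acc| <= eps.
eps = math.sqrt(math.log(2 / delta) / (2 * n))
print(f"test accuracy is within {eps:.3f} of truth, 99% of the time")

# Simulate many independent test sets and count violations of the bound.
trials = 500
violations = 0
for _ in range(trials):
    test_acc = sum(random.random() < true_acc for _ in range(n)) / n
    violations += abs(test_acc - true_acc) > eps
print("observed violation rate:", violations / trials)  # well under delta
```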
Isn't this all simply about correlation vs causation? Machine learning can find strong correlations and we can make predictions based on those correlations, but at the end of the day, the machine knows nothing about what is causing any of it, and hence is "inscrutable".
So it is up to us to fill the gap in our understanding because that is what machine learning ultimately says about the subject. It tells us what we don't know. If we knew all about the subject, our predictions would match the predictions of the machine because there is only one reality we're both observing. But if there is any gap, then the machine is telling us what we don't know, not what it (of all things) knows. It's just crunching numbers. It doesn't "know" anything.
Interesting article. Some things are weird. I don't know why a support vector machine is ranked better than Bayesian nets, or why they are both worse than ensemble methods w.r.t. interpretability.
However, I think the human should not be in the loop. The network should have another semantic layer that serves communication. It can be done from the ground up like Steels or Vogt have been doing.
In other words, yes we need insight, but I prefer it through introspective networks. The network should be able to explain itself.
Some people, for example medics and civil engineers, are held legally liable for the decisions that they make. If they are to use machines that help them make those decisions (and mostly they would like to, given the terrible business of killing people), then they have to be able to understand what they are being told to do, or they have to trust the machine enough to bet their futures on it. If the machine were literally infallible, you could imagine the latter option being exercised. But honestly: if you were threatened with five years of jail and you didn't understand why the machine was telling you to do something, would you sign it off?
Is that Luc Steels you are referring to? I took a couple of classes from him at uni a little over ten years ago. What is he doing lately that you refer to?
This isn't unique to neural networks at all. There was a machine learning system designed to produce interpretable results, called Eureqa. Eureqa is a fantastic piece of software that finds simple mathematical equations that fit your data as well as possible. Emphasis on "simple": it searches for the smallest equations it can find that work, and gives you a choice of different equations at different levels of complexity.
But still, the results are very difficult to interpret. Yes you can verify that the equation works, that it predicts the data. But why does it work? Well who knows? No one can answer that. Understanding even simple math expressions can be quite difficult.
One biologist put his data into the program, and found, to his surprise, that it found a simple expression that almost perfectly explained one of the variables he was interested in. But he couldn't publish his result, because he couldn't understand it himself. You can't just publish a random equation with no explanation. What use is that?
I think the best method of understanding our models is not going to come from making simpler models that we can compute by hand. Instead, I think we should take advantage of our own neural networks. Try to train humans to predict what inputs, particularly in images, will activate a node in a neural network. We will learn that function ourselves, and then its purpose will make sense to us.
There is a huge amount of effort put into making more accurate models, but much less into trying to interpret them. I think this is a huge mistake, because understanding a model lets you see its weaknesses: the things it can't learn and the mistakes it makes.
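A toy version of learning what a unit responds to: search input space for the pattern that maximally excites one hidden unit. The "network" here is just a random weight matrix, invented for illustration; for a tanh unit with bounded-norm inputs, the maximizing input is the unit's own weight vector, and the search should rediscover that.

```python
import numpy as np

rng = np.random.default_rng(3)

W1 = rng.normal(size=(4, 16))   # a small "trained" layer (random here)
unit = 2                        # the hidden unit we want to understand

def activation(x):
    return np.tanh(W1 @ x)[unit]

# Gradient-free search over random unit-norm inputs.
best, best_act = None, -np.inf
for _ in range(5000):
    x = rng.normal(size=16)
    x /= np.linalg.norm(x)
    a = activation(x)
    if a > best_act:
        best, best_act = x, a

# Compare against the analytically ideal stimulus, w / |w|.
ideal = W1[unit] / np.linalg.norm(W1[unit])
print("cosine(best found, ideal stimulus):", best @ ideal)
```

Staring at the recovered stimulus is the machine analogue of what the comment proposes training humans to do.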
I appreciate the sentiment of your comment, but what part of a neural net isn't interpretable? They do require more careful examination than traditional learning techniques, but you can examine the receptive field of each node to infer what it detects.
> We're using these things and we're not even sure how they work. Love it.
Similar to the human brain in that respect. Ironic that the human brain used itself without understanding how it works to eventually create something it uses without understanding how it works.
To label so-called causative factors or even actual relationships (in a shifting...virtual...hyperspace) among potential relationships is a separate task than to make meaningful predictions or predictable changes. The Universe is inherently a system-less set of potentials. The strongest system is the one that is indeterminate in its methodologies. Systems are survivors of reduction processes.
In principle a shallow NN (1 hidden layer) can approximate any function. But it has a tendency to overfit and just "memorize" the inputs. The basic idea of adding additional layers, is that the early layers can learn very low-level features of the data, and later layers combine the low-level features into higher-level features. This tends to make the models generalize well.
A standard example is for a face detection algorithm. The first layer will do edge detection, the next layer will combine edges into corners and simple shapes, the next layer will maybe use those shapes to look for features like eyes, noses, mouths, etc., and then the next layer will maybe combine those features to look for a whole face.
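A hand-built toy of that hierarchy, with the filters written by hand rather than learned: layer 1 responds to vertical and horizontal edges, and a layer-2 unit fires only where the two edge responses meet, i.e. at a corner.

```python
import numpy as np

img = np.zeros((6, 6))
img[2:, 2:] = 1.0   # a bright square whose top-left corner is at (2, 2)

def conv2(x, k):
    # Valid-mode 2-D cross-correlation.
    h, w = k.shape
    out = np.zeros((x.shape[0] - h + 1, x.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+h, j:j+w] * k)
    return out

def pool2(x):
    # 2x2 max pooling, stride 1, to tolerate slight misalignment.
    out = np.zeros((x.shape[0] - 1, x.shape[1] - 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = x[i:i+2, j:j+2].max()
    return out

# Layer 1: ReLU'd edge detectors.
vert = np.maximum(conv2(img, np.array([[-1.0, 1.0]])), 0)
horz = np.maximum(conv2(img, np.array([[-1.0], [1.0]])), 0)

# Layer 2: a "corner" unit is the conjunction of nearby edge responses.
vp, hp = pool2(vert), pool2(horz)
h = min(vp.shape[0], hp.shape[0])
w = min(vp.shape[1], hp.shape[1])
corner = vp[:h, :w] * hp[:h, :w]
print("corner unit fires at", np.argwhere(corner > 0))  # fires exactly once
```

Real networks learn these filters from data; the point here is only how composing simple detectors yields a more abstract one.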
I am no expert, but I think it allows a higher-order function to be arrived at. An example would be the output of a simple net, where the output is a linear combination of features. This would be extremely shallow, and while it will work for some things, there are going to be instances where it doesn't capture nuanced scenarios.
In a shallow net, maybe college student selection based on SAT scores gets a heavy weight/low threshold/whatever. In a shallow linear combo, this will likely always carry a large weight.
In a deeper net, it might be able to learn that SATs are a great predictor except when X, Y, Z or some combination of those have some particular value, in which case they might be wholly irrelevant. The deeper it is, the longer it will take to train, but the more it can handle exceptional cases and trends and approximate reality.
No one really knows, there was a paper by Max Tegmark on HN yesterday with some new ideas and results, but I haven't had time to read it yet. http://arxiv.org/abs/1608.08225
The other responses to your question in this thread are as good as I could give you, but I would feel like I am recounting ideas that may be true but for which there is little evidence.
In general, it allows a better approximation of the solution function for far less hidden neurons. Sure, you could get arbitrarily close using a single hidden layer, but that hidden layer might need to be unfathomably large. Same idea for network topology in multilayer nets - a network could eventually learn to set a lot of the weights to zero, but training is a lot faster and more effective if you know a good problem-specific topology to start with. Deep nets make problems more tractable. Recurrence is the real game-changer, since then you've moved from non-linear function approximators up to Turing completeness (at least over the set of all possible RNNs).
Think of each layer as an opportunity to perform a level of abstraction or categorization. Concepts are built out of smaller concepts which are built out of smaller concepts; lots of layers allows lots of hierarchy in the concepts.
The first layer might recognize and respond to pixels in particular parts of an image, the next layer will group certain of those pixels-responses together into an abstraction you might call a "line", the next layer will respond to certain groupings of lines and add a level of wiggle-room regarding where the lines are in the image, and the final layer will judge whether a combination of groupings constitutes the letter "A". Or at least, if you spent a bit of time poking at a deep network, giving it slightly different inputs, you might eventually conclude that this is what the layers were doing.
Without layers, you're basically just approximating a simple function or mapping with one level of abstraction.
Adrian Thompson's 1996 paper was about genetic algorithms, which makes it a poor overfitting example considering the whole article is prominently about artificial neural networks. Thompson's FPGA circuits were evolved at room temperature, and the evolved designs were unable to function well when the temperature deviated too much from 10 deg. C.
kilotaras | 9 years ago:
One example of this is Anton-Babinski syndrome [1]. The patient can't see but is completely sure he can.
[1] https://en.wikipedia.org/wiki/Anton%E2%80%93Babinski_syndrom...
---
[1] https://en.wikipedia.org/wiki/Simpson%27s_paradox
[1a] https://en.wikipedia.org/wiki/Simpson%27s_paradox#Kidney_sto...
[2] https://en.wikipedia.org/wiki/Confounding
Inlinked | 9 years ago:
This is tackled in the recently popular field of study called 'counterfactual inference'.
http://leon.bottou.org/talks/counterfactuals
rdlecler1 | 9 years ago:
http://m.msb.embopress.org/content/4/1/213.abstract
Inlinked | 9 years ago:
https://vimeo.com/125940125
http://www.cs.cornell.edu/~yinlou/projects/gam/
More recently there was LIME:
https://homes.cs.washington.edu/~marcotcr/blog/lime/
And there are workshops:
http://www.blackboxworkshop.org/pdf/Turner2015_MES.pdf
We will get there. 'Permanent' is a very long time and in the grand scale of things, deep learning is relatively new.
yoav_hollander | 9 years ago:
See [1] for a discussion of both.
[1] https://blog.foretellix.com/2016/08/31/machine-learning-veri...
jomamaxx | 9 years ago:
At least we should have a standard for characterizing their accuracy or something like that ...
monadai | 9 years ago:
Think Different! Oh well.
antognini | 9 years ago:
I wrote a more detailed answer here:
http://stats.stackexchange.com/questions/222883/why-are-neur...
jessaustin | 9 years ago:
This seems analogous to 90% of (random, unreplicable) science these days.