The author calls out med students for approaching physics through rote memorization. It reminds me of an experience my older brother and I had with a doctor friend.
Our friend, an OB/GYN, mentioned how hard her work is, because "the average baby is born at 3am."
We laughed, but then my brother asked, "What does 'average' mean when you have a 24-hour clock? It must mean the modal time or something like that."
I contributed that this is an issue in defining average wind directions, as well. The basic problem is that if you record times on a 0-24 hour scale, or wind directions on a 0-360 degree scale, and then naively average the numbers, you get meaningless results (for example, 180 degrees if the wind steadily rotates through every point of the compass).
A quick glance at our doctor friend showed she had checked out of the conversation entirely. Possibly she just felt slighted that we were not bowing down in awe at the terrible hours she keeps. But my main impression was that she lives in a world where one receives a piece of information, notes it, and stores it away. And when repeating that received information, one's listeners duly note and store it away.
Chasing down the source of the information, calling it into question, relating it to other things in the world-- these just weren't things she seemed to find pleasurable.
It’s actually quite possible to define quantities like mean and variance on a circular dimension like hours or angles, with definitions analogous to the conventional ones on the reals; it just takes a bit of mathematical cleverness. (The general field is sometimes called “circular statistics”, and for the mean specifically, Wikipedia has an article here http://en.wikipedia.org/wiki/Mean_of_circular_quantities)
That the two of you ignored your friend’s point and started talking about mathematics instead could be insulting† and she may have been quite justifiably annoyed. Interpreting that annoyance as evidence that doctors are uncritical/uncurious/uncreative is pretty narrow-minded, IMO.§
† depending on your existing relationship and usual interactions.
§ obviously I wasn’t there and don’t know your friend, so can’t really judge. YMMV
The charitable interpretation (for your OB friend) is that she was not speaking literally, and got bored when you started perseverating on her hyperbole. The more likely explanation is that your analysis is indeed correct.
Of course you can average times of day, as well as wind directions. Express them as vectors on a 2-D plane, then separately average the components in each dimension. Although this is somewhat obvious in the case of wind measurements, it works for times of day as well if you think of them as points on a 24-hour clock face.
Your medical friend was not checked out. She was busy stifling impolite laughter at your unfamiliarity with Cartesian coordinate systems.
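That vector-averaging trick is short enough to sketch in Python (the function name and the 24-hour convention are mine, not from the thread):

```python
import math

def circular_mean_hours(hours):
    # Map each time of day to a point on a 24-hour clock face,
    # average the x and y components separately, then convert the
    # direction of the averaged vector back into hours.
    angles = [h / 24.0 * 2.0 * math.pi for h in hours]
    x = sum(math.cos(a) for a in angles) / len(angles)
    y = sum(math.sin(a) for a in angles) / len(angles)
    mean_angle = math.atan2(y, x) % (2.0 * math.pi)
    return mean_angle / (2.0 * math.pi) * 24.0
```

Naively averaging 23:00 and 01:00 gives noon; the circular mean lands at midnight, as it should.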
The doctor is far more knowledgeable about a VERY complex engineering system (the female reproductive system) than you or your brother are ever likely to be in any sphere of knowledge. She probably hadn't, as you put it, checked out; she was just being polite and waiting for you to stop babbling pedantic trivia. Perhaps she was amusing herself by wondering if you might have Asperger's.
Your final 'impression' of her thought processes is banal and insulting. And people wonder why geeks don't have girlfriends.
Chasing down the source of the information, calling it into question, relating it to other things in the world-- these just weren't things she seemed to find pleasurable.
Do you really want your doctor to be that sort of person? Do you want the person treating your kid for rickets to stand up and challenge the establishment and test a new hypothesis on the disease?
I don't know about you, but I'd prefer it if the medical researchers did the challenging, questioning and validating, and the practicing doctors (in general) went with the established knowledge. In my opinion practice and research (of most any subject) are fundamentally divided, and so long as a person can maintain perspective I think no worse of them for preferring one side to the other.
Or the doctor was splitting the clock in half hour chunks (or similar). "born at 3am" probably means "born between 2:45am and 3:15am". If a baby was born at 3:02am, the next day they will say "I got no sleep, a baby was born at 3am last night!".
I'd laugh a lot harder at people who struggle for days and then arrive at a half-assed piece of some completely well-known algorithm if I hadn't been there a dozen times myself.
Programmers are especially vulnerable to this. Who hasn't made a 4 page case statement when 3 lines of recursion would have done it, especially when starting out? Then again, I've never named my case statements after myself.
No matter how brilliant one is, it's ridiculously hard to know what you don't know. In fact, sometimes being very advanced in one field makes it doubly hard to recognize the gaps in one you're poor in.
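A concrete (made-up) example of that beginner pattern: the depth of a nested list, which a newcomer might hand-unroll into one case per nesting level, is a few lines of recursion:

```python
def depth(node):
    # Depth of a nested list: 0 for a non-list leaf, otherwise one
    # more than the deepest child (an empty list has depth 1).
    if not isinstance(node, list):
        return 0
    return 1 + max((depth(child) for child in node), default=0)
```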
Sure, it's completely fair. We all reinvent the wheel sometimes.
The hilarious part is that this paper was published in a peer-reviewed journal, and none of the reviewers realized that she'd rediscovered some 17th-century math.
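For reference, the 17th-century math in question, the trapezoidal rule, fits comfortably in a few lines (a sketch; the function name is mine):

```python
def trapezoid_auc(xs, ys):
    # Area under a sampled curve: sum the trapezoid between each
    # pair of adjacent samples, i.e. width times average height.
    return sum((x1 - x0) * (y0 + y1) / 2.0
               for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])))
```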
No matter how brilliant one is, its ridiculously hard to know what you don't know.
This cannot be overstated. And when ego is the cause...oy vey! I believe that one of the greatest intellectual challenges to overcome when one over-identifies as being "brilliant" is the, oft youthful, focus on pedantry. How embarrassingly ironic to display one's ignorance as a result of flaunting one's intellect. As a recovering pedant well into those years that separate true youth from "decidedly middle-aged", let me sincerely recommend to some of our (mostly) younger members that they put down the Bertrand Russell long enough to pick up some William Blake.
When used properly, I have found that one of the most powerful phrases I can use to build confidence, trust, and credibility with a client is "I don't know."
I hear that. I once spent the better part of a day reinventing ActiveRecord's serialize class method (with tests!) only to be told by a friend that I had, uh, reinvented an existing method.
I can't imagine spending months on a peer-reviewed paper accomplishing the same amount of nothing. That would be disheartening to say the least.
I'm a doctor who majored in physics, and I agree with this post. Watching these folks come up with formulas in physiology was excruciating. People get their names on things that physicists wouldn't even bother noticing as anything other than a single step in a derivation. Hacker News and Python have become my group therapy and secret addiction, respectively.
Just to be clear, and this might be tangential, but she is not a physician - I think she is a dietitian. I think it's a fallacy to assume that only physicians publish in medical journals, and there is a negligible link between over-the-top premeds and this article.
I can't access the article, so I can't make any comment on the actual methods, but I think it seems a little presumptuous to flippantly make broad strokes about a paper from a different field solely by looking at the abstract.
He does make a good point about overeager premeds (and for good reason), but this post seems to be more about airing out grievances and stereotypes than an argument about education or differences between disciplines.
We found the same link... and you're correct, she isn't a physician.
In the Topics in Clinical Nutrition 1992 paper, she is listed as having an MS and an EdD. So, it's safe to say that she probably didn't realize that she was describing integration.
The entire post had an "I'm smarter than you," chip-on-the-shoulder type of vibe.
It should also be noted that this paper had a number of letters to the editor about it, so in this case, I'd say that the process works (even if it got by the editors).
For my next New England Journal paper, I'm going to use a random number generator to simulate whether conditional, probabilistic health outcomes occurred or not.
I'll cycle through this thousands of times to obtain stable estimates, and then call this the Monte Carbocation method.
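Joke aside, the kind of simulation being parodied is easy to sketch (all names and numbers here are made up):

```python
import random

def simulate_outcome_rate(p_outcome, trials=100_000, seed=42):
    # Draw one Bernoulli sample per simulated patient and report
    # the fraction in which the hypothetical outcome occurred.
    rng = random.Random(seed)
    hits = sum(rng.random() < p_outcome for _ in range(trials))
    return hits / trials
```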
This does happen a lot in science, between fields and even within a field. For example, a biologist discovers a handy 'new' data structure that is useful for storing DNA samples, which is simply a basic binary tree. As the reviewers of a biology paper are usually biologists and not computer scientists, they might not notice the obvious.
Similarly, I've seen cases like this within computer science as well. A researcher in scientific visualization built an awesome visualization algorithm based on a 'new' data structure. Then it turned out this data structure had already been discovered in the 90's by a theoretical computer scientist.
In neither of my cases did it undermine the underlying research; it's basically just a missing reference. It's inherent in science that some things are rediscovered once in a while, and it is very hard to follow articles from a completely different field.
However, something as blatant as rediscovering integration is something reviewers from any field should have noticed...
Peer review failed here. It might be forgivable that a medical researcher doesn't know calculus (maybe...), but if an article is making a mathematical claim, the journal should find appropriate reviewers. And this is not even remotely advanced math.
Looking up the paper on PubMed reveals a flurry of letters to the editor published in the subsequent issue that call out the 'Tai method' for what it is. I would actually bet a good number of the 70 citations that so worry 'flip tomato' are actually criticisms or commentary papers like this, as opposed to earnest citations.
I often help (good) researchers with experimental design and statistical analysis of quasi-experimental data, and it's shocking how little they understand. It pains me to think how much waste there is in science at the moment because the researchers do not have the statistical or numerical background to even know what questions are possible.
My brother is a medical researcher. Much of his work involves statistics, but he's never taken a statistics course nor read an intro book. So a lot of his results are just basic high school stats and pretty graphs, nothing deeper. It would be funny if it weren't medicine.
I'm a biochemistry grad student, and my school is just now considering offering a (bio)statistics course for the first time... But parent poster is right, chi-squared is usually as complex as it gets.
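For what it's worth, the chi-squared statistic mentioned above is itself only a few lines; here's a bare-bones sketch of Pearson's statistic (without the degrees-of-freedom/p-value step):

```python
def chi_squared(observed, expected):
    # Pearson's chi-squared statistic: sum of (O - E)^2 / E over cells.
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```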
I'm a second-year math student about to enter his third year. I enter a lot of the definitions, theorems, etc. into flash card software (Anki) for memorization. I combine this with doing tons of proofs and problems from various textbooks, depending upon the course I'm studying. I would say from personal experience that rote memorization has definitely helped me: (1) understand the math better; (2) excel in exams; and (3) solve extra and harder problems from books.
So I'm struggling to see why rote memorization is bad. Is not memory useful for justifying knowledge? I'm not saying memorization is the only thing. Just that it seems to build the foundation for everything else, as per Bloom's cognitive taxonomy: https://secure.wikimedia.org/wikipedia/en/wiki/Bloom%27s_Tax...
Come on now, many of you proudly tout how you were taught integration in secondary education. Big deal. This person discovered it for themselves, and that is an achievement to be celebrated.
I think Michael Williams (third comment) has the right idea when he says "I’m sure you can find plenty of physicists saying spectacularly naive things about medicine...". Of course OP's discovery is amusing - even alarming - but approaching it with an air of condescension won't do much to advance either field.
Calculus is taught in high school and expected to be basic knowledge for any physician, even if they don't use it often, and especially so for researchers. Calculus is fair game on their admissions exam, even! On the other hand, there's never an expectation that physicists know medicine.
My outsider's opinion is that a lot of cited articles are not always thoroughly examined, or if they are examined, they are used to confirm the biases of a particular researcher.
I recently became interested in the idea of possible anesthetic neurotoxicity in infants and looked at a number of papers. The basic research seems solid, but the conclusions drawn are strangely inconsistent.
Neonatal rat, mouse and pregnant guinea pig models are used, and recent studies have been done on monkeys. It appears that there is a high incidence of cell death after exposure to anesthesia, but there is a relatively narrow window of vulnerability, which apparently peaks at 7 days postnatal in rats and rapidly diminishes. 5 day old monkeys were affected by prolonged exposure to ketamine, and 35 day old monkeys were not. Similar results were seen in guinea pigs.
What strikes me is that this window of vulnerability is equated differently to human development by different researchers, despite years of research into ethanol neurotoxicity (anesthetic studies seem to be more recent). Estimates for 7- to 14-day-old rat-human equivalents range from pre-term infants, to full-term newborns, to mid-gestation human fetuses, and to children up to 3 years old. Two monkey papers, one using ketamine and another using isoflurane, also came up with different vulnerability periods based on similar data, by using different sources of information on neurodevelopment, one published in the 1970's and one more recent.
I cannot understand how so many studies could have statements about possible windows of human neurotoxicity, without any certainty about what phase in neurodevelopment they were dealing with. And, oddly enough, the paper describing the model that is used to claim a mid-gestation vulnerability (based on a "bioinformatics approach") clearly states that it cannot be used to predict the "coordinated surge in synaptogenesis just prior to birth in primates", which is hypothesized to be the peak period of vulnerability to anesthetic-induced cell death. So why is it used as a source?
To extend my comment, there are dozens of citations for the 1970's era paper that assert that the "brain growth spurt" extends from the third trimester to the first few years of life. It is then equated with synaptogenesis or "peak synaptogenesis", even though this association may be unclear. The papers then further equate peak synaptogenesis with the period of vulnerability to anesthesia. Many then postulate mechanisms for anesthesia-related neurotoxicity in infants related to mechanisms of synaptogenesis. Not being an expert in the field, I can't refute this argument, but I do find the links between these phenomena to be rather shaky, especially when based on a throwaway reference to a decades old paper.
The lack of interdisciplinary collaboration is one of the major flaws of the US university system (I can't speak to other countries). The grad students I knew each had a specific toolkit that they had learned in their field but there was little or no sharing of those toolkits from domain to domain. That is unfortunate. Of particular importance in today's world are a toolkit of mathematical techniques (calculus, statistics, differential equations are probably the top three categories) and a category of basic programming skills (the ability to automate routine number crunching in particular, maybe "scripting" is a more appropriate word than "programming" - even recording and writing macros in Excel VB would go a long way).
Slightly off topic, but the other day someone asked how to do well in academia. Well, interdisciplinary work like this is a great way to get many well-cited papers - be well-versed in two or three fields (a lot of work, but not very hard) and apply things from one to the other(s). Don't call it '<your name>'s Method' but just present it as something groundbreaking (which it even may be, in that new field).
You can generate a paper mill out of this after 10 or 15 years of studying the various fields (including undergrad and grad school) - it doesn't require much hard thinking, just a lot of work.
Relatedly, the von Mises distribution (the circular analogue of the normal distribution) on Wikipedia: http://en.wikipedia.org/wiki/Von_Mises_distribution
I bet you created a half-assed linked-list implementation and prefixed the class name with your initial, though ;)
I just so happen to be starting out. Do you mean generating cases like "prefix-[i+1]"?
Mary M. Tai
www.ajcn.org/cgi/reprint/54/5/783.pdf
http://journals.lww.com/topicsinclinicalnutrition/Citation/1...
The letters to the editor: http://www.ncbi.nlm.nih.gov/pubmed/8137688
"Tai's Formula is the Trapezoidal Rule"
A rebuttal doesn't get much blunter than that.
http://www.sph.unc.edu/nciph/jane_monaco_1990_1984.html
The integrated area under the curve (AUC) analysis for glucose and insulin was determined according to the formula of Tai et al.
Damn.
Seems like a great way to both pad out your citations and troll your readers!
Plus, she deserves an academic wedgie for naming a method after herself...
"A week in the lab will save you an hour in the library every time."
Monaco JH, Anderson RL. "Tai's formula is the trapezoidal rule." http://www.ncbi.nlm.nih.gov/pubmed/7677819 (a letter commenting on the Tai paper)