Still has all the problems that counting citations does: how do you count multiple-author papers (particle physics has given up ranking authors and just puts the hundreds of authors in alphabetical order, IIRC), and it encourages people to write piecemeal papers, because why say something in one paper if you can say it in two -- and get twice the citations?
Also, my favorite: What if you cite someone to say that they were completely wrong when they approached the same problem? Should that be counted as a positive for them?
Presumably, no one is going to try to prove something wrong that has already been proven wrong, so unless there are many obscure flaws in one's work that take multiple papers to uncover, it shouldn't affect the citation count much.
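The thread doesn't spell out the metric being discussed, but here's a minimal sketch (my own illustration, not from the article) of the piecemeal-paper incentive: with raw per-author citation counts, slicing one result into two papers doubles the credit, whereas fractional counting, one common workaround for multi-author papers, doesn't reward the split. The data and function names are made up for the example.

    # Hypothetical illustration: raw citation counts vs. fractional credit.
    def raw_citation_count(papers, author):
        """Sum citations over every paper the author appears on."""
        return sum(p["citations"] for p in papers if author in p["authors"])

    def fractional_citation_count(papers, author):
        """Split each paper's citations evenly among its co-authors."""
        return sum(p["citations"] / len(p["authors"])
                   for p in papers if author in p["authors"])

    # One substantial paper vs. the same content sliced into two papers
    # that end up being cited by the same readers.
    one_paper  = [{"authors": ["alice", "bob"], "citations": 40}]
    two_papers = [{"authors": ["alice", "bob"], "citations": 40},
                  {"authors": ["alice", "bob"], "citations": 40}]

    print(raw_citation_count(one_paper, "alice"))          # 40
    print(raw_citation_count(two_papers, "alice"))          # 80 -- same work, double credit
    print(fractional_citation_count(two_papers, "alice"))   # 40.0 -- credit split among co-authors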
This is almost pagerank for academic publications, and would probably be improved by being more like pagerank (where it would also take into account how often the citing papers are themselves cited).
There are a lot of people trying to use pagerank for academic journals, but so far it hasn't worked well for various reasons.
Part of the problem is that the metaphor breaks down: a paper is like an individual webpage, but a journal is like a company -- it has a much longer time-line, and its impact varies over time. Also, unlike web links, citations don't go away; they just accumulate over time. Since the point of these citation metrics is to rate the journals (and maybe the scientists), pagerank has some difficulties in this domain. It works better for ranking individual papers than for scientists or their journals.
This shouldn't be too surprising: TechCrunch (for example) probably has a good rank on many pages, but pagerank doesn't tell us anything about Michael Arrington's reputation.
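For reference, here's a minimal sketch of what pagerank-style scoring over a citation graph looks like: papers as nodes, edges pointing from the citing paper to the cited one, standard power iteration. This is a generic toy version, not the metric from the article; the graph, damping value, and names are all assumptions for illustration. Note that "C" ends up ranked above "D" only because the paper citing it is itself well cited -- the behavior suggested a few comments up.

    # Generic PageRank over a toy citation graph (citing paper -> cited paper).
    def pagerank(citations, damping=0.85, iterations=50):
        """citations: dict mapping each paper to the list of papers it cites."""
        papers = list(citations)
        n = len(papers)
        rank = {p: 1.0 / n for p in papers}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in papers}
            for citing, cited_list in citations.items():
                if not cited_list:
                    # Paper cites nothing: spread its rank evenly over all papers.
                    for p in papers:
                        new_rank[p] += damping * rank[citing] / n
                else:
                    share = damping * rank[citing] / len(cited_list)
                    for cited in cited_list:
                        new_rank[cited] += share
            rank = new_rank
        return rank

    # Toy graph: C is cited by A, which is itself cited twice (by B and F);
    # D is cited only by the obscure paper E.
    graph = {"A": ["C"], "B": ["A"], "C": [], "D": [], "E": ["D"], "F": ["A"]}
    for paper, score in sorted(pagerank(graph).items(), key=lambda kv: -kv[1]):
        print(paper, round(score, 3))

It also makes the breakdown above concrete: this scores individual papers at a point in time, and there's no obvious place in it to hang a journal or a career, where citations keep accumulating for decades.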
Unfortunately, both systems are too easily gamed by those who happen to be more unprofessional than average.
In the end, these sorts of systems are foisted on us by the paid bureaucrat-class that pays itself quite well for doing all that really hard work of managing academics. Figuring out whether someone is a hotshot scientist would mean reading his papers, and that's way too much work.
1) Whether someone is a hotshot scientist is a subjective matter of opinion.
2) The question they are asking is not "How important are your contributions to science?", but "How important do your peers think your contributions to science are?"
I'd probably go as far as to claim that it's nonsensical to search for an objective value metric for contributions to science. Scientific contributions are extremely heterogeneous, and value judgements are equally varied.