top | item 44232371

Scientific Papers: Innovation or Imitation?

66 points | tapanjk | 8 months ago | johndcook.com

30 comments


matusp|8 months ago

In my experience, the publication pressure in today's science to a large extent inhibits innovation. How can you innovate when you need X papers every year, or you will not get that position or funding? To fulfill the quota, the only rational strategy is to focus on simple iterative papers that are very similar to what everybody else is doing. There is simply no time to innovate or be brave; you have to conform. There is also barely time to make sure that what you are doing is actually methodologically correct. If you spend too much time, you will get scooped and forgotten.

Case in point: everybody is doing AI research nowadays, and NIPS has something like 15k submitted papers. But the innovation rate in AI is actually not much higher than 10 years ago; I would even argue that it is lower. What are all these papers for? They help people build their careers as proofs of work.

atrettel|8 months ago

I completely agree that "publish or perish" harms innovation. Funding and research positions have become so predicated on rapid and consistent publication that it incentivizes researchers to focus on incremental and generally low-risk ideas that they can propose, develop, and publish quickly and predictably. Nobody has the time or energy anymore to focus on bigger and braver (your word) ideas that are less incremental and cannot be developed in predictable time frames.

I agree that many fields essentially have papers as "proof of work", but not all fields are like that. When I worked as a mechanical engineer, publication was "the icing on the cake" and not "the cake itself". It was a nice capstone you did after you had completed a project, interacted with your customers, built a prototype, filed a patent application, etc. The "proof of work" was the product, basically, and you could build your career by making good products.

Now that I am working as a scientist, I see that many scientists have a different view of what their "product" is. I have always focused on the product being the science itself: the theories I develop, the experiments and simulations I conduct, etc. But for many scientists, the product is the papers, because that is what people use to evaluate your career. It does not have to be this way, but we would have to shift towards a better definition of what it means to be a productive scientist.

jltsiren|8 months ago

AI is a special case of a special case. First you have the weird CS publication culture with conference papers and a heavy focus on selecting a (small) subset of winners. And then you have a subfield with giant conferences, a lot of money, and a lot of people doing similar things.

A typical approach to science is finding your niche and becoming the person known for that thing. You pick something you are interested in, something you are good at, something underexplored, and something close enough to what other people are doing that they can appreciate your work. Then you work on that topic for a number of years and see where you end up. But you can't do that in AI, because the field is overcrowded.

kevinventullo|8 months ago

Follow-up papers by other authors which “only extend or expand on the specific finding in very minor ways” have a secondary benefit. In addition to expanding the original findings, they are also implicitly replicating the original result. This is perhaps a crucial contribution in light of the replication crisis!

tgv|8 months ago

If only. I worked in cog/neuro sci, and the career builders there produce small variations on the original. Variations on the Stroop task, which dates back to 1935(!), are still being published, despite the fact that there is no explanation for the effect. And when you consider that null results are rarely published, and that many aspects of the methodology are flawed, a new paper cannot be considered a replication: it's just wishful thinking upon wishful thinking.

Daub|8 months ago

Maybe. But that is a generous reading. I used to attend many computational aesthetics conferences. The sheer volume of non-photorealistic-rendering cross-hatch algorithms was almost laughable.

kj4211cash|8 months ago

So much of academic life revolves around bringing in grant money. This is particularly true in STEM fields and at the best research schools. There are ever increasing administrative hoops to jump through to bring in that grant money. And grants nowadays are often given out for research on very specific topics often chosen by bureaucrats. These topics are, almost by definition, not innovative. The NSF is an exception but there are very few NSF grants given out, relative to the number of researchers. My assessment is that the most famous, most published researchers can still afford to explore, if they have the time and inclination, but the rest cannot.

bonoboTP|8 months ago

Yes, also the ERC grants allow quite broad freedom in the EU.

But grants have been annoying for a long time. Even decades ago, the common wisdom in fields like biology was to write grant applications promising things you've already done recently, then spend the money on doing something new. But this doesn't work in super-fast-moving fields like AI/ML, where a grant's length is an eternity relative to change in the field.

Also, the bureaucrats want something sexy at the end, so academics overpromise, and then there's a bitter taste among the funders when the super fancy thing doesn't materialize. High-impact-factor publications can sweeten this bitterness somewhat, as can awards etc. And since a rising tide lifts all boats, just by keeping up with the SOTA they can woo the bureaucrats.

agarttha|8 months ago

Random thoughts from a physics researcher:

- Too much imitation delays innovation.
- For all the emphasis on high-risk research, the system doesn't reward it.
- Creativity isn't valued as much as it should be.
- Negative results and failed experiments hold back careers, but they are signs of attempts at innovation.
- The VC world may understand that only 1/100 projects will be novel and perhaps successful, but funding agencies don't.

Daub|8 months ago

For a few years I worked closely with computer engineers at a Southeast Asian university. I got to know quite well the sort of stuff they published. Some of the dodgy stuff I saw:

Recycling. Some papers seemed to be near duplicates of prior work by the same academic, with minor modification.

Faddishness. Papers featuring the latest buzz technologies regardless of whether they were appropriate.

Questionable authorship. Some senior academics would get their names included on publications regardless of whether they had been actively engaged with the project. I saw a few academics get involved in risky and potentially interesting subjects, but they all risked their careers in doing so.

But most of all, there was a dearth of true innovation. The university noticed this and established an Innovation Centre. It quickly became full of second hand projects all frustratingly similar to projects in the US from a few years ago.

Of course there were exceptions, and learning from them was a genuine growth experience for which I am grateful.

bonoboTP|8 months ago

It's not just about the academics but about the expectations from higher-ups and funding agencies, which you must meet to keep your job and have a chance at continuing your career. Over the last few decades, the expected number of papers at good and even mediocre institutions has exploded. Profs who want to be seen as productive, and who want good funding, publish 30-50 papers per year and sometimes "supervise" dozens of PhD students at the same time (who agree to the deal to get the brand name of the big prof, not for any real supervision).

Funding agencies can't evaluate the research itself, so they look at numbers: metrics, impact factors, citations, h-index, publication count, etc. They can't simply say "we pay this academic whether he publishes or not, because we trust he is still deep in important work even when he is not at a stage to publish", because people will suspect fraud, nepotism, and bias, and the funding is often taxpayer money. Not that the metrics prevent any of that, of course, but it seems that way. So metrics it is, and so gaming the metrics via Goodhart's law it is.
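Of the metrics listed above, the h-index is the simplest to pin down precisely: it is the largest h such that the author has at least h papers with at least h citations each. A minimal sketch (function and example data are illustrative, not from the thread):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:   # the rank-th paper still has >= rank citations
            h = rank
        else:
            break
    return h

# An author with papers cited 10, 8, 5, 4, and 3 times has h = 4:
# four papers each have at least four citations, but not five with five.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

The example also makes the Goodhart problem concrete: slicing one 10-citation result into several smaller papers can raise h without adding any new science.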

I don't think it's super bad, but it increases the administrative work and busywork overhead on top of the actual research. Progress per person slows somewhat, as the same work has to be salami-sliced and marketed in chunks, but there are also way more people in the field. Of course, most of them produce very low-quality stuff, but that's not a big loss, because these people would not even have published anything some decades ago; they would just have held some teaching professorship and published every few years, perhaps only in their national language. It increases the noise, but there are ways to find the signal among it, and academics figure out ways to cut through the noise. It's not great, and not super easy, and it pushes out a lot of people who dislike the grind, but there are plenty who see it as a relatively good deal to move to a richer country and do this.

tokinonagare|8 months ago

I've seen faddishness and questionable authorship at a top-3 Japanese university too. The lab I was in was a paper mill; the professor even explicitly told students that quantity > quality. I'm glad that in France things are getting a bit slower but deeper (from my observations).

mpascale00|8 months ago

The thesis here is not well elaborated. Ref [1], for example, seems to me to miss more recent progress in our understanding of working memory; and in linguistics, while Chomsky's work is foundational, we now have a much better idea of how the compositionality of behavior necessary for language might arise in the human brain.

mpascale00|8 months ago

To add to that, I agree with the basic sentiment of the article, but it just doesn't read like the reflections of an academic. Ending with optimism about AI makes me think the author believes AI will solve a problem they are not well acquainted with.

Perhaps one expects overgeneralization in consulting blogs, though.

birn559|8 months ago

The process is far from perfect, but it works well enough mid-term and works pretty well long-term.

It's also better than any alternative, as far as I know. I haven't heard of people pushing to restructure the process, the only exception being the idea that journals shouldn't cost (that much) money and that institutions should instead pay to publish a paper. That wouldn't change the foundation of the process, however.

agumonkey|8 months ago

What about the publish-or-perish effect? No ideas on how to rebalance things to avoid it?

richarlidad|8 months ago

Imitation precedes creation.