Academic publishing keeps you on the straight and narrow of everyone else’s ideas?
Who’da thunk?
Bias against Novelty in Science: A Cautionary Tale for Users of Bibliometric Indicators by Jian Wang, Reinhilde Veugelers, Paula Stephan – #22180 (LS PR)
Abstract:
Research which explores unchartered waters has a high potential for major impact but also carries a higher uncertainty of having impact. Such explorative research is often described as taking a novel approach. This study examines the complex relationship between pursuing a novel approach and impact. Viewing scientific research as a combinatorial process, we measure novelty in science by examining whether a published paper makes first time ever combinations of referenced journals, taking into account the difficulty of making such combinations.
We apply this newly developed measure of novelty to all Web of Science research articles published in 2001 across all scientific disciplines. We find that highly novel papers, defined to be those that make more (distant) new combinations, deliver high gains to science: they are more likely to be a top 1% highly cited paper in the long run, to inspire follow on highly cited research, and to be cited in a broader set of disciplines. At the same time, novel research is also more risky, reflected by a higher variance in its citation performance. In addition, we find that novel research is significantly more highly cited in “foreign” fields but not in its “home” field.
We also find strong evidence of delayed recognition of novel papers and that novel papers are less likely to be top cited when using a short time window. Finally, novel papers typically are published in journals with a lower than expected Impact Factor. These findings suggest that science policy, in particular funding decisions which rely on traditional bibliometric indicators based on short-term direct citation counts and Journal Impact Factors, may be biased against “high risk/high gain” novel research. The findings also caution against a mono-disciplinary approach in peer review to assess the true value of novel research.
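For readers curious how such a novelty measure might work in practice, here is a rough sketch of the basic idea (my own illustration in Python, not the authors' code; their weighting for how "difficult" each new combination is has been left out):

```python
from itertools import combinations

def count_novel_pairs(ref_journals, previously_seen_pairs):
    """Count pairs of referenced journals that this paper combines for the
    first time. `previously_seen_pairs` is a set of frozenset pairs observed
    in earlier papers' reference lists. Illustrative sketch only."""
    pairs = {frozenset(p) for p in combinations(set(ref_journals), 2)}
    return sum(1 for pair in pairs if pair not in previously_seen_pairs)

# Toy example: pairing an economics journal with a neuroscience journal
# for the first time counts as two new combinations here.
seen = {frozenset({"J. Political Economy", "Econometrica"})}
refs = ["J. Political Economy", "Econometrica", "J. Neuroscience"]
print(count_novel_pairs(refs, seen))  # 2
```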
It’s not just citations. A big problem with novel research in Australia is that it is very hard to get grants for, because someone will always dislike it. Hence they should change the name of “Discovery” projects to “Discovered” projects.
The other problem is that people who churn out large amounts of stuff are rewarded (which is increasingly out of step with the rest of the world). The typical way this is done is to churn through PhD students, many of whom don’t have the ability to do novel work, or who are simply acting as cheap RAs on projects designed to churn out lots of stuff, and hence the problem perpetuates.
Exploring “unchartered” waters is the purview of aspiring accountants.
The failures in the research system – of measurement and incentives – are just huge. Bias against novelty is one aspect, and the abstract does not even mention that novel interdisciplinary research will be very hard to publish in the first place.
But consider how research is measured: the number of pubs or cites an author has is the basic measure. But papers or cites per year? Hardly ever seen it. Divided by the number of authors? I have never seen that either, though sometimes people do approvingly note single-author papers. But generally not. Is there another industry where the total output of an organisation or country is not equal to the sum of the outputs of its workers? This is the case in academia. Share pubs with your colleagues (mates) and you all get full credit.
And normalising by costs? Never ever have I seen this! If you are on a Future Fellowship you get four years with no teaching, plus support for research assistants to work on your thought bubbles. You would think you would be held to a higher standard of research output over those four years – say at least double or triple the output of someone working alone and spending half their time teaching. But you aren’t.
As a first pass, an appropriate measure of research output would be citations divided by years, by authors, and by the monetary support from the gubbment. You would see a very different ranking, and a very different research landscape would emerge from the incentives it would create.
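Just to make that concrete, a back-of-the-envelope version might look like this (purely illustrative; the choice of divisors and the funding scaling are mine, not any official metric):

```python
def normalised_output(citations, years_elapsed, n_authors, funding_dollars):
    """Crude 'citations per year, per author, per million dollars of support'.

    Illustrative only: the divisors follow the suggestion above, and scaling
    funding to millions just keeps the numbers readable.
    """
    return citations / (years_elapsed * n_authors * (funding_dollars / 1e6))

# e.g. 120 citations, 4 years since publication, 3 authors, $800k of grant money
print(round(normalised_output(120, 4, 3, 800_000), 1))  # 12.5
```

An unfunded solo author would need a floor on the funding term to avoid dividing by zero, but you get the idea.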
\endrant
The inputs as outputs thing is truly one of the most bizarre aspects of the research system. You get a million dollars and produce nothing, and you get promoted, unlike any private business where you would be fired for the same thing and promoted for the opposite.
Another one of the crazy things you may have missed, given your employer, is the ‘environment’ section, which you inevitably get a poor rating for if you are not at a Go8 (even if your environment is better). Despite this, the ARC still thinks it should send you reviews to do, even though, according to them, you couldn’t do them well due to your crappy environment. This would be like a business finding out that a provider was giving them poor-quality goods made with cheap labour, and then going back to the same provider to make sure that wasn’t the case.
/end rant 2
Thanks Chris,
As you say, the abstract highlights just one small facet of the huge number of problems with the current system.
I’m afraid I have my doubts even about the purged measure you cite. It strikes me as a category mistake to think that you can measure outputs of what is supposed to be the highest intellectual achievement in some way that enables people without any knowledge of the field to just count up some metric and use it as a measure.
I’m a consultant, not an academic, but I work a lot with academics.
The abstract from this article does not surprise me in the least. It just highlights the fact that academic incentive structures are inadvertently designed to deliver unnovation (no, that isn’t a typo – I just made up a word for the opposite of innovation). Produce something similar to, and a tiny incremental step forward from, your last academic paper and you will be published and subsequently promoted (based on your long list of publications). Produce something that might have an impact (genuinely clever, a step-wise change in thinking, innovative) and you will probably be ignored, not funded, and ultimately unemployed.
I’d love to see some form of measure of potential research ‘impact’, ‘usefulness’, or ‘reach’. As a crude measure, perhaps something like this: for every paper published, how many unsolicited contacts from potential users of that research, or potential investors in the idea, were received (excluding other academics)? I’m guessing, but I suspect most papers would produce fewer than a handful of unsolicited contacts from outside academia (if any at all). Of course a measure like this is open to gaming, but it might be an interesting self-reflection exercise for the academic community.
Together with Benno Torgler, I have a working paper just out on how to try to address the issue of relevance:
http://ftp.iza.org/dp9894.pdf
As for truly innovative contributions, they will by their very nature come up against resistance in any system. The blog post above still speaks of papers that all got published, and hence, almost by definition, were probably not that innovative.
Paul
Like it. I could see a natural extension of the market to other technical reports too. Often it is the grey literature that contains much of the innovation, but the innovators don’t have the time to repackage their work as academic publications aimed at journals.
Paul,
Please try to constrain your comments to matters that have already arisen on the blog.
Don’t want you going off topic ;)
+1