This post follows on from a discussion begun by Paul Frijters and continued HERE. Most human activity has changed drastically over our lifetimes, and the rate of change is increasing: see, for instance, the next generation's user interface for computers. You would hope academics would live at the forefront of change. But the double-refereed, hard-copy journal system has not changed for a century. I think it is a bad system. In fact, I wonder why we need journals at all.
It is important at the outset to note that journal publication, as it currently stands, serves two distinct purposes. One is to disseminate information amongst the research community. The other is to keep a record of research output for career advancement, assisting the HR departments of universities in promotions and appointments. I will focus on the first, since I think that is the main game, but I will come back to the second at the end.
My idea is that draft papers in (say) mathematical statistics are uploaded to a website; let's call it the Mathematical Statistics Research Network. The submission format is standardised to some extent and also involves filling out a form of keywords, selecting from a list of, say, 50 subject categories, and entering the MSRN codes of cited papers. There is a modest fee for uploads to cover the cost of the site, but there is no fee for downloads. Only registered users can use the site; registration is free. The rest follows very easily indeed, especially for anyone who is familiar with using eBay.
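To make the submission form concrete, here is a minimal sketch in Python of the record such a form might produce. It is purely illustrative: every field name is my own assumption, not part of any actual MSRN design.

```python
# Hypothetical sketch of an MSRN submission record; all field names are
# my own assumptions, not an actual schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Submission:
    msrn_code: str                # unique identifier assigned on upload
    title: str
    authors: list[str]
    categories: list[str]         # chosen from the fixed list of ~50 subject categories
    keywords: list[str]           # free-form keywords from the submission form
    cited_msrn_codes: list[str]   # MSRN codes of the papers this one cites
    uploaded: date = field(default_factory=date.today)
```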
The first point to note is that electronic papers will, all other things held constant, be better than hard copy. Why? Because there are fewer media-imposed limits on the author. Papers could include colour graphics, data links and even video. Most importantly, the author is not writing to a word limit and can use exactly as much space as is required to explain the ideas. The second point is that there is not a two-year delay from first draft to “publication”. Of course, premature publication has its risks. Don't worry, I will get to that.
The purpose of the site is to give you, the user, free access to good and recent research in your field of interest. When you log on, you select from the list of 50 subject categories, for instance clinical trials and exact tests, and you see a list of papers in the intersection of those categories. The key to making the system work well is the provision of a powerful and personalisable sorting algorithm. There are various characteristics of a paper that need to be measured to feed the sort; some are easy to measure, some are not. One easy one is the date of upload, so you might initially order papers by how recent they are. Another might be a keyword, such as “intention to treat”, which would promote matching papers to the head of the list, but still in time order. A sketch of such a sort appears below.
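As a rough illustration of that personalisable sort, and assuming the hypothetical Submission record above, something like the following would filter to the chosen categories and order by recency, with keyword matches promoted:

```python
def sort_listing(papers, chosen_categories, promote_keyword=None):
    """Illustrative sort: keep papers lying in the intersection of the
    chosen subject categories, put keyword matches first, and order each
    group newest-first. Assumes the Submission sketch above."""
    hits = [p for p in papers
            if set(chosen_categories) <= set(p.categories)]
    # False sorts before True, so keyword matches lead; negating the date
    # ordinal puts the most recent uploads first within each group.
    return sorted(hits, key=lambda p: (promote_keyword not in p.keywords,
                                       -p.uploaded.toordinal()))
```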
But the main problem this system would face is the same one that journals face: how do you distinguish the good papers from the bad? First, you can measure how often a paper has been downloaded. If a paper is currently hot, then I might want to see it. But download count is not the same as quality. How do you measure quality? This is where we learn from eBay. Registered users can review and rate papers, on, say, a 1–10 scale. The reviews are not anonymous, and each rating links to a required document giving reasons for the rating. Reviews can be withdrawn or edited by the reviewer, and after reading dissenting opinions a consensus view may well emerge. So for each paper I can see two more fields, number of downloads and average rating, as proxies for importance and quality. As a user, I can set average rating as the primary sort criterion, at a single click, and see the highest-rated paper in my chosen area this month. I click on the average rating itself and I see a list of who rated the paper and find out why. Reading these possibly conflicting reviews will help me digest the main issues in the paper.
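Continuing the sketch, a rating might be stored along these lines, with a one-click switch of the primary sort criterion. Again, the record layout and field names are my own assumptions.

```python
from dataclasses import dataclass

@dataclass
class Rating:
    reviewer: str      # reviews are not anonymous
    score: int         # 1-10 scale
    reasons_url: str   # link to the required reasons document

# Assume each paper record now also carries `ratings` (list of Rating)
# and `downloads` (int), alongside the fields sketched earlier.
def average_rating(paper):
    scores = [r.score for r in paper.ratings]
    return sum(scores) / len(scores) if scores else 0.0

def sort_by(papers, criterion="average_rating"):
    """One-click re-sort on the user's chosen primary criterion."""
    keys = {"average_rating": average_rating,
            "downloads": lambda p: p.downloads,
            "recency": lambda p: p.uploaded.toordinal()}
    return sorted(papers, key=keys[criterion], reverse=True)
```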
What about capricious or malicious reviews? eBay again. Reviewers can themselves be rated. This might partly come through registered users rating individual reviews: if a reviewer's reviews are consistently rated poorly, then the system rates that reviewer poorly. This defines a weight that feeds into the average rating applied to papers. So as a user, when you look at the list of reviews, you can order them by reliability weighting. Of course, you can also set weights for reviewers yourself. For instance, you might give automatic high weight to any review by Nicholas Gruen in the area of “information markets in the health system”, and you might set zero weights for those who you have seen write stupid reviews in the past, or people you just have a low opinion of. (I won't name names!) A sketch of this weighting follows.
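A reliability-weighted average along these lines might look as follows. The weighting scheme is my own assumption, one way of realising the idea rather than a fixed design:

```python
def weighted_rating(paper, system_weights, my_weights=None):
    """Average the scores on a paper, weighting each by the reviewer's
    reliability. `system_weights` maps reviewer -> weight derived from how
    that reviewer's past reviews were themselves rated; `my_weights` lets a
    user override particular reviewers (including zeroing them out)."""
    my_weights = my_weights or {}
    total = weight_sum = 0.0
    for r in paper.ratings:
        w = my_weights.get(r.reviewer, system_weights.get(r.reviewer, 1.0))
        total += w * r.score
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

# e.g. always trust one reviewer, ignore another (overrides illustrative):
# weighted_rating(paper, system_weights,
#                 my_weights={"Nicholas Gruen": 5.0, "Reviewer X": 0.0})
```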
At a certain point, the author can choose to withdraw the paper or to upload a final version that takes into account the comments of the reviewers. At this point the paper becomes anchored and cannot be changed. The final paper also attracts reviews and ratings.
Those who wanted to publish in a hard-copy journal could still do so, though I am not sure how long publication that appears two years after the first draft would remain attractive. If the journal system can survive the market then so be it; journals must presumably then have a use. And the journals that did remain would actually benefit from the new system. How? When a paper is submitted, the editor can simply look at MSRN and see the online reviews. This will make life much easier for editors (whose job is to reject about two-thirds of papers on their own limited judgement), and it could also give referees a flying start.
Lastly, what about career advancement? The current system of promotion or appointment involves saying how many papers N you have published, how many X are in top journals, and obtaining recommendations from three referees. Only the N and the X would change. Instead, you would have a list of your papers, ranked perhaps by downloads and/or average ratings. Your referees and the institutions considering you could easily log in to MSRN, look at the reviews, and look at other rankings. Citation information would be easier to collect automatically within the system (though it is available now), including various half-life measures that take into account the fact that older papers have had more time to accumulate citations than newer papers. One such measure is sketched below.
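The post does not specify a particular half-life measure, so the following is just one possible construction: discount each citation by its age, so that a paper cannot dominate the rankings simply by having been around longer.

```python
from datetime import date

def age_adjusted_citations(citation_dates, half_life_years=5.0):
    """One possible half-life measure (my own construction, not the
    post's): each citation's contribution halves every `half_life_years`,
    so a steady stream of recent citations outweighs an equal-sized
    burst long ago."""
    today = date.today()
    score = 0.0
    for cited_on in citation_dates:
        age_years = (today - cited_on).days / 365.25
        score += 0.5 ** (age_years / half_life_years)
    return score
```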
To set up such a system in the first place would be pretty expensive, but it can be done. Look, for instance, at the Social Science Research Network, which has made a start in this direction. Another interesting site is the Berkeley Electronic Press, to which my school is likely to outsource its entire research webpage very soon. Neither of these sites is currently implementing the ideas above, nor does uploading your research to them currently “count”, either for promotion or even informally amongst your peers. But it seems to me that it is only a matter of time.
There are all sorts of objections that one might try to make to such a system, and I reckon I have thought of most of them. But the real question is not whether the system is perfect. The sensible question to ask is:
What costs and benefits does traditional double-blind review have over the system I am proposing?
If you have a technical bent, you may be interested in Connotea, aimed at researchers (especially in biomedicine) and reviewed by folk from Nature in:
“Social Bookmarking Tools (II): A Case Study – Connotea”, D-Lib Magazine, April 2005, Volume 11, Number 4, ISSN 1082-9873.
(Part 1, a general review, is here.)
http://connotea.org is the main site (and yes, the sample paper is the Watson and Crick half-pager)
As it turns out, I just got some spam about “economics”, a new electronic economics journal that seems to implement some of the ideas you mention. See:
http://www.economics-ejournal.org
Or, on the review process:
http://www.economics-ejournal.org/about-economics/two-stage-publication-process