Saturday, January 12, 2013

Academic journals are an obsolete historical appendage of academia

I don't know if it's in my best interest to write this, but it's been a sad day and I feel like it needs to be said, again and again, by everyone who's been affected.

Historically, when information was spread exclusively via ink on paper, academic journals provided a crucial service to the academic community: distribution.

Over the past few decades, the advent of the internet has fundamentally changed the way we use and share information, but traditional journals remain a staple of academia - not because they continue to contribute unique value, but because they're entrenched in the system and careers still depend on publishing in them.

The internet makes publication of information easy and efficient. Modern academic journals don't provide any additional value that could not be easily and cheaply replicated, and because of their history as a physical medium they have been slow to realize the full potential of the publication of information via the internet. Journals still publish discrete issues, enforce page limits, and sometimes charge to print color figures. In the digital age, all of these practices are unnecessary - laughably so. They also charge exorbitant amounts for access to articles by anyone who isn't affiliated with a subscribing university.

What services do modern journals provide?

  1. Peer review. Journals don't provide this; volunteer academics do, for free. Journals simply organize it and then profit from it. Any impartial third party could serve the same function and provide its stamp of approval to a paper. There are also numerous alternatives to the current model of peer review. One is post-publication review, with public dialogue hosted on the research publication websites themselves - something that already happens on journal websites and on preprint servers like arXiv.

    Peer review and some degree of expert filtering is important, but many are frustrated with the current system for good reason. Peer review should have a well-defined and limited scope. Reviewers should check research for soundness and rigor and, where possible, replicate results. This is one of the pillars of the scientific method. Yet, modern peer review typically fails to attempt any sort of replication. It is also subjective and can be driven by political or ideological conflicts instead of validity. Given a medium such as the internet with practically unlimited storage space, reviewers should not be questioning the potential impact of work; they should reject bad science, and nothing else.

  2. Name recognition. You score more points for publishing in some journals than others. Is this a good thing? Papers should be judged by their measurable impact and the ensuing discussion, not which company deemed them worthy of publication. The current system makes it easy to scan someone's list of publications and instantly judge their quality as a researcher. Maybe it shouldn't be so easy.

  3. Topical organization. I love reading a few specific journals, like Global Ecology and Biogeography - it's a topic that interests me and so I find most of the papers within it interesting. But publishing separate journals for different subjects is hardly necessary. The internet already knows how to categorize and organize information. Wikipedia, Google, and Reddit are three very different examples of how this has been done successfully.

  4. Formatting and typesetting. This is not a concern for content published online - just use Markdown or HTML. Even if a physical printout is required, tools like LaTeX make these tasks simpler than ever.
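
As a rough sketch of how little machinery typesetting now requires (the title, author, and content below are placeholders, not from any real paper), a complete LaTeX source for a typeset document can be as small as:

```latex
% Minimal, self-contained LaTeX source - placeholders throughout.
% Compiling with pdflatex yields a typeset PDF, no journal required.
\documentclass{article}
\usepackage{amsmath}  % common math environments

\title{A Self-Published Paper}
\author{An Independent Researcher}
\date{January 2013}

\begin{document}
\maketitle

\section{Introduction}
Typesetting inline math such as $E = mc^2$ takes a single line.

\end{document}
```
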

Academic journals provide marginal, replaceable value to the dissemination of research, and by doing so they somehow earn the right to profit from and control access to the results of research (often publicly funded research) they had no part in. This is astonishingly unethical, and more people need to start challenging the status quo like Aaron did.

The rising generation that grew up in the age of the internet believes strongly that information should be free and available, not guarded for profit. Something needs to be done to disrupt academic publishers. The results of scientific research should be freely available to everyone, and private publishers should not act as gatekeepers to knowledge.

I reject the idea that any corporation can profit by publishing publicly funded research that should inherently be free. I want to see widespread rejection of this idea. Share your PDFs. They can't arrest all of us.

Aaron Swartz's Open Access Manifesto


  1. I'd like to focus on just one of your points: name recognition, or impact factor more generally. This does matter. In theory, we should all read all the relevant papers in our field, plus sample from other fields. In practice, there is just too much. But how do we select which ones? Imagine, for simplicity, that everyone decided to publish in PLOS ONE. You can only consume 20 papers a day. Do you pick the ones closest to yours based on some metric (shared citations, perhaps)? How would you notice a major trend in a related field? Do you read anything by your favorite scientists? How does new talent get noticed? Do you see what is popular in tweets by colleagues? The most tweetable may not be the most important.

    The advantage of something like Science is that anyone on the planet can submit something, have it looked at, and get it published for free. And people are guaranteed to notice the paper and this new author. PLOS ONE could do something like star the most important papers (while still accepting papers regardless of impact) so that people can see which ones to be sure to glance at.

    Another useful aspect of binning papers into impact categories (often, but not always, by journal) is judging applicants. A new assistant faculty position draws 200 applicants from a variety of fields. They haven't had time to get cited much, so the h-index is not useful yet; you can either read 600 papers and compare them, or do some filtering: "she has three Science/Nature papers, he has two papers in the Journal of the Springfield County Bird Club - let's look at her application first." I think there are ways to convey the impact of papers other than relying on commercial journals, but having some way to communicate this at the time of a paper's publication is more important than you say above.

    1. Brian, thanks for commenting - you raise some valid points.

      Finding appropriate papers to read seems to me like a great machine learning problem, and increasingly it is being treated as one. For example, Google Scholar can be used as a recommendation engine to find papers similar to your own publications. Its suggestions are based on content, so you can discover journals and authors that you wouldn't have sought out. But you're right - I think these tools need to evolve further before they become as convenient and consistent as browsing each issue of the top topical journals in an RSS reader.

      I'm curious just how successful new, relatively unknown authors are at breaking into journals like Science and Nature. Sure, anyone on the planet can submit, and in an ideal world only the quality of their work would dictate where they published, but is that really the case?

      Altmetrics are an important complement to open access. Some very smart people are attempting to address the issues you raise, and they seem to have some great ideas, including transforming and expanding peer review so that some indication of an article's quality is available shortly after publication. I haven't really seen this implemented yet, so it remains to be seen how well it will work.

    2. Ben, thanks for a great piece. I think Brian has hit the nail on the head - while other purposes of journals are less relevant, journals control the reputation economy, and that is why we submit to them. I think Richard Price puts it rather well in the recent TechCrunch piece. He predicts that individual reputation will eventually replace journal reputation, and (for better or worse) I think he is right.

      It is already true for folks like Jonathan Eisen, who can place a paper Nature would have jumped on (4th Domain) in PLoS ONE, announce it on his blog which is read by most leading science journalists as well as colleagues, and have the paper covered in the NY Times, the Economist, and scores of other outlets all the same. His networking and personal branding have allowed him to replace the traditional reputation economy.

      In other media forms, I think personal reputation has largely replaced the value of brand reputation (publisher, broadcaster, record label, etc), and can be built in many ways that do not depend on those things, so it is perhaps unsurprising that this may happen to science as well.

      Brian's concerns about whether that is good or bad for the little guy may be all the more valid in such a world, but it is hard to say. Any system has its winners and losers.

  2. Thanks for your post, Ben. I couldn't agree more with what you said. This seems likely to be a very slow-changing problem because of all the money wrapped up in it - e.g., Elsevier can lobby Congress far more persuasively than lowly academics can.

    It seems like the common refrain is that younger academics have the most to lose by not trying to get into high-impact-factor journals, since they still need to land tenure-track jobs. That's probably true for many people. This publishing fiasco gives a good reason to leave academia, IMO.
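
As a toy illustration of the "shared citations" filtering metric floated in the first comment above (all paper IDs below are invented), ranking candidate papers by citation overlap with your own work might look like:

```python
# Toy sketch: rank papers by how many references they share with your
# own work, using Jaccard similarity on sets of cited-paper IDs.
# All paper IDs below are invented for illustration.

def shared_citation_score(refs_a, refs_b):
    """Jaccard similarity between two collections of cited-paper IDs."""
    a, b = set(refs_a), set(refs_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

my_refs = ["smith2010", "jones2011", "lee2012"]
candidates = {
    "paper_a": ["smith2010", "jones2011", "wu2009"],
    "paper_b": ["garcia2008", "chen2007"],
}

ranked = sorted(candidates,
                key=lambda p: shared_citation_score(my_refs, candidates[p]),
                reverse=True)
print(ranked)  # paper_a shares two references with my_refs, so it ranks first
```

This is only one possible metric; a real recommender would combine citation overlap with content similarity and co-authorship signals.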