The slowing down of the biggest scientific journal

PLoS ONE set out 11 years ago to disrupt scholarly publishing. By now it is the biggest scientific journal out there. Why has it become so slow?

Many things have changed at PLoS ONE over the years, reflecting general trends in how researchers publish their work. For one thing, PLoS ONE grew enormously. After publishing only 137 articles in its first year, the journal's annual output peaked in 2013 at 31,522 articles.


However, as shown in the figure above, output has since declined by nearly a third: in 2016, PLoS ONE published only 21,655 articles. The decline could be due to a novel open data policy implemented in March 2014, a slight increase in the cost to publish in October 2015, or a generally more crowded marketplace for open access mega-journals like PLoS ONE (Wakeling et al., 2016).

However, it might also be that authors are becoming annoyed with PLoS ONE because it is getting slower. In its first year, it took 95 days on average for an article to go from submission to publication in PLoS ONE. In 2016 it took a full 172 days. This means PLoS ONE is no longer the fastest journal published by PLoS, a title it had held for nine years.
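The submission-to-publication times quoted here are plain calendar differences between two dates. A minimal sketch, with hypothetical dates for a single article chosen so that the gap happens to match the 2016 average:

```python
from datetime import date

# Hypothetical submission and publication dates for one article;
# the real figures above are averages over all articles in a given year.
submitted = date(2016, 3, 1)
published = date(2016, 8, 20)

delay_days = (published - submitted).days
print(delay_days)  # 172 days, matching the 2016 average quoted above
```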


The graph below shows the development of PLoS ONE in more detail by plotting each article’s review and publication speed against its publication date; each blue dot represents one of the 159,000 PLoS ONE articles.


What can explain the increasingly poor publication speed of PLoS ONE? Most people would probably point to the sheer volume of manuscripts the journal has to process: handling more articles might simply slow a journal down. However, the slowdown continued until 2015, i.e. beyond the 2013 peak in publication output. Below, I show a more thorough analysis which reiterates this point. The plot relates each PLoS ONE article’s time from submission to publication to the number of articles published around the same time (30 days before and after). There is a link, for sure (Pearson r = .13, Spearman rho = .15), but it is much weaker than I would have thought.
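For readers who want to check such numbers against the data themselves, here is a minimal, dependency-free sketch of the two coefficients (in practice, scipy.stats.pearsonr and spearmanr do the same job); the toy inputs are made up for illustration:

```python
def pearson(x, y):
    """Pearson correlation: linear association between x and y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(x):
    """1-based ranks; tied values get the average of their ranks."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    return pearson(ranks(x), ranks(y))

# Toy example: submission-to-publication days vs. surrounding volume
volume = [800, 1200, 1500, 2100, 2600, 2400]  # articles within +/- 30 days
days = [110, 130, 120, 160, 150, 170]         # days to publication
print(pearson(volume, days), spearman(volume, days))
```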


Moreover, when controlling for publication date via a partial correlation, the pattern above becomes much weaker (partial Pearson r = .05, partial Spearman rho = .11). This suggests that much of PLoS ONE’s slowdown is tied simply to the passage of time. Perhaps scientific articles changed during this period in ways that require a longer evaluation of whether they are suitable for the journal.
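A partial correlation can be computed from the three pairwise coefficients via the standard formula. A self-contained sketch (the variable names and toy data are made up; both quantities drift upward over time, so their raw correlation overstates any direct link):

```python
import math
import random

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

def partial_pearson(x, y, z):
    """Correlation of x and y after removing the linear influence of z:
    r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))"""
    r_xy, r_xz, r_yz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

# Toy data: volume and speed both trend with publication date (z)
random.seed(1)
z = list(range(100))                                 # publication date index
volume = [zi + random.gauss(0, 10) for zi in z]      # publication volume
days = [100 + zi + random.gauss(0, 10) for zi in z]  # days to publication

# The raw correlation is large; controlling for date shrinks it
print(pearson(volume, days), partial_pearson(volume, days, z))
```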

For example, it might be that articles these days contain more information, which takes scientific peers longer to assess. More information should be reflected in three measures: the number of authors (information contributors), the reference count (information links), and the page count (space for information). However, the number of authors per article has not changed over the years (Pearson r = .01, Spearman rho = .02). Similarly, there is no increase in the length of the reference sections (r = .01; rho = -.00). Finally, while articles have indeed become longer in terms of page count (see graph below), the change is probably just due to a new layout introduced in January 2015.


Perhaps it takes longer to get through peer review at PLoS ONE these days because modern articles are increasingly complex and interdisciplinary. A very small but reliable correlation between the number of subject categories per article and publication date lends this possibility some support (see below). It may be that PLoS ONE simply finds it increasingly difficult to identify the right experts to assess the scientific validity of an article, because articles have become harder to pin down in terms of the expertise they require.


Having celebrated its 10-year anniversary, PLoS ONE can be proud to have revolutionized scholarly publishing. However, whether PLoS ONE itself will survive in the new publishing environment it helped to create remains to be seen. The slowing down of its publication process is certainly a sign that PLoS ONE needs to up its game in order to remain competitive.

Wanna explore the data set yourself? I made a web-app which you can use in RStudio or in your web browser. Have fun with it and tell me what you find.

— — —
Wakeling S, Willett P, Creaser C, Fry J, Pinfield S, & Spezi V (2016). Open-Access Mega-Journals: A Bibliometric Profile. PLoS ONE, 11(11). PMID: 27861511

Why are ethical standards higher in science than in business and media?

Facebook manipulates user content in the name of science? Scandalous! It manipulates user content in the name of profit? No worries! Want to run a Milgram study these days? Get bashed by your local ethics committee! Want to show it on TV? No worries. Why do projects which seek knowledge have higher ethical standards than projects which seek profit?

Over half a million people were this mouse.

Just as we were preparing to leave for our well-deserved summer holidays this year, research was shaken by the fallout from a psychological study (Kramer et al., 2014) which manipulated Facebook content. Many scientists objected to the study’s failure to ask for ‘informed consent’, and I think they are right. However, many ordinary people objected to something else. Here’s how Alex Hern put it over at the Guardian:

At least when a multinational company, which knows everything about us and controls the very means of communication with our loved ones, acts to try and maximise its profit, it’s predictable. There’s something altogether unsettling about the possibility that Facebook experiments on its users out of little more than curiosity.

Notice the opposition between ‘maximise profit’, which is somehow thought to be okay, and ‘experimenting on users’, which is not. I genuinely do not understand this distinction. Suppose the study had never been published in PNAS but instead in the company’s report to shareholders (as a new means of emotionally enhancing advertisements): would there have been the same outcry? I doubt it. Why not?

Why take issue with scientific experimentation but not TV experimentation?

Was the double standard around the Facebook study an exception? I do not think so. In the following YouTube clip you see the classic Milgram experiment re-created for British TV. The participants’ task is to follow the experimenter’s instructions and administer electric shocks to another participant (who is actually an actor) for bad task performance. The shocks increase in strength until they are allegedly lethal. People are visibly distressed by this task.

Yesterday, the New Scientist called the classic Milgram experiment one of ‘the most unethical [experiments] ever carried out’. Why is this okay for TV? Now imagine a hybrid case: would it be okay if the behaviour shown on TV were scientifically analysed and published in a respectable journal? I guess that would somehow be fine. Why is it okay to run the study with a TV camera involved, but not when the camera is switched off? This is not a rhetorical question. I actually do not grasp the underlying principle.

Why is ‘experimenting on people’ bad?

In my experience, ethical guidelines are a real burden on researchers. And this is a good thing, because society holds researchers to a high ethical standard. Practically all modern research on humans involves strong ethical safeguards. Compare this to business and media. I do not understand why projects seeking private gains (profit for shareholders) are held to a lower ethical standard than research. Surely the generation of public knowledge is more in the public interest than private profit-making or TV entertainment.

— — —

Kramer AD, Guillory JE, & Hancock JT (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences of the United States of America, 111(24), 8788-8790. PMID: 24889601

Milgram, S. (1963). Behavioral Study of Obedience. The Journal of Abnormal and Social Psychology, 67(4), 371-378. doi: 10.1037/h0040525
