The growing divide between high and low impact scientific journals

Ten years ago the Public Library of Science launched one big low impact journal and a series of smaller high impact journals. Over the years these publication outlets have diverged. The growing divide between standard and top journals may mirror wider trends in scholarly publishing.

There are roughly two kinds of journals in the Public Library of Science (PLoS): low impact (IF = 3.06) and higher impact (3.9 < IF < 13.59) journals. There is only one low impact journal, PLoS ONE, which publishes more articles than all the other PLoS journals combined. Its editorial policy differs fundamentally from that of the higher impact journals: it does not require novelty or ‘strong results’, only methodological soundness.

Comparing PLoS ONE with the other PLoS journals therefore offers an opportunity to chart the growing divide between ‘high impact’ and ‘standard’ research papers. I will test the hypothesis that more and more information is required for a publication (Vale, 2015). More information should show up in three measures: the number of references, authors, or pages per article.
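The per-metric trends below are reported as Pearson and Spearman correlations between publication year and the metric in question. A minimal sketch of that computation, in pure Python: the (year, reference count) pairs here are invented for illustration, while the real analysis ran over the full PLoS article metadata.

```python
# Correlate publication year with a per-article metric (here: reference
# count). The data below are made up for illustration only; the actual
# analysis used every PLoS article from 2006-2016.
from statistics import mean

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def ranks(xs):
    # Average ranks, so ties (common with publication years) are handled.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    rk = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank of the tie group
        for k in range(i, j + 1):
            rk[order[k]] = avg
        i = j + 1
    return rk

def spearman(xs, ys):
    # Spearman's rho is Pearson's r computed on the ranks.
    return pearson(ranks(xs), ranks(ys))

articles = [(2006, 40), (2006, 48), (2008, 44), (2010, 50),
            (2012, 47), (2014, 55), (2016, 58), (2016, 52)]
years = [y for y, _ in articles]
n_refs = [r for _, r in articles]
print(pearson(years, n_refs), spearman(years, n_refs))
```

A small synthetic sample like this gives a strong correlation; over hundreds of thousands of real articles with huge per-article variance, even a robust upward trend yields the modest r ≈ .1 values reported below.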

And indeed, articles in the higher impact PLoS journals have ever longer reference sections: a rise of 24%, from 46 to 57 references, over the last ten years (Pearson r = .11, Spearman rho = .11). See also my previous blog post for a similar pattern in a high impact journal outside PLoS.


Articles in the low impact PLoS ONE, on the other hand, barely changed over the same period (Pearson r = .01, Spearman rho = -.00).


The diverging pattern between higher and low impact journals can also be seen in the number of authors per article. While in 2006 the average article in a higher impact PLoS journal was authored by 4.7 people, the average article in 2016 was written by 7.8 authors, a steep rise of 68% (Pearson r = .12, Spearman rho = .19).


And again, the low impact PLoS ONE articles show no such change (Pearson r = .01, Spearman rho = .02).


Finally, the number of pages per article tells the same story: runaway information density in the higher impact journals and little to no change in PLoS ONE. Limiting myself to articles published until late November 2014 (after which layout changes complicate the comparison), the average article grew substantially in the higher impact journals (Pearson r = .16, Spearman rho = .13) but not in PLoS ONE (Pearson r = .03, Spearman rho = .02).



So, overall, it is true that more and more information is required for a publication in a high impact journal, while no similar rise in information density is seen in PLoS ONE. The publication landscape has changed: a high impact publication now takes more effort than it did ten years ago.

Wanna explore the data set yourself? I made a web-app which you can use in RStudio or in your web browser. Have fun with it and tell me what you find.

— — —
Vale, R. D. (2015). Accelerating scientific publication in biology. Proceedings of the National Academy of Sciences, 112, 13439-13446. DOI: 10.1101/022368


Who dunnit? The avoidable crisis of scientific authorship

[Image: Brigham Young professors. Who is allowed to appear?]

This year, Germany’s highest court reached a damning verdict on academic pay: it is so low that it breaches the constitution. Why do research, then?

One reason is prestige, which often precedes money. Brain areas are still referred to as Brodmann areas rather than Smith areas because it was Korbinian Brodmann who first proposed today’s map of the cortex. Similarly, the Flynn effect will forever be associated with its eponymous discoverer.

This system of authorship is in crisis. The problem goes right to the heart of why there are still young people willing to risk a career in science. It also eats away at the trust that is crucially important for science to work. Withdraw authorship and you withdraw the future of scientific discovery.

Problem 1: undeserved authorship
A survey of authors who published in the best known biomedical journals found that 18% of articles had honorary authors (Wislar et al., 2011): people who took credit for work they did not actually do. Previous surveys found similarly high numbers; a 1996 study reported honorary authors on 19% of articles.
For a young researcher (like me) these are incredibly frustrating numbers. One is told to work hard to earn authorship on an interesting paper, yet one could apparently have it all for free: with the right connections, one can get one’s name on papers other people can only dream of authoring.
Problem 2: deserved authorship not granted
[Image: invisible man. The problem with invisible researchers.]

Wislar et al.’s survey suggests that a full 8% of articles had a ‘ghost author’: someone who deserved authorship but did not receive it. Even if you do the work, you may be denied the appropriate recognition. Previous survey results roughly agree: 11% of articles in biomedical journals in 1996.
These practices are obviously unacceptable. The danger is not only that young scientists are demotivated; the scientific process itself becomes obscure. If you cannot know whether an (unknown) contributor to a finding had a hidden agenda, or whether your professor’s publication list is inflated by honorary authorships, then the trust that is central to science is betrayed.
And these are surveys of contributors who did end up on the author list. What about those who do not? Seeman and House (2010) surveyed US academic chemists and found that half the respondents felt they had at least once been denied appropriate credit. Interestingly, half also reported having asked to be removed from the author list of at least one paper. Authorship issues thus cut both ways, even for people who never appear on the paper.
The most frustrating thing is that clear guidelines for authorship exist (reviewed in Eggert, 2011). The first step is to discuss authorship before the project starts. As the project changes, re-evaluating authorship may become necessary, but this simple measure can probably prevent the worst cases of misconduct.
In times of ever more graduates competing for a stagnant or even shrinking number of science jobs, a fair system of authorship attribution is more important than ever. Resolving authorship issues is also necessary to make the diverse contributor groups of large interdisciplinary projects possible.
At heart, this is a matter of trust. The trust of young researchers that their work will be credited. The trust of readers that author lists are correct. Trust that science is not a ‘who dunnit?’ game.


Eggert, L. D. (2011). Best practices for allocating appropriate credit and responsibility to authors of multi-authored articles. Frontiers in Psychology, 2. PMID: 21909330

Seeman, J. I., & House, M. C. (2010). Influences on authorship issues: an evaluation of receiving, not receiving, and rejecting credit. Accountability in Research, 17(4), 176-197. PMID: 20597017

Wislar, J. S., Flanagin, A., Fontanarosa, P. B., & DeAngelis, C. D. (2011). Honorary and ghost authorship in high impact biomedical journals: a cross sectional survey. BMJ, 343. PMID: 22028479


Images:

1) By Eustress (Own work) [Public domain], via Wikimedia Commons

2) By Geoffrey Biggs (Gilberton) [Public domain], via Wikimedia Commons