
Downward spiral of SBPR and journal IF

Tomas Zvolensky, 1 February 2024

Research journals using single-blind peer review (SBPR), combined with the weight given to the impact factor (IF), create an environment where reviewers can reject papers almost arbitrarily or give authors tangential, misguided, or unreasonable feedback.

Generally, every journal has the ambition to increase its IF. The higher it is, the more popular the journal is among authors, since it suggests your paper might be cited more. Statistically, authors do cite papers from higher-IF journals more. Nevertheless, funding decision processes are, thankfully, increasingly disconnecting from the ‘numbers game’. IF is only a number in the end. It possesses a certain informational value, but that value is hard to pinpoint and mainly serves the journals, not the authors. Project proposal evaluation, too, mostly looks beyond it, as it should.

SBPR gives reviewers anonymity, which can sometimes result in strange behavior. I have experienced it, and maybe you have too. Rejection of a paper even though it is clearly within the scope of the journal (which has published multiple papers on the same topic). Review comments that lack sense, or that make it obvious the paper was not read properly. Even outright shady requests to cite one or more particular papers.

Obviously, bad actors are a minority in any system, but they attract attention. Hands down, though, few can say they have never experienced some oddity that ends up delaying the publication of a paper by months. That can be disenchanting, to say the least, especially early on in an academic career.

A journal's IF grows as more of the papers it publishes get cited. Maintaining IF therefore means keeping out the papers one can guess will be cited less: papers from lesser-known authors, lesser-known universities and research institutions… The signs are cumulative and easy to spot.
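For context, the commonly quoted two-year impact factor is just a simple ratio. Here is a sketch of the standard two-year definition, with made-up numbers purely for illustration:

```latex
% Two-year impact factor of a journal for year y:
\mathrm{IF}_y \;=\; \frac{C_y}{N_{y-1} + N_{y-2}}
% where C_y              = citations received in year y to items the journal
%                          published in years y-1 and y-2,
%       N_{y-1}, N_{y-2} = numbers of citable items published in those two years.
%
% Illustrative, made-up example: 600 such citations in 2023 to the
% 200 citable items published in 2021-2022 give IF_2023 = 600/200 = 3.0.
```

The denominator is why an editor chasing IF is tempted to reject anything that adds a citable item without promising citations in return.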

With the explosion of papers published in the 20th century, keeping papers out likely became a bigger task than letting the potentially high-citation papers through, leading to an increasing frequency of the behavior mentioned above. This is felt most by junior researchers and/or researchers who are not at the top of their field.

Filtering out ‘sub-standard’ papers is a necessary editorial duty. But the sheer volume of publication output is tipping the system further towards this ill-conditioned state. How do we get out of the downward spiral?

For example, by embracing open peer review and making the publishing process completely transparent. Openness nurtures care when writing a review and minimizes the cases where reviewers can get away with unreasonable rejections or reviews. Can you be critical when not anonymous? Of course: physics, logic, and facts can be communicated in a dry, matter-of-fact way. It takes more effort, but think about your research group meetings, for example. You make an effort to express your criticism in a way that lets the receiver take the best out of it. Why should it be different when reviewing papers written by others? Research is not a zero-sum game; if someone is publishing a paper on a topic you are working on, it does not invalidate your work.

The worry that the authors of a paper one publicly reviews will retaliate in the future is somewhat logical, but unfounded. If the criticism is communicated with manners and civility, there is no reason to worry, just as in research group meetings. Even if a reviewer of your paper is someone whose paper you reviewed in an open peer review process in the past and they are clearly retaliating, the solution is very simple: authors can ask for the reviewer to be replaced, providing both the past review and the comments they are receiving now. On that basis, it is easy to conclude possible bias and assign the review to someone else.

This also requires adopting a ‘let’s help the authors publish a great paper’ approach rather than a ‘let’s see how we can turn this down’ one. The ambition to maintain and grow IF is in direct contradiction to this mindset, though. Given that tenure decision processes at universities are increasingly embracing narrative criteria over numerical measures, hopefully the importance of IF will fade into oblivion. Eventually, IF should be called Journal Vector instead, because it does not really speak to the impact your paper can or does have.

Resistance to dismantling IF can come from those directly benefiting from it - the publishers, the owners of journals, and so on. Academics, authors, editors, reviewers… they essentially get nothing from the high or low IF of a journal they are involved in. Beyond filtering out sub-standard papers, editorial work should be motivated by helping the authors. Let the IF go.

#AcademicLife, #MakePublishingGreatAgain, #AcWri, #GetYourManuscriptOut, #AmWriting, #AmReading, #PhDchat, #ECRchat, #ScholarSunday, #Frelsi, #PublishOrPublish, #publishtoflourish
