Science Has a Nasty Photoshopping Problem (Elisabeth Bik, NYT)

Jaybee00

Senior Member (Voting Rights)
Excellent article on fraud in scientific publications by image sleuth Elisabeth Bik.

https://www.nytimes.com/interactive...MhNQMpZkC1IsbCU5yjC5sXdZJZxonA&smid=share-url

“Since childhood, I’ve been ‘blessed’ with what I’m told is a better-than-average ability to spot repeating patterns. It’s a questionable blessing when you’re focused more on the floor tiles than on the person you’re supposed to talk to. However, this ability, combined with my — what some might call obsessive — personality, helped me when hunting duplications in scientific images by eye.”
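
For anyone curious how this kind of duplication hunting might be automated: below is a minimal, purely illustrative sketch (not Bik's method, which is by eye, and not any real tool's algorithm). It slices a greyscale figure into patches, computes a crude average-hash of each, and flags patches whose hashes collide. The file name, patch size, and thresholds are all assumptions; real forensics software also handles rotations, flips, rescaling, and compression noise, which this does not.

Code:
from collections import defaultdict

import numpy as np
from PIL import Image

PATCH = 32  # patch size in pixels (assumed; tune to the figure's resolution)
STEP = 16   # stride between patches

def patch_hash(patch: np.ndarray) -> int:
    """Average hash: downsample to 8x8, threshold each pixel at the mean."""
    small = np.asarray(Image.fromarray(patch).resize((8, 8)), dtype=np.float32)
    bits = (small > small.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def find_duplicate_patches(path: str):
    """Return groups of (x, y) patch coordinates whose hashes collide."""
    img = np.asarray(Image.open(path).convert("L"))
    groups = defaultdict(list)
    h, w = img.shape
    for y in range(0, h - PATCH + 1, STEP):
        for x in range(0, w - PATCH + 1, STEP):
            patch = img[y:y + PATCH, x:x + PATCH]
            if patch.std() < 2.0:  # skip flat background, which always collides
                continue
            groups[patch_hash(patch)].append((x, y))
    return [coords for coords in groups.values() if len(coords) > 1]

if __name__ == "__main__":
    # "figure.png" is a placeholder; overlapping windows on smooth regions
    # will produce false positives, so treat hits as leads, not proof.
    for group in find_duplicate_patches("figure.png"):
        print("possible duplication at:", group)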
 
That's a fascinating article, very eye-opening.

This bit had a ring of familiarity as regards the resistance various people have experienced in trying to get dodgy ME papers retracted or corrected:
The article said:
I have analyzed more than 100,000 papers since 2014 and found apparent image duplication in 4,800 and similar evidence of error, cheating or other ethical problems in an additional 1,700. I’ve reported 2,500 of these to their journals’ editors and — after learning the hard way that journals often do not respond to these cases — posted many of those papers along with 3,500 more to PubPeer, a website where scientific literature is discussed in public.
So this reluctance by journals to correct bad science is apparently pervasive across the board and not just in the dodgy world of ME 'evidence-based' 'science'. Worrying to see how untrustworthy much of what is published in journals as 'evidence' may be.

I wonder if anyone has posted any of the many dodgy ME studies to the site she mentioned, PubPeer? Could get interesting.
 
Since childhood, I’ve been "blessed" with what I’m told is a better-than-average ability to spot repeating patterns. It’s a questionable blessing when you’re focused more on the floor tiles than on the person you’re supposed to talk to. However, this ability, combined with my — what some might call obsessive — personality, helped me when hunting duplications in scientific images by eye.

A good example of how autistic traits can be valuable to society.
 
This is horrifying. 4% (1 in 25) of the papers she reviewed appeared obviously edited?! We need fundamental reforms to the scientific process if people are even tempted to do this: for example, encouraging quality over quantity in published work, or treating negative results as equally publishable.
 
It would be interesting to know whether this has always been a problem or whether it has increased since the number of papers published in peer-reviewed journals became a quantified outcome measure influencing university department budgets.
 
Interesting what the article said about peer review being non-adversarial, and not set up to catch fraud.

As for it being non-adversarial, in the specific case of biomedical ME research applications, there may be examples to the contrary.
 
It's pretty clear that the system is set up to make it worse for a journal to admit and correct a mistake than to leave it in place: as long as the mistake remains, most people assume there is no mistake, since otherwise the journal would have corrected it.

So literally one of the foundations of science is broken in more ways than one. The publication process is broken: journals take no responsibility for what they publish. Peer review is broken: fraudulent papers and basic mistakes make it through regularly. And there is no correction process after peer review, since correcting is clearly more damaging to a journal's reputation than staying silent.

In most fields of science this slows down progress. In medicine it is massively harmful, even deadly. Medicine is very likely where this problem is worst, and not only do the people suffering the consequences have no recourse, the mere act of pointing out mistakes can be used as evidence against the complaint, somehow on the basis that doctors know best and the process of science works. Both are clearly false assumptions.

But this is like fixing an electoral process: the people who gain leadership positions from a system do not want to change that system, since it might affect their status. In the end it's all about #1: everyone has to put their self-interest first, at the expense of results, because that is what the system demands, much like highly toxic work environments where teams are pitted against one another. It always leads to terrible results, but there is zero accountability here, so it just keeps going on and on.

Whatever may once have made academia special, and it was likely an illusion, if not a delusion, is gone. Forever. Any system devoid of accountability ends up dysfunctional.
 
It would be interesting to know whether this has always been a problem or whether it has increased since the number of papers published in peer-reviewed journals became a quantified outcome measure influencing university department budgets.

I'll try to answer this, hopefully without sounding patronizing. I do think this has increased recently. A lot of it has to do with developing countries adopting performance indicators, such as the number of publications and citations, from North America and Western Europe, without researchers in those countries having the same level of training and infrastructure as their colleagues in the developed world. In other words, administrators in developing countries import the West's evaluation criteria because they believe it is these performance criteria that made research institutions in developed countries productive.

These criteria are then imposed on researchers who may lack the facilities, training, and infrastructure of their colleagues in the more developed world; the resulting pressure to produce can lead to fraudulent research being submitted.
 