Preprint: "Fallibility in science: Responding to errors in the work of oneself and others", 2017, Dorothy V Bishop

Andy

Abstract
Fallibility in science cuts both ways: it poses dilemmas for the scientist who discovers errors in their own work, and for those who discover errors in the work of others. The ethical response to finding errors in one's own work is clear: they should be claimed and corrected as rapidly as possible. Yet people are often reluctant to 'do the right thing' because of a perception this could lead to reputational damage. I argue that the best defence against such outcomes is adoption of open science practices, which help avoid errors and also lead to recognition that mistakes are part of normal science. Indeed, a reputation for scientific integrity can be enhanced by admitting to errors.

The second part of the paper focuses on situations where errors are discovered in the work of others; in the case of honest errors, action must be taken to put things right, but this should be done in a collegial way that offers the researcher the opportunity to deal with the problem themselves. Difficulties arise if those who commit errors are unresponsive or reluctant to make changes, when there is disagreement about whether a dataset or analysis is problematic, or where deliberate manipulation of findings or outright fraud is suspected. I offer some guidelines about how to approach such cases. My key message is that for science to progress, we have to accept the inevitability of error. In the long run, scientists will not be judged on whether or not they make mistakes, but on how they respond when those mistakes are detected.

Author Comment
This is a preprint of a commentary commissioned for Advances in Methods and Practices in Psychological Science (AMPPS), and is based on a talk given on 7th July 2017 at a meeting on Reproducible Science for Early Career Researchers, organised by David Mehler at the University of Cardiff.
https://peerj.com/preprints/3486/
 
interesting bit:
I turn now to those unfortunate situations when it is hard to avoid concluding that a researcher is acting in bad faith. A particularly insidious kind of behaviour involves selective citation of the literature, or 'cherry-picking'. Unless an author has specified clear criteria for which studies are included in a review, it can be hard to detect distortion of evidence, unless one is an expert in the area. Worse still are cases where the cited literature is selectively or inaccurately portrayed, giving the impression of a large body of work supporting a given position. This is a standard ploy by those promoting pseudoscientific views (Grimes & Bishop, 2017) and needs to be robustly challenged.
Seems to happen a lot in the psyc papers I read.

And this:
Although one would hope that academic institutions would take seriously an accusation of fraud against a staff member, they can be slow to act; it is, of course, important that they consider the possibility that they are dealing with an unjustified attack by those with vested interests or fixed ideas. These do occur, but malign intent should not be the default assumption, unless there are several 'red flags' of the kind noted by Lewandowsky and Bishop (2016)
Old timers will remember that one example of those "red flags" offered in that 2016 paper was vexatious CFS patients with an anti-science agenda.
 
Bishop said:
Awful and embarrassing as it is to admit to error, the alternative, hiding a known error, has to be worse. The person who does this is entering into a Faustian pact to reject science in favour of personal ambition. As data fraudster Diederik Stapel openly admitted, once you embark on this process, it is difficult to stop, but it creates considerable internal conflict

Publishing psychologists will be particularly aware of this consequence of ego over truth - won't they? o_O
 
Old timers will remember that one example of those "red flags" offered in that 2016 paper was vexatious CFS patients with an anti-science agenda.

For those interested:

http://www.nature.com/news/research...2&spJobID=843636789&spReportId=ODQzNjM2Nzg5S0

Orchestrated and well-funded harassment campaigns against researchers working in climate change and tobacco control are well documented [3, 4]. Some hard-line opponents to other research, such as that on nuclear fallout, vaccination, chronic fatigue syndrome or genetically modified organisms, although less resourced, have employed identical strategies.

Their 10 red flags are ridiculous, and seem to have been really thoughtlessly constructed too.

Here's the related Royal Society meeting they did on 'threats to science', featuring Crawley and White complaining about the mean people who do not trust them: https://figshare.com/articles/RS_scienceandsociety_September_2015_pdf/2061696

This is how COPE described that Bishop and Lewandowsky piece:

Most recently, questions about the legitimate requests for and re-use of data have been explored systematically and thoughtfully by Lewandowsky & Bishop (2016).

https://publicationethics.org/forum-discussion-topic-comments-please-6

I keep finding new annoying things related to this.
 
This is how COPE described that Bishop and Lewandowsky piece:

https://publicationethics.org/forum-discussion-topic-comments-please-6

There were some nice comments here. I enjoyed this one:
I disagree with Lewandowsky and Bishop. Irony apart -- Lewandowsky has a reputation for hiding sloppy research, Bishop played a small but key role in the harassing of Tim Hunt -- they argue for reduced transparency so as to protect researchers against naughty outsiders. This does not work. If they want to beat you, they will find a stick. Hiding your data just hands them a bigger stick. At the same time, reduced transparency is effective in protecting naughty researchers.
This article, mentioned in the comments, was worth a read: https://politicalsciencereplication...9/getting-the-idea-of-transparency-all-wrong/. Some powerful bits:
Apparently, the trend towards more transparency has left all of us researchers in despair. It’s nearly damaging all of science! The comment published in Nature yesterday starts off by listing all things bad around data sharing:
  • endless information requests
  • complaints to researchers’ universities
  • online harassment
  • distortion of scientific findings
  • threats of violence
This can happen to all of us … beware!
Trying to persuade us of transparency’s menace to the public, the Nature comment uses a type of language that really does not belong in the discussion about data sharing (but provides for an entertaining read):
  • Orchestrated and well-funded harassment campaigns against researchers
  • risk making science more vulnerable to attacks
  • masquerade as scientific inquiry
  • Increasingly … calls for retraction are coming from people who do not like a paper’s conclusions
It may be true that some researchers, e.g. in climate change (apparently a good example for all of science), are subject to attacks by opponents. But that has nothing to do with data sharing and openness. Bringing such language and examples into the openness debate is a distortion of the discussion. Anyone working on climate change who does not share their data is automatically subject to criticism, because opponents can claim the author is 'hiding something'. Being transparent on how conclusions were reached is exactly the right way to protect yourself from criticism. Holding back your data won't make opponents go away. And if you really made some mistakes in the analysis, then why should that remain undiscovered? Or wait, I'm not sure I understand science anymore. What was the goal again?

The comment goes on to provide a list of red flags about researchers and red flags about the so-called critics, pretending to provide a balanced view of benefits of transparency versus withholding your data. Again, the list suggests that anyone asking for replication materials could potentially be an amateur, have a financial interest in publishing a failed replication, might plan personal attacks, hack p-values, or insist on access to confidential data.

The list is absurd, and almost comical.

If it weren’t published in Nature, and if it didn’t have the potential to lend harmful arguments to opponents of transparency, I would not have taken it seriously.
 
I just saw that Esther Crawley's co-author, Jonathan Sterne, was promoting that Lewandowsky/Bishop piece to Andrew Gelman: http://andrewgelman.com/2016/05/19/will-transparency-damage-science/

There was also a discussion in the comments about Sterne's involvement in the Cochrane Statistical Methods Group discussion list, SMGlist. It seems that they don't like people being impolite.

Keith O'Rourke says:
May 19, 2016 at 11:50 am
Erik: But this is a downside that resulted from my making a comment regarding a comment by the same Jonathan Sterne.

Personal names removed.
Sent: Wednesday, June 27, 2012 3:45:32 AM
Subject: Cochrane SMG discussion list and recent email exchange

Dear Dr O’Rourke,

We have taken the decision to suspend you from the Cochrane Statistical Methods Group discussion list, SMGlist.
We have previously warned you about sending confrontational emails to the list. Your most recent posts to the list about noninferiority trials were not politely worded, and we have received several off-list adverse comments from long-standing list members. We are concerned that the tone of such exchanges detracts from the usual collaborative spirit and may deter younger list members from contributing and may lead to resignations from the email list. As you have not modified the tone of your posts since the previous warning, we feel we had no option but to suspend you from the list.

The SMG co-convenors,

My emails to get Jonathan’s thoughts regarding this banning (or the methodological concerns I raised) were never returned.
(Now we have met and interacted many times before this incident.)

Here was my email (think it would be unfair for me to post Jonathan’s here) “

“With all due respect, I believe I need to be very direct if not impolite about this.

I think this is ludicrous – “no reason to view results from noninferiority trials differently”

In noninferiority trials (as in all indirect comparisons) Rx versus Placebo estimates require one to make up data (exactly as it occurred in earlier historical Placebo controlled trials) but rather than being explicit about that making up of the data (e.g. calling it an informative prior) vague assumptions are stated that would justify the making up as essentially risk free. Unfortunately there is no good way to check those assumptions and they are horribly non-robust.

Exactly how much less worse noninferiority studies are than observational studies is a good question – the earlier multiple bias analysis literature, especially R Wolpert, did address this but it seems to have been forgotten. Also Stephen Senn has written a somewhat humorous paper for drug regulators (with huge historical trials the made up SE will almost be zero) – hopefully they won't miss the point.
Cheers
Keith”
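
To make the point in that email concrete, here is a rough sketch (my own toy numbers, nothing from any real trial) of the arithmetic behind the indirect Rx-versus-placebo comparison: the new trial only compares the treatment with an active control, the placebo arm is 'borrowed' from historical trials, and when the historical evidence base is huge its standard error contributes almost nothing, so the indirect estimate looks deceptively precise even though the constancy assumption can't be checked from the new data.

```python
# Toy illustration (hypothetical numbers, not from any real trial) of the
# indirect comparison behind a noninferiority trial:
#   new trial:          treatment T vs active control C  ->  estimate d_TC
#   historical trials:  control C vs placebo P           ->  estimate d_CP
# The implied T-vs-placebo effect is d_TP = d_TC + d_CP, and its variance is
# simply the sum of the two variances, provided you assume the historical
# placebo effect still applies (the unverifiable "constancy" assumption).

import math

def indirect_effect(d_tc, se_tc, d_cp, se_cp):
    """Implied T-vs-placebo estimate and standard error from an indirect comparison."""
    d_tp = d_tc + d_cp
    se_tp = math.sqrt(se_tc**2 + se_cp**2)
    return d_tp, se_tp

# New noninferiority trial: T looks about as good as the active control C.
d_tc, se_tc = 0.02, 0.10

# Historical C-vs-placebo evidence: first a modest trial, then a huge pooled
# evidence base whose standard error is close to zero.
for label, se_cp in [("modest historical trial", 0.10),
                     ("huge historical evidence", 0.01)]:
    d_tp, se_tp = indirect_effect(d_tc, se_tc, d_cp=0.50, se_cp=se_cp)
    print(f"{label}: implied T vs placebo = {d_tp:.2f} (SE = {se_tp:.3f})")

# With the huge historical evidence base the borrowed SE contributes almost
# nothing, so the indirect estimate looks nearly as precise as a direct
# placebo-controlled comparison, yet nothing in the new trial's data can
# check whether the historical placebo response still holds.
```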

That makes me feel worse about Cochrane.
 
When the anti-transparency stuff came out it seemed to me more about supporting academic careers than good science. There were arguments that those who collect data have 'rights' over the data and should be the first to publish, however slow they are. To me that seems unethical - permissions (and funding) were given to do research to further knowledge and help patients, not to further academics' publication records and careers.

I think it's interesting that in the comments that @Esther12 refers to there is a reference to the type of discussions that happen on the Linux kernel mailing list, which are much more robust. This would be more my experience of things. If something is broken then people point it out and expect it to be fixed. Working in a world of security, people spend a lot of time trying to break stuff. Some of it makes our analysis of BPS papers seem unobsessive. I read a paper recently where someone had reverse-engineered an AMD processor so that they could understand and adapt the microcode (really not an easy thing, involving expensive equipment and acid). Other people just go after the easy stuff where there are bug bounties. But companies acknowledge issues and fix them - no one thinks badly of a company with occasional issues as long as they respond well. Of course repeated issues do lead to bad reputations.

It seems to me that the world Bishop lives in is somewhat different from the rest of the world, and expects to remain isolated from it. But maybe they don't expect to do anything useful either? Just publish papers that no one will read or use.
 