Article: Highlight negative results to improve science

Andy

Publishers, reviewers and other members of the scientific community must fight science’s preference for positive results — for the benefit of all, says Devang Mehta.

Near the end of April, my colleagues and I published an unusual scientific paper — one reporting a failed experiment — in Genome Biology. Publishing my work in a well-regarded peer-reviewed journal should’ve been a joyous, celebratory event for a newly minted PhD holder like me. Instead, trying to navigate through three other journals and countless revisions before finding it a home at Genome Biology has revealed to me one of the worst aspects of science today: its toxic definitions of ‘success’.

Our work started as an attempt to use the much-hyped CRISPR gene-editing tool to make cassava (Manihot esculenta) resistant to an incredibly damaging viral disease, cassava mosaic disease. (Cassava is a tropical root crop that is a staple food for almost one billion people.) However, despite previous reports that CRISPR could provide viral immunity to plants by disrupting viral DNA, our experiments consistently showed the opposite result.
https://www.nature.com/articles/d41586-019-02960-3
 
Only publishing 'positive' results is itself a form of selection bias. It is as absurd as publishing UK temperatures in °C but discarding all the negative values.

It feels like there should be a very different publishing infrastructure in place: one where a trial's preregistration includes undertakings to publish, by the scientists, by their institutions, and by the publishers, ensuring that results are published without fear or favour, wherever they fall. If this crucial step is not agreed to, then the trial does not get funded. But that is probably a pipe dream.

I fully appreciate that funders have hugely vested interests, and therefore this sort of bias is exactly what they expect and demand. If the trial they funded shows their new magic potion helps millions of people, then they obviously want as many people as possible to know about it. If it didn't work, or worse, if it maimed half the people (thalidomide?), then they clearly want to squash it. The bottom line is their concern, not philanthropy.
 
Negatives can be as illuminating as positives, if not more so.

Often in design, people know what they don't want rather than what they do. You can often get to a solution by knowing what not to do: it's an elimination process as well as a creative one.

The literature is seriously incomplete without honestly reported null results.

How much funding could be garnered and/or better targeted if we had a fuller picture?
 
As information technology develops, the options for sharing information grow.

Historically, journals were by necessity selective, with, at their best, the aim of selecting the most informative and reliable studies advancing understanding in their specific field. However, as said, this can have the unintended side effect of focusing on positive results and on research within the currently prevailing prejudices.

This is not to say there is no longer a place for high-profile journals promoting specific, potentially important studies through selective publication practices. However, there is now also the opportunity for total blanket publication of all grant-funded research, even studies with negative or ambiguous results. It would be possible for academic institutions or specific research fields to publish online write-ups of all research undertaken that was grant-funded or required ethical approval.

Presumably the researchers/authors would at some point decide, for a specific piece of research, either to go down the traditional journal route or to prepare a write-up for this fall-back blanket publication system. The system could note all prospectively registered research, grant awards and ethical approval requests, then either provide links to subsequent journal articles, host write-ups of those studies that did not make it through the conventional journal publication pathways, or provide statements about research that, for whatever reason, was not completed or not written up.

Because of the costs involved, such a system might not be able to offer any editorial oversight or peer review, so research published this way might require more critical examination than work going through the current journal system. But, as readers here will know, editorial oversight and peer review do not always guarantee unbiased or accurate reporting. The system would, however, ensure access to negative or inconclusive results, which may be as important to understanding an issue as the more hyped positive findings.
 
a failed experiment

An experiment is only a failure if it is not conducted rigorously. A negative result is not a failure in itself.
In other words, the point of an experiment/observation is to get a clear (or clearer) result. It doesn't matter whether the result is negative or positive, or whether it confirms or refutes the hypothesis, as long as it is an accurate result.

Clarity (removing or reducing ambiguity) in experimental/observational outcomes is the aim. Lack of clarity is the failure.

Say there is indirect evidence that there may be a lake at location X. So you go to X, carefully survey the area, and find there is no lake.

That negative result is not a failed experiment/observation; it is in fact a highly successful one, because it answered the question unambiguously.
 
How many unpublished trials are there for ME?
What does their hypotheses/subject matter tell us?

I had hoped Amalok Bansal's glucocorticoid receptor trial would be published, as the theory made a lot of sense from our experience.

Then of course there are the BPS ones — their non-publication may have interesting implications.

Is there a list of unpublished trials?
 