Blog: Hilda Bastian, "Science Heroes and Disillusion"

Andy

Senior Member (Voting rights)
“Never have heroes” – I’ve heard some version of that a lot in the last couple of months, from people disillusioned by John Ioannidis’ contributions to the research and debate around Covid-19. Meanwhile, others are still calling him an EBM or science hero.

It sounds like it should be a contradiction in terms, doesn’t it? – “EBM hero”. The essence of evidence-based medicine (EBM), after all, is meant to be the rejection of eminence-based medicine – believing something just because someone up high says it is so, with a bunch of cherry-picked citations.

But I think there can be, and indeed are, EBM and science heroes. I found a 2011 conceptual analysis of heroism and altruism by Zeno Franco and colleagues helpful for thinking this through. Which is seriously ironic, since one of the authors is Philip Zimbardo – an emeritus Stanford professor of notoriety because of that prison experiment!

Reading the article, though, crystallized how I, personally, think of heroism: it’s prosocial behavior, that’s altruistic, struggling against odds, and despite serious risks to the individual. It’s not hard to think of scientists, and healthcare providers struggling to ensure care is evidence-based, who fit that bill and are totally worthy of the label, hero.
https://absolutelymaybe.plos.org/2020/06/30/science-heroes-and-disillusion/
 
What is she referencing with Ioannidis and COVID-19? I've not considered him a hero. He's written about the problems in the nutrition field, which has been nice, but at the same time he accepted some of the bad evidence from the science he was criticising, and to me it came out a bit weird.
 
I must admit I find the concept of needing to have "heroes" or even the concept of "evidence based medicine" a bit daft. I am talking about the concept here - not people who agree with it & not the author.

There have been people in my life I have had a great deal of respect for. This shouldn't mean we suspend our critical judgement or be unable to question their ideas. Everyone is fallible.

At work, if I blindly took someone's advice without applying my own critical thinking and it turned out to be wrong, I would still be held accountable. So personal accountability for one's own beliefs is pretty important. Or should be.

If a colleague thought they'd seen a flaw in one of my designs or an error made in troubleshooting and didn't speak up, I would be really, really annoyed. That responsibility to make sure things are done right is on everyone involved, regardless of relative seniority.

This is the importance of the focus being where it should be- on the job in hand & not on the person doing the job.

If your standard practice is to do the best job you can within whatever constraints you have to deal with then your reputation should look after itself.

As for evidence based medicine.... If there isn't good, solid evidence from research using the best practices of the time then it's just an opinion & should be regarded as such.

If there is solid evidence then we still have to remember that with further research, new technologies etc our understanding can change. So it's understanding based on the evidence to date. Don't go carving it in stone.

Maybe I'm missing something?
 
What is she referencing with Ioannidis and COVID-19?
If you follow the link Hilda gives to the Stuart Ritchie article:
https://unherd.com/2020/06/why-there-should-never-be-heroes-in-science/
You will see a description of the major misstep Ioannidis made with a supposed epidemiology study of Covid in the USA that got its methodology badly wrong – using social media to invite people to have a Covid test, and treating that as a representative patient sample, which it clearly wasn't, since people are more likely to volunteer if they think they have Covid.

First, in mid-March, as the pandemic was making its way to America, Ioannidis wrote an article for STAT News where he argued that we should avoid rushing into big decisions like country-wide lockdowns without what he called “reliable data” on the virus. The most memorable part of the article was his prediction — on the basis of his analysis of the cursed cruise ship Diamond Princess — that around 10,000 people in the US would die from COVID-19 — a number that, he said, “is buried within the noise of the estimate of deaths from ‘influenza-like illness’”. As US deaths have just hit 125,000, I don’t need to emphasise how wrong that prediction was.

So far, so fair enough: everyone makes bad predictions sometimes. But some weeks later, it emerged that Ioannidis had helped co-author the infamous Santa Clara County study, where Stanford researchers estimated that the number of people who had been infected with the coronavirus was considerably higher than had been previously supposed. The message was that the “infection fatality rate” of the virus (the proportion of people who, once infected, die from the disease), must be very low, since the death rate had to be divided across a much larger number of infections. The study became extremely popular in anti-lockdown quarters and in the Right-wing populist media. The virus is hardly a threat, they argued — lift the lockdown now!

But the study had serious problems. When you do a study of the prevalence of a virus, your sample needs to be as random as possible. Here, though, the researchers had recruited participants using Facebook and via email, emphasising that they could get a test if they signed up to the study. In this way, it’s probable that they recruited disproportionate numbers of people who were worried they were (or had been) infected, and who thus wanted a test. If so, the study was fundamentally broken, with an artificially-high COVID infection rate that didn’t represent the real population level of the virus (there were also other issues relating to the false-positive rate of the test they used).
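The selection effect Ritchie describes is easy to demonstrate with a toy calculation. All the numbers below are hypothetical, not taken from the Santa Clara study; they only illustrate what happens when infected people are more likely to volunteer for testing:

```python
# Toy demonstration of volunteer selection bias (all numbers hypothetical).
# Suppose the true prevalence is 1.5%, but infected people are assumed to be
# six times as likely to volunteer for a free test as uninfected people.
true_prev = 0.015
p_volunteer_infected = 0.30
p_volunteer_uninfected = 0.05

# Fractions of the whole population that end up in the volunteer sample,
# split by infection status
infected_in_sample = true_prev * p_volunteer_infected
uninfected_in_sample = (1 - true_prev) * p_volunteer_uninfected

# Prevalence as measured among the volunteers
observed_prev = infected_in_sample / (infected_in_sample + uninfected_in_sample)

print(f"true prevalence:          {true_prev:.1%}")
print(f"prevalence in the sample: {observed_prev:.1%}")
```

With these assumed volunteering rates the sample shows a prevalence of a bit over 8% – more than five times the true 1.5% – even though every individual test result is perfectly accurate. The bias comes entirely from who turns up.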
 
If you follow the link Hilda gives to the Stuart Ritchie article:
https://unherd.com/2020/06/why-there-should-never-be-heroes-in-science/
You will see a description of the major misstep Ioannidis made with a supposed epidemiology study of Covid in the USA that got its methodology badly wrong – using social media to invite people to have a Covid test, and treating that as a representative patient sample, which it clearly wasn't, since people are more likely to volunteer if they think they have Covid.
I can't believe I missed that. Thanks :)

@Invisible Woman put it quite nicely.
 
The essence of evidence-based medicine (EBM), after all, is meant to be the rejection of eminence-based medicine
The scientific method does that far better. EBM as far as I can see does the exact opposite, it's nothing but eminence-based medicine. It even allows woo, voodoo and blatant fraud, facilitated by siloed-off mutual admiration societies. Encourages it, in fact. On ME and other chronic illnesses it delivered catastrophic failure worse than doing nothing at all. And decades of failure in practice don't even inform the evidence, which is completely absurd.

So what now? The failure is on the scale of the war on drugs: doesn't accomplish any of its primary goals, amplifies all harms meant to be reduced while creating entirely new ones. But let's just chug along pretending it works? Why? Inertia? The scientific method already delivers that. Removing the need to actually falsify hypotheses is probably the biggest flaw in EBM, never have to explain anything, black boxes are fine and dandy.

Why is it so important to keep going? Was anything actually gained other than the hope that it would deliver something one day in the indefinite future? Because it hasn't delivered and the solution seems to be to lower the bar even further, guaranteeing more failure.

Who actually benefits from this? Certainly not the patients. It seems all this does is make it easier to publish opinions and ideologies. That only helps bottom-tier scientists and flawed models that lack a connection to reality. I'd vote yes on ending it. Retract everything. Start over. Escalation of commitment is not a good basis for such major decisions. There's clearly stagnation in improving patient outcomes, I'm pretty sure that's not a coincidence.
 
My evidence-based medicine heroes are the ME patients like Alem, Tom and Bob who have challenged the medical establishment with scientific and legal arguments, and the few disinterested academics and scientists like David and Jonathan who have chosen to stand up for people with ME, truth and the principles of science, not for promotion or personal gain, but because it is the right thing to do.

Some quotes from Hilda’s article (my bolds):
Ioannidis’ work has always been of variable quality, but his status and reputation meant that it would often be overlooked, or his rebuttals of criticisms too easily accepted.

Back in the before times, when we used to have lots of conferences, I would often either be in the audience when Ioannidis was speaking, or, from time to time, on the podium speaking alongside him. At first, I’d just be beguiled and energized by the rousing and erudite performances. But then a crack would appear – a statement I knew to be strongly contradicted by the evidence – and then another, and another. And they’d all be based on self-citations, building the case he was arguing.

To me, the moral of these stories isn’t to not have heroes. It’s to learn a few things: to pick your heroes more carefully; to be wary of the champions of causes as well as anyone who is “against” something on the regular; to watch carefully how people respond to their critics; and to be on guard against the effects of charisma. And if you have a hero, don’t give their science a free pass.
I’m looking forward to Hilda’s promised follow up on that “collective ad hominem attack” statement.
 
The Unherd article by Stuart Ritchie that Hilda links to (“There should never be heroes in science”) is very interesting. Here are some quotes (my bold):
In my own field of psychology, one of the most prominent examples of an uber-critic was Hans Eysenck. From the 1950s all the way to his death in 1997, Eysenck wrote blistering critiques of psychoanalysis and psychotherapy, noting the unscientific nature of Freudian theories and digging into the evidence base for therapy’s effects on mental health (I should note that Eysenck worked at the Institute of Psychiatry, now part of King’s College London, which is my employer).

In one typically acrimonious exchange in 1978, Eysenck criticised a study that had reviewed all the available evidence on psychotherapy. Eysenck argued that this kind of study — known as a “meta-analysis” because it tries to pool all the previous studies together and draw an overall conclusion — was futile, owing to the poor quality of all the original studies. The meta-analysis, he said, was “an exercise in mega-silliness”: a “mass of reports—good, bad, and indifferent—are fed into the computer”, he explained, “in the hope that people will cease caring about the quality of the material on which the conclusions are based.”

Whether or not this was a sound argument in the case of psychotherapy, Eysenck had put his finger on an important issue all scientists face when they try to zoom out to take an overall view of the evidence on some question: if you put garbage into a meta-analysis, you’ll get garbage out.

Healthy science needs a whole community of sceptics, all constantly arguing with one another — and it helps if they’re willing to admit their own mistakes. Who watches the watchmen in science? The answer is, or at least should be: all of us.

“Stuart Ritchie is a Lecturer in the Social, Genetic and Developmental Psychiatry Centre at King’s College London. His new book, Science Fictions: Exposing Fraud, Bias, Negligence and Hype in Science, is published on July 16.”
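Eysenck's "garbage in, garbage out" point can be sketched with a few lines of arithmetic. An inverse-variance weighted average is the core of a fixed-effect meta-analysis; the study numbers below are invented to show that pooling studies which all share the same systematic bias narrows the confidence interval without touching the bias:

```python
# Hypothetical fixed-effect (inverse-variance) meta-analysis. The true
# effect is zero, but every input study carries the same systematic bias
# of roughly 0.3, plus a little noise.
studies = [(0.28, 0.10), (0.33, 0.12), (0.31, 0.08),
           (0.27, 0.15), (0.32, 0.09)]   # (effect estimate, standard error)

# Each study is weighted by the inverse of its variance
weights = [1 / se ** 2 for _, se in studies]

# Pooled estimate and its standard error under the fixed-effect model
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect:         {pooled:.2f}")
print(f"pooled standard error: {pooled_se:.3f}")
```

The pooled estimate lands near the shared bias of 0.3, not the true effect of zero, and with a smaller standard error than any single study: pooling delivers precision without accuracy when the inputs are all biased the same way.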

Given that Ritchie is a psychologist at KCL (where Professors Wessely and Chalder work), I wonder if he thinks the scientific criticism of PACE and other BPS research is healthy. And I wonder whether any of it will get a mention in his book, or whether he may choose to overlook it because of the type of biases that his book appears to be about.

[edit – added title of article]
 
The scientific method does that far better. EBM as far as I can see does the exact opposite, it's nothing but eminence-based medicine. It even allows woo, voodoo and blatant fraud, facilitated by siloed-off mutual admiration societies. Encourages it, in fact. On ME and other chronic illnesses it delivered catastrophic failure worse than doing nothing at all. And decades of failure in practice don't even inform the evidence, which is completely absurd.

So what now? The failure is on the scale of the war on drugs: doesn't accomplish any of its primary goals, amplifies all harms meant to be reduced while creating entirely new ones. But let's just chug along pretending it works? Why? Inertia? The scientific method already delivers that. Removing the need to actually falsify hypotheses is probably the biggest flaw in EBM, never have to explain anything, black boxes are fine and dandy.

Why is it so important to keep going? Was anything actually gained other than the hope that it would deliver something one day in the indefinite future? Because it hasn't delivered and the solution seems to be to lower the bar even further, guaranteeing more failure.

Who actually benefits from this? Certainly not the patients. It seems all this does is make it easier to publish opinions and ideologies. That only helps bottom-tier scientists and flawed models that lack a connection to reality. I'd vote yes on ending it. Retract everything. Start over. Escalation of commitment is not a good basis for such major decisions. There's clearly stagnation in improving patient outcomes, I'm pretty sure that's not a coincidence.
This. (and by the way @rvallee you are my current top hero. Don't let me down, I couldn't take the disappointment – no pressure). The whole point of doing an experiment, especially a trial, is to GENUINELY TRY TO PROVE THAT THE HYPOTHESIS THAT A TREATMENT WORKS IS WRONG. That's why it's called a trial, not a marketing campaign. The word equipoise seems to have been erased from the EBM dictionary, rather than being treated as something ABSOLUTELY CENTRAL to it having any value at all.
 
What is she referencing with Ioannidis and COVID-19? I've not considered him a hero. He's written about the problems in the nutrition field, which has been nice, but at the same time he accepted some of the bad evidence from the science he was criticising, and to me it came out a bit weird.
Yeah - the big disappointment for me (another hero bites the dust) was him co-authoring a network meta-analysis on anti-depressants https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(17)32802-7/fulltext. In my view, if there's one thing less likely to get near the truth about whether a treatment works or not than a meta-analysis, it's a network meta-analysis - assuming most of the included studies are biased in one way or another.
 
From the discussion about the new Cochrane review of exercise for ME/CFS, the impression I got was that there is an unwillingness to say that a treatment isn't good enough. No matter how small the effect, no matter how much doubt there is that it's more than just a placebo effect, nobody wants to say that this isn't good enough for patients.

This favors excessive, unnecessary or useless treatments. Good for people selling these treatments, but often harmful for patients.
 
From the discussion about the new Cochrane review of exercise for ME/CFS, the impression I got was that there is an unwillingness to say that a treatment isn't good enough. No matter how small the effect, no matter how much doubt there is that it's more than just a placebo effect, nobody wants to say that this isn't good enough for patients.

This favors excessive, unnecessary or useless treatments. Good for people selling these treatments, but often harmful for patients.
Cochrane was originally intended as a stand against the journals that were full of pharma advertising disguised as trials, and that was laudable. And the reviewers were unpaid volunteers, which was seen as a stand for citizen science. However, could the volunteer model partly explain why Cochrane finds it so hard to take issue with reviewers? Maybe they feel inhibited about criticising volunteers, and don't want potential new reviewers to be put off.
 
This favors excessive, unnecessary or useless treatments. Good for people selling these treatments, but often harmful for patients.

Of course, the irony is that it favours excessive treatment by those claiming ME patients shouldn't undergo "excessive" testing to find a potentially treatable cause for their symptoms, because the testing harms patients by reinforcing their belief that they are ill.
 
This. (and by the way @rvallee you are my current top hero. Don't let me down, I couldn't take the disappointment – no pressure). The whole point of doing an experiment, especially a trial, is to GENUINELY TRY TO PROVE THAT THE HYPOTHESIS THAT A TREATMENT WORKS IS WRONG. That's why it's called a trial, not a marketing campaign. The word equipoise seems to have been erased from the EBM dictionary, rather than being treated as something ABSOLUTELY CENTRAL to it having any value at all.
Hehe thanks :)

Frankly, seeing the latest beauty from Jo Daniels – literally PACE with a few search-and-replace edits changing "unhelpful illness beliefs" to "health anxiety", everything else the same – makes that point impossible to ignore. Along with the CODES trial. And the CBT music one. And frankly too many now. It's just a deluge of copy-paste pseudoscience with no sense or reason. I don't see how so-called evidence-based medicine can legitimately continue. The whole system is so broken that all evidence produced under it should be considered suspect. Not just possibly suspect but likely invalid. The issue isn't replicability, it's that invalid things coexist with possibly valid ones and no one can tell the difference because everything is laundered the same way.

Like BPS it's a problem of execution. But the execution problem is complete and total, basically holographic since you find the whole in every small part. A few bad apples have effectively spoiled the whole bunch. At the very least by itself, like BPS, it is effectively harmful, misleading and promotes blatant quackery. Maybe moderated by actual science it can serve a purpose, but alone it only serves to enable quackery, in fact it's clearly the only thing it reliably delivers.

Evidence-based medicine has run its course. It was a massive mistake. Let's go back to the good old scientific method. It works. It literally works. For millions of lives' sake, end this corrupt nightmare.
 
Cochrane was originally intended as a stand against the journals that were full of pharma advertising disguised as trials, and that was laudable. And the reviewers were unpaid volunteers, which was seen as a stand for citizen science. However, could the volunteer model partly explain why Cochrane finds it so hard to take issue with reviewers? Maybe they feel inhibited about criticising volunteers, and don't want potential new reviewers to be put off.
This hits the nail on the head. One of the responses I got to my complaint about the Exercise review was that Cochrane has a "duty of care" to its contributors. This seems to now override its duty of care to its alleged "beneficiaries" as a charity.
 