
Who Agrees That GRADE is (a) unjustified in theory and (b) wrong in practice?

Discussion in 'Other research methodology topics' started by Jonathan Edwards, Mar 4, 2021.

  1. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,385
    Ironically, the medical profession uses technology all the time that is designed, manufactured and tested to the most rigorous safety standards, and it would rightly complain bitterly if such medical equipment failed unsafely.
     
  2. Sean

    Sean Moderator Staff Member

    Messages:
    7,151
    Location:
    Australia
    Speaking as a former medical equipment tech, they sometimes even complain when it is working perfectly.
     
    FMMM1, Louie41, Michelle and 5 others like this.
  3. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,486
    Location:
    UK
    If the experts judging are those producing the evidence, then that doesn't work.
    At least, if that happens and it isn't a fatal flaw, then it is up to the others to give a coherent case as to why the argument is wrong. If they were engineers looking at, say, a bridge design, they would have multiple opinions look at it and discuss it, and this would help get at any flaws. In security we have people who specialize in 'offensive security', i.e. breaking systems and finding all the flaws - so perhaps we need people in the medical world who specialize in this. The trick is to do this early, during the design phase, rather than when the results are out (when it could be too late). But then, we knew the PACE protocol was flawed when they published it.
     
    Hoopoe, Louie41, Michelle and 5 others like this.
  4. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,486
    Location:
    UK
    Sometimes the equipment doesn't tell them the result that they know is right through 'clinical experience'.
     
  5. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,385
    Just imagining what medicine would be like if the medical devices in ICU had been designed and developed by "BPS engineers".

    Instead of medics looking at the reliable readings they are used to, they would probably have to press a button and then an electronic voice would pipe up: "Please all draw a graph of what you think the patient's cardiac output looks like. I will then aggregate your results and present them to you on my screen, which you will then know you can trust."
     
  6. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,385
    But that is true of all users of the tech that we engineers produce :).
     
  7. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,385
    Which comes back to the principle of striving to demonstrate that something does not work, and only in failing to do that having confidence that it does. That is the very thing the BPS folk cannot get their heads around, and it does not suit their motives.
     
  8. petrichor

    petrichor Senior Member (Voting Rights)

    Messages:
    320
    I think I agree with you that the pseudo-arithmetic borders a bit on being pseudoscientific. I still think you need some kind of strict systematic method to prevent bias, though. I get the impression that it's not that hard to put together a committee of people who have the same blind spots and may broadly agree with each other, and it may not often be that obvious where that blind spot is.

    It's a tough balance between slightly pseudoscientific pseudo-arithmetic and using systematic methods to prevent bias. GRADE could probably do with a good deal of optimisation, but it seems to me that it's barking up the right tree at least.
     
    Louie41, Michelle and SNT Gatchaman like this.
  9. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,439
    Location:
    London, UK
    At least NICE seemed to concede that they didn't get the competing-interests issue right in 2007. They didn't seem to get it quite right this time either, but the final committee complement may have turned out to be politically advantageous. Consent to the guidelines was obtained even from those with interests.
     
    FMMM1, Louie41, Michelle and 6 others like this.
  10. Trish

    Trish Moderator Staff Member

    Messages:
    52,196
    Location:
    UK
    But what if the arithmetic itself has built-in bias? For example, a clinical trial that bases its primary outcomes entirely on subjective questionnaires and is unblinded should have an arithmetical weighting that puts it immediately in the very low quality evidence category. That does not appear to be the case. So human bias is there even with arithmetic.
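    A minimal sketch in Python of the kind of hard rule being described here (the function, categories and cut-off are hypothetical illustrations, not GRADE's actual algorithm):

        # Hypothetical hard cap: an unblinded trial whose primary outcomes
        # are entirely subjective lands in "very low", whatever the rest of
        # the assessment says.
        def rate_trial(unblinded: bool, subjective_primary: bool,
                       base_grade: str) -> str:
            """base_grade is one of: 'high', 'moderate', 'low', 'very low'."""
            if unblinded and subjective_primary:
                return "very low"  # the combination is treated as fatal
            return base_grade

        # A PACE-style design drops straight to the bottom category.
        print(rate_trial(unblinded=True, subjective_primary=True,
                         base_grade="moderate"))  # -> very low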
     
  11. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,439
    Location:
    London, UK
    Yet GRADE specifically says it is not this. It says it is subjective, not objective - it needs interpretation.
    So when it comes to the bias bit it opts out!!

    So I don't think it is. GRADE does not even attempt to deal with bias in interpretation. And the pseudo-arithmetic is not slightly pseudoscientific, it is rubbish. Moreover, I see no need for it. You make decisions based on rational argument, not pseudo-sums. There is actually no need at all for 'grades'. All you need to say is that something is completely unreliable. GRADE has a 'translation' of its grades to that sort of effect, but why? Why not say it directly?
     
    FMMM1, Louie41, Michelle and 6 others like this.
  12. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,486
    Location:
    UK
    Something I don't get is why we down-weight evidence that is at high risk of bias. That is still considering it and giving it weight even though it is crap, and if there is nothing else it starts to look good. Would an approach where we represented evidence in terms of bounds be better, where uncertainty is a value somewhere between 0 and 1 and low-value evidence would barely close that gap? Then have some sort of Bayesian evidential update rule, as in the sketch below?

    I guess I'm just thinking there are ways of assessing and combining evidence under uncertainty that don't just involve weighting stuff.
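    A toy sketch of one way that idea could be formalised (the 'reliability' discount on the likelihood ratio is an invented parameter and all the numbers are made up, so this is illustrative only, not an established method):

        # Toy Bayesian odds update in which evidence at high risk of bias
        # has its likelihood ratio shrunk towards 1, so it barely moves
        # the belief either way.
        def update(prior: float, likelihood_ratio: float,
                   reliability: float) -> float:
            """reliability: 0 = worthless evidence, 1 = fully trustworthy."""
            discounted_lr = likelihood_ratio ** reliability  # -> 1 as reliability -> 0
            prior_odds = prior / (1 - prior)
            posterior_odds = prior_odds * discounted_lr
            return posterior_odds / (1 + posterior_odds)

        p = 0.5                      # start undecided
        print(update(p, 10.0, 1.0))  # reliable trial: belief rises to ~0.91
        print(update(p, 10.0, 0.1))  # same claim from a biased trial: only ~0.56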
     
    Hutan, Louie41, Michelle and 5 others like this.
  13. petrichor

    petrichor Senior Member (Voting Rights)

    Messages:
    320
    I suspect GRADE opts out there because claiming to be objective would be a very bold claim to make, and it still needs to be kept in mind that there's an allowance for subjectivity. I don't think a truly objective method for assessing evidence could ever be found, but that doesn't mean the idea of using strict systematic methods to reduce bias where possible should be given up on.

    When I think about systematic reviews that have been done, doing them non-systematically would have allowed an incredible number of potential limitations and questionable methods to flourish, in a way that could almost make them completely useless. You'd almost have to redo the systematic review to get an idea of where all its limitations actually are. I just don't see how you could stop using systematic methods without producing a lot of situations like that.
     
    Snow Leopard and Hutan like this.
  14. Invisible Woman

    Invisible Woman Senior Member (Voting Rights)

    Messages:
    10,280
    As someone who is affected by these GRADE systems and processes, but is quite outside them, they seem a bit absurd. The Hitchhiker's Guide's Golgafrinchans come to mind.

    The much-vaunted PACE trial was very badly designed from the start. Even with all the obvious errors that should have stacked the results in favour of the treatment, it failed. £5 million to demonstrate nothing beyond recovery roughly in line with the natural recovery rate if you did nothing.

    So the treatment itself doesn't work, regardless of any hypothesis as to causation.

    There is no objective evidence. Instead we have anecdote, and are expected to rate clinicians' anecdote above the anecdotal evidence of thousands of patients. Even though clinicians don't do long-term follow-ups, and don't even define harms meaningfully, let alone record them. Patient surveys - four of them - show roughly consistent results.

    So, if we are to take "clinical" experience into account from the clinicians' perspective, then we need to take the patients' experiences into account. Especially those who are beyond the honeymoon placebo period and the short-term influence on how to answer a questionnaire.

    If new evidence, or a reinterpretation of old evidence, is being presented to NICE, then that evidence is still based on research that was poorly done.

    If new evidence is based on clinical experience, then one has to ask where this evidence has come from and question its quality, because we know their record keeping is dire.

    Everything else seems to just be an attempt to muddy the water.
     
  15. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,439
    Location:
    London, UK
    But GRADE does not address the bias of the assessors. The permitted reasons for downgrading are applicable to just about anything if you want them to be. And it constrains downgrading, as Trish and Adrian say, so it is worse than useless in a sense.

    The only strictly systematic approach is to use the right arithmetic - which, as Adrian says, is Bayes. The problem is that there are too many unknown unknowns to get anywhere near numbers that you could feed into a Bayes equation. Moreover, the goalposts will move. If you have a set of rules based on Bayes, then researchers, like Maxwellian demons, can redesign their studies in such a way that the equation no longer applies.
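    For reference, the equation in question is Bayes' rule in odds form (here H is the hypothesis that the treatment works and E is the trial evidence); the likelihood terms on the right are exactly the numbers that unknown unknowns, fabrication included, stop you estimating:

        \frac{P(H \mid E)}{P(\neg H \mid E)}
        = \frac{P(H)}{P(\neg H)} \cdot \frac{P(E \mid H)}{P(E \mid \neg H)}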

    So, for instance, you might put into the equation an observation from past studies showing that fabricating data is rare, so the probability of it making data unreliable is too small to bother with. So all a researcher now has to do is fabricate data and get away with it. We have seen 'pragmatic studies' designed to get round the need for tricky ethics committee approvals and the need for pre-stated primary outcome measures. People will be gaming GRADE in just the same way, and however close a system of rules gets to the real probabilities now, it will be different from then on.

    And the decision on CBT and GET is so easy. None of the trials satisfy minimum standards for being free of fatal bias. All the detailed trawling is useful for providing evidence that they really don't work, but to be sure that there are no grounds for maintaining that they do work, in reality all you need to do is look at the abstracts.
     
    FMMM1, Simbindi, Snow Leopard and 7 others like this.
  16. Invisible Woman

    Invisible Woman Senior Member (Voting Rights)

    Messages:
    10,280
    I would agree that the best way to look at evidence for anything is to organise it and look at it in a systematic way, especially when you are looking at a large amount of information.

    However, a system for organising data to make sure that nothing is overlooked and the resulting opinion discusses all relevant angles is not the same thing as a system that creates an arbitrary rating scale and then has people make judgements based on that.

    An arbitrary rating system may be quite wrong. People can tend to focus more on the system than on the underlying data, and this can cause errors to be easily overlooked or the underlying data to be misinterpreted. Applying a rating system to something of very poor quality can hide just how poor the underlying evidence is.
     
    Snow Leopard, Michelle, Barry and 4 others like this.
  17. Wonko

    Wonko Senior Member (Voting Rights)

    Messages:
    6,682
    Location:
    UK
    It's much, much, worse than that.

    The clinicians whose clinical experience we are expected to trust are the very same clinicians who either didn't spot that CBT/GET didn't work in their own trial (a 'poorly designed' trial designed to prove they did work, no matter what happened), or who fraudulently altered the results, then suppressed them when even that didn't work, and either deliberately made false statements about the trial's 'success' or knowingly allowed others to do so, and benefited, substantially in some cases, from doing so.

    In short, they have shown that, in this case, either their clinical judgement cannot be trusted, or that they can't - I suspect both.

    These are the people who are now holding us to ransom, these are the people we are supposed to trust, with our lives.

    These are the people we are now supposed to sit down, and get all chatty with, to further their (not our) interests.

    I f'ing hate politics.
     
    Lilas, JohnM, Michelle and 18 others like this.
  18. Simbindi

    Simbindi Senior Member (Voting Rights)

    Messages:
    2,746
    Location:
    Somerset, England
    I actually felt that was how the NIHR HTA panel I was on worked. However, it was very different from the other panels, which had a high percentage of therapists, psychiatrists and psychologists assessing their research proposals (there were none on mine).
     
  19. petrichor

    petrichor Senior Member (Voting Rights)

    Messages:
    320
    This just seems like more of an argument for using even stricter systematic methods, rather than giving up on them altogether.
    I agree. Some of the most compelling reasons for dismissing the evidence for CBT and GET don't really fit into the methods of GRADE, which demonstrates a lot of issues with it, and how it can be manipulated. But I don't think that means there isn't a very useful place for systematic methods.
     
    Michelle and Hutan like this.
  20. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,439
    Location:
    London, UK
    But what would this 'systematic' be other than to systematically apply every relevant piece of reliable evidence we have about trials and make a rational inference? If there is a paper from 1987 showing, surprisingly, that what looks like a good method for measuring something is useless then that piece of information should be applied. If there is a paper that suggests that Dutch therapists are better at CBT than UK ones (I think there is) then that should be applied. It seems to me that 'systematic' just means 'using every valid piece of evidence at our disposal'.
     
    FMMM1, Snow Leopard, Michelle and 3 others like this.
