Causal Inference About the Effects of Interventions From Observational Studies in Medical Journals, 2024, Bibbins-Domingo et al

Discussion in 'Research methodology news and research' started by rvallee, May 10, 2024.

  1. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,919
    Location:
    Canada
    Causal Inference About the Effects of Interventions From Observational Studies in Medical Journals
    https://jamanetwork.com/journals/jama/fullarticle/2818746

    Importance
    Many medical journals, including JAMA, restrict the use of causal language to the reporting of randomized clinical trials. Although well-conducted randomized clinical trials remain the preferred approach for answering causal questions, methods for observational studies have advanced such that causal interpretations of the results of well-conducted observational studies may be possible when strong assumptions hold. Furthermore, observational studies may be the only practical source of information for answering some questions about the causal effects of medical or policy interventions, can support the study of interventions in populations and settings that reflect practice, and can help identify interventions for further experimental investigation. Identifying opportunities for the appropriate use of causal language when describing observational studies is important for communication in medical journals.
     
  2. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,919
    Location:
    Canada
    One problem is with the highlighted portion above: well-conducted. As we have seen, there is a widespread problem in academia where "this is a great study" = "I like the conclusions" and "this is a bad study" = "I don't like the conclusions". If anything, almost all pragmatic trials are very poorly conducted, even horribly designed, with the clear intent and purpose of proving a conclusion rather than finding out whether it is right or wrong. They can never account for all other things being equal, and they have huge amounts of bias and confounders, but they often make clear causative assertions anyway.

    So although they point out the problem in observational studies, the problem is actually far larger in clinical trials, especially pragmatic trials, which they actually cite as a better way of doing things. It is really concerning to point at a place where the problem is even worse as an example of where things are better.

    What's worse is that in evidence-based medicine, the problem is largely side-stepped by using duplicitous language that makes clear causal inferences while being framed in an implausibly deniable way. I say implausible deniability because it's done so systematically that it can't seriously be said that this is not the intent and purpose of the language: to overhype not just mediocre results, but sometimes negative results, turning them into false positives by the mere use of shady wording that can be mumbled out in a classic motte-and-bailey fallacy.

    For sure this would help with some of the terrible observational evidence base out of psychosomatic medicine, although I pretty much assume that field would be exempted from it. But the problem is even worse with pragmatic evidence-based medicine trials: where observational studies fail to establish clear causative factors (but argue them anyway), they simply rely on biased pragmatic trials as an excuse. They'll say "we're not saying that this is psychological, we're just saying that psychologically-informed behavioural adjustment is a safe and effective treatment", which is the exact same thing using different words.
     
    alktipping, Sean, Kitty and 4 others like this.
  3. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,963
    Location:
    London, UK
    I haven't looked at the paper beyond the abstract but this sounds like special pleading, as you say.

    There is a very simple solution to the problem. Stop using causal language at all. A trial should be reported and readers should be able to conclude what they like about cause and effect. When I published my first trial of rituximab the title was 'improvement in RA following a protocol designed to deplete B lymphocytes'. I did not even think I should imply that, if there was a causal relation, the treatment had caused the improvement via B cell depletion. I simply stated the fact that the patients did improve. The detailed pharmacodynamics made it sufficiently implausible that there was not a causal link that a drug company switched from providing no funding for the work to allocating $2M overnight. Someone read the data and decided what it meant.

    Nobody should be writing in causal language in trial papers. It isn't even necessary. Medical care is not based on what the authors of a paper claimed to occur. It is based on a careful analysis of some data.
     
    rvallee, MeSci, FMMM1 and 8 others like this.
  4. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,963
    Location:
    London, UK
    The other point is that the bold text mentions randomised trials, and as we saw with PACE, randomisation isn't actually what makes a trial bulletproof. Adequate controlling is far more important and can quite often be achieved without randomising. Pragmatic trials are more or less by definition not adequately controlled - that is what gets them downgraded to pragmatic. Failure of randomisation is one way to screw up adequate controlling, but only one of many.
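    As a toy illustration (a sketch with made-up numbers, nothing to do with any real trial), here is what happens when a subjective outcome is left uncontrolled for expectation bias, even with perfect randomisation:

        # Toy simulation: the treatment has zero true effect, but the unblinded
        # treated arm rates itself a little better on a subjective outcome.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        true_effect = 0.0        # the treatment does nothing
        reporting_bias = 0.5     # unblinded participants report feeling better

        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(true_effect + reporting_bias, 1.0, n)

        diff = treated.mean() - control.mean()
        se = ((treated.var(ddof=1) + control.var(ddof=1)) / n) ** 0.5
        print(f"apparent 'effect': {diff:.2f} ({diff / se:.1f} standard errors)")
        # Randomisation was perfect, yet the comparison 'shows' a clear effect,
        # because the outcome was never controlled for bias.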
     
    rvallee, MeSci, FMMM1 and 8 others like this.
  5. dave30th

    dave30th Senior Member (Voting Rights)

    Messages:
    2,292
    What exactly do they mean when they call it "a pragmatic trial"? Is it one somehow piggy-backed on clinical practice? What exactly is the short-cut they're taking?
     
    MeSci, Yann04, FMMM1 and 3 others like this.
  6. Sean

    Sean Moderator Staff Member

    Messages:
    7,488
    Location:
    Australia
    Completely agree. Randomisation is just one means of control. A good one, to be used wherever possible, but hardly sufficient on its own.
    Never been clear to me either. But as far as I can tell it is just an excuse to avoid having to deal with that pesky core requirement of science to establish causation.
     
  7. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,963
    Location:
    London, UK
    The pragmatic trials unit at Queen Mary says:

    What is a Pragmatic Clinical Trial?

    Schwartz and Lellouch were the first to use the word “pragmatic” in relation to clinical trials in 1967. They defined a pragmatic trial as a trial designed to help choose between care options, as opposed to an explanatory trial, which is used to test causal research hypotheses, for example about biological processes.

    About 30 years later Roland and Torgerson made the distinction between these two types of trial in a slightly different way, explaining that explanatory trials evaluate efficacy, the effect of treatment in ideal conditions, and pragmatic trials evaluate effectiveness, the effect of treatment in routine clinical practice.

    In the twenty-first century it has been recognised that there is, in fact, a spectrum of trials with very explanatory trials at one end and very pragmatic trials at the other end.


    But of course this begs the question as to what makes the difference such that pragmatic trials cannot evaluate efficacy. I have hunted for a definition of this but have not found one.
     
    alktipping, Trish and Peter Trewhitt like this.
  8. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,963
    Location:
    London, UK
    The Wikipedia entry on pragmatic trials is informative. It is also intriguing to see that in this sort of case entries are always written by people who like the idea. Those who don't cannot be bothered with pointing out the poor logic.

    A pragmatic trial is one that in some way forfeits the ability to determine cause and effect relations by deliberately not using procedures needed to avoid 'confounders' - essentially biases.

    The enthusiasts waffle on about effectiveness rather than efficacy and correlation rather than causation. But there is a clear bait and switch involved. The standard pragmatist's line, which Chris Burton parroted to me at the NICE committee, is that pragmatic trials cannot decide causal issues, where the example given is always that drug X causes this effect through process Y. What that avoids saying is that pragmatic trials equally cannot decide whether X causes any effect at all.

    As far as I can see the original legitimate basis for suggesting pragmatic trials was to get away from over-restricted RCTs that showed that X causes a positive effect in ideal cases under ideal conditions. RCTs have often been 'over-sanitised' by excluding people over 65 or with kidney disease or who have had previous treatments of a certain class. The results may well not be representative of a real population wanting treatment.

    To me the obvious conclusions are: 1. Over-restriction of RCTs, where driven by commercial or otherwise ill-justified reasons, should be minimised anyway. 2. You still need adequately controlled trials that deal with confounders to show X has an effect. You may then want to do further pragmatic trials to see if in clinical practice that effect translates into a useful service. If breast cancer screening is shown to reduce mortality in a closely observed study, you may want to see whether in practice anyone comes for screening. The value is probably mostly in terms of best use of financial resources in keeping people well.

    But there is absolutely no doubt that the pragmatic triallists want to be allowed to do trials to show that X causes an effect, without needing to deal with the biases that might confound the conclusion. And where do they mostly want to do this? For therapist-delivered rehabilitative treatments.
     
  9. FMMM1

    FMMM1 Senior Member (Voting Rights)

    Messages:
    2,761
    One of the big game changers(?) re CBT- and exercise-type interventions is actimetry, i.e. Fitbit-type devices. E.g. my limited understanding is that you can get relatively good information on time spent upright as well as number of steps. In addition there are indicators like ability to participate in education or work --- to function normally.
    If you're delivering an expensive public service (i.e. funded by tax) then it's reasonable to ask [EDIT - "objectively assess"!] - X people are incapacitated, we pay Y therapists to deliver Z service - (as per Jonathan*) is it value for money -
    • how many people significantly improved functioning (ability to work etc)?
    • how long did the improvement last ----?
    • compared to "do nothing" option - is it value for money?
    Of course there are lots of good reasons not to do the above - it doesn't give the correct result i.e. justify the therapist!

    *"You may then want to do further pragmatic trials to see if in clinical practice that effect translates into a useful service. If breast cancer screening is shown to reduce mortality in a closely observed study, yo may want to see whether in practice anyone comes for screening. The value is probably mostly int terms of best use of financial resources in keeping people well."
     
  10. Sean

    Sean Moderator Staff Member

    Messages:
    7,488
    Location:
    Australia
    Exactly. What is the point of such trials? If they are not testing for causality then what the hell are they testing for?

    They seem to believe that testing for causality is the same as being able to describe the underlying causal pathway/s. But it isn't. You can test for the fact that A causes B, at least probabilistically, without knowing the details of how it happens. You can know that turning a light switch causes the light to come on without understanding the full process behind how it happens. That is exactly the current state of knowledge about quantum mechanics, which can make extremely accurate and precise predictions, to 11 decimal places. It is literally the best theory we have in all of science. But we have very little idea how it works underneath.

    Adequate control of the relevant significant variables, including in all therapeutic trials, or get out of the game. Not negotiable.
     
  11. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    2,909
    Wheedling or post hoc justification, by the sounds of it.

    Why is it that healthy people are often the lazy ones who then spend so much energy trying to sell why their laziness is necessary, OK or good for others, instead of just getting on with it?


    Whilst we step way above our limits doing this for them, and they just want to step on us to keep us down. There’s a whole inverted thing going on here.

    On the question of observational vs clinical trials: unless they are focusing on making said observational studies take on increased robustness to avoid retrospective issues (like ‘those who exercise ten miles a week have fewer diseases’ actually being reverse causality), then it feels like a :banghead: and ‘you can’t teach stupid’ moment.
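    A toy sketch (made-up numbers, not any real dataset) of how that reverse causality plays out when you just look at observational data:

        # Toy simulation: people exercise more because they are already healthy,
        # so exercise 'looks' protective even though it causes nothing here.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 10_000

        health = rng.normal(0.0, 1.0, n)               # unmeasured baseline health
        miles_per_week = np.clip(5 + 3 * health + rng.normal(0, 2, n), 0, None)
        p_disease = 1 / (1 + np.exp(2 * health))       # worse health -> more disease
        disease = (rng.random(n) < p_disease).astype(float)

        r = np.corrcoef(miles_per_week, disease)[0, 1]
        print(f"correlation(exercise, disease) = {r:.2f}")  # clearly negative
        # An uncontrolled retrospective analysis would call exercise 'protective',
        # even though exercise has no causal effect on disease in this setup.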
     
    Sean, alktipping and Peter Trewhitt like this.
  12. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    2,909
    The big issue, if that wasn’t done as a trial but as a retrospective study, is that if something works for you then you keep using it, but if it doesn’t, or you find something better, you don’t.

    I never see people posting on Strava if they’re going up the stairs at work and walking back from the pub, now that they’ve done their half marathon or the racing bike is back in the shed.

    And then there is the fishing, and how you time-chunk things.

    For ME/CFS I think the big deterioration is over a pretty long time period. I.e. if you think of GET+CBT as actually having been a ‘don’t care about the collateral damage of the real illness (cos of their beliefs there is ‘something else’), let’s force people to complete something that either fixes them or harms them in order to access life-sustaining things’ version - a behavioural-psychology-based predecessor of the 2-day CPET to confirm ME/CFS -

    then I think the continual pushing over limits causes far more harm than doing the 2-day CPET - extreme as it is (but then you rest and recover) - only once does, harm-wise.

    And in fact they couldn’t have come up with a more damaging and wangled timeframe than six months.


    You cumulatively overdo it for that time, even by a bit, seemingly successfully enough to hold down your job, and boy does it whack you health-wise, in a way I’m not sure is easy to recover from at all, vs e.g. doing a ‘peak few weeks’ etc.


    So that’s why the tech is fine, but the research design can’t just change because the tech exists.
     
    Lou B Lou, Sean, alktipping and 2 others like this.
  13. FMMM1

    FMMM1 Senior Member (Voting Rights)

    Messages:
    2,761
    I think it's possible to assess whether your activity is normal/average - e.g. I recall Paul Garner was challenged about claiming he was ill - someone pointed out that his posts effectively stated that he was doing more than the average person his age (maybe 500 steps was average and he publicly stated that he was doing 800).
    There are other indicators e.g. being too fatigued to attend education/work.
    Yea, I take your point that:
    • the psychologists have decided it's psychological --- push through --- so the "indicators" have to confirm the result they know is correct - so actimetry is excluded!
    • you need to be careful that people aren't just doing one thing (i.e. the monitored outcome) and everything else is sacrificed to achieve that one thing!
    • duration is important - 18 months?
     
    Peter Trewhitt, Sean and alktipping like this.
