The 'C' in RCT

Discussion in 'Trial design including bias, placebo effect' started by Barry, Dec 22, 2020.

  1. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    So now I'm still confused.

    https://www.nice.org.uk/glossary?letter=r
    Is this saying that a comparison group and a control group are different names for the same thing? And, in that case, that an alternative intervention or no intervention at all can count as a viable control?

    Similarly here:
    https://www.ctu.mrc.ac.uk/patients-public/about-clinical-trials/what-is-a-randomised-clinical-trial/
    I can see that if the trial can be fully blinded, then comparing one treatment to another will allow assessment of the difference in effectiveness between the two. And if that difference is all you are interested in, then maybe that is OK, so long as each treatment is not itself accompanied by its own unique set of confounding variables.

    But how do you blind for no treatment at all (as opposed to a dummy treatment)?
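
    As a toy illustration of that "difference only" point, here is a sketch in Python (all numbers invented, nothing to do with any real trial): in a blinded head-to-head comparison, any effect shared by both arms, such as expectation or natural recovery, cancels out of the between-arm estimate, so you learn the difference between the two treatments but nothing about either one's absolute effect.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200  # participants per arm (hypothetical)

    # Invented ingredients of each participant's outcome score:
    true_effect_a = 5.0  # specific effect of treatment A (assumed)
    true_effect_b = 3.0  # specific effect of treatment B (assumed)
    shared_effect = 4.0  # expectation + natural recovery, same in both arms

    arm_a = true_effect_a + shared_effect + rng.normal(0, 2, n)
    arm_b = true_effect_b + shared_effect + rng.normal(0, 2, n)

    # The blinded comparison recovers A - B (about 2 points)...
    print("estimated difference:", round(arm_a.mean() - arm_b.mean(), 2))
    # ...but the shared 4-point component is invisible: neither arm's mean
    # can separate treatment effect from expectation or the passage of time.
    print("arm A mean:", round(arm_a.mean(), 2),
          "| arm B mean:", round(arm_b.mean(), 2))
    ```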
     
  2. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,933
    Location:
    Australia
    You cannot.
     
    ahimsa, Peter Trewhitt, Ravn and 3 others like this.
  3. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,394
    Location:
    London, UK
    With great respect, @Barry, you did not take note of what I said about language. Language does not follow the obvious rules we expect.

    A controlled trial is not a trial with controls, just as a fastened bag is not a bag with fasteners. You can have a bag with fasteners that don't actually work.
    A controlled trial has to mean a trial with an adequate number of controls.
    The problem that you have identified is that this is never made explicit. The people who understand its importance through common sense design controlled trials with adequate controls. Those who do not, or who prefer to bypass common sense for their own aims, design trials without adequate controls.

    So a control and a comparison, or comparator, are words that can refer to the same thing. But they do not mean the same. A comparator is something that compares. A control is something that excludes at least one spurious effect. But one control still does not make a trial controlled, just as one swallow does not make a summer.
     
  4. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,981
    Location:
    betwixt and between
  5. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,394
    Location:
    London, UK
    I now realise that you were quoting from NICE, @Barry.
    The people who run NICE do not understand this stuff. If they did, they would not use GRADE.

    But the people who run NICE are not the people who sit on committees and make the decisions, so things are complicated.
     
    Last edited by a moderator: Jan 12, 2021
    ahimsa, Peter Trewhitt, Ravn and 4 others like this.
  6. chrisb

    chrisb Senior Member (Voting Rights)

    Messages:
    4,602
    There is a view that that was an allegory about the dollar, the gold standard and the silver standard, so the meeting is not inappropriate.
     
    Ravn and Michelle like this.
  7. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    Thanks Jonathan, that is very helpful.

    Yesterday evening, given your and others' comments here, I thought it might be helpful to come up with a short definition of what the "controlled" aspect of an RCT actually means. Then I checked myself with the thought that I would be reinventing a wheel already created, much better, by others, so I did a web search. I found the NICE definition, realised that others defined it in much the same way, and got confused. The fact that NICE has this definition made it all the more confusing - yet another nail in the coffin of my trust/faith in the medical establishment.

    So, given the above, I'm taking a shot at defining when a trial can or cannot be described as a controlled trial. (For those who do not already know me, please note that I am not medically trained.)

    For a clinical trial to be described as "controlled", there must be adequate suitable controls in place to allow any effects of interest to the trial, due to the intervention alone (beneficial or not), to be clearly distinguishable from any effects not due to the intervention alone.

    Note 1: Observe the need for "adequate suitable controls" - the presence of one or more controls in a trial does not, by itself, justify describing that trial as "controlled" (see the toy simulation below).

    Note 2: Take care to not conflate the effects of an intervention with the effects of participants' awareness of that intervention.
    EDIT: Modified the definition slightly, based on discussion in posts further down with @Jonathan Edwards. The change is to clarify that it is effects due to the intervention that are of interest to the trial, because there may be other effects due to the intervention that are not of interest. My wording could no doubt be improved upon.
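
    To make Note 1 concrete, here is a toy simulation in Python (all numbers invented, nothing to do with any real trial): an unblinded trial with a no-treatment comparison group and a self-reported outcome. The comparison group genuinely controls for the passage of time, but nothing controls for reporting bias, so a treatment with zero real effect still appears to work.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200  # participants per arm (hypothetical)

    true_treatment_effect = 0.0  # assume the therapy does nothing at all
    natural_improvement = 2.0    # everyone drifts up a little over time
    reporting_bias = 3.0         # unblinded treated participants rate
                                 # themselves higher on the questionnaire

    control = natural_improvement + rng.normal(0, 2, n)
    treated = (natural_improvement + true_treatment_effect
               + reporting_bias + rng.normal(0, 2, n))

    # The no-treatment arm does control for the passage of time...
    print("apparent effect:", round(treated.mean() - control.mean(), 2))
    # ...yet the ~3-point "effect" is entirely reporting bias: the trial
    # has a control but is not adequately controlled for this outcome.
    ```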
     
    Last edited: Dec 24, 2020
  8. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,394
    Location:
    London, UK
    I think you have it right. Maybe it should be written above the door at NICE head office (if it has one).
    One could quibble about 'due to the intervention' since we want to exclude effects due to an intervention that are not specifically due to some class of mechanism attributable to the intervention and not general or some such mouthful. But nobody needs to make things that complicated.
     
    Barry, Peter Trewhitt, Ravn and 2 others like this.
  9. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    Thanks :). Yes, I was trying to pin things down a bit by saying "due to the intervention alone", but I appreciate that still does not cover the point you make.

    How about:

    For a clinical trial to be described as "controlled", there must be adequate suitable controls in place to allow any effects of interest to the trial, due to the intervention alone (beneficial or not), to be clearly distinguishable from any effects not due to the intervention alone.
    Really intended as a starting point if others wish to improve on it.
     
  10. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,394
    Location:
    London, UK
    Not bad!
     
    Peter Trewhitt, Ravn and Barry like this.
  11. Peter Trewhitt

    Peter Trewhitt Senior Member (Voting Rights)

    Messages:
    4,215
    I suppose when talking about an adequate ‘control’ it is important to ensure the right things are being controlled for.

    You could argue that a ‘no treatment control’ or a ‘treatment as usual control’ is ‘controlling for the passage of time’, demonstrating that any change in the treatment group is not simply due to the passage of time. However, given that in the trials being talked about it is impossible to blind ‘no treatment’, potential bias becomes a very important issue in relation to any effect of the simple passage of time.

    There is the option of including a period of no treatment for those who eventually receive what is being evaluated: for example, half of the treatment arm could receive the intervention straight away and half after a six-month delay, so that people act as their own controls in relation to the passage of time. This has the advantage that the matching of controls is more complete; you ensure that any change observed is less likely to be due to ‘extraneous’ factors such as age, gender or level of education (see the sketch below). A big problem with the heterogeneous groups being dealt with is deciding what is a relevant starting point for examining the passage of time: is it referral to the research project, the initial trial assessment date, the point of the original diagnosis, or the actual onset of the condition?

    An issue in this discussion is that we have very limited data on ‘the natural history of ME’: does spontaneous improvement occur, does it occur differently at different points in the course of the condition, and does it occur differently for different people with the same condition? This is particularly relevant to the arguments of the likes of Turner-Stokes and Wade in their recent BMJ editorial, as they suggest that ‘rehabilitation’ must work for Long Covid because some people are ‘recovering’, despite it being generally believed that with post-viral conditions any spontaneous recovery is most likely in the early stages; they make no effort to distinguish between spontaneous recovery and treatment effects.
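
    As a rough sketch of that delayed-treatment idea (invented numbers again, assuming for simplicity a steady per-person drift over time and a purely hypothetical 3-point treatment effect): each participant's no-treatment waiting period supplies their own baseline trend, which subtracts out exactly, however heterogeneous the group.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 100  # participants in the delayed-treatment arm (hypothetical)

    # Invented per-person characteristics that a between-group design
    # would need to match on; here they cancel out automatically.
    personal_drift = rng.normal(1.0, 0.5, n)  # change per 6 months untreated
    treatment_effect = 3.0                    # assumed additional change

    # Six months waiting (no treatment), then six months of treatment:
    change_waiting = personal_drift + rng.normal(0, 1, n)
    change_treated = personal_drift + treatment_effect + rng.normal(0, 1, n)

    # Each person serves as their own control for the passage of time:
    print("within-person estimate:",
          round((change_treated - change_waiting).mean(), 2))
    # About 3.0: personal drift (age, severity, etc.) subtracts out, though
    # unblinded reporting bias would still be confounded with the estimate.
    ```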

    However, any psychological/behavioural intervention involves lots of other things, beyond the mere passage of time, that could impact on any result. So some sort of control involving interaction, but without what is seen as the active ingredient or key aspect of the behavioural or psychological intervention, is essential. It could simply be that talking to an interested person, getting out of the house for x minutes a week, or the beliefs of the clinician is the cause of any hypothetical improvement. So here a solution would be to invent a treatment without your presumed active component (sketched below); with PACE they could have, say, compared GET with a made-up ‘singing therapy’ or ‘chatting with nice people therapy’.

    Alternatively, there is the potential solution of comparing two or more interventions, so that you end up asking not ‘does your target intervention work?’ but rather ‘is it better than other currently used alternatives?’. In this last situation, to avoid bias, the subjects in the different treatment arms should have no idea which is the researchers’ targeted or preferred intervention. In the PACE example there was a comparison between three treatment alternatives: GET, CBT and adaptive pacing. Unfortunately the researchers did not seek to keep subjects in the dark about their preferred interventions but rather the reverse; you could say that they subjected participants to repeated propaganda in favour of their preferred interventions.
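
    As a sketch of that sham-comparator idea (again with entirely invented numbers): if the made-up therapy delivers the same attention and expectation as the real one, those shared components subtract out of the comparison, leaving an estimate of the presumed active ingredient alone.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 150  # participants per arm (hypothetical)

    # Invented components of self-rated improvement:
    attention_and_expectation = 3.0  # talking to an interested person, etc.
    active_ingredient = 1.0          # assumed specific effect under test

    sham = attention_and_expectation + rng.normal(0, 2, n)  # 'singing therapy'
    real = (attention_and_expectation + active_ingredient
            + rng.normal(0, 2, n))                          # target treatment

    # Both arms share the attention/expectation component, so the
    # comparison isolates the presumed active ingredient (~1 point):
    print("estimated active ingredient:", round(real.mean() - sham.mean(), 2))
    ```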

    You could argue that any psychological or behavioural intervention implemented by human agents with human subjects in real-life clinical settings is so horribly complex that it is impossible to completely control for all extraneous factors. In the history of psychology this is the origin of animal research, and the reason for running rats through mazes: the idea that you look at simpler situations to identify the basic building blocks of psychology, which, once identified, can then be related to more complex real-life human behaviour. Unfortunately, even after three quarters of a century of running rats through mazes, we are stuck with concluding that rats are terribly complex creatures.

    There are a number of possible responses to the suggestion that real-life clinical research is just too complicated. One is to give up on structured scientific investigation altogether. One of the most famous people to do this was Sigmund Freud. He began as what we would now describe as a neuropsychologist, and a brilliant one at that, who outlined limitations in modelling human behaviour that have not been fully answered even today. His solution, however, was to immerse himself in clinical practice and come up with a creative synthesis from his personal experience; but this is literature, not science.

    A variant on Freud’s path is what Turner-Stokes and Wade suggest in their BMJ editorial: evaluating complex interventions by ‘qualitative’ rather than ‘quantitative’ methods. This has the effect of elevating subjects’ subjective responses, as evaluated by such things as questionnaires or free-ranging interviews. In theory it should get at what the people involved see as important, though, as demonstrated by Turner-Stokes and Wade, it is very subject to potential bias at all levels, as when they argue that patient reports of harm from CBT should be ignored because they are merely qualitative, yet that qualitative evidence is essential when it may support their preferred hybrid intervention. It also fails to distinguish between what the researchers or the subjects believe to be important and what might actually work.

    When I was an undergraduate over forty years ago, the mainstream consensus was that, in dealing with the complexity of real-life psychological or behavioural interventions, the solution was not to throw the baby out with the bathwater by rejecting controlled experimentation altogether, but to seek out converging evidence. You conduct clinical trials that you control to the best of your current ability, eliminating bias as much as possible while seeking to identify each trial’s flaws and limitations, but you also seek converging evidence from different experimental designs and other sources.

    So, for example, with the PACE study, not only should the researchers have looked at the interventions they did, but they should also have explored the underlying rationale: demonstrating the presence of their hypothesised ‘deconditioning’ in the target population; establishing physiological measures that could indicate any relationship between the degree of deconditioning and the level of disability in the patient population; and showing that their preferred interventions changed these physiological markers in both the normal population and the patient group. That would have indicated a continuity between non-patients and patients on these dimensions, rather than what many non-BPS researchers now believe: that the physiology of people with ME in relation to physical activity involves different processes from those of the normal population, the differences being not just of quantity but also of quality.

    Unfortunately the BPS researchers that so frustrate us have combined the worst of both worlds: they fail on properly conducted experimentation and they fail on establishing convergent evidence.
     
