Can we influence trial and review methodology, e.g. open-label trials with subjective primary outcomes?

Of course the other group the information could be aimed at is trial participants.

Perhaps it's an area for one of the S4ME fact sheets? It might be a difficult balance to strike in communication terms—we don't want to be accused of organising a boycott—but it's worth considering.

Taking part in a trial is a lot of effort for someone with ME/CFS, and people deserve to be able to give properly informed consent. To my mind, that means knowing whether or not the trial is well enough designed to yield useable information. Most participants aren't experts on this; some of them may know almost nothing.

If we could find a way to set out some information about things to look for, the charities may even pick it up and push it.
Yes a good idea.

One thing we could do to move things forward, even if only a little, would be to utilise the factsheet project.

When reading the opening post & just glancing through the rest (sorry, I've not been able to read it properly, so apologies if I'm off base here), I was immediately reminded of what I was saying the other day on another thread (quoted below - I've greyed out the bits of my suggestion that aren't directly relevant & are possibly off topic). Perhaps these two ideas could be amalgamated or connected in some way.
I'm highlighting this here because I think it may be useful for what I'd like to suggest, if it hasn't been suggested already...

Which is that as part of our 'S4 factsheets' project we also do a factsheet with simple explanations of the issues/flaws in all the CBT/GET trials.

I say this because, while I think I now grasp most of the main issues with PACE etc. (switching outcomes, subjective outcomes combined with lack of blinding, etc.), I have found the indirectness argument the easiest to understand, AND, crucially, the one that people in general seem to understand most easily when I talk to them.

That may well be because my ability to explain things is crap
- understanding something isn't the same as being able to teach it. But I suspect that's not the only reason, because people who're in jobs where they ought to understand the finer points of trial design also don't seem to get it.

I think the argument that 'they studied all fruit, so the results can't be extrapolated to strawberries' is initially the easiest concept to get your head around, especially if you're still under the illusion that scientific studies are always accurate & reliable if they're peer reviewed & published.

But now I see that it's not the best argument at all! Because why wouldn't a study apply to strawberries, given that strawberries are also fruit? (Is this right, Jonathan?)


So I'm now hoping we can get some kind of succinct description of the major flaws in BPS studies - the reasons they actually show that these interventions don't help anybody - on a fact sheet.

I suspect that the indirectness argument is more cognitively & emotionally comfortable in general (at least for me & the people I talk to), because the idea that 'well, these studies exist & are OK, but they only show that they might help people who're merely fatigued, whereas we're different & they don't help us'
is much easier to take on board as a member of the general public than the idea that huge swathes of so-called scientific literature are so fundamentally flawed that, were it all to be examined, it would pull the rug out from under the majority of psychological science.


I think we especially need a factsheet on this because all other sources seem to be using the indirectness argument... it's the main one I see from the MEA etc.

Of course, all the relevant info is detailed & spread all over the forum, scattered across thousands of posts, plus @dave30th's Virology blogs, @Brian Hughes's website & book, Graham's amazing videos, etc. etc. But I find the spread-out nature of it all really difficult to utilise.

So I think it'd be amazing to have a one- or two-sided fact sheet with the main arguments on it & links to more detailed info. Something a new patient, interested journalist, doctor, budding psychologist etc. could read.
 
I have applied for funding to develop a way of ensuring patients are involved in designing trials - specifically choosing the treatments to be trialled, the outcomes to measure, and the ways to measure the outcomes objectively. I will find out in May whether I've got the funding. I am proposing to use ME as a case study. I have pasted the summary of the proposal below.

Trialblazers: putting patients in the driving seat of clinical trials

Patients and health professionals need reliable evidence to make decisions. Randomized trials provide this because they have a standard design for reducing bias. However, studies of trials often reveal bad design, such as focussing on unimportant outcomes, or being too demanding for enough patients to take part.

Patient and public involvement can enhance research relevance and help build public trust. However, many trials are designed without involvement at the planning stage which can lead to research waste. Patient involvement often happens in academic or clinical settings which can be intimidating. Involvement often also requires commitment and energy which can be a burden for patients who are less likely to be able to bear it.

Trialblazers aims to increase the number of trials planned with patient involvement by creating less time-consuming and demanding ways to contribute. It also aims to enable larger numbers of patients to contribute than is possible with traditional involvement methods. We would also like to understand and address objections to the assumption that involvement in planning a trial means you can't be a participant in it.

Our objective is to create a way a patient “crowd” can set the agenda for a trial with the “must-have” and desirable characteristics which would encourage them to participate in it themselves.
To develop the Trialblazers method, working with a patient advisory panel, we will summarise information from reports of patient involvement in trials, and identify areas where involvement has worked well, or where it could be improved. We will also conduct interviews with patients and researchers who have worked together on planning trials.

From previous work, we think areas which could be improved by timely patient involvement are choice of outcomes and outcome measures, which treatments to test, logistics, personnel and support, reducing participant burden, recruitment, and writing about the trial for the public. We will also explore additional aspects of trials where patients are not traditionally consulted, such as setting the smallest important effect size.
We will create tools, resources, and activities to support patient involvement in these aspects of trial planning. We will also explore how citizen science and crowd-sourcing methods could be used to involve larger numbers of patients.

To test the feasibility of the Trialblazers idea, we will recruit a group of patients to plan a trial of potential drug treatments for ME/CFS. This is in response to the UK Government’s recent delivery plan for ME/CFS calling for more patient involvement in research, and for research to address the James Lind Alliance’s Priority Setting Partnership Top 10 priorities. A patient-endorsed plan will be of direct practical use to funders in their ME research commissioning strategy, and to potential triallists applying for research funding in this area.

Future funding will allow us to explore how a generalisable citizen science platform could be developed to allow the crowd-sourcing of trial plans in other areas.

We hope that funders and regulators such as the NIHR and the HRA could endorse the Trialblazers method of trial planning as standard. For example, the NIHR could make the results of a Trialblazers planning exercise available to applicants for commissioned calls, and the HRA could require the completion of a Trialblazers planning exercise before ethics approval is given.

I can see this being sensible if applied well, but am I wrong, or doesn't the whole concept depend on what kind of patients you're recruiting and willing to recruit? How does one ensure that patients are indeed educated on these matters and don't just follow whatever nonsense is currently trending on Twitter, or whatever someone once told them? A lot of patients on S4ME seem quite well educated on these matters, but what if you just end up getting patients who think the current processes surrounding the "Lightning Process" are the culmination of methodological rigour? If someone wants to test the "Lightning Process" in a ridiculously bad study setup, wouldn't they just choose a group of patients who repeat their gospel and ignore any concerns about how a study should be conducted?

I know there's been a lot of talk in the US about trialling certain drugs for Long-Covid and certainly what was trialled without patient input wasn't very sensible at all, but I haven't seen good ideas by patient groups either that would go beyond things currently trending on social media.
 
I can see this being sensible if applied well, but am I wrong, or doesn't the whole concept depend on what kind of patients you're recruiting and willing to recruit? How does one ensure that patients are indeed educated on these matters and don't just follow whatever nonsense is currently trending on Twitter, or whatever someone once told them? A lot of patients on S4ME seem quite well educated on these matters, but what if you just end up getting patients who think the current processes surrounding the "Lightning Process" are the culmination of methodological rigour? If someone wants to test the "Lightning Process" in a ridiculously bad study setup, wouldn't they just choose a group of patients who repeat their gospel and ignore any concerns about how a study should be conducted?

I know there's been a lot of talk in the US about trialling certain drugs for Long-Covid and certainly what was trialled without patient input wasn't very sensible at all, but I haven't seen good ideas by patient groups either that would go beyond things currently trending on social media.
Yes, absolutely. I was planning to stick to whatever trial questions were set by the James Lind Alliance Priority Setting Partnership. The methodological rigour would be provided by scientists without any conflict of interest who understand that relying on subjectively reported outcomes in unblindable trials renders those trials useless.
 
I'm not sure there is a sensible target (you could potentially address funders and ethics committees)
Upstream, @Sasha asked about whether there are guidelines for research ethics committees. There are. I think most countries have national health research guidelines. They present a big opportunity to move general practice in the right direction. We should be finding the guidelines for our country, becoming familiar with them, and referring to them when we are challenging anyone involved in bad research (researchers, funders, ethics committees).

We (or at least our patient charities) should be watching out for opportunities to provide feedback on the guidelines - most will periodically have consultation processes. e.g. does your national guideline document adequately reflect the recent changes in the Helsinki Declaration?

Both of those things can make a difference.

Of course the other group the information could be aimed at is trial participants.

Perhaps it's an area for one of the S4ME fact sheets? It might be a difficult balance to strike in communication terms—we don't want to be accused of organising a boycott—but it's worth considering.
I really like this idea.
 
We (or at least our patient charities) should be watching out for opportunities to provide feedback on the guidelines - most will periodically have consultation processes. e.g. does your national guideline document adequately reflect the recent changes in the Helsinki Declaration?

That's a good thought. 'Clinical trial breaches national research standards' is a much bigger news story than something about Cochrane, which only one in ten thousand people will have heard of.

Solid guidelines should also make it much easier to influence ethics committees at the stage where bad trials are trying to get passed.

In fact, attempting to influence the consultation process could be a great story in itself: 'Patients fight to get basic standards for clinical trials'.

@Jonathan Edwards, do you have any thoughts on this? Do you know when a UK guidelines review might be coming up?
 
Perhaps it's an area for one of the S4ME fact sheets? It might be a difficult balance to strike in communication terms—we don't want to be accused of organising a boycott—but it's worth considering.

I like this idea too. It might seem a bit technical but, as several members have said, once you get your head around the reasons they aren't too difficult to follow. I think it would be good to have on display a simple explanation of why so many trials are uninterpretable, written for the benefit of those who might be exploited by them but plain for everyone else to see.

The BPS people cannot argue against this stuff without making themselves appear even more stupid (e.g. "we only know how to do bad trials of this so you have to allow us to pretend they are good trials"). Maybe even people like Sonya and Charles will come to see that patients are not 'being negative'. They are keeping their eye on the ball.
 
@Jonathan Edwards, do you have any thoughts on this? Do you know when a UK guidelines review might be coming up?

Do you mean guidelines for trial practice?
I am doubtful that any attempt to target protocols and guidelines will have any impact. Protocols and guidelines are inherently flawed, incomplete and bendable. "Because it says so" is never a good argument. I think it is much better just to lay out the arguments for as many people as possible to see. I don't need to read a declaration of Helsinki to know what I think is ethical. If what I think is ethical is different from a declaration then either the declaration is wrong or someone needs to explain some more subtle arguments to me.
 
Do you mean guidelines for trial practice?

Yes, such as, 'Don't use subjective measures as primary outcomes in open-label trials'. That could have saved us a whole lot of trouble. :whistle:

Jonathan Edwards said:
I am doubtful that any attempt to target protocols and guidelines will have any impact.

Maybe not, but if there's a review coming up, it's a chance to try.

Jonathan Edwards said:
Protocols and guidelines are inherently flawed, incomplete and bendable.

Only to an extent, surely? Can't you think of maybe ten rules that, if applied to every trial, would raise medical research up out of all recognition? I'm thinking, 'Don't do open-label trials with subjective primary measures. Always publish your results according to your original analysis protocol, even if you provide additional analyses. Always publish all your planned analyses together in a single paper... etc.' (I'm basically taking PACE as my template for how to breach ethics.)

"Because it says so" is never a good argument.[/quote]
Jonathan Edwards said:
I think it is much better just to lay out the arguments for as many people as possible to see. I don't need to read a declaration of Helsinki to know what I think is ethical.

You don't need to read a declaration because you're an ethical person, but the huge mess that is the clinical trial literature is full of unethical rubbish that could have been stopped in its tracks by ethics boards actually applying ethical principles - and apparently they're not capable of doing it without a checklist.

A checklist may not be perfect but isn't it better than the alternative? (Like democracy.)
 
You don't need to read a declaration because you're an ethical person, but the huge mess that is the clinical trial literature is full of unethical rubbish that could have been stopped in its tracks by ethics boards actually applying ethical principles - and apparently they're not capable of doing it without a checklist.

The basic problem is that if ethics committees are not applying ethical principles, it is either because they do not understand, or because they, or their chairman, do not want to understand because it is politically inconvenient. In that situation it will make no difference how many checklists you have. The committee will either not understand how to apply them or not want to.

The issue of subjective outcomes in unblinded trials is something taught to all junior doctors. Just like they get taught that the thyroid gland is in the neck. Nobody needs it in a checklist. And the problem is that under the right circumstances unblinded trials with subjective outcomes can be entirely satisfactory - because the issue relates to the way human nature affects medical interactions and there are neat ways to offset it. You could do a dose response study in which there was no identifiable expectation for a best dose. So no bias. So if you get a consistent peak at one dose you can believe it.
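That dose-response logic can be sketched in a quick simulation (hypothetical numbers, not from any real trial): if expectation bias adds the same boost to subjective scores in every arm where participants know they are on the drug, it can fake a benefit in a two-arm open-label trial, but it cannot move the peak of a dose-response curve.

```python
import random

random.seed(0)

def subjective_score(true_effect, expectation_bias, n=200):
    # Mean self-reported improvement for one arm:
    # true effect + expectation bias + individual noise.
    return sum(true_effect + expectation_bias + random.gauss(0, 5)
               for _ in range(n)) / n

# Two-arm open-label trial: the drug has NO true effect, but participants
# who know they got the active treatment report feeling better anyway.
placebo = subjective_score(true_effect=0, expectation_bias=0)
active = subjective_score(true_effect=0, expectation_bias=4)
print(f"two-arm apparent 'effect': {active - placebo:+.1f}")

# Dose-response design: everyone knows they are on the drug, but nobody
# has an expectation about which dose is best, so the bias is identical
# in every arm. The true effect peaks at dose 2; bias cannot move that.
true_effects = {1: 1.0, 2: 6.0, 3: 2.0, 4: 0.5}
means = {d: subjective_score(e, expectation_bias=4)
         for d, e in true_effects.items()}
best = max(means, key=means.get)
print(f"dose with highest reported score: {best}")
```

The two-arm comparison reports a positive "effect" that is entirely expectation bias, while the dose-response peak still lands on the genuinely best dose, which is the neat offsetting of bias described above.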

Every trial has to be taken on its own merits from first principles.

The biggest problem with ethics committees is that chairmen are deliberately chosen by the local physician community as people who will not rock the boat and cause difficulties. If the community is psychiatrists, that means sticking to the status quo.

Remember the old adage.
You can always tell a Bart's man.
But you can never tell him anything.

I have spent my life pointing out bad practice. I have ended up about a million pounds poorer in terms of salary and pension increments for being a good chap. That is the reality.
 
I don't need to read a declaration of Helsinki to know what I think is ethical. If what I think is ethical is different from a declaration then either the declaration is wrong or someone needs to explain some more subtle arguments to me.

Though if researchers commit to abiding by the declaration of Helsinki, but don’t, this provides support in arguing the research is unethical and leverage for those arguing for better quality research.
 
Though if researchers commit to abiding by the declaration of Helsinki, but don’t, this provides support in arguing the research is unethical and leverage for those arguing for better quality research.

It might do. But I have now given up offering to review grant applications, unless I think they are really important ME/CFS ones, because I cannot bear the tedium and meaninglessness of wading through pages of stuff about conforming to Helsinki, PPI, diversity, and trying to answer why these have been satisfied on an online form that usually rubs itself out every half hour. The bureaucratic work I have to do, on top of the hoops that the applicants have to go through, has made the whole process impossible for me. If applicants simply presented their proposal and addressed the important ethical issues, as they used to, it would be fine.

If it gets any worse I will stop doing ME/CFS grants too!
 
Do you mean guidelines for trial practice?
I am doubtful that any attempt to target protocols and guidelines will have any impact. Protocols and guidelines are inherently flawed, incomplete and bendable. "Because it says so" is never a good argument. I think it is much better just to lay out the arguments for as many people as possible to see. I don't need to read a declaration of Helsinki to know what I think is ethical. If what I think is ethical is different from a declaration then either the declaration is wrong or someone needs to explain some more subtle arguments to me.
Ethics committees assess research against national guidelines. They do actually check the guidelines at times. In my country, there are processes such as ethics committees reporting on which standards research applicants are failing, and on when approved research runs into problems. I know guidelines can be improved - yes, they are 'flawed, incomplete and bendable' - but they do affect what ethics committees do and they do change practice over time. The guidelines remind people of issues that they might not have thought of - there is a lot to know.

An example is insurance to compensate study participants if they are harmed - there are details about national jurisdiction. If the insurance policy is in the US, then any patient harmed in a local arm of an international trial might have to fight for compensation in the US legal system, and that might be almost impossible if you are living outside the US and are not well resourced.

Perhaps if all ethics committees considering health research proposals were as ethical and smart and as informed as Jonathan, guidelines/standards would not be needed. But, they are not. And, even then, I think if Jonathan came to New Zealand to do a study, there might be new things to know - things like how research should be done in Māori and Pacific communities, differences in the importance of informing communities as well as participants, different legislation, different requirements for data management.

National health research guidelines and standards are a place where pressure can change, and has changed, practice. National guidelines and standards should represent the expectations of the society that made them. 'Because it says so' is actually OK sometimes, because it's shorthand for 'because we've thought and talked about it, this is what we as a society believe is reasonable for these reasons, and so you don't get to do research if you don't comply'. Trying to make health research guidelines and standards better, and making sure that they are applied, is one way to improve health research. I think it's one worth putting some effort into.


I read something somewhere about health research standards (and therefore ethics approvals) changing from being solely focussed on the safety of trial participants to being much more concerned about societal good: Is this research a good use of resources? Is it able to actually answer the question posed? Could an answer to the question make people's lives better? How do you know it could make people's lives better? We need more of that.



Maybe even people like Sonya and Charles will come to see that patients are not 'being negative'. They are keeping their eye on the ball.
Yes, I was thinking that, if we had a fact sheet for prospective trial participants, the patient charities might consider taking some action in relation to it. e.g. promoting the fact sheet with members, and checking and reporting on the points on it when they are helping to recruit for a study.
 
An example is insurance to compensate study participants if they are harmed - there are details about national jurisdiction. If the insurance policy is in the US, then any patient harmed in a local arm of an international trial might have to fight for compensation in the US legal system, and that might be almost impossible if you are living outside the US and are not well resourced.

I think if Jonathan came to New Zealand to do a study, there might be new things to know

Frankly, if I had had to think about all that when I was on an ethics committee for ten years I would have resigned overnight. This is the problem I was talking about. Being buried in issues that should be sorted by simple default legal rules.
 
they do not understand or because they, or their chairman, does not want to understand because it is politically inconvenient
This is my experience so far. They refuse to see that a badly designed study is just as unethical as a study which harms and/or inconveniences study participants during their participation, because the harm comes later, when the study results are manipulated and misused to justify poor practice in the future, which harms all patients ad infinitum. This, apparently, is not what we should be concerned about... It has been disappointing so far that we are constantly being told to look at neither the nitty gritty of study design (e.g. the suitability of outcomes and outcome measures) nor the bigger picture.
 
Only to an extent, surely? Can't you think of maybe ten rules that, if applied to every trial, would raise medical research up out of all recognition? I'm thinking, 'Don't do open-label trials with subjective primary measures
I can think of one rule that would make a big difference: Apply the same standards to research of therapist delivered interventions (particularly in psychological medicine) that are applied to other types of medical research (particularly pharmaceutical interventions).

The basic problem is that if ethics committees are not applying ethical principles it is either because they do not understand or because they, or their chairman, does not want to understand because it is politically inconvenient.
I’m reminded of the reply from Barry et al. to the JNNP Anomalies article, where they pointed out that Wessely was calling for the opposite of what he had been lobbying for in another context. There was also a paper that Peter White had written on some aspect of trial design which was completely disregarded in PACE. (Sorry, I can’t remember the details without looking them up, but I’m sure someone will.) My impression is that in many cases it’s not that people don’t know how to do things properly, it’s more that doing things properly is not expedient.
 
Sorry, I can’t remember the details without looking them up but I’m sure someone will.
This is one of the examples I was thinking of:

From X: “Simon Wessely co-authored article criticising NICE decision to prioritise long term outcomes in assessing treatments for ME/CFS. In response Barry et al cite statement by Wessely, Geralda et al which calls on NICE to prioritise longterm outcomes in assessing treatments.”

 
I can think of one rule that would make a big difference: Apply the same standards to research of therapist delivered interventions (particularly in psychological medicine) that are applied to other types of medical research (particularly pharmaceutical interventions).
Something I was thinking about the other day, and how relevant it is to the growing calls to make mental and physical health equal. Of course, no one who says that means it that way - that mental health care and research should follow the high standards of biomedical medicine - but of course it absolutely 100% should.

And although some would disagree, without much of a reason for it, mental health care and research are massively, ridiculously lower in quality, efficacy and reliability than (let's go with) biomedical medicine, and this is likely the main reason.

Especially since every single one of the breakthroughs in mental health has come out of biomedicine (or through social and technological progress) anyway, since the standards in psychosomatic care don't even allow it to produce anything competent. Lower standards breed mediocrity, and sure enough, from the current standards, we observe not much above base mediocrity. Entirely as expected.

But I expect this to be rejected with lots of reasons that stomp around the low-standards bush, because the goal isn't to implement parity between the two, but to overrule the higher standards using the far lower ones from mental health. It's plainly obvious that the real idea behind it is basically to erase most disability support, to make it more restrictive than ever, as a short-term cost-reduction policy. One that is massively more expensive long term, but sadly it doesn't seem that more than a handful of people in the industry understand this, or at the very least almost no one who does understand it dares say it plainly.
 