Can we influence trial and review methodology, eg open-label trials with subjective primary outcomes?

But we work together (which is good for funding: it's multidisciplinary!). And having other fields being critical of behavior research makes it easy to argue to your in-group "those people just don't get what we do" and ignore the criticism like before.
I don't know. The criticism is that this research is massively substandard, not even worth looking at. If no one used it, it wouldn't matter. So this is exactly what we want. The only way this discipline will clean up its act is if it can't sell its BS anymore. Change definitely won't come from within. So it has to come from the users of this research. Who need to become non-users, on the basis that it's so substandard it's not even worth looking at in its current form.

To emphasize a point I made in another comment, an attorney doesn't need to be a trial attorney to know that hearsay is inadmissible evidence. Trial attorneys should especially be aware of this, but this is a case where they know better but still make heavy use of it, all because no one seems to object to something they know is invalid.

All health care professionals understand that this type of trial is useless. There's something else going on here that makes something everyone understands to be invalid be exempt from rules they apply strictly in most cases.
 
There is already an open data movement but it would be unusually beneficial for us because of the low quality of trials & ideological capture in the ME/CFS field - psychobehavioural triallists would have to be much more honest in their analyses if they know their data would have to be made freely & openly available.
Given this, I think it's far more likely that they'd just give up, as they know their data don't stand up to scrutiny :laugh:. Which is a win-win-win-win. Win for everyone.
 
All health care professionals understand that this type of trial is useless. There's something else going on here that makes something everyone understands to be invalid be exempt from rules they apply strictly in most cases.
Is it a sort of bias, in that some of them think in the back of their minds that “mental health” and “functional/psychosomatic illness” aren’t as “real” as biomedical disorders and therefore don’t really need as serious a standard of evidence?

I often feel like we are seen as “wastes of time” / “a burden” / “unserious” by the medical community :/
 
I feel like it’s not rare to see people talk about the low standards of evidence in psychiatry and psychology. But people often accuse these people of “stigmatising mental health” so they don’t get much attention. (Kind of like how criticisms of FND are often silenced because people see it as an attack on the label they have co-opted).
And then there's the "most published research is garbage, but mine is great" people. People who generically agree with the idea, just never when it applies to them. That's most of them, basically. Most of the ideologues know to criticize identical research they disagree with on all those points. In fact, they routinely do. They just exempt work they agree with, even if it's identical on all the points they criticize.

It's definitely ironic that it's actually the people who are fine with substandards in mental health who do the most harm and stigmatize it the most.
 
That was the thinking behind this proposition I made: Ideas for a Declaration to raise standards in evidence-based medicine.

As long as those substandards are accepted as normal, there really is nothing we can do, because the substandards will always be used to justify themselves. What's most absurd is how circular the reasoning for using substandard evidence is:
  1. Pragmatic trials of psychological therapies 'show promise' (literally for decades) in subjective reports of benefits
  2. Which means that chronic fatigue is a psychological condition
  3. Which makes it OK to use (otherwise invalid) garbage-quality pragmatic trials not only as evidence, but to assert #2, even though by design this is not allowed (pragmatic trials do not support causal inference)
Even though #1 is fake. All of this is invalid, but since it has been decided based on the unreasoning above, it's considered OK. Basically it's OK because it's OK. Because they want it to be.
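The inflation in #1 can be illustrated with a toy simulation (a minimal sketch; the function name, parameter names, and all numbers here are hypothetical, chosen purely for illustration): in an open-label trial with a subjective primary outcome, a reporting bias in the unblinded treatment arm produces an apparent benefit even when the true effect is zero.

```python
import random

def simulate_open_label_trial(n_per_arm=200, true_effect=0.0,
                              reporting_bias=0.5, rng=random):
    """Toy model of a two-arm trial with a subjective outcome.

    Each participant's reported score is random noise, plus the true
    treatment effect, plus (in the unblinded treatment arm only) a
    reporting bias: extra improvement participants *report* without
    actually improving. Returns the apparent mean difference between arms.
    """
    control = [rng.gauss(0, 1) for _ in range(n_per_arm)]
    treatment = [rng.gauss(0, 1) + true_effect + reporting_bias
                 for _ in range(n_per_arm)]
    return sum(treatment) / n_per_arm - sum(control) / n_per_arm

random.seed(0)
# Zero true effect, but unblinded participants report feeling a bit better:
apparent = simulate_open_label_trial(true_effect=0.0, reporting_bias=0.5)
print(f"apparent benefit with no true effect: {apparent:.2f}")
```

The point matches #3 above: nothing in the trial's own data distinguishes `true_effect` from `reporting_bias`, which is why a positive result from this design cannot carry a causal claim.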

Until the bar is raised to minimal acceptable standards, the only thing that can unblock things is a research breakthrough, and even then it will only apply to a narrow slice of issues. But moving away from substandards would mean admitting almost all of clinical psychology's evidence is worthless. Which is accurate, but the medical profession is unable to let go of this "get that patient out of my face" button that opens a trapdoor through which we seem to disappear. Even though we don't actually disappear; we're just out of their sight, and out of their mind.

Even though, technically speaking, all of this evidence is invalid. It's just that it has been decided that pseudoevidence of pseudoscientific concepts is good enough for pseudoillnesses. Which is wrong. This is exactly why standards are important: if they can be sidestepped arbitrarily, they will be. They have been. They are. They will continue to be until minimal standards are actually put in place.
I think the issue is that healthcare works as a triage service, and everyone is trying to avoid the elephant in the room of whose ‘ownership’ we’ve been put under.

There are certain unwritten rules: if it’s not your patient/issue, and/or you aren’t wanting to take them on, then keep your head down and don’t get involved. For us no one is putting their hand up, partly, I think, because those who think they own us are waving threats about.

It’s why I think putting us under real scientists, or alongside biomedical clinicians, instead of this behaviourist BS is probably going to have to be the interim ‘viable ask’, and then having nurses, with advice from e.g. Caroline Kingdon, on ‘management’ etc.

The quid pro quo for ‘access’ on one of @MelbME's projects has been insightful in showing how research is being blocked by controlling access to subjects.
 
This is exceedingly common in business as well. If an EU directive makes your life difficult, it’s easier to just not think about it or argue that it’s irrelevant. And if you didn’t have a good mentor, you might not even be aware of the directive or you don’t know how to pay attention to the regulations.

All of this is to say that this kind of behaviour seems to be a part of «human nature».

Finance tries to solve it through external and internal audits, but that has its own set of challenges.

In the end, it all comes down to the people involved and what kind of mindset they have. If you want to change something, swapping out the people might be the most plausible solution. And that’s a difficult one.
It needs to be made a compulsory module for anyone who wants to head such a committee, so that people can feel able to report such attitudes.

Until someone has heard the position of their institution or professional body on the issue, they don’t want to be creating one by interpreting it themselves before they’ve ’had word’.
 
It needs to be made a compulsory module for anyone who wants to head such a committee, so that people can feel able to report such attitudes.
Who’s going to make it compulsory? That’s the key issue here. The people with the power to change things either don’t know about the issue, don’t care about it or don’t believe that it’s an issue at all.
 
Is it a sort of bias, in that some of them think in the back of their minds that “mental health” and “functional/psychosomatic illness” aren’t as “real” as biomedical disorders and therefore don’t really need as serious a standard of evidence?
Exactly this. It's also much more difficult to do good trials in this area. Also funders don't readily invest money in trials of non-biomedical treatments. But when they do, they want the answer which will save them money. This is why PACE was unbelievably bad science, and gave the answer the funders wanted, especially the DWP funder. Any challenge saying that "right answer" was perhaps not right after all has not been taken seriously.
 
But when they do, they want the answer which will save them money. This is why PACE was unbelievably bad science, and gave the answer the funders wanted, especially the DWP funder. Any challenge saying that "right answer" was perhaps not right after all has not been taken seriously.
We’re getting to the core of modern power structures here. People just want a report or a piece of paper that says what they want it to say. The truth or the facts don’t matter - or they believe it’s all relative and comes down to opinions anyway.

Most people go along with it because they are more preoccupied with their own lives, or they do the same thing so they don’t want to call it out.
 
We’re getting to the core of modern power structures here. People just want a report or a piece of paper that says what they want it to say. The truth or the facts don’t matter - or they believe it’s all relative and comes down to opinions anyway.

Most people go along with it because they are more preoccupied with their own lives, or they do the same thing so they don’t want to call it out.
I mean, everyone has interests, and the funders of research certainly do too! There are reasons we had decades of research claiming smoking was good for your health, etc.

The problem is that we have this social construct of science which is taught as if it is pure infallible truth, free from biases and purely objective. Then pretty much anything can be passed on under the guise of science. We saw how dangerous this could be with things like “scientific racism”.

The human construct of science doesn’t exist in a vacuum, it’s embedded in societal contexts and power structures, what gets advertised as science or not is often up to people in power, not some pure rationality.

We could argue about whether a conception of “pure” science exists, some sort of rational methodology completely apart from human interests and biases, but it’s pretty damn clear that the current social conception of science isn’t that.
 
Science exists on a gradient like everything else. There’s excellent science and terrible science.

So yes, the problem occurs when people believe that science has an inherent value or that it represents an inherent truth.

There’s a quote that I often think of: If you manipulate the data, the lie will sell itself.

What’s even better is to manipulate the process that creates the data. Or just screw the data and claim that it says what you want it to. That claim will itself become data, in the form of an abstract or conclusion in a paper.
 
I think the target of the kind of letter/petition that I'm proposing needs thinking about very carefully.

I'm not sure there is a sensible target (you could potentially address funders and ethics committees)

How about a short article targeted at a journal, laying out the issues and better methodologies (or a statement of the research needed to develop better methodologies, such as activity monitoring), and trying to get it published as a consensus statement with a number of leading researchers/clinicians as authors (I think that would be hard to achieve!). But it would be something that can raise the issue in a reputable place, point to ways to mitigate the problem, and be used to help justify research where necessary to support better methodology.
 
Of course the other group the information could be aimed at is trial participants.

Perhaps it's an area for one of the S4ME fact sheets? It might be a difficult balance to strike in communication terms—we don't want to be accused of organising a boycott—but it's worth considering.

Taking part in a trial is a lot of effort for someone with ME/CFS, and people deserve to be able to give properly informed consent. To my mind, that means knowing whether or not the trial is well enough designed to yield useable information. Most participants aren't experts on this; some of them may know almost nothing.

If we could find a way to set out some information about things to look for, the charities may even pick it up and push it.
 
I have applied for funding to develop a way of ensuring patients are involved in designing trials - specifically choosing the treatments to be trialled, the outcomes to measure, and the ways to measure the outcomes objectively. I will find out whether I've got the funding in May. I am proposing to use ME as a case study. I have pasted the summary of the proposal below.

Trialblazers: putting patients in the driving seat of clinical trials

Patients and health professionals need reliable evidence to make decisions. Randomized trials provide this because they have a standard design for reducing bias. However, studies of trials often reveal bad design, such as focussing on unimportant outcomes, or being too demanding for enough patients to take part.

Patient and public involvement can enhance research relevance and help build public trust. However, many trials are designed without involvement at the planning stage which can lead to research waste. Patient involvement often happens in academic or clinical settings which can be intimidating. Involvement often also requires commitment and energy which can be a burden for patients who are less likely to be able to bear it.

Trialblazers aims to increase the number of trials planned involving patients by creating less time-consuming and demanding ways to contribute. It also aims to enable larger numbers of patients to contribute than is possible with traditional involvement methods. We would also like to understand and address objections to the assumption that involvement in planning a trial means you can’t be a participant in it.

Our objective is to create a way a patient “crowd” can set the agenda for a trial with the “must-have” and desirable characteristics which would encourage them to participate in it themselves.
To develop the Trialblazers method, working with a patient advisory panel, we will summarise information from reports of patient involvement in trials, and identify areas where involvement has worked well, or where it could be improved. We will also conduct interviews with patients and researchers who have worked together on planning trials.

From previous work, we think areas which could be improved by timely patient involvement are choice of outcomes and outcome measures, which treatments to test, logistics, personnel and support, reducing participant burden, recruitment, and writing about the trial for the public. We will also explore additional aspects of trials where patients are not traditionally consulted, such as setting the smallest important effect size.
We will create tools, resources, and activities to support patient involvement in these aspects of trial planning. We will also explore how citizen science and crowd-sourcing methods could be used to involve larger numbers of patients.

To test the feasibility of the Trialblazers idea, we will recruit a group of patients to plan a trial of potential drug treatments for ME/CFS. This is in response to the UK Government’s recent delivery plan for ME/CFS calling for more patient involvement in research, and for research to address the James Lind Alliance’s Priority Setting Partnership Top 10 priorities. A patient-endorsed plan will be of direct practical use to funders in their ME research commissioning strategy, and to potential triallists applying for research funding in this area.

Future funding will allow us to explore how a generalisable citizen science platform could be developed to allow the crowd-sourcing of trial plans in other areas.

We hope that funders and regulators such as the NIHR and the HRA could endorse the Trialblazers method of trial planning as standard: for example, the NIHR could make the results of a Trialblazers planning exercise available to applicants for commissioned calls, and the HRA could require the completion of a Trialblazers planning exercise before ethics approval is given.
 