Article on replication crisis

They have, of course, been accused of being bullies, of harassment and of driving researchers away from psychology. Now where have we heard that before...

If they are harassing and driving away the kind of 'researchers' who feel threatened when you analyze their stuff properly, that should be fine not just by me, but by literally everyone interested in psychological research. Heck, this is what proper peer review should have done in the first place.
 

It does make you wonder. What are they all so worried about?
 
Awareness is good, but will they retest it (with honest testing, no biasing of the results, and no pressuring of patients to report improvement)? And with the original criteria instead of the fake revised ones. Even better if they use the CCC or ICC instead of Oxford.

I think that replicating a poorly designed and hugely expensive trial like PACE would be a bad use of resources.

I get the impression that a lot of the 'data thug' people are put off by the drama that surrounds PACE, and that they see it (and angry patients) as a potential liability to their own efforts. I think that they're wrong: PACE is a valuable case study partly because it illustrates how flawed research can cause real-world harm and undermine respect for the systems of academia. But there you go.
 
It does make you wonder. What are they all so worried about?
Brian Hughes' new book "Psychology in Crisis" deals extensively with the replication crisis in psychology and also with the PACE fiasco. I am still reading it (unfortunately I have not completed a whole book in over three years, though I used to read three or four a week), but Hughes suggests an awful lot of psychologists have an awful lot to worry about.
 
When I was an undergraduate psychology student, it was generally regarded that whole sections of psychology were not, in practice, real science, with social psychology being the main culprit. Later, as a trainee Speech & Language Therapist, I was very aware that clinical trials of therapeutic interventions were in real life very complex, such that many supposed 'experimental' designs were no better than attempts at describing best practice.

[Added: When later undertaking clinical research, we were very aware that most of what we were attempting did not produce unambiguous answers, but rather was an attempt at providing a pointer towards preliminary theories that hopefully could subsequently be more rigorously evaluated. This in turn would hopefully result in improved clinical practice, in a cyclical process of improvement. It seems to me that PACE decided to pretend that second step was totally unnecessary. Indeed, the PACE apologists appear happy to continuously shift their theories in an effort to protect their immutable clinical practice.]

Now forty years later, despite a whole industry of research, it seems that little progress has been made in either area.

[Added: Do read both the articles linked to above, if you have not already. They highlight how a series of systemic biases exist in psychology and clinical research, such that it is not that surprising that the likes of the BPS school of thought, which through systematic prejudice and bad science has done so much harm to people with ME, should have flourished.]
 
I think that in multiple sclerosis, treatment trials are carried out over 2 years because of the relapsing/remitting nature of the disease. It ought to be the same in psychology: no publication until follow-up at least a year after treatment has finished.
 
Hi, @obeat, welcome to the forum.
And I agree with your point about waiting 2 years to reach conclusions in clinical trials. I think this applies not only because ME, like MS, can fluctuate, but particularly in trials of psychological treatments, where unblinding leads to placebo effects and the treatments themselves tell patients to ignore symptoms and 'think themselves well'. PACE is a classic example of this: by 2 years there were no between-group differences.
 
I think that replicating a poorly designed and hugely expensive trial like PACE would be a bad use of resources.
Isn't that the point? It is research fraud, so attempting and then failing to replicate the results, which would show the original result is flawed, is exactly what the replication crisis is about.

That said, I would be very happy to simply have it retracted without a replication attempt (redirect the replication cost to real research), but those with the power to do so show no interest in that, and actually fight to prevent it. If we have to shame them by wasting money then that's unfortunate, but science and ME/CFS research will benefit when the truth is put into motion, no matter which method is used to get to it.

I get the impression that a lot of the 'data thug' people are put off by the drama that surrounds PACE, and that they see it (and angry patients) as a potential liability to their own efforts. I think that they're wrong: PACE is a valuable case study partly because it illustrates how flawed research can cause real-world harm and undermine respect for the systems of academia. But there you go.
If replicating only the popular stuff is the goal, then it's attempted confirmation bias. That means they are not fully acting in good faith.
 
Isn't that the point? It is research fraud, so attempting and then failing to replicate the results, which would show the original result is flawed, is exactly what the replication crisis is about.

I just mulled this over a bit in my head and... maybe we can actually approach this slightly differently and (hopefully) more efficiently.

The point with PACE specifically was that it was a trial designed in a way that could never answer any relevant question, right from the outset. I would argue that you do not even need scientific literacy to see that; basic literacy should be enough. Nobody knows what their 'specialist medical care' actually was. It is very difficult to ascertain what exactly their GET/CBT entailed (yeah sure, whoever undertook it will tell you what they think they did, but a problem with psychological interventions in general is that you can't exactly give someone 10mg of a racemic mixture twice a day). They dropped objective outcome measures, more or less redefined words like 'recovery', and so on. You probably do need scientific literacy to understand the problems with patient recruitment, samples and effect sizes, the statistical stuff etc., and I can see how the real world may make it a bit hard to construct ideal trials in medicine - e.g. people with severe ME can't go to the doctor, so it will be harder to run trials on them that get ethical approval - but you actually have to be a bit of a moron to read through the entire thing and genuinely come to the conclusions that the people working there reported.
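
To make the 'recovery' point concrete, here is a minimal sketch in Python. The scores are entirely made up; only the thresholds reflect what has been reported about PACE (entry required SF-36 physical function <= 65, recovery was originally defined as >= 85 and later revised to >= 60):

# Toy illustration (hypothetical scores, NOT PACE data) of how lowering a
# 'recovery' threshold changes the headline rate on the same numbers.
import random

random.seed(0)

# Pretend end-of-trial SF-36 physical function scores (0-100 scale),
# drawn from an arbitrary distribution purely for illustration.
scores = [min(100, max(0, int(random.gauss(58, 15)))) for _ in range(100)]

for threshold, label in [(85, "original protocol"), (60, "revised")]:
    recovered = sum(s >= threshold for s in scores)  # n = 100, so count = %
    print(f"{label}: {recovered}% 'recovered' at >= {threshold}")

# With an entry criterion of <= 65 on the same scale, a revised threshold
# of 60 means someone can score lower than at entry and still be counted
# as 'recovered'.

Same data, wildly different headline, and nobody's health actually changed.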

This does not even need to be replicated to be seen as a failure, because it is a structural garbage in / garbage out problem. Whatever random stuff comes out the other end once you run the same things through the same garbled mess is meaningless. We need peer reviewers at the very least, but better yet grant committees (or whoever greenlights this stuff), to be able to spot research that is not designed in a way that would add knowledge. If it really is 'impossible' due to real-world constraints to stop this shit entirely - like people who want easy doctorates by declaring random epidemics mass hysteria - we at least need a system to call it out as soon as it is spotted, more effectively than what we have now. I have explained those trials, which were the basis for many of us not getting help, to three different people who work in research, and all of them were baffled and responded with their polite way of saying 'wtf is this shit, no one will listen to this, I don't think I even know anyone who is stupid enough to pay much attention' - yet here we are, still talking about it almost a decade later.

There is this concept in real science that 'extraordinary claims require extraordinary evidence'. Many people who are into psychology seem to just accept things that are very extraordinary to me. Just take the entire concept of psychosomatic explanations: claiming that emotional stress can lead to weight gain, because it is observed that some people eat more when stressed (for whatever reason), sort of makes sense and is not, at this point, very extraordinary - we know how calories work. Claiming that a female human getting a nosebleed stems from her longing for a man, because it is observed that women sometimes get nosebleeds and also the observer is coked out of his mind at the time he connects the dots, does seem pretty extraordinary to me. Yet the field of psychology as a whole seems to gloss over this concept a bit more readily than it maybe should.
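
In Bayesian terms the maxim is just arithmetic on odds; a minimal sketch with made-up numbers:

# Posterior odds = prior odds x Bayes factor (likelihood ratio of the
# evidence). All figures below are invented purely for illustration.
def posterior_odds(prior_odds, bayes_factor):
    return prior_odds * bayes_factor

modest_evidence = 3.0       # evidence 3x likelier if the claim is true
ordinary_claim = 1.0        # prior odds 1:1, e.g. 'stress -> eating more'
extraordinary_claim = 1e-4  # prior odds 1:10,000, e.g. the nosebleed theory

print(posterior_odds(ordinary_claim, modest_evidence))       # 3.0: now plausible
print(posterior_odds(extraordinary_claim, modest_evidence))  # 0.0003: still absurd

The same modest evidence moves an ordinary claim a lot and an extraordinary one barely at all; that is all the maxim says.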
 
This does not even need to be replicated to be seen as a failure, because it is a structural garbage in / garbage out problem.
I agree, yet facts have not got PACE withdrawn. If we need to go nuclear then we should do so.
And I would submit that, in order for it to be properly replicated, the source of funding should be replicated as well :emoji_imp:

Claiming that a female human getting a nosebleed stems from her longing for a man because it is observed that women sometimes get nosebleeds and also the observer is coked out of his mind
It's an attempt to prevent the culture from fighting misogyny: to justify, rationalize, and keep the current treatment of women cemented in place.
 
Although I say this is about the replication crisis, the 'data thugs' don't actually look to replicate. They look at trial data and reveal flaws: so if, for example, someone did a series of coin tosses and claimed they got HTHTHTHTHTHT and so on, they'd point out that this looks wrong.
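
For anyone who wants the arithmetic behind the coin example, a minimal sketch (the sequence is the hypothetical one above, not real data):

# Fair coin tosses should contain runs of repeats, so a perfectly
# alternating sequence is a statistical red flag.
def num_runs(seq):
    """Count maximal runs of identical symbols, e.g. HHTH -> 3."""
    return 1 + sum(a != b for a, b in zip(seq, seq[1:]))

claimed = "HTHTHTHTHTHT"
n = len(claimed)
print("runs observed:", num_runs(claimed))            # 12, the maximum possible
print("runs expected for a fair coin:", (n + 1) / 2)  # 6.5 for n = 12
# Only 2 of the 2**12 = 4096 equally likely sequences alternate perfectly,
# so the chance of seeing this by luck is about 0.05%.
print("P(perfect alternation):", 2 / 2**n)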

As for replicating PACE, in effect PACE was the replication. It was a bigger version of all the other trials they'd done: it was conducted by researchers who believed in the interventions, delivered in their way, with everything set up to find the results they wanted. And it still didn't replicate the other trials' findings. Job done. No evidence the interventions are effective. Let's move on.
 
No evidence the interventions are effective. Let's move on.
But it's fraud that's on the books and that harms us, so how do we move on?
I'm not saying replication is the only way to get it retracted, but if it will work where other things have failed, that would be great.
If other things will work, such as a parliamentary hearing, a lawsuit, public shaming, or any other method, that would be great as well.
 
If the PACE trial was replicated, then I expect we would end up with similar results: CBT/GET leading to slight improvements in subjective self-report outcomes, but not in more objective outcomes (aside from people in GET who focus on walking having a slight improvement in walking speed). The problem with PACE was more that its design had a high risk of bias built in, which meant that these results did not show that CBT/GET led to real improvements in participants' levels of fatigue and physical functioning, and could just be down to bias.
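
To show what that inbuilt bias can do, here is a toy simulation (every number in it is hypothetical, chosen only to illustrate the pattern): the intervention has no true effect, but the unblinded 'therapy' arm reports feeling a bit better, so the subjective outcome shows a between-group difference while the objective one does not.

# Toy simulation of an unblinded trial with a null treatment effect
# plus a hypothetical reporting bias in the treatment arm.
import random

random.seed(1)
N = 300  # participants per arm (arbitrary)

def arm(reporting_bias):
    # True change is zero on average for everyone.
    true_change = [random.gauss(0, 10) for _ in range(N)]
    # Subjective outcome = true change + bias from knowing your allocation.
    subjective = [t + reporting_bias + random.gauss(0, 5) for t in true_change]
    # Objective outcome (e.g. a fitness test) tracks only the true change.
    objective = [t + random.gauss(0, 5) for t in true_change]
    return subjective, objective

def mean(xs):
    return sum(xs) / len(xs)

subj_tx, obj_tx = arm(reporting_bias=5)    # hypothetical bias, therapy arm
subj_ctl, obj_ctl = arm(reporting_bias=0)  # control arm

print(f"subjective between-group difference: {mean(subj_tx) - mean(subj_ctl):+.1f}")  # ~ +5
print(f"objective between-group difference:  {mean(obj_tx) - mean(obj_ctl):+.1f}")    # ~ 0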

The problems with PACE were more down to poor design and spun results, and replication wouldn't do much to help us understand the problems there.

Any attempt to conduct a 'new' PACE would need a quite different design, and given that even with the inbuilt biases of the PACE trial CBT/GET led only to small and transitory changes in how people report their symptoms, I don't think spending £8 million on a 'new PACE' would be seen as a wise use of limited resources.
 