An audit of 12 cases of long COVID following the lightning process intervention examining benefits and harms, 2025, Arroll et al

Yep. And if 7/12 had 0 hours per week and one patient reported 40 hours at worst - shouldn't a non-parametric test or log transform be used here, rather than a paired t-test?
I can't imagine that data would be normally distributed, which is required to use a t-test, but I'm not experienced enough to say for sure.

Hours of work/study at worst of COVID
Mean: 5.8
Median: 0
SD: 3.5
Range: 0-40 (with 7 zeros)

So four values are unknown:
[0, 0, 0, 0, 0, 0, 0, x, x, x, x, 40]

Trying different values, I can't get an SD below about 11.2. And trying to get it to pass the Shapiro-Wilk test for normality (p value greater than 0.05), I can't get a p value higher than about 0.002. I'm not testing in any systematic way, just picking numbers that seem plausible, but I still think it's unlikely that any set of values would satisfy either criterion.
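There's actually no need for trial and error here: with the mean fixed at the reported 5.8, the sum of the four unknown values is fixed too, and the SD is minimised when they are all equal. A minimal sketch (assuming the reported n = 12, mean, seven zeros, and single 40 are all correct):

```python
from math import sqrt

# Reported summary for "hours of work/study at worst of COVID":
# n = 12, mean = 5.8, seven zeros, maximum 40, four values unknown.
# With the mean fixed, the SD is minimised when the unknowns are equal.
n, mean = 12, 5.8
known = [0.0] * 7 + [40.0]
x = (n * mean - sum(known)) / 4          # each unknown value: 7.4
data = known + [x] * 4
variance = sum((v - mean) ** 2 for v in data) / (n - 1)  # sample variance
min_sd = sqrt(variance)
print(round(min_sd, 2))                  # 11.34
```

So the smallest SD consistent with the reported mean, zeros, and maximum is about 11.3, which matches the trial-and-error estimate above and is far above the reported 3.5.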
 
I can't imagine that data would be normally distributed, which is required to use a t-test
I'm not confident about this statement, since apparently with a large enough sample size normality isn't as important for a t-test. One website suggests that with about 20 or more observations the data don't necessarily have to be normal. And Wikipedia says:
The means of the two populations being compared should follow normal distributions. Under weak assumptions, this follows in large samples from the central limit theorem, even when the distribution of observations in each group is non-normal.[19]

So I can't be sure a t-test is definitely not appropriate. Anyways, I assume it would still be significant with a non-parametric test though since the difference is so large (median 0 to median 30).
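For what it's worth, even the crudest non-parametric paired test, the exact sign test (which throws away everything but the direction of each change), needs nothing beyond the stdlib. A sketch; the example differences at the end are hypothetical, since the raw paired values aren't published:

```python
from math import comb

def sign_test_p(diffs):
    """Exact two-sided sign test: are paired differences centred on zero?
    Zeros are dropped; under H0 each remaining sign is + with prob 1/2."""
    nonzero = [d for d in diffs if d != 0]
    n, k = len(nonzero), sum(d > 0 for d in nonzero)
    lo, hi = min(k, n - k), max(k, n - k)
    p = (sum(comb(n, i) for i in range(lo + 1))
         + sum(comb(n, i) for i in range(hi, n + 1))) / 2 ** n
    return min(p, 1.0)

# Hypothetical pattern: 11 of 12 participants report more hours at
# interview than at their worst, one reports no change.
print(sign_test_p([30, 25, 40, 16, 28, 35, 30, 30, 22, 30, 34, 0]))
# 0.0009765625
```

So if nearly all participants improved, as the medians of 0 and 30 suggest, even this low-powered test would come out highly significant.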
 
Maybe I'll add this to PubPeer tomorrow unless someone comes up with a reason these values make sense:

There appear to be errors in the reported statistics of this paper. For example:

**Hours per week at the time of the interview**
Mean: 32.3
Median: 30
SD: 1.0
Range: 16 - 50
Number of observations: 12

A standard deviation of 1 seems far too small to be possible given a range of 16-50 in only 12 observations.

Similarly:

**Hours of work/study at worst of COVID**
Mean: 5.8
Median: 0
SD: 3.5
Range: 0-40 (with 7 zeros)
Number of observations: 12

Here an SD of 3.5 seems too small.

Could the authors provide the raw values for these metrics?
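The first set of figures can be checked the same way as the second. If both endpoints of the 16-50 range are actually attained and the mean is 32.3, the sample SD is minimised when the other ten values are all equal. A sketch under those assumptions (the reported median of 30 is ignored here; honouring it could only raise the bound):

```python
from math import sqrt

# Reported summary for "hours per week at the time of the interview":
# n = 12, mean = 32.3, range 16-50, reported SD 1.0.
n, mean = 12, 32.3
endpoints = [16.0, 50.0]
x = (n * mean - sum(endpoints)) / (n - 2)   # each of the other ten: 32.16
data = endpoints + [x] * (n - 2)
min_sd = sqrt(sum((v - mean) ** 2 for v in data) / (n - 1))
print(round(min_sd, 2))                     # 7.26
```

So the smallest SD consistent with that mean and range is about 7.3, seven times the reported value of 1.0.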
 
Is it acceptable to call the participants “Patients” in this paper? They are not being treated by a health professional so they are not patients in my mind. They are not doing this training under their doctor’s supervision. Some were not actually diagnosed. They are customers being misrepresented as patients.
 
Is it acceptable to call the participants “Patients” in this paper? They are not being treated by a health professional so they are not patients in my mind. They are not doing this training under their doctor’s supervision. Some were not actually diagnosed. They are customers being misrepresented as patients.
Agreed.
 
Blog by Nina Steinkopf regarding a letter sent to the journal. She says that this is a study and not an audit.
https://melivet.com/2025/02/27/notification-of-error-in-lightning-process-article/

The blog begins:

Notification of error in Lightning Process article

In the February 2025 issue of the Journal of Family Medicine and Primary Care, the Case Series «An audit of 12 cases of long COVID following the lightning process intervention examining benefits and harms» was published.

A lot can be said about the article. My first reaction, though, was the use of the term «audit». On February 25th 2025 I sent an email to the editorial office:

Dear Dr. Raman Kumar,

Reference is made to the publication of the article Arroll, Bruce; Moir, Fiona; Jenkins, Eloise; Menkes, David Benjamin. An audit of 12 cases of long COVID following the lightning process intervention examining benefits and harms. Journal of Family Medicine and Primary Care 14(2):p 796-799, February 2025. | DOI: 10.4103/jfmpc.jfmpc_1049_24

According to the authors, they «aimed to conduct an independent, university-based audit on the first long COVID patients treated by the only full-time LP practitioner in New Zealand, …”.

The method is described as “a retrospective, cross-sectional audit”. Readers are informed that “Ethics approval is not required in New Zealand for audits of clinical practice.”

This gives the impression that the article is a report from an audit of a clinical practice.

I regret to inform you that this is not the case.

....
 
https://www.grow.co.nz/neurolinguistic-programming-a-brief-introduction/
The main author Bruce Arroll, provides CPD training on NLP.
Excerpt from above link
“NLP has a mixed reputation as there are very few quality randomised trials in the clinical literature. Professor Arroll, together with others, has undertaken two randomised trials on NLP processes. These include the rapid phobia cure for fear of heights (the largest study done to date) and the methodology was used for another phobia study by the Psychology Department at Oxford University. The other intervention is the “symptom shift” where you can reduce moderate stress (and pain) in a patient in a few minutes – it does come back but it shows the patient it is not part of them but something they can control.”
 
Yes, there is. I quoted it above.

I see, just getting back to this. The claim that this is an audit is preposterous. This is also what Crawley did: she exempted almost a dozen studies from ethical review on the grounds that they were service evaluation and not research. Some of them were; some clearly were not. She interviewed 100+ students and their families face-to-face for one study. You can't interview anyone for service evaluation, which is only about whether specific services were or were not provided. You can't make ANY general statements, and there are no "research questions." Period. "Clinical audit" in New Zealand is obviously the same thing.
 
Trial by Error said:
Although the paper is confusing in parts, it appears that 20 Long Covid patients had completed the LP with the practitioner, but only 12 of them “were contactable.”

@dave30th Yes, confusing, and maybe your interpretation is correct, but I understood it as saying that there was some unknown number who completed the LP, that the investigators chose the sample size "20" based on how many interviews the interviewer had time for (it's unclear whether the sample of 20 was chosen randomly or by some other method, which matters here), and that of those only 12 responded.

Though I guess it could be interpreted as 20 completed the LP, and "12" was all they had time for and could be contacted. But they don't differentiate at all between people they didn't have time for and people that didn't respond in the 8 who didn't follow-up, which makes me think it's the other scenario.

Paper said:
The sample size was determined by the number of participants the interviewer could do during her student vacation. [...] Of 20 patients completing the LP, 12 (60%) were contactable;

Edit: And a very minor formatting issue: the "P" isn't included in the link:
Crawley’s 2018 pediatric trial of the LP as an example of research misconduct.)
 
A couple of other fixups —

In response to the study, the ME Association has once again highlighted that the NICE guidelines warn against the LP.

In the introduction, the authors declare that the LP has “a developing evidence base for efficacy, particularly for CFS/ME.” It is worth noting that the reference for this claim is
 
Though I guess it could be interpreted as 20 completed the LP, and "12" was all they had time for and could be contacted. But they don't differentiate at all between people they didn't have time for and people that didn't respond in the 8 who didn't follow-up, which makes me think it's the other scenario.

It's very unclear. It's also unclear whether they were initially contacted by the LP instructor or the student. It could be what you suggest. I took it to mean there were 20 LC patients treated by this practitioner, but that could obviously be wrong.
 