An audit of 12 cases of long COVID following the lightning process intervention examining benefits and harms, 2025, Arroll et al

@forestglip you can also add: feeling of ‘guilt and blame’ when not getting better, ref. YP1 case study and last paragraph on page 14:
Later, however, her symptoms got worse and she started to think differently about the whole programme. Her experience was that the Lightning Process programme placed the full responsibility for recovery on her; if she didn’t do what she was taught at the seminar, it was her own fault that she didn’t experience any improvements.
 
The senior author wrote the following letter to the BMJ in 2019: What keeps corporate power hidden from doctors?
Exact same thing as "free speech warriors" who are always aggressively censoring everyone they disagree with.

Dude is out there throwing rocks from inside a glass house, lecturing people about the dangers of doing that. When he does it, it's always fine. It's only when people he disagrees with do it that it's a problem.
 
Huh. Because this reads to me more as an observational case series than an audit of clinical practice.
It can’t be an audit of clinical practice because LP isn’t a medical treatment delivered by medical practitioners. But they sure try to make it seem that way:
We aimed to conduct an independent, university-based audit on the first long COVID patients treated by the only full-time LP practitioner in New Zealand, considering both reported benefits and harms.
How could one frame an argument that says ‘this is a study, not an audit’? What’s the core of that argument?
 
They contradict themselves here:
Ethics approval is not required in New Zealand for audits of clinical practice. The study has been presented according to the Strobe statement for observational studies.[8]
Is it an audit or a study?
As a retrospective, cross-sectional audit, the data reported here are derived from telephone interviews using a questionnaire specifically developed for this study.
Discussion
The main finding of this study was that all 12 participants with symptoms of long COVID had been severely disabled and improved considerably after undergoing LP, with 11/12 participants reporting having returned to at least 85% of normal.
Our study also of twelve participants found no harms. In the Bristol trial, no harm was reported; the participants in our audit also reported no harm.
It is impossible to generalize our study findings to a broader group of patients because of the small sample size and restricted demographic variation.
This is the first study to report outcomes for patients with long covid with the lightning process.
Financial support and sponsorship
This study was funded by the University of Auckland Research Fund for Professor Bruce Arroll.
 
Anything that resembles formal English is way outside my comfort zone, but here’s an effort to make an argument about the audit/study situation:

«The authors claim that they describe an audit, and that it is therefore exempt from ethical approval. Yet they seem to have forgotten this convenient and salient point when they wrote their own paper. They repeatedly refer to the paper as describing a ‘study’, and claim that it ‘is the first study to report outcomes for patients with long covid with the lightning process’. According to their declaration, it was also funded as a study. If the authors themselves believe that they are writing about a study, it was funded as a study, and they use this study to make claims about the efficacy of an intervention, it follows that it is in fact a study, and that the authors should have sought ethical approval. The authors can’t have their cake and eat it too.»
 
What are the actual rules though? Is an audit a type of study?
 
What are the actual rules though? Is an audit a type of study?
No, at least not according to this:
WHAT IS CLINICAL AUDIT
“Clinical audit is a quality improvement process that seeks to improve patient care and outcomes through systematic review of care against explicit criteria…Where indicated, changes are implemented, and further monitoring is used to confirm improvement in healthcare delivery.”

Principles for Best Practice in Clinical Audit (2002, NICE/CHI)

https://www.uhbristol.nhs.uk/files/nhs-ubht/1 What is Clinical Audit v3.pdf
 
What are the actual rules though? Is an audit a type of study?

My understanding is that an audit is an evaluation of something you are already doing, whereas a research study is asking about something new.

One question might be whether the object of study is something you are already doing, or something you are doing only for the purpose of the study.

So, on this interpretation, if the LP for Long Covid was already happening independently of this study, then attempting to evaluate its outcomes is an audit rather than research or an experiment. But doing this, even ignoring the selection bias and dropout, means there are important research questions you cannot answer, the most significant here being that you cannot provide any control, so you have no idea whether what you have done is any different from doing nothing, or from doing any random activity with no LP content.

However, what we see in the write-up is a post hoc pretence that this audit can answer research questions, which it does not, and indeed cannot.
 
I have no New Zealand specific knowledge, and my own knowledge of ethics in academia is now quite dated and was never extensive to begin with. Nonetheless, a few brief thoughts on audits vs research - I'd be interested to know what others think:

This doesn't compare practice to established Long COVID guidelines or established benchmarks; the purpose is not quality improvement (well, on my brief skim I saw no mention of quality-improvement elements?). The question isn't how one clinic functions compared to how clinics should optimally function; the authors generalise somewhat from it, saying that "primary care clinicians can refer patients for treatment with a high chance of benefit without fear of harm". The purpose is surely that of collecting outcome data to generate new knowledge, not to improve an internal practice or compare it to a guideline, a benchmark, a standard, or ideal practice. And would this have been done anyway if there were no prospect of publication? (I suspect not, but audits are often done when there is no such prospect.)
 
What are the actual rules though? Is an audit a type of study?
Here are the standards that govern health research in NZ
https://neac.health.govt.nz/national-ethical-standards/part-two/18-quality-improvement
Under 'National Ethical Standards'
(they aren't great, but they are something)



18. Quality improvement
Introduction
Quality improvement (QI) is an umbrella term that refers to a range of activities. Quality improvement activities involve cycles of change that are linked to measurable assessment, with the goal of improving the experience, process, safety and efficiency of health care. For an activity to be considered quality improvement, it must not be conducted to generate evidence to support an intervention’s efficacy, but it can involve evaluating and changing practice (Provost and Murray 2011).

Even quality improvement can pose sufficient risks that ethical approval is needed.


Table 18.1 – Identifying risk factors in quality improvement
QI ethical risk factor
QI activities are generally low risk. Some factors that may increase ethical risk are when:
  • it poses additional risks to or burdens on a patient and/or their family or whānau beyond their routine care; for example, if a patient is required to spend additional time for data collection (e.g. Interview or focus group), provide samples not essential for care or attend extra clinic or home visits
  • the data to be collected is of a sensitive nature or application; for example, data that could be emotional for participants to share, or highly confidential (see chapter 13, ‘health data’)
  • secondary use of data/using data or analysis from QA or evaluation activities for another purpose
  • the data will be used or available in such a way that individuals may be identifiable
  • use of algorithms – see Chapter 13 Health Data and Emerging Technologies
  • it allocates interventions differently among groups of patients or staff (randomisation or the use of control groups or placebos)
  • comparison of cohorts
  • it is unlikely to provide direct benefits to patients[2]
  • it involves the use, storage or preservation of an individual’s body parts or bodily substances.[3]


When an activity tests a new, modified or previously untested intervention, service, process or programme on participants, and there is insufficient evidence to determine whether this untested aspect is safe or effective, the activity may be defined as research involving humans, and ethical Standards for research processes apply.

18.10 Increased ethical oversight and specific informed consent for the QI activity is required where there is a change in the standard of care for the purposes of piloting a new approach that does not have clear evidence of benefit in a similar population, or if the change is being made solely to improve efficiency or otherwise benefit the health care provider, with potential adverse effects for consumers.

Types of quality improvement activities
Quality Improvement activities should be determined using improvement science to ensure a strong evidence base. Tools for quality improvement include Shewhart Charts, driver diagrams, Quality Improvement Cycles (Plan, Do, Study, Act-PDSA), Clinical Audit, Evaluation and Programme evaluation studies, Experience Capture tools i.e. interviews and focus groups. Many of these tools are commonly used across both Research and Quality Improvement, which again illustrates the importance of ensuring good ethical practice when using these tools regardless of the context in which they are being applied.

Clinical audit involves investigating whether an activity meets explicit standards, as defined from national or international standards, policies, guidelines, or best practice reviews, for the purpose of checking and improving the activity audited. An audit generates knowledge for the situation in which it was undertaken, rather than generalisable knowledge. It should provide feedback primarily to the local setting or particular service involved, although it may also involve a wider dissemination by way of publication or presentation of its findings.


I think it is possible to argue that this study was research: it is intended to find out how well a treatment works, it is intended to be generalisable, and in fact the results are generalised. And even if it was quality improvement, it required ethical approval.
 
Here's the definition of research, Section 1 of the standards:
Broadly speaking, health and disability research should:
  • aim to answer a question or solve a problem and therefore generate new knowledge to prevent, identify and treat illness and disease
  • have the ultimate purpose of maintaining and improving people’s health – in the sense of a state of physical, mental and spiritual wellbeing, rather than simply the absence of disease or infirmity
  • support disabled people to be included, participate more, exercise choice and control, and be more independent
  • address health and disability disparities
  • contribute to whānau ora.
This description is necessarily broad; we acknowledge that people’s health is influenced by a much wider range of social factors than their health care.

Speaking more specifically, health and disability research is any social science; kaupapa Māori methodology; or biomedical, behavioural or epidemiological activity that involves systematically collecting or analysing data to generate new knowledge, in which a human being is exposed to manipulation, intervention, observation or other interaction with researchers either directly or by changing their environment, or that involves collecting, preparing or using biological material or medical or other data to generate new knowledge about health and disability.

The following activities are not defined as ‘research’ and are not covered by these Standards.

  • Public health investigations: these explore possible risks to public health, are often immediate or urgent and are often required by legislation. Examples are investigations into outbreaks or clusters of disease, analyses of vaccine safety and effectiveness, and contact tracing of communicable conditions[2].
  • Routine public health activities: these include the use of identifiable data to support delivery of health services, the development of live National Health Index (NHI)-linked data as clinically actionable alerts to responsible clinicians, and the regular investigation, assessment and monitoring of the health status of our resident populations.
  • Public health surveillance: this involves monitoring risks to health by methods that include systematically collecting, analysing and communicating information about disease rates.
  • Pharmacovigilance (post-marketing surveillance): this involves monitoring the adverse effects of pharmaceuticals after their introduction into the general population. Its methods include spontaneously reporting adverse events, and monitoring all adverse events for a restricted group of medicines (prescription event monitoring). Pharmacovigilance is distinguished from phase IV research, whereby sponsors or researchers conduct clinical research to assess or compare treatments (New Zealand Medicines and Medical Devices Safety Authority 2015).

Quality improvement and research in health care exist on a continuum of activities concerned with making changes and measuring their impacts with the aim of improving systems, processes and outcomes (Hirschhorn et al 2018). Research aims to develop new knowledge, while quality improvement aims to translate that knowledge into everyday practice through specific methods in a healthcare setting (The Health Foundation 2013).
 
One more point. From NG206:
They acknowledged that although some benefit was demonstrated and aspects of it, such as goal setting, practical examples and applications and peer support, were found to be helpful, the qualitative evidence on people's experiences of the therapy varied and raised some concerns. In the qualitative evidence, some people reported negative experiences to do with the confusing nature of the educational component, the intensity of the sessions, and the secrecy surrounding the therapy.
While in the SMILE trial children under 16 were accompanied by parents, the committee were particularly concerned about the reported secrecy of the Lightning Process in the qualitative evidence and the lack of public information on the implementation of the process in practice. The committee agreed the transparency of any intervention is important and noted that in the qualitative evidence it was reported that people had been specifically encouraged not to talk about the therapy. The committee agreed this was an inappropriate and unusual message to give, particularly to children and young people.
The committee discussed concerns that the Lightning Process encourages people with ME/CFS to ignore and 'push through' their symptoms and this could potentially cause harm. In the qualitative evidence, some participants reported they had received advice they could do what they wanted. The committee noted they had made clear recommendations on the principles of energy management and this advice appears at odds with these principles.
I think one could make a good argument on that basis that, even if this were definitively an audit and not research, the LP is something of a special case in terms of requiring ethical review and approval.
 
Something like ‘previous reports of harm from this intervention mean that ethical clearance is required’?

Are there examples of this reasoning for approved or declined ethics applications?
 