Guidance for best practice for clinical trials, World Health Organisation, 2024

Trish

Moved from News from Cochrane

Tuesday, October 8, 2024
Cochrane helps launch new WHO guidance on best practices for clinical trials

https://iris.who.int/bitstream/handle/10665/378782/9789240097711-eng.pdf?sequence=1

page 8
The Technical Advisory Group (TAG) for Development of Best Practices for Clinical Trials was constituted through a public call for nominations.
16 names including Karla Soares-Weiser, Cochrane

The document was drafted by 2 people, then there were multiple feedback rounds from different groups including the TAG.

page 12-13 (my bolding)
The remit includes:
• any design for a clinical trial: but with a focus on randomized clinical trials, including comparisons of two or more interventions, whether blinded or not, and whether parallel, cluster, crossover, factorial, adaptive platform, decentralized or other design;
• any health intervention: including (but not limited to) administration of pharmaceutical medicines, cells and other biological products, and vaccines; surgical or radiological procedures; diagnostics; use of medical devices, nutritional measures; cognitive, behavioural and psychological interventions; supportive or preventive care, including process-of-care changes; physical therapy interventions; digital and public health approaches; traditional or herbal measures; and screening processes. The interventions may be novel or pre-existing but being used in a different way (for example, repurposed or optimized) or to gain further knowledge about current practices;
• any purpose: including (but not limited to) evidence for guideline development; recommendations for clinical practice or public health strategies; and health technology assessments;
• any setting: any geographical, economic or societal context, and any context including clinical trials based in hospital, primary care or community settings; or where the intervention is delivered directly to a participant;
• any role: including researchers and clinicians, patient and public groups (including trial participants), regulators and other national health authorities, ethics committees and institutional review boards, research funders, and all trial sponsors (academic, government, nonprofit and commercial).

...
This document aims to complement other guidance in order to support implementation of universal ethical and scientific standards in the context of clinical trials, with a focus on under-represented populations; it does not represent a legal standard and does not supersede any existing guidance.

page 26 onwards
2. Key scientific and ethical considerations for clinical trials
2.1 Good clinical trials are designed to produce scientifically sound answers to relevant questions
2.1.1 Robust intervention allocation
about the importance of randomisation
2.1.2 Blinding/masking of allocated trial intervention (where feasible)
Why this is important. In many clinical trials, knowledge of the allocated intervention can influence the nature and intensity of clinical management, the reporting of symptoms or the assessment of functional status or clinical outcomes, introducing bias. Where feasible, masking (or blinding) participants, investigators, health care providers, and those assessing outcomes to the assigned intervention through use of placebo medications or dummy interventions can help to prevent such issues, as can the use of information that is recorded separately from the clinical trial (for instance, in routine clinical databases and disease registries). These considerations are important for the assessment of both the efficacy and the safety of the intervention, including processes relating to adjudication of outcomes and considerations of whether an individual health event is believed to have been caused by the intervention.
If blinding of an allocated trial intervention is not feasible (for example in trials of different types of patient management or surgical procedures), blinded or masked outcome assessment should be pursued for objectively determined outcomes, for example through use of a prospective randomized open-label blinded endpoint (PROBE) design (see also Section 2.1.9 ascertainment of outcomes).

2.1.3 Appropriate trial population
2.1.4 Adequate size
2.1.5 Adherence to allocated trial intervention
2.1.6 Completeness of follow-up
Discusses the importance of following up everyone to capture data on non-compliance and harms.

2.1.7 Relevant measures of outcomes, as simple as possible
Discusses value of standardised core outcomes to enable comparison across trials.
Outcomes may include physiological measures, symptom scores, participant-reported outcomes (PROMs) (66) (that is, measurement tools that patients use to provide information on aspects of their health status that are relevant to their quality of life, including symptoms, functionality, and physical, mental and social health), functional status, clinical events or use of health care services. The way in which these are assessed should be sufficiently robust and interpretable (for example, clinically validated in a relevant context, particularly for surrogate outcomes given their potential limitations (67)).

2.1.8 Proportionate, efficient and reliable capture of data

2.1.9 Ascertainment of outcomes
Key message. Processes for ascertaining study outcomes should adopt an approach that is not influenced by the intervention trial participants or randomized groups receive. These measures include the frequency and intensity of assessments. For RCTs, particular care should be taken to ensure that the people assessing, clarifying and adjudicating study outcomes are not influenced by knowledge of the allocated intervention (that is, the outcome assessment is blinded or masked). Equally, the methods for acquiring, processing and combining sources of information (in order, for example, to define participant characteristics or clinical outcomes) should be designed and operated without access to information about the intervention allocation for individual participants or knowledge of the unblinded trial results.
Why this is important. If the methods used to assess, clarify or classify outcomes differ between the assigned interventions, the results may be biased in one direction or other leading to inappropriate conclusions about the true effect of the intervention. Therefore, the approach used to assess what happens to participants should be the same regardless of the assigned intervention, and those making judgements about the occurrence or nature of these outcomes should be unaware of the assigned intervention (or features, such as symptoms or laboratory assays, that would make it easier to guess the assignment) for each participant.

2.1.10 Statistical analysis
Key messages. The trial should be designed to robustly answer a clearly articulated key question on which the primary analysis should focus. It is not good practice to seek to answer multiple questions through secondary analyses, which can often be misleading. Trial results should be analysed in accordance with the protocol and statistical analysis plan, with the latter being developed and clearly specified when the protocol is written, and finalized at the latest before the study results become known (that is, before conduct of any unblinded analyses on study outcomes). Any analyses conducted after the initial results are known should be clearly identified as such.
For RCTs, the main analyses should follow the intention-to-treat principle, ...
Why this is important. A statistical analysis plan should be specified before any knowledge of the trial results (for example, unblinding of the treatment allocation in a RCT) in order to avoid the possibility that choices about the analysis approach may be biased (8). ...
Discussion of use of secondary outcomes and subgroup analyses.
Although a sound statistical approach is critical in clinical research, it is equally important to focus on the clinical magnitude and relevance of any effect size rather than solely its statistical significance (75–78), as well as any new findings in the context of previous findings (for example, using the Grading of Recommendations Assessment, Development and Evaluation [GRADE] approach (79)).
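(Aside, not from the WHO document: a minimal sketch, with invented numbers, of what the intention-to-treat principle and "effect size with a confidence interval, not just a p-value" look like in practice. Assumes Python with numpy and scipy available.)

```python
# Rough sketch (invented data): intention-to-treat (ITT) analysis with an
# effect size and confidence interval reported alongside the p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n = 200
assigned = rng.integers(0, 2, n)      # 0 = control, 1 = intervention (randomized)
adhered = rng.random(n) > 0.2         # ~80% adherence; deliberately NOT used below
outcome = 50 + 3 * assigned + rng.normal(0, 10, n)

# ITT principle: analyse by the arm participants were randomized to,
# regardless of whether they actually adhered to it.
treat, ctrl = outcome[assigned == 1], outcome[assigned == 0]
diff = treat.mean() - ctrl.mean()

se = np.sqrt(treat.var(ddof=1) / len(treat) + ctrl.var(ddof=1) / len(ctrl))
pooled_sd = np.sqrt((treat.var(ddof=1) + ctrl.var(ddof=1)) / 2)
p = stats.ttest_ind(treat, ctrl).pvalue

print(f"ITT mean difference: {diff:.2f} (95% CI {diff - 1.96*se:.2f} to {diff + 1.96*se:.2f})")
print(f"Cohen's d: {diff / pooled_sd:.2f}, p = {p:.3f}")
```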

2.1.11 Assessing beneficial and harmful effects of the intervention
Key messages. Data generated during the course of conducting a clinical trial may reveal new information about the effects of the intervention which is sufficiently clear that it necessitates alteration of the ways in which the trial is conducted and participants are cared for or which is sufficiently compelling as to warrant a change in the use of the intervention both within and outside the trial. Potential harms of the intervention should be considered alongside potential benefits and in the wider clinical and health contexts.

page 33
2.1.12 Monitoring emerging information on benefits and harms

Key messages. An independent data monitoring committee provides a robust means to evaluate safety and efficacy data from an ongoing trial, including for RCTs unblinded comparisons of the frequency of particular events, without prematurely unblinding any others involved in the design, conduct or governance of the trial...
A data monitoring committee (DMC) should include members with relevant skills to understand and interpret the emerging safety and efficacy data, and where appropriate take into consideration patient and public perspectives. A DMC should review analyses of the emerging data, unblinded to any randomized intervention group so as to be able to make informed decisions given knowledge about the potential adverse effects of a specific treatment (which would not be possible if they were not unblinded). The DMC should advise the trial organizers when there is clear evidence to suggest a change in the protocol or procedures, including cessation of one or more aspects of the trial. Such changes may be due to evidence of benefit or harm or futility (where continuing the trial is unlikely to provide any meaningful new information). In making such recommendations, a DMC should take account of both the unblinded analyses of the trial results and information available from other sources (including publications from other trials).
Why this is important. All those involved in the design, conduct and oversight of an ongoing trial should remain unaware of the interim results until after the conclusion of the study so as not to introduce bias into the results (as in the case, for example, of stopping the trial early when the results happen by chance to look favourable or adverse). The requirement for, and timing and nature of, any interim analyses should be carefully considered so as not to risk premature decision-making based on limited data.
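(Aside, my own illustration rather than the document's: a small simulation of why unscheduled peeks at accumulating results are risky. Even with no true effect at all, repeatedly testing the data as it accrues and stopping at the first "significant" look pushes the false-positive rate well above the nominal 5%. Invented numbers, assumes Python with numpy and scipy.)

```python
# Simulation: a trial with NO true effect, analysed repeatedly as data accrue.
# Stopping at the first p < 0.05 inflates the false-positive rate above 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_per_arm, looks = 2000, 200, (50, 100, 150, 200)

false_pos_single, false_pos_peeking = 0, 0
for _ in range(n_trials):
    a = rng.normal(0, 1, n_per_arm)   # control arm, no true difference
    b = rng.normal(0, 1, n_per_arm)   # "intervention" arm, no true difference

    # One prespecified final analysis:
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_pos_single += 1

    # Unscheduled interim looks, stopping at the first "significant" result:
    if any(stats.ttest_ind(a[:k], b[:k]).pvalue < 0.05 for k in looks):
        false_pos_peeking += 1

print(f"False positives, single final analysis: {false_pos_single / n_trials:.1%}")
print(f"False positives, peeking at 4 looks:    {false_pos_peeking / n_trials:.1%}")
```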
page 34
2.2 Good clinical trials respect the rights and well-being of participants
2.2.1 Appropriate communication with participants
About timely and accessible information.
2.2.2 Relevant consent
Key messages. The trial consent process should clearly explain to potential trial participants (or, where applicable, their legal representatives) the reasons why the trial is being done, the questions it is seeking to answer, what is involved for them, and the potential benefits and risks of participation ...
Why this is important. Consent is valid if it is informed, voluntary and competently given before entry into a trial....
mainly about capacity to give consent and safety.
2.2.3 Changing consent
Key message. Participants should be free to stop or change the nature of their participation without affecting the usual care received. Where possible and acceptable to the participant, efforts should be made to determine the intended meaning of such individual decisions and to explain the potential impact of any such decisions.
Importance of clarifying with participant what they are withdrawing from, eg treatment, or consent for data to be used...
page 36
2.2.4 Implications of changing consent
Key message. The rights of an individual participant to change or withdraw consent for use of trial data should be balanced against scientific and ethical requirements.
Why this is important. Removing data can result in unreliable or inconclusive findings, with ethical and clinical safety consequences for both participants continuing in the trial and the care of future patients. (For example, important safety signals may be missed.)...

2.2.5 Managing the safety of individual participants in the clinical trial
Key messages. Detection and management of safety of trial participants should be tailored to the trial population and to what is already known about the intervention. Such approaches may be modified as new information emerges (for example, from other trials or clinical studies in the relevant population). In some circumstances it may be appropriate to exclude some groups of individuals from a trial if the likely risk to their health is excessive (compared with potential gain) and cannot be mitigated by reasonable clinical strategies. For some blinded trials, there may be occasions when knowledge of the allocated intervention for an individual participant could materially influence the immediate medical management of the participant. In such circumstances, it should be possible for the treatment allocation to be unblinded and disclosed to the relevant medical team without delay.

2.2.6 Communication of new information relevant to the intervention
Key message. During an ongoing trial, new information may become available (from within the trial or external sources) that materially changes what is known about the effects of the intervention for some or all participants. This information should be communicated to those involved in overseeing, conducting or participating in the clinical trial for whom it is relevant (for example, because it might affect their understanding of the intervention or because they are required to take some action). Such communications and reports should be informative, timely and actionable.
Why this is important. Excessive, irrelevant or uninformative reports (particularly of individual cases) distract attention from those that require action. It is often preferable to produce and circulate contextualized periodic updates that are focused on safety issues that matter. Such reports may also be provided to the data monitoring committee (for consideration in the context of the unblinded emerging trial data) and to regulatory bodies (for consideration of the implications for participants in other trials and for the wider group of patients and public). The distribution of reports should be in a format and timing that is commensurate with the action that is likely to be needed and the audience for which it is intended (for example, participants, clinicians and regulators).

up to page 36
 
Continued from above

page 37
2.3 Good clinical trials are collaborative and transparent
2.3.1 Working in partnership with people and communities
2.3.2 Collaboration among organizations
2.3.3 Transparency
Registration. Clinical trials should be registered from the outset on a publicly-available registry of clinical trials (for example, the WHO registry network (44)) in accordance with the WMA Declaration of Helsinki (12). Where trial registries allow, they also should be updated with trial outcomes in a timely manner, even if the trial was stopped prematurely or did not meet its objective(s).
Trial materials. Making other information about a trial (including its protocol and other documentation such as the statistical analysis plan) publicly available is strongly encouraged.

Trial reports. Once the trial is completed, reports should be made available in a timely manner on a publicly available clinical trial registry and/or in a peer-reviewed journal (typically within 12 months but sooner, for instance, as a preprint, in public health emergencies) and should comprehensively describe the study design, methods and results in a clear and transparent manner, regardless of the trial’s findings (82). Negative findings are as important to report as positive ones. Trials should be reported following established guidelines where possible (for example, the Consolidated Standards of Reporting Trials [CONSORT] guidelines for RCTs (83, 84)) preferably in open-access peer-reviewed publications in the context of other relevant evidence. It can be helpful for reports to be available in formats that enable both professional and lay readers to understand and interpret the results. Reporting results to participants and to the public requires different approaches from reporting results to the clinical and scientific community.

Trial funding. Sources of trial funding as well as declarations of any possible conflicts of interest by those involved in designing, conducting or reporting trials should be easily accessible.

Data sharing. This should be enabled at a suitable time if ethical, feasible and scientifically appropriate, with due consideration given to data protection and privacy. A data management and sharing plan should be developed in line with WHO data-sharing principles (85) of being effective, ethical and equitable, as articulated in the WHO policy on research data sharing.
Why this is important. Transparency and sharing of knowledge about health care interventions help to generate further knowledge, build and maintain trust and give confidence to both those involved in the trial and those who are not. Trial registration (86) can aid in the identification of gaps in clinical trials research, makes researchers and potential participants aware of recruiting trials (which may facilitate recruitment) and fosters more effective collaboration among researchers (including conducting prospective meta-analysis), and the process may lead to improvements in the quality of clinical trials. Timely communication of the trial results (regardless of what those findings are) is vital to guide future research, reduce unnecessary duplication of effort (which wastes resources) and enable care to be guided by an up-to-date evidence base. Good communication can also support wider efforts to foster potential collaborations and increase informed participation in clinical trials. Transparency of research communicated in a range of formats so as to make them widely accessible to patients, communities and the public is vital to foster public confidence about safety, quality and effectiveness of interventions and combat misinformation which is detrimental to public health.
2.4 Good clinical trials are designed to be feasible for their context
2.4.1 Setting and context
2.4.2 Use of existing resources
page 39
2.5 Good clinical trials manage quality effectively and efficiently
2.5.1 Good governance
Key message. Clinical trials should be subject to sufficient scrutiny to support completion of informative, ethical and efficient studies and to avoid, correct or mitigate problems.
Why this is important. Effective and efficient governance (for example, through a trial steering committee) helps to maintain the scientific and ethical integrity of a trial and to provide advice on appropriate courses of action...
The need for a member or a component of the governance structure to have independence from trial sponsorship and management should be determined by assessing the risk that judgement and advice could be materially influenced (or perceived to be influenced) by the relationship.
2.5.2 Protecting trial integrity
Key message. The integrity of the results of a clinical trial should be protected by ensuring that decisions about its design, delivery and analysis are not influenced by premature access to unblinded information about the emerging results. Interim analyses of unblinded data on study outcomes should not be performed unless prespecified in the protocol or statistical analysis plan or conducted by the data monitoring committee.
Why this is important. Unscheduled reviews of unblinded data on study outcomes provide an unreliable assessment of the overall benefit-to-risk profile of the trial interventions. Prejudgment based on overinterpretation of interim data can affect recruitment, delivery of interventions and follow-up, risking the ability of the trial to achieve its goals (87)
2.5.3 Planning for success and focusing on issues that matter
Key messages. Good quality should be prospectively built into the design and delivery of clinical trials, rather than relying on retrospectively trying to detect issues after they have occurred (when often they cannot be rectified). Such trials should be described in a well-articulated, concise and operationally-viable protocol that is tailored to be practicable given the available infrastructure in relevant settings.
Why this is important. Rather than trying to avoid all possible issues, the aim should be to identify the key issues that would have a meaningful impact on participants’ well-being and safety or on decision-making based on the trial results. Efforts should then be focused on minimizing, mitigating and monitoring those issues. Such an assessment should consider the context of the clinical trial and what is additional or special about it by comparison with routine care....

2.5.4 Monitoring, auditing and inspection of study quality
Key message. The nature and frequency of any trial monitoring, auditing and inspection activities should be proportionate to any identified risks to study quality and the importance to the trial of the data being collected.
Why this is important. Good trial monitoring, auditing and inspection activities identify issues that matter (important deviations from the protocol or unexpected issues that threaten to undermine the reliability of results or protection of participants’ rights and well-being) and provide an opportunity to further improve quality...
Rational monitoring takes a risk-based proportionate approach and focuses on the issues that will make a material difference to the participants in the trial and the reliability of the results (for example, trial recruitment, adherence to allocated intervention, blinding and completeness of follow-up). ...
Seems to be more concerned with excessive monitoring than with insufficient monitoring.

page 42
3. Guidance on strengthening the clinical trial ecosystem
Wider issues with equality, funding, oversight and accessibility at national and international level

3.1.1 Clinical research governance, funding and policy frameworks
about priorities and funding
3.1.2 Regulatory systems
3.1.3 Ethical oversight
page 51
3.2.1 Patient and community engagement
3.2.2 Collaboration, coordination and networking
3.2.3 Use of common systems and standards
3.2.4 Training and mentoring
3.2.5 Efficiency
3.2.6 Sustainability
3.2.7 Innovation
3.2.8 Transparency

4. Conclusion
Clinical trials can transform health care and quality of life worldwide. To fulfil their potential, they need to be reliably informative, ethical and efficient, and answer scientifically important questions relevant to the populations they are intended to benefit. This goal can be attained through identification of relevant research questions, risk-based and proportionate design, conduct, monitoring and audit of clinical trials, and strengthening of the global clinical trial ecosystem. These steps in turn require partnership with patients and their communities, equitable and sustained funding and global collaboration.
 
I've copied rather a lot, and not necessarily the 'best bits'. I intend to go back over it later to weed out a lot of words and home in on bits particularly relevant to clinical trials of psych and behavioural interventions for ME/CFS.

This was linked on the forum via a Cochrane article promoting the document. I therefore will be looking at it particularly in relation to the Cochrane Larun review of Exercise therapy for CFS, and the trials, particularly PACE that informed it.

At least that's the plan. If someone else wants to explore this too, all the better.
 
Reposted here:

If blinding of an allocated trial intervention is not feasible (for example in trials of different types of patient management or surgical procedures), blinded or masked outcome assessment should be pursued for objectively determined outcomes, for example through use of a prospective randomized open-label blinded endpoint (PROBE) design (see also Section 2.1.9 ascertainment of outcomes).

This suggests some confusion. If an endpoint is truly objective then there is no need to blind or mask. I have never heard of PROBE but it sounds dubious. Usually these acronyms are there to weasel through dodgy practices by making them sound respectable.

Recommending GRADE isn't good.

All the right words seem to be there, but the impression given is that as long as everyone follows a mindless recipe all will be fine (and you can cut a few corners if you really have to).
 
If blinding of an allocated trial intervention is not feasible (for example in trials of different types of patient management or surgical procedures), blinded or masked outcome assessment should be pursued for objectively determined outcomes, for example through use of a prospective randomized open-label blinded endpoint (PROBE) design (see also Section 2.1.9 ascertainment of outcomes).

That bit puzzled me too.

More about PROBE:

1992 Aug;1(2):113-9. doi: 10.3109/08037059209077502.
Prospective randomized open blinded end-point (PROBE) study. A novel design for intervention trials.
L Hansson, T Hedner, B Dahlöf

Abstract
A novel design for intervention studies is presented, the so called PROBE study (Prospective Randomized Open, Blinded End-point). This design is compared to the classical double-blind design. Among the advantages of the PROBE design are lower cost and greater similarity to standard clinical practice, which should make the results more easily applicable in routine medical care. Since end-points are evaluated by a blinded end-point committee it is obvious that there should be no difference between the two types of trials in this regard.
Since it would cost £48 to access the full article, I am none the wiser.

I then found this one:
  • Review
  • Published: 16 January 2009
Cardiovascular clinical trials in Japan and controversies regarding prospective randomized open-label blinded end-point design
Abstract
Recently, results of several cardiovascular clinical trials conducted in Japan were published. Most of them were designed as prospective randomized open-label blinded end-point (PROBE)-type trials, in which patients were randomly allocated to different regimens and both the patients and doctors are aware of the regimen being administered.

Although the PROBE design enables performing trials resembling real-world practices, entails low costs and renders patient recruitment easier, it presents several conditions that have to be satisfied to acquire accurate results, due to its open-label nature.

Principally, the so-called hard end points, which are judged by objective criteria, should be used as primary end points in order to prevent biases. In this article, a general description of various designs of clinical studies is provided, followed by a description of the PROBE design, and the precautions to be taken while conducting PROBE-designed trials by comparing trials conducted in Japan and the West.

So we have, according to the WHO document:

blinded or masked outcome assessment should be pursued for objectively determined outcomes


and according to the Japanese review:

Principally, the so-called hard end points, which are judged by objective criteria, should be used as primary end points in order to prevent biases.

I think there's a confusion over what is meant by objective. Does blinding of the outcome assessors somehow render questionnaire scores objective, or do they mean the actual outcomes have to be objective? In which case there should be no need for the outcome assessors to be blinded.

Edit to add:
It seems to me that the originators of PROBE in 1992 thought blinding the outcome assessors rendered the outcomes just as good as those of double blind trials, regardless of whether they were subjective or objective outcomes, which is clearly nonsense.

By 2009 the Japanese group realised PROBE style studies required objective outcomes to be valid, but didn't go one step further and say if the outcomes are objective, you don't need to blind the assessors.
 
By 2009 the Japanese group realised PROBE style studies required objective outcomes to be valid, but didn't go one step further and say if the outcomes are objective, you don't need to blind the assessors.

So the new WHO policy is just turning the handle on half-understood concepts of method, with no application of common sense. Par for the course.
 
If blinding of an allocated trial intervention is not feasible (for example in trials of different types of patient management or surgical procedures), blinded or masked outcome assessment should be pursued for objectively determined outcomes
That's not what objective means. Doing this does not make an outcome objective; this is ridiculous. It can slightly lower the bias involved, but even then it only can do that, it does not mean that it will, and only slightly.

Just the same way, let's imagine measuring the length of something. Let's say we don't have a standard measuring device and instead use a sample of guesstimators whose guesses are assessed by a blinded assessor. This does not make the guesstimates objective. Everyone involved in making the quoted statement above understands this. And yet they put out this ridiculous notion that they can do that with questionnaires that have layers and layers of biases and uncertainty baked into the process.
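A toy simulation of that point (my own, invented numbers, assumes Python with numpy): if participants in an unblinded trial inflate their self-ratings simply because they know what they received, a scrupulously blinded assessor who only scores the questionnaires cannot take that bias back out.

```python
# Toy simulation: participant-reported outcomes in an UNBLINDED trial.
# Participants know their allocation; the outcome assessor does not.
# Blinded scoring of the questionnaires does not remove the reporting bias.
import numpy as np

rng = np.random.default_rng(1)
n = 500
true_benefit = 0.0          # the intervention genuinely does nothing
expectation_bias = 0.5      # how much treated participants inflate self-ratings

control = rng.normal(0, 1, n)
treated = rng.normal(true_benefit, 1, n) + expectation_bias  # bias is baked in

# "Blinded assessor": scores the questionnaires without knowing allocation.
# That step is just faithful transcription - the bias is already in the data.
def blinded_score(responses):
    return responses.copy()

estimated_effect = blinded_score(treated).mean() - blinded_score(control).mean()
print(f"True effect: {true_benefit}, estimated effect: {estimated_effect:.2f}")
# The estimate sits near 0.5, not 0: blinding the assessor cannot make a
# subjective, participant-reported outcome behave like an objective one.
```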
These measures include the frequency and intensity of assessments. For RCTs, particular care should be taken to ensure that the people assessing, clarifying and adjudicating study outcomes are not influenced by knowledge of the allocated intervention
And in behavioral and psychological trials the literal aim of the interventions is to influence the participants' responses. So this completely negates any and all attempts at mitigating the influence of bias, because it consists of targeted bias. It's like building a submarine that can reach the Mariana trench, but has a single porthole with a mosquito net. It doesn't matter if the porthole is small.

There is also the usual problem where they talk of randomized clinical trials but use the acronym RCT, where the C is typically meant to stand for controlled. This only confuses the issue further; the ambiguity is everywhere. And they do this in what is supposed to be a clarifying document. Mercy.
The trial should be designed to robustly answer a clearly articulated key question on which the primary analysis should focus. It is not good practice to seek to answer multiple questions through secondary analyses, which can often be misleading.
As we know, and they know, the entire body of evidence-based medicine in clinical psychology abuses this. It's made of nothing but that, because the primary objectives always fail, and they always fall back onto secondary assessments. This is something that Cochrane abuses the hell out of. The entire body of evidence for psychosomatic medicine depends wholly on this, so this is common practice. They can write it down in black and white all they want, they exempt the contexts in which it's needed the most. Fake rules made from fake standards leading to fake results.
Although a sound statistical approach is critical in clinical research, it is equally important to focus on the clinical magnitude and relevance of any effect size rather than solely its statistical significance
Another very problematic idea, since many trials don't even reach statistical significance and the interventions are still recommended; this is common in psychological interventions, basically the norm. They can write it down all they want, they cannot not know that this is standard practice and that they won't be doing anything to change that.
Potential harms of the intervention should be considered alongside potential benefits and in the wider clinical and health contexts.
It's laughable to have this while organizations like Cochrane simply insist on ignoring exactly that. For years all we've heard from the trialists is that they don't see any harm in their trials and clinics and that's final, any data outside of this context is invalid. Write it down all you want, those are fake rules because they are exempted in exactly the cases where it's needed the most.

None of this matters if the rules are not enforced as intended. But this invalidates all psychosomatic ideology and most evidence-based medicine so all they do is write it down and pretend like they're following any of this. This entire way of doing things is invalid, completely unreliable. It's like children play-acting being at work.

Nothing will change in the future, and none of the past errors will be recontextualized. This is a completely empty set of fantasy rules. Like a non-binding code of ethics.
 
A novel design for intervention studies is presented, the so called PROBE study (Prospective Randomized Open, Blinded End-point). This design is compared to the classical double-blind design. Among the advantages of the PROBE design are lower cost and greater similarity to standard clinical practice, which should make the results more easily applicable in routine medical care
I can't find any other way to read this but "the standard in clinical practice is very low, so let's have the same standard in research". Of course it lowers costs. And of course it makes results closer to real-world, where a scientific approach is not possible. This is exactly what you don't want. This makes as much sense as arguing to build semiconductors in standard rooms because they will not be used in clean rooms. Good grief they are abandoning all sense and reason.

One of the most basic standards of scientific experiments is "all other things being equal": you make sure that one and only one thing, the thing you are testing, influences the outcome. But here they basically prefer the alternative standard of "you know what? let's just not bother with that and go with what we prefer the outcomes to be".

The death of expertise and reason rolled into one. It's not even novel either. It's literally abandoning the only standard that has ever reliably worked: the scientific method. They are literally advising to just not bother with any of that. Just wing it.
 
I wonder what Cochrane's stance on this is. "Just putting it out there to look good, we don't intend and never meant to follow any of this"? It's for others? Even though basically no one will actually respect this, neither in spirit nor word? What is it good for, then? Just virtue signaling?

Would it be worth writing to Cochrane and Bastian about all the points they are explicitly ignoring with the exercise review? I know it wouldn't make any difference, but it would expose the hypocrisy and futility of this organization and their pretend standards. Just to put it out there in writing.
 
I think there's a confusion over what is meant by objective. Does blinding of the outcome assessors somehow render questionnaire scores objective, or do they mean the actual outcomes have to be objective? In which case there should be no need for the outcome assessors to be blinded.

By 2009 the Japanese group realised PROBE style studies required objective outcomes to be valid, but didn't go one step further and say if the outcomes are objective, you don't need to blind the assessors.
The main thing with blinding, of course, is that if you have 50-50 controls versus treated there is no incentive to bump up scores (even if done 'unconsciously'), because the thing that matters is the treatment-group increase minus the control-group increase, so bumping up a score is just as likely to be 'subtracted' from your outcome as added to it if you don't know who is who.
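A quick sketch of that cancellation (my own, invented numbers, assumes Python with numpy): a rater who bumps scores up but doesn't know who is who bumps both arms alike, so the bump drops out of the treatment-minus-control difference; a rater who knows the allocation and bumps only the treated arm inflates it.

```python
# Sketch of the argument above: non-differential rater bias cancels out of the
# between-group difference; allocation-aware bias does not. Invented numbers.
import numpy as np

rng = np.random.default_rng(7)
n = 1000
true_effect = 1.0
control = rng.normal(0, 2, n)
treated = rng.normal(true_effect, 2, n)

bias = 0.8  # how much an enthusiastic rater "bumps up" a score

# Blinded rater: cannot tell the arms apart, so any bumping hits both arms alike.
blinded_diff = (treated + bias).mean() - (control + bias).mean()

# Unblinded rater: bumps only the participants known to be in the treated arm.
unblinded_diff = (treated + bias).mean() - control.mean()

print(f"True effect:                   {true_effect:.2f}")
print(f"Estimate with blinded rater:   {blinded_diff:.2f}")   # bias cancels
print(f"Estimate with unblinded rater: {unblinded_diff:.2f}") # bias adds on
```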

I'm not quite sure how you can claim this level of blindness, however, when there are likely to be other 'hallmarks', like a surgical scar, or, for those who've done robotic CBT, certain programmed phrases ('oh, I shouldn't whinge, phrase positively') that make it really blinking obvious. And of course the biggie is how incestuous some areas are and how dominant the hierarchy is - so how independent can people really be if:

- they are in the same clinic
- they are in the same hospital
- there is overlap of other staff
- they are at conferences or other small meetings which might be regular with other people from the same niche
- they all think the same way/have the same bias (hence the objective measures issue for certain areas because some seem to confuse 'saying the right thing' with 'being better')
- and then you add in the above plus issues with perceived threats for the patient (these people will be reporting back and writing letters that will affect my employers and GP and access to things)... I think it would take being in a visibly different place, with it being made clear there is no way things will be fed back to those delivering not just the trial but potentially their 'care', to reassure 100% (does that exist? given that risk = impact vs likelihood, you can maybe try to reassure on likelihood, but that impact bit... without big change to culture, and not just laws/rules but being able to make sure they are implemented...)
 
I've copied rather a lot, and not necessarily the 'best bits'. I intend to go back over it later to weed out a lot of words and home in on bits particularly relevant to clinical trials of psych and behavioural interventions for ME/CFS.

Crikey @Trish what a herculean task! :laugh: you're definitely not pacing. I downloaded it from the Cochrane thread, saw it was yet another single-line-spaced 70-odd page document to plough through and thought - hell no! Safety, safety, safety. The WHO have been highlighting this since 2004.
 
Please feel free to move this. I'm having a Grok day.

Grok, what are some of the worst mistakes made in clinical trials that reduce the reliability and rigor of results? [Not in any order.]

"The list is on point, but daunting all together. Experienced academics keep these in mind when reviewing grant applications and research for publication, as well as when designing studies. But it's a reminder, and good to know for those trying to understand how to assess the validity of results..."

-Inadequate sample size
-Poor randomization
-Lack of blinding (single, double, or triple)
-Selection bias in participant recruitment
-Failure to define clear inclusion/exclusion criteria
-Insufficient statistical power
-Improper control group selection
-Non-standardized intervention protocols
-Inconsistent data collection methods
-Missing or incomplete data
-Failure to account for dropouts or withdrawals
-Inadequate follow-up duration
-Overreliance on surrogate endpoints
-Cherry-picking outcome measures
-Post-hoc changes to study design or hypotheses
-Lack of pre-registration of the trial
-Publication bias (not reporting negative results)
-Data dredging or p-hacking
-Inappropriate statistical analysis
-Confounding variables not controlled
-Lack of replication or validation studies
-Poorly defined primary endpoints
-Inadequate training of study personnel
-Non-compliance with ethical standards
-Falsification or fabrication of data
-Insufficient monitoring or oversight
-Failure to adjust for multiple comparisons
-Overgeneralization of results
-Ignoring adverse events or side effects
-Inadequate reporting of methodology
-Conflict of interest not disclosed
-Use of unreliable or unvalidated measurement tools
-Improper handling of outliers
-Lack of transparency in data sharing
-Misinterpretation of correlation as causation
-Inconsistent dosing or treatment schedules
-Failure to account for placebo effects
-Inadequate baseline assessments
-Ignoring patient heterogeneity
-Premature termination of the trial"
 
This is one exam on which no one should want a passing grade.

Our psychobehavioral overlords easily get top grades. May in fact often get perfect marks.

And we're wrong for pointing it out. Ugh. I hate being early on the right side of history so much. It just sucks.
 
-Failure to account for placebo effects

Failure to account for heterogeneity in different confounders being lumped together under the placebo banner.

Plus some repetition. But otherwise a useful list.
 