Researcher allegiance in research on psychosocial interventions: meta- research study protocol and pilot study - Yoder et al (2019)

Sly Saint

Senior Member (Voting Rights)
from Introduction:
RA is defined as 'the belief in superiority of an intervention and of the superior validity of the theory of change that is associated with the treatment'1 (p. 55).

Abstract
Introduction: One potential source of bias in randomised clinical trials of psychological interventions is researcher allegiance (RA). The operationalisation of RA differs strongly across studies, and there is not a generally accepted method of operationalising or measuring it. Furthermore, it remains unclear as to how RA affects the outcomes of trials and if it results in better outcomes for a preferred intervention. The aim of this project is to develop and validate a scale that accurately identifies RA, contribute to the understanding of the impact that RA has in a research setting and to make recommendations for addressing RA in practice.
Methods and analysis: A scale will first be developed and validated to measure RA in psychotherapy trials. The scale will be validated by surveying authors of psychotherapy trials to assess their opinions, beliefs and preferences of psychotherapy interventions. Furthermore, the scale will be validated for use outside the field of psychotherapy. The validated checklist will then be used to examine two potential mechanisms of how RA may affect outcomes of interventions: publication bias (by assessing grants) and risk of bias (RoB). Finally, recommendations will be developed, and a feasibility study will be conducted at a national mental health agency in The Netherlands. Main analyses comprise inter-rater reliability of checklist items, correlations to examine the relationship between checklist items and author survey (convergent validity) as well as checklist items and trial outcomes, and multivariate meta-regression techniques to assess potential mechanisms of how allegiance affects trial outcomes (publication bias and RoB).
This study will contribute to the development of clinical trial guidelines and enhance the field of psychotherapy research and practice. Once a valid method exists to measure RA, future research should be devoted to further studying different mechanisms of bias (ie, quality of delivered therapy and control conditions in clinical trials) and the associated relationship with RA.
Interesting, a nice twist perhaps? But will anyone use it? Cochrane seem to be going in the opposite direction.

full paper here
https://bmjopen.bmj.com/content/bmjopen/9/2/e024622.full.pdf
 
Would it go like this?

Meta-researcher to researcher: 'We are conducting a study to see how biased you are about your treatments. Would you be happy to give consent to be involved?'

Researcher: 'Oh, sure, ask away.'

Meta-researcher: 'Are you biased in any way towards your treatments by believing in them?'

Researcher: 'Oh, no way, I am completely neutral during trials.'

Meta-researcher: 'Thanks'
 
Furthermore, it remains unclear as to how RA affects the outcomes of trials

I would personally posit there are a number of trials where it is very clear that RA is the main determinant of the reported outcomes, through experimental design relying on subjective outcomes in unblinded trials, deliberate biasing of participants, inappropriate subject selection, outcome switching and non-reporting of null results on some measures.
 
another weird retweet from Michael Sharpe (does he ever read these things he keeps retweeting?)

5 Reasons It’s So Hard To Think Like A Scientist
Even expert researchers suffer from the human foibles that undermine scientific thinking. Their critical faculties are contaminated by their agenda, by their ultimate motives for doing their experiments. This is why the open science revolution occurring in psychology is so important: when researchers make their methods and hypotheses transparent, and they pre-register their studies, it makes it less likely that they will be diverted, even corrupted by, confirmation bias (seeking out evidence to support their existing beliefs).
Take the example of systematic reviews in psychotherapy research: a recent analysis found that the conclusions of many are spun in a way that supports the researchers' own biases. Other times, the whole scientific publishing community, from journal editors down to science journalists, seems to switch off its critical faculties because it happens to agree with the message to emerge from a piece of research.

https://digest.bps.org.uk/2017/06/20/5-reasons-its-so-hard-to-think-like-a-scientist/
 
Meta-researcher: 'Are you biased in any way towards your treatments by believing in them?'
Researcher: 'Oh, no way, I always wear my Equipoise Socks™ at work.'

and non-reporting of null results on some measures,
Or late reporting, or misreporting/misinterpreting.

another weird retweet from Michael Sharpe (does he ever read these things he keeps retweeting?)
He is just trying to be seen as one of the woke warriors for science.

'See, I can't possibly be trying to cover anything up coz I promote criticism of my own profession.'
 