Variability in the analysis of a single neuroimaging dataset by many teams, 2019, Botvinik-Nezer et al.

Discussion in 'Research methodology news and research' started by InitialConditions, Nov 16, 2019.

  1. InitialConditions

    InitialConditions Senior Member (Voting Rights)

    Messages:
    1,669
    Location:
    North-West England
    I have just seen this new paper on Twitter. A timely reminder that imaging and the subsequent analysis are not an exact science.

    https://www.biorxiv.org/content/10.1101/843193v1

    Abstract
    Data analysis workflows in many scientific domains have become increasingly complex and flexible. To assess the impact of this flexibility on functional magnetic resonance imaging (fMRI) results, the same dataset was independently analyzed by 70 teams, testing nine ex-ante hypotheses. The flexibility of analytic approaches is exemplified by the fact that no two teams chose identical workflows to analyze the data. This flexibility resulted in sizeable variation in hypothesis test results, even for teams whose statistical maps were highly correlated at intermediate stages of their analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Importantly, meta-analytic approaches that aggregated information across teams yielded significant consensus in activated regions across teams. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset. Our findings show that analytic flexibility can have substantial effects on scientific conclusions, and demonstrate factors related to variability in fMRI. The results emphasize the importance of validating and sharing complex analysis workflows, and demonstrate the need for multiple analyses of the same data. Potential approaches to mitigate issues related to analytical variability are discussed.
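    For anyone wanting a concrete feel for the "meta-analytic approaches that aggregated information across teams", here is a minimal sketch using Stouffer's z-score combination on simulated team z-maps. Only the team count comes from the paper; the maps and the effect are made up for illustration, and the paper's own consensus analysis is considerably more involved.

    ```python
    # Minimal sketch of team-level consensus via Stouffer's method, assuming
    # each "team" contributes a voxel-wise z-map as a NumPy array of the same
    # shape. Illustration only, not the paper's actual meta-analysis pipeline.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_teams, n_voxels = 70, 1000

    # Simulated z-maps: a weak shared effect in the first 100 voxels plus
    # team-specific noise, standing in for pipeline-to-pipeline variability.
    true_effect = np.zeros(n_voxels)
    true_effect[:100] = 0.5
    team_z_maps = true_effect + rng.normal(size=(n_teams, n_voxels))

    # Stouffer's method: sum the z-scores, divide by sqrt(number of teams).
    consensus_z = team_z_maps.sum(axis=0) / np.sqrt(n_teams)
    consensus_p = stats.norm.sf(consensus_z)  # one-sided p-values

    # The shared effect is hard to detect in any single team's map, but
    # stands out in the pooled (consensus) map.
    print("fraction of team-level tests significant:",
          round((stats.norm.sf(team_z_maps) < 0.05).mean(), 3))
    print("consensus-significant voxels:", int((consensus_p < 0.05).sum()))
    ```

    Stouffer's method is just one standard way of pooling z-statistics; the point is that a weak but consistent signal can survive aggregation even when individual pipelines disagree about where it is "significant".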
     
    Hutan, paolo, alktipping and 12 others like this.
  2. Cheshire

    Cheshire Moderator Staff Member

    Messages:
    4,675
    Merged thread - article about the paper

    https://dana.org/article/neuroimaging-many-analysts-differing-results/
     
    Last edited by a moderator: Aug 5, 2020
  3. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,374
    Location:
    Aotearoa New Zealand
    I think this is an important paper for helping people understand the very large potential for bias in fMRI studies.

    As well as this one, I've referred to these:

    A Machine Learning Approach to the Differentiation of Functional Magnetic Resonance Imaging Data of Chronic Fatigue Syndrome (CFS) From a Sedentary Control
    A study of people with CFS and healthy controls, similar in size to the one proposed here, found 29 regions of interest on Day 1 and 28 regions on Day 2 after a physical and cognitive challenge. However, only 10 of those regions were common to both days, demonstrating how important it is to manage activity prior to fMRI (a quick sketch quantifying that overlap follows after these references).

    Time of day is associated with paradoxical reductions in global signal fluctuation and functional connectivity
    The fMRI signal also varies over the course of the day; it has been suggested that time-of-day variation could account for some of the between-study variation in results and for failed replications.
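    To put those overlap numbers in context, here is a quick sketch of how one might quantify day-to-day agreement between two sets of regions of interest. The ROI labels below are placeholders chosen only to reproduce the quoted counts (29, 28 and 10 shared), not the regions from the study.

    ```python
    # Hypothetical ROI labels arranged to give 29 Day-1 regions, 28 Day-2
    # regions and an intersection of 10, matching the counts quoted above.
    day1 = {f"roi_{i}" for i in range(29)}       # 29 Day-1 ROIs
    day2 = {f"roi_{i}" for i in range(19, 47)}   # 28 Day-2 ROIs, 10 shared

    shared = day1 & day2
    dice = 2 * len(shared) / (len(day1) + len(day2))  # Dice similarity
    jaccard = len(shared) / len(day1 | day2)          # Jaccard index

    print(len(day1), len(day2), len(shared))              # 29 28 10
    print(f"Dice = {dice:.2f}, Jaccard = {jaccard:.2f}")  # Dice = 0.35, Jaccard = 0.21
    ```

    On either measure, agreement between the two days is low, which is the point being made above.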
     
    Andy and Cheshire like this.
