Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results (Silberzahn et al - 2018)

Discussion in 'Research methodology news and research' started by Kalliope, Oct 14, 2018.

  1. Kalliope

    Kalliope Senior Member (Voting Rights)

    Thought this might be of interest to some members of the forum. Was this a bit surprising for you as well?

    Advances in Methods and Practices in Psychological Science:
    Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results


    Abstract:
    Twenty-nine teams involving 61 analysts used the same data set to address the same research question: whether soccer referees are more likely to give red cards to dark-skin-toned players than to light-skin-toned players. Analytic approaches varied widely across the teams, and the estimated effect sizes ranged from 0.89 to 2.93 (Mdn = 1.31) in odds-ratio units. Twenty teams (69%) found a statistically significant positive effect, and 9 teams (31%) did not observe a significant relationship. Overall, the 29 different analyses used 21 unique combinations of covariates. Neither analysts’ prior beliefs about the effect of interest nor their level of expertise readily explained the variation in the outcomes of the analyses. Peer ratings of the quality of the analyses also did not account for the variability. These findings suggest that significant variation in the results of analyses of complex data may be difficult to avoid, even by experts with honest intentions. Crowdsourcing data analysis, a strategy in which numerous research teams are recruited to simultaneously investigate the same research question, makes transparent how defensible, yet subjective, analytic choices influence research results.

     
  2. Trish

    Trish Moderator Staff Member

    On that basis I suggest we throw all psychology clinical trials that claim their treatment is effective in the bin. I assume the data were objective, i.e. based on actual numbers of players in each group given red cards, so it's not even a problem with subjective outcomes, yet they still can't agree.

    The moral of the story: stop pretending psychology is science and take away all computer stats packages from psychologists. A bit like taking sharp knives away from small children - it's just too risky and may cause irreparable harm.
     
