Replication concerns in sports and exercise science: a narrative review of selected methodological issues in the field, Mesquida et al, 2022

Discussion in 'Research methodology news and research' started by cassava7, Dec 15, 2022.

  1. cassava7, Senior Member (Voting Rights)
    Methodological issues such as publication bias, questionable research practices and underpowered study designs are known to decrease the replicability of study findings.

    The presence of such issues has been widely established across different research fields, especially in psychology. Their presence raised the first concerns that the replicability of study findings could be low and led researchers to conduct large replication projects. These replication projects revealed that a significant portion of original study findings could not be replicated, giving rise to the conceptualization of the replication crisis.

    Although previous research in the field of sports and exercise science has identified the first warning signs, such as an overwhelming proportion of significant findings, small sample sizes and lack of data availability, their possible consequences for the replicability of our field have been overlooked.

    We discuss the consequences of the above issues on the replicability of our field and offer potential solutions to improve replicability.

    https://doi.org/10.1098/rsos.220946
     
  2. rvallee, Senior Member (Voting Rights), Canada
    I only skimmed very quickly, but I don't see anything in there about the fact that no other discipline uses this kind of "evidence-based" methodology to answer questions. The exact same issues exist everywhere this kind of study methodology is used: in chronic illness with its misapplication of psychosomatics, in food science (which is notoriously bad), and in basically any healthcare field where psychology-style standards are applied, sports science included, it seems.

    Originally, medicine struck a kind of compromise: controlled pharmaceutical trials of drugs with objective outcomes, where researchers could at least pretend they were studying one thing, proper A/B testing, and not a bunch of things they weren't accounting for. In the end it is true A/B testing that works; controlled studies are a badly corrupted version of it and clearly inadequate. The entire design of this paradigm is wrong.

    But precisely because it is so easy to get away with BS results, medicine adopted this standard massively, and every field that uses this type of study faces the exact same problems. That clearly points to the conclusion that this is simply not a valid way to understand anything: it does not tell you whether something is right or wrong, only whether someone can make it look right in a way other people accept as legitimate. No discipline of science, or even expert profession, relies on such flimsy evidence; it is exclusive to healthcare disciplines, which as a result have stagnated wherever this paradigm is used. If you removed technological progress from medicine, you would remove basically all significant progress, and none of that progress relies on such a flimsy way of answering hard questions, let alone complex ones.

    Still can't face uncomfortable truths. What I see is mostly recommendations to trim around the edges; there is no willingness to put the blame where it lies: this entire methodology is useless at producing real answers, it is only good at producing desirable answers. It exploits bias, not just in publication but throughout the entire process and long after.

    You can keep trying all you want with this process, but you'll never get any answer more useful than 42.
     
