The Validation Crisis in Psychology, Schimmack, 2021

Discussion in 'Research methodology news and research' started by cassava7, Nov 9, 2022.

  1. cassava7

    cassava7 Senior Member (Voting Rights)

    Messages:
    1,051
    Cronbach and Meehl (1955) introduced the concept of construct validity and described how researchers can demonstrate that their measures have construct validity. Although the term construct validity is widely used, few researchers follow Cronbach and Meehl’s recommendation to quantify construct validity with the help of nomological networks. As a result, the construct validity of many popular measures in psychology is unknown.

    I call for rigorous tests of construct validity that follow Cronbach and Meehl’s recommendations to improve psychology as a science. Without valid measures even replicable results are uninformative. I suggest that a proper program of validation research requires a multi-method approach and causal modeling of correlations with structural equation models. Construct validity should be quantified to enable cost-benefit analyses and to replace existing measures with better measures that have superior construct validity.
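The "multi-method approach" the abstract calls for echoes the classic multitrait-multimethod logic: a measure shows convergent validity when different methods of measuring the same trait correlate, and discriminant validity when different traits do not. This is only an illustrative sketch, not the paper's method; the traits, methods, loadings, and simulated data below are all hypothetical assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Two hypothetical latent traits (e.g. neuroticism, extraversion),
# simulated as uncorrelated for this illustration.
t1 = rng.normal(size=n)
t2 = rng.normal(size=n)

def measure(trait, loading=0.8):
    """Observed score = trait signal plus independent measurement error.
    The 0.8 loading is an assumed value, chosen for illustration."""
    return loading * trait + np.sqrt(1 - loading**2) * rng.normal(size=n)

# Each trait measured by two methods (e.g. self-report, informant report).
t1_m1, t1_m2 = measure(t1), measure(t1)
t2_m1, t2_m2 = measure(t2), measure(t2)

# Convergent validity: same trait, different methods (expect ~0.8 * 0.8).
convergent = np.corrcoef(t1_m1, t1_m2)[0, 1]
# Discriminant validity: different traits (expect near zero here).
discriminant = np.corrcoef(t1_m1, t2_m1)[0, 1]

print(f"convergent r = {convergent:.2f}")
print(f"discriminant r = {discriminant:.2f}")
```

In a real validation study these correlations would feed a structural equation model rather than a simple correlation matrix, so that method variance and trait variance can be separated and construct validity quantified, as the abstract proposes.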

    https://open.lnu.se/index.php/metapsychology/article/view/1645

    Open access (PDF): https://open.lnu.se/index.php/metapsychology/article/view/1645/2436
     
  2. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,661
    Location:
    Canada
    If you're not measuring, you're not doing science. Ratings aren't measurements, measuring something has a specific meaning in science and ratings aren't it. Psychology almost exclusively does ratings, through questionnaires, which makes them of very limited scientific value. As a choice. A choice that leads to many, many, so so many, false positives where if they measured anything real the silly fake ratings wouldn't be significant no matter how much mathemagics and lies, damned lies and statistics you pull out of a hat.

    But if you start measuring things the whole building falls down like a cartoon coyote who looks down. So there are still very few measurements in psychology, because ratings can be endlessly manipulated to achieve specific outcomes. Specific outcomes such as about 90% of studies "confirming" their hypothesis.

    The question in psychology is whether they want real knowledge, or to produce fake knowledge that makes it seem like they're doing something. And without fail the obvious choice is made to stick with the flimsy ratings, because otherwise the party dies down quickly and people have to come to grips with doing essentially useless things for no reason other than continuing to publish useless things.

    This "crisis" is not a bug, it's a feature. This is why no one's really doing anything about it: they don't have to and you can't make them.
     
  3. BrightCandle

    BrightCandle Senior Member (Voting Rights)

    Messages:
    341
    It's just fraud all the way through. Psychology findings of all types, medical and society-wide, rarely survive even 10 years before they are shown to be a lie. Yet the newspapers keep printing the findings and the journals keep accepting papers, from known and recognised fraudsters, and none of it has any scientific rigour, nor the historical record of truthiness, to be worth the level of presentation it gets. As a field it's largely fraudulent, and it's behind a large amount of the failure to replicate in "science" today. They are not going to fix the validation errors because it's a feature: it's how they manage to find something where there is nothing to find. If they started trying to produce valid results, their entire field would collapse overnight.
     
  4. Trish

    Trish Moderator Staff Member

    Messages:
    55,414
    Location:
    UK
    I think a lot of psychology research is not so much fraud as an attempt to quantify the unquantifiable using extremely inadequate questionnaires that don't really measure what they purport to measure. The data are then dumped into computer stats packages that produce statistical analyses the researchers barely comprehend, and the numbers don't warrant the level of confidence placed in them as having any real-world meaning.

    When that's just a student project testing irrelevancies like what colours, holidays or animals males and females or old and young people prefer so the students can learn to use the stats packages and write up projects, it can seem relatively harmless, but when applied to human mental and physical health, it can be lethal.
     
  5. CRG

    CRG Senior Member (Voting Rights)

    Messages:
    1,860
    Location:
    UK
    Author research background: https://research.com/u/ulrich-schimmack

    Overview
    What is he best known for?

    The fields of study he is best known for:
    • Social psychology
    • Social science
    • Cognition
    Ulrich Schimmack mainly investigates Social psychology, Life satisfaction, Personality, Big Five personality traits and Subjective well-being. In Social psychology, Ulrich Schimmack works on issues like Developmental psychology, which are connected to Neurosis. Ulrich Schimmack integrates many fields, such as Life satisfaction and Well-being, in his works.

    Within one scientific family, he focuses on topics pertaining to Neuroticism under Big Five personality traits, and may sometimes address concerns connected to Extraversion and introversion. His Subjective well-being research includes themes of Cognitive psychology, Job satisfaction, Facet and Clinical psychology. His Affect research incorporates themes from Structural equation modeling and Cognitive science.

    more at link.
     