It remains shocking to see how poor the reasoning is about these things. It's abysmal. And it has so many consequences, such as the fact that this lack of reliable objective testing is explicitly used to misdiagnose:
Similarly, functional neurological disorder (FND) is characterized in part by internal inconsistencies in subjective symptoms versus objective measures (Adewusi et al., 2021), with the subtype of functional cognitive disorder frequently defined by discrepant subjective and objective cognition
And it even gets worse from there. The entire construct of FND is built on this discrepancy (and on a lack of other specific tests, which amounts to the same thing). And yet the association is weak, when a construct built on that discrepancy should show a total separation. But just like the deconditioning hypothesis, which isn't bothered by most participants in a study meeting recommended activity levels, it just doesn't matter.
Three reviews in FND reported a weak association or a lack of concordance between self-reported and performance-based cognition in three (Teodoro et al., 2018), four (Millman et al., 2025), and five (Van Patten et al., 2024) primary source articles.
Not just that: they also looked at "traditional" neurological disorders and found mostly the same thing:
Regarding cross-sectional results, several reviews noted predominantly nonsignificant relationships between subjective and objective cognition in individual studies
But they label it differently, somehow. "Lack of concordance" becomes "predominantly nonsignificant relationships". Because reasons, I guess. Calling this invalid really undersells how obviously wrong it is.
And of course they find the same overlap between subjective cognitive deficits and mental health questionnaires that ask overlapping questions:
In contrast to the overall lack of correspondence between subjective/objective cognitive data, the seven reviews specifically investigating mental health factors in cognitive aging found consistent associations between greater symptoms of depression and anxiety and increased subjective cognitive concerns
Because they ask overlapping questions. And the very arguments used to insist that we have psychological problems are also found in "traditional" diseases. None of this is consistent or coherent. Piss-poor reasoning is so far above this it's not even a challenge. Phrenology was more consistent than this, and it's total bunk.
And they find the same with "traditional" (what the hell does that even mean?) diseases:
Eight reviews focused on cancer (Bray et al., 2018; Crouch et al., 2022; Cusatis et al., 2023; Hutchinson et al., 2012; Mekler et al., 2025; Pembroke et al., 2024; Pullens et al., 2010; Vance et al., 2023), with all eight noting mixed findings and the majority of individual studies describing nonsignificant relationships.
The very thing that is used to argue about the psychosomatic nature of many illnesses medicine doesn't understand is found in... all diseases that affect mental functioning. But they can't imagine that their tests are inadequate, so they build up this elaborate belief system about how it's us who are wrong, who have poor insight and biases and so on. Not them. No, not them. They are perfect and all-knowing and unbiased and so on.
And they do discuss this, although they steer totally clear of the implications of the entire construct currently being defined by what they find to be bunk:
Although findings from our study do not lessen the clinical significance of cognitive anosognosia in psychosis or internal inconsistency in functional cognitive disorder, current results suggest that a simple discrepancy (or non-correlation) between self-reported and performance-based cognition is insufficient for the identification of these conditions. For example, an elevation on a self-report measure of cognition, together with a broadly intact neuropsychological test profile, is not pathognomonic of functional cognitive disorder because this pattern is non-specific and may simply reflect the lack of a consistent relationship between subjective and objective measures of cognition. Instead, we recommend that investigators seeking to diagnose functional cognitive disorder identify unique internal inconsistencies (e.g., a 2.5 standard deviation subjective/objective discrepancy cutoff) that distinguish it from other conditions.
But this is the literal definition of so-called FND. And there are no such "unique internal inconsistencies (e.g., a 2.5 standard deviation subjective/objective discrepancy cutoff) that distinguish it from other conditions". It does not seem to bother anyone that invalid concepts are routinely used in health care, including concepts whose entire aim is to invalidate the lived reality of millions of people. No thought is given to the fact that we are talking about human beings. Might as well be about alfalfa yields.
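A discrepancy cutoff like that can only be specific if subjective and objective scores are tightly correlated in healthy people to begin with, which is exactly what the reviews say they aren't. A back-of-the-envelope sketch (purely illustrative numbers, not from the paper) shows how often a large subjective/objective gap appears in a perfectly healthy population, depending entirely on that correlation:

```python
from statistics import NormalDist

def discrepancy_rate(r: float, cutoff: float = 2.5) -> float:
    """Fraction of a healthy population whose subjective-minus-objective
    z-score gap exceeds `cutoff`, assuming both scores are standard normal
    with correlation r, so Var(z_subj - z_obj) = 2 - 2r."""
    sd = (2 - 2 * r) ** 0.5
    tail = 1 - NormalDist().cdf(cutoff / sd)
    return 2 * tail  # two-sided: discrepancy in either direction

# Weak correlation (what the reviews actually report):
# large discrepancies are common, so the cutoff flags healthy people.
print(f"r=0.1: {discrepancy_rate(0.1):.1%}")

# Strong correlation (what the cutoff implicitly assumes):
# large discrepancies are vanishingly rare.
print(f"r=0.8: {discrepancy_rate(0.8):.2%}")
```

In other words, the weaker the subjective/objective correlation, the more "discrepant" healthy people there are, and the less a discrepancy can mean anything diagnostically.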
There is accumulating evidence that general cognitive dysfunction is a shared feature across numerous mental health and medical conditions
Uh. Sure. There's also the fact that it has been reported as such for a long time. None of this junk actually adds anything: patient self-reports are generally better than flawed objective tests, and they get dismissed for ideological reasons. Most of psychosomatic "research" has amounted to confirming that self-reports are generally reliable, while having to invent alternative explanations for why, because it just can't be true: "poor insight", internal biases, and so on.
In contrast to subjective and objective cognition, mental health symptoms such as depression and anxiety were consistently related to self-reported cognition
Obviously, because they all ask about it. What kind of nonsense is this where everyone pretends otherwise? It's like a circus where every stage magician asks the audience member they brought on stage what their card is before revealing it. No one should be falling for stuff like this, least of all the damn stage magicians.
What's even more shocking is that the need for a very tight testing procedure is essentially the core of medical diagnosis. If you use the wrong test, you simply don't get a valid answer about the problem. And yet when it comes to things categorized as mental health, correctly or not, everything turns as whimsical as it is nonsensical. Up becomes down, and left becomes blue. Whatever is needed to argue a preferred version of reality.
Plus the way they do this is basically taking the sum of many poor assessments and arguing that those assessments must converge on the truth. Except they don't have to. They may, but a lot depends on the biases involved. And boy are there heavy biases involved here. In some cases there are literally entire structures built on biases.
It seems predicated on the logic of random guesses, like asking an audience to guess the number of marbles in a jar: almost no one gets it right, but the average of all guesses converges on the right answer. But the issues involved here are far too complex for this. Assessors don't have nearly as much information as someone looking at the jar, eyeballing its volume, and roughly calculating from that. Instead it's more like predicting the weather in a place they never lived, during a different climate era, with no useful data. Just rough guesses. It just doesn't work for that. Nothing ever has.
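The marble-jar trick only works when individual errors are random and unbiased; a bias shared by the whole crowd survives any amount of averaging. A quick simulation (with made-up numbers) illustrates the difference:

```python
import random

random.seed(42)
TRUE_COUNT = 1000  # actual number of marbles in the jar (hypothetical)

# Unbiased crowd: individual guesses are wildly off, but errors cancel.
unbiased = [TRUE_COUNT + random.gauss(0, 400) for _ in range(10_000)]

# Biased crowd: everyone shares the same systematic error (say, an
# assumption that undercounts by 30%). Averaging cannot fix this.
biased = [0.7 * TRUE_COUNT + random.gauss(0, 400) for _ in range(10_000)]

mean_unbiased = sum(unbiased) / len(unbiased)
mean_biased = sum(biased) / len(biased)

print(f"unbiased crowd mean: {mean_unbiased:.0f}")  # near the true 1000
print(f"biased crowd mean:   {mean_biased:.0f}")    # near 700, far from truth
```

Piling up more biased assessments just makes the wrong answer more precise.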
Mental health care remains the absolute bottom pit of all the professions. It's so damn awful.