Symptom exaggeration and symptom validity testing in persons with medically unexplained neurologic presentations, 2015, Lockhart & Satya-Murti.

Discussion in 'Other psychosomatic news and research' started by livinglighter, Aug 2, 2023.

  1. livinglighter, Senior Member (Voting Rights)
    Summary

    Neurologists often evaluate patients whose symptoms cannot be readily explained even after thorough clinical and diagnostic testing. Such medically unexplained symptoms are common, occurring at a rate of 10%–30% among several specialties. These patients are frequently diagnosed as having somatoform, functional, factitious, or conversion disorders. Features of these disorders may include symptom exaggeration and inadequate effort. Symptom validity tests (SVTs) used by psychologists when assessing the validity of symptoms and impairments are structured, validated, and objectively scored. They could detect poor effort, underperformance, and exaggeration. In settings with appropriate prior probabilities, detection rates for symptom exaggeration have diagnostic utility. SVTs may help in moderating expensive diagnostic testing and redirecting treatment plans. This article familiarizes practicing neurologists with their merits, shortcomings, utility, and applicability in practice.

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5764424/
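
    The abstract's caveat that detection rates only have diagnostic utility "in settings with appropriate prior probabilities" is essentially a base-rate argument: the same failed test means very different things depending on how common invalid responding actually is in that setting. A minimal sketch in Python, using purely illustrative sensitivity, specificity and base-rate figures (not taken from the paper):

# Why "prior probability" matters when interpreting a failed symptom validity test (SVT).
# All numbers below are illustrative assumptions, not figures from Lockhart & Satya-Murti.

def positive_predictive_value(sensitivity, specificity, base_rate):
    """Probability that a failed SVT really reflects invalid responding."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# The same hypothetical test (80% sensitivity, 90% specificity) in two settings:
for setting, base_rate in [("medico-legal claimants", 0.40),
                           ("routine neurology clinic", 0.05)]:
    ppv = positive_predictive_value(0.80, 0.90, base_rate)
    print(f"{setting}: PPV of a failed SVT = {ppv:.0%}")

# Prints roughly 84% in the high-base-rate setting but only about 30% in the
# low-base-rate clinic, i.e. most "failures" there would be false positives.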
     
  2. livinglighter, Senior Member (Voting Rights)

  3. rvallee, Senior Member (Voting Rights)
    Love this probably unwittingly dishonest list of labels. To them they are very much identical, however much they might pretend that the "or" does not imply it. It doesn't just imply it, it says it. The dishonesty is especially infuriating because they say it plainly in most contexts and reserve the lies for where they should never happen: in a consult room, deciding the entire life of a whole human being.

    Especially the opening of the abstract, about symptoms that "cannot be readily explained even after thorough clinical and diagnostic testing":
    I don't know if they genuinely believe that they are capable of diagnosing everything real, or that their tests allow it, but this incredible hubris will go down in history as one of the most immoral ideas ever put into widespread practice. Especially in neurology, they are so laughably far from knowing everything. The whole idea of psychosomatic illness began in the 19th century, when medicine basically knew nothing and had no technology to speak of. And they still have the same hubris today, that there couldn't possibly be things that are beyond their infinitely wondrous skills.

    They even pretend that they can read motive and intent in people, even that they can tell the difference between faking and exaggeration. Which is plainly absurd, all without any technology or real skills. They just think they're perfect and omniscient, but only when convenient. And all because of the complete power imbalance, where they can do and say whatever they want without oversight. They know very well that there are many things they don't know and can't explain, but somehow they cannot apply that knowledge here; instead they exempt themselves from the normal standards and do the very thing they're not supposed to do: harm people. Awful, barbaric nonsense.

    The reasoning here is basically non-existent. They even use their failure as proof that they're not failing. As someone with an engineering background, I find it infuriating to watch people say such foolish things, incapable of serious reasoning.
     
  4. livinglighter, Senior Member (Voting Rights)
    Interestingly, in the past the advice was that all the conditions that make up MUS should only be considered after thorough clinical and diagnostic testing. Now the leading researchers say that isn't needed. What advances caused that to change?

    If SVTs were so helpful in diagnosing MUS, why wasn't neuropsychological evaluation listed as a recommended test back when the 2007 CFS guideline was published?
     
  5. Hutan, Moderator, Staff Member
    Considering the difficulty of getting any evaluations done after a label of MUS (or whatever equivalent term is favoured at that moment), even 1% of 1,114 is tragic. That's around 10 people who probably needed some sort of acute event before their "organic brain disorder" was diagnosed in the 18 months following a MUS diagnosis. And, surely, there will be many times that number with accepted "organic brain disorders" that only become apparent in the following years. I've mentioned before the woman who was diagnosed with MS 20 years after being diagnosed with ME/CFS.

    We have regular mammography screening programmes set up for breast cancer where the rate of detection of cancer is a bit less than 1%. If the odds of finding some pathology are similar, why is it so unreasonable to properly screen someone at the onset of debilitating neurological symptoms? And screen them again sometime later, if the debilitating symptoms are still there?
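
    A quick sketch of the arithmetic in the comparison above: the 1% and 1,114 figures come from the post, while the mammography detection rate of roughly 0.9% is an assumed stand-in for "a bit less than 1%".

# Rough arithmetic behind the comparison above. The 1% and 1,114 figures are
# from the post; the ~0.9% mammography detection rate is an assumed value.

mus_cohort = 1114
later_organic = 0.01 * mus_cohort
print(f"~{later_organic:.0f} people given an 'organic brain disorder' diagnosis "
      f"within 18 months of a MUS label")

mammography_detection_rate = 0.009   # assumed screening yield, cancers per mammogram
print(f"~{mammography_detection_rate * mus_cohort:.0f} cancers expected per "
      f"{mus_cohort} screening mammograms")

# Both yields are on the order of 10 per 1,114 people - the point being that a
# roughly 1% hit rate is treated as well worth screening for in one context
# but not in the other.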
     
  6. rvallee, Senior Member (Voting Rights)
    The increasing need to reduce costs, even at the expense of actual lives. The biopsychosocial model has made this front and center.

    There are so many diseases and conditions that medicine once knew nothing about and now treats. People live longer, and that adds a significant burden. But healthcare budgets aren't increasing; they're mostly fixed everywhere. There is more demand and the same supply, so this is basically high-level "triage".
     
  7. Hutan, Moderator, Staff Member
    And that's what makes the funding of useless, poorly evidenced therapies so frustrating - the talking therapies meant to cure back pain or MUS... And the funding of endless poor-quality research into those therapies - millions of dollars.

    And fobbing off a person with some sort of MUS diagnosis doesn't necessarily make them go away and stop generating health-system expenditure. They will probably keep trying to get answers some other way. Seeing six specialists who do no tests might cost just as much as seeing a GP and a good specialist who between them do a thorough screening. If nothing is found after a thorough effort, the patient is more likely to feel satisfied that the obvious possibilities have been checked, and to wait and see if time heals things.

    Not to mention the cost of welfare benefits paid and tax revenue forgone when patients who could have had an illness identified at an early stage and successfully treated are instead left undiagnosed.
     
  8. bobbler, Senior Member (Voting Rights)
    The manifesto justifying the 10-30% write-off continues... I wonder what that number is really explained by: whether it is 'in the mind/ideology/attitude of the neurologist' [perhaps truthfully a 'post hoc justification' for, sadly, certain demographics or 'don't like the face of'], communication styles they simply tend not to understand, or new conditions they are too lazy/ignorant/stupid/and so on to be curious about.

    Given Popper's point about falsifiability, they simply can't prove anything about the patient population whose behaviour they claim to be experts in... so this just seems like contradictory nonsense. It's backwards: finding criteria to 'prove' that such gut-instinct categorisation is right, and then searching for fake post-hoc justifications for why the categorisations were made in the first place. And whether or not these are valid or consistent measures (a neurologist's 'objective' rating?), their apparent validity rests on 'does it match?'.

    The most important part of this research would be 1-, 3-, 5-, 10- and, yes, 20-year follow-up by a genuinely independent party, to see how many people turned out to have something else, and what difference in quality of life and life expectancy the labels directly caused compared with those who didn't have such misfortune. Weird that they never want to know the real results, when that's the blindingly obvious 'test'. Shouldn't that list be the most fundamental thing such people are haunted by and asked about, to the point where it is a 'tracked outcome' just as much as other measures are for departments and conditions?

    This reminded me of a really good comment made by @rvallee relatively recently in another thread, noting that [in the context of that particular thread] there seemed to be a potentially disturbing trend of people claiming that the term 'psychometric' - which means something quite specific and is about measurability against a population - applies just because their job title has the word 'psych' in it. Except these individuals don't even have that, if it's neurologists.

    I daren't open this to find out whether it is a bunch of fools running a test of ambiguous questions about 'how many times a day' or 'how someone feels when' on a Likert scale, instead of, for example, a properly scientifically designed challenge (with all the impacting factors, e.g. environmental ones, at least accounted for if not 'controlled and measured') of reaction times, or a task whose cognitive elements they can label, done in the form of a 2-day CPET.

    I love the fact that they think they can wield poisonous terms like 'poor effort' on a population that will include people with serious illness and potentially severe symptoms, as if there is no need to rate effort relative to what 100% effort could actually achieve for that person (or more than 100% for pwME, given how PEM works).

    And that they think 'rated by someone else' doesn't mean their data is really more a psychometric measure of the raters' tendencies to rate certain individuals in certain ways than of the people paraded in front of them - given the raters are the ones 'doing the action', i.e. part of the measurement, if they are 'rating'. Can people really be that confused about methodological terms?

    It also makes me laugh that, while their supposedly beloved 'BPS' claims to holistically bear in mind that many things impact on 'the person', they can't imagine that someone could have had something else [bad] happen on one day versus another.

    The words 'validated' and 'scored' make me very suspicious that this isn't because they needed to rank continuous data into discrete categories for some important statistical reason, but because it isn't psychometric at all. Indeed, psychometrics would normally demand data in continuous form because of the nature of distributions.

    And I balk at the word 'objectively' being included: I have a horrible feeling that 'objectively scored', coming from a principal investigator who uses terms like factitious and conversion disorder, is double-speak.

    Either way, I guess the underlying causation behind any correlations they have might sadly be things related to gender, peer pressure and various inaccurate presumptions and tropes about what a disabled person versus a malingerer would answer in a given situation... all 'scientifically' gleaned from the mind of one person, backed up by other people who hold the same presumptions because they all chat together in the tea room about what they think malingerers are like. And they don't realise that this is neither scientific nor knowledge-based, often because... well, we're experts? Erm... not if you don't know how to test things, or can't accept the results of your own test when it contradicts your gut feelings.

    I remember watching a talk by an expert in university league tables, who noted that after all the careful thinking about which criteria matter, how they are best measured, checking the weightings and modelling it through for flaws... the important thing was that unless Oxford or Cambridge came out at or very near the top, no one would believe it. Which of course = calibration, as much as triangulation (a process that does need to be used to check for validity, and that seems to be largely absent here and in most BPSM research... but then they don't like using well-defined patient criteria either).

    I have a grim feeling that these people know their audience, and that the audience isn't checking the scientific rigour or giving awards for ingenuity in robust measurement... and of course 'calibration', in the context of people who demand to see what they already presume, is a scary example of reorganising reality to back up deluded beliefs - beliefs that then seem 'more feasible' because there is now a scale for them, with the right number of people landing in the different groups.

    Wouldn't someone who really wanted to calibrate begin with the few famous 'proven' frauds, and include huge numbers of people with proper diagnoses - particularly a group who might have gone through the hell of being doubted or put under a MUS-type approach (there are enough of them who eventually find out they have cancer or RA or PA or thyroid issues and so on) - so that the impact of gaslighting, and the horrendous effect it has on your confidence when talking about symptoms, is accounted for? That is, IF they wanted to genuinely get the right people, and not just the right numbers (and who cares about the collateral damage), into this apparent 'holy grail' of tools.

    Isn't doing it this way - by asking those who might be getting it wrong because of their own individual biases - just a way of ingraining problems, flaws and errors throughout an industry, instead of the other way around? But 'as long as the low-hanging fruit = the right number' seems to be the focus here, which is scary.
     
