CAN LINGUISTIC ANALYSIS BE USED TO IDENTIFY WHETHER ADOLESCENTS WITH A CHRONIC ILLNESS ARE DEPRESSED? - Jones, Loades, Crawley et al Dec 15 2019

Sly Saint

Senior Member (Voting Rights)
Abstract
Comorbid depression is common in adolescents with chronic illness. We aimed to design and test a linguistic coding scheme for identifying depression in adolescents with Chronic Fatigue Syndrome/Myalgic Encephalomyelitis (CFS/ME), by exploring features of e‐consultations within online cognitive behavioural therapy treatment.

E‐consultations of 16 adolescents (aged 11 – 17) receiving FITNET‐NHS treatment in a national randomised controlled trial were examined. A theoretically‐driven linguistic coding scheme was developed and used to categorise comorbid depression in e‐consultations using computerised content analysis. Linguistic coding scheme categorisation was subsequently compared to classification of depression using the Revised Children's Anxiety and Depression Scale (RCADS) published cut‐offs (t‐scores ≥ 65, ≥ 70).

Extra linguistic elements identified deductively and inductively were compared with self‐reported depressive symptoms after unblinding. The linguistic coding scheme categorised three (19%) of our sample consistently with self‐report assessment. Of all 12 identified linguistic features, differences in language use by categorisation of self‐report assessment were found for ‘past‐focus’ words (mean rank frequencies: 1.50 for no depression, 5.50 for possible depression, and 10.70 for probable depression; p < .05) and ‘discrepancy’ words (mean rank frequencies: 16.00 for no depression, 11.20 for possible depression, and 6.40 for probable depression; p < .05).

The linguistic coding profile developed as a potential tool to support clinicians in identifying comorbid depression in e‐consultations showed poor value in this sample of adolescents with CFS/ME. Some promising linguistic features were identified, warranting further research with larger samples.
https://onlinelibrary.wiley.com/doi/abs/10.1002/cpp.2417?af=R

ETA:
full paper now available
https://sci-hub.tw/10.1002/cpp.2417
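For anyone wondering what the "computerised content analysis" in the abstract typically involves, here is a rough sketch of a dictionary-based word-category count compared across groups with a rank-based test. This is my own illustration, assuming a LIWC-style approach (the abstract's "past-focus" and "discrepancy" categories suggest something of that kind); the word lists, example texts and choice of test below are invented, not taken from the paper.

from scipy.stats import kruskal

# Hypothetical category word lists (real schemes such as LIWC use much larger,
# validated dictionaries; these few words are invented for illustration only).
CATEGORIES = {
    "past_focus": {"was", "were", "had", "used", "ago", "before"},
    "discrepancy": {"should", "would", "could", "wish", "hope", "want"},
}

def category_rate(text, words):
    """Return category hits per 100 tokens in one consultation text."""
    tokens = [t.strip(".,!?;:").lower() for t in text.split()]
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in words)
    return 100.0 * hits / len(tokens)

# Toy e-consultation snippets grouped by an RCADS-style classification (made up).
groups = {
    "no_depression": ["I went to school and it was fine", "I had a good week"],
    "possible_depression": ["I wish I could do more but I was too tired"],
    "probable_depression": ["I should have rested before I was this exhausted"],
}

# For each word category, compare frequencies across the three groups with a
# Kruskal-Wallis rank test (one plausible reading of the abstract's
# "mean rank frequencies ... p < .05" comparisons).
for name, wordlist in CATEGORIES.items():
    samples = [[category_rate(t, wordlist) for t in texts] for texts in groups.values()]
    stat, p = kruskal(*samples)
    print(f"{name}: H = {stat:.2f}, p = {p:.3f}")

Even a sketch like this makes the obvious limitation clear: whatever the word counter reports depends entirely on the dictionaries you feed it, and with only 16 texts any group difference is fragile.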
 
Once you read Freud, you know where these ideas that patients cannot tell they have a psychological problem come from.
I have read Freud. Unfortunately for me.

(Required reading during my English Lit course at university. Ugh.) Don't remember that part though.

All I remember is that if you have a dream about losing a body part, it means you're jealous that you don't have a penis. Yeah, that's definitely what my nightmare about my teeth falling out was about. [sarcasm]
 
Oh, wow. OK. This is just a perfectly distilled, ultra-concentrated example of everything wrong with BPS. It's like a man eating his own head; now I have seen everything.

First off, it's a basic requirement for diagnosing ME, or even CFS, to differentiate it from depression. Depression is a widely diagnosed condition, one often presented as an alternative to ME/CFS, which should make it the more identifiable of the two. It isn't, because even "experts" cannot tell the difference. Tens of millions of people are diagnosed with depression, with massive consequences, despite the diagnosis being unreliable.

So then you have this:
Comorbid depression is common in adolescents with chronic illness. We aimed to design and test a linguistic coding scheme for identifying depression in adolescents
stating that it is common, yet it still has to be identified using a bizarre linguistic proxy. Or do you struggle to differentiate it and have to resort to weird schemes? How can it be said to be common if it still cannot be reliably identified? Those two claims are in direct opposition to one another.

Especially as one of the most common beliefs I have seen among physicians is that CFS is simply the physical symptoms of depression, whatever that means. That belief is largely inspired by the work of Crawley and her like-minded BPS colleagues, who consistently present anxiety and depression, both hard to diagnose because there are no reliable tests, as common, if not universal, in CFS.

But of course diagnoses of anxiety or depression are no more reliable than a diagnosis of CFS based on x months of "fatigue", whatever that means. Yet they are presented as more applicable diagnoses because... they are seen as more easily identifiable? That is clearly not the case when studies like this one show that the very people who insist CFS is what they say it is cannot tell the difference, and still have to devise Rube Goldberg methods to try to tell the two apart, even though their model depends on being able to.

A good point is that this is not a questionnaire. Questionnaires are not reliable and should never, ever, be the only tool, especially when used on small samples in cherry-picked circumstances, to justify abandoning millions of people to suffering and premature death. A bad point is that this linguistic analysis is even less reliable: it tells you essentially nothing and could never distinguish someone with CFS, ME, the flu or MS from someone who is simply a misanthrope, or whatever else goes into that spurious linguistic analysis.

Notably, this study could have been done 30 years ago, exactly as is, regardless of anything that has happened since people like Crawley and other BPS ideologues hijacked ME and actually set the field back. No new technology or scientific knowledge from the last 30 years is relevant here, other than computers making linguistic analysis faster, without making it any more relevant to identifying depressive thoughts. Never mind that depressed thoughts are the most natural response to an impossibly desperate situation, a reality caused in large part by the BPS stranglehold, so this is basically reality folding in on itself.

It's not exactly surprising, but it is still shocking, to witness this much incompetence running in circles for so long; by now the bodies left in their wake add up to a sizable structure under their feet. And no doubt the most common reaction to this study will be contentment that it incorporates a psychosocial perspective, as if that were a desirable goal rather than merely a process.

At this point most of my anger is directed at the people who give this garbage legitimacy. It is not even close to being credible, and yet it effectively acts as judge, jury and executioner for millions of lives further broken by a level of incompetence that seems to persist only because no one actually believes anyone working in medicine could be this bad. And yet it appears to be common, and it doesn't take a linguistic analysis to find that out.
 
I think they should try these things out on robots first. That would tell them whether the words really meant the robots were depressed. Then that could be extended to people. In fact, why bother extending it to people? You could cure thousands of robots with internet CBT in half a second. So much more cost effective.
 
There is an obsession with depression and anxiety in children and adolescents.
Probably underpinned by the variety of inappropriate scales used:
HADS
Chalder Fatigue Questionnaire
SF-36
These are not nuanced for ME responses.

But hey, narrative to reinforce, papers to publish....
 
Adolescence for many is a trial! Being down, depressed, challenged, etc. is all part of a 'normal typical adolescent journey'... at least that was my 30 years of professional experience working with challenged young people in a Pupil Referral Unit. Most eventually came through the PRUs remarkably well... and 'normal'.
It begs the question of how much damage might be done by this sort of approach and attention favoured by EC et al.
 
As a linguist I formally object to my field being dragged down to FITNET level; some branches of linguistics are quite bonkers enough as it is. For example, Linguistic Sentiment Analysis - which I suspect this study used a version of - has become quite faddish, but even there they wouldn't consider looking at just 16 samples, let alone conclude that a hint of something in 3/16 was worth pursuing (at least I hope they wouldn't; I haven't really been following things much in the last decade).
 
How long before someone does a trial of interpretive eyebrow-and-hands dancing and uses the process of identifying sub-conscious semaphores to diagnose... I don't know... the relationship between thoughts-about-desserts and intestinal bloating?

Just use a "BPS" stamp on the document, not even in the text but on top of it, and for sure the BMJ will publish it, the SMC will promote it, and it will be defended as good science because it's a psychosocial perspective or something. For effect, have someone shout "BPS! BPS! BPS!" while the reviewer is reading it, "LIGHTNING BOLT!" style.

Alchemy was 100x more scientific than this, even at its worst. Just make it stop; with all the damn effort wasted on this, we would likely have safe and effective treatments for half of all chronic illnesses by now.
 
About as bad as it seemed. They invented their own analysis protocol, then graded themselves on whether the protocol is relevant. Though, to their credit, it is recognized that it is not particularly relevant, although more research is (always) needed.

Their definition of fatigue is definitely not what pwME mean at all, in case anyone was wondering:
any mood disorder was not a cause of the fatigue
We are not talking about the same thing, and it's not even close; here "fatigue" clearly means a proxy for motivation. Whatever, typical. And this is a FITNET-related paper, which confirms that this invalid definition of fatigue is the one used within that study, or whatever FITNET actually is. Might as well define a stuffy nose as talking funny for all the relevance it has.

One weird thing is that the study analyzes the first 4 emails sent to the service by those who were later randomized to the iCBT arm, before there could be any informed participation. Would it truly be a Crawley paper if it didn't make a mockery of ethical approval?
Clinicians should be aware that, at the moment, it is very difficult to identify comorbid depression in the early stages of e-consultations.
Indeed, which makes all the statistics about those diagnoses very unreliable. That doesn't stop anyone from claiming precise percentages as if they were solid ground. But the clear logical conclusion is that this "relation" with depression is very unreliable. If only that meant anything.

I will dub this the FISH trial: if you go to a pond you seeded with fish, build lures for them, scoop them out at a choke point and sort out the precise fish you want, you can claim whatever you want, because clearly nobody reads this stuff or checks its substance. It's just a churn of lousy papers of no significance whatsoever, whose only aim is to keep this stupid fictitious narrative alive.
 
Sadly this perpetuates a mindset ensuring that if/when EC leaves the field, everything continues as "business as usual" for a particularly vulnerable patient group.

We really need charities/biomedical researchers to lead on this and expose it for what it is.

If it is too uncomfortable for UK researchers then leverage needs to come from abroad.

Recent research (I am assuming timed for the NICE deliberation) is abysmal.
 