Language as a predictor of anxiety, depression, and self-efficacy scores and recovery rate in teenagers with CFS, 2022, Fennema (M.Sc. Thesis)

Discussion in 'Psychosomatic research - ME/CFS and Long Covid' started by Dolphin, Dec 14, 2022.

  1. Dolphin

    Dolphin Senior Member (Voting Rights)

    Messages:
    5,792
    Source: Utrecht University
    Date: October 28, 2022
    URL: https://studenttheses.uu.nl/handle/20.500.12932/43311

    https://studenttheses.uu.nl/bitstre...2/43311/Thesis Mara Fennema Final Version.pdf

    Language as a predictor of anxiety, depression, and self-efficacy scores and recovery rate in teenagers with Chronic Fatigue Syndrome
    -------------------------------------------------------------------
    Mara Fennema - Artificial Intelligence, Utrecht University, The Netherlands

    Summary

    Nowadays, Artificial Intelligence (AI) models are being used in multiple areas of the healthcare sector.

    This thesis looks into the relationship between language use of teenaged patients with Chronic Fatigue Syndrome (CFS) and their anxiety, depression, self-efficacy, and CFS treatment outcome.

    This research aims to make it easier for healthcare professionals to get an indication of the level of a patient's anxiety or depression, the measure of their self-efficacy, and whether or not a specific type of treatment will work for a patient.

    Using a short text written by the patient to get such an indication would facilitate an earlier start of effective treatment.

    For both of its main focus areas, this thesis uses data from 102 patients who received online, email-based Cognitive Behavioural Therapy.

    The first focus area looks at the correlation between a patient's language use and their anxiety, depression, and self-efficacy.

    This is done by training n-gram-based language models and Naive Bayes classifiers on the text of the emails to predict the patients' anxiety, depression, and self-efficacy scores.

    The language models' results were compared to those of models trained on randomly generated scores, and the difference between the two was statistically significant.

    The language model performed better than Naive Bayes, and it was concluded that there was a correlation between language use and anxiety, depression, and self-efficacy.
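
    The abstract doesn't include any code, but the following is a minimal sketch of the kind of pipeline it describes for the first focus area, assuming scikit-learn, invented Dutch example emails, and questionnaire scores binned into classes (none of which are specified by the thesis):

```python
# Hypothetical sketch: bag-of-n-grams features + Naive Bayes to predict
# binned anxiety scores from patient emails. All data below is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Ik heb deze week twee keer kunnen fietsen.",
    "Ik durfde vandaag niet naar school te gaan.",
]
anxiety_bins = ["low", "high"]  # questionnaire scores, binned into classes

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),  # unigram + bigram counts
    MultinomialNB(),
)
model.fit(emails, anxiety_bins)
print(model.predict(["Ik voelde me deze week vrij rustig."]))
```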

    The second focus area looks at how well the language used by the patients in the emails sent to their therapists can be used with various AI models to predict the level of their anxiety and depression, the measure of their self-efficacy, and their CFS treatment outcome.

    This was done using the number of non-agentic language features per email, Bag of Words, and BERTje embeddings.

    These features were used as input for both logistic regression models and neural networks.

    Of the logistic regression models, those predicting self-efficacy from BERTje embeddings performed best.

    The neural networks using BERTje embeddings outperformed the logistic regression models when predicting anxiety, depression, self-efficacy, and treatment outcome.

    Thus it was concluded that it is possible to predict anxiety, depression, self-efficacy, and patient recovery based on language use.
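
    For the second pipeline, BERTje is the Dutch BERT model (GroNLP/bert-base-dutch-cased on the Hugging Face hub). The abstract doesn't give pooling or classifier details, so this is a rough sketch, assuming mean-pooled last-layer embeddings fed into a scikit-learn logistic regression, with invented emails and recovery labels:

```python
# Hypothetical sketch: BERTje embeddings as features for logistic regression.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased")
bertje = AutoModel.from_pretrained("GroNLP/bert-base-dutch-cased")

def embed(texts):
    """Mean-pool BERTje's last hidden layer into one vector per email."""
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bertje(**batch).last_hidden_state   # (batch, tokens, 768)
    mask = batch["attention_mask"].unsqueeze(-1)  # zero out padding tokens
    return ((out * mask).sum(1) / mask.sum(1)).numpy()

# Invented labels: 1 = recovered after treatment, 0 = not recovered
X = embed(["Ik heb deze week weer gesport.", "Alles kost me te veel energie."])
clf = LogisticRegression(max_iter=1000).fit(X, [1, 0])
```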

    -------- (c) 2022 Utrecht University

    "Lastly, a form of gratitude should be extended to the fact that I was diagnosed with Chronic Fatigue Syndrome (CFS) as a teenager. Through having CFS from when I was 15 until I was 17, the subject of this thesis was very personal, which helped my motivation immensely."
     
    Ravn, Andy, adambeyoncelowe and 2 others like this.
  2. Shadrach Loom

    Shadrach Loom Senior Member (Voting Rights)

    Messages:
    1,053
    Location:
    London, UK
    Language use bristles with confounding factors, many of which correlate with social capital, and therefore with ability to self-advocate, and to ride out the financial consequences of ill-health, both of which might well be inversely linked to depression and anxiety in pwME.

    It’s very tricky to see the clinical value of this thesis, but at least data scientists with doctorates can go off and get proper jobs in marketing and defence. It’s only psychologists who have to use their ill-conceived postgraduate work on ME in the real world, to feed their children.
     
    Arnie Pye, Trish, Ravn and 8 others like this.
  3. adambeyoncelowe

    adambeyoncelowe Senior Member (Voting Rights)

    Messages:
    2,736
    It's "good patient"/"bad patient" binarism. They see themselves as a good patient, who got better due to their own positivity, and therefore everyone who stays ill is a bad patient.

    It's an age-old trope.
     
    Woolie, Solstice, Arnie Pye and 16 others like this.
  4. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,659
    Location:
    Canada
    That would be even less reliable than the bad questionnaires they've been using for decades. IMO this is an admission that they understand the questionnaires are BS, even though those questionnaires are what defines the very concept to begin with. Because when you have good instruments, you don't try out other methods that are, somehow, even less reliable.

    In addition to people having vastly different levels of language skills. What incredible nonsense.

    In case anyone is wondering whether the chemical imbalance is still around:
    It's still around. And there's this:
    Which is invalid, as the questionnaires they use ask about symptoms and it's on the basis of those symptoms that "anxiety" is said to occur. Nothing to do with fear, dread or alarm. This is manufacturing false data.

    Also, uh, this:
    Is nonsense. Self-efficacy is not the belief in being able to execute; it's actually being able to execute. What is this nonsense? It's like talking about a self-driving car that requires a driver.

    Oh, by the way, this is data from FITNET:
    Oh, wow, it actually gets so much worse. This is how they define objective data:
    This is obviously not objective data. WTH?

    Also, once again, you cannot use AI to solve unsolved problems. Neural networks are great at brute-forcing solved problems; they still cannot solve unsolved ones. All neural networks work by comparing their output to valid answers that must already exist, a "ground truth" that tells the network how close to reality it is, which is especially bad considering how contentious the very definition of recovery is. This is nonsense. Merely finding associations is not artificial intelligence. We are truly in the peak era of medical pseudoscience.
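
    To illustrate the point with a toy PyTorch setup (all data invented): the network's only notion of "correct" is whatever labels it is handed, so if "recovered" is defined by a contested questionnaire, the model simply learns to echo that definition:

```python
# Toy supervised training loop: the loss is computed against the given
# labels, so the trained network can never be more valid than those labels.
import torch
import torch.nn as nn

features = torch.randn(102, 768)              # e.g. one embedding per patient
labels = torch.randint(0, 2, (102,)).float()  # contested "recovered" labels

net = nn.Sequential(nn.Linear(768, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(net(features).squeeze(1), labels)  # error vs. the labels
    loss.backward()
    opt.step()
```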
     
    SNT Gatchaman, Hutan, Ravn and 6 others like this.
  5. Ravn

    Ravn Senior Member (Voting Rights)

    Messages:
    2,181
    Location:
    Aotearoa New Zealand
    It's a Master's thesis (TL;DR: comments based on the abstract and quoted snippets only)

    From a student who had CFS herself, most probably in the age of the Internet, I would have expected more awareness of the fact that there's a much wider range of views in the patient community.

    And where was the supervisor to steer them towards more balance, at least? Ok, it was probably somebody specialising in AI rather than a medical expert, but still. You need to know a bit about whatever you're unleashing your AI onto.

    Finally, I still don't get why researchers mess about with all manner of unsuitable instruments. If you want to know if someone feels depressed/anxious/self-efficacious - just ask them straight!
     
  6. Midnattsol

    Midnattsol Moderator Staff Member

    Messages:
    3,776
    As someone who has written a degree thesis on a subject not very closely related to what the supervisor(s) actually work on, I don't think I got a single comment on how I approached what I did.
     
  7. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,374
    Location:
    Aotearoa New Zealand
    They are less likely to feel judged, because you are secretly judging them. Chances are, when you tell them and their parents that the young person is depressed, or anxious or hopelessly lacking in self-efficacy, that's when the feeling of being judged will set in. Especially if they don't actually feel that depression, anxiety or a lack of self-efficacy is what is stopping them from recovering.
     
    Woolie, Ravn, SNT Gatchaman and 3 others like this.
  8. Ravn

    Ravn Senior Member (Voting Rights)

    Messages:
    2,181
    Location:
    Aotearoa New Zealand
    There is that. But at least if asked directly whether you're depressed or not you know what question you are answering.

    The alternatives are 1) questionnaires asking about all manner of physical symptoms that are then reinterpreted as psychological, or 2) a reinterpretation of some language you used in some other context. Both are creepily sneaky as well as inherently unreliable; the more indirect a communication, the higher the risk of a misunderstanding, even when undertaken in good faith and with the best of intentions. In 1) you have to know the game and deny your physical symptoms to escape an unwarranted psychological diagnosis, and in 2) you're not even told that whatever you say may be misinterpreted and used against you.

    With a straight question, if you don't have any psychological issues you can just say so. If you do struggle with your mental health you have the choice to say so, explain the problem as you yourself see it, and ask to access treatment. Or you can choose to deny it if you deem the risk of unsuitable treatment, stigma or discrimination too high. That the latter possibility should still occur to anyone in 2022 is an indictment of the way mental health in general has been treated historically and continues to be treated by too many today. A situation which, ironically, is not at all helped by certain factions attempting to redefine seemingly everything and anything as psychosomatic. But that's another discussion.

    As for the current thesis, this business of analysing natural language for hidden signs of significance or meaning isn't new, but it seems to have boomed with AI and the ability to scrape SoMe posts. Expect plenty of stuff in the vein of 'Facebook can tell you're depressed even when you don't know it yourself!' and 'Your SoMe can predict you're about to quit your job before you are aware of your decision!' I've only been following this trend loosely, but often a lot hinges on the interpretation researchers past and present have attached to certain words or utterances (which is what the AI is then trained on), often without adequate context and almost never with any consideration for how non-native speakers with different cultural backgrounds may use language. What could possibly go wrong?

    Mind you, if enough people start writing with the help of ChatGPT that may throw a spanner in the works. Someone will have to develop AI to analyse AI which mimics natural language but hides the natural language of the human behind the AI mimic of natural language :confused::laugh:
     
    Hutan, Woolie, RedFox and 2 others like this.
