Millions of black people affected by racial bias in health-care algorithms

Discussion in 'Other health news and research' started by Andy, Oct 25, 2019.

  1. Andy

    Andy Committee Member

    Messages:
    22,410
    Location:
    Hampshire, UK
    https://www.nature.com/articles/d41586-019-03228-6
     
    Joh, MSEsperanza, boolybooly and 17 others like this.
  2. Peter Trewhitt

    Peter Trewhitt Senior Member (Voting Rights)

    Messages:
    3,862
    I assume such algorithms will also disadvantage women, given the historic bias towards describing medical conditions from a male perspective, which has resulted in a number of conditions being more often under-diagnosed and/or misdiagnosed in women. Any algorithms based on such descriptions would presumably also disadvantage women.

    Most of us would also assume that a number of conditions that predominantly affect women are often misdiagnosed or inappropriately treated. It would be interesting to see whether such algorithms also disadvantaged women in this situation, or whether they were actually better at being gender neutral.
     
    rvallee, Mithriel, Hutan and 4 others like this.
  3. adambeyoncelowe

    adambeyoncelowe Senior Member (Voting Rights)

    Messages:
    2,736
    Looks like it was to do with cost: https://www.google.com/amp/s/www.ws...racial-bias-in-hospital-algorithm-11571941096
     
    Last edited: Oct 27, 2019
  4. adambeyoncelowe

    adambeyoncelowe Senior Member (Voting Rights)

    Messages:
    2,736
  5. Wilhelmina Jenkins

    Wilhelmina Jenkins Senior Member (Voting Rights)

    Messages:
    220
    Location:
    Atlanta, GA, USA
    The article states that Black patients' lower use of the health care system leads to them being evaluated as less sick, rather than reflecting their actual, higher level of illness. There is a greater historical reluctance among Black people to turn to the health care system because of past and current poor treatment.

    The algorithm assumes that those who access the system less are not as sick. This results in an inaccurate evaluation of their health care needs. This is also one of a number of reasons that Black people are disproportionately under-diagnosed with ME.
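    As a minimal illustration of that mechanism (a synthetic sketch, not the study's actual model), consider a risk score trained to predict recorded health-care use or spending: a group that uses the system less at the same level of illness gets systematically lower scores. All numbers and the group split below are made up.

    # Synthetic sketch only: cost as a proxy label under-ranks a group that
    # uses the health-care system less at the same level of illness.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    illness = rng.normal(size=n)            # true (unobserved) level of illness
    group_b = rng.random(n) < 0.5           # hypothetical group that uses care less

    # Same illness, but lower recorded cost for group B.
    cost = illness + rng.normal(scale=0.5, size=n) - 0.8 * group_b

    # "Risk score" = predicted cost; select the top 10% for extra care management.
    selected = cost >= np.quantile(cost, 0.90)

    print("share of group B among the most ill:  ",
          group_b[illness >= np.quantile(illness, 0.90)].mean())
    print("share of group B among those selected:", group_b[selected].mean())

    The selected group contains far fewer group B patients than the equally ill population does, even though illness itself is identical across the two groups.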
     
    Last edited by a moderator: Oct 26, 2019
    Woolie, Trish, Joh and 12 others like this.
  6. adambeyoncelowe

    adambeyoncelowe Senior Member (Voting Rights)

    Messages:
    2,736
    “Indifference to social reality is, perhaps, more dangerous than outright bigotry.”

    This, I think, is the crux of it.

    Sounds familiar in more ways than one, actually...
     
  7. Wilhelmina Jenkins

    Wilhelmina Jenkins Senior Member (Voting Rights)

    Messages:
    220
    Location:
    Atlanta, GA, USA
    But in a case like this, where one group in particular is disadvantaged by an algorithm, it seems reasonable to look at that particular case rather than jumping to generalizations. It may well be that this algorithm disadvantages other groups, but we don’t know that. This study found millions of Black people being affected by this algorithm. It’s worthwhile to look at the repercussions of this particular case.
     
    MSEsperanza, Amw66, Skycloud and 6 others like this.
  8. WillowJ

    WillowJ Senior Member (Voting Rights)

    Messages:
    676
    There are many kinds of bias. Nobody disputes that there are other kinds of bias. Someone can even have several types of bias applied to them at the same time.
     
    Last edited by a moderator: Oct 31, 2019
  9. Trish

    Trish Moderator Staff Member

    Messages:
    53,673
    Location:
    UK
    This is the study on which the article is based:
    Dissecting racial bias in an algorithm used to manage the health of populations

     
  10. adambeyoncelowe

    adambeyoncelowe Senior Member (Voting Rights)

    Messages:
    2,736
    This post replies to a now deleted post.

    A limited viewpoint is undoubtedly the problem in bias like this, and I think a lot of bias and prejudice generally is really just down to a limited viewpoint rather than active hate or dislike.

    Raving fascists aside, most everyday bias is just because people don't know or 'get' what another person's life is like.

    Take ME as an example. A big issue with clinical prejudice is clinicians not knowing (m)any patients with the disease.

    Once clinicians know someone with the disease personally, they tend to become more open-minded and curious, and ultimately more accepting.

    If they don't have that knowledge, they judge it through their own lens: 'If I felt that way, I'd just push through.' 'Everyone's tired. Why are these patients so soft?' 'If they're really this ill, why can't I see it or measure it?' Etc, etc.

    It never occurs to them that they're only seeing a small part of the picture because they've never been made to step back and see the whole thing.
     
    Last edited: Oct 31, 2019
  11. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,515
    Location:
    UK
    The point is that this is often not explicit coding; rather, it comes from machine learning, where the bias is a result of the choice of training samples and can often be implicit.

    With bias in ML algorithms, people often pull out two concerns:

    1) The data set selected just doesn't have samples from particular minority groups. So there are a number of examples of things like facial recognition systems or even automated taps that work less well with black people because the developers didn't include data from black people in the training set.

    2) Inferred bias. This is where correlations in the data set, which can reflect societal biases, are learned into the algorithm, so that correlated features end up being used to predict. An example given here is crime prediction, where race becomes a 'predicting factor': the outcome may really relate to being disadvantaged, but the easy feature for a classifier to pick up on is race. (A minimal sketch of this follows below.)
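    A minimal sketch of inferred bias, with entirely made-up data: the sensitive attribute is never given to the model, but a correlated proxy feature (a hypothetical 'neighbourhood' code) lets it reproduce a bias that is present in the historical labels.

    # Synthetic sketch of inferred bias: the group is not a feature, but a
    # correlated proxy becomes a 'predicting factor'.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 20_000
    group = rng.random(n) < 0.3                     # sensitive attribute, NOT a feature
    proxy = (rng.random(n) < np.where(group, 0.9, 0.1)).astype(float)  # 'neighbourhood'
    merit = rng.normal(size=n)                      # the legitimate signal

    # Historical labels are biased against the group, independently of merit.
    label = (merit - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0

    X = np.column_stack([merit, proxy])             # the group column is never included
    model = LogisticRegression().fit(X, label)
    print("coefficients [merit, proxy]:", model.coef_[0])
    # The proxy gets a large negative coefficient even though the model
    # never saw the group itself.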

    There are many more issues with the use of ML and bias, and there is an active research field around the ethics of AI, including how things like explanation become important tools that should be used prior to deploying a model.

    A classic example was a classifier trained to recognize cats and dogs, where all the cat pictures included grass and all the dog pictures didn't. What it actually learned was to recognize small furry animals on grass versus other surfaces. This was due to bad training (and test) data, and it reflects the dangers of ML and the need to be very careful about inbuilt biases.
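    A toy version of that story, with synthetic features standing in for images: when background and label are perfectly confounded in training, the classifier learns the background and collapses as soon as the correlation is broken at test time. The features and numbers are invented for illustration.

    # Synthetic sketch: a spurious 'grass' feature dominates a weak 'shape' signal.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(2)

    def make_data(n, grass_matches_cat):
        is_cat = rng.random(n) < 0.5
        shape = is_cat + rng.normal(scale=2.0, size=n)    # weak 'shape' signal
        grass = is_cat if grass_matches_cat else ~is_cat  # strong 'background' signal
        return np.column_stack([shape, grass.astype(float)]), is_cat

    X_train, y_train = make_data(5_000, grass_matches_cat=True)
    X_test, y_test = make_data(5_000, grass_matches_cat=False)  # correlation flipped

    clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
    print("train accuracy:", clf.score(X_train, y_train))  # near perfect
    print("test accuracy: ", clf.score(X_test, y_test))    # far below chance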

    Maybe the issue is that ML really isn't very good: it can pick up on the wrong features and will only generalize from the examples it has been given. Image recognition systems will, for example, often pick up on texture rather than shape, so if you replace the fur in a cat picture with a texture like rhino skin, an ML algorithm will often see the cat as a rhino even though the shape is very different.

    That is without going into the attack space, where someone may be trying to force a bad decision. My new avatar is an adversarial patch: if you put a small image of it next to an object and get an ML system (or one particular algorithm) to classify the object, it has a high chance of classifying it as an otter.
     
    Last edited by a moderator: Oct 31, 2019
  12. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,515
    Location:
    UK

    The issue from an algorithmic perspective (or really a machine-learning one) is that if you train on data which include these everyday biases, then the algorithm picks up on them. For example, if an algorithm that sifts for good CVs is trained on what counted as a good CV in previous job selections, it will pick up all the biases of those past selections (i.e. it builds in sex, social class and other biases).
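    A rough sketch of that, and of the kind of selection-rate check that could be run before such a screener is deployed. The CV features (a 'gap years' proxy), the group split and all the numbers here are invented for illustration.

    # Synthetic sketch: a screener trained on biased past hiring decisions
    # reproduces the bias via a proxy feature; a simple audit exposes the gap.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)

    def applicants(n):
        group = rng.random(n) < 0.5
        skill = rng.normal(size=n)
        gap_years = rng.poisson(np.where(group, 2.0, 0.5))  # proxy correlated with group
        return group, np.column_stack([skill, gap_years])

    group, X = applicants(30_000)
    # Past decisions penalised the group directly (bias baked into the labels).
    hired = (X[:, 0] - 0.8 * group + rng.normal(scale=0.5, size=len(group))) > 0.5

    screener = LogisticRegression().fit(X, hired)

    new_group, new_X = applicants(10_000)
    shortlisted = screener.predict(new_X)
    for g in (False, True):
        print(f"selection rate, group={g}: {shortlisted[new_group == g].mean():.2f}")
    # A large gap between the two rates is the red flag to investigate
    # before deployment.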


    A good short blog here:
    https://www.clsa.com/idea/ethical-ai/
     
  13. adambeyoncelowe

    adambeyoncelowe Senior Member (Voting Rights)

    Messages:
    2,736
    Absolutely.
     
    JohnTheJack and Trish like this.
  14. WillowJ

    WillowJ Senior Member (Voting Rights)

    Messages:
    676
    That's probably true, too, but it's also true that there's over-policing of Black people. This actually may begin in school.
    As you said, the data points aren't unbiased.
    http://www.justicepolicy.org/news/8775m
     
  15. WillowJ

    WillowJ Senior Member (Voting Rights)

    Messages:
    676
    I'll also draw a parallel to M.E.

    It's kind of like when we want to discuss M.E. and the C.D.C., and the response is like, Oh, yeah, great topic: "chronic unwellness." And we're like, Um, no, that's not... (sigh)... you don't get it. :headdesk:

    And then we start to worry that M.E. will get erased.

    Of course it's true that M.E. and chronic illness generally face some similar issues, but it's also true that M.E. has some unique issues. And sometimes we need a space to talk just about M.E. and not every illness in the world. (Definitely there's a time and place to talk about how it's all connected, though.)
     
    Last edited by a moderator: Nov 1, 2019
  16. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,515
    Location:
    UK
    It's an example I have heard others give. I've not gone into detail; the point for me is about implicit biases in data sets, where an algorithm extracts information that it uses to predict, demonstrating a bias that reflects a correlation in the data (perhaps due to societal bias) but one that shouldn't be predictive. @Wilhelmina Jenkins's point about black people being less likely to turn to the health system, and therefore being treated as not as sick, is another really good example.


    It is useful to look at it from an ME perspective, but it is also useful to understand the general points being made in an area like this (bias in AI/ML algorithms), as it is a very active research area. Then it becomes interesting to put the general points back into the ME world.

    For example, I worry that if ML algorithms were applied to ME diagnosis, the data may reflect something about the doctor that someone visits (and hence the diagnosis: ME, CFS, MUS, BDS, ...) rather than the symptoms and the accuracy of the diagnosis. This in turn could pull out strange correlations as predictors, such as wealth (to see private doctors) or where someone lives, rather than picking up on actual symptoms.

    Another concern I have read about is that if algorithms reflect current practices (say with automated diagnosis and treatment recommendations), it could become very hard to update medical knowledge, and treatment strategies could become very static. So, for example, an AI system could learn that on diagnosis doctors recommend CBT/GET for ME patients. If this process becomes automated, it becomes entrenched and self-reinforcing, and very hard to change when new knowledge should influence practice.
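    A deliberately crude sketch of that feedback loop, with invented numbers: a 'model' that simply recommends whatever is most common in its training records, retrained each year mostly on records generated by its own recommendations, never moves away from the initial recommendation even as other evidence accumulates.

    # Toy simulation of an automated recommendation becoming self-reinforcing.
    from collections import Counter

    records = ["CBT/GET"] * 95 + ["pacing"] * 5   # made-up historical case mix

    def retrain(records):
        # toy 'model': recommend whatever is most common in the training records
        return Counter(records).most_common(1)[0][0]

    for year in range(2019, 2025):
        recommendation = retrain(records)
        # most new records come from the automated recommendation; only a small
        # fraction reflect clinicians following newer evidence
        records += [recommendation] * 90 + ["pacing"] * 10
        print(year, recommendation, Counter(records).most_common())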
     
    Last edited by a moderator: Nov 3, 2019
  17. adambeyoncelowe

    adambeyoncelowe Senior Member (Voting Rights)

    Messages:
    2,736
    Good points. Thank you.

    I agree that this would be very risky for us.
     
