Nature: 'A fairer way forward for AI (artificial intelligence) in health care', 2019, Nordling

Discussion in 'Other health news and research' started by Andy, Sep 30, 2019.

  1. Andy

    Andy Committee Member

    Messages:
    22,410
    Location:
    Hampshire, UK
    https://www.nature.com/articles/d41586-019-02872-2
     
  2. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,010
    Location:
    Canada
    Today, in people finding out how poverty is bad for your health, for numerous reasons that often compound with one another.

    Although for the US the problem is massively aggravated by the broken system of private insurance, meaning millions of people will avoid health care completely until they are in a dire state.

    It's pretty interesting that it took an AI flagging this for humans to see the obvious, something they should have already known. AI will bring out a lot of interesting blind spots like that in the next few years.
     
  3. BruceInOz

    BruceInOz Senior Member (Voting Rights)

    Messages:
    414
    Location:
    Tasmania
    The thrust of the article seems to be that if the current system is so full of biases based on race and socioeconomic status, it cannot be relied on to train the new AI systems. Using current practice to train AI systems will result in an AI system that entrenches the current biases.
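
    A minimal sketch of that entrenchment, using synthetic data and a hypothetical care threshold (nothing here is from the article): a model trained on historical care decisions that under-served one group goes on to recommend care at the same biased rate, even at identical clinical need.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    need = rng.normal(size=n)                    # true clinical need
    group = rng.integers(0, 2, size=n)           # 0 = advantaged, 1 = disadvantaged

    # Hypothetical historical decision: care was given based on need, but
    # group 1 faced a higher threshold -- the bias baked into the labels.
    received_care = (need > np.where(group == 1, 1.0, 0.0)).astype(int)

    # Train on the biased historical decisions.
    X = np.column_stack([need, group])
    model = LogisticRegression().fit(X, received_care)

    # Probe the model at an identical need level for both groups: it
    # reproduces the historical disparity rather than correcting it.
    probe = np.full(1000, 0.5)
    for g in (0, 1):
        Xg = np.column_stack([probe, np.full(1000, g)])
        print(f"group {g}: predicted care rate at equal need = "
              f"{model.predict(Xg).mean():.2f}")

    The sketch prints a near-certain care recommendation for group 0 and a near-zero one for group 1 at the same level of need, which is the "garbage in, bias out" point made above.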
     
  4. Andy

    Andy Committee Member

    Messages:
    22,410
    Location:
    Hampshire, UK
    Wasn't there a chatbot that was trained on public online interactions between people and was found to be racist?
     
  5. BruceInOz

    BruceInOz Senior Member (Voting Rights)

    Messages:
    414
    Location:
    Tasmania
    I guess you mean this:
    https://en.m.wikipedia.org/wiki/Tay_(bot)
     
    Andy, Annamaria and Wonko like this.
  6. Andy

    Andy Committee Member

    Messages:
    22,410
    Location:
    Hampshire, UK
    BruceInOz likes this.
  7. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,010
    Location:
    Canada
    Depends. There has been a trend in the past few years showing that machine learning systems do best when given the fewest assumptions, ideally none at all. So at least there is enormous pressure from research showing much more useful results out of systems that deliberately allow the most freedom and the fewest constraints.

    In some cases there have been experiments with systems learning with almost no supervision at all and coming up with far better solutions, many of which were shockingly creative.

    This would be a significant problem if the trend were the other way around. There is bias in all data, so I don't think medical data would be significantly different. The main problem would be missing data; we ourselves, for example, are largely missing or incomplete from the record. But biased AI systems do so poorly that I think those with fewer assumptions, or none, will precisely highlight the discrepancy we have been screaming about for years.

    Specifically, the kind of biased research that supports the psychosocial model of ME will no doubt be flagged as below garbage-tier and of no value whatsoever besides being perfect examples of how not to do research.
     
    Last edited: Oct 1, 2019
    Annamaria, BruceInOz and Wonko like this.
  8. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,010
    Location:
    Canada
    Unmoderated data will always be lousy; it doesn't matter whether AIs are involved or not. People actually went out of their way to influence the chatbot.

    The quality of medical data will never be perfect but should generally be relatively free of Nazi sentiments, various controversial opinions and hate speech. Personal files no doubt have some less-than-stellar comments, but those are unlikely to be found anywhere but on a sheet of physical paper, never submitted to centralized systems.
     
    Annamaria and BruceInOz like this.