ME research seems to be proceeding very slowly, while AI development seems to be proceeding faster than predicted. So, do you think human researchers will find the root cause of ME first, or will AIs? A related topic for discussion is whether ME research funding might be better invested in developing an AI to solve the problem.

Furthermore, since there are still many diseases that remain a mystery or lack effective treatments, the organizations trying to solve those diseases could pool their funds to create a medical research AI (and the necessary lab facilities). I suppose it would be fair to give diseases priority based on funding contributed (MS would be ahead of ME due to its better-funded organizations). However, research into one disease may help with others (lots of shared data), and the AI will improve with each attempt, so successive attempts will go faster.

The AI won't compromise its work in order to get citations or promotions, and it won't try to hide an incorrect decision it made. AIs have their own flaws and limitations, but unlike humans, they are willing to improve from feedback. AIs certainly have the potential to exceed human medical researchers; the question is how fast that will happen.
Neither. Anyway, AIs don’t in any meaningful sense act independently, so the answer for pretty much any future research endeavour would always be “humans, assisted by tools that scale up trivial classification and correlation”. But there’s no reason to think that ME is a conundrum particularly suited to brute force solutions, or that AI will be a panacea for medical mysteries, or that machine learning eliminates biases and wishful interpretations - it often exacerbates them.
The answer is 42 - yes, but what is the question? There are profound problems of interpretability* in AI, and it's very difficult to make sound progress if you can't demonstrate how a solution was arrived at. And that's even without the challenge of heterogeneity in ME/CFS - is it a single pathology, or two, or ten? And is there more than one per individual, as well as across the whole patient population? If we get to the point where there are some very big numbers to deal with, then yes, software rather than wetware will do the sums best, but wetware needs to understand what the software is doing, and it'll be wetware that puts any solution into effect.

* A couple of examples (of which I claim no more than a surface understanding):
Towards falsifiable interpretability research
Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks
My concern about AIs being set to crunch through research papers to generate answers is whether they're capable of assessing the quality of any given paper and its potential for bias. If not, AIs are going to take papers such as the PACE ones at face value and come out with dangerous rubbish.
Current AI will give us answers, but those answers might be nonsense and not pan out. Humans will still be in the mix if an AI comes up with a hypothesis, at least for many years to come. So the answer is either humans or a mix of both, but both possibilities ignore the most important point, which lies in the question itself: THE answer to ME? There will be many breakthroughs in time. Some of those might be AI-assisted. There will be treatments, which currently require human clinical trials. AI might feature in some of the analysis, which can include classification of subgroups etc., and that could be done using AI or statistical or other methods. Over time there might be more or better treatments.

An outside possibility, speculation at this point, is that humans have already come up with a cure and we just don't realize it yet. Longevity research now has ways to reset the epigenetics of cells, and some expect this will cure, treat or prevent many diseases. It might not cure genetic disorders or active pathogens, but it should have an impact on cancer, though not a cure. The claim is, and I have not read the original research, that old blind mice can have their eyesight restored, get old again and lose sight, and have it restored again. I think the same mechanisms occur in humans, but studies are only just now moving into primates. Resetting the biological clock in this research involves changing some aspect of the epigenetics. It is likely to be many years before this can even be tested on ME patients, and it will most likely start with in vitro tests.

A consequence of this technology is that people might live much longer, which might give some of us a chance of living to see a cure if it takes a long time. The downside right now is that the costs are unclear, and many of us might not be able to afford it. On the other hand, many of the chemicals used are natural substances already sold in health food and bodybuilding stores. These include resveratrol, NMN, NR, and alpha-ketoglutarate. There are probably many more. Even the amino acid glycine has potential to at least slow aging.

Would others be interested in starting a thread on longevity research and ME?
AIs can't do this on their own; they require values, and those are described to them. There is no objective definition of a good or bad study. For example, a Cochrane AI doing systematic reviews would be given instructions on how to rate open-label trials with subjective outcomes run by biased researchers who clearly have zero equipoise, and it would rate them very poorly, if at all. Not because it doesn't think much of them, but simply because that is what the guidebook says. Just because humans like those trials doesn't change that the instructions say they're worthless, because worth is relative to each person.

The quality of research is a subjective assessment. It has objective foundations, but those foundations are themselves subjective, based on a preference for getting real answers over manufacturing fake ones. But a legitimate process has to apply the rules fairly and evenly, which is what never happens in EBM. Disasters like the BPS model are entirely carved out of exemptions to those rules. In order to find those studies and trials of acceptable quality, an AI would have to make exceptional changes to its quality assessment: either make exceptions in particular cases, which is not acceptable, or render all other studies and trials worthless by having values so devoid of pertinence that nothing is useful.

AIs won't have jobs or colleagues. They won't need a steady income for their necessities, and won't have a strained relationship with a former adviser whose continued respect they seek. They won't have an ideologically conflicted boss, or have written books and given paid talks about what turns out to be made-up quackery. They will make the assessment the same way they assess everything else, which is the opposite of how everything BPS happens. Such a tool would obviously never accept overlapping entry and "recovered" criteria; those are completely absurd human folly. Even the whole business of comparing apples and oranges and pine nuts and metal rods won't fly, as it makes no logical sense.

The real answer here is that humans will find the answer with the help of AIs. It's a tool, just like computers. They will simply enable better, faster and cheaper work. It's all about asking the right questions, then going about the process of answering them for real, rather than making sure to get the politically and financially desired answer.
Maybe not in the general case, but some studies result in objective outcomes, such as identifying a disease mechanism that is then verified by further studies, or leading to a treatment based on the outcome of the study. An AI could go through thousands of studies and find common elements such as cohort selection, data processing (Did they use method x or method y? Did they remove some outliers?), and various other factors that might not be noticed by humans. Training an AI on studies with such objective outcomes could help it judge ones that don't yet have an objective outcome. A rough sketch of what I mean is below.
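Purely as a hedged illustration (all the feature names, labels and data here are invented for the example, not from any real system), the idea could look like training a standard classifier on design features of studies whose findings were later objectively verified or refuted, then using it to score studies that lack such verification:

```python
# Minimal sketch, assuming hand-coded study-design features and hypothetical
# labels (1 = finding later objectively verified, 0 = refuted).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [blinded, objective_outcome, preregistered, outliers_removed, cohort_size]
X_train = np.array([
    [1, 1, 1, 0, 500],
    [0, 0, 0, 1, 60],
    [1, 1, 0, 0, 250],
    [0, 0, 1, 1, 40],
])
y_train = np.array([1, 0, 1, 0])  # invented toy labels

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score a new study with no objective verification yet
# (e.g. open-label, subjective outcomes, outliers removed).
new_study = np.array([[0, 0, 0, 1, 120]])
print(model.predict_proba(new_study)[0, 1])  # estimated chance the finding holds up
```

In practice the hard part is everything the sketch assumes away: reliably extracting those design features from papers, and assembling honest verified/refuted labels in the first place.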
I feel more & more like Marvin every day. I couldn't answer the poll because there's no option for 'don't know'. One thing that worries me is 'Garbage In, Garbage Out', so AI worries me a bit, similar to what @Hutan said. The other answer is that I don't care; it can come from a sewer rat or an alien frog for all I care, as long as it comes. I must confess I know little to nothing about AI, but that kind of independent, lateral-thinking 'light bulb moment' seems quintessentially human to me. Still, I hope advances in computer science & AI might make it possible to crunch numbers and produce answers much quicker. But then I come back to my worries about GIGO.
Right now, no AI that I have read about understands anything. The current generation seems to just process patterns, and can process huge amounts of data over a very short time in order to find them. There is no way humans can be kept out of this loop. What worries me is the propensity for humans to see the patterns that an AI finds and think they are real without doing the necessary scientific work. It's even more worrying that AI systems might write answers that are persuasive to humans despite the AI's intrinsic lack of understanding. To me it's just a more sophisticated form of Searle's Chinese Room: a better generation of programs faking smart.
Sorry, I'm unclear why you quoted me - were you agreeing or disagreeing? My understanding of GIGO is that if you give a computer inaccurate data to process, then the answers it gives based on that data will also be inaccurate. Which is what would happen when it spots a pattern (for example) based on a huge bunch of BPS studies? Please excuse me if I'm talking nonsense, I know nothing about this field lol.

ETA: I looked up Searle's Chinese Room & gave up reading after the initial skim, as it would cost too much of my cognitive energy budget to concentrate sufficiently to decipher it from scratch from the wiki explanation, so I have no idea what it means lol. Not that it matters that I didn't understand it.
It shows that a language AI does not understand anything; it just pushes symbols around in hopefully smart-like ways.

PS My first degree was in AI, but it's now more than two decades out of date.

PPS AI language-processing software has to use rules or algorithms, typically heuristic (approximate or rule-of-thumb) rules, to make decisions. Even without GI they can produce GO. Other AI systems, such as neural networks, are a little different, but subject to many of the same issues.
I didn't include those options because most people would probably choose them, which wouldn't reveal much about how they feel about humans vs AI. Since the results are pretty much equal, I suppose it means that most are really choosing C and D. It does show that there's no dramatic bias either way. Personally I'd choose C, but I feel that AI is likely to be an important tool, doing a lot of the 'grunt work' of looking for correlations in a massive amount of data, and being capable of managing a "bigger picture" than humans can. Human researchers tend to have a narrow focus, while an AI could consider connections between a mass of brain data and gut data and blood/lymph/glymph network data and cytokine data, etc.; see the toy sketch below.
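As a toy illustration of that grunt work (everything below is invented: random data and made-up feature names, not real measurements), the simplest version is just an exhaustive scan of correlations across pooled datasets, surfacing the strongest pairs for a human to inspect:

```python
# Toy sketch: pool features from several hypothetical modalities and rank
# every feature pair by absolute correlation. The data is random noise here.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200  # hypothetical participants

brain = pd.DataFrame(rng.normal(size=(n, 3)),
                     columns=["brainstem_flow", "cortical_thickness", "glymph_index"])
gut = pd.DataFrame(rng.normal(size=(n, 2)),
                   columns=["bacteroides_abund", "butyrate_level"])
blood = pd.DataFrame(rng.normal(size=(n, 2)),
                     columns=["il6_level", "nk_cell_function"])

combined = pd.concat([brain, gut, blood], axis=1)
corr = combined.corr()

# Take each pair once (upper triangle, diagonal excluded) and sort by |r|.
pairs = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1)).stack()
print(pairs.reindex(pairs.abs().sort_values(ascending=False).index).head(10))
```

Real multi-omics work uses far more careful methods (multiple-testing correction, confounder adjustment), but the appeal is the same: a machine can tirelessly check millions of pairings that no human team would ever get through.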
Humans are also vulnerable to GIGO. Since most psychological studies rest on a questionable basis (they can't even clearly define what they're trying to measure), what does that say about theories based on them?
Your imaginary AI could perhaps consider those connections if all that data was collected and stored for a sufficient number of pwME and controls for a sufficient amount of time to train an algorithm to predict pwME. And if the data chosen happened to be the right sets, and if the diagnosis was clear-cut, and assuming there were no confounding factors. But it’s vanishingly unlikely that this would ever happen at sufficient scale, considering the ethical, governance and activist challenges that face even quite siloed and purpose-bounded NHS data observatories. I’m still sticking with E - Neither.
I’d certainly be interested in following one! A field rife with inflated claims and wild promises, though, presumably.
I'm betting that invisible intergalactic time-travelling lizards from the future, with a 'take one pill' cure for everything, will turn up, give it away for free on the interthingy, with the only price being that people must wait in line wearing a shirt that has what flavour they are printed on it, in lizard, and only take 95% of the planet's water in recompense. I'm hoping that each pill is smaller than half a giraffe (apparently a unit of measurement these days).
Have you seen "Landscape with Invisible Hand"? It's interesting if you like that kind of story, and worth watching if it's free! (which it is on Amazon)

My own view is that it has to be both, because AI is just a tool which has been hyped. Even a spellchecker is a kind of AI. They can't work without being aimed at something and fed with something. Researchers since Kerr have not been willing to grapple with subtypes of ME without big data to back them up. It takes people to collect the data, and it will take carefully designed AIs to find the patterns in it, which have so far eluded us. I think Decode ME is the first such example and will probably need AI to interpret the data. I am not sure that the AI they use will be like the large language models, but LLMs could work in conjunction with it to provide descriptive output.