https://twitter.com/user/status/1477982601692524545 “ In a 2018 study, for example, one of us (Professor Dunning) and the psychologist Carmen Sanchez asked people to try their hand at diagnosing certain diseases. (All the diseases in question were fictitious, so no one had any experience diagnosing them.) The participants attempted to determine whether hypothetical patients were healthy or sick, using symptom information that was helpful but imperfect, and they got feedback after every case about whether they were right or wrong. Given the limited nature of the symptom information that was provided, the participants’ judgments ought to have been made with some uncertainty. How did these would-be doctors fare? At the start, they were appropriately cautious, offering diagnoses without much confidence in their judgments. But after only a handful of correct diagnoses, their confidence shot up drastically — far beyond what their actual rates of accuracy justified. Only later, as they proceeded to make more mistakes, did their confidence level off to a degree more in line with their proficiency.”
They seem to be trying to show that doctors are the only ones with true expertise, and that this is how things should be, but it is not how things are. Too often they become too sure of themselves. A friend went to the doctor's because she suspected she had coeliac disease but was told categorically that only children get it. For myself, I was sent to see a cardiologist about POTS. He decided my test was negative because my heart rate had not gone up by 30 beats while standing at the GP's. That was for half a minute, when my heart rate was already 140 from sitting upright in the waiting room. He knew nothing, and now it is written in my notes that I do not have POTS, so I am on my own.
I mostly interpret it as being about the importance of feedback; I'm not sure it says anything special about physicians. Without useful feedback, they do exactly the same. Once they get accurate feedback, even amateurs can adjust their thinking, while professionals sometimes stay anchored regardless. Not surprisingly, getting incorrect feedback (being praised for failure) is not very effective.
https://doi.apa.org/doiLanding?doi=10.1037/pspa0000102 http://people.uncw.edu/hakanr/documents/overconfidence2017.pdf None of this is terribly convincing, as there were no real-world consequences for the participants in their study. The "medical diagnosis" task was just meaningless Amazon Mechanical Turk trash research. I wonder if there is a similar mechanism behind the confidence of psychological researchers?
"Do your own research" is at its worst when you are coming up with new ideas. They are usually "suck it at your own risk" to see if it's a lemon or a nectarine. (Substitute fruit of your choice.) Sometimes you win, sometimes you are left with a sour taste in your mouth. "Do your own research" is at its best when it's about finding evidence that a specific treatment or diagnosis is bogus. It does not usually tell you where to go from there. This is also about hard science in reputable journals, not your favourite YouTuber. So for ME and graded exercise you can find a meta-analysis showing it does not work; multiple government investigations in Belgium that found no benefits; the new NICE guidelines, which are much more reserved and do not recommend GET; evidence that its flagship study, the PACE trial, is somewhere between inadequate and scientific misconduct; and repeat CPET studies showing that even moderate exercise can worsen health. That is before you even get to patient surveys or start reading forums.
We all have our theories of what is happening in ME and what helps and what doesn't, but as Alex says, it has to be checked against evidence. This sort of forum is often seen as a little bubble of people reinforcing each other's prejudices, but I like to feel that we keep checking against evidence. I look at the BPS side and their research just does not stand up to scrutiny, and then they defend something like the Lightning Process, and I know we are on the side of science and accuracy.
I'd say: duh. The lack of reliable feedback makes learning dysfunctional; in fact, they often have invalid feedback, sometimes self-created. When all answers are good enough, none are.