Kitty
Senior Member (Voting Rights)
Presumably some sort of testing process has been done but it sounds as if when it has that things have turned out inconsistent.
I dare say, if you got really bored, you could write an entertaining spoof trial.
Does the application of a large moving fish to psychologists' noggins successfully treat the desire to use CBT/GET for anything, and does the mortality of the fish have any effect on outcomes?
This would be research of comedic interest, at least to me.
I thought the same thing; the Cochrane reviews on the MMR vaccine might be ones to look at (I tried, but my brain's just not functioning right now). The 2012 one didn't use GRADE, but the most recent one (2020) did. This is the link to the history. It would be interesting to have a comparison of how evidence is rated with and without GRADE.
A study in effing and blinding?
The bottom line is they are more about saving money by not employing enough of the right staff to make the appropriate decisions.
A huge amount of work already done by Cochrane was repeated involving sorting results of studies into subgroups and all sorts of things relating to GRADE that were completely unneeded.
Should NICE take Cochrane's work at face value?
I am certainly not talking about taking over Cochrane's conclusions. I am simply referring to all the searching and sorting into subgroups that must have gone on at both, prior to any evaluation. Here at S4ME, as a community, we already knew enough about the scope of the data before NICE began. Why trawl through it all again, and, more importantly, why sort it into all these mindless subgroups? We knew from looking at the abstracts that the studies were incapable of providing usable information on efficacy.
If these are the standard procedures NICE use then it's important they do that in our case.
All evidence I have seen so far suggests GRADE is potentially useful
All it does is provide a few bumpers to decisions
But my point is that much of these 'standard procedures' are simply the nonsensical antics of GRADE grading - pages and pages of them - which in the end is a bad idea, because we do not want grading; we just want to know whether the evidence justifies a recommendation. In reality the result is decided by the committee readjusting what the NICE staff have presented to them.
From a different field, my experience is that both "techies" and non "techie" should be involved in the one process rather than independently and the decision of both teams weighed up at the end.
It would also be interesting to see how much variability of gradings there can be between different teams working on the same data. It would be interesting to have a comparison of how evidence is rated with and without GRADE. I suspect that the approach with GRADE will result in evidence being rated higher quality than the approach without GRADE.
I can’t say I am well read enough on the detail of GRADE, but I would say that whatever system is used to put a ‘value’ on research should first determine what the purpose of the ‘grade’ is.
PACE is a bad trial in terms of flawed methodology, so you could discount it completely. Or can you use it as proof that, even flawed, it shows that CBT and GET don’t work (on the principle that a negative result is as useful as a positive one)? So I guess it all hangs on what your objective is: do a thorough search of all known research and use it to establish what facts exist?
In this case it’s a bit moot, since all the evidence we have says that we don’t know very much, and there isn’t much of anything beyond the finding that what little we have tried so far doesn’t work.
One thing I used to do when doing a literature search ahead of pitching for a research grant (food not medical) was to initially group past research in terms of quality/strength just so I could weigh things up. This was good because you could quickly filter out the wheat from the chaff and spot ‘career publishing’ by the same authors and genuine replication etc. but also negative results that showed what ideas had been disproved.
I can see that grouping evidence might be useful initially to establish a base and even to demonstrate at a high level what you are dealing with, but that’s probably where it ends.
The next bit (insight) should be based on skill, common sense and consensus, i.e. free thought, not some second-rate algorithm that assumes that people are incapable of learning a skill.
Looking at the BMJ 'What is GRADE' and the opening para of 'How does it work?' the following sentence is interesting:
An overall GRADE quality rating can be applied to a body of evidence across outcomes, usually by taking the lowest quality of evidence from all of the outcomes that are critical to decision making. (my bolding)
To me, the confusions involved in what GRADE is trying to do are apparent straight away. It is not clear whether the idea is to decide whether or not there is an effect or to decide what size it is, apparently assuming that there is one. Certainty and quality are also seen as interchangeable. The whole thing looks like a fail on a probability exam paper.
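The 'lowest across critical outcomes' rule quoted above is mechanical enough to write down. A minimal sketch in Python, assuming GRADE's four certainty levels; the outcome names and ratings here are invented for illustration, not taken from any real review:

```python
# GRADE's four certainty levels, ordered worst to best so that
# min() with an index key picks the lowest one.
LEVELS = ["very low", "low", "moderate", "high"]

def overall_rating(outcome_ratings):
    """Overall certainty = lowest rating among the critical outcomes."""
    return min(outcome_ratings, key=LEVELS.index)

# Hypothetical per-outcome ratings for one body of evidence.
critical_outcomes = {
    "fatigue": "low",
    "physical function": "moderate",
    "serious harms": "very low",
}

print(overall_rating(critical_outcomes.values()))  # -> very low
```

Note that the rule says nothing about whether an effect exists, only how 'certain' the per-outcome ratings are, which is exactly the conflation of certainty and quality complained about above.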
I see now what 'transparent' is supposed to mean - to have the reasoning explicit. But GRADE does not do this. It just requires that you say you downgraded one pip for bias and one pip for indirectness or whatever. Does it require you to say what your reasons are? I think it would be better simply to have a rule at NICE and Cochrane that reasons for evaluations must be given in full.
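For what it's worth, the pip bookkeeping itself is trivial to express, which rather makes the point: the output is a label, with nowhere to record the reasons. A sketch, assuming GRADE's usual starting points (high for randomised trials, low for observational studies) and one-level downgrades per domain of concern:

```python
LEVELS = ["very low", "low", "moderate", "high"]

def grade(randomised, downgrades):
    """Start at 'high' for randomised trials, 'low' for observational
    studies, then drop one level per serious concern (risk of bias,
    inconsistency, indirectness, imprecision, publication bias).
    The result is just a label: nothing in it records *why* each
    level was dropped."""
    start = LEVELS.index("high") if randomised else LEVELS.index("low")
    return LEVELS[max(0, start - sum(downgrades.values()))]

# Hypothetical assessment: an unblinded trial with subjective outcomes,
# downgraded once for risk of bias and once for indirectness.
print(grade(True, {"risk of bias": 1, "indirectness": 1}))  # -> low
```

A rule requiring the full reasoning to be written out, as suggested above, would carry strictly more information than these counts.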