One of the issues with using AI/ML here is the quality of the data being fed into the algorithms. For example, looking for connections across multiple abstracts may pull out something useful, but it could easily be perturbed by bad data (e.g. too many small studies which may not have been done...
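To illustrate the small-studies problem, here is a minimal sketch (all numbers hypothetical, not from any real review) of how a pooled estimate can be dragged away from the truth when many small, biased studies get mixed in with a few large, clean ones:

import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.0

# A few large, unbiased trials...
large = [(rng.normal(true_effect, 1 / np.sqrt(n)), n) for n in (500, 600, 800)]
# ...plus many small trials carrying a positive bias (e.g. publication bias).
small = [(rng.normal(true_effect + 0.4, 1 / np.sqrt(20)), 20) for _ in range(40)]

def pooled(studies):
    # Sample-size weighted mean, a crude stand-in for fixed-effect pooling.
    effects, sizes = zip(*studies)
    return np.average(effects, weights=sizes)

print(pooled(large))          # close to the true effect of 0
print(pooled(large + small))  # pulled noticeably towards the bias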
Yet there is no admission of the serious failures with the review, or any apology for misleading people.
This shows a very weak editorial process, and that they will publish things they know to be wrong. In effect, they are admitting their brand is not trustworthy.
I went to the PDF and selected the full version. "Characteristics of the studies" starts at page 44; PACE starts on page 55.
They go through each trial and assess the risks. They don't discuss the issues with subjective outcomes in the PACE risk assessment, and they also give a low risk for "complete...
From the latest review, their assessment of selective reporting in PACE:
So they are still being very kind to PACE on selective reporting, where we know the protocol changes had a huge effect (something that only became visible after a court ordered them to release data). Also, the TSC minutes were ordered to be released...
@Action for M.E. should tag them, but something seems to go wrong in the name-lookup part of the tagging (I don't know why). They are still members, but the account hasn't been active since March 2019.
I think there are several stages to the LP process; the first is just reading a book they provide. If people drop out after looking at that and thinking what a waste of time, then that self-selection becomes part of the measured effectiveness of the treatment.
I think it is quite subject-dependent. I've seen it happen when economists have reviewed papers, but not from computer scientists.
I wondered if it was considered normal in some subjects.
It's a great article on pseudoscience and its dangers. I did think she should also consider some of the work done by members of the medical profession who claim to be experts (not to name names).
There is a really good book on this, "Bad Blood: Secrets and Lies in a Silicon Valley Startup", by the investigative journalist John Carreyrou.
Some of the tests they did were very dodgy, with quite random results, because the way they handled samples was not correct. They were trying to do blood...
I would have thought this was very dodgy. They should have done power calculations for the initial ethics approval to size the trial, so cutting it in half suggests either that they got those wrong, or that they are now running a trial that risks not having enough participants to give a meaningful result.
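For what it's worth, here is roughly what that calculation looks like. The effect size and alpha below are assumed for illustration only, not taken from the trial in question:

# Illustration only: d = 0.5 and alpha = 0.05 are assumed values.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Participants per arm needed for 80% power at a medium effect size:
n_planned = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n_planned))  # ~64 per arm

# Power actually achieved if recruitment is cut in half:
power_halved = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                    nobs1=n_planned / 2)
print(round(power_halved, 2))  # ~0.51, well short of the usual 0.8

Halving the sample size doesn't halve the power, but it drops it well below the conventional threshold, which is exactly the "may risk not having sufficient participants" problem.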
I suspect that was AYME rather than AfME?
AYME was firmly behind Crawley, which I suspect led to its closure, as it was becoming toxic to anyone who knew anything about ME.
I think with the NICE submission we had a thread like this one where we collected comments from members. Then we produced a first draft for NICE on another thread, and people suggested comments and edits which got incorporated. That was the document that got submitted.
I would have thought that just using a browser in private mode would be sufficient? They may try tracking IP addresses, but that is generally not a good thing to do.
The comment does say they registered the protocol with the Lancet rather than preregistered it. But of course the 2011 paper ignored the published protocol, as this was rewritten with their statistical analysis plan.
In a sense it would be difficult, since it would need to be a whole set of broken machines. If it were a single one (or even a few), then I assume that would be treated as missing data.
I guess what could go wrong is the data collection, if, for example, the server collecting the data was having...
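As a toy illustration of that distinction (hypothetical data, just to show the shape of the problem):

import numpy as np
import pandas as pd

# NaN marks readings a device failed to record.
df = pd.DataFrame({
    "device": ["A", "A", "B", "B", "C", "C"],
    "reading": [1.2, np.nan, 1.1, 1.3, np.nan, np.nan],
})

# A few scattered gaps can be handled as ordinary missing data:
print(df["reading"].dropna().mean())

# But if one device returned nothing at all (device C here), dropping
# NaNs silently removes it from the analysis entirely -- the kind of
# systematic loss that is worth checking for per device:
print(df.groupby("device")["reading"].count())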