Blog: Murky matters involving conflicts of interest

Indigophoton

Senior Member (Voting Rights)
Considers COIs as they relate to peer review:
I’ve noticed that junior scientists tend to be really picky about conflicts of interest, whereas senior scientists don’t tend to be sticklers.

I don’t think this is necessarily nefarious. I think senior scientists have come to terms with having some kind of history with a lot of people in their field. For example, let’s say I’m looking for reviewers of a manuscript. A doctoral candidate or a postdoc might express concern that they coauthored a paper with one of the middle authors of the manuscript a few years back. A more senior researcher probably wouldn’t even blink at that relationship as a potential conflict of interest. It’s a small world.

That said, some people (independent of career stage) are less inclined to volunteer conflicts of interest when reviewing the work of their peers. There clearly are some people out there who have few qualms about reviewing work of people who they have close relationships with, and also recommending friends to review their own work. Other people take the guidelines for conflict of interest a lot more seriously. Sometimes, the person in charge of a review process (for a manuscript or a grant or an award or whatever) won’t be positioned to know whether these conflicts exist and sometimes it’s hard to tell.

https://smallpondscience.com/2018/05/14/murky-matters-involving-conflicts-of-interest/amp
 
I know so little of this area, but maybe a solution would be to have statisticians as first-line reviewers to review the technical aspects of a paper; the follow-up peer reviewer would then be required to bear in mind that first review of the strengths, weaknesses, and possible flaws of the methods employed.
 
I know so little of this area, but maybe a solution would be to have statisticians as first-line reviewers to review the technical aspects of a paper; the follow-up peer reviewer would then be required to bear in mind that first review of the strengths, weaknesses, and possible flaws of the methods employed.
I'd say yes, if stats were the pivotal issue that distinguished poor from good research. But I think it almost never is.

Recently, I've been reviewing research on depression, and have come across truckloads of weak studies. But not one was weak because of the stats. They were all weak for other reasons - because the researchers asked the wrong question, designed the study badly, ignored confounding variables or alternative interpretations, or emphasised those findings that fit their preconceptions, while playing down those that didn't.

One had a ridiculously small sample size, so that's something a statistician would pick up (but the idea behind the study was ridiculously stupid, so the sample size issue was kind of a moot point!).
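To give a flavour of how a statistician would quantify "ridiculously small", here's a quick power-calculation sketch (in Python, using statsmodels; the effect size and sample numbers are hypothetical, purely for illustration, not taken from any study discussed here):

```python
# Hypothetical power-analysis sketch: how big a two-group study needs to be
# to reliably detect a medium-sized effect (Cohen's d = 0.5), and how little
# power a tiny study actually has. All numbers are illustrative only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Participants needed per group for 80% power at alpha = 0.05
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Needed per group: {n_per_group:.0f}")  # roughly 64

# Actual power of a study with only 10 participants per group
power = analysis.solve_power(effect_size=0.5, nobs1=10, alpha=0.05)
print(f"Power with n=10 per group: {power:.2f}")  # roughly 0.18
```

In other words, a study with ten participants per group would miss a genuine medium-sized effect more than four times out of five.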

It's pretty hard to spot a lot of these issues unless you're familiar with the pitfalls in that particular subject area. For example, I doubt that statisticians reading the PACE trial would notice the problem with the reliance on self-report measures - that would take someone who knew about the psychological research specifically, and the issues surrounding reliability of self-report measures. And that is the biggest flaw in the whole trial, imo.

Of course, just being a specialist in the area doesn't mean you're any good at these things either. All I'm saying is that statisticians aren't the answer.
 
Yes, I see your point. So the problem needs a fix on a number of fronts, then?
Yes, I think the solution is probably not at the peer review/publication end of things (in any case, that wouldn't work retrospectively). I think it has to come from increasing readers' general scepticism and awareness of potential biases.

And teaching anyone who reads or in any way uses other people's findings never to take the researchers' conclusions at face value. Always look at what was actually done and found.
 
So we're back to: wouldn't it be great if critical thinking were taught in grade school?

Some people manage to come by it naturally; they are sceptics. Others maybe need to have their optimism and enthusiasm tempered.
 
Is that information available separately from what was said to have been done and found?
Valid point.

But then, if I were going to bother to lie, I wouldn't be producing shit articles like those ones. My articles would all have large Ns and low dropout rates, and the results would look amazing, with all hypotheses confirmed at p < .001. The fact that so many studies are shit tells us that people, on the whole, are not faking their experiments. I suspect that for every study that contains outright deception there are probably 1,000 with no lies that are just plain shit.
 
I am not suggesting that it necessarily involves deliberate fraud. I just believe that people often did not do what they thought they did or what they intended to do. That's what it is to be human.
 
I am not suggesting that it necessarily involves deliberate fraud. I just believe that people often did not do what they thought they did or what they intended to do. That's what it is to be human.
Yeah, researchers do push the boundaries of what's acceptable, like not reporting studies or manipulations that didn't turn out as hoped. That's not actually lying, but it's still being economical with the truth.
 
I think it's not always consciously done.

Somebody might take certain actions thinking they understand the underlying mechanisms, and that therefore the end result is x.

But actually, their understanding is insufficient: not all the mechanisms they thought were implicated actually are, and there may be other factors they didn't allow for, or were unaware of. Their end result might be a more complex equation involving x.

I saw this kind of thing quite often in my career. I'd get a call from someone who had taken certain actions to resolve an issue, but didn't really understand the technical implications. 75% of the time they might get away with it, but in those other situations... it would hit the fan.
 