Theoretical Amnesia

Indigophoton

Senior Member (Voting Rights)
Nice article from a while back on the problems that arise in a field - psychology in this case - that has no theoretical framework:
In the past few months, the Center for Open Science and its associated enterprises have gathered enormous support in the community of psychological scientists. While these developments are happy ones, in my view, they also cast a shadow over the field of psychology: clearly, many people think that the activities of the Center for Open Science, like organizing massive replication work and promoting preregistration, are necessary. That, in turn, implies that something in the current scientific order is seriously broken. I think that, apart from working towards improvements, it is useful to investigate what that something is.

In this post, I want to point towards a factor that I think has received too little attention in the public debate; namely, the near absence of unambiguously formalized scientific theory in psychology.
[With no theoretical basis] your scientific field becomes susceptible to the equivalent of what evolutionary theorists call free riders: people who capitalize on the invested honest work of others by consistently taking the moral shortcut. Free riders can come to rule a scientific field if two conditions are satisfied: (a) fame is bestowed on whoever dares to make the most adventurous claims (rather than the most defensible ones), and (b) it takes longer to falsify a bogus claim than it takes to become famous. If these conditions are satisfied, you can build your scientific career on a fad and get away with it. By the time they find out your work really doesn’t survive detailed scrutiny, you’re sitting warmly by the fire in the library of your National Academy of Sciences.
Remind you of anyone?

http://osc.centerforopenscience.org/2013/11/20/theoretical-amnesia/
 
Good article. It seems to me to be the elephant in the room in a lot of these replicability discussions. A lot of the problems with replication would disappear if we stopped testing hypotheses based on stupid or half-baked ideas (like whether you're more responsive to blue wavelengths when feeling "blue", I mean FFS!). You're just going to get a lot of false positives that way. Whereas if you start with a well thought-out theory, and only test predictions that follow from it, you've already reduced your chances of false positives hugely. Theory is everything.
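To put some rough numbers on that last point: the standard positive predictive value relationship shows how much the prior plausibility of a hypothesis matters. Here's a minimal sketch; the priors (5% for a half-baked hunch, 50% for a theory-derived prediction), the power, and the alpha are all illustrative assumptions of mine, not figures from the article.

```python
# Back-of-the-envelope sketch: probability that a "significant" result is a
# true positive, using PPV = (prior * power) / (prior * power + (1 - prior) * alpha).
# All numbers below are assumptions for illustration only.

def ppv(prior, power=0.8, alpha=0.05):
    """Positive predictive value of a significant finding."""
    true_pos = prior * power          # rate of real effects correctly detected
    false_pos = (1 - prior) * alpha   # rate of null effects wrongly flagged
    return true_pos / (true_pos + false_pos)

print(f"Half-baked hunch      (prior 0.05): PPV = {ppv(0.05):.2f}")  # ~0.46
print(f"Theory-led prediction (prior 0.50): PPV = {ppv(0.5):.2f}")   # ~0.94
```

Under those assumptions, roughly half the "significant" findings from long-shot hunches are false positives, versus only a small fraction of those from theory-constrained predictions.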

But like the first commenter says, this problem of dustbowl empiricism is definitely bigger in some areas of Psychology than others - especially personality/intelligence and social psychology. In some other fields, research is heavily theory-driven. Cognitive Psychology, for example, is all about theory, and about testing competing theories - there are huge theory wars, where people race each other to generate and test predictions of their particular theory, to show it's superior to the others out there. These wars are played by a clear set of rules and they definitely move the field forward.
 
Good article. It seems to me to be the elephant in the room in a lot of these replicability discussions. A lot of the problems with replication would disappear if we stopped testing hypotheses based on stupid or half-baked ideas (like whether you're more responsive to blue wavelengths when feeling "blue", I mean FFS!). You're just going to get a lot of false positives that way. Whereas if you start with a well thought-out theory, and only test predictions that follow from it, you've already reduced your chances of false positives hugely. Theory is everything.

So the real issue is that psychology experiments are cheap?

You'd never get away with that in the rest of research, because you need preliminary data to even think about getting funded. Even the ones where they swear on paper that you don't... you actually do, or the guy who DOES have data is going to get that funding and you aren't.

But the money you'd spend scrambling to get a few data points before grant submission is probably already half the budget of your psych study.
 
You'd never get away with that in the rest of research, because you need preliminary data to even think about getting funded. Even the ones where they swear on paper that you don't... you actually do, or the guy who DOES have data is going to get that funding and you aren't.
The "blues" study I mentioned above actually cost a lot - it was an fMRI study. They would definitely have needed a grant.

I was on a national research funding panel once. The cheapest research areas to fund were economics, mathematics and linguistics. But all three produce some great theory-driven work.
 
Yes, but I think it was time and money wasted, because it simply attempts to link one outcome with a bunch of potential predictors, without applying any conceptual framework beyond simple association.

They could always contact a few hundred or a few THOUSAND of them again and get additional information. ;)
 