it's literally not a science, when do they ever test anything?
⅓ of economics results aren't replicable: https://fantasticanachronism.com/2020/09/11/whats-wrong-with-social-science-and-how-to-fix-it/
To be fair the same is true for a lot of psychology studies too.
It's in the same link: psychology is lower, about 55% of studies are replicable (depending on sub-discipline)
Doesn't that make it a coin flip, meaning psychological studies are essentially meaningless? Or does replicable imply that multiple studies have replicated it?
To be fair, a good psychological study will be difficult to replicate because it will need a very large sample size, making replication expensive and cumbersome. I remember reading parts of an interesting study on the differences in manifestations of schizophrenia between patients with Western Christian beliefs and patients with traditional Chinese religious beliefs. The Western patients were more likely to experience voices/hallucinations as malicious "demons" out to hurt them, whereas patients with traditional Chinese religious beliefs were more likely to experience them as friendly advice from ancestors out to help them. I have no idea how well researched the study actually was, but even if it had a solid data sample, replicating it would be very, very difficult, because you would need to find and interview a large sample of schizophrenic patients with traditional Chinese religious beliefs.
Economics is also a difficult field to conduct experiments in, and it's pretty much impossible to control for external factors. Still, if they want to be a "real science" they should treat it like one and conclude that this "Phillips Curve," whatever it is, is obviously not true. That could be a result! A scientific result! Proven with data and everything! A physicist who expected the curve on the left and got the... curve (?) on the right would probably be very excited, because that means you just proved a theory wrong.
That example is interesting, because while I don't know if that specific study has been replicated, the same basic dynamic has been studied with other religious and cultural groups.
So I guess you could say it's corroborated, but not replicated?
I'm assuming the poster means replicable as in the results are replicable. Also keep in mind that Psych is a very young science, shit is still muddy and being learned constantly. Gotta throw shit at the wall and see what sticks
I would think if 55% of your studies can be replicated with results having p<0.05 significance, that's better than a coin flip.
p<0.05 as a threshold for significance is actually really bad science. A professor of mine did a paper on it; it's a big enough issue to have its own Wikipedia page.
TL;DR is that the 5% significance cutoff is based on an arbitrary suggestion from a 1925 paper, and the scale of modern datasets means that false positives are super common.
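To put a number on that: by design, p < 0.05 fires about 5% of the time even when there is no effect at all, so if a field runs thousands of tests, false positives pile up. Here's a toy simulation (my own sketch, not from the linked paper): run a two-sided z-test on batches of fair coin flips, where the null hypothesis is true by construction, and count how often we still "detect" an effect.

```python
import math
import random

random.seed(42)

def p_value_fair_coin(heads: int, n: int) -> float:
    """Two-sided p-value for H0: coin is fair, via the normal approximation."""
    z = abs(heads - n * 0.5) / math.sqrt(n * 0.25)
    # 2 * (1 - Phi(z)) expressed with the complementary error function
    return math.erfc(z / math.sqrt(2))

experiments = 10_000   # number of independent "studies"
flips = 100            # sample size per study

false_positives = sum(
    p_value_fair_coin(sum(random.random() < 0.5 for _ in range(flips)), flips) < 0.05
    for _ in range(experiments)
)

rate = false_positives / experiments
print(f"{rate:.1%} of null experiments came out 'significant' at p < 0.05")
```

The rate lands near 5% (slightly above here, because the coin-flip counts are discrete and the normal approximation is rough), which is the point: roughly 1 in 20 tests of a true null will look like a real finding, before any publication bias or p-hacking makes it worse.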