Using p<0.05 as a measure of whether a result is real is actually really bad science. A professor of mine did a paper on it; it's a big enough issue to have its own Wikipedia page.
TL;DR is that the 5% significance threshold traces back to an arbitrary suggestion in a 1925 paper (Fisher's *Statistical Methods for Research Workers*), and the sheer number of tests run on modern datasets means false positives are super common.
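To see why false positives pile up, here's a quick sketch (stdlib only, using a normal approximation to the t-test, which is fine for samples of 100): if you run a bunch of tests where the null hypothesis is *true* every time, roughly 5% of them still come out "significant" at p<0.05 by construction. The numbers below (2000 tests, 100 samples per group) are just illustrative choices.

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(42)
norm = NormalDist()

def p_value(a, b):
    # Welch-style z statistic; normal approximation to the t-test,
    # reasonable here because each group has 100 samples
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - norm.cdf(abs(z)))

n_tests = 2000
hits = 0
for _ in range(n_tests):
    # Both groups are drawn from the SAME distribution: every
    # "significant" result here is a false positive by definition
    a = [random.gauss(0, 1) for _ in range(100)]
    b = [random.gauss(0, 1) for _ in range(100)]
    if p_value(a, b) < 0.05:
        hits += 1

print(hits / n_tests)  # close to 0.05
```

Scale that up to millions of comparisons in a big dataset and "1 in 20 flukes" turns into tens of thousands of spurious findings unless you correct for multiple testing.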
I would think that if 55% of your studies replicate with p<0.05 significance, that's at least better than a coin flip.