P values are ubiquitous in the reporting of biomedical research, with a value below 0.05 generally taken to indicate a statistically significant result. However, with some journals now choosing to ban them and many statisticians warning of the perils of relying on them, should researchers place less weight on p values? This is the subject of a recent blog post that discusses current opinion on this often-maligned statistic.
In experiments where many variables interact in complex ways, sound statistical analysis is vital to interpret data and draw correct conclusions. However, it is often in exactly this kind of noisy experiment that the statistical significance indicated by a p value is most likely to be overstated, leading to spurious correlations. The authors suggest that this is not the fault of the p value itself, but rather of the way some choose to use it. Whether it's 'p-hacking', where researchers rework their data until a significant result emerges, or authors publishing a significant result without considering its scientific relevance, there are many p value pitfalls. The authors conclude that banning the p value altogether may not be the solution, but a better understanding of how to use it appropriately certainly would.
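To see why noisy, many-variable experiments so readily produce spurious "significant" results, consider a minimal simulation (an illustrative sketch, not taken from the blog; all sample sizes and variable counts here are arbitrary assumptions). Each simulated experiment measures 20 completely unrelated noise variables in two groups and declares success if any one of them reaches p &lt; 0.05 by a simple permutation test. Even though every true effect is zero, a large fraction of experiments will find "something significant":

```python
import random

random.seed(1)


def perm_p(a, b, n_perm=200):
    """Two-sided permutation p value for a difference in means.

    Shuffles the pooled data and counts how often the shuffled
    mean difference is at least as extreme as the observed one.
    """
    obs = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    count = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= obs:
            count += 1
    return count / n_perm


# Each simulated "experiment" tests 20 independent pure-noise variables
# (30 observations per group) and is counted as a hit if ANY of the 20
# p values falls below 0.05 -- a caricature of p-hacking by testing
# many outcomes and reporting the one that "works".
n_experiments = 100
n_variables = 20
hits = 0
for _ in range(n_experiments):
    p_values = []
    for _ in range(n_variables):
        a = [random.gauss(0, 1) for _ in range(30)]
        b = [random.gauss(0, 1) for _ in range(30)]
        p_values.append(perm_p(a, b))
    if min(p_values) < 0.05:
        hits += 1

print(f"Experiments with at least one 'significant' noise variable: "
      f"{hits / n_experiments:.0%}")
```

With 20 independent tests at the 0.05 level, the chance of at least one false positive per experiment is roughly 1 - 0.95^20, about 64% - far above the 5% a single pre-specified test would give. The inflation is a property of how the p value is used, not of the p value itself, which is exactly the blog's point.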