Incorrectly analysing data can have serious implications. In a recent blog post, Jeff Leek brought together some of the more common mistakes that researchers make and provided guidance on how to avoid them. For example:
- Outcome switching can introduce bias, mislead readers and may even be unethical. Outcomes and planned analyses must be clearly defined up front and adhered to. Of course, even the best-made plans can come unravelled; if this happens, researchers should be honest and specify how the analysis plan was adapted. Indicate how decisions were made and why, for example how invalid measurements or outliers were dealt with.
- The desire to report significant findings can result in “P-hacking”, where a raft of different analyses is performed but only those that give a significant result are reported. This may or may not be intentional, but a paper published last year suggests that the practice is prevalent in science.
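A quick back-of-the-envelope calculation shows why running many unreported analyses so often turns up a “significant” result by chance. Assuming independent tests each at a significance threshold of 0.05, the probability of at least one false positive grows rapidly with the number of tests:

```python
# Family-wise error rate for m independent tests at alpha = 0.05:
# the chance that at least one test is "significant" under the null.
alpha = 0.05
for m in (1, 5, 20, 100):
    fwer = 1 - (1 - alpha) ** m
    print(f"{m:3d} tests -> P(at least one p < {alpha}) = {fwer:.2f}")
```

With 20 analyses the chance of at least one spurious “hit” is already about 64%, and with 100 it is near certainty, which is why selective reporting is so misleading.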
- Some statistical methods are designed to test one outcome at a time and are unsuitable when many hypotheses are tested at once, so researchers may need to apply multiple-testing corrections.
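Two widely used corrections can be sketched in a few lines of plain Python; the function names and example p-values below are illustrative, not taken from the blog post. Bonferroni controls the family-wise error rate by tightening the threshold to alpha/m, while Benjamini–Hochberg controls the false discovery rate and is typically less conservative:

```python
def bonferroni(pvals, alpha=0.05):
    """Reject H0 where p < alpha / m (controls the family-wise error rate)."""
    m = len(pvals)
    return [p < alpha / m for p in pvals]

def benjamini_hochberg(pvals, alpha=0.05):
    """Reject the k smallest p-values, where k is the largest rank with
    p_(k) <= (k / m) * alpha (controls the false discovery rate)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, smallest p first
    k = 0  # largest rank whose p-value passes the BH threshold
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank
    reject = [False] * m
    for i in order[:k]:
        reject[i] = True
    return reject

pvals = [0.001, 0.008, 0.020, 0.041, 0.16, 0.87]
print(bonferroni(pvals))          # rejects only the two smallest p-values
print(benjamini_hochberg(pvals))  # less conservative: rejects three
```

On this toy example Bonferroni (threshold 0.05/6 ≈ 0.0083) flags two results, whereas Benjamini–Hochberg flags three, illustrating the trade-off between the two error rates.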
Jeff also highlights the so-called “I got a big one here” problem, where over-enthusiasm about a hugely significant result can prevent you from noticing experimental errors or anomalies; he advises caution whenever big effects are found. He concludes by drawing attention to the use of over-complicated methods when simple ones will suffice, and emphasises that data analysis can be complicated and is relatively easy to get wrong. So if in doubt, consult a statistician.