The problem is that scientific experiments are rarely this simple: there are many different experimental groups, a huge amount of data, and many possible statistical formulas to choose from. The most rock-solid studies set out with a plan for exactly what data and what statistical techniques will be used before the experiment begins. But whether due to poor training, sloppiness, or plain corruption, some scientists end up doing it a different way: after the study is complete, they pick and choose pieces of data and statistical techniques until they find a combination that gets them to statistical significance. That's p-hacking. (Try it yourself with FiveThirtyEight's interactive p-hacking calculator.) It's like a game of football where one team gets to decide when the game ends, instead of playing for the pre-determined amount of time.
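To see why "keep testing until something is significant" is so misleading, here is a minimal, purely illustrative simulation (the function names and parameters are invented for this sketch, not taken from any real study). It measures 20 unrelated outcomes in a "study" where the treatment does nothing at all, then reports only the smallest p-value — exactly the cherry-picking described above. Even with zero real effect, most such studies find a "significant" result.

```python
import math
import random

random.seed(42)

def p_value(a, b):
    """Two-sided p-value for a two-sample z-test (normal approximation,
    reasonable for samples of 30 or more per group)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))

def hacked_study(n_outcomes=20, n=30):
    """Measure many unrelated outcomes, then report only the smallest p."""
    pvals = []
    for _ in range(n_outcomes):
        control = [random.gauss(0, 1) for _ in range(n)]
        treatment = [random.gauss(0, 1) for _ in range(n)]  # no real effect
        pvals.append(p_value(control, treatment))
    return min(pvals)

trials = 1000
false_positives = sum(hacked_study() < 0.05 for _ in range(trials))
print(f"'Significant' findings in {false_positives} of {trials} null studies")
```

With 20 independent outcomes and no true effect, the chance that at least one clears the 0.05 bar is roughly 1 − 0.95²⁰ ≈ 64% — which is why a pre-registered plan naming the single outcome and test in advance is the honest version of the game.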
Unfortunately, without expertise in statistics and a whole lot of time on your hands, it's virtually impossible to spot p-hacking in a scientific study. To fight this troubling trend, some journals are reducing their reliance on p-values, focusing instead on other statistical elements that tell more of the story. Delve further into the realm of scientific research with the videos below.