P-hacking in academic research: a critical review of the job strain model and of the association between night work and breast cancer in women
Abstract: P-hacking can be described as a more or less deliberate, exploratory approach to data analysis with a flexible, opportunistic search space and the reporting of primarily statistically significant findings. This leads to inflated type-1 error rates and to bias in the estimates reported in the scientific literature.

This thesis aims to describe how p-hacking can manifest in academic research and to illustrate, using two specific examples from the literature, how bias due to p-hacking is expected to affect the veracity of published findings. The thesis also argues that, when evaluating published findings in the current academic environment, we should assume a priori that biases due to p-hacking and publication bias are present.

The thesis used Monte Carlo simulations and systematic reviews of the literature in two specific fields: the proposed associations between exposure to night work and breast cancer in women, and between job strain and coronary heart disease.

A general model and mathematical framework for predicting the expected bias from p-hacking was developed; it can be used for a priori defined protected inferences about any published finding, under explicit assumptions about the level of p-hacking. The model indicated a close to 100% chance of demonstrating a false positive association in larger studies, but also showed that even minimal p-hacking results in substantial bias in estimates.

The literature review identified large flexibility in the analytical process, allowing the final model to be picked from a large pool of available models, with an implied search space of thousands of estimates.
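The core mechanism described above can be illustrated with a minimal Monte Carlo sketch. This is not the thesis's actual simulation model; it assumes, for simplicity, a search space of 20 independent "analytical choices" under a true null, and compares the resulting false-positive rate against an honest single analysis:

```python
import math
import random

random.seed(1)
ALPHA = 0.05

def z_test_p(sample):
    """Two-sided p-value for H0: mean = 0, with known sd = 1."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def min_p_over_search_space(n=50, n_models=20):
    # A p-hacker reports the smallest p-value found anywhere in the
    # search space. Here each "model" is an independent null draw --
    # a crude, assumption-laden stand-in for flexible analysis.
    return min(z_test_p([random.gauss(0, 1) for _ in range(n)])
               for _ in range(n_models))

n_sim = 1000
honest = sum(z_test_p([random.gauss(0, 1) for _ in range(50)]) < ALPHA
             for _ in range(n_sim)) / n_sim
hacked = sum(min_p_over_search_space() < ALPHA
             for _ in range(n_sim)) / n_sim
print(f"honest type-1 error:   {honest:.2f}")  # close to the nominal 0.05
print(f"p-hacked type-1 error: {hacked:.2f}")  # far above 0.05
```

With 20 independent looks, the expected false-positive rate is 1 - 0.95^20, roughly 0.64 rather than 0.05; correlated analytical choices, as in real studies, would inflate the rate less per model but the search spaces implied by the reviewed literature are far larger than 20.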
Some of the specific observations made here could be used to argue that there is a high risk of p-hacking and publication bias in the reviewed literature:

- None of the 17 reviewed studies on job strain and coronary heart disease reported the proper estimate of the job strain interaction (chapter 6), and our analysis showed that the proper estimate would not have been statistically significant in any of the studies (chapter 7).
- One study described a data-driven approach with an implied search space of at least 502 models, in which adjusting for confounding did not reduce the strength of the association, as would be expected, but instead increased it so that it fell above the threshold for statistical significance (chapter 5).
- One study was based on a speculative and marginally significant estimate obtained after arbitrarily restricting the analysis to a subgroup, even though estimates for the full group were available and indicated a non-significant association (chapter 5).
- Statistical power analyses of the research into night work and breast cancer indicated that statistically significant findings were over-represented in the literature (p ≈ .001), suggesting the presence of bias from p-hacking or the selective publishing of significant findings (chapter 5).

The findings also suggest that previously reported estimates in meta-analyses were likely to reflect the prevailing bias in the two fields reviewed here. A bias-adjusted meta-analysis of the job strain model and coronary heart disease, with a total of 462,220 subjects and 6,836 CHD events, indicated no support for the job strain interaction (RR=1.00; 95% CI: 0.88--1.14). In addition, it did not show an increased risk due to high job demand (RR=1.03; 95% CI: 0.97--1.11), but it did confirm previously reported risks due to low job control (RR=1.11; 95% CI: 1.03--1.20).
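The over-representation of significant findings can be checked with a power-based excess-significance calculation: sum the per-study power values to get the expected number of significant results, then ask how surprising the observed count is. The sketch below uses hypothetical power values and an invented observed count, not the thesis's actual data, and approximates the Poisson-binomial tail with a simple binomial at the mean power:

```python
import math

# Hypothetical per-study power values under a common effect estimate
# (illustrative only; NOT the reviewed studies' actual powers).
powers = [0.15, 0.20, 0.10, 0.25, 0.30, 0.12, 0.18, 0.22]
observed_significant = 7  # hypothetical count of significant studies

expected = sum(powers)          # expected significant count if powers are right
n = len(powers)
p_bar = expected / n            # mean power, for a binomial approximation

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

p_excess = binom_tail(n, observed_significant, p_bar)
print(f"expected significant: {expected:.1f}, observed: {observed_significant}")
print(f"excess-significance p = {p_excess:.6f}")
```

A small tail probability, as here, indicates that far more significant results were published than the studies' power could plausibly produce, consistent with p-hacking or selective publication.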