Statistical variability in comparing accuracy of neuroimaging-based classification models via cross-validation
Abstract
Machine learning (ML) has significantly transformed biomedical research, leading to growing interest in developing models that improve classification accuracy across clinical applications. However, this progress raises essential questions about how to rigorously compare the accuracy of different ML models. In this study, we highlight the practical challenges of quantifying the statistical significance of accuracy differences between two neuroimaging-based classification models when cross-validation (CV) is performed. Specifically, we propose an unbiased framework to assess the impact of CV setups (e.g., the number of folds) on statistical significance. We apply this framework to three publicly available neuroimaging datasets to re-emphasize known flaws in how p-values for comparing model accuracies are currently computed. We further demonstrate that the likelihood of detecting significant differences among models varies substantially with the intrinsic properties of the data, the testing procedure, and the chosen CV configuration. Because many of these factors typically fall outside the evaluation criteria of ML-based biomedical studies, we argue that such variability can lead to p-hacking and inconsistent conclusions about model improvement. These results underscore that more rigorous model-comparison practices are urgently needed to mitigate the reproducibility crisis in biomedical ML research.
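To make the comparison setting concrete, the sketch below illustrates one widely used (and, as argued above, problematic) procedure: two classifiers are evaluated on the same CV folds and a paired t-test is applied to the fold-wise accuracies. The synthetic data, the choice of classifiers, and the fold-wise paired t-test are illustrative assumptions, not the framework proposed in the study.

```python
# Illustrative sketch (not the paper's framework): fold-wise paired t-test
# between two classifiers, repeated for several CV setups. Fold-wise scores
# share training data and are therefore not independent, which is one reason
# such p-values are unreliable.
import numpy as np
from scipy.stats import ttest_rel
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for a neuroimaging-style dataset (small n, many features).
X, y = make_classification(n_samples=120, n_features=200, n_informative=20,
                           random_state=0)

model_a = LogisticRegression(max_iter=5000)
model_b = SVC(kernel="rbf")

for n_folds in (5, 10, 20):
    cv = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=0)
    acc_a = cross_val_score(model_a, X, y, cv=cv, scoring="accuracy")
    acc_b = cross_val_score(model_b, X, y, cv=cv, scoring="accuracy")
    # Paired t-test over fold-wise accuracies: a common but flawed practice,
    # since the folds are not independent samples.
    t_stat, p_val = ttest_rel(acc_a, acc_b)
    print(f"{n_folds:2d} folds: mean acc A={acc_a.mean():.3f}, "
          f"B={acc_b.mean():.3f}, p={p_val:.3f}")
```

Rerunning this sketch with a different number of folds or a different random seed will generally shift the reported p-value, which is the kind of configuration-driven variability the study quantifies.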