Simulating rigid head motion artifacts on brain magnitude MRI data – Outcome on image quality and segmentation of the cerebral cortex
Authors
- H. Olsson
- J.M. Millward
- L. Starke
- T. Gladytz
- T. Klein
- J. Fehr
- W.C. Lai
- C. Lippert
- T. Niendorf
- S. Waiczies
Journal
- PLoS ONE
Citation
- PLoS ONE 19 (4): e0301132
Abstract
Magnetic Resonance Imaging (MRI) datasets from epidemiological studies often show a lower prevalence of motion artifacts than is encountered in clinical practice. These artifacts can be unevenly distributed between subject groups and studies, which introduces a bias that needs to be addressed when augmenting data for machine learning purposes. Since unreconstructed multi-channel k-space data is typically not available for population-based MRI datasets, motion simulations must be performed using signal magnitude data. There is thus a need to systematically evaluate how realistic such magnitude-based simulations are. We performed magnitude-based motion simulations on a dataset (MR-ART) from 148 subjects for which real motion-corrupted reference data was also available. The similarity of real and simulated motion was assessed using image quality metrics (IQMs), including the Coefficient of Joint Variation (CJV), Signal-to-Noise Ratio (SNR), and Contrast-to-Noise Ratio (CNR). An additional comparison was made by investigating the decrease in the Dice-Sørensen Coefficient (DSC) of automated segmentations with increasing motion severity. Segmentation of the cerebral cortex was performed with six freely available tools: FreeSurfer, BrainSuite, ANTs, SAMSEG, FastSurfer, and SynthSeg+. To better mimic the real subject motion, the original motion simulation within an existing data augmentation framework (TorchIO) was modified to allow a non-random motion paradigm and a specified phase encoding direction. The mean difference in CJV/SNR/CNR between the real motion-corrupted images and our modified simulations (0.004±0.054/-0.7±1.8/-0.09±0.55) was lower than that of the original simulations (0.015±0.061/0.2±2.0/-0.29±0.62). Further, the mean difference in DSC relative to the real motion-corrupted images was lower for our modified simulations (0.03±0.06) than for the original simulations (-0.15±0.09). SynthSeg+ showed the highest robustness towards all forms of motion, real and simulated. In conclusion, reasonably realistic synthetic motion artifacts can be induced on a large scale when only magnitude MR images are available, in order to obtain unbiased datasets for training machine learning-based models.
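The modified simulation described in the abstract builds on the motion transform in TorchIO. Below is a minimal sketch of applying the stock `tio.RandomMotion` transform to a magnitude T1-weighted image; the paper's modifications (a non-random motion paradigm and a configurable phase encoding direction) are not part of the stock transform and are not shown here, and the file paths are hypothetical.

```python
# Minimal sketch: corrupting a magnitude T1-weighted image with TorchIO's
# stock RandomMotion transform. The paper's modified simulation (non-random
# motion paradigm, fixed phase-encoding direction) goes beyond this.
import torchio as tio

subject = tio.Subject(
    t1=tio.ScalarImage('sub-01_T1w.nii.gz'),  # hypothetical input path
)

# Random rigid movements are simulated by composing rotated/translated
# copies of the image in k-space.
motion = tio.RandomMotion(
    degrees=10,                     # maximum rotation per movement (degrees)
    translation=10,                 # maximum translation per movement (mm)
    num_transforms=2,               # number of simulated movements
    image_interpolation='linear',
)

corrupted = motion(subject)
corrupted['t1'].save('sub-01_T1w_motion.nii.gz')  # hypothetical output path
```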
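For reference, the evaluation metrics named above (CJV, SNR, CNR, DSC) can be computed from an image and binary tissue/background masks as in the following sketch. The definitions follow common conventions and may differ in detail from the paper's implementation; all function and array names are illustrative.

```python
# Sketch of the image quality and segmentation metrics named in the abstract,
# computed with NumPy from binary masks. Definitions follow common
# conventions; the paper's exact formulas may differ.
import numpy as np

def cjv(img, gm_mask, wm_mask):
    """Coefficient of Joint Variation of grey and white matter (lower is better)."""
    gm, wm = img[gm_mask], img[wm_mask]
    return (gm.std() + wm.std()) / abs(gm.mean() - wm.mean())

def snr(img, tissue_mask, background_mask):
    """Signal-to-Noise Ratio: mean tissue signal over background noise."""
    return img[tissue_mask].mean() / img[background_mask].std()

def cnr(img, gm_mask, wm_mask, background_mask):
    """Contrast-to-Noise Ratio between grey and white matter."""
    contrast = abs(img[gm_mask].mean() - img[wm_mask].mean())
    return contrast / img[background_mask].std()

def dice(seg_a, seg_b):
    """Dice-Sørensen Coefficient between two binary segmentations."""
    intersection = np.logical_and(seg_a, seg_b).sum()
    return 2.0 * intersection / (seg_a.sum() + seg_b.sum())
```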