Brain tumor identification is crucial to prevent long-term disabilities, and severe cases such as high-grade glioma may be fatal. Magnetic Resonance Imaging (MRI) is a powerful non-invasive tool for obtaining brain scans, which can provide key information such as the location, shape, size, and growth stage of a tumor. To perform medical image analysis with deep learning techniques, a sufficiently large and varied dataset is required. However, traditional image augmentation methods such as scaling, rotation, and cropping create highly correlated images that fail to capture the underlying features of the source images, and they may alter patterns useful for diagnosis. Class imbalance is another reason to apply augmentation. Moreover, real patient data is expensive to obtain, and its use in training AI models is highly regulated. Generative Adversarial Network (GAN) models have shown promising results in generating synthetic data that generalizes well to large datasets.
In this work we use the Aggregation GAN (AGGrGAN) model to capture both the unique features and localized information of a source image via style transfer, as well as the shared information among the latent representations of multiple images. We then perform an ablation study to quantitatively evaluate the generated images (using PSNR and SSIM scores) and to study the impact of aggregation followed by style transfer. For a qualitative analysis, we train a classification network on both real images alone and a mixture of real and synthetic images to study the effectiveness of the images generated by our models. All our experiments are performed on the BraTS 2020 dataset.
AGGrGAN uses an aggregation scheme to merge the outputs of DCGAN, WGAN, and UNet GAN. A style transfer technique is then applied to improve the similarity of the generated images to real MRI scans. We also perform qualitative and quantitative analyses of the results.
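The aggregation step above can be sketched at the pixel level. This is a minimal illustration only, assuming a simple weighted average of the candidate images from the three base GANs; the uniform default weights and the `aggregate` helper are hypothetical, and the actual AGGrGAN aggregation logic is described in the full report.

```python
import numpy as np

def aggregate(candidates, weights=None):
    """Pixel-wise weighted average of candidate synthetic images.

    candidates: list of HxW float arrays in [0, 1], one per base GAN
                (e.g. DCGAN, WGAN, UNet GAN outputs for the same seed)
    weights:    optional per-candidate weights; defaults to uniform
    """
    stack = np.stack(candidates, axis=0)          # shape (K, H, W)
    if weights is None:
        weights = np.ones(len(candidates))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()             # normalize to sum to 1
    # contract the candidate axis against the weights
    return np.tensordot(weights, stack, axes=1)   # shape (H, W)

# toy example with three constant 2x2 "images"
a = np.full((2, 2), 0.2)
b = np.full((2, 2), 0.5)
c = np.full((2, 2), 0.8)
agg = aggregate([a, b, c])
```

In practice the weights could be chosen per image, e.g. favoring the base GAN whose output is most similar to the target distribution, before style transfer is applied to the merged result.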
Our results show that DCGAN + style transfer can generate synthetic images with PSNR scores as high as 29.64 and SSIM scores as high as 0.87 relative to real patient brain MRI images. Moreover, we trained a classification model to distinguish MRI modalities purely on synthetic data; testing on real-world data produced a classification accuracy of nearly 90.1%, which shows the usefulness of GAN models for augmentation and for avoiding direct use of sensitive, person-identifiable medical image data.
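The PSNR and SSIM scores reported above can be computed as follows. This is a self-contained sketch: PSNR is the standard `10 * log10(MAX^2 / MSE)` formula, while `global_ssim` uses a single window over the whole image for brevity (library implementations such as scikit-image's `structural_similarity` use a sliding window, so values will differ slightly).

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, data_range]."""
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """SSIM computed with one global window (libraries use sliding windows)."""
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

# toy example: a random "reference" image and a mildly noisy copy
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
noisy = np.clip(ref + 0.01 * rng.standard_normal(ref.shape), 0.0, 1.0)
p = psnr(ref, noisy)        # small noise -> high PSNR
s = global_ssim(ref, noisy)
```

Both metrics are reference-based: each synthetic image is scored against a real source image, so higher PSNR/SSIM indicates closer structural agreement with real patient scans.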
Images
Network architecture of Aggregate GAN

Example of brain MRI images of various modalities, including synthetic images generated by GANs

DCGAN training curve

DCGAN quality of images over time

Link to source code
Click here to see the source code for the project
Link to research report
For more details on the internal workings of AGGrGAN and the following post-processing steps, please read the full report