Document Type

Article

Publication Date

4-11-2022

Abstract

Recent advances in deep generative networks have reinforced the need for caution when consuming digital information in any modality. One avenue of deepfake creation is the injection or removal of tumors in medical scans. Failure to detect such medical deepfakes can waste hospital resources or even cost lives. This paper addresses the detection of such attacks with a structured case study. Specifically, we evaluate eight machine learning algorithms, comprising three conventional methods (Support Vector Machine, Random Forest, Decision Tree) and five deep learning models (DenseNet121, DenseNet201, ResNet50, ResNet101, VGG19), in distinguishing between tampered and untampered images. For the deep learning models, each pre-trained network is first used for feature extraction and then fine-tuned. The findings of this work show near-perfect accuracy in detecting instances of tumor injection and removal.
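
A minimal sketch of the transfer-learning setup described in the abstract, assuming a Keras/TensorFlow implementation with 224x224 RGB scan slices. The pooling head, optimizer, learning rates, and the train_ds/val_ds dataset names are illustrative assumptions, not the authors' exact configuration.

    # Sketch: pre-trained DenseNet121 used for feature extraction, then fine-tuned
    # for binary tampered/untampered classification. Hyperparameters are assumed.
    import tensorflow as tf
    from tensorflow.keras.applications import DenseNet121

    # Base network pre-trained on ImageNet, without its classification head.
    base = DenseNet121(weights="imagenet", include_top=False,
                       input_shape=(224, 224, 3))
    base.trainable = False  # stage 1: frozen base acts as a feature extractor

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # tampered vs. untampered
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=5)  # hypothetical datasets

    # Stage 2: unfreeze the base and fine-tune the whole network at a lower rate.
    base.trainable = True
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                  loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=5)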

Comments

This article was originally published in Machine Learning with Applications, volume 8, in 2022. https://doi.org/10.1016/j.mlwa.2022.100298

Copyright

The authors

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.
