Date of Award
Fall 12-2024
Document Type
Dissertation
Degree Name
Doctor of Philosophy (PhD)
Department
Computational and Data Sciences
First Advisor
Uri Maoz, Ph.D., Chair
Second Advisor
Erik Linstead, Ph.D.
Third Advisor
Kyongsik Yun, Ph.D.
Abstract
Advances in computer vision and image processing have made a clear impact on many fields, from healthcare diagnostics to autonomous driving. However, as these models become more complex, understanding their decision-making processes has grown increasingly challenging, making explainable AI (XAI) a crucial component of modern AI systems. The focus of this work is to integrate these new technologies with foundational image-processing methods to create tools that can be used by domain experts who are not programmers. Before delving into the projects that investigate these concepts, the methodologies, background, and overall frameworks are discussed. In the first study, I developed an image-processing and machine-learning classification pipeline, along with a web application, for identifying the brain region associated with the neuropathology of schizophrenia in a disease-agnostic manner. In the second study, I used a masked autoencoder for pre-training, followed by a vision transformer to classify alcoholic patients versus healthy controls on a small dataset. In the third study, I used a transformer with spatiotemporal attention, reformulating the time-series data as a set of images so that the task became an image completion problem.
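To make the second study's approach concrete, the following is a minimal PyTorch sketch of supervised fine-tuning of a vision transformer for a two-class task (patient versus control) after a self-supervised pre-training stage. The torchvision ViT-B/16 variant, the hypothetical checkpoint path, the optimizer settings, and the input size are illustrative assumptions, not the dissertation's actual pipeline.

import torch
import torch.nn as nn
from torchvision.models import vit_b_16

# ViT-B/16 backbone with a fresh 2-way classification head; in practice the
# encoder weights would come from the masked-autoencoder pre-training stage.
model = vit_b_16(weights=None, num_classes=2)
# state = torch.load("mae_pretrained_encoder.pt")  # hypothetical checkpoint path
# model.load_state_dict(state, strict=False)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised fine-tuning step on a batch of (B, 3, 224, 224) images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for the (small) clinical dataset.
if __name__ == "__main__":
    x = torch.randn(4, 3, 224, 224)
    y = torch.randint(0, 2, (4,))
    print(f"loss: {train_step(x, y):.3f}")

The sketch only covers the fine-tuning step; the pre-training stage would mask a large fraction of image patches and train the encoder-decoder to reconstruct them before the encoder is reused here.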
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Recommended Citation
C. Chavez, "Explainable AI in medical imaging: An interdisciplinary translational approach," Ph.D. dissertation, Chapman University, Orange, CA, 2024. https://doi.org/10.36837/chapman.000623