08/16/2024
By Mojtaba Talaei Khoei

The Department of Operations and Information Systems at the Manning School of Business invites you to attend a doctoral dissertation proposal defense by Mojtaba Talaei Khoei on “Advances in Explainable Artificial Intelligence and Data Science: Methodological Developments and Practical Applications.”

Name: Mojtaba Talaei Khoei
Date: Aug. 27, 2024
Time: 10:30 a.m. to noon
Location: Virtual via Zoom

Dissertation Title: Advances in Explainable Artificial Intelligence and Data Science: Methodological Developments and Practical Applications

Committee Members:

  • Asil Oztekin (chair), Ph.D., Department of Operations & Information Systems, Manning School of Business, UMass Lowell
  • Luvai Motiwalla, Ph.D., Department of Operations & Information Systems, Manning School of Business, UMass Lowell
  • Hongwei (Harry) Zhu, Ph.D., Department of Operations & Information Systems, Manning School of Business, UMass Lowell

Abstract:

This dissertation proposal presents a series of studies on methodological developments in Explainable Artificial Intelligence (XAI) and Data Science, together with their practical applications. The proposal is structured into four interconnected chapters. Chapter 1 focuses on developing a methodology to explain latent labels. Chapter 2 proposes a modified gradient-based explanation method for differentiable Artificial Intelligence (AI) models. Chapter 3 presents an adversarial attack framework to explore the dynamics between robustness and explainability. Chapter 4 showcases a use case of explainability in text analytics.

Chapter 1 introduces a methodology for explaining unobserved targets through an ensemble explanation framework. This XAI approach combines supervised and unsupervised learning techniques to produce cohesive feature importance scores, effectively addressing the challenge of interpreting latent labels. The chapter also demonstrates the algorithm’s advantage over existing XAI methods.

Chapter 2 presents a modified gradient-based explanation technique designed to enhance the stability and consistency of local explanations in differentiable AI models. This methodology reduces the volatility typically associated with traditional gradient-based approaches, balancing local interpretability with global consistency.

Chapter 3 examines the role of perturbed adversarial examples in probing the robustness of explanations when an XAI model faces input manipulations. The chapter details the mathematical and algorithmic design of the method, including the use of a shallow neural network as a surrogate model to generate perturbation-driven adversarial attacks, highlighting the complex interplay between model explainability and robustness.

Finally, Chapter 4 explores the application of XAI to text classification, aiming to develop an accessible framework that simplifies stakeholder engagement without relying on large language models. This chapter underscores the importance of making AI models transparent and interpretable, promoting trust, and empowering informed decision-making.

All interested students and faculty members are invited to attend.