Toma, Victor (2025) Explaining the Black Box: Grad-CAM Interpretability in Deep Learning for Autonomous Weapon Systems. Bachelor thesis, Data Science and Society (DSS).
PDF: Thesis-DSSVIctorTomaS4553020-1.pdf (6MB)
Abstract
Artificial Intelligence (AI) is becoming increasingly integrated into defense systems, with applications ranging from surveillance and logistics to autonomous weapons. The growing reliance on opaque AI decision-making raises urgent concerns about ethical compliance, legal accountability, and real-time operational risk, particularly when life-and-death classifications are at stake (Roff & Moyes, 2016; Horowitz, 2019). As military reliance on AI increases, the need for transparency and accountability in these systems becomes imperative. An AI model that errs in a high-stakes environment, for example by misclassifying civilians as combatants, could lead to unjustified harm, violations of international humanitarian law, and a breakdown of trust in autonomous systems. This thesis addresses the opacity of such models in military applications by evaluating their decision-making through explainability techniques. Specifically, it focuses on CNN-based models used to distinguish between soldiers and civilians in image data, a task with potentially life-or-death consequences. To address the critical question of where a model is "looking" when making a classification, this thesis employs Gradient-weighted Class Activation Mapping (Grad-CAM), a post-hoc explainability method that produces visual heat maps indicating the most influential regions in the input image.
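The record provides only the abstract, not the thesis code. As an illustration of the kind of Grad-CAM computation described above, the sketch below assumes a PyTorch CNN; torchvision's ResNet-18 and the `grad_cam` helper are placeholders standing in for the thesis' soldier/civilian classifier, which is not included here. It weights the last convolutional layer's feature maps by the spatial average of their gradients with respect to the target class score, then combines them into a normalized heat map.

```python
# Minimal Grad-CAM sketch (assumption: a torchvision ResNet-18 stands in for
# the thesis' actual CNN; data loading and visualization are omitted).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    # Cache the feature maps of the hooked layer on the forward pass.
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    # Cache the gradients flowing back into the hooked layer.
    gradients["value"] = grad_out[0].detach()

# Hook the last convolutional block (layer4 for ResNet-18).
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(image, class_idx=None):
    """Return a heat map (H x W, values in [0, 1]) for one image tensor of shape (1, 3, H, W)."""
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    # Weight each feature map by the spatial mean of its gradient, sum, and apply ReLU.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
    cam = F.relu((weights * activations["value"]).sum(dim=1))     # (1, h, w)

    # Upsample to the input resolution and rescale to [0, 1] for overlaying on the image.
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[2:],
                        mode="bilinear", align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

In practice the resulting map is overlaid on the input image so a reviewer can check whether the model attended to the person's appearance (e.g., uniform or weapon) rather than to background cues.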
| Item Type: | Thesis (Bachelor) |
|---|---|
| Name supervisor: | Haleem, N. |
| Date Deposited: | 30 Jun 2025 08:22 |
| Last Modified: | 30 Jun 2025 08:22 |
| URI: | https://campus-fryslan.studenttheses.ub.rug.nl/id/eprint/684 |