Academic Journal
Opti-CAM: Optimizing saliency maps for interpretability
Title: Opti-CAM: Optimizing saliency maps for interpretability
Authors: Zhang, Hanwei; Torres, Felipe; Sicre, Ronan; Avrithis, Yannis; Ayache, Stephane
Contributors: Équipe d'AppRentissage de MArseille (QARMA); Laboratoire d'Informatique et des Systèmes (LIS), Marseille/Toulon: Aix Marseille Université (AMU), Université de Toulon (UTLN), Centre National de la Recherche Scientifique (CNRS); the Excellence Initiative of Aix-Marseille Université (A*Midex), a French "Investissements d'Avenir" programme (AMX-21-IET-017); ANR-19-CE23-0009 UnLIR, Apprentissage non-supervisé de représentation pour la reconnaissance visuelle (2019)
Source: ISSN: 1077-3142
Publisher: HAL CCSD; Elsevier
Publication Year: 2024
Collection: Aix-Marseille Université: HAL
Subject Terms: Interpretability; Explainable AI; Saliency map; Class activation maps; Computer vision; [INFO.INFO-CV] Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV]
Description: International audience. Methods based on class activation maps (CAM) provide a simple mechanism to interpret predictions of convolutional neural networks by using linear combinations of feature maps as saliency maps. By contrast, masking-based methods optimize a saliency map directly in the image space or learn it by training another network on additional data. In this work we introduce Opti-CAM, combining ideas from CAM-based and masking-based approaches. Our saliency map is a linear combination of feature maps, where the weights are optimized per image so that the logit of the masked image for a given class is maximized. We also fix a fundamental flaw in two of the most common evaluation metrics for attribution methods. On several datasets, Opti-CAM largely outperforms other CAM-based approaches according to the most relevant classification metrics. We provide empirical evidence that localization and classifier interpretability are not necessarily aligned.
Document Type: journal article
Language: English
Relation: info:eu-repo/semantics/altIdentifier/arxiv/2301.07002; hal-04678832; https://hal.science/hal-04678832; https://hal.science/hal-04678832/document; https://hal.science/hal-04678832/file/OptiCAM_CVIU-1.pdf; ARXIV: 2301.07002
DOI: 10.1016/j.cviu.2024.104101
Availability: https://hal.science/hal-04678832; https://hal.science/hal-04678832/document; https://hal.science/hal-04678832/file/OptiCAM_CVIU-1.pdf; https://doi.org/10.1016/j.cviu.2024.104101
Rights: http://creativecommons.org/licenses/by-nc/; info:eu-repo/semantics/OpenAccess
Accession Number: edsbas.D598A914
Database: BASE