Academic Journal

The goal of explaining black boxes in EEG seizure prediction is not to explain models' decisions

Bibliographic Details
Title: The goal of explaining black boxes in EEG seizure prediction is not to explain models' decisions
Authors: Mauro F. Pinto, Joana Batista, Adriana Leal, Fábio Lopes, Ana Oliveira, António Dourado, Sulaiman I. Abuhaiba, Francisco Sales, Pedro Martins, César A. Teixeira
Source: Epilepsia Open, Vol 8, Iss 2, Pp 285-297 (2023)
Publication Information: Wiley, 2023.
Publication Year: 2023
Collection: LCC: Neurology. Diseases of the nervous system
Subject Terms: drug-resistant epilepsy, EEG, explainability, machine learning, seizure prediction, Neurology. Diseases of the nervous system, RC346-429
Description: Abstract: Many state-of-the-art methods for seizure prediction using the electroencephalogram are based on machine-learning models that are black boxes, weakening clinicians' trust in them for high-risk decisions. Seizure prediction is a multidimensional time-series problem addressed with continuous sliding-window analysis and classification. In this work, we critically review which explanations increase trust in models' decisions for predicting seizures. We developed three machine-learning methodologies to explore their explainability potential, with different levels of model transparency: a logistic regression, an ensemble of 15 support vector machines, and an ensemble of three convolutional neural networks. For each methodology, we evaluated performance quasi-prospectively in 40 patients (testing data comprised 2055 hours and 104 seizures). We selected patients with good and poor performance to explain the models' decisions. Then, using grounded theory, we evaluated how these explanations helped specialists (data scientists and clinicians working in epilepsy) to understand the obtained model dynamics. We derived four lessons for better communication between data scientists and clinicians. We found that the goal of explainability is not to explain the system's decisions but to improve the system itself. Model transparency is not the most significant factor in explaining a model's decision for seizure prediction. Even when using intuitive and state-of-the-art features, it is hard to understand brain dynamics and their relationship with the developed models. We achieved an increase in understanding by developing, in parallel, several systems that explicitly deal with changes in signal dynamics, which helped develop a complete problem formulation.
Document Type: article
File Description: electronic resource
Language: English
ISSN: 2470-9239
Relation: https://doaj.org/toc/2470-9239
DOI: 10.1002/epi4.12748
Access URL: https://doaj.org/article/718ae0aaf7014c37b060707c0c33dba0
Accession Number: edsdoj.718ae0aaf7014c37b060707c0c33dba0
Database: Directory of Open Access Journals