Dissertation / Thesis
Explainable cautious classifiers ; Classifieurs prudents explicables
Title: | Explainable cautious classifiers ; Classifieurs prudents explicables |
---|---|
Authors: | Zhang, Haifei |
Contributors: | Heuristique et Diagnostic des Systèmes Complexes Compiègne (Heudiasyc), Université de Technologie de Compiègne (UTC)-Centre National de la Recherche Scientifique (CNRS), Université de Technologie de Compiègne, Benjamin Quost, Marie-Hélène Masson |
Source: | https://theses.hal.science/tel-04674135 ; Machine Learning [stat.ML]. Université de Technologie de Compiègne, 2023. English. ⟨NNT : 2023COMP2777⟩. |
Publication information: | HAL CCSD |
Publication year: | 2023 |
Collection: | Université de Technologie de Compiègne: HAL |
Subject terms: | Statistical learning, Cautious classifiers, Explainable artificial intelligence, Cautious classification, Imprecise Dirichlet model (IDM), Belief functions, Ensemble learning, Explainable AI, Counterfactual explanation, Random forest classifier, Dempster-Shafer theory, Machine learning, Apprentissage statistique, Classifieurs prudents, Intelligence Artificielle explicable, Classification prudente, Modèle de Dirichlet imprécis, Fonctions de croyance, Apprentissage ensembliste, Forêts aléatoires, IA explicable, Explications contrefactuelles, [STAT.ML]Statistics [stat]/Machine Learning [stat.ML], [INFO]Computer Science [cs] |
Description: | Machine learning classifiers have achieved impressive success in a wide range of domains such as natural language processing, image recognition, medical diagnosis, and financial risk assessment. Despite their remarkable accomplishments, their application to real-world problems still entails challenges. Traditional classifiers make precise decisions based on estimated posterior probabilities; this becomes problematic when dealing with limited data and in complex, uncertain scenarios where making erroneous decisions is costly. As alternatives, cautious classifiers, also known as imprecise classifiers, provide subsets of classes as predictions. We propose in this thesis a cautious classifier called cautious random forest, within the framework of belief functions. It combines imprecise decision trees constructed using the imprecise Dirichlet model and aims at achieving a better compromise between the accuracy and the cautiousness of predictions. Cautious random forests can be regarded as generalizations of classical random forests, where the usual aggregation strategies (averaging and voting) are replaced with a cautious counterpart. However, making imprecise predictions has a cost, since indeterminacy must be resolved via further analysis. Therefore, it seems crucial to understand what led to an indeterminate prediction, and what could be done to turn it into a determinate one. To address this problem, we propose in this thesis a framework for providing explanations, so as to discover which features contribute the most to improving the determinacy of the cautious classifier and how we can modify the feature values so as to achieve a determinate prediction (counterfactual explanations). ; Machine learning has achieved impressive success in a variety of domains such as natural language processing, image recognition, and medical diagnosis. Despite these remarkable results, its application to certain real-world problems still raises questions. Traditional classifiers choose a ... |
Document type: | doctoral or postdoctoral thesis |
Language: | English |
Relation: | NNT: 2023COMP2777 |
Availability: | https://theses.hal.science/tel-04674135 https://theses.hal.science/tel-04674135v1/document https://theses.hal.science/tel-04674135v1/file/These_UTC_Haifei_Zhang.pdf |
Rights: | info:eu-repo/semantics/OpenAccess |
Accession number: | edsbas.94EE5655 |
Database: | BASE |
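
The abstract refers to the imprecise Dirichlet model (IDM) and to set-valued (cautious) predictions. The Python sketch below only illustrates those two notions as they are commonly defined: it turns class counts into IDM probability intervals and derives a prediction set via interval dominance, a standard decision rule for probability intervals. The class labels, the counts, and the hyperparameter value s are hypothetical, and this is not the belief-function aggregation scheme developed in the thesis.

```python
# Minimal illustrative sketch. Assumptions: class labels, counts, and s=2.0 are
# hypothetical; this is NOT the belief-function aggregation proposed in the thesis.

def idm_intervals(counts, s=2.0):
    """Lower/upper class probabilities under the imprecise Dirichlet model:
    for class c with count n_c out of n observations,
    lower = n_c / (n + s) and upper = (n_c + s) / (n + s)."""
    n = sum(counts.values())
    return {c: (n_c / (n + s), (n_c + s) / (n + s)) for c, n_c in counts.items()}

def interval_dominance(intervals):
    """Set-valued prediction: keep every class that is not dominated, i.e. whose
    upper probability is not strictly below the lower probability of another class."""
    return {
        c for c, (_, up_c) in intervals.items()
        if all(up_c >= lo_d for d, (lo_d, _) in intervals.items() if d != c)
    }

# Hypothetical class counts observed in a leaf of an imprecise decision tree.
counts = {"a": 8, "b": 6, "c": 0}
intervals = idm_intervals(counts, s=2.0)
print(intervals)                       # {'a': (0.5, 0.625), 'b': (0.375, 0.5), 'c': (0.0, 0.125)}
print(interval_dominance(intervals))   # the set {'a', 'b'}: a cautious, set-valued prediction
```

Because only fourteen observations support this hypothetical leaf, the intervals of classes a and b overlap and the prediction stays cautious between them; with more observations the IDM intervals shrink and the prediction typically reduces to a single class.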