Dissertation / Thesis
Quantifying and understanding uncertainty in deep-learning-based medical image segmentation ; Quantification et caractérisation de l'incertitude de segmentation d'images médicales par des réseaux profonds
Title: | Quantifying and understanding uncertainty in deep-learning-based medical image segmentation ; Quantification et caractérisation de l'incertitude de segmentation d'images médicales par des réseaux profonds |
---|---|
Authors: | Lambert, Benjamin |
Contributors: | Grenoble Institut des Neurosciences (GIN), Institut National de la Santé et de la Recherche Médicale (INSERM)-Université Grenoble Alpes (UGA), Université Grenoble Alpes [2020-.], Michel Dojat |
Source: | https://theses.hal.science/tel-04673383 ; Medical imaging. Université Grenoble Alpes [2020-.], 2024. French. ⟨NNT : 2024GRALS011⟩. |
Publisher: | CCSD |
Publication year: | 2024 |
Collection: | Université Grenoble Alpes: HAL |
Subject terms: | Segmentation, Image processing, Uncertainty Quantification, Deep Learning, MRI, Explainable AI, Intelligence Artificielle, Incertitude, IRM, Traitement d'images, IA explicable, [INFO.INFO-IM]Computer Science [cs]/Medical Imaging |
Description: | In recent years, artificial intelligence algorithms have demonstrated outstanding performance in a wide range of tasks, including the segmentation and classification of medical images. The automatic segmentation of lesions in brain MRIs enables a rapid quantification of disease progression: a count of new lesions, a measure of total lesion volume, and a description of lesion shape. This analysis can then be used by the neuroradiologist to adapt the therapeutic treatment if necessary, making medical decisions faster and more precise. At present, these algorithms, often regarded as black boxes, produce predictions without any information about their certainty. This hinders the full adoption of artificial intelligence algorithms in sensitive areas, as they tend to produce errors with high confidence, potentially misleading human decision-makers. Identifying and understanding the causes of these failures is key to maximizing the usefulness of AI algorithms and enabling their acceptance within the medical profession. To achieve this goal, it is important to distinguish between the two main sources of uncertainty. First, aleatoric uncertainty, which corresponds to uncertainty linked to intrinsic image noise and acquisition artifacts. Second, epistemic uncertainty, which relates to the model's lack of knowledge. The joint aim of Pixyl and GIN is to achieve better identification of the sources of uncertainty in deep neural networks, and consequently to develop new methods for estimating this uncertainty in routine, real-time clinical use. In the context of medical image segmentation, uncertainty estimation is relevant at several scales. First, at the voxel scale, uncertainty can be quantified using uncertainty maps, which makes it possible to superimpose the image, its segmentation, and the uncertainty map to visualize uncertain areas.
Second, for pathologies such as Multiple Sclerosis, the radiologist's attention is focused on the lesion rather than the voxel. Structural uncertainty ... |
Document type: | doctoral or postdoctoral thesis |
Language: | French |
Relation: | NNT: 2024GRALS011 |
Availability: | https://theses.hal.science/tel-04673383 https://theses.hal.science/tel-04673383v1/document https://theses.hal.science/tel-04673383v1/file/LAMBERT_2024_archivage.pdf |
Rights: | info:eu-repo/semantics/OpenAccess |
Accession number: | edsbas.43E72B80 |
Database: | BASE |
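The voxel-scale uncertainty maps mentioned in the abstract can be illustrated with a minimal sketch. This is an assumption for illustration only, not the thesis's actual method: it computes the per-voxel predictive entropy of a small, hypothetical ensemble of binary segmentation models, producing a map that could be overlaid on the MRI slice and its segmentation.

```python
import numpy as np

# Hypothetical sketch: a voxel-wise uncertainty map computed as the predictive
# entropy of an ensemble of binary segmentation models. Entropy is high where
# the ensemble is unsure or its members disagree.
def entropy_uncertainty_map(prob_maps):
    """prob_maps: array of shape (n_models, H, W), foreground probabilities in [0, 1]."""
    p = np.clip(prob_maps.mean(axis=0), 1e-7, 1 - 1e-7)  # mean predicted probability
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))    # binary predictive entropy

# Three toy "model" predictions for a 2x2 image patch
preds = np.array([
    [[0.90, 0.50], [0.10, 0.80]],
    [[0.80, 0.40], [0.20, 0.90]],
    [[0.95, 0.60], [0.05, 0.85]],
])
umap = entropy_uncertainty_map(preds)  # same shape as one patch: (2, 2)
```

Superimposing such a map on the image and its segmentation is one simple way to visualize the uncertain areas the abstract refers to; the thesis itself covers further scales (lesion-level structural uncertainty) beyond this voxel-level view.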