Academic Journal

Looking at the posterior: on the origin of uncertainty in neural-network classification

Bibliographic Details
Title: Looking at the posterior: on the origin of uncertainty in neural-network classification
Authors: Linander, H., Balabanov, O., Yang, H., Mehlig, B.
Publication Year: 2022
Subject Terms: Computer Science - Machine Learning, Computer Science - Computer Vision and Pattern Recognition, Statistics - Machine Learning
Description: Bayesian inference can quantify uncertainty in the predictions of neural networks using posterior distributions for model parameters and network output. By looking at these posterior distributions, one can separate the origin of uncertainty into aleatoric and epistemic. We use the joint distribution of predictive uncertainty and epistemic uncertainty to quantify how this interpretation of uncertainty depends upon model architecture, dataset complexity, and data distributional shifts in image classification tasks. We conclude that the origin of uncertainty is specific to each neural network, and that the quantification of the uncertainty induced by data distributional shifts depends on the complexity of the underlying dataset. Furthermore, we show that the joint distribution of predictive and epistemic uncertainty can be used to identify data domains where the model is most accurate. To arrive at these results, we use two common posterior approximation methods, Monte-Carlo dropout and deep ensembles, for fully-connected, convolutional and attention-based neural networks. ; Comment: 25 pages, 6 figures, 5 tables, 1 appendix
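The decomposition described above, splitting predictive uncertainty into aleatoric and epistemic parts from posterior samples, can be sketched as follows. This is a minimal illustration, not code from the paper: it assumes posterior samples of class probabilities (e.g. softmax outputs from Monte-Carlo dropout passes or deep-ensemble members) and uses the common entropy-based decomposition, where epistemic uncertainty is the mutual information between prediction and model parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated posterior samples of class probabilities: S stochastic forward
# passes (MC dropout) or S ensemble members, for N inputs and C classes.
S, N, C = 20, 4, 3
logits = rng.normal(size=(S, N, C))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

def entropy(p, eps=1e-12):
    """Shannon entropy (in nats) along the class axis."""
    return -np.sum(p * np.log(p + eps), axis=-1)

# Predictive (total) uncertainty: entropy of the posterior-mean probabilities.
mean_probs = probs.mean(axis=0)          # shape (N, C)
predictive = entropy(mean_probs)         # shape (N,)

# Aleatoric part: expected entropy over posterior samples.
aleatoric = entropy(probs).mean(axis=0)  # shape (N,)

# Epistemic part (mutual information): predictive minus aleatoric,
# non-negative by Jensen's inequality.
epistemic = predictive - aleatoric       # shape (N,)

print(predictive, epistemic)
```

Plotting `epistemic` against `predictive` for a dataset gives the joint distribution the abstract refers to; how mass moves in that plane under distributional shift is what the paper analyses.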
Document Type: text
Language: unknown
Relation: http://arxiv.org/abs/2211.14605
Availability: http://arxiv.org/abs/2211.14605
Accession Number: edsbas.1A8FF624
Database: BASE