Confidence estimation in deep neural networks
Author: Daniel Hari
Mentor: izr. prof. dr. Matej Rojc, univ. dipl. inž. el.
Co-mentor: Danilo Zimšek, univ. dipl. inž. tel.
Degree: 2.
Date: September 2020
DKUM: DANIEL HARI
The master’s thesis presents approaches for estimating the confidence of deep neural networks in the task of digit recognition.
The thesis focuses mainly on two approaches: Bayesian training and the Dropout-layer technique. Bayesian learning is the more mathematically demanding procedure, because it treats each weight of the neural network as a probability distribution rather than a deterministically determined value. In the Dropout-layer technique, a stochastic Dropout layer is added after each hidden layer of the network, so that each output of the model can be viewed as a random sample drawn from the posterior distribution. This procedure is less computationally demanding but gives comparable results.
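The Dropout-sampling idea described above can be sketched in a few lines: keep Dropout active at inference time and run many stochastic forward passes, then read the mean of the sampled outputs as the prediction and their spread as an uncertainty estimate. The network below is a hypothetical toy two-layer model with random weights, not the architecture used in the thesis; it only illustrates the sampling mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy weights for a 2-layer MLP (shapes chosen for 28x28 digits, 10 classes).
W1 = rng.normal(scale=0.05, size=(784, 64)); b1 = np.zeros(64)
W2 = rng.normal(scale=0.05, size=(64, 10));  b2 = np.zeros(10)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, dropout_p=0.5):
    h = np.maximum(x @ W1 + b1, 0.0)            # hidden layer with ReLU
    mask = rng.random(h.shape) >= dropout_p     # Dropout stays ON at inference
    h = h * mask / (1.0 - dropout_p)            # inverted-dropout scaling
    return softmax(h @ W2 + b2)

def mc_dropout_predict(x, n_samples=100):
    # Each stochastic pass is a sample from the approximate posterior predictive.
    samples = np.stack([forward(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

x = rng.normal(size=(1, 784))                   # a dummy "digit" input
mean, std = mc_dropout_predict(x)
```

A large per-class standard deviation in `std` signals that the model's prediction for that class is unstable across samples, i.e. low confidence.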
Confidence estimation in neural networks is an approach that provides information about how certain a deep neural network is in its recognition or perception.
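As a minimal baseline for the certainty information described above, the softmax output of a classifier can itself be read as a confidence signal: the maximum class probability, or the entropy of the full distribution. The logits below are made-up values standing in for a digit classifier's raw output.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical raw logits for the 10 digit classes (illustrative values).
logits = np.array([0.1, 0.2, 6.0, 0.1, 0.0, 0.3, 0.2, 0.1, 0.4, 0.2])
probs = softmax(logits)

prediction = probs.argmax()                    # predicted digit
confidence = probs.max()                       # max softmax probability
entropy = -(probs * np.log(probs)).sum()       # predictive entropy (nats)
```

High maximum probability and low entropy indicate a confident prediction; this baseline is known to be overconfident, which is what the Bayesian and Dropout-sampling approaches in the thesis aim to improve on.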
Methods for determining neural network certainty
Experimental environment
The Bayesian training process
Sampling procedures with Dropout layers
Results