Can We Trust You? On Calibration of Probabilistic Object Detector for Autonomous Driving
https://arxiv.org/abs/1906.02530
This paper discusses the idea of calibrating the uncertainties/probabilities predicted by probabilistic object detectors. It presents three calibration methods: isotonic regression, temperature scaling, and a calibration loss.
A predicted probability is said to be calibrated if it matches the empirical probability. For example, if the detector makes predictions with probability 0.9, then 90% of those predictions should be correct. In other words, the uncertainty of the detector should match the natural frequency of its errors.
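To make this concrete, here is a minimal sketch (not from the paper) of how calibration can be checked empirically: bin detections by predicted confidence and compare each bin's mean confidence against the fraction of correct detections. The arrays `confidences` and `correct` are hypothetical stand-ins for detector outputs and correctness labels.

```python
import numpy as np

def empirical_calibration(confidences, correct, n_bins=10):
    """Bin detections by predicted confidence and compare the mean
    confidence in each bin with the empirical accuracy (fraction of
    correct detections). A calibrated detector gives matching values."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences >= lo) & (confidences < hi)
        if mask.any():
            print(f"[{lo:.1f}, {hi:.1f}): predicted {confidences[mask].mean():.2f}, "
                  f"empirical {correct[mask].mean():.2f}")

# Toy example: a detector that is systematically overconfident.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=1000)
hit = (rng.random(1000) < conf - 0.15).astype(float)  # true accuracy lags confidence
empirical_calibration(conf, hit)
```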
Let $\hat{p}$ be the predicted probability from the network. Here we learn an auxiliary model based on isotonic regression, i.e., a monotone map $R: \hat{p} \mapsto p$. The model is fit on a recalibration dataset drawn from the validation set, defined as $\{(\hat{p}_i, \tilde{p}_i)\}_{i=1}^{N}$, where $N$ is the total length of the validation dataset and $\tilde{p}_i$ refers to the empirical probability. See the paper for how the empirical probability is calculated.
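As an illustration, isotonic recalibration can be implemented with scikit-learn's `IsotonicRegression`; the `(p_hat, p_emp)` pairs below are made-up stand-ins for the paper's recalibration dataset.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Hypothetical recalibration data from a held-out validation split:
# p_hat are the detector's predicted probabilities, p_emp the
# corresponding empirical probabilities.
p_hat = np.array([0.55, 0.60, 0.70, 0.80, 0.90, 0.95])
p_emp = np.array([0.40, 0.45, 0.52, 0.63, 0.75, 0.85])

# Fit the auxiliary monotone map R: p_hat -> p.
R = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
R.fit(p_hat, p_emp)

# At test time, pass raw detector probabilities through R.
print(R.predict(np.array([0.65, 0.92])))
```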
Simply, $\hat{p} = \mathrm{softmax}(z / T)$, where the optimal value of the temperature $T$ is learned by minimizing the negative log-likelihood score on the recalibration dataset.
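A minimal sketch of temperature scaling, assuming we have pre-softmax logits and ground-truth labels from the recalibration set; the toy `logits`/`labels` arrays below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import log_softmax

# Hypothetical recalibration set: pre-softmax logits and true labels.
logits = np.array([[2.0, 0.1], [1.5, 0.3], [0.2, 1.8], [3.0, 0.5]])
labels = np.array([0, 0, 1, 0])

def nll(T):
    """Negative log-likelihood of the labels under softmax(logits / T)."""
    logp = log_softmax(logits / T, axis=1)
    return -logp[np.arange(len(labels)), labels].mean()

# Search for the single scalar temperature that minimizes the NLL.
res = minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded")
print(f"optimal T = {res.x:.2f}")  # calibrated probs = softmax(logits / T)
```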
The third method simply adds a calibration loss to the original regression loss, so the network is trained to produce calibrated uncertainties directly.
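A rough sketch of what such a combined objective could look like, assuming a Gaussian NLL regression loss and a calibration term that pulls the predicted variance toward the observed squared residual; the paper's exact calibration loss may differ, and `lam` is a hypothetical weighting hyperparameter.

```python
import torch

def regression_with_calibration_loss(mu, log_var, target, lam=1.0):
    """Gaussian NLL regression loss plus a calibration penalty.

    The penalty pushes the predicted variance exp(log_var) toward the
    observed squared residual, so uncertainty tracks the actual error.
    The residual is detached so the penalty only trains the variance head.
    """
    var = log_var.exp()
    nll = 0.5 * (log_var + (target - mu) ** 2 / var)      # standard Gaussian NLL
    calib = (var - (target - mu).detach() ** 2).abs()     # calibration term
    return (nll + lam * calib).mean()

# Toy usage: a batch of 4 boxes with 4 regression targets each.
mu = torch.randn(4, 4, requires_grad=True)
log_var = torch.zeros(4, 4, requires_grad=True)
target = torch.randn(4, 4)
loss = regression_with_calibration_loss(mu, log_var, target)
loss.backward()
print(loss.item())
```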
In the paper's experiments, isotonic regression performs best of the three methods, and all three outperform the uncalibrated vanilla probabilistic detector.