Dropout Sampling for Robust Object Detection in Open-Set Conditions
Investigates dropout sampling for object detection
Summary
This paper addresses object detection in open-set conditions. The authors show that dropout sampling can effectively reduce false positives for unknown classes (classes that were not in the training dataset) in the test dataset.
Methodology
Dropout sampling
Multiple forward passes are made on the same input with the dropout layers kept active, so we get multiple outputs for the same input. This is an approximate variational inference technique: it makes otherwise intractable Bayesian inference tractable.
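A minimal NumPy sketch of this sampling loop. The toy two-layer classifier, its random weights, and the dropout rate are all illustrative stand-ins, not the paper's detector; only the pattern (dropout active at test time, outputs averaged over passes) is the point:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy classifier head: 4 inputs, 8 hidden units, 3 classes.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def forward_with_dropout(x, p_drop=0.5):
    """One stochastic forward pass: dropout stays active at test time."""
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop  # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)        # inverted-dropout scaling
    return softmax(h @ W2)

def dropout_sampling(x, n=30):
    """Average n stochastic outputs: a Monte Carlo estimate of the
    posterior predictive class distribution."""
    samples = np.stack([forward_with_dropout(x) for _ in range(n)])
    return samples.mean(axis=0)

x = rng.normal(size=4)
probs = dropout_sampling(x)
print(probs)  # averaged class probabilities
```

Each call to `forward_with_dropout` corresponds to one weight sample; averaging the resulting distributions is what the Bayesian approximation below formalizes.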
Bayesian Perspective
Basic Idea: Model the network's weights as a distribution conditioned on the training data, instead of as a deterministic point estimate.
How: Place a prior over the weights, e.g. p(W). Network training can then be interpreted as determining a posterior over the weights given the training data, p(W | T), where T is the training data. However, evaluating this posterior exactly is intractable without approximation techniques.
What's Bayesian here: - Prior: p(W) - Likelihood: training the network is essentially likelihood estimation, p(T | W) - Posterior: the final trained weight distribution is the posterior, p(W | T)
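The three quantities above are tied together by Bayes' rule; writing W for the weights and T for the training data:

```latex
p(\mathbf{W} \mid \mathcal{T})
  = \frac{p(\mathcal{T} \mid \mathbf{W})\, p(\mathbf{W})}{p(\mathcal{T})}
  \;\propto\;
  \underbrace{p(\mathcal{T} \mid \mathbf{W})}_{\text{likelihood}}
  \,\underbrace{p(\mathbf{W})}_{\text{prior}}
```

The normalizer p(T) requires integrating over all possible weights, which is what makes the posterior intractable and motivates the dropout-based approximation.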
Approximation to Intractable Inference: Let x be the input to the network; then the Bayesian inference of the output y is approximated by:

p(y | x, T) ≈ (1/n) Σᵢ₌₁ⁿ p(y | x, Wᵢ)

where n is the number of inference passes and p(y | x, Wᵢ) is the output of the i-th pass, in which the weight sample Wᵢ is drawn using dropout.
Final: The entropy of the averaged classification output is then thresholded to separate unknown from known classes. If the entropy is high, the detection most likely belongs to an unknown (outlier) class.
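The thresholding step can be sketched as follows. The threshold value and the example distributions are illustrative choices, not numbers from the paper:

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy (natural log) of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    return float(-(p * np.log(p + eps)).sum())

def is_unknown(avg_probs, threshold):
    """Flag a detection as an unknown/outlier class when the averaged
    class distribution is too uncertain, i.e. has high entropy.
    The threshold is a tuning parameter chosen for illustration."""
    return entropy(avg_probs) > threshold

confident = [0.95, 0.03, 0.02]  # peaked distribution, low entropy
uncertain = [0.34, 0.33, 0.33]  # near-uniform, high entropy
print(is_unknown(confident, threshold=0.7))  # False -> known class
print(is_unknown(uncertain, threshold=0.7))  # True  -> likely unknown
```

A near-uniform distribution over k classes has entropy close to ln(k), the maximum, so near-uniform averaged outputs are the ones rejected as unknown.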
Insights
Taking the entropy over the average of the sampled outputs is better than using a single output, since the average gives a better approximation of the posterior predictive distribution.