
The Max-Margin Hough Transform (computer vision) is a feature extraction technique used for object detection in image analysis and digital image processing. It places the Hough transform in a discriminative framework in which local parts probabilistically vote for the location of the object, and the vote weights are learned in a max-margin framework to optimize classification performance.


Introduction


Various object detection techniques have been used over the years, such as the sliding-window classifier, the constellation model, and the implicit shape model. The Hough transform is another such method and has been applied to various pose estimation problems, including shape detection. When the Hough transform is placed in a discriminative framework, each local part casts a weighted vote for the possible locations of the object center. The learning framework takes into account both the appearance of a part and the spatial distribution of its position with respect to the object center, so parts whose appearance and spatial position recur at consistent locations are assigned higher weights than the rest. The globally optimal weights can be obtained using off-the-shelf optimization packages. This approach is known as the Max-Margin Hough Transform (M²HT). The parts and the probability distribution of the object locations are treated as a black box, which allows weights to be learned for the popular implicit shape model.

Theory


Let <math>f_x</math> denote the features observed at a location <math>x</math>, which could be based on the properties of the local patch around <math>x</math>. Let <math>S(O, x)</math> denote the score of object <math>O</math> at a location <math>x</math>; here <math>x</math> denotes pose-related properties such as position, scale, and aspect ratio. Let <math>C_i</math> denote the i'th codebook entry of the vector-quantized space of features <math>f</math>.
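The vector quantization step assigns each observed feature to codebook entries. A minimal sketch of a soft assignment <math>p(C_i \mid f)</math> is shown below; the softmax-over-distances form, the `beta` parameter, and the function name are illustrative assumptions, not the specific estimator used in the original paper:

```python
import numpy as np

def soft_assign(features, codebook, beta=1.0):
    """p(C_i | f): soft-assign each feature vector to codebook entries
    via a softmax over negative squared distances (an illustrative choice)."""
    # d[j, i] = ||f_j - C_i||^2 for every feature j and codebook entry i
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    p = np.exp(-beta * d)
    return p / p.sum(axis=1, keepdims=True)  # rows sum to 1
```

Each row of the result is a probability distribution over codebook entries, with nearby entries receiving more mass.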

The overall procedure of the probabilistic Hough transform can be seen as a weighted vote for object locations accumulated over all codebook entries <math>C_i</math>. In the Max-Margin Hough Transform, by contrast, the weights are learned in a discriminative manner that optimizes classification performance. The main idea is that, assuming the probability <math>p(C_i \mid f)</math> is independent of the location (location invariance), the score <math>S(O, x)</math> is a linear function of the codebook activations:

<math>S(O, x) = \sum_i w_i a_i(x) = \mathbf{w}^\top \mathbf{a}(x)</math>

where <math>\mathbf{a}(x)</math> is the activation vector, whose i'th entry is given by the following equation (with <math>l_j</math> denoting the location of the j'th observed feature <math>f_j</math>):

<math>a_i(x) = \sum_j p(O, x \mid C_i, l_j)\, p(C_i \mid f_j)</math>

This algorithm finds weights <math>\mathbf{w}</math> that maximize the score <math>S</math> at correct object locations over incorrect ones. Rather than estimating <math>\mathbf{w}</math> based only on codebook activations, the conditional distribution of the object centers can be used to learn the weights.[1]
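The voting and scoring steps above can be sketched as follows. This is a simplified illustration, not the paper's implementation: each codebook entry is assumed to store a single mean offset to the object center, and the vote <math>p(O, x \mid C_i, l_j)</math> is modeled as an isotropic Gaussian around the predicted center; all names and parameters are illustrative.

```python
import numpy as np

def activations(feature_locs, p_codebook, mean_offsets, grid_shape, sigma=2.0):
    """Accumulate per-codebook-entry activations a_i(x) on a location grid.

    p_codebook[j, i]  -- p(C_i | f_j), soft assignment of feature j to entry i
    mean_offsets[i]   -- assumed mean offset from a part of entry i to the center
    """
    n_entries = p_codebook.shape[1]
    a = np.zeros((n_entries,) + grid_shape)
    ys, xs = np.mgrid[0:grid_shape[0], 0:grid_shape[1]]
    for j, (ly, lx) in enumerate(feature_locs):
        for i in range(n_entries):
            cy, cx = ly + mean_offsets[i, 0], lx + mean_offsets[i, 1]
            # Gaussian vote p(O, x | C_i, l_j), centered at the predicted center
            vote = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
            a[i] += p_codebook[j, i] * vote
    return a

def score(a, w):
    """M2HT score S(O, x) = sum_i w_i * a_i(x): a weighted sum of activations."""
    return np.tensordot(w, a, axes=1)
```

With uniform weights <math>w_i = 1</math> this reduces to the plain (unweighted) probabilistic Hough vote; the learned, non-uniform weights are what distinguish M²HT.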

Discriminative Training


Let <math>\{(y_i, x_i)\}_{i=1}^{N}</math> be the set of training examples, where <math>y_i \in \{+1, -1\}</math> is the label and <math>x_i</math> is the location of the i'th training instance.


The first stage is to compute the activations <math>\mathbf{a}_i</math> for each example by carrying out the voting process and accumulating the votes for each feature found at its location, according to the equation in the section above. The score assigned by the model to instance <math>i</math> is then <math>\mathbf{w}^\top \mathbf{a}_i</math>. Weights are learned by maximizing this score at correct locations of the object over incorrect ones. To be robust to outliers and avoid overfitting, a max-margin formulation leads to the following optimization problem:

<math>\min_{\mathbf{w}, b, \boldsymbol{\xi}} \; \frac{1}{2}\|\mathbf{w}\|^2 + C \sum_{i=1}^{N} \xi_i</math>

subject to <math>y_i(\mathbf{w}^\top \mathbf{a}_i + b) \ge 1 - \xi_i</math>, where <math>\mathbf{w} \ge 0</math> and <math>\xi_i \ge 0</math> for all <math>i = 1, 2, \ldots, N</math>.
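The constrained problem is a quadratic program that the article notes can be handed to off-the-shelf optimization packages. As a self-contained illustration, the sketch below instead minimizes the equivalent hinge-loss objective by projected subgradient descent, with the projection enforcing the non-negativity constraint on the weights; the solver choice and all names are assumptions for the sketch, not the paper's method.

```python
import numpy as np

def train_m2ht_weights(A, y, C=1.0, lr=0.01, n_iter=500):
    """Learn non-negative weights w (and bias b) by minimizing
    0.5*||w||^2 + C * sum_i max(0, 1 - y_i*(w.a_i + b))
    with projected subgradient descent (projection keeps w >= 0).

    A -- (N, d) matrix whose rows are activation vectors a_i
    y -- (N,) labels in {+1, -1}
    """
    n, d = A.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(n_iter):
        margins = y * (A @ w + b)
        active = margins < 1  # examples currently violating the margin
        grad_w = w - C * (y[active, None] * A[active]).sum(axis=0)
        grad_b = -C * y[active].sum()
        w -= lr * grad_w
        b -= lr * grad_b
        w = np.maximum(w, 0.0)  # project onto the feasible set w >= 0
    return w, b
```

The non-negativity of <math>\mathbf{w}</math> matters because the activations are accumulated probabilities: a negative weight would let a part vote against locations it was never observed at.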

References

  1. ^ Maji, Subhransu; Malik, Jitendra (2009). "Object Detection using a Max-Margin Hough Transform". Computer Vision and Pattern Recognition: 1038–1045. doi:10.1109/CVPR.2009.5206693. ISBN 978-1-4244-3992-8.

Category:Image processing Category:Artificial intelligence