Sunday, May 26, 2013

Hessian Matrix and Jacobian Matrix

Hessian Matrix
In mathematics, the Hessian matrix or Hessian is a square matrix of second-order partial derivatives of a function. It describes the local curvature of a function of many variables. The Hessian matrix was developed in the 19th century by the German mathematician Ludwig Otto Hesse and later named after him. Hesse originally used the term "functional determinants".
Given the real-valued function
f(x_1, x_2, \dots, x_n),
if all second partial derivatives of f exist and are continuous over the domain of the function, then the Hessian matrix of f is
H(f)_{ij}(\mathbf x) = D_i D_j f(\mathbf x)
where \mathbf x = (x_1, x_2, \dots, x_n) and D_i is the differentiation operator with respect to the i-th argument. Thus
H(f) = \begin{bmatrix}
\dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_1\,\partial x_n} \\[2.2ex]
\dfrac{\partial^2 f}{\partial x_2\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} & \cdots & \dfrac{\partial^2 f}{\partial x_2\,\partial x_n} \\[2.2ex]
\vdots & \vdots & \ddots & \vdots \\[2.2ex]
\dfrac{\partial^2 f}{\partial x_n\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_n\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2}
\end{bmatrix}.
Because f is often clear from context, H(f)(\mathbf x) is frequently abbreviated to H(\mathbf x).
The Hessian matrix is related to the Jacobian matrix by H(f)(\mathbf x) = J(\nabla \! f)(\mathbf x).
The determinant of the above matrix is also sometimes referred to as the Hessian.[1]
Hessian matrices are used in large-scale optimization problems within Newton-type methods because they are the coefficient of the quadratic term of a local Taylor expansion of a function. That is,
y=f(\mathbf{x}+\Delta\mathbf{x})\approx f(\mathbf{x}) + J(\mathbf{x})\Delta \mathbf{x} +\frac{1}{2} \Delta\mathbf{x}^\mathrm{T} H(\mathbf{x}) \Delta\mathbf{x}
where J is the Jacobian matrix, which is a vector (the gradient) for scalar-valued functions. The full Hessian matrix can be difficult to compute in practice; in such situations, quasi-Newton algorithms have been developed that use approximations to the Hessian. The best-known quasi-Newton algorithm is the BFGS algorithm.
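As a concrete illustration (added here, not part of the original text), take the quadratic f(x_1, x_2) = x_1^2 + 3 x_1 x_2 + 4 x_2^2. Its gradient and its constant, symmetric, positive-definite Hessian are
\nabla f(\mathbf x) = \begin{bmatrix} 2x_1 + 3x_2 \\ 3x_1 + 8x_2 \end{bmatrix}, \qquad H(f) = \begin{bmatrix} 2 & 3 \\ 3 & 8 \end{bmatrix},
and the Newton step \mathbf x_{k+1} = \mathbf x_k - H(f)^{-1} \nabla f(\mathbf x_k) reaches the minimizer \mathbf x = \mathbf 0 in a single iteration, because for a quadratic the second-order Taylor expansion above is exact.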
Jacobian Matrix

In vector calculus, the Jacobian matrix is the matrix of all first-order partial derivatives of a vector-valued function. Specifically, suppose F : \mathbb{R}^n \rightarrow \mathbb{R}^m is a function (which takes as input real n-tuples and produces as output real m-tuples). Such a function is given by m real-valued component functions, F_1(x_1,\ldots,x_n),\ldots,F_m(x_1,\ldots,x_n). The partial derivatives of all these functions with respect to the variables x_1,\ldots,x_n (if they exist) can be organized in an m-by-n matrix, the Jacobian matrix J of F, as follows:
J=\begin{bmatrix} \dfrac{\partial F_1}{\partial x_1} & \cdots & \dfrac{\partial F_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial F_m}{\partial x_1} & \cdots & \dfrac{\partial F_m}{\partial x_n}  \end{bmatrix}.
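For example (an illustration added here), the polar-to-Cartesian map F(r, \theta) = (r\cos\theta, r\sin\theta) has the 2-by-2 Jacobian
J = \begin{bmatrix} \dfrac{\partial x}{\partial r} & \dfrac{\partial x}{\partial \theta} \\[1ex] \dfrac{\partial y}{\partial r} & \dfrac{\partial y}{\partial \theta} \end{bmatrix} = \begin{bmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{bmatrix}, \qquad \det J = r,
whose determinant r is the familiar area factor in polar-coordinate integrals.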


Friday, May 3, 2013

Implementation of KalmanFilter in OpenCV

Two functions used: gemm, solve
gemm: Performs generalized matrix multiplication.
C++: void gemm(InputArray src1, InputArray src2, double alpha, InputArray src3, double gamma, OutputArray dst, int flags=0 )
The function performs generalized matrix multiplication similar to the gemm functions in BLAS level 3. For example, gemm(src1, src2, alpha, src3, beta, dst, GEMM_1_T + GEMM_3_T) corresponds to
\texttt{dst} =  \texttt{alpha} \cdot \texttt{src1} ^T  \cdot \texttt{src2} +  \texttt{beta} \cdot \texttt{src3} ^T
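A minimal sketch of how gemm can drive the covariance prediction step of a Kalman filter, P' = A P A^T + Q (the 2x2 matrices A, P, Q below are made-up illustration values; only cv::gemm itself is the OpenCV API):

#include <opencv2/core/core.hpp>
#include <iostream>

int main()
{
    // Illustrative constant-velocity transition matrix, current covariance,
    // and process noise (placeholder values).
    cv::Mat A = (cv::Mat_<double>(2, 2) << 1, 1,
                                           0, 1);
    cv::Mat P = cv::Mat::eye(2, 2, CV_64F);
    cv::Mat Q = 0.01 * cv::Mat::eye(2, 2, CV_64F);

    cv::Mat AP, Ppred;
    cv::gemm(A, P, 1.0, cv::noArray(), 0.0, AP);          // AP = A * P
    cv::gemm(AP, A, 1.0, Q, 1.0, Ppred, cv::GEMM_2_T);    // Ppred = AP * A^T + Q

    std::cout << Ppred << std::endl;
    return 0;
}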

solve: Solves one or more linear systems or least-squares problems.
C++: bool solve(InputArray src1, InputArray src2, OutputArray dst, int flags=DECOMP_LU)
  • flags – solution (matrix inversion) method:
    • DECOMP_LU Gaussian elimination with optimal pivot element chosen.
    • DECOMP_CHOLESKY Cholesky LL^T factorization; the matrix src1 must be symmetric and positive-definite.
    • DECOMP_EIG eigenvalue decomposition; the matrix src1 must be symmetric.
    • DECOMP_SVD singular value decomposition (SVD) method; the system can be over-determined and/or the matrix src1 can be singular.
    • DECOMP_QR QR factorization; the system can be over-determined and/or the matrix src1 can be singular.
    • DECOMP_NORMAL while all the previous flags are mutually exclusive, this flag can be used together with any of them; it means that the normal equations \texttt{src1}^T \cdot \texttt{src1} \cdot \texttt{dst} = \texttt{src1}^T \cdot \texttt{src2} are solved instead of the original system \texttt{src1} \cdot \texttt{dst} = \texttt{src2}.
The function solve solves a linear system or least-squares problem (the latter is possible with SVD or QR methods, or by specifying the flag DECOMP_NORMAL ):
\texttt{dst} =  \arg \min _X \| \texttt{src1} \cdot \texttt{X} -  \texttt{src2} \|
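A minimal sketch of solve used as a least-squares solver (the four data points and the line model below are made up for illustration; only cv::solve and DECOMP_SVD are the OpenCV API):

#include <opencv2/core/core.hpp>
#include <iostream>

int main()
{
    // Over-determined system: fit y = a*x + b to four points by minimizing
    // || src1 * X - src2 ||, where X = [a; b].
    cv::Mat src1 = (cv::Mat_<double>(4, 2) << 0, 1,
                                              1, 1,
                                              2, 1,
                                              3, 1);
    cv::Mat src2 = (cv::Mat_<double>(4, 1) << 1.1, 2.9, 5.2, 6.8);

    cv::Mat X;
    bool ok = cv::solve(src1, src2, X, cv::DECOMP_SVD);  // SVD handles non-square src1

    std::cout << "solved: " << ok << ", X = " << X << std::endl;
    return 0;
}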

OpenCV: SURF Matching in a Video Sequence

OpenCV: SURF Feature matching

  1. Load two images
  2. Do SURF feature extraction
  3. Use FLANN-based matching to match the keypoints
  4. Identify good matches
  5. Find the object in the scene image (see the code sketch after this list)
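A minimal sketch of these steps against the OpenCV 2.4-era API (the file names, the SURF Hessian threshold of 400, and the good-match threshold are placeholder assumptions; SURF lives in the nonfree module):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <algorithm>
#include <vector>

int main()
{
    // 1. Load the object image and the scene image (placeholder file names).
    cv::Mat img_object = cv::imread("object.png", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat img_scene  = cv::imread("scene.png",  CV_LOAD_IMAGE_GRAYSCALE);

    // 2. SURF keypoints and descriptors.
    cv::SurfFeatureDetector detector(400);              // Hessian threshold (assumed)
    std::vector<cv::KeyPoint> kp_object, kp_scene;
    detector.detect(img_object, kp_object);
    detector.detect(img_scene,  kp_scene);

    cv::SurfDescriptorExtractor extractor;
    cv::Mat desc_object, desc_scene;
    extractor.compute(img_object, kp_object, desc_object);
    extractor.compute(img_scene,  kp_scene,  desc_scene);

    // 3. FLANN-based matching.
    cv::FlannBasedMatcher matcher;
    std::vector<cv::DMatch> matches;
    matcher.match(desc_object, desc_scene, matches);

    // 4. Keep matches whose distance is small relative to the best one.
    double min_dist = 100;
    for (size_t i = 0; i < matches.size(); ++i)
        min_dist = std::min(min_dist, (double)matches[i].distance);
    std::vector<cv::DMatch> good_matches;
    for (size_t i = 0; i < matches.size(); ++i)
        if (matches[i].distance <= std::max(2 * min_dist, 0.02))
            good_matches.push_back(matches[i]);

    // 5. Locate the object: homography from the matched keypoint coordinates.
    std::vector<cv::Point2f> obj_pts, scene_pts;
    for (size_t i = 0; i < good_matches.size(); ++i) {
        obj_pts.push_back(kp_object[good_matches[i].queryIdx].pt);
        scene_pts.push_back(kp_scene[good_matches[i].trainIdx].pt);
    }
    cv::Mat H = cv::findHomography(obj_pts, scene_pts, CV_RANSAC);
    return 0;
}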

Wednesday, May 1, 2013

OpenCV: SURF Feature extractor

Main steps:
  1. Read two images
  2. Resize each to half its size
  3. Detect keypoints using SURF
  4. Compute feature descriptors
  5. Match descriptors using a brute-force matcher (see the code sketch after this list)
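A minimal sketch of the extractor steps (placeholder file names and an assumed SURF Hessian threshold of 400; requires the OpenCV nonfree module that provides SURF):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>
#include <vector>

int main()
{
    // 1-2. Read the two images and shrink them to half size.
    cv::Mat img1 = cv::imread("img1.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat img2 = cv::imread("img2.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    cv::resize(img1, img1, cv::Size(), 0.5, 0.5);
    cv::resize(img2, img2, cv::Size(), 0.5, 0.5);

    // 3-4. SURF keypoints and descriptors.
    cv::SurfFeatureDetector detector(400);     // Hessian threshold (assumed)
    std::vector<cv::KeyPoint> kp1, kp2;
    detector.detect(img1, kp1);
    detector.detect(img2, kp2);

    cv::SurfDescriptorExtractor extractor;
    cv::Mat desc1, desc2;
    extractor.compute(img1, kp1, desc1);
    extractor.compute(img2, kp2, desc2);

    // 5. Brute-force matching (L2 norm suits SURF's float descriptors).
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);
    return 0;
}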
Figures: original image, SURF keypoints, SURF matches.