
Thursday, December 26, 2013

SURF and BRISK in OpenCV

Today I spent half a day comparing two feature detectors and descriptors: SURF and BRISK.

The conclusions are:

  • SURF is more accurate but takes much longer.
  • BRISK is about 10 times faster, with comparable accuracy.
One thing to notice: when using the brute-force matcher, mind the normType (see the BFMatcher documentation):
C++: BFMatcher::BFMatcher(int normType=NORM_L2, bool crossCheck=false )
Parameters:
* normType – One of NORM_L1, NORM_L2, NORM_HAMMING, NORM_HAMMING2. The L1 and L2 norms are preferable choices for SIFT and SURF descriptors; NORM_HAMMING should be used with ORB, BRISK and BRIEF; NORM_HAMMING2 should be used with ORB when WTA_K==3 or 4.

The other thing to notice is the threshold used to select good matches; RANSAC could also be used to reject outliers.
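As a quick reference for the normType point above, here is a minimal sketch (assuming img1 and img2 are two already-loaded grayscale Mat images) of matching BRISK descriptors with NORM_HAMMING and SURF descriptors with NORM_L2:

#include <vector>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/nonfree/features2d.hpp"   // SURF lives in the nonfree module in OpenCV 2.4
using namespace cv;

void matchBriskAndSurf( const Mat& img1, const Mat& img2 )
{
    std::vector<KeyPoint> kp1, kp2;
    Mat desc1, desc2;
    std::vector<DMatch> matches;

    // BRISK produces binary descriptors, so the Hamming norm is the right choice.
    BRISK brisk;
    brisk.detect( img1, kp1 );   brisk.compute( img1, kp1, desc1 );
    brisk.detect( img2, kp2 );   brisk.compute( img2, kp2, desc2 );
    BFMatcher hammingMatcher( NORM_HAMMING );
    hammingMatcher.match( desc1, desc2, matches );

    // SURF produces floating-point descriptors, so use the L2 (or L1) norm.
    SURF surf( 400 );            // Hessian threshold
    surf.detect( img1, kp1 );    surf.compute( img1, kp1, desc1 );
    surf.detect( img2, kp2 );    surf.compute( img2, kp2, desc2 );
    BFMatcher l2Matcher( NORM_L2 );
    l2Matcher.match( desc1, desc2, matches );
}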
SURF feature
BRISK feature
 



Saturday, November 2, 2013

Finds an object pose from 3D-2D point correspondences

C++: bool solvePnP(InputArray objectPoints, InputArray imagePoints, InputArray cameraMatrix, InputArray distCoeffs, OutputArray rvec, OutputArray tvec, bool useExtrinsicGuess=false, int flags=ITERATIVE )


Parameters:
  • objectPoints – Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3f> can be also passed here.
  • imagePoints – Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2f> can be also passed here.
  • cameraMatrix – Input camera matrix A = [fx, 0, cx; 0, fy, cy; 0, 0, 1].
  • distCoeffs – Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
  • rvec – Output rotation vector (see Rodrigues() ) that, together with tvec , brings points from the model coordinate system to the camera coordinate system.
  • tvec – Output translation vector.
  • useExtrinsicGuess – If true (1), the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.
  • flags –
    Method for solving a PnP problem:
    • CV_ITERATIVE Iterative method is based on Levenberg-Marquardt optimization. In this case the function finds such a pose that minimizes reprojection error, that is the sum of squared distances between the observed projections imagePoints and the projected (using projectPoints() ) objectPoints .
    • CV_P3P Method is based on the paper of X.S. Gao, X.-R. Hou, J. Tang, H.-F. Chang “Complete Solution Classification for the Perspective-Three-Point Problem”. In this case the function requires exactly four object and image points.
    • CV_EPNP Method has been introduced by F.Moreno-Noguer, V.Lepetit and P.Fua in the paper “EPnP: Efficient Perspective-n-Point Camera Pose Estimation”.
The function estimates the object pose given a set of object points, their corresponding image projections, as well as the camera matrix and the distortion coefficients.
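A minimal sketch of calling solvePnP follows; the 3D model points, image points, and camera intrinsics below are hypothetical placeholders.

#include <vector>
#include "opencv2/core/core.hpp"
#include "opencv2/calib3d/calib3d.hpp"
using namespace cv;

int main()
{
    // Hypothetical planar model points (object coordinate space) and their observed projections.
    std::vector<Point3f> objectPoints;
    objectPoints.push_back( Point3f( 0, 0, 0 ) );
    objectPoints.push_back( Point3f( 1, 0, 0 ) );
    objectPoints.push_back( Point3f( 1, 1, 0 ) );
    objectPoints.push_back( Point3f( 0, 1, 0 ) );

    std::vector<Point2f> imagePoints;
    imagePoints.push_back( Point2f( 320, 240 ) );
    imagePoints.push_back( Point2f( 420, 245 ) );
    imagePoints.push_back( Point2f( 425, 345 ) );
    imagePoints.push_back( Point2f( 325, 340 ) );

    // Hypothetical camera matrix A = [fx 0 cx; 0 fy cy; 0 0 1] and zero distortion.
    Mat cameraMatrix = (Mat_<double>(3,3) << 800, 0, 320,   0, 800, 240,   0, 0, 1);
    Mat distCoeffs   = Mat::zeros( 4, 1, CV_64F );

    // Estimate the pose with the default iterative (Levenberg-Marquardt) method.
    Mat rvec, tvec;
    solvePnP( objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec );

    // The rotation vector can be converted to a 3x3 rotation matrix with Rodrigues().
    Mat R;
    Rodrigues( rvec, R );
    return 0;
}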

Thursday, August 15, 2013

Image stitching in OpenCV

Reference: http://ramsrigoutham.com/2012/11/22/panorama-image-stitching-in-opencv/

Main steps of the code (a rough sketch of the last two steps follows the list):
  1. Load two images;
  2. Convert them to grayscale;
  3. Detect keypoints and compute SURF descriptors in both images;
  4. Match the SURF descriptors using the FLANN matcher;
  5. Post-process the matches to keep only the good ones;
  6. Estimate the homography matrix from the matched SURF keypoints using RANSAC;
  7. Warp the images based on the homography matrix.
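A rough sketch of steps 6 and 7, assuming good_matches, keypoints_1, keypoints_2, and the two loaded images already exist from the earlier steps:

#include <vector>
#include "opencv2/core/core.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;

Mat stitchPair( const Mat& img1, const Mat& img2,
                const std::vector<KeyPoint>& keypoints_1,
                const std::vector<KeyPoint>& keypoints_2,
                const std::vector<DMatch>& good_matches )
{
    // Collect the coordinates of the matched keypoints.
    std::vector<Point2f> pts1, pts2;
    for( size_t i = 0; i < good_matches.size(); i++ )
    {
        pts1.push_back( keypoints_1[ good_matches[i].queryIdx ].pt );
        pts2.push_back( keypoints_2[ good_matches[i].trainIdx ].pt );
    }

    // Step 6: estimate the homography that maps image 2 into image 1's frame (RANSAC rejects outliers).
    Mat H = findHomography( pts2, pts1, CV_RANSAC );

    // Step 7: warp image 2 into image 1's frame, then paste image 1 into the left part of the canvas.
    Mat result;
    warpPerspective( img2, result, H, Size( img1.cols + img2.cols, img1.rows ) );
    Mat left( result, Rect( 0, 0, img1.cols, img1.rows ) );
    img1.copyTo( left );
    return result;
}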

The image below shows the definition of the homography, which maps points on a 2D plane to the image plane.


The following image shows the two input images and their matched SURF features. The homography is estimated from the left image to the right image.


Stitching result.

Source Code

Friday, May 3, 2013

OpenCV: SURF Matching in a Video Sequence

OpenCV: SURF Feature matching

  1. Load two images;
  2. Extract SURF features from both;
  3. Match the keypoints using the FLANN matcher;
  4. Identify good matches;
  5. Find the object in the scene image (a sketch of this step follows).
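A minimal sketch of step 5, locating the object in the scene frame, assuming the keypoints, the good matches, and the object image img_object already exist from the previous steps (names are placeholders):

#include <vector>
#include "opencv2/core/core.hpp"
#include "opencv2/calib3d/calib3d.hpp"
using namespace cv;

std::vector<Point2f> locateObject( const Mat& img_object,
                                   const std::vector<KeyPoint>& keypoints_object,
                                   const std::vector<KeyPoint>& keypoints_scene,
                                   const std::vector<DMatch>& good_matches )
{
    // Coordinates of the matched keypoints in the object image and the scene frame.
    std::vector<Point2f> obj, scene;
    for( size_t i = 0; i < good_matches.size(); i++ )
    {
        obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
        scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
    }

    // Homography from the object image to the scene frame (RANSAC rejects bad matches).
    Mat H = findHomography( obj, scene, CV_RANSAC );

    // Project the object's corners into the scene to get its bounding quadrilateral.
    std::vector<Point2f> obj_corners(4), scene_corners(4);
    obj_corners[0] = Point2f( 0, 0 );
    obj_corners[1] = Point2f( (float)img_object.cols, 0 );
    obj_corners[2] = Point2f( (float)img_object.cols, (float)img_object.rows );
    obj_corners[3] = Point2f( 0, (float)img_object.rows );
    perspectiveTransform( obj_corners, scene_corners, H );
    return scene_corners;
}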

Wednesday, May 1, 2013

OpenCV: SURF Feature extractor

Main steps (a sketch of the full pipeline follows the list):
  1. Read two images;
  2. Resize each to half of its original size;
  3. Detect keypoints using SURF;
  4. Calculate the feature descriptors;
  5. Match the descriptors using the brute-force matcher.
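A minimal sketch of the full pipeline under these steps (the image file names are placeholders):

#include <vector>
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/nonfree/features2d.hpp"   // SURF (nonfree module in OpenCV 2.4)
using namespace cv;

int main()
{
    // 1. Read two images (placeholder file names).
    Mat img1 = imread( "img1.jpg", CV_LOAD_IMAGE_GRAYSCALE );
    Mat img2 = imread( "img2.jpg", CV_LOAD_IMAGE_GRAYSCALE );
    if( img1.empty() || img2.empty() ) return -1;

    // 2. Resize each to half of its original size.
    resize( img1, img1, Size(), 0.5, 0.5 );
    resize( img2, img2, Size(), 0.5, 0.5 );

    // 3-4. Detect keypoints and compute SURF descriptors.
    SurfFeatureDetector detector( 400 );        // Hessian threshold
    SurfDescriptorExtractor extractor;
    std::vector<KeyPoint> kp1, kp2;
    Mat desc1, desc2;
    detector.detect( img1, kp1 );  extractor.compute( img1, kp1, desc1 );
    detector.detect( img2, kp2 );  extractor.compute( img2, kp2, desc2 );

    // 5. Match the descriptors with the brute-force matcher (L2 norm for SURF).
    BFMatcher matcher( NORM_L2 );
    std::vector<DMatch> matches;
    matcher.match( desc1, desc2, matches );

    // Visualize the matches.
    Mat img_matches;
    drawMatches( img1, kp1, img2, kp2, matches, img_matches );
    imshow( "SURF Matches", img_matches );
    waitKey( 0 );
    return 0;
}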
Original Image
 SURF Keypoints
SURF Matches


Sunday, April 28, 2013

SURF Detector

OpenCV Tutorial 1: Mat - The basic Image Container


Color Space In OpenCV
There are, however, many other color systems, each with its own advantages:
  • RGB is the most common, as our eyes use something similar, and our display systems also compose colors using it.
  • HSV and HLS decompose colors into their hue, saturation, and value/luminance components, which is a more natural way for us to describe colors. You might, for example, discard the last component, making your algorithm less sensitive to the lighting conditions of the input image.
  • YCrCb is used by the popular JPEG image format.
  • CIE L*a*b* is a perceptually uniform color space, which comes handy if you need to measure the distance of a given color to another color.
Sample Code
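A minimal sketch of converting an image between the color spaces listed above with cvtColor (the input file name is a placeholder):

#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;

int main()
{
    Mat bgr = imread( "input.jpg" );        // OpenCV loads color images in BGR order
    if( bgr.empty() ) return -1;

    Mat gray, hsv, ycrcb, lab;
    cvtColor( bgr, gray,  CV_BGR2GRAY );    // single-channel intensity
    cvtColor( bgr, hsv,   CV_BGR2HSV );     // hue, saturation, value
    cvtColor( bgr, ycrcb, CV_BGR2YCrCb );   // luma plus chroma, as used by JPEG
    cvtColor( bgr, lab,   CV_BGR2Lab );     // perceptually uniform CIE L*a*b*
    return 0;
}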

Tuesday, April 16, 2013

Camera Calibration tool box from Caltech

http://www.vision.caltech.edu/bouguetj/calib_doc/
This is a release of a Camera Calibration Toolbox for Matlab® with complete documentation. This document may also be used as a tutorial on camera calibration, since it includes general information about calibration, references, and related links.

Friday, February 22, 2013

opencv_performance.exe


NAME

       opencv_performance - evaluate the performance of the classifier

SYNOPSIS

       opencv_performance [options]

DESCRIPTION

       opencv_performance evaluates the performance of the classifier. It
       takes a collection of marked-up test images, applies the classifier,
       and outputs the performance, i.e. the number of found objects, the
       number of missed objects, the number of false alarms, and other
       information.

       When no such collection is available, test samples may be created from
       a single object image by the opencv_createsamples(1) utility. The
       scheme of test-sample creation in this case is similar to that of
       training samples.

       In the output, the table should be read as follows:

       'Hits' shows the number of correctly found objects

       'Missed'
              shows  the  number  of  missed  objects  (must exist but are not
              found, also known as false negatives)

       'False'
              shows the number of false alarms (must not exist but are  found,
              also known as false positives)

OPTIONS

       opencv_performance supports the following options:

       -data classifier_directory_name
              The directory, in which the classifier can be found.

       -info collection_file_name
              File with test samples description.

       -maxSizeDiff max_size_difference
              Determines the size criterion for a reference object and a
              detection to be counted as coincident.  The default is 1.500000.

       -maxPosDiff max_position_difference
              Determines the position criterion for a reference object and a
              detection to be counted as coincident.  The default is 0.300000.

       -sf scale_factor
              Scale  the  detection  window  in each iteration. The default is
              1.200000.

       -ni    Don't save the detection results to an image. This could be
              useful if collection_file_name contains paths.

       -nos number_of_stages
              Number  of  stages  to  use.  The  default is -1 (all stages are
              used).

       -rs roc_size
              The default is 40.

       -h sample_height
              The sample height (must have  the  same  value  as  used  during
              creation).  The default is 24.

       -w sample_width
              The  sample  width  (must  have  the  same  value as used during
              creation).  The default is 24.

       The same information is shown if opencv_performance is called without
       any arguments/options.

EXAMPLES

       To evaluate the performance of the classifier stored in trainout on
       the test collection described in tests.dat:

              opencv_performance -data trainout -info tests.dat

Thursday, September 6, 2012

Canny Edge on Webcam



From StackOverflow

  1. Remember that OpenCV works with BGR, so when you convert to grayscale, use CV_BGR2GRAY.
  2. Be careful with the thresholds in Canny: they should be different, with a ratio of 2 or 3 (recommended). Try 100 and 200, for example.
  3. Try to avoid printing in every loop iteration; that slows your code down a little.
  4. For filters, try not to use a big window. A size of 3 or 5 is usually fine (depending on your application). A size of 11 is probably not required.
  5. Consider using cv::Mat. It is far more flexible than IplImage, and there is no need to release images manually.
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
using namespace std;
using namespace cv;
int main(int, char**)
{
    namedWindow( "Edges", CV_WINDOW_NORMAL ); 
    CvCapture* capture = cvCaptureFromCAM(-1);

    cv::Mat frame; cv::Mat out; cv::Mat out2;

    while(1) {
        frame = cvQueryFrame( capture );

        GaussianBlur( frame, out, Size(5, 5), 0, 0 );
        cvtColor( out ,out2, CV_BGR2GRAY ); // produces out2, a one-channel image (CV_8UC1)
       Canny( out2, out2, 100, 200, 3 ); // the result goes to out2 again,but since it is still one channel it is fine

        if( !frame.data ) break;
        imshow( "Edges", out2 );

        char c = cvWaitKey(33);
        if( c == 'c' ) break;
    }
    return 0;
}

Converting between CvArr, Mat, CvMat, IplImage, and BYTE


Converting between CvArr, Mat, CvMat, IplImage, and BYTE (a summary)

http://blog.csdn.net/wuxiaoyao12/article/details/7305848
1. The Mat type: the matrix type (Matrix).
    In OpenCV, Mat is a multi-dimensional dense data array. It can be used to handle vectors, matrices, images, histograms, and other common multi-dimensional data.
    Three important functions related to Mat are:
         1. Mat mat = imread(const string& filename);            // read an image
         2. imshow(const string& frameName, InputArray mat);     // display an image
         3. imwrite(const string& filename, InputArray img);     // save an image
    Compared with CvMat and IplImage, Mat has stronger matrix-computation capabilities and supports common matrix operations. In computation-intensive applications, converting CvMat and IplImage data to Mat greatly reduces computation time.
A. Mat -> IplImage
Again, only an image header is created; the data are not copied.
Example: // assume imgMat is an existing Mat image
IplImage pImg = IplImage(imgMat);
B. Mat -> CvMat
Similar to the IplImage conversion: no data are copied, only a matrix header is created.
Example: // assume imgMat is an existing Mat image
     CvMat cvMat = imgMat;

2. The CvMat and IplImage types: the "image" types
       In OpenCV, the Mat, CvMat, and IplImage types can all represent and display images. However, Mat is geared toward computation and is more mathematical, and OpenCV optimizes computations on Mat. CvMat and IplImage are geared toward "images", and OpenCV optimizes image operations on them (scaling, single-channel extraction, thresholding, and so on).
Note: IplImage derives from CvMat, and CvMat derives from CvArr, i.e. CvArr -> CvMat -> IplImage.
            CvArr is used as a function parameter type; whether a CvMat or an IplImage is passed in, it is handled internally as a CvMat.
1. CvMat
A. CvMat -> IplImage
IplImage* img = cvCreateImage(cvGetSize(mat), 8, 1);
cvGetImage(mat, img);
cvSaveImage("rice1.bmp", img);
B. CvMat -> Mat
Similar to the IplImage conversion; you can choose whether or not to copy the data.
Mat::Mat(const CvMat* m, bool copyData=false);
In OpenCV there is no dedicated vector data structure. Whenever we need to represent a vector, we simply use matrix data.
However, the CvMat type is more abstract than the vector concept from a linear-algebra course; for example, the element type of a CvMat is not limited to basic data types. For instance, the following creates a two-dimensional data matrix:
              CvMat* cvCreateMat(int rows, int cols, int type);
Here type can be any predefined data type, e.g. RGB or other multi-channel data, so a single CvMat matrix can represent rich, colorful images.

2. IplImage
In terms of type relationships, we can say that IplImage inherits from CvMat, with additional fields that interpret the data as an image.
IplImage has many more parameters than CvMat, such as depth and nChannels. In an ordinary matrix type, depth and channel count are usually encoded together, e.g. 32 bits for RGB+Alpha. In image processing, however, depth and channel count are often handled separately; this is one of OpenCV's optimizations for representing images.
Another image-oriented feature of IplImage is the origin field. An important inconvenience in computer vision is that the definition of the origin is unclear: the image source, the encoding format, and even the operating system can all affect where the origin is placed. To compensate, OpenCV lets the user define their own origin: a value of 0 means the origin is at the top-left corner of the image, and 1 means the bottom-left corner.
The dataOrder field defines the data layout. It takes one of two values, IPL_DATA_ORDER_PIXEL or IPL_DATA_ORDER_PLANE: the former interleaves the channels pixel by pixel, while the latter stores all channels as separate planes, one after another.
All the extra fields of IplImage are optimizations for representing and working with "images".
A. IplImage -> Mat
IplImage* pImg = cvLoadImage("lena.jpg");
Mat img(pImg, 0); // 0 means the image is not copied: pImg and img share the same data memory, but each has its own header
B. IplImage -> CvMat
Method 1: CvMat mathdr, *mat = cvGetMat( img, &mathdr );
Method 2: CvMat *mat = cvCreateMat( img->height, img->width, CV_64FC3 );
          cvConvert( img, mat );
C. IplImage* -> BYTE*
BYTE* data = (BYTE*)img->imageData;

A small difference when creating a CvMat versus an IplImage:
1. When creating a matrix, the first parameter is the number of rows and the second is the number of columns.
CvMat* cvCreateMat( int rows, int cols, int type );
2. When creating an image, the first CvSize parameter is the width (the number of columns) and the second is the height (the number of rows). This is exactly the opposite of CvMat.
IplImage* cvCreateImage( CvSize size, int depth, int channels );
CvSize cvSize( int width, int height );

Each row of an IplImage's internal buffer is aligned to 4 bytes; CvMat has no such restriction.

Additional note:
A. BYTE* -> IplImage*
img = cvCreateImageHeader(cvSize(width, height), depth, channels);
cvSetData(img, data, step);
// First create the IplImage header with cvCreateImageHeader(), specifying the image size, depth, and number of channels;
// then use cvSetData() to attach the BYTE* data pointer to the IplImage header,
// where step specifies the number of bytes per row of the image; for a 1-channel IPL_DEPTH_8U image, step can equal width.

Wednesday, September 5, 2012

Resolving tbb_debug.dll in OpenCV 2.3.1



origin:

To resolve the tbb_debug.dll error on Windows:

Download tbb files at 
http://threadingbuildingblocks.org/download#stable-releases

You may choose to place the folder at ..\OpenCV2.3\build\common
 

Set up the following:

• Environment variables: add the TBB binary folder to PATH:
$(TBBROOT)\bin\ia32\vc10

C/C++ Properties 
• General: add an additional include directory:
"$(TBBROOT)\include"

Linker Properties 
• General: add an additional library directory (shown for Visual
Studio 2010 32-bit library):
$(TBBROOT)\lib\ia32\vc10

• Input: add an additional dependency:
tbb_debug.lib (for Debug builds) or tbb.lib (for Release builds)


This should resolve the error message.


Monday, June 11, 2012

Install OpenCV 2.4 with Visual Studio 2010

Download OpenCV 2.4
http://sourceforge.net/projects/opencvlibrary/files/opencv-win/

Extract the files into a folder ‘D:\Software\opencv’

Copy the following into a folder C:\OpenCV2.4:
  D:\Software\opencv\build\x86\vc10
  D:\Software\opencv\build\include


Add the following to the system PATH variable:
C:\OpenCV2.4\bin;C:\OpenCV2.4\bin\Debug;C:\OpenCV2.4\bin\Release


Create a new project and add this to your project properties:
1. Go to VC++ Directories;
2. Add 3 new Include Directories (the include folders under the path where you installed OpenCV):
      C:\OpenCV2.4\include\
      C:\OpenCV2.4\include\opencv
      C:\OpenCV2.4\include\opencv2
3. Add 1 new Library Directory (the lib folder under the path where you installed OpenCV):
      C:\OpenCV2.4\lib
4. Go to Linker in the left menu and select Input option
5. Add these entries to the Additional Dependencies option (Debug configuration):

      C:\OpenCV2.4\lib\opencv_core240d.lib
      C:\OpenCV2.4\lib\opencv_highgui240d.lib
      C:\OpenCV2.4\lib\opencv_video240d.lib
      C:\OpenCV2.4\lib\opencv_ml240d.lib
      C:\OpenCV2.4\lib\opencv_legacy240d.lib
      C:\OpenCV2.4\lib\opencv_imgproc240d.lib

6. Add these entries to the Additional Dependencies option (Release configuration):
      C:\OpenCV2.4\lib\opencv_core240.lib
      C:\OpenCV2.4\lib\opencv_highgui240.lib
      C:\OpenCV2.4\lib\opencv_video240.lib
      C:\OpenCV2.4\lib\opencv_ml240.lib
      C:\OpenCV2.4\lib\opencv_legacy240.lib
      C:\OpenCV2.4\lib\opencv_imgproc240.lib
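
To verify the setup, a minimal test program such as the one below can be built and run in both Debug and Release configurations (the image file name is a placeholder):

#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>

int main()
{
    cv::Mat img = cv::imread( "lena.jpg" );   // placeholder file name
    if( img.empty() )
    {
        std::cout << "Could not load the image." << std::endl;
        return -1;
    }
    cv::namedWindow( "Test", CV_WINDOW_AUTOSIZE );
    cv::imshow( "Test", img );
    cv::waitKey( 0 );
    return 0;
}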