Extracting and Matching Image Feature Points with OpenCV (Part 1)
The general workflow for feature-point extraction and matching in OpenCV is: extract feature points, generate descriptors for those points, and then match the descriptors. OpenCV provides three classes for these three stages: FeatureDetector, DescriptorExtractor, and DescriptorMatcher. Subclasses derived from these three base classes implement the various feature-extraction, description, and matching algorithms.
First, the feature-detection base class: FeatureDetector, which extracts features from 2D images. It derives from the Algorithm class, which appears to wrap a large number of algorithms. The declaration of FeatureDetector looks like this:
class CV_EXPORTS FeatureDetector
{
public:
    virtual ~FeatureDetector();

    void detect( const Mat& image, vector<KeyPoint>& keypoints,
                 const Mat& mask=Mat() ) const;
    void detect( const vector<Mat>& images,
                 vector<vector<KeyPoint> >& keypoints,
                 const vector<Mat>& masks=vector<Mat>() ) const;

    virtual void read( const FileNode& );
    virtual void write( FileStorage& ) const;

    static Ptr<FeatureDetector> create( const string& detectorType );

protected:
    ...
};
By declaring a FeatureDetector pointer and calling the static member function create, you can instantiate any of the supported detection methods by name:
Ptr<FeatureDetector> FeatureDetector::create(const string& detectorType)
The supported detectors mainly include the following:
"FAST"—FastFeatureDetector;
"STAR"—StarFeatureDetector;
"SIFT"—SiftFeatureDetector;
"SURF"—SurfFeatureDetector;
"ORB"—OrbFeatureDetector;
"MSER"—MserFeatureDetector;
"GFTT"—GoodFeaturesToTrackDetector;
"HARRIS"—GoodFeaturesToTrackDetector (with the Harris response enabled);
"Dense"—DenseFeatureDetector;
"SimpleBlob"—SimpleBlobDetector;
Composite types are also supported: the name of a detector adapter ("Grid" for GridAdaptedFeatureDetector, "Pyramid" for PyramidAdaptedFeatureDetector) followed by the name of one of the detectors above, for example "GridFAST" or "PyramidSTAR".
FeatureDetector has subclasses corresponding to the individual detection algorithms: FastFeatureDetector, MserFeatureDetector, StarFeatureDetector, SiftFeatureDetector, SurfFeatureDetector, OrbFeatureDetector, SimpleBlobDetector, and so on.
Header files: in OpenCV 2.4.9, to extract SIFT or SURF features you must include the header <opencv2/nonfree/features2d.hpp>, which declares two classes, SIFT and SURF. The source looks like this:
class CV_EXPORTS_W SIFT : public Feature2D
{
public:
    CV_WRAP explicit SIFT( int nfeatures=0, int nOctaveLayers=3,
                           double contrastThreshold=0.04, double edgeThreshold=10,
                           double sigma=1.6 );

    //! returns the descriptor size in floats (128)
    CV_WRAP int descriptorSize() const;
    //! returns the descriptor type
    CV_WRAP int descriptorType() const;

    //! finds the keypoints using SIFT algorithm
    void operator()(InputArray img, InputArray mask,
                    vector<KeyPoint>& keypoints) const;
    //! finds the keypoints and computes descriptors for them using SIFT algorithm.
    //! Optionally it can compute descriptors for the user-provided keypoints
    void operator()(InputArray img, InputArray mask,
                    vector<KeyPoint>& keypoints,
                    OutputArray descriptors,
                    bool useProvidedKeypoints=false) const;

    AlgorithmInfo* info() const;

    void buildGaussianPyramid( const Mat& base, vector<Mat>& pyr, int nOctaves ) const;
    void buildDoGPyramid( const vector<Mat>& pyr, vector<Mat>& dogpyr ) const;
    void findScaleSpaceExtrema( const vector<Mat>& gauss_pyr, const vector<Mat>& dog_pyr,
                                vector<KeyPoint>& keypoints ) const;

protected:
    ...
};
typedef SIFT SiftFeatureDetector;
typedef SIFT SiftDescriptorExtractor;
As you can see, the SIFT class derives from Feature2D, and Feature2D in turn derives from both FeatureDetector and DescriptorExtractor. An object of a class derived from Feature2D can therefore call FeatureDetector's member functions to extract features, and DescriptorExtractor's member functions to compute descriptors. The two typedef lines following the class definition also show that an object declared through the SIFT class plays both roles: feature detection and descriptor extraction.
Note: probably because of OpenCV version differences, some references (including the OpenCV user guide) say that to extract SIFT and SURF features you need to include <opencv2/nonfree/nonfree.hpp> and call initModule_nonfree(); at the start of the program. In my experiments, even with <opencv2/nonfree/nonfree.hpp> included, VC++ still failed to recognize initModule_nonfree(). Looking at the source shows that <opencv2/nonfree/nonfree.hpp> simply includes <opencv2/nonfree/features2d.hpp> and additionally declares the function bool initModule_nonfree():
#include "opencv2/nonfree/features2d.hpp"
namespace cv
{
CV_EXPORTS_W bool initModule_nonfree();
}
In OpenCV 2.4.9, including the header <opencv2/nonfree/features2d.hpp> directly was enough for me to detect and describe SIFT and SURF feature points.
The SURF class is used the same way as SIFT. The ORB detector can likewise be used by declaring an object directly, because there is also an ORB class derived from Feature2D. The other detectors apparently cannot be used this way, but each has a corresponding subclass that implements detection on its own.
Here is a program that performs SIFT feature detection and matching on two images:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/nonfree/features2d.hpp>
#include <iostream>

using namespace std;
using namespace cv;

int main( int argc, char* argv[] )
{
    // Brute-force matcher with L2 distance, suitable for float descriptors like SIFT.
    Ptr<DescriptorMatcher> siftMatcher = DescriptorMatcher::create("BruteForce");
    SiftFeatureDetector siftDetector;

    Mat img1 = imread("box.png");
    Mat img2 = imread("box_in_scene.png");
    if( img1.empty() || img2.empty() )
    {
        cout << "Could not load box.png / box_in_scene.png" << endl;
        return -1;
    }

    // Detect keypoints in both images.
    vector<KeyPoint> keypoints1, keypoints2;
    siftDetector.detect(img1, keypoints1);
    siftDetector.detect(img2, keypoints2);
    cout << "Number of detected keypoints img1: " << keypoints1.size()
         << " points. --- img2: " << keypoints2.size() << " points." << endl;

    // Compute a 128-dimensional SIFT descriptor for each keypoint.
    SiftDescriptorExtractor siftExtractor;
    Mat descriptor1, descriptor2;
    siftExtractor.compute(img1, keypoints1, descriptor1);
    siftExtractor.compute(img2, keypoints2, descriptor2);
    cout << "Number of Descriptors1: " << descriptor1.rows << endl;
    cout << "Number of Descriptors2: " << descriptor2.rows << endl;
    cout << "Dimension of SIFT Descriptors: " << descriptor1.cols << endl;

    // Draw the keypoints on each image.
    Mat imgkey1, imgkey2;
    drawKeypoints(img1, keypoints1, imgkey1, Scalar::all(-1));
    drawKeypoints(img2, keypoints2, imgkey2, Scalar::all(-1));
    imshow("box", imgkey1);
    imshow("box_in_scene", imgkey2);

    // Match the descriptors and draw the matched pairs side by side.
    vector<DMatch> matches;
    siftMatcher->match(descriptor1, descriptor2, matches, Mat());
    Mat imgmatches;
    drawMatches(img1, keypoints1, img2, keypoints2, matches, imgmatches,
                Scalar::all(-1), Scalar::all(-1), vector<char>(),
                DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
    imshow("Match Results", imgmatches);

    waitKey(0);
    return 0;
}
Program output:
Matching result:
The results above were obtained with OpenCV 2.4.9 + VS2010 + Win7. Given my limited skill there are bound to be mistakes; corrections are welcome, and let's improve together!