Corresponds to the OpenCV source file: \sources\modules\calib3d\src\calibinit.cpp
Figure 1: Original image
Step 1: Local-mean adaptive thresholding adapts well to uneven illumination, so it is used to binarize the image. After equalization, a usable threshold is obtained; the result is shown in Figure 2.
Step 2: Dilation separates the junctions between the black quadrilaterals. Since it is the white pixels that are dilated, the black quads shrink and the corner-to-corner connections between them break, as shown in Figure 3.
Step 3: Detect quadrilaterals. For each contour, compute its convex hull, run polygon approximation, and check whether it has exactly four vertices; if so, it is a quadrilateral. Constraints such as aspect ratio, perimeter, and area then remove spurious quads, as shown in Figure 4.
Step 4: Treat each quadrilateral as a unit and examine its neighbors: a quad with no neighboring quads is spurious, one with two neighbors lies on the board boundary, and one with four neighbors is interior. The quads can then be ordered by their adjacency, and for each pair of diagonally adjacent quads, the midpoint of the line joining their two facing vertices is taken as a corner point, as shown in Figure 5.
The whole board-localization procedure is a loop. The input chessboard image is first histogram-equalized, then binarized (adaptively or not, depending on the flags parameter), and the binary image is then dilated. For robustness, a single kernel size for adaptive thresholding and dilation is not enough, so the image is processed repeatedly with different parameters, with the amount of dilation increasing gradually.
Each pass of the loop goes through the following steps.
1. Draw a white rectangle around the border of the binarized image (so contour extraction works at the edges), then extract contours with cvFindContours. In the dilated binary image each black square is already separated, so contour extraction yields one contour per square, plus many spurious contours. The contours are polygon-approximated with cvApproxPoly to discard non-rectangular ones, and further rectangle properties eliminate more spurious contours. This work is done mainly by the icvGenerateQuads function.
2. Find each square's neighboring squares, count them, and store the neighbor information in the corresponding CvCBQuad structure. Squares that touched before dilation are now separated: the single shared point has become two points. Once neighboring squares are found, the original shared point is computed and substituted for the two separated points. This is done mainly by the icvFindQuadNeighbors function.
3. Classify all "squares" (including misdetected ones) into groups, such that all squares within a group are mutually adjacent. Done by the icvFindConnectedQuads function.
4. Using the requested number of corners, decide whether the squares in each group form the sought chessboard, and order the board squares, i.e. determine each square's row and column. During this step missing squares can be added to a group and superfluous ones removed. The icvOrderFoundConnectedQuads function performs this step.
5. The icvCleanFoundConnectedQuads and icvCheckQuadGroup functions use the known number of board squares (computed from the number of board corners) to verify that the squares' positions and count are correct, and determine the rough positions of the strong corners (where two squares meet). icvCheckBoardMonotony then re-checks that the board squares were extracted correctly.
6. If at any of the steps above no group of squares meets the requirements, a new loop iteration starts. If the loop finishes without finding a qualifying group, board localization has failed and the function returns.
Finally, cvFindCornerSubPix() refines the strong-corner positions found above to their precise locations.
bool cv::findChessboardCorners ( InputArray image, Size patternSize, OutputArray corners, int flags = CALIB_CB_ADAPTIVE_THRESH+CALIB_CB_NORMALIZE_IMAGE)
This function detects whether an image contains a chessboard pattern. If the image does not contain the specified number of inner corners (a corner being a point where black squares meet; this is why, when making a calibration board, it is best to use a somewhat larger white or light-colored board as the background), or if ordering them fails, the function returns 0. If all the board's inner corners are located and correctly arranged, the function orders them by row and column and stores them in the corners vector. The corner positions this function finds are approximate; for more accurate positions, call cornerSubPix after this function succeeds.
Parameters:
image: 8-bit grayscale or color image.
patternSize: the number of inner corners per row and column ( patternSize = cvSize(points_per_row, points_per_column) = cvSize(columns, rows) ).
corners: output array in which the detected corners are stored.
flags: bit flags selecting the corner-detection method:
CALIB_CB_ADAPTIVE_THRESH: use adaptive thresholding to convert the grayscale image to binary, instead of a fixed threshold computed from the average image brightness.
CALIB_CB_NORMALIZE_IMAGE: normalize the image gamma with equalizeHist before applying fixed or adaptive thresholding.
CALIB_CB_FILTER_QUADS: use additional criteria (such as contour area, perimeter, square-like shape) to filter out false quads extracted at the contour-retrieval stage.
CALIB_CB_FAST_CHECK: run a fast check for chessboard corners on the image and, if none are found, skip the remaining time-consuming calls. This can drastically shorten the function's execution time in the degenerate case where no chessboard is observed.
Example of detecting chessboard corners and drawing them on the image:
Size patternsize(8,6); //interior number of corners
Mat gray = …; //source image
vector<Point2f> corners; //this will be filled by the detected corners
// CALIB_CB_FAST_CHECK saves a lot of time on images that do not contain any chessboard corners
bool patternfound = findChessboardCorners(gray, patternsize, corners, CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE + CALIB_CB_FAST_CHECK);
if(patternfound)
cornerSubPix(gray, corners, Size(11, 11), Size(-1, -1), TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
drawChessboardCorners(img, patternsize, Mat(corners), patternfound);
Reading the OpenCV findChessboardCorners implementation; the main implementation is in calibinit.cpp.
bool cv::findChessboardCorners( InputArray _image, Size patternSize, OutputArray corners, int flags ){
CV_INSTRUMENT_REGION()
int count = patternSize.area()*2;
std::vector<Point2f> tmpcorners(count+1);
Mat image = _image.getMat();
CvMat c_image = image;
bool ok = cvFindChessboardCorners(&c_image, patternSize, (CvPoint2D32f*)&tmpcorners[0], &count, flags ) > 0;
if( count > 0 ) {
tmpcorners.resize(count);
Mat(tmpcorners).copyTo(corners);
}
else
corners.release();
return ok;
}
InputArray and OutputArray are both proxy types that accept Mat and vector<> as input parameters; OutputArray derives from InputArray.
When InputArray is used as a parameter, the argument is passed with a const qualifier, i.e. it serves purely as input and its contents cannot be modified. OutputArray carries no such qualifier, so its contents can be changed.
The InputArray interface class can bind to Mat, Mat_<T>, Matx<T, m, n>, vector<T>, vector<vector<T>>, or vector<Mat>. In other words, whenever a function parameter is of type InputArray, any of these types can be passed as the argument. This class should only be used as a function parameter; do not try to declare a variable of type InputArray. cv::Mat() or cv::noArray() can be passed as an empty argument. Inside the function, InputArray::getMat() converts the argument into a Mat structure for convenient processing, and InputArray::kind() can distinguish a Mat structure from a vector<> structure.
CV_IMPL int cvFindChessboardCorners( const void* arr, CvSize pattern_size,
CvPoint2D32f* out_corners, int* out_corner_count, int flags )
{
int found = 0;
CvCBQuad *quads = 0;
CvCBCorner *corners = 0;
cv::Ptr<CvMemStorage> storage;
try
{
int k = 0;
const int min_dilations = 0;
const int max_dilations = 7;
if( out_corner_count )
*out_corner_count = 0;
Mat img = cvarrToMat((CvMat*)arr).clone();
if( img.depth() != CV_8U || (img.channels() != 1 && img.channels() != 3) )
CV_Error( CV_StsUnsupportedFormat, "Only 8-bit grayscale or color images are supported" );
if( pattern_size.width <= 2 || pattern_size.height <= 2 )
CV_Error( CV_StsOutOfRange, "Both width and height of the pattern should have bigger than 2" );
if( !out_corners )
CV_Error( CV_StsNullPtr, "Null pointer to corners" );
if (img.channels() != 1) cvtColor(img, img, COLOR_BGR2GRAY);
Mat thresh_img_new = img.clone();
icvBinarizationHistogramBased( thresh_img_new ); // process image in-place
SHOW("New binarization", thresh_img_new);
if( flags & CV_CALIB_CB_FAST_CHECK)
{
//perform new method for checking chessboard using a binary image.
//image is binarised using a threshold dependent on the image histogram
if (checkChessboardBinary(thresh_img_new, pattern_size) <= 0) //fall back to the old method
{
if (checkChessboard(img, pattern_size) <= 0)
{
return found;
}
}
}
storage.reset(cvCreateMemStorage(0));
int prev_sqr_size = 0;
// Try our standard "1" dilation, but if the pattern is not found, iterate the whole procedure with higher dilations.
// This is necessary because some squares simply do not separate properly with a single dilation. However,
// we want to use the minimum number of dilations possible since dilations cause the squares to become smaller,
// making it difficult to detect smaller squares.
for( int dilations = min_dilations; dilations <= max_dilations; dilations++ )
{
if (found)
break; // already found it
//USE BINARY IMAGE COMPUTED USING icvBinarizationHistogramBased METHOD
dilate( thresh_img_new, thresh_img_new, Mat(), Point(-1, -1), 1 );
// So we can find rectangles that go to the edge, we draw a white line around the image edge.
// Otherwise FindContours will miss those clipped rectangle contours.
// The border color will be the image mean, because otherwise we risk screwing up filters like cvSmooth()...
rectangle( thresh_img_new, Point(0,0), Point(thresh_img_new.cols-1, thresh_img_new.rows-1), Scalar(255,255,255), 3, LINE_8);
int max_quad_buf_size = 0;
cvFree(&quads);
cvFree(&corners);
int quad_count = icvGenerateQuads( &quads, &corners, storage, thresh_img_new, flags, &max_quad_buf_size );
PRINTF("Quad count: %d/%d\n", quad_count, (pattern_size.width/2+1)*(pattern_size.height/2+1));
SHOW_QUADS("New quads", thresh_img_new, quads, quad_count);
if (processQuads(quads, quad_count, pattern_size, max_quad_buf_size, storage, corners, out_corners, out_corner_count, prev_sqr_size))
found = 1;
}
PRINTF("Chessboard detection result 0: %d\n", found);
// revert to old, slower, method if detection failed
if (!found)
{
if( flags & CV_CALIB_CB_NORMALIZE_IMAGE )
{
equalizeHist( img, img );
}
Mat thresh_img;
prev_sqr_size = 0;
PRINTF("Fallback to old algorithm\n");
const bool useAdaptive = flags & CV_CALIB_CB_ADAPTIVE_THRESH;
if (!useAdaptive)
{
// empiric threshold level
// thresholding performed here and not inside the cycle to save processing time
double mean = cv::mean(img).val[0];
int thresh_level = MAX(cvRound( mean - 10 ), 10);
threshold( img, thresh_img, thresh_level, 255, THRESH_BINARY );
}
//if flag CV_CALIB_CB_ADAPTIVE_THRESH is not set it doesn't make sense to iterate over k
int max_k = useAdaptive ? 6 : 1;
for( k = 0; k < max_k; k++ )
{
for( int dilations = min_dilations; dilations <= max_dilations; dilations++ )
{
if (found)
break; // already found it
// convert the input grayscale image to binary (black-n-white)
if (useAdaptive)
{
int block_size = cvRound(prev_sqr_size == 0
? MIN(img.cols, img.rows) * (k % 2 == 0 ? 0.2 : 0.1)
: prev_sqr_size * 2);
block_size = block_size | 1;
// convert to binary
adaptiveThreshold( img, thresh_img, 255, ADAPTIVE_THRESH_MEAN_C, THRESH_BINARY, block_size, (k/2)*5 );
if (dilations > 0)
dilate( thresh_img, thresh_img, Mat(), Point(-1, -1), dilations-1 );
}
else
{
dilate( thresh_img, thresh_img, Mat(), Point(-1, -1), 1 );
}
SHOW("Old binarization", thresh_img);
// So we can find rectangles that go to the edge, we draw a white line around the image edge.
// Otherwise FindContours will miss those clipped rectangle contours.
// The border color will be the image mean, because otherwise we risk screwing up filters like cvSmooth()...
rectangle( thresh_img, Point(0,0), Point(thresh_img.cols-1, thresh_img.rows-1), Scalar(255,255,255), 3, LINE_8);
int max_quad_buf_size = 0;
cvFree(&quads);
cvFree(&corners);
int quad_count = icvGenerateQuads( &quads, &corners, storage, thresh_img, flags, &max_quad_buf_size);
PRINTF("Quad count: %d/%d\n", quad_count, (pattern_size.width/2+1)*(pattern_size.height/2+1));
SHOW_QUADS("Old quads", thresh_img, quads, quad_count);
if (processQuads(quads, quad_count, pattern_size, max_quad_buf_size, storage, corners, out_corners, out_corner_count, prev_sqr_size))
found = 1;
}
}
}
PRINTF("Chessboard detection result 1: %d\n", found);
if( found )
found = icvCheckBoardMonotony( out_corners, pattern_size );
PRINTF("Chessboard detection result 2: %d\n", found);
// check that none of the found corners is too close to the image boundary
if( found )
{
const int BORDER = 8;
for( k = 0; k < pattern_size.width*pattern_size.height; k++ )
{
if( out_corners[k].x <= BORDER || out_corners[k].x > img.cols - BORDER ||
out_corners[k].y <= BORDER || out_corners[k].y > img.rows - BORDER )
break;
}
found = k == pattern_size.width*pattern_size.height;
}
PRINTF("Chessboard detection result 3: %d\n", found);
if( found )
{
if ( pattern_size.height % 2 == 0 && pattern_size.width % 2 == 0 )
{
int last_row = (pattern_size.height-1)*pattern_size.width;
double dy0 = out_corners[last_row].y - out_corners[0].y;
if( dy0 < 0 )
{
int n = pattern_size.width*pattern_size.height;
for(int i = 0; i < n/2; i++ )
{
CvPoint2D32f temp;
CV_SWAP(out_corners[i], out_corners[n-i-1], temp);
}
}
}
int wsize = 2;
CvMat old_img(img);
cvFindCornerSubPix( &old_img, out_corners, pattern_size.width*pattern_size.height,
cvSize(wsize, wsize), cvSize(-1,-1),
cvTermCriteria(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 15, 0.1));
}
}
catch(...)
{
cvFree(&quads);
cvFree(&corners);
throw;
}
cvFree(&quads);
cvFree(&corners);
return found;
}
cornerSubPix
Purpose: refine the corner locations found by corner detection.
Prototype: void cornerSubPix(InputArray image, InputOutputArray corners, Size winSize, Size zeroZone, TermCriteria criteria);
C: void cvFindCornerSubPix(const CvArr* image, CvPoint2D32f* corners, int count, CvSize win, CvSize zero_zone, CvTermCriteria criteria);
Parameters:
image: input image.
corners: initial corner coordinates on input; the refined coordinates on output.
winSize: half of the side length of the search window. For example, if winSize = Size(5,5), a search window of (5*2+1) x (5*2+1) = 11 x 11 pixels is used.
zeroZone: half of the size of the dead region in the middle of the search zone, sometimes used to avoid possible singularities of the autocorrelation matrix. The value (-1,-1) indicates that there is no such zone.
criteria: termination criteria of the iterative corner-refinement process, i.e. iteration stops once the iteration count exceeds criteria.maxCount or the corner position moves by less than criteria.epsilon.
CvTermCriteria: termination criteria for iterative algorithms.
Prototype:
typedef struct CvTermCriteria
{
    int type;       /* one of CV_TERMCRIT_ITER and CV_TERMCRIT_EPS, or a combination of the two */
    int max_iter;   /* maximum number of iterations */
    double epsilon; /* accuracy of the result */
} CvTermCriteria;
Macros:
CV_TERMCRIT_ITER: terminate when the maximum number of iterations is reached.
CV_TERMCRIT_EPS: terminate when the required accuracy (epsilon) is reached.
A corner can be defined as the intersection of two edges. A corner can also be defined as a point for which there are two dominant and different edge directions in a local neighbourhood of the point.
Below is the Harris corner detection method as used by OpenCV.
First, for a window shift (u, v), the sum of squared differences between the pixel at (x, y) and its shifted neighborhood is
R ≈ ∑ ( I(x+u, y+v) − I(x, y) )^2
Taking a first-order Taylor expansion of I(x+u, y+v):
R ≈ ∑ ( I(x, y) + u(∂I/∂x) + v(∂I/∂y) − I(x, y) )^2 = ∑ ( u·Ix + v·Iy )^2
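Collecting terms gives the standard structure-tensor form of the derivation (with Ix = ∂I/∂x and Iy = ∂I/∂y summed over the local window), from which the Harris corner response is computed:

```latex
R \approx \sum_{x,y} \left( u I_x + v I_y \right)^2
  = \begin{pmatrix} u & v \end{pmatrix} M \begin{pmatrix} u \\ v \end{pmatrix},
\qquad
M = \sum_{x,y} \begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix}
```

The corner score is then det(M) − k·(tr M)^2; a large score in both eigen-directions of M marks a corner rather than an edge or a flat region.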