We sometimes need to find the center of gravity of a particular object. A common approach is to binarize the image to separate the object from the background, find each object's contour, and then compute each object's center with the grayscale center-of-gravity method.
The steps are as follows:
1) Binarize the image with a suitable threshold
2) Find the contours
3) Calculate the center of gravity
Otsu's algorithm for finding the optimal threshold
Otsu's method (the maximum between-class variance method, sometimes called the Otsu algorithm) uses a clustering idea: it divides the image's pixels into two classes by gray level so that the difference in gray value between the two classes is as large as possible while the variation within each class is as small as possible, and it searches over the gray levels for the split point by computing the variance. Otsu's algorithm is widely regarded as one of the best threshold-selection algorithms for image segmentation: it is simple to compute and insensitive to image brightness and contrast. A segmentation that maximizes the between-class variance also implies a minimum probability of misclassification.
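Incidentally, OpenCV also ships Otsu thresholding built into its threshold function, which can serve as a cross-check for the hand-written implementation further below. A minimal sketch (the file name is a placeholder):

#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;

int main()
{
    Mat gray = imread("input.png", IMREAD_GRAYSCALE); // placeholder path; must be single-channel
    Mat binary;
    // With THRESH_OTSU set, the supplied threshold (0 here) is ignored and the
    // optimal value is computed internally; threshold() returns the value it chose.
    double otsuThresh = threshold(gray, binary, 0, 255, THRESH_BINARY | THRESH_OTSU);
    std::cout << "Otsu threshold: " << otsuThresh << std::endl;
    return 0;
}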
Finding the contours
OpenCV's findContours function:
findContours(binarized image, contours, hierarchy, contour retrieval mode, contour approximation method, offset)
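A side note on the hierarchy parameter: with CV_RETR_CCOMP, hierarchy[i] holds [next, previous, first child, parent] for contour i, and following hierarchy[i][0] walks all contours on the same level; this is exactly how the loop in main() below steps from contour to contour. A small sketch (the helper name countOuterContours is mine, for illustration):

#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

// Count top-level contours of a binary image by walking the hierarchy links.
int countOuterContours(const Mat &binary)
{
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    Mat work = binary.clone(); // older OpenCV versions modify the input of findContours
    findContours(work, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
    int count = 0;
    for (int i = 0; !contours.empty() && i >= 0; i = hierarchy[i][0])
        count++; // hierarchy[i][0] is the index of the next contour on this level, or -1
    return count;
}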
The grayscale center-of-gravity method
We use the grayscale center-of-gravity method to calculate the center: it treats the gray value of each pixel in the region as the "mass" of that point. The center of the region is then:

$$u_0 = \frac{\sum_{(u,v)\in\Omega} u \, f(u,v)}{\sum_{(u,v)\in\Omega} f(u,v)}, \qquad v_0 = \frac{\sum_{(u,v)\in\Omega} v \, f(u,v)}{\sum_{(u,v)\in\Omega} f(u,v)}$$

where f(u,v) is the gray value of the pixel at coordinates (u,v), \Omega is the set of pixels in the target region, and (u_0, v_0) are the region's center coordinates. In effect, the grayscale center-of-gravity method extracts the energy center of the region.
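To make the formula concrete, here is a minimal sketch that applies it directly to a single-channel image (the helper name grayCentroid is mine; the full program below obtains the same centroid through OpenCV's moments() instead):

#include <opencv2/opencv.hpp>
using namespace cv;

// Grayscale center of gravity of a single-channel image or ROI:
// each pixel's gray value acts as the "mass" at that point.
Point2d grayCentroid(const Mat &gray)
{
    double sum = 0.0, sumU = 0.0, sumV = 0.0;
    for (int v = 0; v < gray.rows; v++)
    {
        for (int u = 0; u < gray.cols; u++)
        {
            double f = gray.at<uchar>(v, u); // f(u, v)
            sum  += f;
            sumU += u * f;
            sumV += v * f;
        }
    }
    if (sum == 0.0)
        return Point2d(-1.0, -1.0); // no mass in the region
    return Point2d(sumU / sum, sumV / sum);
}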
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;

// Otsu algorithm implementation function
int Otsu(Mat &image)
{
    int width = image.cols;
    int height = image.rows;
    int x = 0, y = 0;
    int pixelCount[256];
    float pixelPro[256];
    int i, j, pixelSum = width * height, threshold = 0;
    uchar* data = (uchar*)image.data;
    // Initialization
    for (i = 0; i < 256; i++)
    {
        pixelCount[i] = 0;
        pixelPro[i] = 0;
    }
    // Count the number of pixels at each gray level in the whole image
    for (i = y; i < height; i++)
    {
        for (j = x; j < width; j++)
        {
            pixelCount[data[i * image.step + j]]++;
        }
    }
    // Calculate the proportion of each gray level in the whole image
    for (i = 0; i < 256; i++)
    {
        pixelPro[i] = (float)(pixelCount[i]) / (float)(pixelSum);
    }
    // Classical Otsu algorithm to get the foreground/background segmentation:
    // traverse the gray levels [0,255] and find the value with the largest
    // between-class variance, which is the optimal threshold
    float w0, w1, u0tmp, u1tmp, u0, u1, u, deltaTmp, deltaMax = 0;
    for (i = 0; i < 256; i++)
    {
        w0 = w1 = u0tmp = u1tmp = u0 = u1 = u = deltaTmp = 0;
        for (j = 0; j < 256; j++)
        {
            if (j <= i) // background class
            {
                // with i as the threshold, total probability of the first class
                w0 += pixelPro[j];
                u0tmp += j * pixelPro[j];
            }
            else // foreground class
            {
                // with i as the threshold, total probability of the second class
                w1 += pixelPro[j];
                u1tmp += j * pixelPro[j];
            }
        }
        u0 = u0tmp / w0;   // mean gray level of the first class
        u1 = u1tmp / w1;   // mean gray level of the second class
        u = u0tmp + u1tmp; // mean gray level of the whole image
        // calculate the between-class variance
        deltaTmp = w0 * (u0 - u) * (u0 - u) + w1 * (u1 - u) * (u1 - u);
        // keep the maximum between-class variance and its threshold
        if (deltaTmp > deltaMax)
        {
            deltaMax = deltaTmp;
            threshold = i;
        }
    }
    // return the optimal threshold
    return threshold;
}

int main()
{
    Mat White = imread("", IMREAD_GRAYSCALE); // read the image (path omitted in the original); Otsu() expects a single channel
    int threshold_white = Otsu(White); // threshold calculation using Otsu
    cout << "Optimal threshold:" << threshold_white << endl;
    Mat thresholded = Mat::zeros(White.size(), White.type());
    threshold(White, thresholded, threshold_white, 255, CV_THRESH_BINARY); // binarization
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    findContours(thresholded, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE); // find contours
    int i = 0;
    int count = 0;
    Point pt[10]; // assumes at most 10 connected regions
    Moments moment; // moments
    vector<Point> Center; // vector to hold the center-of-gravity coordinates
    for (; !contours.empty() && i >= 0; i = hierarchy[i][0]) // visit each contour and find its center of gravity
    {
        Mat temp(contours.at(i));
        Scalar color(0, 0, 255);
        moment = moments(temp, false);
        if (moment.m00 != 0) // the divisor cannot be 0
        {
            pt[i].x = cvRound(moment.m10 / moment.m00); // horizontal coordinate of the center of gravity
            pt[i].y = cvRound(moment.m01 / moment.m00); // vertical coordinate of the center of gravity
        }
        Point p = Point(pt[i].x, pt[i].y); // center-of-gravity coordinates
        circle(White, p, 1, color, 1, 8); // draw the center of gravity on the original image
        count++; // number of centers of gravity, i.e. number of connected regions
        Center.push_back(p); // save the center-of-gravity coordinates into the Center vector
    }
    cout << "Number of Center of Gravity Points:" << Center.size() << endl;
    cout << "Number of contours:" << contours.size() << endl;
    imwrite("", White); // path omitted in the original
    return 0;
}
Original image:
Binarization:
Center of gravity point:
Additional knowledge: combining thresholded contours in OpenCV based on a template's convex hull
In image processing, high contrast between the feature and the background is required, and proper image segmentation is likewise key to solving the problem.
The blogger's earlier method assumed the feature was necessarily the largest connected region, so after thresholding it simply found the contours and directly extracted the one with the largest area.
But there is another situation: no matter how much thresholding and dilation you apply, the desired feature remains split into several disconnected pieces. At that point, with unpredictable interference and noise mixed in, findContours returns a large number of contours.
So the question is: which contour, or which combination of contours, covers the region we actually need?
That is exactly what this post addresses: finding the combination of contours in the image most similar to the convex hull of the template.
The method mainly uses the matchShapes function, and rests on the premise that a convex hull built from two-thirds of the template region is more similar to the template's full convex hull than one built from only half of it; in other words, the more of the target a contour combination covers, the better its hull matches the template.
Without further ado, on to the code.
void getAlikeContours(std::vector<cv::Point> Inputlist, cv::Mat InputImage, std::vector<cv::Point> &Outputlist)
{
    Mat image;
    InputImage.copyTo(image);
    vector<vector<Point> > contours;
    findContours(image, contours, RETR_EXTERNAL, CHAIN_APPROX_NONE); // find the outermost contours
    // drop contour points lying on the image border
    for (int idx = contours.size() - 1; idx >= 0; idx--)
    {
        for (int i = contours[idx].size() - 1; i >= 0; i--)
        {
            if (contours[idx][i].x == 1 || contours[idx][i].y == 1 ||
                contours[idx][i].x == image.cols - 2 || contours[idx][i].y == image.rows - 2)
            {
                swap(contours[idx][i], contours[idx][contours[idx].size() - 1]);
                contours[idx].pop_back();
            }
        }
    }
    // there may be empty contours now; remove them
    for (int idx = contours.size() - 1; idx >= 0; idx--)
    {
        if (contours[idx].size() == 0)
            contours.erase(contours.begin() + idx);
    }
    while (true)
    {
        if (contours.size() == 0)
            break;
        if (contours.size() == 1)
        {
            vector<Point> finalList;
            finalList.insert(finalList.end(), contours[0].begin(), contours[0].end());
            convexHull(Mat(finalList), Outputlist, true);
            break;
        }
        // largest contour (by point count)
        int maxContourIdx = 0;
        int maxContourPtNum = 0;
        for (int index = contours.size() - 1; index >= 0; index--)
        {
            if ((int)contours[index].size() > maxContourPtNum)
            {
                maxContourPtNum = contours[index].size();
                maxContourIdx = index;
            }
        }
        // second largest contour
        int secondContourIdx = 0;
        int secondContourPtNum = 0;
        for (int index = contours.size() - 1; index >= 0; index--)
        {
            if (index == maxContourIdx)
                continue;
            if ((int)contours[index].size() > secondContourPtNum)
            {
                secondContourPtNum = contours[index].size();
                secondContourIdx = index;
            }
        }
        vector<Point> maxlist;
        vector<Point> maxAndseclist;
        vector<Point> maxlistHull;
        vector<Point> maxAndseclistHull;
        maxlist.insert(maxlist.end(), contours[maxContourIdx].begin(), contours[maxContourIdx].end());
        maxAndseclist.insert(maxAndseclist.end(), contours[maxContourIdx].begin(), contours[maxContourIdx].end());
        maxAndseclist.insert(maxAndseclist.end(), contours[secondContourIdx].begin(), contours[secondContourIdx].end());
        convexHull(Mat(maxlist), maxlistHull, true);
        convexHull(Mat(maxAndseclist), maxAndseclistHull, true);
        double maxcontourScore = matchShapes(Inputlist, maxlistHull, CV_CONTOURS_MATCH_I1, 0);
        double maxandseccontourScore = matchShapes(Inputlist, maxAndseclistHull, CV_CONTOURS_MATCH_I1, 0);
        // matchShapes returns a lower score for a better match: if adding the
        // second contour improves the match, merge it into the largest contour
        if (maxcontourScore > maxandseccontourScore)
        {
            contours[maxContourIdx].insert(contours[maxContourIdx].end(),
                                           contours[secondContourIdx].begin(),
                                           contours[secondContourIdx].end());
        }
        // the second contour has been merged or rejected; remove it either way
        contours.erase(contours.begin() + secondContourIdx);
    }
}
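One possible way to call this function (the paths, the fixed threshold of 128, and all variable names here are illustrative): build the template's convex hull once from a binarized template image, then pass it in together with the binarized working image:

#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

int main()
{
    // Placeholder inputs: a template image and a target image, both grayscale.
    Mat templGray = imread("template.png", IMREAD_GRAYSCALE);
    Mat targetGray = imread("target.png", IMREAD_GRAYSCALE);
    Mat templBin, targetBin;
    threshold(templGray, templBin, 128, 255, THRESH_BINARY);
    threshold(targetGray, targetBin, 128, 255, THRESH_BINARY);

    // Template convex hull: merge all template contour points, then hull them.
    vector<vector<Point> > tContours;
    findContours(templBin.clone(), tContours, RETR_EXTERNAL, CHAIN_APPROX_NONE);
    vector<Point> allPts, templHull;
    for (size_t i = 0; i < tContours.size(); i++)
        allPts.insert(allPts.end(), tContours[i].begin(), tContours[i].end());
    convexHull(Mat(allPts), templHull, true);

    // The output hull wraps the contour combination most similar to the template.
    vector<Point> resultHull;
    getAlikeContours(templHull, targetBin, resultHull);
    return 0;
}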
The above example of using OpenCV to find the center of gravity of connected regions is all I have to share. I hope it gives you a useful reference, and I hope you will continue to support me.