
opencv3/C++ Planar Object Recognition & Perspective Transformation Methods

findHomography()

The function findHomography() finds the perspective transformation H between two planes.

Parameter Description:

Mat findHomography(
 InputArray srcPoints, //Coordinates of the points in the original plane
 InputArray dstPoints, //Coordinates of the points in the target plane
 int method = 0, //Method used to compute the homography matrix
 double ransacReprojThreshold = 3, //Maximum allowed reprojection error to treat a point pair as an inlier (RANSAC and RHO only)
 OutputArray mask = noArray(), //Optional output mask set by a robust method (RANSAC or LMEDS)
 const int maxIters = 2000, //Maximum number of RANSAC iterations
 const double confidence = 0.995 //Confidence level, between 0 and 1
);

The methods available for computing the homography matrix are:

0 : the usual method, using all point pairs;

RANSAC : RANSAC-based robust method;

LMEDS : least-median-of-squares robust method;

RHO : PROSAC-based robust method.

The homography is estimated so that the back-projection error

$$\sum_i \left( x'_i - \frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + h_{33}} \right)^2 + \left( y'_i - \frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + h_{33}} \right)^2$$

is minimized. If the parameter method is set to the default value 0, the function uses all point pairs to compute the initial homography estimate with a simple least-squares scheme.

However, if not all of the point pairs fit a rigid perspective transformation (i.e., there are some outliers), this initial estimate will be poor. In that case one of the three robust methods can be used. The methods RANSAC, LMEDS, and RHO try many different random subsets of the corresponding point pairs (four pairs each), estimate the homography matrix from each subset with a simple least-squares algorithm, and then compute the quality/goodness of that homography (the number of inliers for RANSAC, or the median reprojection error for LMEDS). The best subset is then used to produce the initial estimate of the homography matrix and the mask of inliers/outliers.

Regardless of whether the method is robust or not, the computed homography matrix is further refined with the Levenberg-Marquardt method (using only the inliers in the case of a robust method) to reduce the reprojection error even more.

The RANSAC and RHO methods can handle practically any ratio of outliers, but need a threshold to distinguish inliers from outliers. The LMEDS method does not need any threshold, but works correctly only when more than 50% of the points are inliers. Finally, if there are no outliers and the noise is fairly small, use the default method (method = 0).
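For illustration, here is a minimal sketch of calling findHomography() with RANSAC and reading the inlier mask. The five point correspondences below are made up for this example only (they are not from the program later in this article):

#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
 // Five hypothetical correspondences (four pairs are the minimum needed).
 std::vector<cv::Point2f> srcPts = { {0, 0}, {100, 0}, {100, 100}, {0, 100}, {50, 50} };
 std::vector<cv::Point2f> dstPts = { {10, 10}, {110, 12}, {108, 112}, {8, 110}, {60, 60} };

 // RANSAC with a 3-pixel reprojection threshold; the mask marks inliers (1) and outliers (0).
 std::vector<uchar> inlierMask;
 cv::Mat H = cv::findHomography(srcPts, dstPts, cv::RANSAC, 3.0, inlierMask);

 if (H.empty())
 {
  printf("homography estimation failed\n");
  return -1;
 }
 for (size_t i = 0; i < inlierMask.size(); i++)
  printf("pair %d: %s\n", (int)i, inlierMask[i] ? "inlier" : "outlier");
 return 0;
}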

perspectiveTransform()

The function perspectiveTransform() performs the perspective matrix transformation of vectors.

Parameter Description:

void perspectiveTransform(
 InputArray src, //Input two-channel or three-channel floating-point array/image
 OutputArray dst, //Output array/image of the same size and type as src
 InputArray m //3x3 or 4x4 floating-point transformation matrix
);
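As a minimal sketch (the 3x3 matrix below is a made-up translation homography, chosen only so the expected output is obvious), perspectiveTransform() maps each 2D point through the matrix and divides by the third homogeneous coordinate:

#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
 std::vector<cv::Point2f> src = { {0, 0}, {100, 0}, {100, 100}, {0, 100} };
 std::vector<cv::Point2f> dst;

 // Hypothetical homography that translates by (+20, +10); the last row keeps w = 1.
 cv::Mat M = (cv::Mat_<double>(3, 3) << 1, 0, 20,
                                        0, 1, 10,
                                        0, 0, 1);

 cv::perspectiveTransform(src, dst, M);

 for (size_t i = 0; i < dst.size(); i++)
  printf("(%.0f, %.0f) -> (%.0f, %.0f)\n", src[i].x, src[i].y, dst[i].x, dst[i].y);
 return 0;
}

Each input point (x, y) comes out as (x + 20, y + 10) here; with a general homography, the division by the third homogeneous coordinate is what produces the perspective effect.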

Planar object recognition:

#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>
using namespace cv;
using namespace cv::xfeatures2d;

int main()
{
 Mat src1,src2;
 src1 = imread("E:/image/image/");
 src2 = imread("E:/image/image/");
 if (src1.empty() || src2.empty())
 {
  printf("can not load images....\n");
  return -1;
 }
 imshow("image1", src1);
 imshow("image2", src2);

 int minHessian = 400;
 //Create the SURF feature detector
 Ptr<SURF> detector = SURF::create(minHessian);
 std::vector<KeyPoint> keypoints1;
 std::vector<KeyPoint> keypoints2;
 Mat descriptor1, descriptor2;
 // Detect keypoints and compute descriptors
 detector->detectAndCompute(src1, Mat(), keypoints1, descriptor1);
 detector->detectAndCompute(src2, Mat(), keypoints2, descriptor2);

 // Flann-based descriptor matcher
 FlannBasedMatcher matcher;
 std::vector<DMatch> matches;
 // Find the best match for each descriptor from the query set
 matcher.match(descriptor1, descriptor2, matches);
 double minDist = 1000;
 double maxDist = 0;
 for (int i = 0; i < descriptor1.rows; i++)
 {
  double dist = matches[i].distance;
  printf("%f \n", dist);
  if (dist > maxDist)
  {
   maxDist = dist;
  }
  if (dist < minDist)
  {
   minDist = dist;
  }

 }
 //The DMatch class is used to match keypoint descriptors; keep only the good matches
 std::vector<DMatch> goodMatches;
 for (int i = 0; i < descriptor1.rows; i++)
 {
  double dist = matches[i].distance;
  if (dist < max(2*minDist, 0.02))
  {
   goodMatches.push_back(matches[i]);
  }
 }
 Mat matchesImg;
 drawMatches(src1, keypoints1, src2, keypoints2, goodMatches, matchesImg, Scalar::all(-1), 
  Scalar::all(-1), std::vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

 std::vector<Point2f> point1, point2;
 for (int i = 0; i < goodMatches.size(); i++)
 {
  point1.push_back(keypoints1[goodMatches[i].queryIdx].pt);
  point2.push_back(keypoints2[goodMatches[i].trainIdx].pt);
 }

 Mat H = findHomography(point1, point2, RANSAC);
 std::vector<Point2f> cornerPoints1(4);
 std::vector<Point2f> cornerPoints2(4);
 cornerPoints1[0] = Point(0, 0);
 cornerPoints1[1] = Point(src1.cols, 0);
 cornerPoints1[2] = Point(src1.cols, src1.rows);
 cornerPoints1[3] = Point(0, src1.rows);
 perspectiveTransform(cornerPoints1, cornerPoints2, H);

 //Draw the transformed target outline; src1 occupies the left side of matchesImg, so the points are shifted right by src1.cols.
 line(matchesImg, cornerPoints2[0] + Point2f(src1.cols, 0), cornerPoints2[1] + Point2f(src1.cols, 0), Scalar(0, 255, 255), 4, 8, 0);
 line(matchesImg, cornerPoints2[1] + Point2f(src1.cols, 0), cornerPoints2[2] + Point2f(src1.cols, 0), Scalar(0, 255, 255), 4, 8, 0);
 line(matchesImg, cornerPoints2[2] + Point2f(src1.cols, 0), cornerPoints2[3] + Point2f(src1.cols, 0), Scalar(0, 255, 255), 4, 8, 0);
 line(matchesImg, cornerPoints2[3] + Point2f(src1.cols, 0), cornerPoints2[0] + Point2f(src1.cols, 0), Scalar(0, 255, 255), 4, 8, 0);

 //Draw the transformed target contour on the original image src2
 line(src2, cornerPoints2[0], cornerPoints2[1], Scalar(0,255,255), 4, 8, 0);
 line(src2, cornerPoints2[1], cornerPoints2[2], Scalar(0,255,255), 4, 8, 0);
 line(src2, cornerPoints2[2], cornerPoints2[3], Scalar(0,255,255), 4, 8, 0);
 line(src2, cornerPoints2[3], cornerPoints2[0], Scalar(0,255,255), 4, 8, 0);

 imshow("output", matchesImg);
 imshow("output2", src2);

 waitKey();
 return 0;
}
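Note: SURF lives in the xfeatures2d module of opencv_contrib, so this program needs OpenCV built with the contrib modules (and, depending on the version, with the non-free algorithms enabled).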

This is the whole of the opencv3/C++ planar object recognition and perspective transformation method I have shared with you. I hope it gives you a reference, and I hope you will support me more.