Using estimateRigidTransform instead of findHomography

Solution 1

I've done it this way in the past:

cv::Mat R = cv::estimateRigidTransform(p1, p2, false);

// estimateRigidTransform returns an empty matrix if it fails
// (this snippet sits inside a loop over point sets, hence the continue)
if (R.cols == 0)
{
    continue;
}

// extend the 2x3 rigid transform to a 3x3 matrix so warpPerspective can use it
cv::Mat H = cv::Mat(3, 3, R.type());
H.at<double>(0,0) = R.at<double>(0,0);
H.at<double>(0,1) = R.at<double>(0,1);
H.at<double>(0,2) = R.at<double>(0,2);

H.at<double>(1,0) = R.at<double>(1,0);
H.at<double>(1,1) = R.at<double>(1,1);
H.at<double>(1,2) = R.at<double>(1,2);

H.at<double>(2,0) = 0.0;
H.at<double>(2,1) = 0.0;
H.at<double>(2,2) = 1.0;

cv::Mat warped;
cv::warpPerspective(img1, warped, H, img1.size());

which is the same as what David Nilosek suggested: add a [0 0 1] row at the end of the matrix.

This code warps the images with a rigid transformation.

If you want to warp/transform the points, you must use the perspectiveTransform function with a 3x3 matrix ( http://docs.opencv.org/modules/core/doc/operations_on_arrays.html?highlight=perspectivetransform#perspectivetransform )

tutorial here:

http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html

or you can do it manually by looping over your vector and computing, for each point:

cv::Point2f result;
result.x = point.x * R.at<double>(0,0) + point.y * R.at<double>(0,1) + R.at<double>(0,2);
result.y = point.x * R.at<double>(1,0) + point.y * R.at<double>(1,1) + R.at<double>(1,2);
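
A minimal sketch of that loop (a suggestion, assuming points is your std::vector<cv::Point2f> and R is the 2x3 CV_64F matrix returned by estimateRigidTransform):

// apply the 2x3 rigid transform to every point in the vector
std::vector<cv::Point2f> transformed;
transformed.reserve(points.size());
for (unsigned int i = 0; i < points.size(); ++i)
{
    cv::Point2f result;
    result.x = points[i].x * R.at<double>(0,0) + points[i].y * R.at<double>(0,1) + R.at<double>(0,2);
    result.y = points[i].x * R.at<double>(1,0) + points[i].y * R.at<double>(1,1) + R.at<double>(1,2);
    transformed.push_back(result);
}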

Hope that helps.

Remark: I didn't test the manual code, but it should work. No perspectiveTransform conversion is needed there!

Edit: this is the full (tested) code:

// points
std::vector<cv::Point2f> p1;
p1.push_back(cv::Point2f(0,0));
p1.push_back(cv::Point2f(1,0));
p1.push_back(cv::Point2f(0,1));

// simple translation from p1 for testing:
std::vector<cv::Point2f> p2;
p2.push_back(cv::Point2f(1,1));
p2.push_back(cv::Point2f(2,1));
p2.push_back(cv::Point2f(1,2));

cv::Mat R = cv::estimateRigidTransform(p1,p2,false);

// extend rigid transformation to use perspectiveTransform:
cv::Mat H = cv::Mat(3,3,R.type());
H.at<double>(0,0) = R.at<double>(0,0);
H.at<double>(0,1) = R.at<double>(0,1);
H.at<double>(0,2) = R.at<double>(0,2);

H.at<double>(1,0) = R.at<double>(1,0);
H.at<double>(1,1) = R.at<double>(1,1);
H.at<double>(1,2) = R.at<double>(1,2);

H.at<double>(2,0) = 0.0;
H.at<double>(2,1) = 0.0;
H.at<double>(2,2) = 1.0;

// compute perspectiveTransform on p1
std::vector<cv::Point2f> result;
cv::perspectiveTransform(p1,result,H);

for(unsigned int i=0; i<result.size(); ++i)
    std::cout << result[i] << std::endl;

which gives output as expected:

[1, 1]
[2, 1]
[1, 2]

Solution 2

The affine transformations (the result of cv::estimateRigidTransform) are applied to the image with the function cv::warpAffine.
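
A minimal usage sketch, assuming img1, p1 and p2 are defined as in Solution 1:

cv::Mat A = cv::estimateRigidTransform(p1, p2, false); // 2x3 affine matrix (empty if estimation fails)
cv::Mat warped;
cv::warpAffine(img1, warped, A, img1.size()); // takes the 2x3 matrix directly, no 3x3 extension needed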

Solution 3

The 3x3 homography form of a rigid transform is:

 a1 a2 b1
-a2 a3 b2
  0  0  1

So when using estimateRigidTransform you could add [0 0 1] as the third row, if you want the 3x3 matrix.
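
One way to append that row without filling a 3x3 matrix element by element is cv::vconcat; a sketch, assuming objPoints and scePoints are matching std::vector<cv::Point2f> point sets:

cv::Mat R = cv::estimateRigidTransform(objPoints, scePoints, false); // 2x3, CV_64F
cv::Mat row = (cv::Mat_<double>(1, 3) << 0, 0, 1);                   // same type as R
cv::Mat H;
cv::vconcat(R, row, H); // 3x3 matrix, usable with perspectiveTransform or warpPerspective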

Comments

  • Tom smith, almost 2 years

    The example in the link below uses findHomography to get the transformation between two sets of points. I want to limit the degrees of freedom used in the transformation, so I want to replace findHomography with estimateRigidTransform.

    http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html#feature-homography

    Below I use estimateRigidTransform to get the transformation between the object and scene points. objPoints and scePoints are represented by vector<Point2f>.

    Mat H = estimateRigidTransform(objPoints, scePoints, false);
    

    Following the method used in the tutorial above, I want to transform the corner values using the transformation H. The tutorial uses perspectiveTransform with the 3x3 matrix returned by findHomography. estimateRigidTransform only returns a 2x3 matrix, so this method cannot be used.

    How would I transform the corner values, represented as vector<Point2f>, with this 2x3 matrix? I am just looking to perform the same functions as the tutorial, but with fewer degrees of freedom for the transformation. I have looked at other methods such as warpAffine and getPerspectiveTransform as well, but so far have not found a solution.

    EDIT:

    I have tried the suggestion from David Nilosek. Below I am adding the extra row to the matrix.

    Mat row = (Mat_<double>(1,3) << 0, 0, 1);
    H.push_back(row);
    

    However, this gives the following error when using perspectiveTransform.

    OpenCV Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0)) in create, file /Users/cgray/Downloads/opencv-2.4.6/modules/core/src/matrix.cpp, line 1486
    libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Users/cgray/Downloads/opencv-2.4.6/modules/core/src/matrix.cpp:1486: error: (-215) mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0) in function create
    

    ChronoTrigger suggested using warpAffine. I am calling the warpAffine method below; the size of 1 x 4 is the size of objCorners and sceCorners.

    warpAffine(objCorners, sceCorners, H, Size(1,4));
    

    This gives the error below, which suggests a wrong type. objCorners and sceCorners are vector<Point2f> representing the 4 corners. I think warpAffine expects Mat images, which may explain the error.

    OpenCV Error: Assertion failed ((M0.type() == CV_32F || M0.type() == CV_64F) && M0.rows == 2 && M0.cols == 3) in warpAffine, file /Users/cgray/Downloads/opencv-2.4.6/modules/imgproc/src/imgwarp.cpp, line 3280
    
  • Tom smith, almost 10 years
    I have tried to implement this, I am getting an error when calling perspectiveTransform which I have described in an edit to my original question.
  • Tom smith, almost 10 years
    I have updated the question with the results I got after trying your suggestion.
  • David Nilosek, almost 10 years
    Guessing by the exception that it is throwing, the matrices are not of the same type. This function is simply a matrix multiplication followed by a division, you could code it if you cannot figure out how to use the function.
  • Tom smith, almost 10 years
    I am not using matrices for perspectiveTransform it is the two vectors of Point2f which are of the same type. These are the same vectors that are used in the tutorial link, just with a different transformation matrix.
  • Tom smith, almost 10 years
    I still get the same error message about type, which I assume means objCorners and sceCorners. These are not Mat matrices they are vectors of Point2f as I only want to apply the transformation to the corner values at the moment.
  • Micka, almost 10 years
    Ah ok, sorry. You can just multiply manually: result.x = a.x*transf.at<double>(0,0) + a.y*transf.at<double>(0,1) + transf.at<double>(0,2); result.y = ... standard matrix multiplication. I find that's less error-prone than some point-to-mat conversions.
  • Micka, almost 10 years
    @Tomsmith added tested code for usage of perspectiveTransform on sparse vector of Point2f with a rigid transform
  • Tom smith, almost 10 years
    I have it working with the manual code but I will also try the edit you made, thanks!
  • Gelliant, almost 2 years
    That does not seem right. A rigid transform can use rotation and translation. So it confines [a1, a2; -a2, a3] of the affine transform to the values of the rotation matrix.
  • Gelliant, almost 2 years
    Sorry, I was confused. The naming in OpenCV was a bit odd, imho. Rigid transform usually means you only use rotation and translation, so it confines [a1, a2; -a2, a3] of the affine transform to the values of the rotation matrix. However, estimateRigidTransform estimates just the affine transform, which is probably the reason they updated the naming (the newer functions are sketched after this list).
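
As the last comment hints, newer OpenCV versions deprecate estimateRigidTransform in favour of cv::estimateAffinePartial2D (rotation, uniform scale and translation, i.e. 4 degrees of freedom) and cv::estimateAffine2D (full affine), both of which also return a 2x3 matrix. A minimal sketch, assuming p1 and p2 are the point vectors from Solution 1:

#include <opencv2/calib3d.hpp>

// roughly the modern equivalent of estimateRigidTransform(p1, p2, false)
cv::Mat A = cv::estimateAffinePartial2D(p1, p2);
// cv::estimateAffine2D(p1, p2) would correspond to estimateRigidTransform(p1, p2, true)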