Using estimateRigidTransform instead of findHomography
Solution 1
I've done it this way in the past:
cv::Mat R = cv::estimateRigidTransform(p1,p2,false);
if(R.cols == 0) // estimation failed: estimateRigidTransform returns an empty Mat
{
continue; // (this snippet lives inside a loop, hence the continue)
}
cv::Mat H = cv::Mat(3,3,R.type());
H.at<double>(0,0) = R.at<double>(0,0);
H.at<double>(0,1) = R.at<double>(0,1);
H.at<double>(0,2) = R.at<double>(0,2);
H.at<double>(1,0) = R.at<double>(1,0);
H.at<double>(1,1) = R.at<double>(1,1);
H.at<double>(1,2) = R.at<double>(1,2);
H.at<double>(2,0) = 0.0;
H.at<double>(2,1) = 0.0;
H.at<double>(2,2) = 1.0;
cv::Mat warped;
cv::warpPerspective(img1,warped,H,img1.size());
which is the same as David Nilosek suggested: add a [0 0 1] row at the end of the matrix.
This code warps the IMAGES with a rigid transformation.
If you want to warp/transform the POINTS, you must use the perspectiveTransform
function with a 3x3 matrix ( http://docs.opencv.org/modules/core/doc/operations_on_arrays.html?highlight=perspectivetransform#perspectivetransform )
tutorial here:
http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html
or you can do it manually by looping over your vector and
cv::Point2f result;
result.x = point.x * R.at<double>(0,0) + point.y * R.at<double>(0,1) + R.at<double>(0,2);
result.y = point.x * R.at<double>(1,0) + point.y * R.at<double>(1,1) + R.at<double>(1,2);
hope that helps.
remark: I didn't test the manual code, but it should work. No perspectiveTransform conversion is needed there!
edit: this is the full (tested) code:
// points
std::vector<cv::Point2f> p1;
p1.push_back(cv::Point2f(0,0));
p1.push_back(cv::Point2f(1,0));
p1.push_back(cv::Point2f(0,1));
// simple translation from p1 for testing:
std::vector<cv::Point2f> p2;
p2.push_back(cv::Point2f(1,1));
p2.push_back(cv::Point2f(2,1));
p2.push_back(cv::Point2f(1,2));
cv::Mat R = cv::estimateRigidTransform(p1,p2,false);
// extend rigid transformation to use perspectiveTransform:
cv::Mat H = cv::Mat(3,3,R.type());
H.at<double>(0,0) = R.at<double>(0,0);
H.at<double>(0,1) = R.at<double>(0,1);
H.at<double>(0,2) = R.at<double>(0,2);
H.at<double>(1,0) = R.at<double>(1,0);
H.at<double>(1,1) = R.at<double>(1,1);
H.at<double>(1,2) = R.at<double>(1,2);
H.at<double>(2,0) = 0.0;
H.at<double>(2,1) = 0.0;
H.at<double>(2,2) = 1.0;
// compute perspectiveTransform on p1
std::vector<cv::Point2f> result;
cv::perspectiveTransform(p1,result,H);
for(unsigned int i=0; i<result.size(); ++i)
std::cout << result[i] << std::endl;
which gives output as expected:
[1, 1]
[2, 1]
[1, 2]
Solution 2
The affine transformations (the result of cv::estimateRigidTransform) are applied to the image with the function cv::warpAffine.
Solution 3
The 3x3 homography form of a rigid transform is:
a1 a2 b1
-a2 a3 b2
0 0 1
So when using estimateRigidTransform you could add [0 0 1] as the third row, if you want the 3x3 matrix.
Tom smith
Updated on June 09, 2022

Comments
-
Tom smith almost 2 years
The example in the link below uses findHomography to get the transformation between two sets of points. I want to limit the degrees of freedom used in the transformation, so I want to replace findHomography with estimateRigidTransform.
Below I use estimateRigidTransform to get the transformation between the object and scene points. objPoints and scePoints are represented by vector<Point2f>.
Mat H = estimateRigidTransform(objPoints, scePoints, false);
Following the method used in the tutorial above, I want to transform the corner values using the transformation H. The tutorial uses perspectiveTransform with the 3x3 matrix returned by findHomography. The rigid transform only returns a 2x3 matrix, so this method cannot be used.
How would I transform the values of the corners, represented as vector<Point2f>, with this 2x3 matrix? I am just looking to perform the same functions as the tutorial but with fewer degrees of freedom for the transformation. I have looked at other methods such as warpAffine and getPerspectiveTransform as well, but so far have not found a solution.
EDIT:
I have tried the suggestion from David Nilosek. Below I am adding the extra row to the matrix.
Mat row = (Mat_<double>(1,3) << 0, 0, 1); H.push_back(row);
However, this gives the following error when using perspectiveTransform.
OpenCV Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0)) in create, file /Users/cgray/Downloads/opencv-2.4.6/modules/core/src/matrix.cpp, line 1486
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Users/cgray/Downloads/opencv-2.4.6/modules/core/src/matrix.cpp:1486: error: (-215) mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0) in function create
ChronoTrigger suggested using warpAffine. I am calling the warpAffine method below; the size of 1 x 4 is the size of objCorners and sceCorners.
warpAffine(objCorners, sceCorners, H, Size(1,4));
This gives the error below, which suggests the wrong type. objCorners and sceCorners are vector<Point2f> representing the 4 corners. I thought warpAffine would accept Mat images, which may explain the error.
OpenCV Error: Assertion failed ((M0.type() == CV_32F || M0.type() == CV_64F) && M0.rows == 2 && M0.cols == 3) in warpAffine, file /Users/cgray/Downloads/opencv-2.4.6/modules/imgproc/src/imgwarp.cpp, line 3280
-
Tom smith almost 10 years: I have tried to implement this; I am getting an error when calling perspectiveTransform, which I have described in an edit to my original question.
-
Tom smith almost 10 years: I have updated the question with the results I got after trying your suggestion.
-
David Nilosek almost 10 years: Guessing by the exception it is throwing, the matrices are not of the same type. This function is simply a matrix multiplication followed by a division; you could code it yourself if you cannot figure out how to use the function.
-
Tom smith almost 10 years: I am not using matrices for perspectiveTransform; it is the two vectors of Point2f, which are of the same type. These are the same vectors that are used in the tutorial link, just with a different transformation matrix.
-
Tom smith almost 10 years: I still get the same error message about type, which I assume refers to objCorners and sceCorners. These are not Mat matrices; they are vectors of Point2f, as I only want to apply the transformation to the corner values at the moment.
-
Micka almost 10 years: Ah ok, sorry. You can just multiply manually:
result.x = a.x*transf.at<double>(0,0) + a.y*transf.at<double>(0,1) + transf.at<double>(0,2); result.y = ...
standard matrix multiplication. I find that's less error-prone than some point-to-Mat conversions.
Micka almost 10 years: @Tomsmith added tested code for usage of perspectiveTransform on a sparse vector of Point2f with a rigid transform.
Tom smith almost 10 years: I have it working with the manual code, but I will also try the edit you made. Thanks!
-
Gelliant almost 2 years: That does not seem right. A rigid transform can only use rotation and translation, so it confines [a1, a2; -a2, a3] of the affine transform to the values of a rotation matrix.
Gelliant almost 2 years: Sorry, I was confused. The naming in OpenCV was a bit weird, imho. Rigid transform usually means you only use rotation and translation, so it confines [a1, a2; -a2, a3] of the affine transform to the values of a rotation matrix. However, estimateRigidTransform would estimate just the affine transform. Probably the reason they updated the naming.