OpenCV warpPerspective

Solution 1

The problem occurs because the homography maps part of the image to negative x, y values, which are outside the image area and so cannot be plotted. What we wish to do is offset the warped output by some number of pixels to 'shunt' the entire warped image into positive coordinates (and hence inside the image area).

Homographies can be combined using matrix multiplication (which is why they are so powerful). If A and B are homographies, then AB represents the homography which applies B first, and then A.

Because of this, all we need to do to offset the output is create the homography matrix for a translation by some offset, and then pre-multiply our original homography matrix by it.

A 2D homography matrix looks like this:

[R11,R12,T1]
[R21,R22,T2]
[ P , P , 1]

where the R terms represent a rotation, the T terms a translation, and the P terms a perspective warp. A purely translational homography therefore looks like this:

[ 1 , 0 , x_offset]
[ 0 , 1 , y_offset]
[ 0 , 0 ,    1    ]

So just premultiply your homography by a matrix similar to the above, and your output image will be offset.

(Make sure you use matrix multiplication, not element-wise multiplication!)
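
For concreteness, here is a minimal sketch of that composition. The names src, dst_size, and the offset values are placeholders, and H is assumed to be a 3x3 CV_64F cv::Mat:

#include <opencv2/imgproc.hpp>

// Warp src through H, shifted so the output lands at positive coordinates.
// x_offset and y_offset are whatever shift your case needs.
cv::Mat warp_with_offset(const cv::Mat& src, const cv::Mat& H,
                         double x_offset, double y_offset, cv::Size dst_size)
{
    // Translation homography from above.
    cv::Mat T = (cv::Mat_<double>(3, 3) << 1, 0, x_offset,
                                           0, 1, y_offset,
                                           0, 0, 1);

    // Matrix multiplication, not element-wise: T * H applies H first,
    // then the translation.
    cv::Mat dst;
    cv::warpPerspective(src, dst, T * H, dst_size);
    return dst;
}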

Solution 2

The secret comes in two parts: the transform matrix (homography), and the resulting image size.

  • Calculate a correct transform using getPerspectiveTransform(): take 4 points from the original image, compute their correct positions in the destination, put them into two vectors in the same order, and use them to compute the perspective transform matrix.

  • Make sure the destination image size (the third parameter of warpPerspective()) is exactly what you want. Define it as Size(myWidth, myHeight). A minimal sketch of both steps follows this list.
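
Here is a sketch of this recipe with made-up correspondences; the input image and the width/height values are placeholders you would replace with your own:

#include <opencv2/imgproc.hpp>
#include <vector>

cv::Mat warp_example(const cv::Mat& input, int myWidth, int myHeight)
{
    // Hypothetical correspondences: four source points and where you
    // want them to land in the destination, in the same order.
    std::vector<cv::Point2f> src_pts = {
        {0.f, 0.f}, {639.f, 0.f}, {639.f, 479.f}, {0.f, 479.f}
    };
    std::vector<cv::Point2f> dst_pts = {
        {50.f, 0.f}, {589.f, 30.f}, {639.f, 479.f}, {0.f, 449.f}
    };

    cv::Mat M = cv::getPerspectiveTransform(src_pts, dst_pts);

    // The Size must be big enough to hold everything you mapped.
    cv::Mat warped;
    cv::warpPerspective(input, warped, M, cv::Size(myWidth, myHeight));
    return warped;
}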

Solution 3

Here is one method I have used; it works.

// Take the four corners of the source image tmp and see where they land.
std::vector<Point2f> obj_corners = {
    Point2f(0, 0), Point2f(tmp.cols, 0),
    Point2f(tmp.cols, tmp.rows), Point2f(0, tmp.rows)
};
std::vector<Point2f> scene_corners;
perspectiveTransform(obj_corners, scene_corners, H);

// Track the largest warped x and y coordinates.
int maxCols = 0, maxRows = 0;
for (size_t i = 0; i < scene_corners.size(); i++)
{
    if (maxRows < scene_corners.at(i).y)
        maxRows = scene_corners.at(i).y;
    if (maxCols < scene_corners.at(i).x)
        maxCols = scene_corners.at(i).x;
}

I just find the maximum of the x and y coordinates respectively and pass them to

warpPerspective( tmp, transformedImage, homography, Size( maxCols, maxRows ) );

Note that this only enlarges the canvas to the right and bottom; if the homography maps any corner to negative coordinates, you still need the translation trick from Solution 1 as well.

Solution 4

Matt's answer is a good start, and he is correct in saying you need to pre-multiply your homography by

[ 1 , 0 , x_offset]
[ 0 , 1 , y_offset]
[ 0 , 0 ,    1    ]

But he does not specify what x_offset and y_offset are. Other answers have said to just take the perspective transform, but that is not correct. You want to take the INVERSE perspective transform.

Just because the point (0, 0) transforms into, say, (-10, -10) does not mean that shifting the image by (10, 10) will result in a non-cropped image: the point (10, 10) does not necessarily map to (0, 0).
What you want to do is find out which point would map to (0, 0), and shift the image by that much. To do that, take the inverse (cv2.invert) of the homography and apply perspectiveTransform.

That is, H(0, 0) = (-10, -10) does not imply H(10, 10) = (0, 0).

You need to apply a reverse transform to find the correct points.

The point you want is H^-1(0, 0).

This will give the correct x_offset and y_offset to align your top-left point. From there, to find the correct bounding box and fit the entire image perfectly, you need to figure out the skew (how much the image slants left or up after your normal, non-inverse transformation) and add that amount to your x_offset and y_offset as well.
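
Here is a minimal sketch of that inverse lookup, assuming H is a 3x3 CV_64F cv::Mat and that src and dst_size are placeholders for your own image and output size:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

cv::Mat warp_uncropped(const cv::Mat& src, const cv::Mat& H, cv::Size dst_size)
{
    cv::Mat H_inv;
    cv::invert(H, H_inv);

    // Which source point would map to the output origin (0, 0) under H?
    std::vector<cv::Point2f> origin{ cv::Point2f(0.f, 0.f) }, pre_image;
    cv::perspectiveTransform(origin, pre_image, H_inv);

    // Shift by that much, using the translation matrix from above.
    cv::Mat T = (cv::Mat_<double>(3, 3) << 1, 0, pre_image[0].x,
                                           0, 1, pre_image[0].y,
                                           0, 0, 1);
    cv::Mat dst;
    cv::warpPerspective(src, dst, T * H, dst_size);
    return dst;
}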

EDIT: This is all theory. Images are a few pixels off in my tests, I'm not sure why.

Solution 5

Try the homography_warp function below.

void homography_warp(const cv::Mat& src, const cv::Mat& H, cv::Mat& dst);

src is the source image.

H is your homography.

dst is the warped image.

homography_warp adjusts your homography as described by Matt Freeman (https://stackoverflow.com/users/1060066/matt-freeman) in his answer https://stackoverflow.com/a/8229116/15485:

#include <algorithm>
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Convert a vector of non-homogeneous 2D points to a vector of homogeneous 2D points.
void to_homogeneous(const std::vector< cv::Point2f >& non_homogeneous, std::vector< cv::Point3f >& homogeneous)
{
    homogeneous.resize(non_homogeneous.size());
    for (size_t i = 0; i < non_homogeneous.size(); i++) {
        homogeneous[i].x = non_homogeneous[i].x;
        homogeneous[i].y = non_homogeneous[i].y;
        homogeneous[i].z = 1.0;
    }
}

// Convert a vector of homogeneous 2D points to a vector of non-homogeneous 2D points.
void from_homogeneous(const std::vector< cv::Point3f >& homogeneous, std::vector< cv::Point2f >& non_homogeneous)
{
    non_homogeneous.resize(homogeneous.size());
    for (size_t i = 0; i < non_homogeneous.size(); i++) {
        non_homogeneous[i].x = homogeneous[i].x / homogeneous[i].z;
        non_homogeneous[i].y = homogeneous[i].y / homogeneous[i].z;
    }
}

// Transform a vector of 2D non-homogeneous points via a homography.
std::vector<cv::Point2f> transform_via_homography(const std::vector<cv::Point2f>& points, const cv::Matx33f& homography)
{
    std::vector<cv::Point3f> ph;
    to_homogeneous(points, ph);
    for (size_t i = 0; i < ph.size(); i++) {
        ph[i] = homography*ph[i];
    }
    std::vector<cv::Point2f> r;
    from_homogeneous(ph, r);
    return r;
}

// Find the bounding box of a vector of 2D non-homogeneous points.
cv::Rect_<float> bounding_box(const std::vector<cv::Point2f>& p)
{
    cv::Rect_<float> r;
    float x_min = std::min_element(p.begin(), p.end(), [](const cv::Point2f& lhs, const cv::Point2f& rhs) {return lhs.x < rhs.x; })->x;
    float x_max = std::max_element(p.begin(), p.end(), [](const cv::Point2f& lhs, const cv::Point2f& rhs) {return lhs.x < rhs.x; })->x;
    float y_min = std::min_element(p.begin(), p.end(), [](const cv::Point2f& lhs, const cv::Point2f& rhs) {return lhs.y < rhs.y; })->y;
    float y_max = std::max_element(p.begin(), p.end(), [](const cv::Point2f& lhs, const cv::Point2f& rhs) {return lhs.y < rhs.y; })->y;
    return cv::Rect_<float>(x_min, y_min, x_max - x_min, y_max - y_min);
}

// Warp the image src into the image dst through the homography H.
// The resulting dst image contains the entire warped image; this
// behaviour is the same as Octave's imperspectivewarp (in the 'image'
// package) when the argument bbox is equal to 'loose'.
// See http://octave.sourceforge.net/image/function/imperspectivewarp.html
void homography_warp(const cv::Mat& src, const cv::Mat& H, cv::Mat& dst)
{
    std::vector< cv::Point2f > corners;
    corners.push_back(cv::Point2f(0, 0));
    corners.push_back(cv::Point2f(src.cols, 0));
    corners.push_back(cv::Point2f(0, src.rows));
    corners.push_back(cv::Point2f(src.cols, src.rows));

    std::vector< cv::Point2f > projected = transform_via_homography(corners, H);
    cv::Rect_<float> bb = bounding_box(projected);

    cv::Mat_<double> translation = (cv::Mat_<double>(3, 3) << 1, 0, -bb.tl().x, 0, 1, -bb.tl().y, 0, 0, 1);

    cv::warpPerspective(src, dst, translation*H, bb.size());
}
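
Example usage, assuming image is an already-loaded cv::Mat and H is a 3x3 CV_64F homography:

cv::Mat warped;
homography_warp(image, H, warped);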

Comments

  • Hien (over 3 years ago)

    For some reason, whenever I use OpenCV's warpPerspective() function, the final warped image does not contain everything in the original image. The left part of the image seems to get cut off. I think this is happening because the warped image is created at the leftmost position of the canvas for warpPerspective(). Is there some way to fix this? Thanks