OpenCV: How to find the center of mass/centroid for motion information


So, you already got your point lists:

obj.push_back( kp_object[ good_matches[i].queryIdx ].pt );
scene.push_back( kp_image[ good_matches[i].trainIdx ].pt );

I think it's perfectly valid to calculate the centroid from those; no further image processing is necessary.

There are two methods. The 'center of mass' way is just the mean position of all points:

Point2f cen(0,0);
for ( size_t i=0; i<scene.size(); i++ )
{
    cen.x += scene[i].x;
    cen.y += scene[i].y;
}
// divide by the point count to get the mean (assumes scene is non-empty)
cen.x /= scene.size();
cen.y /= scene.size();
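
Equivalently, cv::mean can do the averaging for you: it treats the vector<Point2f> as a two-channel array, so the x and y means come back in the first two entries of the returned Scalar (a minimal sketch, assuming scene is non-empty):

// same centroid via cv::mean: m[0] is the mean x, m[1] the mean y
cv::Scalar m = cv::mean(scene);
cv::Point2f cen( (float)m[0], (float)m[1] );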

and the 'center of bounding box' way:

Point2f pmin(1000000,1000000);
Point2f pmax(0,0);
for ( size_t i=0; i<scene.size(); i++ )
{
    if ( scene[i].x < pmin.x ) pmin.x = scene[i].x;
    if ( scene[i].y < pmin.y ) pmin.y = scene[i].y;
    if ( scene[i].x > pmax.x ) pmax.x = scene[i].x;
    if ( scene[i].y > pmax.y ) pmax.y = scene[i].y;
}
// midpoint of the box is (min + max) / 2, not (max - min) / 2
Point2f cen( (pmin.x+pmax.x)/2, (pmin.y+pmax.y)/2 );
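
You can also get the same box from cv::boundingRect (newer OpenCV versions accept a vector<Point2f> directly; note it rounds to integer coordinates):

// axis-aligned bounding box of the point set, and its midpoint
cv::Rect r = cv::boundingRect(scene);
cv::Point2f cen( r.x + r.width * 0.5f, r.y + r.height * 0.5f );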

Note that the two results will generally be different! They only coincide for point-symmetric objects like circles and squares.

// now draw a circle around the centroid:
cv::circle( img, cen, 10, Scalar(0,0,255), 2 );

// and lines connecting each matched query/train point pair
// (this goes inside a loop over the matches, where i is defined):
for ( size_t i=0; i<scene.size(); i++ )
    cv::line( img, scene[i], obj[i], Scalar(255,0,0), 2 );
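
And if you want the motion trajectory over time (see the comments below), one option is a sketch like this, assuming a track vector that persists outside the capture loop:

std::vector<cv::Point2f> track;   // declared once, before the while loop

// inside the loop, after computing cen for the current frame:
track.push_back(cen);
for ( size_t i=1; i<track.size(); i++ )
    cv::line( img, track[i-1], track[i], Scalar(0,255,255), 2 );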

Comments

  • Shreya M
    Shreya M almost 2 years

    The thing is, I am unable to implement the center of mass with the existing code (which image object to use, etc.) after the detected object is bounded by the rectangle, so that I may get the trajectory of the path. I am using OpenCV 2.3. I found there are two methods: Link1 and Link2 talk about the usage of moments, and the other method is to use the information of the bounding box (Link3). The method of moments requires image thresholding. However, when using SURF the image is in grayscale, so passing a gray image for thresholding displays a white image! Now I am having a tough time understanding how I should calculate the centroid using the code below (especially what I should use instead of points[i].x, since I am using

    obj.push_back( kp_object[ good_matches[i].queryIdx ].pt );
    scene.push_back( kp_image[ good_matches[i].trainIdx ].pt );
    

    where in my case numPoints = good_matches.size(), denoting the number of feature points, as mentioned in the documentation. If anyone can put up an implementation of how to use SURF with a centroid, it will be helpful.

    #include <stdio.h>
    #include <iostream>
    #include "opencv2/core/core.hpp"
    #include "opencv2/features2d/features2d.hpp"
    #include "opencv2/highgui/highgui.hpp"
    #include "opencv2/imgproc/imgproc.hpp"
    #include "opencv2/calib3d/calib3d.hpp"
    
    using namespace cv;
    
    int main()
    {
        Mat object = imread( "object.png", CV_LOAD_IMAGE_GRAYSCALE );
    
        if( !object.data )
        {
            std::cout<< "Error reading object " << std::endl;
            return -1;
        }
    
        //Detect the keypoints using SURF Detector
        int minHessian = 500;
    
        SurfFeatureDetector detector( minHessian );
        std::vector<KeyPoint> kp_object;
    
        detector.detect( object, kp_object );
    
        //Calculate descriptors (feature vectors)
        SurfDescriptorExtractor extractor;
        Mat des_object;
    
        extractor.compute( object, kp_object, des_object );
    
        FlannBasedMatcher matcher;
    
        VideoCapture cap(0);
    
        namedWindow("Good Matches");
    
        std::vector<Point2f> obj_corners(4);
    
        //Get the corners from the object
        obj_corners[0] = cvPoint(0,0);
        obj_corners[1] = cvPoint( object.cols, 0 );
        obj_corners[2] = cvPoint( object.cols, object.rows );
        obj_corners[3] = cvPoint( 0, object.rows );
    
        char key = 'a';
        int framecount = 0;
        while (key != 27)
        {
            Mat frame;
            cap >> frame;
    
            if (framecount < 5)
            {
                framecount++;
                continue;
            }
    
            Mat des_image, img_matches;
            std::vector<KeyPoint> kp_image;
            std::vector<vector<DMatch > > matches;
            std::vector<DMatch > good_matches;
            std::vector<Point2f> obj;
            std::vector<Point2f> scene;
            std::vector<Point2f> scene_corners(4);
            Mat H;
            Mat image;
    
        cvtColor(frame, image, CV_BGR2GRAY);  // camera frames are BGR, not RGB
    
            detector.detect( image, kp_image );
            extractor.compute( image, kp_image, des_image );
    
            matcher.knnMatch(des_object, des_image, matches, 2);
    
        for(int i = 0; i < min(des_image.rows-1,(int) matches.size()); i++)
        {
            // ratio test: keep the best match only if it is clearly closer
            // than the second-best one (matches[i][1], not matches[i][4],
            // which would read out of bounds after knnMatch(..., 2))
            if( (int) matches[i].size() >= 2 && matches[i][0].distance < 0.6*matches[i][1].distance )
            {
                good_matches.push_back(matches[i][0]);
            }
        }
    
            //Draw only "good" matches
            drawMatches( object, kp_object, image, kp_image, good_matches, img_matches, Scalar::all(-1), Scalar::all(-1), vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
    
            if (good_matches.size() >= 4)
            {
            for( size_t i = 0; i < good_matches.size(); i++ )
                {
                    //Get the keypoints from the good matches
                    obj.push_back( kp_object[ good_matches[i].queryIdx ].pt );
                    scene.push_back( kp_image[ good_matches[i].trainIdx ].pt );
                }
    
                H = findHomography( obj, scene, CV_RANSAC );
    
                perspectiveTransform( obj_corners, scene_corners, H);
    
                //Draw lines between the corners (the mapped object in the scene image )
                line( img_matches, scene_corners[0] + Point2f( object.cols, 0), scene_corners[1] + Point2f( object.cols, 0), Scalar(0, 255, 0), 4 );
                line( img_matches, scene_corners[1] + Point2f( object.cols, 0), scene_corners[2] + Point2f( object.cols, 0), Scalar( 0, 255, 0), 4 );
                line( img_matches, scene_corners[2] + Point2f( object.cols, 0), scene_corners[3] + Point2f( object.cols, 0), Scalar( 0, 255, 0), 4 );
                line( img_matches, scene_corners[3] + Point2f( object.cols, 0), scene_corners[0] + Point2f( object.cols, 0), Scalar( 0, 255, 0), 4 );
            }
    
            //Show detected matches
            imshow( "Good Matches", img_matches );
    
            key = waitKey(1);
        }
        return 0;
    }
    
  • Shreya M
    Shreya M about 11 years
    Thank you. A few follow-up questions/issues on implementing your answer; please pardon me if they appear trivial, as my knowledge is at the beginner level. (A) Where do I put your code: outside the last if block, or under for( int i = 0; i < good_matches.size(); i++ )? (B) In the first method, shouldn't there be an array subscript with cen += scene[i];, since it throws an error? (C) Is there a way to record the measured & target position, or to draw a line indicating the motion of the detected object, since a tracking operation is also being performed? (D) How do I display values using the second method?
  • berak
    berak about 11 years
    (A) Yes, inside the good_matches loop. (B) Yeah, sloppy code from me there, my bad. (C) OpenCV has a line( img, from, to, color, thickness ) function. (D) You could draw a small circle( img, center, radius, color ), but please look into the documentation for the exact args there.
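
    A minimal sketch of (C) and (D), assuming cen is the centroid computed inside the good_matches block and prev_cen is a Point2f carried over from the previous frame (both names are hypothetical):

    cv::circle( img_matches, cen, 5, Scalar(0,0,255), 2 );       // (D) mark the measured position
    cv::line( img_matches, prev_cen, cen, Scalar(255,0,0), 2 );  // (C) motion since the last frame
    prev_cen = cen;                                              // remember for the next frame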
  • Shreya M
    Shreya M about 11 years
    I could not find the C++ syntax for displaying the measured and target position. Could you kindly append your code with it? Also, for (B), will it be cen[i] += scene[i]; cen[0] /= scene.size(); cen[1] /= scene.size();, and does all this go inside the good_matches loop, but after the lines are drawn between the mapped objects?