Improve matching of feature points with OpenCV
Solution 1
By comparing all the feature detection algorithms I found a good combination which gives me many more matches. Now I am using FAST for feature detection, SIFT for feature extraction and BruteForce for matching. Combined with a check whether each match lies inside a defined region, I get a lot of matches, see the image:
(source: codemax.de)
The relevant code:
//detection: FAST with dynamically adapted thresholds
Ptr<FeatureDetector> detector;
detector = new DynamicAdaptedFeatureDetector(new FastAdjuster(10, true), 5000, 10000, 10);
detector->detect(leftImageGrey, keypoints_1);
detector->detect(rightImageGrey, keypoints_2);

//extraction: SIFT descriptors
Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("SIFT");
extractor->compute(leftImageGrey, keypoints_1, descriptors_1);
extractor->compute(rightImageGrey, keypoints_2, descriptors_2);

//matching: brute force, keeping up to 500 candidates per keypoint
vector< vector<DMatch> > matches;
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce");
matcher->knnMatch(descriptors_1, descriptors_2, matches, 500);
//keep a match only if it lies inside a defined area of the image:
//at most 25% of the maximum possible (diagonal) distance
double thresholdDist = 0.25 * sqrt(double(leftImageGrey.size().height * leftImageGrey.size().height + leftImageGrey.size().width * leftImageGrey.size().width));

vector< DMatch > good_matches2;
good_matches2.reserve(matches.size());
for (size_t i = 0; i < matches.size(); ++i)
{
    for (size_t j = 0; j < matches[i].size(); j++)
    {
        Point2f from = keypoints_1[matches[i][j].queryIdx].pt;
        Point2f to = keypoints_2[matches[i][j].trainIdx].pt;

        //calculate local distance for each possible match
        double dist = sqrt((from.x - to.x) * (from.x - to.x) + (from.y - to.y) * (from.y - to.y));

        //save as best match if the local distance is inside the specified area
        //and both points lie on (nearly) the same height
        if (dist < thresholdDist && fabs(from.y - to.y) < 5)
        {
            good_matches2.push_back(matches[i][j]);
            break; //keep only the first acceptable candidate per keypoint
        }
    }
}
Solution 2
An alternative method of determining high-quality feature matches is the ratio test proposed by David Lowe in his paper on SIFT (page 20 for an explanation). This test rejects ambiguous matches by comparing the distance of the best match with that of the second-best match: if the ratio of the two distances is below some threshold, the match is kept as distinctive; otherwise it is discarded as likely low-quality.
std::vector<std::vector<cv::DMatch>> matches;
cv::BFMatcher matcher;
matcher.knnMatch(descriptors_1, descriptors_2, matches, 2); // find the two nearest matches

const float ratio = 0.8f; // as in Lowe's paper; can be tuned
std::vector<cv::DMatch> good_matches;
for (size_t i = 0; i < matches.size(); ++i)
{
    if (matches[i][0].distance < ratio * matches[i][1].distance)
    {
        good_matches.push_back(matches[i][0]);
    }
}
Solution 3
Besides the ratio test, you can:

Keep only symmetric matches:
void symmetryTest(const std::vector<cv::DMatch>& matches1, const std::vector<cv::DMatch>& matches2, std::vector<cv::DMatch>& symMatches)
{
    symMatches.clear();
    for (std::vector<cv::DMatch>::const_iterator matchIterator1 = matches1.begin(); matchIterator1 != matches1.end(); ++matchIterator1)
    {
        for (std::vector<cv::DMatch>::const_iterator matchIterator2 = matches2.begin(); matchIterator2 != matches2.end(); ++matchIterator2)
        {
            //keep a left->right match only if the right->left matching agrees
            if (matchIterator1->queryIdx == matchIterator2->trainIdx && matchIterator2->queryIdx == matchIterator1->trainIdx)
            {
                symMatches.push_back(cv::DMatch(matchIterator1->queryIdx, matchIterator1->trainIdx, matchIterator1->distance));
                break;
            }
        }
    }
}
and, since it is a stereo pair, use a RANSAC test based on the fundamental matrix:
void ransacTest(const std::vector<cv::DMatch>& matches, const std::vector<cv::KeyPoint>& keypoints1, const std::vector<cv::KeyPoint>& keypoints2, std::vector<cv::DMatch>& goodMatches, double distance, double confidence, double minInlierRatio)
{
    goodMatches.clear();

    // Convert keypoints into Point2f
    std::vector<cv::Point2f> points1, points2;
    for (std::vector<cv::DMatch>::const_iterator it = matches.begin(); it != matches.end(); ++it)
    {
        // Get the position of the left keypoint
        float x = keypoints1[it->queryIdx].pt.x;
        float y = keypoints1[it->queryIdx].pt.y;
        points1.push_back(cv::Point2f(x, y));
        // Get the position of the right keypoint
        x = keypoints2[it->trainIdx].pt.x;
        y = keypoints2[it->trainIdx].pt.y;
        points2.push_back(cv::Point2f(x, y));
    }

    // Compute the fundamental matrix using RANSAC;
    // 'distance' is the maximum distance to the epipolar line,
    // 'confidence' is the desired confidence probability
    std::vector<uchar> inliers(points1.size(), 0);
    cv::Mat fundamental = cv::findFundamentalMat(cv::Mat(points1), cv::Mat(points2), inliers, CV_FM_RANSAC, distance, confidence);

    // Extract the surviving (inlier) matches
    std::vector<uchar>::const_iterator itIn = inliers.begin();
    std::vector<cv::DMatch>::const_iterator itM = matches.begin();
    for ( ; itIn != inliers.end(); ++itIn, ++itM)
    {
        if (*itIn)
        {
            // it is a valid match
            goodMatches.push_back(*itM);
        }
    }
}
filla2003
Updated on September 06, 2020

Comments
- filla2003 over 3 years:

I want to match feature points in stereo images. I've already found and extracted the feature points with different algorithms, and now I need a good matching. In this case I'm using the FAST algorithm for detection and extraction, and the BruteForceMatcher for matching the feature points.

The matching code:

vector< vector<DMatch> > matches; //using either FLANN or BruteForce
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create(algorithmName);
matcher->knnMatch( descriptors_1, descriptors_2, matches, 1 );

//just some temporary code to have the right data structure
vector< DMatch > good_matches2;
good_matches2.reserve(matches.size());
for (size_t i = 0; i < matches.size(); ++i)
{
    good_matches2.push_back(matches[i][0]);
}
Because there are a lot of false matches, I calculated the min and max distances and removed all matches that are too bad:

//calculation of max and min distances between keypoints
double max_dist = 0;
double min_dist = 100;
for (int i = 0; i < descriptors_1.rows; i++)
{
    double dist = good_matches2[i].distance;
    if (dist < min_dist) min_dist = dist;
    if (dist > max_dist) max_dist = dist;
}

//find the "good" matches
vector< DMatch > good_matches;
for (int i = 0; i < descriptors_1.rows; i++)
{
    if (good_matches2[i].distance <= 5 * min_dist)
    {
        good_matches.push_back(good_matches2[i]);
    }
}
The problem is, that I either get a lot of false matches or only a few right ones (see the images below).
(source: codemax.de)
(source: codemax.de)

I think it's not a problem of programming but more a matching thing. As far as I understand, the BruteForceMatcher only regards the visual distance of feature points (which is stored in the descriptors computed by the FeatureExtractor), not the local distance (x & y position), which in my case is important, too. Does anybody have experience with this problem, or a good idea to improve the matching results?

EDIT
I changed the code so that it gives me the 50 best matches. After this I go through the matches in order, checking whether each one lies in a specified area. If it doesn't, I take the next match until I have found one inside the given area.
vector< vector<DMatch> > matches;
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create(algorithmName);
matcher->knnMatch( descriptors_1, descriptors_2, matches, 50 );

//look whether the match is inside a defined area of the image
double thresholdDist = 0.25 * sqrt(double(leftImageGrey.size().height * leftImageGrey.size().height + leftImageGrey.size().width * leftImageGrey.size().width));

vector< DMatch > good_matches2;
good_matches2.reserve(matches.size());
for (size_t i = 0; i < matches.size(); ++i)
{
    for (size_t j = 0; j < matches[i].size(); j++)
    {
        //calculate local distance for each possible match
        Point2f from = keypoints_1[matches[i][j].queryIdx].pt;
        Point2f to = keypoints_2[matches[i][j].trainIdx].pt;
        double dist = sqrt((from.x - to.x) * (from.x - to.x) + (from.y - to.y) * (from.y - to.y));
        //save as best match if the local distance is inside the specified area
        if (dist < thresholdDist)
        {
            good_matches2.push_back(matches[i][j]);
            break;
        }
    }
}
I don't think I get more matches, but this way I'm able to remove more false matches:

(source: codemax.de)

- filla2003 almost 11 years: As you can see in my code, I've already done a test to remove bad matches (although it's slightly different from your suggestion). That's why I have only a few matches in my second image. I need an idea of how to improve the matching before it gives me bad matches that I have to sort out.
- Aurelius almost 11 years: @20steffi03 What descriptor are you using to do the matching? Some descriptors are better than others, but filtering out "bad matches" is virtually certain to be necessary. You're already doing a brute-force search, which will give the highest quality of the available matchers.
- filla2003 almost 11 years: I've compared several feature detection algorithms. In my case a FAST detector combined with a SIFT extractor gives me the best results. As you can see in my edited post, I've regarded the local position of the match, which hopefully makes the matching better. My results are much better now. Thanks for your help!
- Leeor over 10 years: Please change the criterion to matches[i][1].distance
- mkuse almost 9 years: What are some typical values one would use for distance, confidence, and minInlierRatio?
- Mr.WorshipMe over 8 years: But he's using a knnMatch, which returns a vector of vectors of matches.
- cujo30227 over 7 years: For a university project, I'm studying the application of OpenCV for calculating the difference in stereo images. Apparently you managed to get a robust algorithm. Would you mind sharing more of your implementation, e.g. the interface with OpenCV etc.?
- Yuriy Chernyshov about 7 years: Two notes: I do not see DynamicAdaptedFeatureDetector in OpenCV 3, and, more importantly, SIFT and SURF are patented algorithms.
- C.Radford almost 7 years: @YuriyChernyshov SIFT and SURF are indeed patented algorithms, but if you are using them for research purposes only, take a look at their copyright license: if memory serves, you are able to use them freely for research purposes provided you state that it is not your algorithm.
- TommyAutoMagically about 3 years: +1 for mentioning that Lowe proposed using a threshold of 0.8. I was about to roll with ≈0.5, as many other blogs/answers recommend.