How does one compare one image to another to see if they are similar by a certain percentage, on the iPhone?


Solution 1

As a quick, simple algorithm, I'd suggest iterating through about 1% of the pixels in each image and either comparing them directly against each other or keeping a running average and then comparing the two average color values at the end.

You can look at this answer for an idea of how to determine the color of a pixel at a given position in an image. You may want to optimize it somewhat to better suit your use-case (repeatedly querying the same image), but it should provide a good starting point.
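
That linked helper is not reproduced here, so the getRGBForX:andY: calls in the snippet below are effectively pseudocode. As a rough, hypothetical sketch (the category and its name are made up for illustration; it returns a heap-allocated red/green/blue triple that the caller must free()), it might look like this:

#import <UIKit/UIKit.h>

@interface UIImage (PixelSampling)
- (int *)getRGBForX:(int)x andY:(int)y;
@end

@implementation UIImage (PixelSampling)
- (int *)getRGBForX:(int)x andY:(int)y
{
    uint32_t pixel = 0;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // 1x1 bitmap context: drawing the image offset by (-x, -y) leaves only that pixel in it
    CGContextRef context = CGBitmapContextCreate(&pixel, 1, 1, 8, 4, colorSpace, kCGImageAlphaNoneSkipLast);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(-x, -y, CGImageGetWidth(self.CGImage), CGImageGetHeight(self.CGImage)), self.CGImage);
    CGContextRelease(context);

    const uint8_t *bytes = (const uint8_t *)&pixel;  // default byte order: R, G, B, (skipped)
    int *rgb = malloc(3 * sizeof(int));              // caller is responsible for free()
    rgb[0] = bytes[0];
    rgb[1] = bytes[1];
    rgb[2] = bytes[2];
    return rgb;
}
@end

Drawing the whole image into a 1x1 context for every sample is simple but slow; as noted above, for repeated queries against the same image you would rather extract the pixel buffer once and index into it.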

Then you can use an algorithm roughly like:

// width and height are the pixel dimensions of the two images (assumed equal)
float numDifferences = 0.0f;
float totalCompares = width * height / 100.0f;  // sampling roughly 1% of the pixels
for (int yCoord = 0; yCoord < height; yCoord += 10) {
    for (int xCoord = 0; xCoord < width; xCoord += 10) {
        int *img1RGB = [image1 getRGBForX:xCoord andY:yCoord];
        int *img2RGB = [image2 getRGBForX:xCoord andY:yCoord];
        if (abs(img1RGB[0] - img2RGB[0]) > 25 || abs(img1RGB[1] - img2RGB[1]) > 25 || abs(img1RGB[2] - img2RGB[2]) > 25) {
            //one or more pixel components differs by more than ~10% (25 out of 255)
            numDifferences++;
        }
        free(img1RGB);  // only if the helper returns heap-allocated buffers, as in the sketch above
        free(img2RGB);
    }
}

if (numDifferences / totalCompares <= 0.1f) {
    //no more than 10% of the sampled pixels differ noticeably; treat the images as roughly 90% similar
}
else {
    //more than 10% of the sampled pixels differ noticeably; treat the images as less than 90% similar
}
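
The answer's other suggestion, keeping a running average instead of counting differing pixels, is not spelled out above; a rough, hypothetical sketch of it (reusing the same getRGBForX:andY: helper and the width/height placeholders) could be:

// Running-average variant: compare the average sampled color of each image
float sum1[3] = {0, 0, 0};
float sum2[3] = {0, 0, 0};
int samples = 0;

for (int yCoord = 0; yCoord < height; yCoord += 10) {
    for (int xCoord = 0; xCoord < width; xCoord += 10) {
        int *rgb1 = [image1 getRGBForX:xCoord andY:yCoord];
        int *rgb2 = [image2 getRGBForX:xCoord andY:yCoord];
        for (int c = 0; c < 3; c++) {
            sum1[c] += rgb1[c];
            sum2[c] += rgb2[c];
        }
        samples++;
        free(rgb1);  // only needed if the helper hands back heap buffers
        free(rgb2);
    }
}

bool averagesMatch = true;
for (int c = 0; c < 3; c++) {
    // require the average of each channel to be within ~10% (25 out of 255)
    if (fabsf(sum1[c] / samples - sum2[c] / samples) > 25.0f) {
        averagesMatch = false;
    }
}

An average-color comparison tolerates global shifts (lighting, slight camera movement) better than a strict per-pixel check, but two very different images can still share the same average color, so it is best treated as a coarse signal on its own.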

Solution 2

Based on aroth's idea, this is my full implementation. It checks whether a sample of randomly chosen pixels is identical in both images. For what I needed, it works flawlessly.

- (bool)isTheImage:(UIImage *)image1 apparentlyEqualToImage:(UIImage *)image2 accordingToRandomPixelsPer1:(float)pixelsPer1
{
    if (!CGSizeEqualToSize(image1.size, image2.size))
    {
        return false;
    }

    int pixelsWidth = CGImageGetWidth(image1.CGImage);
    int pixelsHeight = CGImageGetHeight(image1.CGImage);

    int pixelsToCompare = pixelsWidth * pixelsHeight * pixelsPer1;

    // Two 1x1 bitmap contexts, each backed by a single uint32_t pixel
    uint32_t pixel1 = 0;
    uint32_t pixel2 = 0;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context1 = CGBitmapContextCreate(&pixel1, 1, 1, 8, 4, colorSpace, kCGImageAlphaNoneSkipFirst);
    CGContextRef context2 = CGBitmapContextCreate(&pixel2, 1, 1, 8, 4, colorSpace, kCGImageAlphaNoneSkipFirst);
    CGColorSpaceRelease(colorSpace);  // the contexts retain the color space

    bool isEqual = true;

    for (int i = 0; i < pixelsToCompare; i++)
    {
        int pixelX = arc4random_uniform(pixelsWidth);
        int pixelY = arc4random_uniform(pixelsHeight);

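        // Drawing the full image offset by (-pixelX, -pixelY) into the 1x1 contexts
        // leaves only the chosen pixel in pixel1 and pixel2.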
        CGContextDrawImage(context1, CGRectMake(-pixelX, -pixelY, pixelsWidth, pixelsHeight), image1.CGImage);
        CGContextDrawImage(context2, CGRectMake(-pixelX, -pixelY, pixelsWidth, pixelsHeight), image2.CGImage);

        if (pixel1 != pixel2)
        {
            isEqual = false;
            break;
        }
    }
    CGContextRelease(context1);
    CGContextRelease(context2);

    return isEqual;
}

Usage:

[self isTheImage:image1 apparentlyEqualToImage:image2 accordingToRandomPixelsPer1:0.001]; // Use a value between 0.0001 and 0.005

According to my performance tests, 0.005 (0.5% of the pixels) is the maximum value you should use. If you need more precision, just compare the whole images using this. 0.001 seems to be a safe and well-performing value. For large images (between 0.5 and 2 megapixels), I'm using 0.0001 (0.01%); it works great, it's incredibly fast, and it has never made a mistake.

Of course, the error ratio will depend on the type of images you are using. I'm using UIWebView screenshots and 0.0001 performs well, but you can probably use much less if you are comparing real photographs (in fact, you might even compare just one random pixel). If you are dealing with very similar computer-designed images, you definitely need more precision.

Note: I'm always comparing ARGB images without taking the alpha channel into account. You may need to adapt the code if that's not exactly your case.
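
If the alpha channel does matter for you, one possible adaptation (hypothetical, not part of the answer above) is to create the two 1x1 contexts with an alpha-preserving pixel format, so the existing pixel1 != pixel2 check compares alpha as well:

// Hypothetical change: keep alpha by using a premultiplied-first format
// instead of kCGImageAlphaNoneSkipFirst when creating both contexts
CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGContextRef context1 = CGBitmapContextCreate(&pixel1, 1, 1, 8, 4, rgb, kCGImageAlphaPremultipliedFirst);
CGContextRef context2 = CGBitmapContextCreate(&pixel2, 1, 1, 8, 4, rgb, kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(rgb);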

Comments

  • SolidSnake4444
    SolidSnake4444 over 3 years

    I basically want to take two images taken from the camera on the iPhone or iPad 2 and compare them to each other to see if they are pretty much the same. Obviously due to light etc the image will never be EXACTLY the same so I would like to check for around 90% compatibility.

    All the other questions like this that I saw on here were either not for iOS or were for locating objects in images. I just want to see if two images are similar.

    Thank you.

  • SolidSnake4444
    SolidSnake4444 about 13 years
    Very interesting. Two questions: in your second for loop you have the img1RGB array; is this array theoretically already filled in with the RGB values at this point? If so, how would I populate it? That answer link would probably help for the getRGBForX part. Second question: in the if statement you use the numbers 0, 1, 2... why? Are you only comparing 3 spots, and is that why the numbers aren't variables? Should those numbers stay the same for all cases?
  • aroth
    aroth about 13 years
    @SolidSnake4444 - The img1RGB/img2RGB arrays would be populated by the call to getRGBForX:andY:. They would contain the pixel channel values for the pixel at the given coordinate (i.e. an int for the red component, an int for the green, and an int for the blue). Use of the array index in the comparisons is just for conciseness. You could easily do something like int img1Red = img1RGB[0]; and int img1Green = img1RGB[1]; and so on, if you prefer.
  • SolidSnake4444
    SolidSnake4444 about 13 years
    So you're adding the three int values for the colors, then subtracting that from the sum of the values from the second image, and then checking whether it's bigger than 25? If I understood that correctly, why is it compared against 25?
  • aroth
    aroth about 13 years
    @SolidSnake4444 - I'm comparing the three int color values, not adding them. I'm taking the red, green, and blue component values from the first pixel and comparing them with the red, green, and blue values (respectively) from the second pixel. To do the comparison I do a simple subtraction, and then compare the result against 25. Why 25? Because the maximum possible difference is 255, and you said anything within 10% should count as a match. And 10% of 255 is 25. So if the difference is less than 25, the component counts as a match (within 10%), otherwise it is tallied as a difference.
  • SolidSnake4444
    SolidSnake4444 about 13 years
    Alright, thank you. I am not working on this project just yet, so I won't be able to put it to work yet, but I believe I got what you were suggesting. I'll mark your answer as the best one!
  • SolidSnake4444
    SolidSnake4444 over 10 years
    This looks promising. I'll look into this on the weekend and see if it works the way I need it. From what you are saying I think it will.
  • cprcrack
    cprcrack over 10 years
    Now that I re-read your question, I'm not sure if it will work for what you want. Even if two photographs are similar to the human eye, all of the pixels might be different. Calculating average colors sounds like a better approach.
  • SolidSnake4444
    SolidSnake4444 over 10 years
    Do you have a code example on how one would do that? In terms of getting down to this pixel or RGB level of photos I seem to have trouble grasping how to code that.
  • Saad Chaudhry
    Saad Chaudhry over 10 years
    @cprcrack I'm getting "use of undeclared identifier" for image1 and image2 on view load. Can you share some more code so that it's easier to understand?
  • Jesse Onolemen
    Jesse Onolemen over 8 years
    Where did you get the width, height and the getRGBForX from?
  • Dinesh Gurrapu
    Dinesh Gurrapu about 8 years
    In the line float totalCompares = width * height / 100.0f;, what are height and width? Can you please tell me? I am new to iOS.
  • Dinesh Gurrapu
    Dinesh Gurrapu about 8 years
    Hello, anyone there? Please help me out.
  • Omkar Jadhav
    Omkar Jadhav over 6 years
    @cprcrack I have implemented this code. The problem I am facing is that the algorithm perfectly finds similar images which came from WhatsApp or Facebook and are saved in my album, but when I take similar photos with my camera (I have 20 of them for testing) it doesn't find any of them. Also, I have to set the parameters differently for each phone (which is a wrong approach). How can I improve this code for the problem I am facing?
  • Omkar Jadhav
    Omkar Jadhav over 6 years
    @cprcrack I have noticed that your algorithm also works for smaller images, so I guess that is why it matches the images received from WhatsApp, which are already compressed. But when bigger images come into the picture (like the ones taken with the camera) it doesn't catch those.
  • TRVD1707
    TRVD1707 over 3 years
    @aroth, is it possible for one component to differ by more than 10% while the other two differ much less, so that the pixel could still be accepted as similar enough?
  • aroth
    aroth over 3 years
    @TRVD1707 - That's certainly possible, yes, although not what the example code will do. As written in the example, a ~10% discrepancy in any one component causes the pixel to count as "different". If you wanted to require 2 of 3 components or 3 of 3 components to have at least a 10% difference, that's doable with some relatively minor changes. Or you could take the average difference across all 3 components, etc.. There are lots of ways the approach might be tweaked to produce different sensitivities/behaviors.
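
As a rough illustration of that last suggestion (not part of either answer), the if statement inside Solution 1's loop could be swapped for an average-of-differences check:

// Drop-in replacement for the per-pixel test in Solution 1:
// flag the pixel only when the three channel differences average above ~10%
int diffR = abs(img1RGB[0] - img2RGB[0]);
int diffG = abs(img1RGB[1] - img2RGB[1]);
int diffB = abs(img1RGB[2] - img2RGB[2]);
if ((diffR + diffG + diffB) / 3 > 25) {
    numDifferences++;
}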