How do I release a CGImageRef in iOS


Solution 1

Your memory issue results from the copied data, as others have stated. But here's another idea: Use Core Graphics's optimized pixel interpolation to calculate the average.

  1. Create a 1x1 bitmap context.
  2. Set the interpolation quality to medium (see later).
  3. Draw your image scaled down to exactly this one pixel.
  4. Read the RGB value from the context's buffer.
  5. (Release the context, of course.)
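In code, these steps might look roughly like this. This is only a minimal sketch, assuming an opaque UIImage named image and an RGBA buffer layout (kCGImageAlphaPremultipliedLast); Solution 3 below shows a complete method built on the same idea.

CGImageRef imageRef = [image CGImage];
uint8_t rgba[4] = {0, 0, 0, 0};
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

// Step 1: a 1x1 bitmap context backed by the 4-byte buffer above (R, G, B, A in memory).
CGContextRef context = CGBitmapContextCreate(rgba, 1, 1, 8, 4, colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// Step 2: medium interpolation quality averages the source pixels.
CGContextSetInterpolationQuality(context, kCGInterpolationMedium);

// Step 3: draw the whole image scaled down into that single pixel.
CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), imageRef);

// Step 5: release the context (the buffer is ours, so it stays valid).
CGContextRelease(context);

// Step 4: rgba[0..2] now hold the averaged (alpha-premultiplied) R, G, B values.
UIColor *average = [UIColor colorWithRed:rgba[0] / 255.0
                                   green:rgba[1] / 255.0
                                    blue:rgba[2] / 255.0
                                   alpha:1.0];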

This might result in better performance because Core Graphics is highly optimized and might even use the GPU for the downscaling.

Testing showed that medium quality seems to interpolate pixels by taking the average of color values. That's what we want here.

Worth a try, at least.

Edit: OK, this idea seemed too interesting not to try. So here's an example project showing the difference. The measurements below were taken with the contained 512x512 test image, but you can change the image if you want.

It takes about 12.2 ms to calculate the average by iterating over all pixels in the image data. The draw-to-one-pixel approach takes 3 ms, so it's roughly 4 times faster. It seems to produce the same results when using kCGInterpolationMedium.

I assume that the huge performance gain results from Quartz noticing that it does not have to decompress the JPEG fully but can use only the lower-frequency parts of the DCT. That's an interesting optimization strategy when compositing JPEG-compressed pixels at a scale below 0.5. But I'm only guessing here.

Interestingly, when using your method, 70% of the time is spent in CGDataProviderCopyData and only 30% in the pixel data traversal. This hints at a lot of time being spent in JPEG decompression.

(Screenshots from the example project: the pixel-iterating approach and the draw-to-one-pixel approach.)

Note: Here's a late follow-up on the example image above.

Solution 2

You don't own the CGImageRef rawImageRef because you obtain it using [image CGImage], so you must not release it; over-releasing it is what causes the analyzer warning and the EXC_BAD_ACCESS crash in your Option 2.

However, you do own the CFDataRef returned by CGDataProviderCopyData (functions with "Copy" in the name transfer ownership to you), so you must keep a reference to it and release it when you're done with the pixel data.

CGDataProviderCopyData

Return Value: A new data object containing a copy of the provider’s data. You are responsible for releasing this object.
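Applied to the code in the question, that means keeping the copied data object in a variable and releasing it once you have finished reading the pixels. A minimal sketch (the variable name pixelData is illustrative):

CGImageRef rawImageRef = [image CGImage];   // obtained via -CGImage: not owned, do not release
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(rawImageRef));   // "Copy": you own this
const UInt8 *rawPixelData = CFDataGetBytePtr(pixelData);

// ... iterate over rawPixelData exactly as in the question ...

CFRelease(pixelData);   // balances the copy; no CGImageRelease(rawImageRef) needed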

Solution 3

Your mergedColor works great on an image loaded from a file, but not on an image captured by the camera, because CGBitmapContextGetData() on a context created from a captured sample buffer doesn't return the bitmap data. I changed your code to the following. It works on any image and is as fast as your code.

- (UIColor *)mergedColor
{
    CGImageRef rawImageRef = [self CGImage];

    // Scale the image down to a single pixel.
    uint8_t bitmapData[4];
    int bitmapByteCount;
    int bitmapBytesPerRow;
    int width = 1;
    int height = 1;

    bitmapBytesPerRow = (width * 4);
    bitmapByteCount = (bitmapBytesPerRow * height);
    memset(bitmapData, 0, bitmapByteCount);

    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(bitmapData, width, height, 8, bitmapBytesPerRow,
                                                 colorspace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorspace);

    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextSetInterpolationQuality(context, kCGInterpolationMedium);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), rawImageRef);
    CGContextRelease(context);

    // With kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst the buffer is
    // laid out as BGRA, so blue is at index 0, green at index 1, and red at index 2.
    return [UIColor colorWithRed:bitmapData[2] / 255.0f
                           green:bitmapData[1] / 255.0f
                            blue:bitmapData[0] / 255.0f
                           alpha:1];
}
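Since the method calls [self CGImage], it is presumably meant to live in a UIImage category. Assuming that, usage is simply (someImage is a hypothetical UIImage, e.g. a camera capture):

UIColor *averageColor = [someImage mergedColor];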

Solution 4

I believe your issue is in this statement:

const UInt8 *rawPixelData = CFDataGetBytePtr(CGDataProviderCopyData(CGImageGetDataProvider(rawImageRef)));

You should be releasing the return value of CGDataProviderCopyData.

Solution 5

CFDataRef abgrData = CGDataProviderCopyData(CGImageGetDataProvider(rawImageRef));   // "Copy": you own the returned data
const UInt8 *rawPixelData = CFDataGetBytePtr(abgrData);

...

CFRelease(abgrData);   // release the copied data when you are done with rawPixelData

Comments

  • samuelschaefer (almost 2 years ago)

    I am writing this method to calculate the average R,G,B values of an image. The following method takes a UIImage as an input and returns an array containing the R,G,B values of the input image. I have one question though: How/Where do I properly release the CGImageRef?

    -(NSArray *)getAverageRGBValuesFromImage:(UIImage *)image
    {
        CGImageRef rawImageRef = [image CGImage];
    
        //This function returns the raw pixel values
        const UInt8 *rawPixelData = CFDataGetBytePtr(CGDataProviderCopyData(CGImageGetDataProvider(rawImageRef)));
    
        NSUInteger imageHeight = CGImageGetHeight(rawImageRef);
        NSUInteger imageWidth = CGImageGetWidth(rawImageRef);
    
        //Here I sort the R,G,B, values and get the average over the whole image
        int i = 0;
        unsigned int red = 0;
        unsigned int green = 0;
        unsigned int blue = 0;
    
        for (int column = 0; column< imageWidth; column++)
        {
            int r_temp = 0;
            int g_temp = 0;
            int b_temp = 0;
    
            for (int row = 0; row < imageHeight; row++) {
                i = (row * imageWidth + column)*4;
                r_temp += (unsigned int)rawPixelData[i];
                g_temp += (unsigned int)rawPixelData[i+1];
                b_temp += (unsigned int)rawPixelData[i+2];
    
            }
    
            red += r_temp;
            green += g_temp;
            blue += b_temp;
    
        }
    
        NSNumber *averageRed = [NSNumber numberWithFloat:(1.0*red)/(imageHeight*imageWidth)];
        NSNumber *averageGreen = [NSNumber numberWithFloat:(1.0*green)/(imageHeight*imageWidth)];
        NSNumber *averageBlue = [NSNumber numberWithFloat:(1.0*blue)/(imageHeight*imageWidth)];
    
    
        //Then I store the result in an array
        NSArray *result = [NSArray arrayWithObjects:averageRed,averageGreen,averageBlue, nil];
    
    
        return result;
    }
    

    I tried two things. Option 1: I leave it as it is, but then after a few cycles (5+) the program crashes and I get the "low memory warning" error.

    Option 2: I add one line, CGImageRelease(rawImageRef), before the method returns. Now it crashes after the second cycle, and I get an EXC_BAD_ACCESS error for the UIImage that I pass to the method. When I run Analyze (instead of Run) in Xcode, I get the following warning at this line: "Incorrect decrement of the reference count of an object that is not owned at this point by the caller".

    Where and how should I release the CGImageRef?

    Thanks!

  • Tommy (over 11 years ago)
    ... which you'd do by keeping the CFDataRef it returns and calling CFRelease on that at the appropriate moment. Be careful that the norm here is the exact opposite of the Objective-C norm; if you call CFRelease(NULL), Core Foundation will very deliberately raise an exception.
  • Sulthan (about 11 years ago)
    @Tommy Of course, as there is a big difference between nil in Obj-C and NULL in plain C.
  • Nikolai Ruhe (about 11 years ago)
    @Jonny Calculating the average for non-rectangular areas could easily be done by setting a clip (path or mask) before drawing. The clip would render some pixels transparent, which means they would not affect the final result. Tada!
  • Jonny (about 11 years ago)
    OK, I might be looking to extend this class with such a feature. I'm just trying to code a "magic wand" for selecting the face area based on a similar skin color/tone.
  • Nikolai Ruhe (about 11 years ago)
    Testing seems to show that alpha sampling done by Core Graphics does not produce the expected average. Without further digging into this topic, I'd not recommend using this approach for non-rectangular areas.
  • Daniel (about 9 years ago)
    What's the difference between Average and Merged color?? Thanks for the code!!
  • Nikolai Ruhe (about 9 years ago)
    @Daniel "Average" is just the straightforward code that walks over all pixels and computes the average. "Merged" is the proposed implementation that actually draws the image and uses the result as the average color.