Retrieving a pixel alpha value for a UIImage

Solution 1

Yes, CGContexts have their y-axis pointing up while in UIKit it points down. See the docs.
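To make a Core Graphics context use UIKit's top-left origin, one common approach is to flip the CTM before drawing. A minimal sketch, assuming `context` is your bitmap context and `height` is the bitmap height in pixels:

    // Flip the context so its origin matches UIKit's top-left convention.
    // `height` is the bitmap height in pixels (an assumed variable).
    CGContextTranslateCTM(context, 0, height);
    CGContextScaleCTM(context, 1.0, -1.0);

After this, y-coordinates measured from the top of the image land on the expected rows.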

Edit after reading code:

You also want to set the blend mode to copy before drawing the image, since you want the image's alpha value rather than whatever was in the context's buffer beforehand:

CGContextSetBlendMode(context, kCGBlendModeCopy);

Edit after thinking:

You could make the lookup much more efficient by building the smallest possible CGBitmapContext (1x1 pixel? maybe 8x8? have a try) and translating the context to your desired position before drawing:

CGContextTranslateCTM(context, xOffset, yOffset);

Solution 2

If all you want is the alpha value of a single point, all you need is an alpha-only single-point buffer. I believe this should suffice:

// assume im is a UIImage, point is the CGPoint to test
CGImageRef cgim = im.CGImage;
unsigned char pixel[1] = {0};
CGContextRef context = CGBitmapContextCreate(pixel,
                                             1, 1, 8, 1, NULL,
                                             kCGImageAlphaOnly);
CGContextDrawImage(context, CGRectMake(-point.x,
                                       -point.y,
                                       CGImageGetWidth(cgim),
                                       CGImageGetHeight(cgim)),
                   cgim);
CGContextRelease(context);
CGFloat alpha = pixel[0]/255.0;
BOOL transparent = alpha < 0.01;

If the UIImage doesn't have to be recreated every time, this is very efficient.

EDIT December 8 2011:

A commenter points out that under certain circumstances the image may be flipped. I've been thinking about this, and I'm a little sorry that I didn't write the code using the UIImage directly, like this (I think the reason is that at the time I didn't understand UIGraphicsPushContext):

// assume im is a UIImage, point is the CGPoint to test
unsigned char pixel[1] = {0};
CGContextRef context = CGBitmapContextCreate(pixel, 
                                             1, 1, 8, 1, NULL,
                                             kCGImageAlphaOnly);
UIGraphicsPushContext(context);
[im drawAtPoint:CGPointMake(-point.x, -point.y)];
UIGraphicsPopContext();
CGContextRelease(context);
CGFloat alpha = pixel[0]/255.0;
BOOL transparent = alpha < 0.01;

I think that would have solved the flipping issue.

Solution 3

Do I need to translate the co-ordinates between UIKit and Core Graphics - i.e: is the y-axis inverted?

It's possible. In CGImage, the pixel data is in English reading order: left-to-right, top-to-bottom. So, the first pixel in the array is the top-left; the second pixel is one from the left on the top row; etc.

Assuming you have that right, you should also make sure you're looking at the correct component within a pixel. Perhaps you're expecting RGBA but asking for ARGB, or vice versa. Or, maybe you have the byte order wrong (I don't know what the iPhone's endianness is).

Or have I misunderstood premultiplied alpha values?

It doesn't sound like it.

For those who don't know: Premultiplied means that the color components are premultiplied by the alpha; the alpha component is the same whether the color components are premultiplied by it or not. You can reverse this (unpremultiply) by dividing the color components by the alpha.

Solution 4

I found this question/answer while researching how to do collision detection between sprites using the alpha values of the image data, rather than a rectangular bounding box; the context is an iPhone app. I tried the one-pixel draw suggested above and still had problems getting it to work, but I found an easier way to create a CGContextRef using data from the image itself, via the helper functions here:

CGContextRef context = CGBitmapContextCreate(rawData,
                                             CGImageGetWidth(cgiRef),
                                             CGImageGetHeight(cgiRef),
                                             CGImageGetBitsPerComponent(cgiRef),
                                             CGImageGetBytesPerRow(cgiRef),
                                             CGImageGetColorSpace(cgiRef),
                                             kCGImageAlphaPremultipliedLast);

This bypasses all the ugly hardcoding in the sample above. The last value can be retrieved by calling CGImageGetBitmapInfo(), but in my case it returned a value from the image that caused an error in CGBitmapContextCreate. Only certain combinations are valid, as documented here: http://developer.apple.com/qa/qa2001/qa1037.html

Hope this is helpful!

Author by

teabot

Big Data Developer at Expedia; Tinkerer and Maker at the London Hackspace

Updated on June 10, 2022

Comments

  • teabot
    teabot almost 2 years

    I am currently trying to obtain the alpha value of a pixel in a UIImageView. I have obtained the CGImage from [UIImageView image] and created a RGBA byte array from this. Alpha is premultiplied.

    CGImageRef image = uiImage.CGImage;
    NSUInteger width = CGImageGetWidth(image);
    NSUInteger height = CGImageGetHeight(image);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    rawData = malloc(height * width * 4);
    bytesPerPixel = 4;
    bytesPerRow = bytesPerPixel * width;
    
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(
        rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace,
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big
    );
    CGColorSpaceRelease(colorSpace);
    
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGContextRelease(context);
    

    I then calculate the array index for the given alpha channel using the coordinates from the UIImageView.

    int byteIndex = (bytesPerRow * uiViewPoint.y) + uiViewPoint.x * bytesPerPixel;
    unsigned char alpha = rawData[byteIndex + 3];
    

    However I don't get the values I expect. For a completely black transparent area of the image I get non-zero values for the alpha channel. Do I need to translate the co-ordinates between UIKit and Core Graphics - i.e: is the y-axis inverted? Or have I misunderstood premultiplied alpha values?

    Update:

    @Nikolai Ruhe's suggestion was key to this. I did not in fact need to translate between UIKit coordinates and Core Graphics coordinates. However, after setting the blend mode my alpha values were what I expected:

    CGContextSetBlendMode(context, kCGBlendModeCopy);
    
  • teabot
    teabot almost 15 years
    The CGImageRef represents a static image. Once I have the byte array I throw the CGImage away and then repeatedly use the byte array for doing the alpha lookup. Would your optimization work in this scenario?
  • Nikolai Ruhe
    Nikolai Ruhe almost 15 years
    No, of course you'll need the image to draw it repeatedly. My approach could be more efficient for few lookups. If you're doing lots of lookups stick with your code.
  • mahboudz
    mahboudz over 14 years
This is helpful. Just keep in mind that your bytesPerRow may not be width * bytesPerPixel; for optimization it may be padded to 16-byte boundaries. As you traverse rawData, if you don't account for this, you'll end up reading padding bytes as pixel data.
  • mahboudz
    mahboudz over 14 years
That also means that the rawData you malloced may be too small to hold the bitmap, and there may be a buffer overrun.
  • Matthew Leffler
    Matthew Leffler over 13 years
    For some reason I had to use the following line for the CGContextDrawImage(...: CGContextDrawImage(context, CGRectMake(-point.x, -(image.size.height-point.y), CGImageGetWidth(cgim), CGImageGetHeight(cgim)), cgim);
  • matt
    matt over 13 years
    @MattLeff - That makes perfect sense if your image is flipped, which can easily happen due to the impedance mismatch between Core Graphics (where y origin is at bottom) and UIKit (where y origin is at top).
  • Joe D'Andrea
    Joe D'Andrea over 12 years
    I'm trying this out as well (and I am doing lots of lookups - one for each tap). It works beautifully on the Simulator but not on an iPhone 4. The coordinates look good (lower-left is 0,0) but the hit testing is a jumbled mess. See stackoverflow.com/questions/7506248/…
  • Joe D'Andrea
    Joe D'Andrea over 12 years
    I wonder if this is why I'm having trouble getting this to work on an iPhone, but it works fine on the simulator? stackoverflow.com/questions/7506248/…
  • Kjellski
    Kjellski over 12 years
    Couldn't this also be used to extract all the other values to create a complete UIColor?
  • Bartłomiej Semańczyk
    Bartłomiej Semańczyk over 3 years
What about a Swift solution?
  • matt
    matt over 3 years
    @BartłomiejSemańczyk Do you mean stackoverflow.com/a/52743565/341994 ?