Create UIImage from NSData


imageWithData: doesn't take arbitrary data and interpret it as an uncompressed RGBA texture. It takes data containing the bytes of a recognized image format (such as JPEG, PNG, or TIFF), parses those bytes, and decompresses the image appropriately.
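For example, the distinction looks like this (a minimal sketch; the asset name comes from the question below, and somePixelBuffer / bufferLength are hypothetical placeholders):

    // PNG-encoded bytes: imageWithData: recognizes the format and can decode them.
    NSData *pngData = UIImagePNGRepresentation([UIImage imageNamed:@"ele.png"]);
    UIImage *decoded = [UIImage imageWithData:pngData];      // non-nil

    // Raw, uncompressed pixel bytes: there is no header to parse, so this returns nil.
    NSData *rawPixels = [NSData dataWithBytes:somePixelBuffer length:bufferLength];
    UIImage *broken = [UIImage imageWithData:rawPixels];     // nil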

To do what you want, you need to create a CGContext that is configured appropriately (bytes per row, pixel format, and so on), either using the memory you have already allocated as its backing storage, or letting it allocate its own storage and then copying your bytes into it. Once you do that, you can use all of the CGContext-related functionality, including the ability to create a UIImage from that context.
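A minimal sketch of that approach, assuming tightly packed 32-bit RGBA pixels with premultiplied alpha (the helper name and its parameters are illustrative, not part of any API):

    #import <UIKit/UIKit.h>

    UIImage *ImageFromRGBABytes(void *bytes, size_t width, size_t height) {
        size_t bytesPerRow = width * 4;                      // 4 bytes per RGBA pixel
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

        // Bitmap context backed directly by the caller's pixel buffer.
        CGContextRef context = CGBitmapContextCreate(bytes, width, height,
                                                     8,      // bits per component
                                                     bytesPerRow,
                                                     colorSpace,
                                                     (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);
        if (!context) return nil;

        // Snapshot the context's contents as a CGImage and wrap it in a UIImage.
        CGImageRef cgImage = CGBitmapContextCreateImage(context);
        CGContextRelease(context);

        UIImage *image = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);
        return image;
    }

CGBitmapContextCreateImage copies (or copy-on-write shares) the context's pixels, so the original buffer can be modified or freed after the UIImage has been created.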

Comments

  • Admin almost 2 years

    Below is code I copied (from this site) and modified only slightly, since the original would not compile. I want to manipulate the byte array for edge detection and, eventually, simple color changes, but first I wanted to get the basic code working. Currently the code compiles and runs: it displays a badly drawn elephant on screen, and when I touch the image, it disappears. Stepping through shows the result of imageWithData: is 0x0. I have tried this with both a PNG and a BMP and got the same result.

    Any clues as to what I am doing wrong?

    ImageViewDrawable is defined as:

    @interface ImageViewDrawable : UIImageView
    
    // I am using the following code to initialize this ImageView
    ImageViewDrawable * uiv = [[ImageViewDrawable alloc] initWithImage:[UIImage imageNamed:@"ele.png"] ];
    
    -(void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        // get and work on pixel data
        NSData* pixelData = (NSData*) CGDataProviderCopyData(CGImageGetDataProvider(self.image.CGImage));
        char* bytes = (char *)[pixelData bytes];
    
        // Loop over the 32-bit RGBA pixels (for now each channel is copied unchanged)
        for(int i = 0; i < [pixelData length]; i += 4) {
            bytes[i] = bytes[i]; // red
            bytes[i+1] = bytes[i+1]; // green
            bytes[i+2] = bytes[i+2]; // blue
            bytes[i+3] = bytes[i+3]; // alpha
        }
    
        // convert the pixel data back to a UIImage
        NSData* newPixelData = [NSData dataWithBytes:bytes length:[pixelData length]];
        UIImage * newImage = [UIImage imageWithData:newPixelData];
        [self setImage:newImage];
    }
    
  • Admin over 14 years
    So it doesn't matter that the data I am feeding in essentially came from an existing UIImage?
  • Admin over 14 years
    I see my problem. This nested pair of calls -- CGDataProviderCopyData(CGImageGetDataProvider()) -- gets the data provider and then just copies the data, which means I am getting only the pixels and none of the other configuration from the original image. So I have to recreate an image using those pixels and the CG information you referred to. Thanks again for the help.
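
    One way to do that recreation, sketched under the assumption that the modified bytes are already in newPixelData and that the view still holds the original image (pre-ARC style casts, matching the question's code):

        CGImageRef original = self.image.CGImage;
        CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)newPixelData);

        // Rebuild a CGImage from the modified pixels, reusing the original image's layout.
        CGImageRef rebuilt = CGImageCreate(CGImageGetWidth(original),
                                           CGImageGetHeight(original),
                                           CGImageGetBitsPerComponent(original),
                                           CGImageGetBitsPerPixel(original),
                                           CGImageGetBytesPerRow(original),
                                           CGImageGetColorSpace(original),
                                           CGImageGetBitmapInfo(original),
                                           provider, NULL, NO, kCGRenderingIntentDefault);
        CGDataProviderRelease(provider);

        [self setImage:[UIImage imageWithCGImage:rebuilt]];
        CGImageRelease(rebuilt);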