Convert np.array of type float64 to type uint8 scaling values

Solution 1

A better way to normalize your image is to take each value and divide by the largest value that the data type can hold, rather than by the largest value actually present in the image. This ensures that an image with a small dynamic range stays dark and is not inadvertently stretched until it turns gray. For example, if your image had a dynamic range of [0-2], the code right now would scale that to intensities of [0, 128, 255]; you want these to remain small after converting to np.uint8.

Therefore, divide every value by the largest value possible for the image type, not by the largest value in the actual image. Then scale by 255 to produce the normalized result. Use numpy.iinfo, provide it the type (dtype) of the image, and you will obtain a structure of information for that type; access the max field from this structure to determine the maximum value.

So with the above, make the following modifications to your code:

import numpy as np
import cv2
[...]
info = np.iinfo(data.dtype) # Get the information of the incoming image type
data = data.astype(np.float64) / info.max # normalize the data to 0 - 1
data = 255 * data # Now scale by 255
img = data.astype(np.uint8)
cv2.imshow("Window", img)

Note that I've additionally converted the image into np.float64 in case the incoming data type is not already floating point, and to maintain floating-point precision when doing the division.
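To make the effect concrete, here is a small worked example (a sketch assuming the incoming image is np.uint16, so info.max is 65535; the array values are made up for illustration):

import numpy as np

data = np.array([[0, 32768, 65535]], dtype=np.uint16)
info = np.iinfo(data.dtype)                    # info.max == 65535 for uint16
scaled = data.astype(np.float64) / info.max    # values become 0.0, ~0.5, 1.0
img = (255 * scaled).astype(np.uint8)
print(img)                                     # [[  0 127 255]]

This matches the behaviour asked for in the question: the type's maximum (65535) maps to 255, while values that are small relative to the type's range stay small.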

Solution 2

Considering that you are using OpenCV, the best way to convert between data types is to use the normalize function.

img_n = cv2.normalize(src=img, dst=None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
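Note that NORM_MINMAX stretches the actual minimum and maximum found in the image to alpha and beta (here 0 and 255), so the output always spans the full 8-bit range. This is a different design choice from Solution 1, which scales relative to the maximum of the data type and therefore preserves the original dynamic range; pick whichever behaviour matches what you need.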

However, if you don't want to use OpenCV, you can do the same in NumPy:

def convert(img, target_type_min, target_type_max, target_type):
    # Linearly map the image's actual [min, max] onto the target range
    imin = img.min()
    imax = img.max()

    a = (target_type_max - target_type_min) / (imax - imin)
    b = target_type_max - a * imax
    new_img = (a * img + b).astype(target_type)
    return new_img

And then use it like this:

imgu8 = convert(img16u, 0, 255, np.uint8)
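Tying this back to the question, a minimal end-to-end sketch (assuming data is the float64 array from the question and convert is the function defined above) could look like:

import numpy as np
import cv2
[...]
img8 = convert(data, 0, 255, np.uint8)        # float64 -> uint8
detector = cv2.SimpleBlobDetector_create()    # SimpleBlobDetector needs 8-bit input
keypoints = detector.detect(img8)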

This is based on an answer I found in the comments under this Cross Validated solution: https://stats.stackexchange.com/a/70808/277040

Solution 3

You can use skimage.img_as_ubyte(yourdata); it will rescale your NumPy array to the range 0-255.

from skimage import img_as_ubyte

img = img_as_ubyte(data)
cv2.imshow("Window", img)

Updated on November 27, 2020

Comments

  • decadenza (over 3 years ago)

    I have a np.array data which represents a grayscale image. I need to use SimpleBlobDetector(), which unfortunately only accepts 8-bit images, so I need to convert this image, obviously with some quality loss.

    I've already tried:

    import numpy as np
    import cv2
    [...]
    data = data / data.max() #normalizes data in range 0 - 255
    data = 255 * data
    img = data.astype(np.uint8)
    cv2.imshow("Window", img)
    

    But cv2.imshow is not showing the image as expected; it appears with strange distortion...

    In the end, I only need to convert a np.float64 to np.uint8, scaling all the values and truncating the rest, e.g. 65535 becomes 255, 65534 becomes 254 and so on... Any help?

    Thanks.