How to normalize an image in OpenCV?
Solution 1
OpenCV has a function that does exactly what you want. It's called convertTo.
cv::Mat img3;
// scale every value by 1/255 into a 32-bit float matrix; the last argument is an additive offset
img2.convertTo(img3, CV_32F, 1.0 / 255, 0);
Solution 2
To normalize a cv::Mat you can use cv::normalize. Here is some code to help you.
uchar data[] = {0, 63, 127, 255};
cv::Mat im(2, 2, CV_8UC1, data), output;
// this is what you need: 0 -> min value after normalization
//                        1 -> max value after normalization
// cv::NORM_MINMAX normalizes between the min and max values
cv::normalize(im, output, 0, 1, cv::NORM_MINMAX);
std::cout
<< im << '\n'
<< output << '\n';
//==================================================
// output
[ 0, 63;
127, 255]
[ 0, 0;
0, 1]
Author: New iOS Dev
Updated on November 17, 2020

Comments
-
New iOS Dev over 3 years
I am totally new to OpenCV and stuck at one point.
I have a grayscale image that I need to normalize. Since the image is grayscale, its cv::Mat holds a single value per index. I need to divide each value by 255.
Is there a method available for this in OpenCV with C++? I believe the operation I want is called normalization in OpenCV.
cv::Mat originalMat = [OSInference cvMatFromUIImage:imgBeforeProccessing];
cv::Mat img2;
cv::cvtColor(originalMat, img2, CV_BGR2GRAY);
cv::resize(img2, img2, cv::Size(128, 128), 0, 0, CV_INTER_CUBIC);
cv::Mat img3;
Now, how do I normalize this (that is, divide each value in the matrix by 255)?
I am converting the mat image to an iOS image as follows:
+ (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat {
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorspace;
    if (cvMat.elemSize() == 1) {
        colorspace = CGColorSpaceCreateDeviceGray();
    } else {
        colorspace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    // Create CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(cvMat.cols, cvMat.rows, 8, 8 * cvMat.elemSize(), cvMat.step[0],
                                        colorspace, kCGImageAlphaNone | kCGBitmapByteOrderDefault,
                                        provider, NULL, false, kCGRenderingIntentDefault);
    // Get UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorspace);
    return finalImage;
}
-
New iOS Dev about 7 years: Is normalization the concept I want here?
-
New iOS Dev about 7 years: When I convert my grey image, it turns into a somewhat coloured (yellowish) image. How is this possible?
-
slawekwin about 7 years: @sss I don't quite understand what you mean.
-
New iOS Dev about 7 years: I am converting the colour image to grey first and then applying the convertTo method, but after this my image turns yellowish. As I understand it, a grey image should remain grey even after convertTo.
-
slawekwin about 7 years: @sss How do you display the converted image? The resulting values are in the range <0, 1>, and cv::imshow should display it the same as before conversion.
-
New iOS Dev about 7 years: Please see my updated question. I am just calling a method to convert the image to an iOS UIImage, like this: img2.convertTo(img3, CV_32F, 1.0 / 255, 0); processedImageForModel = [OSInference UIImageFromCVMat:img3];
-
New iOS Dev about 7 years: UIImageFromCVMat is my custom method, which I wrote in the question.
-
Miki about 7 years: The answer is correct. I think the issue is that UIImageFromCVMat handles only CV_8U images, while this one is CV_32F. So, for visualization purposes, the image should be of type CV_8U with values in [0, 255].
-
slawekwin about 7 years: @sss I believe it would behave the same as the solution in my answer, but it also has to calculate the factor (1/255) by which to multiply all values itself (see docs).
-
New iOS Dev about 7 years: Should I convert CV_8U to CV_32F in my custom method? Or is there an OpenCV method to do this?
-
Miki about 7 years: @slawekwin This is not the same as your solution (and is in fact not correct). It's the same only if the input image happens to contain both 0 and 255 values; otherwise the scaling factor is not 1/255.
-
Miki about 7 years: The easiest way is to pass matrices of the correct type to UIImageFromCVMat. So you can do: Mat matToShow; img3.convertTo(matToShow, CV_8U, 255); However, this is the opposite of the operation you just did, so you can visualize img2 directly and keep img3 for further processing.
-
slawekwin about 7 years: @sss I see you use the CGImageCreate function for your conversion. Read its documentation; I think specifying the CGColorSpaceCreateDeviceGray color space and 32 for bitsPerComponent and bitsPerPixel might help.
-
New iOS Dev about 7 years: @Miki Is what I am trying to do correct, based on your information?
// ------ OpenCV preprocessing
cv::Mat originalMat = [OSInference cvMatFromUIImage:imgBeforeProccessing];
cv::Mat img2;
cv::cvtColor(originalMat, img2, CV_BGR2GRAY);
cv::resize(img2, img2, cv::Size(128, 128), 0, 0, CV_INTER_CUBIC);
cv::Mat img3;
img2.convertTo(img3, CV_32F, 1.0 / 255.0, 0);
img3.convertTo(img3, CV_8U, 255);
processedImageForModel = [OSInference UIImageFromCVMat:img3];
-
Miki about 7 years: Yes, but img3 will be exactly equal to img2. I think you can do:
cv::Mat img2;
cv::cvtColor(originalMat, img2, CV_BGR2GRAY);
cv::resize(img2, img2, cv::Size(128, 128), 0, 0, CV_INTER_CUBIC);
cv::Mat img3;
img2.convertTo(img3, CV_32F, 1.0 / 255.0, 0);
processedImageForModel = [OSInference UIImageFromCVMat:img2];
and keep img3 for further processing.
-
Miki about 7 years: However, it isn't clear why you need to scale by 1/255 at all ;)
-
New iOS Dev about 7 years: I don't have further processing; I just needed an iOS image from the cv::Mat that I am passing to the UIImageFromCVMat method.
-
Miki about 7 years: Then just:
cv::Mat img2;
cv::cvtColor(originalMat, img2, CV_BGR2GRAY);
processedImageForModel = [OSInference UIImageFromCVMat:img2];