How to reduce the number of colors in an image with OpenCV?


Solution 1

There are many ways to do it. The methods suggested by jeff7 are OK, but they have some drawbacks:

  • method 1 has parameters N and M that you must choose, and it also requires converting the image to another color space;
  • method 2 can be very slow, since it requires computing a 16.7-million-bin histogram and sorting it by frequency (to obtain the 64 most frequent values).

I like to use an algorithm based on the most significant bits of each RGB channel to convert the image to a 64-color image. If you're using C/OpenCV, you can use something like the function below.

If you're working with gray-level images I recommend using the LUT() function of OpenCV 2.3, since it is faster. There is a tutorial on how to use LUT to reduce the number of colors; see: Tutorial: How to scan images, lookup tables... However, I find it more complicated if you're working with RGB images (a minimal LUT sketch follows the C function below).

void reduceTo64Colors(IplImage *img, IplImage *img_quant) {
    int i,j;
    int height   = img->height;   
    int width    = img->width;    
    int step     = img->widthStep;

    uchar *data = (uchar *)img->imageData;
    int step2 = img_quant->widthStep;
    uchar *data2 = (uchar *)img_quant->imageData;

    for (i = 0; i < height ; i++)  {
        for (j = 0; j < width; j++)  {

          // 192 = 11000000b: the AND keeps only the 2 most significant bits of the channel
          // the shifts then place those 2-bit groups at bit positions 5-4, 3-2 and 1-0
          uchar C1 = (data[i*step+j*3+0] & 192)>>2;
          uchar C2 = (data[i*step+j*3+1] & 192)>>4;
          uchar C3 = (data[i*step+j*3+2] & 192)>>6;

          data2[i*step2+j] = C1 | C2 | C3; // merges the 2 MSBs of each channel into a 6-bit index (0-63)
        }     
    }
}
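
For the gray-level LUT approach mentioned above, here is a minimal sketch, assuming a single-channel 8-bit image and a bucket width div (the function name reduceGrayLevels is only illustrative):

#include <opencv2/core/core.hpp>

// build a 256-entry lookup table that maps every gray value to the center of
// its bucket of width 'div', then apply it to the whole image in one call
cv::Mat reduceGrayLevels(const cv::Mat& gray, int div = 64)
{
    cv::Mat table(1, 256, CV_8U);
    uchar* p = table.ptr<uchar>(0);
    for (int v = 0; v < 256; v++)
        p[v] = uchar(v / div * div + div / 2);

    cv::Mat reduced;
    cv::LUT(gray, table, reduced); // one table lookup per pixel
    return reduced;
}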

Solution 2

This subject was well covered in the book OpenCV 2 Computer Vision Application Programming Cookbook:

Chapter 2 shows a few reduction operations, one of which is demonstrated here in C++ and later in Python:

#include <iostream>
#include <vector>

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>


void colorReduce(cv::Mat& image, int div=64)
{    
    int nl = image.rows;                    // number of lines
    int nc = image.cols * image.channels(); // number of elements per line

    for (int j = 0; j < nl; j++)
    {
        // get the address of row j
        uchar* data = image.ptr<uchar>(j);

        for (int i = 0; i < nc; i++)
        {
            // process each pixel: integer division snaps the value to the start
            // of its bucket of width 'div', and adding div/2 moves it to the bucket center
            data[i] = data[i] / div * div + div / 2;
        }
    }
}

int main(int argc, char* argv[])
{   
    // Load input image (colored, 3-channel, BGR)
    cv::Mat input = cv::imread(argv[1]);
    if (input.empty())
    {
        std::cout << "!!! Failed imread()" << std::endl;
        return -1;
    } 

    colorReduce(input);

    cv::imshow("Color Reduction", input);   
    cv::imwrite("output.jpg", input);   
    cv::waitKey(0);

    return 0;
}

Below you can find the input image (left) and the output of this operation (right):

The equivalent code in Python would be the following (credits to @eliezer-bernart):

import cv2
import numpy as np

input = cv2.imread('castle.jpg')

# colorReduce()
div = 64
quantized = input // div * div + div // 2

cv2.imwrite('output.jpg', quantized)

Solution 3

You might consider K-means, yet in this case it will most likely be extremely slow. A better approach might be doing this "manually" on your own. Let's say you have image of type CV_8UC3, i.e. an image where each pixel is represented by 3 RGB values from 0 to 255 (Vec3b). You might "map" these 256 values to only 4 specific values, which would yield 4 x 4 x 4 = 64 possible colors.

I had a dataset where I needed to make sure that dark = black, light = white, and to reduce the number of colors of everything in between. This is what I did (C++):

inline uchar reduceVal(const uchar val)
{
    if (val < 64) return 0;
    if (val < 128) return 64;
    return 255;
}

void processColors(Mat& img)
{
    uchar* pixelPtr = img.data;
    for (int i = 0; i < img.rows; i++)
    {
        for (int j = 0; j < img.cols; j++)
        {
            const int pi = i*img.cols*3 + j*3; // assumes a continuous 3-channel (BGR) image
            pixelPtr[pi + 0] = reduceVal(pixelPtr[pi + 0]); // B
            pixelPtr[pi + 1] = reduceVal(pixelPtr[pi + 1]); // G
            pixelPtr[pi + 2] = reduceVal(pixelPtr[pi + 2]); // R
        }
    }
}

causing [0,64) to become 0, [64,128) -> 64 and [128,256) -> 255, yielding 3 x 3 x 3 = 27 colors:

[input image and 27-color result]

To me this seems neat, perfectly clear, and faster than anything else mentioned in the other answers.
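
For the 4 x 4 x 4 = 64-color mapping described at the start of this answer, a minimal variant of reduceVal (named reduceVal64 here purely for illustration) could keep only the two most significant bits of each channel:

inline uchar reduceVal64(const uchar val)
{
    // maps [0,64) -> 0, [64,128) -> 64, [128,192) -> 128, [192,256) -> 192,
    // i.e. 4 levels per channel and 4^3 = 64 possible colors
    return val & 0xC0; // 0xC0 = 11000000b
}

Plugged into the same processColors loop, this gives the 64-color result that the opening paragraph describes.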

You might also consider reducing these values to multiples of some number, let's say 64:

inline uchar reduceVal(const uchar val)
{
    if (val < 192) return uchar(val / 64.0 + 0.5) * 64;
    return 255;
}

which would yield a set of 5 possible values: {0, 64, 128, 192, 255}, i.e. 125 colors.

Solution 4

Here's a Python implementation of color quantization using K-Means Clustering with cv2.kmeans. The idea is to reduce the number of distinct colors in an image while preserving the color appearance of the image as much as possible. Here's the result:

Input -> Output

Code

import cv2
import numpy as np

def kmeans_color_quantization(image, clusters=8, rounds=1):
    h, w = image.shape[:2]
    samples = np.zeros([h*w,3], dtype=np.float32)
    count = 0

    for x in range(h):
        for y in range(w):
            samples[count] = image[x][y]
            count += 1

    compactness, labels, centers = cv2.kmeans(samples,
            clusters, 
            None,
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10000, 0.0001), 
            rounds, 
            cv2.KMEANS_RANDOM_CENTERS)

    centers = np.uint8(centers)
    res = centers[labels.flatten()]
    return res.reshape((image.shape))

image = cv2.imread('1.jpg')
result = kmeans_color_quantization(image, clusters=8)
cv2.imshow('result', result)
cv2.waitKey()     
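
A rough C++ equivalent of the same idea, as a sketch using cv::kmeans (the function name and default parameters simply mirror the Python version above):

#include <opencv2/core/core.hpp>

cv::Mat kmeansColorQuantization(const cv::Mat& image, int clusters = 8, int rounds = 1)
{
    // one row per pixel, one column per channel, converted to float for kmeans
    cv::Mat samples;
    image.reshape(1, image.rows * image.cols).convertTo(samples, CV_32F);

    cv::Mat labels, centers;
    cv::kmeans(samples, clusters, labels,
               cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::MAX_ITER, 10000, 0.0001),
               rounds, cv::KMEANS_RANDOM_CENTERS, centers);

    // replace each pixel by the center of the cluster it was assigned to
    centers.convertTo(centers, CV_8U);
    centers = centers.reshape(3, clusters); // clusters x 1 matrix of Vec3b centers

    cv::Mat quantized(image.size(), image.type());
    for (int i = 0; i < image.rows * image.cols; i++)
        quantized.at<cv::Vec3b>(i / image.cols, i % image.cols) =
            centers.at<cv::Vec3b>(labels.at<int>(i, 0), 0);

    return quantized;
}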

Solution 5

The answers suggested here are really good; I thought I would add my idea as well. I follow the formulation of many comments here, which note that 64 colors can be represented by 2 bits of each channel in an RGB image.

The function in the code below takes an image and the number of bits required for quantization as input. It uses bit manipulation to 'drop' the least significant bits and keep only the required number of bits. The result is a flexible method that can quantize the image to any number of bits.

#include "include\opencv\cv.h"
#include "include\opencv\highgui.h"

// quantize the image to numBits 
cv::Mat quantizeImage(const cv::Mat& inImage, int numBits)
{
    cv::Mat retImage = inImage.clone();

    uchar maskBit = 0xFF;

    // set the top numBits bits to 1 and the lower (8 - numBits) bits to 0
    maskBit = maskBit << (8 - numBits);

    for(int j = 0; j < retImage.rows; j++)
        for(int i = 0; i < retImage.cols; i++)
        {
            cv::Vec3b valVec = retImage.at<cv::Vec3b>(j, i);
            valVec[0] = valVec[0] & maskBit;
            valVec[1] = valVec[1] & maskBit;
            valVec[2] = valVec[2] & maskBit;
            retImage.at<cv::Vec3b>(j, i) = valVec;
        }

    return retImage;
}


int main ()
{
    cv::Mat inImage;
    inImage = cv::imread("testImage.jpg");
    char buffer[30];
    for(int i = 1; i <= 8; i++)
    {
        cv::Mat quantizedImage = quantizeImage(inImage, i);
        sprintf(buffer, "%d Bit Image", i);
        cv::imshow(buffer, quantizedImage);

        sprintf(buffer, "%d Bit Image.png", i);
        cv::imwrite(buffer, quantizedImage);
    }

    cv::waitKey(0);
    return 0;
}

Here is an image that is used in the above function call:


Image quantized to 2 bits for each RGB channel (Total 64 Colors):


3 bits for each channel:


4 bits ...



Comments

  • Felipe Hummel

    I have a set of image files, and I want to reduce the number of colors of them to 64. How can I do this with OpenCV?

    I need this so I can work with a 64-bin image histogram; I'm implementing CBIR (content-based image retrieval) techniques.

    What I want is color quantization to a 4-bit palette.