How to use Kinect with OpenNI and OpenCV


Solution 1

AFAIK, out of the box OpenCV supports OpenNI 1.5.x. If you haven't installed OpenNI yet, do so first, in this particular order (which is important):

  1. Install OpenNI 1.5.7
  2. Install NITE (compatible with OpenNI 1.5.7)
  3. If you're using a Kinect (and not an Asus) sensor, also install Avin's SensorKinect driver

At this point you should have OpenNI installed, so go ahead and run one of the samples.
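
Before touching OpenCV at all, it can help to confirm that the OpenNI 1.5.x runtime itself works. Here is a minimal sanity check against the standard OpenNI 1.5 C++ wrapper (a sketch I'm adding for illustration, not part of the original samples):

#include <XnCppWrapper.h>
#include <cstdio>

int main()
{
    xn::Context context;
    // Initializes OpenNI; this fails if the runtime/driver stack is not installed properly
    XnStatus rc = context.Init();
    if ( rc != XN_STATUS_OK )
    {
        printf( "OpenNI init failed: %s\n", xnGetStatusString( rc ) );
        return -1;
    }
    printf( "OpenNI 1.5.x initialized correctly\n" );
    context.Release();
    return 0;
}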

The prebuilt opencv binaries aren't compiled with OpenNI support by default, so you will need to build opencv from source to enable it.

Install CMake (with its GUI) if you haven't done so already. It makes it easy to configure the opencv build process. Run the CMake GUI, browse to the opencv source folder, pick a destination directory for your build files and hit Configure.

You should see a large list of options. If you scroll through it, the OpenNI install folder should have been detected (if not, fix the path manually), and there should also be a WITH_OPENNI flag you can enable.

When you're done, press Generate, which should produce the Visual Studio project files you need to compile the opencv library.

For more details on building opencv from source on Windows, also check out the official documentation.
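
If you prefer the command line to the CMake GUI, the same configuration can be done roughly like this (a sketch only; the generator name and the source path below are placeholders for your own setup):

cmake -G "Visual Studio 10" -D WITH_OPENNI=ON C:/path/to/opencv/source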

When you're done compiling, you should have opencv built with OpenNI support and you should be able to run something as simple as:

#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"

#include <iostream>

using namespace cv;
using namespace std;

int main(){
    cout << "opening device(s)" << endl;

    VideoCapture sensor1;
    sensor1.open(CV_CAP_OPENNI);

    if( !sensor1.isOpened() ){
        cout << "Can not open capture object 1." << endl;
        return -1;
    }

    for(;;){
        Mat depth1;

        if( !sensor1.grab() ){
            cout << "Sensor1 can not grab images." << endl;
            return -1;
        }else if( sensor1.retrieve( depth1, CV_CAP_OPENNI_DEPTH_MAP ) ) imshow("depth1",depth1);

        if( waitKey( 30 ) == 27 ) break; // ESC to exit
    }
}
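
If something looks off, you can also confirm that the opencv binaries you link against were really built with OpenNI support by printing the build information and looking for the OpenNI entry; a quick sketch:

#include <opencv2/core/core.hpp>
#include <iostream>

int main()
{
    // The build information string contains an "OpenNI:" line set to YES or NO
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}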

Also see this similar answer. If you need to use OpenNI 2.x, Solution 2 below shows how to use its API directly alongside OpenCV.

Solution 2

Here is, I think, the simplest and most efficient way to use the Kinect with OpenCV.

  • You DON'T have to rebuild OpenCV with the WITH_OPENNI flag: you just need OpenNI installed (tested with the 1.3.2.1-4 version).
  • No superfluous memory copies or allocations are made: only the image headers are allocated, and the data pointers are then assigned (see the short sketch after the next paragraph).

The code retrieves the depth and the color images successively, but if you only want one of these streams, feel free not to open the other one.
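
To make the zero-copy point concrete, here is a tiny illustrative sketch (the buffer below is a hypothetical stand-in for an OpenNI frame): constructing a cv::Mat with an external data pointer only allocates the header, and no pixels are copied.

#include <opencv2/opencv.hpp>
#include <vector>
#include <cstdint>

int main()
{
    // Hypothetical stand-in for a 640x480 16-bit depth frame coming from OpenNI
    std::vector<uint16_t> frameBuffer( 640 * 480, 0 );

    // Header-only wrapper: the Mat points at frameBuffer's memory, nothing is copied
    cv::Mat depthView( 480, 640, CV_16UC1, frameBuffer.data() );

    // The view stays valid only as long as frameBuffer (or the OpenNI frame) is alive
    return 0;
}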

Here is the code using the C++ OpenCV API and Mat objects:

#include <openni2/OpenNI.h>
#include <opencv2/opencv.hpp>


using namespace openni;

int main()
{
    OpenNI::initialize();
    puts( "Kinect initialization..." );
    Device device;
    if ( device.open( openni::ANY_DEVICE ) != 0 )
    {
        puts( "Kinect not found !" ); 
        return -1;
    }
    puts( "Kinect opened" );
    VideoStream depth, color;
    color.create( device, SENSOR_COLOR );
    color.start();
    puts( "Camera ok" );
    depth.create( device, SENSOR_DEPTH );
    depth.start();
    puts( "Depth sensor ok" );
    VideoMode paramvideo;
    paramvideo.setResolution( 640, 480 );
    paramvideo.setFps( 30 );
    paramvideo.setPixelFormat( PIXEL_FORMAT_DEPTH_100_UM );
    depth.setVideoMode( paramvideo );
    paramvideo.setPixelFormat( PIXEL_FORMAT_RGB888 );
    color.setVideoMode( paramvideo );
    puts( "Réglages des flux vidéos ok" );

    // If depth/color synchronisation is not needed, startup is faster:
    device.setDepthColorSyncEnabled( false );

    // Otherwise, the streams can be synchronized and the depth map registered to the color image:
    //device.setDepthColorSyncEnabled( true );
    //device.setImageRegistrationMode( openni::IMAGE_REGISTRATION_DEPTH_TO_COLOR );

    VideoStream** stream = new VideoStream*[2];
    stream[0] = &depth;
    stream[1] = &color;
    puts( "Kinect initialization completed" );


    if ( device.getSensorInfo( SENSOR_DEPTH ) != NULL )
    {
        VideoFrameRef depthFrame, colorFrame;
        // Header-only Mats: the data pointers are filled in from the OpenNI frames below
        cv::Mat colorcv( cv::Size( 640, 480 ), CV_8UC3, NULL );
        cv::Mat depthcv( cv::Size( 640, 480 ), CV_16UC1, NULL );
        cv::namedWindow( "RGB", CV_WINDOW_AUTOSIZE );
        cv::namedWindow( "Depth", CV_WINDOW_AUTOSIZE );

        int changedIndex;
        while( device.isValid() )
        {
            OpenNI::waitForAnyStream( stream, 2, &changedIndex );
            switch ( changedIndex )
            {
                case 0:
                    depth.readFrame( &depthFrame );

                    if ( depthFrame.isValid() )
                    {
                        depthcv.data = (uchar*) depthFrame.getData();
                        cv::imshow( "Depth", depthcv );
                    }
                    break;

                case 1:
                    color.readFrame( &colorFrame );

                    if ( colorFrame.isValid() )
                    {
                        colorcv.data = (uchar*) colorFrame.getData();
                        // OpenNI delivers RGB; swap the channels so OpenCV displays them as BGR
                        cv::cvtColor( colorcv, colorcv, CV_RGB2BGR );
                        cv::imshow( "RGB", colorcv );
                    }
                    break;

                default:
                    puts( "Error retrieving a stream" );
            }
            cv::waitKey( 1 );
        }

        cv::destroyWindow( "RGB" );
        cv::destroyWindow( "Depth" );
    }
    depth.stop();
    depth.destroy();
    color.stop();
    color.destroy();
    delete[] stream;
    device.close();
    OpenNI::shutdown();
    return 0;
}

And for those who prefer to use the C API of OpenCV with IplImage structures:

#include <openni2/OpenNI.h>
#include <opencv/cv.h>
#include <opencv/highgui.h>


using namespace openni;

int main()
{
    OpenNI::initialize();
    puts( "Kinect initialization..." );
    Device device;
    if ( device.open( openni::ANY_DEVICE ) != 0 )
    {
        puts( "Kinect not found !" ); 
        return -1;
    }
    puts( "Kinect opened" );
    VideoStream depth, color;
    color.create( device, SENSOR_COLOR );
    color.start();
    puts( "Camera ok" );
    depth.create( device, SENSOR_DEPTH );
    depth.start();
    puts( "Depth sensor ok" );
    VideoMode paramvideo;
    paramvideo.setResolution( 640, 480 );
    paramvideo.setFps( 30 );
    paramvideo.setPixelFormat( PIXEL_FORMAT_DEPTH_100_UM );
    depth.setVideoMode( paramvideo );
    paramvideo.setPixelFormat( PIXEL_FORMAT_RGB888 );
    color.setVideoMode( paramvideo );
    puts( "Réglages des flux vidéos ok" );

    // If depth/color synchronisation is not needed, startup is faster:
    device.setDepthColorSyncEnabled( false );

    // Otherwise, the streams can be synchronized and the depth map registered to the color image:
    //device.setDepthColorSyncEnabled( true );
    //device.setImageRegistrationMode( openni::IMAGE_REGISTRATION_DEPTH_TO_COLOR );

    VideoStream** stream = new VideoStream*[2];
    stream[0] = &depth;
    stream[1] = &color;
    puts( "Kinect initialization completed" );


    if ( device.getSensorInfo( SENSOR_DEPTH ) != NULL )
    {
        VideoFrameRef depthFrame, colorFrame;
        // Header-only IplImages: imageData is pointed at the OpenNI frame buffers below
        IplImage* colorcv = cvCreateImageHeader( cvSize( 640, 480 ), IPL_DEPTH_8U, 3 );
        IplImage* depthcv = cvCreateImageHeader( cvSize( 640, 480 ), IPL_DEPTH_16U, 1 );
        cvNamedWindow( "RGB", CV_WINDOW_AUTOSIZE );
        cvNamedWindow( "Depth", CV_WINDOW_AUTOSIZE );

        int changedIndex;
        while( device.isValid() )
        {
            OpenNI::waitForAnyStream( stream, 2, &changedIndex );
            switch ( changedIndex )
            {
                case 0:
                    depth.readFrame( &depthFrame );

                    if ( depthFrame.isValid() )
                    {
                        depthcv->imageData = (char*) depthFrame.getData();
                        cvShowImage( "Depth", depthcv );
                    }
                    break;

                case 1:
                    color.readFrame( &colorFrame );

                    if ( colorFrame.isValid() )
                    {
                        colorcv->imageData = (char*) colorFrame.getData();
                        // OpenNI delivers RGB; swap the channels so OpenCV displays them as BGR
                        cvCvtColor( colorcv, colorcv, CV_RGB2BGR );
                        cvShowImage( "RGB", colorcv );
                    }
                    break;

                default:
                    puts( "Error retrieving a stream" );
            }
            cvWaitKey( 1 );
        }

        cvReleaseImageHeader( &colorcv );
        cvReleaseImageHeader( &depthcv );
        cvDestroyWindow( "RGB" );
        cvDestroyWindow( "Depth" );
    }
    depth.stop();
    depth.destroy();
    color.stop();
    color.destroy();
    delete[] stream;
    device.close();
    OpenNI::shutdown();
    return 0;
}

I hope this will be useful to most of you.

Enjoy!


Comments

  • Admin, almost 2 years

    For a start, I just need to capture the RGB stream and convert it into a sequence of OpenCV images. It shouldn't be so hard, but I found more than one piece of code online and none of them run on my computer. I don't know where the mistake is.

    Could you suggest a tutorial or a really simple piece of code that would help me understand how to use the Kinect libraries? In the beginning I tried the Kinect SDK; after a while I chose OpenNI.

    Help me, thx!

    PS: I'm using C++ and Visual Studio 2010.

  • dannyxyz22, over 9 years
    Since the OpenNI site has been down, here are some links to OpenNI 1.5.7 installers: github.com/JavaOpenCVBook/code/tree/master/OpenNI