How to: Convert an OpenCV cv::Mat to an OpenGL texture

I’m working on using OpenCV to get Kinect sensor data via OpenNI, and needed a way to get a matrix (cv::Mat) into an OpenGL texture – so I wrote a function to do just that – woo! Apologies in advance for the terrible juggling ;-)

The function used to perform the sensor data to texture conversion is:

// Function turn a cv::Mat into a texture, and return the texture ID as a GLuint for use
GLuint matToTexture(cv::Mat &mat, GLenum minFilter, GLenum magFilter, GLenum wrapFilter)
{
	// Generate a number for our textureID's unique handle
	GLuint textureID;
	glGenTextures(1, &textureID);
 
	// Bind to our texture handle
	glBindTexture(GL_TEXTURE_2D, textureID);
 
	// Catch silly-mistake texture interpolation method for magnification
	if (magFilter == GL_LINEAR_MIPMAP_LINEAR  ||
	    magFilter == GL_LINEAR_MIPMAP_NEAREST ||
	    magFilter == GL_NEAREST_MIPMAP_LINEAR ||
	    magFilter == GL_NEAREST_MIPMAP_NEAREST)
	{
		cout << "You can't use MIPMAPs for magnification - setting filter to GL_LINEAR" << endl;
		magFilter = GL_LINEAR;
	}
 
	// Set texture interpolation methods for minification and magnification
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, minFilter);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, magFilter);
 
	// Set texture clamping method
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrapFilter);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrapFilter);
 
	// Set incoming texture format to:
	// GL_BGR       for CV_CAP_OPENNI_BGR_IMAGE,
	// GL_LUMINANCE for CV_CAP_OPENNI_DISPARITY_MAP,
	// Work out other mappings as required ( there's a list in comments in main() )
	GLenum inputColourFormat = GL_BGR;
	if (mat.channels() == 1)
	{
		inputColourFormat = GL_LUMINANCE;
	}
 
	// Create the texture
	glTexImage2D(GL_TEXTURE_2D,     // Type of texture
	             0,                 // Pyramid level (for mip-mapping) - 0 is the top level
	             GL_RGB,            // Internal colour format to convert to
	             mat.cols,          // Image width  i.e. 640 for Kinect in standard mode
	             mat.rows,          // Image height i.e. 480 for Kinect in standard mode
	             0,                 // Border width in pixels (must be 0 in core OpenGL; legacy GL also allowed 1)
	             inputColourFormat, // Input image format (i.e. GL_RGB, GL_RGBA, GL_BGR etc.)
	             GL_UNSIGNED_BYTE,  // Image data type
	             mat.ptr());        // The actual image data itself
 
	// If we're using mipmaps then generate them. Note: This requires OpenGL 3.0 or higher
	if (minFilter == GL_LINEAR_MIPMAP_LINEAR  ||
	    minFilter == GL_LINEAR_MIPMAP_NEAREST ||
	    minFilter == GL_NEAREST_MIPMAP_LINEAR ||
	    minFilter == GL_NEAREST_MIPMAP_NEAREST)
	{
		glGenerateMipmap(GL_TEXTURE_2D);
	}
 
	return textureID;
}
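One caveat on the glTexImage2D call: OpenGL's default row alignment for uploads (GL_UNPACK_ALIGNMENT) is 4 bytes. A 640-pixel-wide BGR frame has 1920-byte rows, so the Kinect case happens to be safe, but mats of arbitrary width can upload skewed unless you set the alignment first with glPixelStorei. Picking the alignment from the row stride is pure arithmetic, so here's a stand-alone sketch (the helper name is mine; in matToTexture you would call it with mat.step before glTexImage2D):

```cpp
#include <cstddef>

// Return the largest alignment (8, 4, 2 or 1) that evenly divides a row
// stride in bytes - i.e. a value suitable for
// glPixelStorei(GL_UNPACK_ALIGNMENT, ...) before glTexImage2D, so that rows
// which aren't multiples of 4 bytes still upload without skewing.
int unpackAlignment(size_t rowBytes)
{
	if (rowBytes % 8 == 0) return 8;
	if (rowBytes % 4 == 0) return 4;
	if (rowBytes % 2 == 0) return 2;
	return 1;
}
```

For the standard 640x480 BGR Kinect frame rowBytes is 1920, which is divisible by 8, so the default alignment of 4 is already fine; a 3-channel mat 639 pixels wide (1917-byte rows) would need an alignment of 1.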

You can then use the above function like this:

// Create our capture object
cv::VideoCapture capture( CV_CAP_OPENNI );
 
// Check that we have actually opened a connection to the sensor
if( !capture.isOpened() )
{
	cout << "Cannot open capture object." << endl;
	exit(-1);
}
 
// Create our cv::Mat object
cv::Mat camFrame;
 
	// *** loop ***
 
	// Grab the device
	capture.grab();
 
	// Retrieve desired sensor data (in this case the standard camera image)
	capture.retrieve(camFrame, CV_CAP_OPENNI_BGR_IMAGE);
 
	// Convert to texture
	GLuint tex = matToTexture(camFrame, GL_NEAREST, GL_NEAREST, GL_CLAMP);
 
	// Bind texture
	glBindTexture(GL_TEXTURE_2D, tex);
 
	// Do whatever you want with the texture here...
 
	// Free the texture memory
	glDeleteTextures(1, &tex);
 
	// *** End of loop ***
 
// Release the device
capture.release();
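Note that matToTexture only distinguishes one channel from three. If you plan to feed it other mats, the channel-count-to-format mapping can be generalised along these lines. This is a sketch of mine, not part of the program above, and the GLenum typedef and constants are defined locally (values copied from the standard GL headers) only so the snippet builds on its own; in real code you'd include glew.h and use the named constants directly:

```cpp
// Stand-in typedef and constants so this sketch builds without GL headers;
// the hex values match the standard definitions in GL/gl.h and glext.h.
typedef unsigned int GLenum;
const GLenum GL_LUMINANCE = 0x1909; // single channel, e.g. the disparity map
const GLenum GL_BGR       = 0x80E0; // OpenCV's default 3-channel ordering
const GLenum GL_BGRA      = 0x80E1; // 4 channels: BGR plus alpha

// Hypothetical helper: pick a GL pixel format from cv::Mat::channels().
GLenum channelsToGLFormat(int channels)
{
	switch (channels)
	{
		case 1:  return GL_LUMINANCE;
		case 3:  return GL_BGR;
		case 4:  return GL_BGRA;
		default: return GL_BGR; // unexpected count - caller should check
	}
}
```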

There’s one very important issue to watch out for when using OpenCV and OpenNI together, which I’ve commented in the code but will repeat here as well, as it can be a real deal-breaker:

There appears to be a threading issue with the OpenCV grab() function where if you try to grab the device before it’s ready to provide the next frame it takes up to 2 seconds to provide the frame, which it might do for a little while before crashing the XnSensorServer process & then you can’t get any more frames without restarting the application. This results in horrible, stuttery framerates and garbled sensor data.

I’ve found that this can be worked around by playing an mp3 in the background. No, really. I’m guessing the threading of the mp3 player introduces some kind of latency which prevents the grab() function being called too soon. Try it if you don’t believe me!

So just be aware that if you’re using a Kinect you have to be careful with the grab() function… The source code used to create the above video is provided in full after the jump, if you’re interested.
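If you want to detect the stall rather than just put up with it, one option is to time each grab() call and warn when it blocks for too long. The wrapper below is my sketch, not part of the original program: it takes the grab as a std::function so the timing logic stays independent of OpenCV (in the loop you'd call something like timedGrab([&]{ return capture.grab(); }, 0.5, t)):

```cpp
#include <chrono>
#include <functional>
#include <iostream>

// Hypothetical guard: run a grab callback, measure how long it blocked, and
// warn when it exceeds a threshold (e.g. a stalling XnSensorServer).
bool timedGrab(const std::function<bool()> &grabFn, double warnSeconds, double &secondsTaken)
{
	auto start = std::chrono::steady_clock::now();
	bool ok = grabFn();
	secondsTaken = std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();

	if (secondsTaken > warnSeconds)
		std::cout << "Warning: grab() blocked for " << secondsTaken << "s" << std::endl;

	return ok;
}
```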

Cheers!

// cv::Mat to Texture | Jan 2012 | r3dux
// Library dependencies: OpenCV core, video and highgui libs, GL, glew, glfw
// We do NOT need to link in libOpenNI.so (it comes with building OpenCV with OpenNI support)
// Kinect with OpenCV usage guide: http://opencv.itseez.com/doc/user_guide/ug_highgui.html
 
#include <iostream>
using std::cout;
using std::endl;
 
#include "opencv.hpp"
#include "glew.h"
#include "glfw.h"
 
GLint   windowWidth  = 640;     // Define our window width
GLint   windowHeight = 480;     // Define our window height
GLfloat fieldOfView  = 45.0f;   // FoV
GLfloat zNear        = 0.1f;    // Near clip plane
GLfloat zFar         = 200.0f;  // Far clip plane
 
// Frame counting and limiting
int    frameCount = 0;
double frameStartTime, frameEndTime, frameDrawTime;
 
bool quit = false;
 
// Function turn a cv::Mat into a texture, and return the texture ID as a GLuint for use
GLuint matToTexture(cv::Mat &mat, GLenum minFilter, GLenum magFilter, GLenum wrapFilter)
{
	// Generate a number for our textureID's unique handle
	GLuint textureID;
	glGenTextures(1, &textureID);
 
	// Bind to our texture handle
	glBindTexture(GL_TEXTURE_2D, textureID);
 
	// Catch silly-mistake texture interpolation method for magnification
	if (magFilter == GL_LINEAR_MIPMAP_LINEAR  ||
	    magFilter == GL_LINEAR_MIPMAP_NEAREST ||
	    magFilter == GL_NEAREST_MIPMAP_LINEAR ||
	    magFilter == GL_NEAREST_MIPMAP_NEAREST)
	{
		cout << "You can't use MIPMAPs for magnification - setting filter to GL_LINEAR" << endl;
		magFilter = GL_LINEAR;
	}
 
	// Set texture interpolation methods for minification and magnification
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, minFilter);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, magFilter);
 
	// Set texture clamping method
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrapFilter);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrapFilter);
 
	// Set incoming texture format to:
	// GL_BGR       for CV_CAP_OPENNI_BGR_IMAGE,
	// GL_LUMINANCE for CV_CAP_OPENNI_DISPARITY_MAP,
	// Work out other mappings as required ( there's a list in comments in main() )
	GLenum inputColourFormat = GL_BGR;
	if (mat.channels() == 1)
	{
		inputColourFormat = GL_LUMINANCE;
	}
 
	// Create the texture
	glTexImage2D(GL_TEXTURE_2D,     // Type of texture
	             0,                 // Pyramid level (for mip-mapping) - 0 is the top level
	             GL_RGB,            // Internal colour format to convert to
	             mat.cols,          // Image width  i.e. 640 for Kinect in standard mode
	             mat.rows,          // Image height i.e. 480 for Kinect in standard mode
	             0,                 // Border width in pixels (must be 0 in core OpenGL; legacy GL also allowed 1)
	             inputColourFormat, // Input image format (i.e. GL_RGB, GL_RGBA, GL_BGR etc.)
	             GL_UNSIGNED_BYTE,  // Image data type
	             mat.ptr());        // The actual image data itself
 
	// If we're using mipmaps then generate them. Note: This requires OpenGL 3.0 or higher
	if (minFilter == GL_LINEAR_MIPMAP_LINEAR  ||
	    minFilter == GL_LINEAR_MIPMAP_NEAREST ||
	    minFilter == GL_NEAREST_MIPMAP_LINEAR ||
	    minFilter == GL_NEAREST_MIPMAP_NEAREST)
	{
		glGenerateMipmap(GL_TEXTURE_2D);
	}
 
	return textureID;
}
 
void draw(cv::Mat &camFrame, cv::Mat &depthFrame)
{
	// Clear the screen and depth buffer, and reset the ModelView matrix to identity
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
	glLoadIdentity();
 
	// Move things back into the screen
	glTranslatef(0.0f, 0.0f, -8.0f);
 
	// Rotate around the y-axis
	glRotatef(frameCount, 0.0f, 1.0f, 0.0f);
 
	// Rotate around the x-axis
	static float rateOfChange = 0.01f;
	static float degreesToMoveThrough = 180.0f;
	glRotatef(sin(frameCount * rateOfChange) * degreesToMoveThrough, 1.0f, 0.0f, 0.0f);
 
	// Rotate around the z-axis
	glRotatef(cos(frameCount * rateOfChange) * degreesToMoveThrough, 0.0f, 1.0f, 0.0f);
 
	glEnable(GL_TEXTURE_2D);
 
	// Quad width and height
	float w = 6.4f;
	float h = 4.8f;
 
	// Convert image and depth data to OpenGL textures
	GLuint imageTex = matToTexture(camFrame,   GL_LINEAR_MIPMAP_LINEAR,   GL_LINEAR, GL_CLAMP);
	GLuint depthTex = matToTexture(depthFrame, GL_LINEAR_MIPMAP_LINEAR, GL_LINEAR, GL_CLAMP);
 
	// Draw the textures
	// Note: Window co-ordinates origin is top left, texture co-ordinate origin is bottom left.
 
	// Front facing texture
	glBindTexture(GL_TEXTURE_2D, imageTex);
	glBegin(GL_QUADS);
		glTexCoord2f(1, 1);
		glVertex2f(-w/2,  h/2);
		glTexCoord2f(0, 1);
		glVertex2f( w/2,  h/2);
		glTexCoord2f(0, 0);
		glVertex2f( w/2, -h/2);
		glTexCoord2f(1, 0);
		glVertex2f(-w/2, -h/2);
	glEnd();
 
	// Back facing texture (facing backward because of the reversed vertex winding)
	glBindTexture(GL_TEXTURE_2D, depthTex);
	glBegin(GL_QUADS);
		glTexCoord2f(1, 1);
		glVertex2f(-w/2,  h/2);
		glTexCoord2f(1, 0);
		glVertex2f(-w/2, -h/2);
		glTexCoord2f(0, 0);
		glVertex2f( w/2, -h/2);
		glTexCoord2f(0, 1);
		glVertex2f( w/2,  h/2);
	glEnd();
 
	// Free the texture memory
	glDeleteTextures(1, &imageTex);
	glDeleteTextures(1, &depthTex);
 
	glDisable(GL_TEXTURE_2D);
}
 
void handleKeypress(int theKey, int theAction)
{
	// If a key was pressed...
	if (theAction == GLFW_PRESS)
	{
		// ...act accordingly dependant on what key it was!
		switch (theKey)
		{
			case GLFW_KEY_ESC:
				quit = true;
				break;
 
			default:
				break;
 
		} // End of switch statement
 
	} // End of GLFW_PRESS
}
 
void initGL()
{
	// Define our buffer settings
	int redBits    = 8, greenBits = 8,  blueBits    = 8;
	int alphaBits  = 8, depthBits = 24, stencilBits = 8;
 
	// Initialise glfw
	glfwInit();
 
	// Create a window
	if(!glfwOpenWindow(windowWidth, windowHeight, redBits, greenBits, blueBits, alphaBits, depthBits, stencilBits, GLFW_WINDOW))
	{
		cout << "Failed to open window!" << endl;
		glfwTerminate();
		exit(-1);
	}
 
	glfwSetWindowTitle("OpenCV/OpenNI Sensor Data to Texture | r3dux");
 
	// Specify the callback function for key presses/releases
	glfwSetKeyCallback(handleKeypress);
 
	//  Initialise glew (must occur AFTER window creation or glew will error)
	GLenum err = glewInit();
	if (GLEW_OK != err)
	{
		cout << "GLEW initialisation error: " << glewGetErrorString(err) << endl;
		exit(-1);
	}
	cout << "GLEW okay - using version: " << glewGetString(GLEW_VERSION) << endl;
 
	// Setup our viewport to be the entire size of the window
	glViewport(0, 0, (GLsizei)windowWidth, (GLsizei)windowHeight);
 
	// Change to the projection matrix and set our viewing volume
	glMatrixMode(GL_PROJECTION);
	glLoadIdentity();
 
	// The following code is a fancy bit of math that is equivalent to calling:
	// gluPerspective(fieldOfView, aspectRatio, zNear, zFar);
	// We do it this way simply to avoid requiring glu.h
	GLfloat aspectRatio = (windowWidth > windowHeight)? float(windowWidth)/float(windowHeight) : float(windowHeight)/float(windowWidth);
	GLfloat fH = tan( float(fieldOfView / 360.0f * 3.14159f) ) * zNear;
	GLfloat fW = fH * aspectRatio;
	glFrustum(-fW, fW, -fH, fH, zNear, zFar);
 
	// ----- OpenGL settings -----
 
	glDepthFunc(GL_LEQUAL);		// Specify depth function to use
 
	glEnable(GL_DEPTH_TEST);    // Enable the depth buffer
 
	glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // Ask for nicest perspective correction
 
	glEnable(GL_CULL_FACE);     // Cull back facing polygons
 
	glfwSwapInterval(1);        // Lock screen updates to vertical refresh
 
	// Switch to ModelView matrix and reset
	glMatrixMode(GL_MODELVIEW);
	glLoadIdentity();
 
	glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // Set our clear colour to black
}
 
void lockFramerate(double framerate)
{
	// Note: frameStartTime is called first thing in the main loop
 
	// Our allowed frame time is 1 second divided by the desired FPS
	static double allowedFrameTime = 1.0 / framerate;
 
	// Get current time
	frameEndTime = glfwGetTime();
 
	// Calc frame draw time
	frameDrawTime = frameEndTime - frameStartTime;
 
	double sleepTime = 0.0;
 
	// Sleep if we've got time to kill before the next frame
	if (frameDrawTime < allowedFrameTime)
	{
		sleepTime = allowedFrameTime - frameDrawTime;
		glfwSleep(sleepTime);
	}
 
	// Debug stuff
	double potentialFPS = 1.0 / frameDrawTime;
	double lockedFPS    = 1.0 / (glfwGetTime() - frameStartTime);
	cout << "Draw: " << frameDrawTime << " Sleep: " << sleepTime;
	cout << " Pot. FPS: " << potentialFPS << " Locked FPS: " << lockedFPS << endl;
}
 
int main()
{
	// Set up our OpenGL window, projection and options
	initGL();
 
	// Create our video capture using the Kinect and OpenNI
	// Note: To use the cv::VideoCapture class you MUST link in the highgui lib (libopencv_highgui.so)
	cout << "Opening Kinect device ..." << endl;
	cv::VideoCapture capture( CV_CAP_OPENNI );
 
	// Set sensor to 640x480@30Hz mode as opposed to 1024x768@15Hz mode (which is available for image sensor only!)
	// Note: CV_CAP_OPENNI_IMAGE_GENERATOR_OUTPUT_MODE = CV_CAP_OPENNI_IMAGE_GENERATOR + CV_CAP_PROP_OPENNI_OUTPUT_MODE
	capture.set( CV_CAP_OPENNI_IMAGE_GENERATOR_OUTPUT_MODE, CV_CAP_OPENNI_VGA_30HZ ); // default
	cout << "done." << endl;
 
	// Check that we have actually opened a connection to the sensor
	if( !capture.isOpened() )
	{
		cout << "Cannot open capture object." << endl;
		return -1;
	}
 
	// Create our cv::Mat objects
	cv::Mat camFrame;
	cv::Mat depthFrame;
 
	do
	{
		frameStartTime = glfwGetTime(); // Grab the time at the beginning of the frame
 
		// Grab a frame from the sensor
		// Correct procedure is to grab once per frame, then retrieve as many fields as required.
		// *****************************************************************************************
		// IMPORTANT NOTE: There appears to be a threading issue with the OpenCV grab() function
		// where if you try to grab the device before it's ready to provide the next frame it takes
		// up to 2 seconds to provide the frame, which it might do for a little while before crashing
		// the XnSensorServer process & then you can't get any more frames without restarting the
		// application. This results in horrible, stuttery framerates and garbled sensor data.
		//
		// I've found that this can be worked around by playing an mp3 in the background. No, really.
		// I'm guessing the threading of the mp3 player introduces some kind of latency which
		// prevents the grab() function being called too soon. Try it if you don't believe me!
		//
		// Config: Linux x64 LMDE, Kernel 3.1.0-5.dmz.1-liquorix-amd64, Nvidia 290.10 drivers,
		// OpenCV 2.3.2 (from git, built without TBB [same occurs with!]), openni-bin-x64-v1.5.2.23,
		// avin2-SensorKinect-git-unstable-branch-2011-01-04, NITE-bin-unstable-x64-v1.5.2.21.
		//******************************************************************************************
		if (!capture.grab())
		{
			cout << "Could not grab kinect... Skipping frame." << endl;
		}
		else
		{
			/*
			Frame retrieval formats:
				data given from depth generator:
					OPENNI_DEPTH_MAP         - depth values in mm (CV_16UC1)
					OPENNI_POINT_CLOUD_MAP   - XYZ in meters (CV_32FC3)
					OPENNI_DISPARITY_MAP     - disparity in pixels (CV_8UC1)
					OPENNI_DISPARITY_MAP_32F - disparity in pixels (CV_32FC1)
					OPENNI_VALID_DEPTH_MASK  - mask of valid pixels (not occluded, not shaded etc.) (CV_8UC1)
 
				data given from RGB image generator:
					OPENNI_BGR_IMAGE - color image (CV_8UC3)
					OPENNI_GRAY_IMAGE - gray image (CV_8UC1)
			*/
 
			// Retrieve desired sensor data
			capture.retrieve(camFrame,   CV_CAP_OPENNI_BGR_IMAGE);
			capture.retrieve(depthFrame, CV_CAP_OPENNI_DISPARITY_MAP);
 
			// Draw texture contents
			draw(camFrame, depthFrame);
 
			// Swap the active and visual pages
			glfwSwapBuffers();
		}
 
		// Quit out if the OpenGL window was closed
		if ( !glfwGetWindowParam( GLFW_OPENED) )
		{
			quit = true;
		}
 
		frameCount++;
 
		// Lock our main loop to 30fps
		lockFramerate(30.0);
 
		//if( cv::waitKey( 30 ) >= 0 )
		//break;
 
	} while (quit == false);
 
	capture.release();
 
	glfwTerminate();
 
	return 0;
}

19 thoughts on “How to: Convert an OpenCV cv::Mat to an OpenGL texture”

  1. Thanks, this is exactly what I needed. Here is a wxGLCanvas-derived object that this works in; it's not cleaned up, but it works great even when resizing etc.

    /////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
     
    .h 
     
     
    /////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
     
    // DetectGLCanvas.h
     
    #pragma once
     
    #include 
    #include 
     
     
    // OpenGL view data
    struct GLData // todo get rid of this into the class itself
    {
        bool initialized;           // has OpenGL been initialized?
        float beginx, beginy;       // position of mouse
       // float quat[4];              // orientation of object //todo remove this entire struct 
       // float zoom;                 // field of view in degrees
    };
     
    namespace cv{		// Forward declaration of the Mat object within the cv namespace
    	class Mat;
    	class VideoCapture;
    	namespace gpu{		
    		class GpuMat;		// Forward declaration of the GpuMat object within the cv::gpu namespace
    	}
    }/*
    namespace cv{	
     
    }*/
    class DetectGLCanvas : public wxGLCanvas
    {
    public:
        DetectGLCanvas(wxWindow* parent, 
    	wxWindowID id = wxID_ANY,
    const wxPoint& pos = wxDefaultPosition,
    const wxSize& size = wxDefaultSize, 
	long style = 0,
    const wxString& name = wxT("DetectGLCanvas"),
	int* attributes = (int*)0 );
    	virtual ~DetectGLCanvas();
    	void Draw(); // Draw(cv::Mat*) todo
    	cv::Mat* screenImage;
    	cv::gpu::GpuMat* screenImageGPU;
    	cv::VideoCapture* cap;
    	bool flipVertical;
     
    protected:
        void OnPaint(wxPaintEvent& event);
        void OnSize(wxSizeEvent& event);
        void OnEraseBackground(wxEraseEvent& event);
        void OnMouse(wxMouseEvent& event);
    	void OnIdle(wxIdleEvent& event);
    private:
        void InitGL();
        void ResetProjectionMode();
        wxGLContext* m_glRC;
        GLData       m_gldata;
    	GLuint matToTexture(cv::Mat &mat, GLenum minFilter, GLenum magFilter, GLenum wrapFilter);
    	GLuint matToTexture(cv::gpu::GpuMat &mat, GLenum minFilter, GLenum magFilter, GLenum wrapFilter);
        wxDECLARE_NO_COPY_CLASS(DetectGLCanvas);
        DECLARE_EVENT_TABLE()
    	bool hasCUDA;
     
    };
     
     
    /////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
     
    .cpp:
     
     
    /////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
     
    // DetectGLCanvas.cpp
     
    #pragma once
     
    #include  // This needs to be included before GL/gl.h and that is included in App.h along with wx/glcanvas.h
    #include "detect_gl_canvas.h"
    #include "App.h"
     
    #include  //this is included in glew so it doesnt need to be here soon
     
    //#include "glfw.h"
     
    #include 
    #include 
    #include 
    //
    #include 
     
    //
    //#include 
    //#include 
    //#include 
    //#include 
    //#include  
     
    #include 
    #include  //todo remove later
     
    BEGIN_EVENT_TABLE(DetectGLCanvas, wxGLCanvas)
        EVT_SIZE(DetectGLCanvas::OnSize)
        EVT_PAINT(DetectGLCanvas::OnPaint)
        EVT_ERASE_BACKGROUND(DetectGLCanvas::OnEraseBackground)
        EVT_MOUSE_EVENTS(DetectGLCanvas::OnMouse)
    	EVT_IDLE(DetectGLCanvas::OnIdle)
    END_EVENT_TABLE()
     
    // Constructor
    DetectGLCanvas::DetectGLCanvas(wxWindow *parent,
    							wxWindowID id,
                           const wxPoint& pos,
                           const wxSize& size,
                           long style,
                           const wxString& name,
    						   int* attributes) : wxGLCanvas(parent, this, id, pos, size, style|wxFULL_REPAINT_ON_RESIZE, name, attributes)
    {
        // Explicitly create a new rendering context instance for this canvas.
        m_glRC = new wxGLContext(this);
        m_gldata.initialized = false;
        m_gldata.beginx = 0.0f;
        m_gldata.beginy = 0.0f;
        //m_gldata.zoom   = 0.0f;
     
    	//if (cv::gpu::getCudaEnabledDeviceCount() == 0)
     //   {
    	//	// No GPU found or the library is compiled without GPU support
     //       hasCUDA = true;
    	//	//wxMessageBox(_("has CUDA!")); // works
     //   }
    	// else
    	// {
    	//	 hasCUDA = false;
    	//	 
    	// }
     
    	 //if (!hasCUDA) //turn the orders around to thacv
    	 //{
     
     
    	                                      //cap = new cv::VideoCapture(0); // open the default camera
     //   if(cap->isOpened())  // check if we succeeded
    	//{
    	//	
     
    	//	if (!cap->grab())
    	//	{
    	//		//cout << "Could not grab kinect... Skipping frame." << endl;
    	//		//cap->retrieve(*screenImage,   CV_CAP_OPENNI_BGR_IMAGE);
    	//		//capture.retrieve(depthFrame, CV_CAP_OPENNI_DISPARITY_MAP);
     //
    	//		// Draw texture contents
    	//		//draw(camFrame, depthFrame);
     //
    	//		// Swap the active and visual pages
    	//		//glfwSwapBuffers();
    	//	}
     
    	//	//cv::Mat edges;
    	//	//cv::namedWindow("edges",1);
    	//	//for(;;)
    	//	//{
    	//	//	cv::Mat screenImage;
    	//	//	cap >> screenImage; // get a new frame from camera
     
     
     
    	//	//	//cv::cvtColor(screenImage, edges, CV_BGR2GRAY);
    	//	//	/*cv::GaussianBlur(edges, edges, cv::Size(7,7), 1.5, 1.5);
    	//	//	cv::Canny(edges, edges, 0, 30, 3);*/
    	//	//	// cv::imshow("edges", edges);
    	//	//	// if(cv::waitKey(10) >= 0) break;
     
     
    	//	//}
    	//}
    	//else
    	//{
    		try 
    		{
    			screenImage = new cv::Mat(cv::imread("..\\VidEditor4\\buttonicons\\sunlight-through-the-woods.jpg", 1));
    		}
    		catch( char* str ) 
    		{
     
    		}
     
    	//}
     
     
    		// CV_ASSERT(!screenImage->empty());
    		flipVertical = true;
    		if (flipVertical)
    		{
    			cv::flip(*screenImage, *screenImage, 0);
    		}
    		cv::Size oldSize(screenImage->cols,screenImage->rows);
    		cv::Size newSize(1024, 1024);
    		cv::resize(*screenImage, *screenImage, newSize, 0, 0, cv::INTER_LINEAR);
     
    	 //}
    	 //else
    	 //{
    		// screenImageGPU = new cv::gpu::GpuMat(cv::imread("..\\VidEditor4\\buttonicons\\woods.png", 1));
    		////*screenImageGPU = cv::imread("..\\VidEditor4\\buttonicons\\woods.png", 1); //works
    		//// check if the image loaded at all 
    		//// todo pad the c::Mat so that it can be passed with dimensions in powers of two for diaplay as a opengltexture 
    		//int swidth = screenImageGPU->rows;
    		//int sheight = screenImageGPU->cols;
    		//int newWidth, newHeight;
    		//cv::gpu::resize(*screenImageGPU, *screenImageGPU, screenImageGPU->size(), 0, 0, 1);  // tweak final param
    		//bool flipVertical = true;
    		//if (flipVertical)
    		//{
    		//	cv::gpu::flip(*screenImageGPU, *screenImageGPU, 0);
    		//}
    	 //} //that was easy
     
    }
    DetectGLCanvas::~DetectGLCanvas()
    {
        delete m_glRC;
    	screenImage->release();
    }
     
    #pragma region DetectGLCanvas event handlers
    void DetectGLCanvas::OnPaint( wxPaintEvent& WXUNUSED(event) )
    {	// todo this is so screwed up
        if (!IsShownOnScreen())
    	{
            return;
    	}
        //wxPaintDC dc(this); 
        SetCurrent(*m_glRC);
    	wxPaintDC(this); // Must always be here
        // Initialize OpenGL
        if (!m_gldata.initialized)
        {
            InitGL();
            m_gldata.initialized = true;
        }
        glClearColor( 0.0f, 0.0f, 0.0f, 1.0f );
        glClear( GL_COLOR_BUFFER_BIT);
        glLoadIdentity();
        Draw();
    }
    void DetectGLCanvas::OnSize(wxSizeEvent& WXUNUSED(event))
    {
        ResetProjectionMode();
    	Refresh(false);
    }
    void DetectGLCanvas::OnIdle(wxIdleEvent& event)
    {
    	 if ( !IsShownOnScreen() )
    	 {
            return;
    	 }
    	//Refresh(false); // We dont need continuous rendering in this window
    }
    void DetectGLCanvas::OnMouse(wxMouseEvent& event)
    {
    	// mouse event is working !
        if (event.Dragging())
        {
     
            Refresh(false);
        }
        m_gldata.beginx = event.GetX();
        m_gldata.beginy = event.GetY();
    }
    #pragma endregion DetectGLCanvas event handlers
     
    #pragma region OpenGl related functions
    void DetectGLCanvas::InitGL()
    {
    	SetCurrent(*m_glRC); //todo need to figure out where this is supposed to be
    	ResetProjectionMode();
    }
    void DetectGLCanvas::Draw() // Draw(cv::Mat*) todo or not do but over load it with cv::mat::GpuMat 
    {
    	SetCurrent(*m_glRC); // Necessary if there are many GLCanvases
    	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    	glLoadIdentity();
    	glEnable(GL_TEXTURE_2D);
    	// Convert image and depth data to OpenGL textures
    	GLuint imageTex;
    	//if(hasCUDA)
    	//{
    	//	imageTex = matToTexture(*screenImageGPU, GL_LINEAR, GL_LINEAR, GL_CLAMP_TO_EDGE); //GL_LINEAR, GL_LINEAR, GL_CLAMP_TO_EDGE); // out of size pics cause tearing
    	//}
    	//else
    	//{
    		imageTex = matToTexture(*screenImage, GL_LINEAR, GL_LINEAR, GL_CLAMP_TO_EDGE); //GL_LINEAR, GL_LINEAR, GL_CLAMP_TO_EDGE); // out of size pics cause tearing
    	//}
    	// took out the depth data parts
     
    	// todo scale the pic to the size of the window here using the image size and the screen size 
    	glBindTexture(GL_TEXTURE_2D, imageTex);
    	//todo add the flip vertical toggle here or earlier when its a cv::Mat
    	int w, h;
        GetClientSize(&w, &h);
        float aspect = static_cast<float>(h)/static_cast<float>(w);
    	//get the cv::Mat size...todo
    	//if ....todo 
    	glBegin(GL_QUADS);
    		glTexCoord2f(1, 1);
    		glVertex2f((GLfloat)aspect, (GLfloat)aspect); // todo this is still not perfect need to pad out of size pics and still account for this
    		glTexCoord2f(0, 1);
    		glVertex2f((GLfloat)-aspect, (GLfloat)aspect); // todo use GLint instead glVertex2i()
    		glTexCoord2f(0, 0);
    		glVertex2f((GLfloat)-aspect, (GLfloat)-aspect);
    		glTexCoord2f(1, 0);
    		glVertex2f((GLfloat)aspect, (GLfloat)-aspect);
    	glEnd();
    	// Free the texture memory
    	glDisable(GL_TEXTURE_2D);
    	glDeleteTextures(1, &imageTex);
        glFlush();
        SwapBuffers();
    }
    void DetectGLCanvas::ResetProjectionMode()
    {
        if (!IsShownOnScreen())
    	{
            return;
    	}
        wxGLCanvas::SetCurrent(*m_glRC); // Necessary when there is more than one wxGLCanvas or more than one wxGLContext in the application.
        int w, h;
        GetClientSize(&w, &h);
        float aspect = static_cast<float>(h)/static_cast<float>(w);
    	glViewport(0, 0, (GLsizei)w, (GLsizei)h);
    	glClearColor(0.0f, 0.0f, 0.0f, 0.0f); // Black background 
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);	 
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();	
    	glOrtho(-1*aspect, 1*aspect, -1*aspect, 1*aspect,-1, 1);	
    	glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
    }
     
    void DetectGLCanvas::OnEraseBackground(wxEraseEvent& WXUNUSED(event))
    {
        // Do nothing, to avoid flashing on MSW
    }
    #pragma endregion OpenGl related functions
     
    #pragma region cv::Mat to gltexture
    // Function turn a cv::Mat into a texture, and return the texture ID as a GLuint for use
    GLuint DetectGLCanvas::matToTexture(cv::Mat &mat, GLenum minFilter, GLenum magFilter, GLenum wrapFilter)
    {
    	// Generate a number for our textureID's unique handle
    	GLuint textureID;
    	glGenTextures(1, &textureID);
    	// Bind to our texture handle
    	glBindTexture(GL_TEXTURE_2D, textureID);
    	// Catch silly-mistake texture interpolation method for magnification
    	if (magFilter == GL_LINEAR_MIPMAP_LINEAR  ||
    	    magFilter == GL_LINEAR_MIPMAP_NEAREST ||
    	    magFilter == GL_NEAREST_MIPMAP_LINEAR ||
    	    magFilter == GL_NEAREST_MIPMAP_NEAREST)
    	{
    		// You can't use MIPMAPs for magnification - setting filter to GL_LINEAR"
    		magFilter = GL_LINEAR;
    	}
    	// Set texture interpolation methods for minification and magnification
    	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, minFilter);
    	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, magFilter);
    	// Set texture clamping method
    	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrapFilter);
    	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrapFilter);
    	// Set incoming texture format to:
    	// GL_BGR       for CV_CAP_OPENNI_BGR_IMAGE,
    	// GL_LUMINANCE for CV_CAP_OPENNI_DISPARITY_MAP,
    	// Work out other mappings as required ( there's a list in comments in main() )
    	GLenum inputColourFormat = GL_BGR;
    	// todo check the format of the picture and set this appropriately; this is where it screws up for some pics...
    	if (mat.channels() == 1)
    	{
    		inputColourFormat = GL_LUMINANCE;
    	}
    	// Create the texture
    	glTexImage2D(GL_TEXTURE_2D,     // Type of texture
    	             0,                 // Pyramid level (for mip-mapping) - 0 is the top level
    	             GL_RGB,            // Internal colour format to convert to
    	             mat.cols,          // Image width  i.e. 640 for Kinect in standard mode
    	             mat.rows,          // Image height i.e. 480 for Kinect in standard mode
    	             0,                 // Border width in pixels (can either be 1 or 0)
    	             inputColourFormat, // Input image format (i.e. GL_RGB, GL_RGBA, GL_BGR etc.)
    	             GL_UNSIGNED_BYTE,  // Image data type
    	             mat.ptr());        // The actual image data itself
    	// If we're using mipmaps then generate them. Note: This requires OpenGL 3.0 or higher
    	if (minFilter == GL_LINEAR_MIPMAP_LINEAR  ||
    	    minFilter == GL_LINEAR_MIPMAP_NEAREST ||
    	    minFilter == GL_NEAREST_MIPMAP_LINEAR ||
    	    minFilter == GL_NEAREST_MIPMAP_NEAREST)
    	{
    		//glGenerateMipmap(GL_TEXTURE_2D); //todo link glew for this
    	}
    	return textureID;
    }
    //// Function turn a cv::gpu::GpuMat into a texture, and return the texture ID as a GLuint for use
    //GLuint DetectGLCanvas::matToTexture(cv::gpu::GpuMat &mat, GLenum minFilter, GLenum magFilter, GLenum wrapFilter)
    //{
    //	// Generate a number for our textureID's unique handle
    //	GLuint textureID;
    //	glGenTextures(1, &textureID);
    //	// Bind to our texture handle
    //	glBindTexture(GL_TEXTURE_2D, textureID);
    //	// Catch silly-mistake texture interpolation method for magnification
    //	if (magFilter == GL_LINEAR_MIPMAP_LINEAR  ||
    //	    magFilter == GL_LINEAR_MIPMAP_NEAREST ||
    //	    magFilter == GL_NEAREST_MIPMAP_LINEAR ||
    //	    magFilter == GL_NEAREST_MIPMAP_NEAREST)
    //	{
    //		// You can't use MIPMAPs for magnification - setting filter to GL_LINEAR"
    //		magFilter = GL_LINEAR;
    //	}
    //	// Set texture interpolation methods for minification and magnification
    //	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, minFilter);
    //	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, magFilter);
    //	// Set texture clamping method
    //	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrapFilter);
    //	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrapFilter);
    //	// Set incoming texture format to:
    //	// GL_BGR       for CV_CAP_OPENNI_BGR_IMAGE,
    //	// GL_LUMINANCE for CV_CAP_OPENNI_DISPARITY_MAP,
    //	// Work out other mappings as required ( there's a list in comments in main() )
    //	GLenum inputColourFormat = GL_BGR;
    //	// todo check the format of the picture and set this approriately this is where it screws up for some pics...
    //	if (mat.channels() == 1)
    //	{
    //		inputColourFormat = GL_LUMINANCE;
    //	}
    //	// Create the texture
    //	glTexImage2D(GL_TEXTURE_2D,     // Type of texture
    //	             0,                 // Pyramid level (for mip-mapping) - 0 is the top level
    //	             GL_RGB,            // Internal colour format to convert to
    //	             mat.cols,          // Image width  i.e. 640 for Kinect in standard mode
    //	             mat.rows,          // Image height i.e. 480 for Kinect in standard mode
    //	             0,                 // Border width in pixels (can either be 1 or 0)
    //	             inputColourFormat, // Input image format (i.e. GL_RGB, GL_RGBA, GL_BGR etc.)
    //	             GL_UNSIGNED_BYTE,  // Image data type
    //	             mat.ptr());        // The actual image data itself
    //	// If we're using mipmaps then generate them. Note: This requires OpenGL 3.0 or higher
    //	if (minFilter == GL_LINEAR_MIPMAP_LINEAR  ||
    //	    minFilter == GL_LINEAR_MIPMAP_NEAREST ||
    //	    minFilter == GL_NEAREST_MIPMAP_LINEAR ||
    //	    minFilter == GL_NEAREST_MIPMAP_NEAREST)
    //	{
    //		//glGenerateMipmap(GL_TEXTURE_2D); //todo link glew for this
    //	}
    //	return textureID;
    //}
    #pragma endregion cv::Mat to gl texture

    // Notes:

    // Good info on wxGLCanvas http://wiki.wxwidgets.org/WxGLCanvas#wxGLCanvas_on_a_wxPanel

    // cv::Mat to OpenGL texture http://r3dux.org/2012/01/how-to-convert-an-opencv-cvmat-to-an-opengl-texture/

    // adding glew to the OpenGL SDK http://openglbook.com/setting-up-opengl-glew-and-freeglut-in-visual-c/

    // iplimage Can be converted to an osg::image too http://stackoverflow.com/questions/10876706/converting-iplimage-to-osgimage

    //wxString mystring(glGetString(GL_VERSION).c_str(), wxConvUTF8);
    //char* version = (char*)glGetString(GL_VERSION);
    //wxString mystring = wxString::FromUTF8(version);
    //wxMessageBox(mystring);
    //delete version; 4.3.1 nice

  2. Hi Patrick – good work & glad you found the conversion to texture stuff useful!

    Unfortunately some of the code got “cleaned” on entry, so anything with angle brackets is gone – it’d be great if you could re-post the code and wrap it in pre tags so that it makes it through intact!

    If you put the code as follows it should work (in theory!):

    <pre lang="cpp">
        // Your code here
    <forwardslash-pre>

    If it still gets mangled maybe email me the .h and .cpp files (mail-at-r3dux-dot-org) and I can add it from this end.

    Again, thanks for the feedback and example code!

    Cheers!
    -r3dux

  3. Here are the includes in "..." style:

    // DetectGLCanvas.h

    #pragma once

    #include "wx/glcanvas.h"
    #include "wx/dcclient.h"
    //
    //
    //// OpenGL view data
    //struct GLData // todo get rid of this into the class itself
    //{
    // bool initialized; // has OpenGL been initialized?
    // float beginx, beginy; // position of mouse
    // // float quat[4]; // orientation of object //todo remove this entire struct
    // // float zoom; // field of view in degrees
    //};

    namespace cv { // Forward declaration of the Mat object within the cv namespace
    class Mat; //…

    // -----------------------------

    // DetectGLCanvas.cpp

    #pragma once

    #include "glew.h" // This needs to be included before GL/gl.h and that is included in App.h along with wx/glcanvas.h
    #include "detect_gl_canvas.h"
    #include "App.h"
    #include "GL/glu.h" // This is included in glew so it doesn't need to be here soon

    #include "opencv2/core/core.hpp"
    #include "opencv2/core/mat.hpp"
    #include "opencv2/highgui/highgui.hpp"
    #include "opencv2/imgproc/imgproc.hpp"

    #include "opencv2/contrib/contrib.hpp"
    #include "opencv2/objdetect/objdetect.hpp"
    #include "opencv2/features2d/features2d.hpp"
    //#include "opencv2/core/gpumat.hpp"
    //#include "opencv2/gpu/gpu.hpp"

    #include "wx/app.h"
    #include "wx/msgdlg.h" //todo remove later

    BEGIN_EVENT_TABLE(DetectGLCanvas, wxGLCanvas)
    EVT_SIZE(DetectGLCanvas::OnSize) //…

  4. Hi,

    I don't have a Kinect,
    I tried using my webcam and I get a white screen (no texture), only the OpenGL square.
    I just commented out these two lines in the main function:

    //capture.retrieve(camFrame, CV_CAP_OPENNI_BGR_IMAGE);
    //capture.retrieve(depthFrame, CV_CAP_OPENNI_DISPARITY_MAP);

    And changed this in the main function:

    cv::VideoCapture capture;
    capture.open(0);

    capture >> camFrame;

    // Draw texture contents
    draw(camFrame, camFrame);

    Any suggestions?

  5. Hi!
    I'm trying to implement your "matToTexture" function on my PC with an NVIDIA GeForce 560, but I get an error – glGenerateMipmap: identifier not found.
    What could the problem be?
    Thank you.

    1. If I remember correctly the glGenerateMipmap function isn’t in the core set of OpenGL commands, so you’ll need to enable an extension to use it.

      The easiest way to do this is to include GLEW in your project, and then initialise GLEW early on in your code – which will automatically make extensions available as needed. Using GLEW is simple, and there’s some example code you could look at here: Simple Texture Loading with DevIL Revisited, or of course there’s always Google =D

      Alternatively, you could just not use mipmaps by setting the minification and/or magnification filters to use something like GL_LINEAR instead of GL_LINEAR_MIPMAP_LINEAR or such, at which point you can completely remove the call to glGenerateMipmap!

      Hope this helps.

      1. Thanks for your last answer, but now I have a new problem with glGenerateMipmap:
        simpleGL.obj : error LNK2001: unresolved external symbol __imp____glewGenerateMipmap
        I included all the libraries.
        Maybe you have an idea?

  6. Hi!
    I am trying to go the other way: from OpenGL -> OpenCV.
    I am recovering an image from a RealSense with OpenGL and I want to use it and work with it in OpenCV. Any hints?
    Thanks!
