How To: Perform Real-Time OpenCV Edge Detection on a WebCam Stream

I’ve started learning OpenCV, the open-source computer vision library, and I’ve got to say – it’s absolutely brilliant. In the screenshot below I’m just running the Canny edge-detection filter on the live stream and outputting it in a window with some sliders which link to the edge-detection parameters, and the entire thing, from initialising the webcam to displaying the frames and performing all the filtering, is a little over 100 lines of code… Amazing!

OpenCV Canny Edge Detection

It’s when I start asking OpenCV to do things that it currently doesn’t do that I’m going to start screaming, and I’ve read that optical flow analysis has stolen the youth of many a Ph.D. student… But how awesome would it be if I could make a really robust gesture recognition system and tie it into OpenGL? I guess there’s only one way to find out…

Full source code after the jump for those interested (although please don’t ask me how to build/install OpenCV and link the libraries into the project – it’ll vary depending on your system and build tools of choice).

#include <iostream>
#include "opencv/cv.h"
#include "opencv/highgui.h"

using namespace std;

// Define the IplImage pointers we're going to use as globals
IplImage* pFrame;
IplImage* pProcessedFrame;
IplImage* tempFrame;

// Slider for the low threshold value of our edge detection
int maxLowThreshold = 1024;
int lowSliderPosition = 150;

// Slider for the high threshold value of our edge detection
int maxHighThreshold = 1024;
int highSliderPosition = 250;

// Function to find the edges of a given IplImage object
IplImage* findEdges(IplImage* sourceFrame, double theLowThreshold, double theHighThreshold, double theAperture)
{
	// Convert source frame to greyscale version (tempFrame has already been initialised to use greyscale colour settings)
	cvCvtColor(sourceFrame, tempFrame, CV_RGB2GRAY);

	// Perform Canny edge finding on tempFrame, and push the result back into itself!
	cvCanny(tempFrame, tempFrame, theLowThreshold, theHighThreshold, theAperture);

	// Pass back our now processed frame!
	return tempFrame;
}

// Callback function to adjust the low threshold on slider movement
void onLowThresholdSlide(int theSliderValue)
{
	lowSliderPosition = theSliderValue;
}

// Callback function to adjust the high threshold on slider movement
void onHighThresholdSlide(int theSliderValue)
{
	highSliderPosition = theSliderValue;
}

int main(int argc, char** argv)
{
	// Create two windows
	cvNamedWindow("WebCam", CV_WINDOW_AUTOSIZE);
	cvNamedWindow("Processed WebCam", CV_WINDOW_AUTOSIZE);

	// Create the low threshold slider
	// Format: Slider name, window name, reference to variable for slider, max value of slider, callback function
	cvCreateTrackbar("Low Threshold", "Processed WebCam", &lowSliderPosition, maxLowThreshold, onLowThresholdSlide);

	// Create the high threshold slider
	cvCreateTrackbar("High Threshold", "Processed WebCam", &highSliderPosition, maxHighThreshold, onHighThresholdSlide);

	// Create a CvCapture object and start capturing data from the webcam
	CvCapture* pCapture = cvCaptureFromCAM(CV_CAP_V4L2);
	if (!pCapture)
	{
		cout << "Failed to open the webcam - bailing!" << endl;
		return -1;
	}

	// Display image properties
	cout << "Width of frame: " << cvGetCaptureProperty(pCapture, CV_CAP_PROP_FRAME_WIDTH) << endl; 		// Width of the frames in the video stream
	cout << "Height of frame: " << cvGetCaptureProperty(pCapture, CV_CAP_PROP_FRAME_HEIGHT) << endl; 	// Height of the frames in the video stream
	cout << "Image brightness: " << cvGetCaptureProperty(pCapture, CV_CAP_PROP_BRIGHTNESS) << endl; 	// Brightness of the image (only for cameras)
	cout << "Image contrast: " << cvGetCaptureProperty(pCapture, CV_CAP_PROP_CONTRAST) << endl; 		// Contrast of the image (only for cameras)
	cout << "Image saturation: " << cvGetCaptureProperty(pCapture, CV_CAP_PROP_SATURATION) << endl;		// Saturation of the image (only for cameras)
	cout << "Image hue: " << cvGetCaptureProperty(pCapture, CV_CAP_PROP_HUE) << endl;			// Hue of the image (only for cameras)

	// Grab an initial frame so we know the capture dimensions
	pFrame = cvQueryFrame(pCapture);

	// Create a greyscale frame the size of our captured image to use as our temporary working copy.
	// Note: findEdges returns tempFrame, so there's no need to allocate a separate pProcessedFrame image
	// here - doing so would just leak it as soon as the first findEdges call reassigns the pointer.
	tempFrame = cvCreateImage(cvSize(pFrame->width, pFrame->height), IPL_DEPTH_8U, 1);

	// Loop controlling vars
	char keypress;
	bool quit = false;

	while (quit == false)
	{
		// Make an image from the raw capture data
		// Note: cvQueryFrame is a combination of cvGrabFrame and cvRetrieveFrame
		pFrame = cvQueryFrame(pCapture);

		// Draw the original frame in our window
		cvShowImage("WebCam", pFrame);

		// Process the frame to find the edges
		pProcessedFrame = findEdges(pFrame, lowSliderPosition, highSliderPosition, 3);

		// Show the processed output in our other window
		cvShowImage("Processed WebCam", pProcessedFrame);

		// Wait 20 milliseconds
		keypress = cvWaitKey(20);

		// Set the flag to quit if escape was pressed
		if (keypress == 27)
		{
			quit = true;
		}

	} // End of while loop

	// Release our stream capture object to free up any resources it has been using and release any file/device handles
	// Note: frames returned by cvQueryFrame belong to the capture object, so we never release pFrame ourselves
	cvReleaseCapture(&pCapture);

	// Release our image. After the loop pProcessedFrame is just an alias for tempFrame (findEdges returns it),
	// so we release the image once through tempFrame and NULL the alias - releasing both causes errors!
	cvReleaseImage(&tempFrame);
	pProcessedFrame = NULL;

	// Destroy all windows
	cvDestroyAllWindows();

	return 0;
}

16 thoughts on “How To: Perform Real-Time OpenCV Edge Detection on a WebCam Stream”

  1. Thank you, your bunch of code turned out useful for me… and you are also really pretty… ^^ If you want to visit Italy gratis, let me know.

  2. Hi,

    I tried your code and it's very nice, but in my own code I am trying to get the camera brightness and contrast as you print them, and I am getting 0 for all the values. How are you getting these values?

    1. It just works for me using the code above. Maybe your cam takes a little longer to initialise, so it hasn’t processed the first frame before trying to get the image properties… Try putting the following in the main loop and see if they still come out as 0:

      cout << "Image brightness: " << cvGetCaptureProperty(pCapture, CV_CAP_PROP_BRIGHTNESS) << endl;
      cout << "Image contrast: " << cvGetCaptureProperty(pCapture, CV_CAP_PROP_CONTRAST) << endl;
      cout << "Image saturation: " << cvGetCaptureProperty(pCapture, CV_CAP_PROP_SATURATION) << endl;
      cout << "Image hue: " << cvGetCaptureProperty(pCapture, CV_CAP_PROP_HUE) << endl;

      As I don’t know whether OpenCV gets these properties from the hardware or calculates them itself, another possibility is that it’s a camera-chipset related issue, and those properties simply aren’t available from your webcam’s hardware.

      Hope this helps.

      1. Hi r3dux,

        Thanks for the reply. I tried as you said: I put all these statements in the main loop and waited for 5 minutes, but I still get the same result of 0. When I put CV_CAP_ANY in place of CV_CAP_V4L2 I got the frame width and height, but all the remaining values are still 0 as before.

        1. I was only thinking to wait a few frames, not 5 mins – but it’s all good ;)

          The only other thing to try might be to get the latest OpenCV, or build a copy from the OpenCV subversion source, or maybe try a different webcam (like a USB one instead of a built-in one).

          1. Hi,

            Yes, I did – previously I was using OpenCV 2.1, and now I've run this program with OpenCV 2.3, but I get the same result. Right now I have a USB camera, which may be why it's not showing values. Tomorrow I will try with my built-in cam and then I will let you know. Thanks a lot for the reply.

  3. Good piece of work…

    I'm working on the gesture recognition you mentioned…
    Did you complete it?

    If yes, it will give me ideas to improve upon!!!
    And nice photo by the way! :)

  4. Great piece of work, and thanks for the code… thank you so much! I am working on a virtual avatar imitating human action; there's still much work left. If you could help with my topic that would be great. Anyway, thank you so much for this part!!

    1. You’re welcome! Are you using skeletal tracking for the virtual avatar? Easily done through Kinect frameworks (official MS, OpenNI, Skeltrack etc.).

  5. Hey, I read this – it's almost 4 years old, and I hope you've done much more in image processing since. I'm an embedded programmer, and now that I'm starting computer programming I have an interest in image processing. I've picked OpenCV and Visual Studio, and I'm trying to detect traffic lights (by shape and colour). I need your guidance. Bundles of thanks in advance. :D

    1. Creating training data for OpenCV to detect things via Haar cascades and the like is tricky – if I were going to do an image recognition project, I’d use this – Consensus-based Matching & Tracking:

      It’s an improved version of Zdenek Kalal’s TLD (Tracking/Learning/Detection) algorithm created by Georg Nebehay. If you look in the comments of the above link there’s a pure C++ port of it as well, which hopefully doesn’t have lots of dependencies on other number-crunching libraries.

      Best of luck!

    1. I just don’t have the time to do it for you, sorry.

      However, I can advise – if the edge detection and Hough routines modify the frame (that is, you can’t just output them to another image) then you’ll first want to clone the frame you’re working on.

      Regardless, you can use the edge detection as per the article (or you can do it via an OpenGL fragment shader – see

      For the Hough filter, take a look at the OpenCV sample houghlines.cpp, for example at:

      The OpenCV tutorials also look a lot more friendly than they used to:

      Have at it!
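      As a self-contained sketch of what that houghlines.cpp sample is doing under the hood (this isn't OpenCV code, just the voting idea): every edge point (x, y) votes for all line parameterisations rho = x·cos(theta) + y·sin(theta) that pass through it, and collinear points pile their votes into the same accumulator cell.

      ```cpp
      #include <cmath>
      #include <iostream>
      #include <utility>
      #include <vector>

      // Toy Hough transform for straight lines: build a (theta, rho) accumulator
      // from a handful of points and return {votes, rho} of the best cell.
      std::pair<int, int> houghPeak(const std::vector<std::pair<int, int>>& pts)
      {
      	const double PI = std::acos(-1.0);
      	const int thetaSteps = 180;   // 1-degree resolution over [0, 180)
      	const int rhoMax = 10;        // accumulator covers rho in [-10, 10]
      	std::vector<std::vector<int>> acc(thetaSteps, std::vector<int>(2 * rhoMax + 1, 0));

      	// Each point votes once per theta, at the rho it implies
      	for (const auto& p : pts)
      		for (int t = 0; t < thetaSteps; ++t)
      		{
      			double theta = t * PI / thetaSteps;
      			int rho = (int)std::lround(p.first * std::cos(theta) + p.second * std::sin(theta));
      			if (rho >= -rhoMax && rho <= rhoMax)
      				++acc[t][rho + rhoMax];
      		}

      	// Find the most-voted accumulator cell
      	int bestT = 0, bestR = 0;
      	for (int t = 0; t < thetaSteps; ++t)
      		for (int r = 0; r < 2 * rhoMax + 1; ++r)
      			if (acc[t][r] > acc[bestT][bestR])
      			{
      				bestT = t;
      				bestR = r;
      			}
      	return { acc[bestT][bestR], bestR - rhoMax };
      }

      int main()
      {
      	// Three points on the line y = x, plus one outlier which votes elsewhere
      	auto peak = houghPeak({ {1, 1}, {2, 2}, {3, 3}, {5, 0} });
      	std::cout << peak.first << " votes at rho = " << peak.second << std::endl;
      	// prints: 3 votes at rho = 0
      	return 0;
      }
      ```

      The real OpenCV functions add edge-map preprocessing, finer bins and peak thresholding on top, but the accumulator voting is the whole trick.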
