How To: Perform Real-Time OpenCV Edge Detection on a WebCam Stream

I’ve started learning OpenCV, the open-source computer vision library, and I’ve got to say – it’s absolutely brilliant. In the screenshot below I’m just running the Canny edge detection filter on the live stream and outputting it in a window, with some sliders linked to the edge detection parameters. The entire thing, from initialising the webcam to displaying the frames and performing all the filtering, is a little over 100 lines of code… Amazing!

OpenCV Canny Edge Detection

It’s when I start asking OpenCV to do things that it currently doesn’t do that I’m going to start screaming, and I read that optical flow analysis has stolen the youth of many a Ph.D student… But how awesome will it be if I can make a really robust gesture recognition system and tie it into OpenGL? I guess there’s only one way to find out…

Full source code after the jump for those interested (although please don’t ask me how to build/install OpenCV and link the libraries into the project – it’ll vary depending on your system and build tools of choice).
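For anyone who just wants the gist before digging into the full listing, here’s a minimal sketch of the same idea using the OpenCV C++ API (this is not the article’s original code, and the window and trackbar names are just placeholders):

```cpp
// Minimal sketch: live Canny edge detection on a webcam stream, with
// trackbars controlling the hysteresis thresholds.
#include <opencv2/opencv.hpp>
#include <iostream>

int lowThreshold  = 50;   // Canny lower hysteresis threshold
int highThreshold = 150;  // Canny upper hysteresis threshold

int main()
{
    cv::VideoCapture capture(0);              // Open the default webcam
    if (!capture.isOpened())
    {
        std::cerr << "Could not open webcam!" << std::endl;
        return 1;
    }

    cv::namedWindow("Edges");
    cv::createTrackbar("Low",  "Edges", &lowThreshold,  255);
    cv::createTrackbar("High", "Edges", &highThreshold, 255);

    cv::Mat frame, grey, edges;
    while (true)
    {
        capture >> frame;                     // Grab the next frame
        if (frame.empty()) break;

        cv::cvtColor(frame, grey, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(grey, grey, cv::Size(5, 5), 1.5);  // Smooth to reduce noise before Canny
        cv::Canny(grey, edges, lowThreshold, highThreshold);

        cv::imshow("Edges", edges);
        if (cv::waitKey(30) == 27) break;     // Quit on Esc
    }
    return 0;
}
```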

16 thoughts on “How To: Perform Real-Time OpenCV Edge Detection on a WebCam Stream”

  1. Thank you, your bunch of code turned out useful for me… and you are also really pretty… ^^ If you want to visit Italy for free… let me know.

  2. Hi,

    I tried your code and it’s very nice, but in my own code I am trying to get the camera Brightness and Contrast, as you print them, and I am getting 0 for all the values. How are you getting these values?

    1. It just works for me using the code above. Maybe your cam takes a little longer to initialise, so it hasn’t processed the first frame before you try to get the image properties… Try putting the following in the main loop and see whether they still come out as 0:
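      (The exact snippet isn’t reproduced in this copy; as a rough sketch, using the legacy C API and assuming the capture handle is a CvCapture* named capture, it would be along these lines:)

      ```cpp
      // Print the capture properties each frame to see whether they ever become non-zero
      // (assumes <iostream> is included and 'capture' is your CvCapture*)
      std::cout << "Width:      " << cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH)  << std::endl;
      std::cout << "Height:     " << cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT) << std::endl;
      std::cout << "Brightness: " << cvGetCaptureProperty(capture, CV_CAP_PROP_BRIGHTNESS)   << std::endl;
      std::cout << "Contrast:   " << cvGetCaptureProperty(capture, CV_CAP_PROP_CONTRAST)     << std::endl;
      ```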

      As I don’t know whether OpenCV gets these properties from the hardware or calculates them itself, another possibility is that it’s a camera-chipset related issue, and those properties simply aren’t available from your webcam’s hardware.

      Hope this helps.

      1. Hi r3dux,

        Thanks for replying. I tried as you suggested: I put all those statements in the main loop and waited for 5 minutes,
        but I still get the same result of 0. However, when I put CV_CAP_ANY in place of CV_CAP_V4L2 I do get the frame width and height, but all the remaining values come out as 0, the same as before.

        1. I was only thinking you’d wait a few frames, not 5 mins – but it’s all good ;)

          The only other thing to try might be to get the latest OpenCV, or build a copy from the OpenCV subversion source, or maybe try a different webcam (like a USB one instead of a built-in one).

          1. Hi,

            Yes. Previously I was using OpenCV 2.1, and now I’ve tried this program with OpenCV 2.3, but I get the same result. Right now I have a USB camera, which may be why it isn’t showing the values; tomorrow I will try with my built-in cam and let you know. Thanks a lot for replying.

  3. Good piece of work…

    I’m working on the gesture recognition you mentioned….
    Did you complete it?

    If yes, it will give me ideas to improve upon!!!
    And nice photo by the way! :)

  4. Great piece of work, and thanks for the code… thank you so much… I am working on a virtual avatar imitating human actions… there is still much work left… if you could help with my topic that would be great… anyway, thank you so much for this part!!

    1. You’re welcome! Are you using skeletal tracking for the virtual avatar? Easily done through Kinect frameworks (official MS, OpenNI, Skeltrack etc.).

  5. Hi, I read this and it’s almost 4 years old; I hope you’ve done much more in image processing since. I’m an embedded programmer, and now I’m starting computer programming with an interest in image processing. I’ve selected OpenCV and Visual Studio, and I’m trying to detect traffic lights (by shape and colour). I need your guidance. Bundles of thanks in advance. :D

    1. Creating training data for OpenCV to detect things via Haar cascades and the like is tricky – if I was going to do an image recognition project, I’d use this – Consensus-based Matching & Tracking:
      http://www.gnebehay.com/cmt/

      It’s an improved version of Zdenek Kalal’s TLD (Tracking/Learning/Detection) algorithm, created by Georg Nebehay. If you look in the comments of the above link there’s a pure C++ port of it as well, which hopefully doesn’t have lots of dependencies on other number-crunching libraries.

      Best of luck!

    1. I just don’t have the time to do it for you, sorry.

      However, I can advise – if the edge detection and Hough routines modify the frame (that is, if you can’t just output the results to another image) then you’ll first want to clone the frame you’re working on.

      Regardless, you can use the edge detection as per the article (or you can do it via an OpenGL fragment shader – see http://r3dux.org/2011/06/glsl-image-processing/).

      For the Hough filter, take a look at the OpenCV sample houghlines.cpp, for example at:
      https://github.com/Itseez/opencv/blob/master/samples/gpu/houghlines.cpp

      The OpenCV tutorials also look a lot more friendly than they used to: http://docs.opencv.org/doc/tutorials/core/table_of_content_core/table_of_content_core.html.
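      As a rough sketch of that clone-then-detect flow (a minimal example using the C++ API, not the houghlines.cpp sample itself):

      ```cpp
      // Sketch: clone each frame, run Canny, then probabilistic Hough line
      // detection, and draw the detected lines onto the clone.
      #include <opencv2/opencv.hpp>
      #include <vector>

      int main()
      {
          cv::VideoCapture capture(0);
          if (!capture.isOpened()) return 1;

          cv::Mat frame, grey, edges;
          while (true)
          {
              capture >> frame;
              if (frame.empty()) break;

              cv::Mat display = frame.clone();          // Work on a copy, leaving the original frame untouched
              cv::cvtColor(frame, grey, cv::COLOR_BGR2GRAY);
              cv::Canny(grey, edges, 50, 150);

              std::vector<cv::Vec4i> lines;
              cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 30, 10);
              for (size_t i = 0; i < lines.size(); ++i)
              {
                  cv::line(display, cv::Point(lines[i][0], lines[i][1]),
                                    cv::Point(lines[i][2], lines[i][3]),
                                    cv::Scalar(0, 0, 255), 2);   // Red lines on a BGR image
              }

              cv::imshow("Hough lines", display);
              if (cv::waitKey(30) == 27) break;         // Esc to quit
          }
          return 0;
      }
      ```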

      Have at it!
