Receiving images from the stereo camera

This post is slightly out of sequence for a first post, but I thought I'd start documenting our efforts towards the autonomous car as soon as possible, to avoid building up a massive backlog of things to report later on.

So, we are focusing on stereo vision at the moment. We have done some research on what exactly stereo vision is and the theory behind calculating disparity for a given pair of images. We have also managed to successfully install Ubuntu on the Pandaboard, which will most likely be our final development environment, though we are also investigating putting Android on it to see if we gain any performance advantages (posts on these topics will be published soon).
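As a quick refresher on where all of this is heading: for a rectified stereo pair, disparity maps directly to depth. The little sketch below captures the relationship; every number in it is made up purely for illustration, not measured from our camera.

// Depth from disparity for a rectified stereo pair: Z = f * B / d,
// where f is the focal length in pixels, B is the baseline (the
// distance between the two cameras) and d is the disparity in pixels.
// All values below are illustrative, not measured.
double f = 700.0;       // focal length, pixels (made up)
double B = 0.06;        // baseline, metres (made up)
double d = 35.0;        // disparity of a matched point, pixels
double Z = f * B / d;   // depth of that point: 1.2 metres here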

For the past few hours, I have been focusing my attention on getting two streams of images from our stereo vision camera. The camera we are currently using is the Minoru 3D Webcam.

The camera is rather cute.

The camera has a single USB connection, but presents itself as two separate cameras to the operating system (much as if you had two individual cameras connected through a USB hub).
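In practice this means OpenCV can open each sensor by its device index, just as it would two ordinary webcams. A quick probe like the sketch below shows which indices the OS has assigned; the range of indices to try is a guess and depends on what else is connected.

#include <cstdio>
#include <opencv2/opencv.hpp>

int main()
{
    // Try the first few device indices and report which ones open.
    for (int i = 0; i < 4; ++i)
    {
        cv::VideoCapture cap(i);
        std::printf("camera index %d: %s\n", i, cap.isOpened() ? "opened" : "not found");
    }
    return 0;
}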

We are going to be using OpenCV for all (or most) of our image processing needs, so that is what I have turned to for reading the input from the camera(s).

There was a surprisingly small amount of code required to get both the left and right streams showing side by side on the screen. I am not sure about the internals of the camera, or whether both images are transferred synchronously over the USB connection, so I experimented with reading the camera streams and displaying the images both sequentially and in parallel (using parallel_invoke from Microsoft's Parallel Patterns Library, part of its Concurrency Runtime).


#include <opencv2/opencv.hpp>
#include <ppl.h>    // parallel_invoke, from Microsoft's Parallel Patterns Library
#include <cstdio>

using namespace cv;
using concurrency::parallel_invoke;

Mat left, right;

/// Reading from the cameras sequentially
VideoCapture capture1(1);   // left camera
VideoCapture capture2(2);   // right camera
if (capture1.isOpened() && capture2.isOpened())
{
    while (true)
    {
        capture1 >> left;       // grab and decode the next left frame
        imshow("left", left);

        capture2 >> right;      // grab and decode the next right frame
        imshow("right", right);

        int c = waitKey(5);     // give highgui time to draw; poll the keyboard
        if (c == 'c')           // press 'c' to stop
            break;
    }
}
else
    printf("Failure in capture\n");

/// Reading from the cameras in parallel: parallel_invoke runs the two
/// lambdas as concurrent tasks. (Note: highgui calls such as imshow and
/// waitKey are not guaranteed to be thread-safe, though this worked here.)
parallel_invoke(
    [&left]()
    {
        VideoCapture capture1(1);   // left camera, opened on this task's thread
        while (true)
        {
            capture1 >> left;
            if (!left.empty())
                imshow("left", left);
            int c = waitKey(5);
            if (c == 'c') break;    // press 'c' to stop this stream
        }
    },
    [&right]()
    {
        VideoCapture capture2(2);   // right camera
        while (true)
        {
            capture2 >> right;
            if (!right.empty())
                imshow("right", right);
            int c = waitKey(5);
            if (c == 'c')
                break;              // press 'c' to stop this stream
        }
    }
);

Getting the parallel implementation to work took some tinkering, because one of the image streams would occasionally stall. At first I thought this was the OS randomly putting one of the threads to sleep for a short time, but moving the while(true) loop inside each of the parallel tasks, instead of having a single while(true) wrapped around the parallel_invoke call, fixed it. My guess is that having a never-ending loop enveloping the parallel statement caused the system to continually create and destroy threads (incurring a lot of overhead), whereas nesting the loop inside each task ensures that the two threads are created only once and all subsequent work is carried out by those threads. For contrast, a sketch of the stalling arrangement follows.
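This is roughly the arrangement that stalled (a sketch; capture1 and capture2 are assumed to have been opened beforehand, as in the sequential version above):

// The stalling arrangement: the loop wraps parallel_invoke, so a fresh
// pair of tasks is spawned and joined for every single frame.
while (true)
{
    parallel_invoke(
        [&]() { capture1 >> left;  if (!left.empty())  imshow("left",  left);  },
        [&]() { capture2 >> right; if (!right.empty()) imshow("right", right); }
    );
    int c = waitKey(5);
    if (c == 'c')
        break;
}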

In the parallel implementation, the camera streams are handled on two separate cores. I didn't do any extensive testing, but the sequential implementation showed some lag between the left and right images, with one updating slightly before the other. The parallel implementation seemed synchronised and generally a bit less jittery.
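I haven't measured this properly yet, but a rough way to quantify the lag would be to time each pair of grabs with OpenCV's tick counter, e.g. by dropping a fragment like this into the sequential loop above:

// Time one left+right grab inside the sequential loop above (a sketch).
double t0 = (double)getTickCount();
capture1 >> left;
capture2 >> right;
double elapsedMs = ((double)getTickCount() - t0) / getTickFrequency() * 1000.0;
printf("pair grabbed in %.1f ms\n", elapsedMs);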

The two output video streams.

As you can see, the two images are displaced from each other both horizontally and vertically. The horizontal displacement is good: it comes from the cameras being physically separated horizontally. I'm not sure where the vertical displacement is coming from, though, as the two cameras look level to me. This is something I will be discussing tomorrow with people who are cleverer than I am. Paying close attention, however, I notice that uncalibrated images generally exhibit a similar problem, which is then taken care of by calibrating the cameras. This is something I need to understand better before taking the next steps of calibrating the camera and finally calculating disparities in real time.
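For my own reference, the sketch below shows roughly what that step will look like in OpenCV. Every calibration value in it is a placeholder, since we haven't run cv::stereoCalibrate yet; the 6 cm baseline in particular is just a guess.

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // Placeholder intrinsics and extrinsics. In practice these come out of
    // cv::stereoCalibrate run on chessboard views from both cameras.
    Mat M1 = (Mat_<double>(3, 3) << 700, 0, 320,  0, 700, 240,  0, 0, 1);
    Mat M2 = M1.clone();                           // assume identical cameras
    Mat D1 = Mat::zeros(1, 5, CV_64F);             // no lens distortion (placeholder)
    Mat D2 = Mat::zeros(1, 5, CV_64F);
    Mat R  = Mat::eye(3, 3, CV_64F);               // relative rotation between cameras
    Mat T  = (Mat_<double>(3, 1) << -0.06, 0, 0);  // guessed 6 cm horizontal baseline

    // stereoRectify computes projections that make corresponding image rows
    // line up, which is exactly what removes the vertical displacement.
    Mat R1, R2, P1, P2, Q;
    stereoRectify(M1, D1, M2, D2, Size(640, 480), R, T, R1, R2, P1, P2, Q);
    return 0;
}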

Hassan

Update: It has just been pointed out to me that the right camera's picture is warmer than the left camera's, which looks more washed out. This is something that will need to be taken care of.
