Rectifying images from stereo cameras

I mentioned in the earlier post about calibrating stereo cameras that the output of that process is a set of matrices. This post describes what those matrices are and how to use them to correct the images.

Description of files
The calibration process produces these files:

D1.xml D2.xml
M1.xml M2.xml
mx1.xml mx2.xml
my1.xml my2.xml
P1.xml P2.xml
R1.xml R2.xml
Q.xml

The files ending in 1 belong to camera 1, while the files ending in 2 belong to camera 2. The mx*.xml and my*.xml files are the per-camera undistortion maps; these would be used if you wanted to rectify an individual stream independently.
The D* files hold the distortion coefficients, M* the camera matrices, P* the projection matrices and R* the rectification rotation matrices, while Q.xml is the disparity-to-depth reprojection matrix. The book has a lot more information about these, including the maths behind them and how they are computed behind the scenes.
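
As a quick sketch of getting these back into a program, the matrices can be read in with cv::FileStorage, and the per-camera maps can be fed straight to remap() for a single stream. The loadMat() helper and the node names inside each XML file ("M1", "mx1" and so on) are my own assumptions here, so check your files for the actual tags:

#include <opencv2/opencv.hpp>
#include <string>
using namespace cv;

//Read one named matrix back from a calibration XML file.
//The node name is assumed to match the file name; check the XML for the actual tag.
static Mat loadMat(const std::string& file, const std::string& node)
{
    Mat m;
    FileStorage fs(file, FileStorage::READ);
    fs[node] >> m;
    return m;
}

Mat M1  = loadMat("M1.xml",  "M1");
Mat D1  = loadMat("D1.xml",  "D1");
Mat mx1 = loadMat("mx1.xml", "mx1");
Mat my1 = loadMat("my1.xml", "my1");

//Undistorting a single stream on its own with the precomputed per-camera maps
Mat frame, undistorted;
/* ...acquire frame from camera 1... */
remap(frame, undistorted, mx1, my1, INTER_LINEAR);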

Usage
While using the files turned out to be really simple, it took me quite a long time of rooting around in the documentation to work out what to do.

#include <opencv2/opencv.hpp>
using namespace cv;

Mat left, right; //Matrices for storing the input images

Mat M1, D1, R1, P1, M2, D2, R2, P2; //Loaded from the calibration XML files (see the sketch above)

//Create the undistortion and rectification maps
Mat cam1map1, cam1map2;
Mat cam2map1, cam2map2;

initUndistortRectifyMap(M1, D1, R1, P1, Size(640,480), CV_16SC2, cam1map1, cam1map2);
initUndistortRectifyMap(M2, D2, R2, P2, Size(640,480), CV_16SC2, cam2map1, cam2map2);

Mat leftStereoUndistorted, rightStereoUndistorted; //Matrices for storing the rectified images

/*Acquire images into left and right*/

//Rectify and undistort the images
remap(left, leftStereoUndistorted, cam1map1, cam1map2, INTER_LINEAR);
remap(right, rightStereoUndistorted, cam2map1, cam2map2, INTER_LINEAR);

//Show the rectified and undistorted images
imshow("LeftUndistorted", leftStereoUndistorted);
imshow("RightUndistorted", rightStereoUndistorted);

That’s it! You first use initUndistortRectifyMap() with the appropriate parameters obtained from calibration to generate the joint undistortion and rectification transformation, in the form of a pair of maps that remap() can then apply.
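
One thing worth emphasising is that the maps only need to be built once; remap() is then cheap enough to run on every frame. Below is a minimal sketch of applying them to a live pair of streams, where the camera indices 0 and 1 and the 640x480 size are assumptions for illustration, and the calibration matrices are loaded as described above:

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    //Calibration output: fill these in by loading the XML files (see the earlier sketch)
    Mat M1, D1, R1, P1, M2, D2, R2, P2;

    //Build the joint undistortion and rectification maps once, up front
    Mat cam1map1, cam1map2, cam2map1, cam2map2;
    initUndistortRectifyMap(M1, D1, R1, P1, Size(640,480), CV_16SC2, cam1map1, cam1map2);
    initUndistortRectifyMap(M2, D2, R2, P2, Size(640,480), CV_16SC2, cam2map1, cam2map2);

    //Camera indices are assumptions; adjust for your setup
    VideoCapture capLeft(0), capRight(1);
    if (!capLeft.isOpened() || !capRight.isOpened())
        return -1;

    Mat left, right, leftRect, rightRect;
    for (;;)
    {
        capLeft >> left;
        capRight >> right;
        if (left.empty() || right.empty())
            break;

        //Apply the precomputed maps to every frame
        remap(left, leftRect, cam1map1, cam1map2, INTER_LINEAR);
        remap(right, rightRect, cam2map1, cam2map2, INTER_LINEAR);

        imshow("LeftUndistorted", leftRect);
        imshow("RightUndistorted", rightRect);
        if (waitKey(1) == 27) //Esc to quit
            break;
    }
    return 0;
}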

One interesting point (from the documentation) is that the rectified camera produced by this process is oriented differently in the coordinate space, according to R. This helps to align the images so that the epipolar lines in both images become horizontal and have the same y-coordinate.
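
A quick way to see the effect is to stack the two rectified images side by side and draw a few horizontal lines across the pair; after rectification, matching features should sit on the same line in both halves. A minimal sketch, assuming the leftStereoUndistorted and rightStereoUndistorted images from the code above:

//Side-by-side epipolar check: corresponding features should lie on the
//same horizontal line in both halves after rectification
Mat sideBySide;
hconcat(leftStereoUndistorted, rightStereoUndistorted, sideBySide);
for (int y = 0; y < sideBySide.rows; y += 25)
    line(sideBySide, Point(0, y), Point(sideBySide.cols - 1, y), Scalar(0, 255, 0), 1);
imshow("EpipolarCheck", sideBySide);
waitKey(0);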

For more information, have a look at the OpenCV documentation.

Hassan
