It’s alive!

We haven’t posted for a while, but that’s because we’ve been too busy working on the car! A lot has changed, and much has improved. We have a car, an app that controls the car, a car that stops when an object is detected (mostly..), and some flashing lights!

We’ll post more updates on how things work under the hood in the future, but we’re almost there! There is still some work to be done on the logic that controls the braking of the car, but look, flashing lights!


Creating a tachometer with an Arduino and some correction fluid


One of the requirements for our project is being able to measure the speed the car is travelling at. As well as providing useful feedback for the human controller, this information can be used in the decision-making process.

We decided to implement this in the simplest possible way, using a digital line sensor (https://www.sparkfun.com/products/9454).

From the product description page:

The board’s QRE1113 IR reflectance sensor is comprised of two parts – an IR emitting LED and an IR sensitive phototransistor. When you apply power to the VCC and GND pins the IR LED inside the sensor will illuminate. A 100Ω resistor is on-board and placed in series with the LED to limit current. The output of the phototransistor is tied to a 10nF capacitor. The faster that capacitor discharges, the more reflective the surface is.

In short, the IC outputs a lower value if more light is reflected (i.e. the object is bright), and a higher value if less light is reflected (i.e. the object is dark).

Figure: Sample output from the line sensor (synthesised).

On the Arduino microcontroller, we can test the output of the line sensor in a loop. When we detect lower numbers, we know that the sensor is over the brighter part of the wheel. Every time a drop is seen, a revolution counter is incremented (because the wheel will have spun once). Every second, an interrupt is raised, and the speed is calculated using the following formula:

speed (m/s) = (rpm × 2πr) / 60   where r is the radius of the wheel in metres
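A minimal Arduino sketch along these lines is below. It is illustrative only: the pin number, threshold and wheel radius are made-up values, and it uses a millis()-based check where the real implementation raises a timer interrupt. Since the count is reset every second, the code works directly with revolutions per second (rps × 2πr) rather than rpm.

```cpp
// Illustrative tachometer sketch -- pin, threshold and radius are assumed values.
const int SENSOR_PIN = 2;          // QRE1113 digital breakout signal pin (assumed)
const int THRESHOLD  = 500;        // readings below this = over the white mark (assumed)
const float WHEEL_RADIUS = 0.03;   // wheel radius in metres (assumed)

unsigned int revolutions = 0;
bool overMark = false;
unsigned long lastCalc = 0;

// One common way to read the digital QRE1113 breakout: charge the capacitor,
// then time how long the pin stays HIGH. Lower values mean a brighter surface.
int readLineSensor(int pin) {
  pinMode(pin, OUTPUT);
  digitalWrite(pin, HIGH);
  delayMicroseconds(10);            // charge the capacitor
  pinMode(pin, INPUT);
  unsigned long start = micros();
  while (digitalRead(pin) == HIGH && micros() - start < 3000) {
    // wait for the capacitor to discharge (or time out)
  }
  return micros() - start;
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  int value = readLineSensor(SENSOR_PIN);

  // Count a revolution on the transition onto the bright (correction fluid) mark.
  if (value < THRESHOLD && !overMark) {
    revolutions++;
    overMark = true;
  } else if (value >= THRESHOLD) {
    overMark = false;
  }

  // Once a second, convert revolutions per second into metres per second.
  if (millis() - lastCalc >= 1000) {
    float speedMs = revolutions * 2.0 * PI * WHEEL_RADIUS;
    revolutions = 0;
    lastCalc = millis();
    Serial.println(speedMs);
  }
}
```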

This method of getting speed is extremely simple and requires a minimal amount of circuitry. It is less reliable than using something like a Hall effect sensor, as the surrounding light may interfere with the measurements, or the IR sensor may get dirty or somehow have its line of sight to the wheel blocked. For our controlled environment, however, the system works well and performs to specification.

Hassan

Progress and Plans

We’ve lost a bit of momentum recently, what with having to focus on projects for other modules and waiting for some of the equipment to arrive, so I thought I’d take a moment to collate everything we’ve done so far, and things we plan to do in the future.

Progress:

  1. We have a stereo vision module that is flexible enough to change the number of disparities while running (to allow for the different speeds the car might be travelling at) and that shouts if there are objects that are too close.
  2. We have a server and a client, integrated with the stereo vision code, that can currently send and receive an image.

Plans:

  1. We need to extend the server and client so that we can transfer more than just an image.
  2. We need to devise a mechanism for controlling the car. We only just got the microcontroller that we’ll be using for this today, so we’ll get started on this soon.
  3. We need to implement a method for measuring the speed of the car, so we can adapt the stereo vision algorithm accordingly. We are still waiting for some equipment to do this, but it should be pretty interesting to implement.
  4. I’ve recently discovered that image acquisition and rectification is quite expensive. This is one part of the stereo vision process I was ignoring in terms of performance consideration, but we’ll have to do something about this.
  5. We need to design a mount (and potentially a power supply) for the car, so we can hook everything up in a neat package!
  6. We need to create a basic iPhone (or Android, or desktop) application for demonstration purposes.

It seems like we need to do a lot more than we’ve done already, but I think the two things we’ve done (plus all the setup) were quite a big chunk of it all. Also, bearing in mind that we (still!) haven’t received all of the equipment that we need (including the car, or our own Pandaboard..), there isn’t much more that we can do at the moment.

Hassan

Performance and Optimisation of the Stereo Vision algorithm on the Pandaboard

We’ve had a bit of a hiatus over the last few days, what with working on projects for other modules and all, but we’re back, and today, I’m going to talk about performance on the Pandaboard.

So far, all of the development and testing that I’ve been doing has been done on my laptop. We found in previous tests that on my laptop, using OpenCV’s StereoBM function, with images of 640 x 480, we can get roughly 10 frames per second. That’s probably good enough for controlling the car in real-time if we are not going too fast. But of course, on the Pandaboard, the frame rate is considerably slower.

One alleviating factor is that the Pandaboard’s USB bandwidth is limited, so we can only stream images of 320 x 240 simultaneously. This gives us an excuse to use a lower resolution, which improves performance. Below are the results of running the algorithm, without any optimisations, on the same image 200 times.
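For anyone curious, the timing harness is roughly the sketch below (OpenCV 2.4-style API; the file names are placeholders), with the results following it.

```cpp
// Minimal benchmark sketch for StereoBM; image file names are illustrative.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>

int main() {
    cv::Mat left  = cv::imread("left.png",  0);   // 0 = load as grayscale
    cv::Mat right = cv::imread("right.png", 0);
    if (left.empty() || right.empty()) return 1;

    // SAD window size = 21, number of disparities = 16 * 5
    cv::StereoBM bm(cv::StereoBM::BASIC_PRESET, 16 * 5, 21);
    cv::Mat disparity;

    const int runs = 200;
    double minT = 1e9, maxT = 0.0, total = 0.0;

    for (int i = 0; i < runs; ++i) {
        double start = (double)cv::getTickCount();
        bm(left, right, disparity);
        double t = ((double)cv::getTickCount() - start) / cv::getTickFrequency();
        minT = std::min(minT, t);
        maxT = std::max(maxT, t);
        total += t;
    }

    double avg = total / runs;
    std::cout << "min " << minT << "s, max " << maxT
              << "s, avg " << avg << "s, fps " << 1.0 / avg << std::endl;
    return 0;
}
```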

Table 1 – Results of the StereoBM algorithm over 200 runs, SAD window size = 21, number of disparities = 16*5

  Min (s)    Max (s)    Average (s)    FPS
  0.1890     0.3884     0.1963         5.0942

Despite the image being half the resolution, the FPS is still half that of running the algorithm on the PC.

Optimisations

5 frames per second is not good enough. Even if the FPS was higher, we’d still be trying to optimise things to squeeze as much performance out of the board as we can. I cannot, unfortunately, disclose what optimisations I have implemented so far, as they have not been published yet.

As a starting point, I applied the techniques to a vanilla SAD algorithm, which, from a previous post, took roughly 60 seconds per frame. After said optimisations, the time dropped by a massive 30 seconds!
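For context, a completely naive SAD block matcher looks roughly like the following. This is an illustrative re-creation (assuming 8-bit grayscale, rectified inputs), not the code from the earlier post, and with none of the unpublished optimisations applied, which is exactly why it is so slow.

```cpp
// Naive SAD block matching over a full image -- illustrative only.
#include <opencv2/opencv.hpp>
#include <climits>
#include <cstdlib>

cv::Mat naiveSAD(const cv::Mat& left, const cv::Mat& right,
                 int numDisparities, int windowSize) {
    const int half = windowSize / 2;
    cv::Mat disparity = cv::Mat::zeros(left.size(), CV_8U);

    for (int y = half; y < left.rows - half; ++y) {
        for (int x = half + numDisparities; x < left.cols - half; ++x) {
            int bestDisp = 0;
            long bestCost = LONG_MAX;

            // Try every candidate disparity and keep the one with the
            // smallest sum of absolute differences over the window.
            for (int d = 0; d < numDisparities; ++d) {
                long cost = 0;
                for (int dy = -half; dy <= half; ++dy)
                    for (int dx = -half; dx <= half; ++dx)
                        cost += std::abs(left.at<uchar>(y + dy, x + dx) -
                                         right.at<uchar>(y + dy, x + dx - d));
                if (cost < bestCost) { bestCost = cost; bestDisp = d; }
            }
            disparity.at<uchar>(y, x) = (uchar)bestDisp;
        }
    }
    return disparity;
}
```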

This was very encouraging, so I implemented the same sort of technique for use with OpenCV’s StereoBM function. To my surprise, the time taken per frame almost doubled! I looked at the source code for the implementation and found that it was so well optimised for (I assume) the x86 architecture that it was practically unreadable! I can only conclude that the pre-processing required for the techniques I applied introduces too much overhead, resulting in a deterioration of performance.

Assuming OpenCV won’t be as well optimised for the ARM architecture (and for systems with fewer cores), I tested the optimisations on the Pandaboard, and sure enough there was some improvement. The following table shows the results from two functions: one capable of dealing with any number of disparities (as long as it is a multiple of 16), and the other tuned specifically for 16*5 disparities (I did things like loop unrolling to minimise overhead as much as possible).
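I obviously can’t show the actual optimisations, but as a toy illustration of what tuning for a fixed 16*5 disparities means, loop unrolling looks something like this (generic example, not our code):

```cpp
// General version: works for any number of disparities, but pays loop
// overhead (counter updates and branches) on every iteration.
int sumCosts(const int* costs, int numDisparities) {
    int total = 0;
    for (int d = 0; d < numDisparities; ++d)
        total += costs[d];
    return total;
}

// Specific version: hard-coded for 16 * 5 = 80 disparities and unrolled by 4,
// so there are a quarter as many branches for the same amount of work.
int sumCosts80(const int* costs) {
    int total = 0;
    for (int d = 0; d < 80; d += 4) {
        total += costs[d];
        total += costs[d + 1];
        total += costs[d + 2];
        total += costs[d + 3];
    }
    return total;
}
```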

Table 2 – Results of the optimised implementations over 200 runs, SAD window size = 21, number of disparities = 16*5

             Min (s)    Max (s)    Average (s)    FPS
  General    0.1281     0.3069     0.1349         7.4134
  Specific   0.1289     0.1911     0.1324         7.5549

There is an increase of almost 2.5 frames per second, which is a worthwhile improvement. There is not a huge difference between the specific implementation and the general implementation, however.

We can afford to be specific on the car, though, as once the parameters are set, they will not be changed (unless we allow for variable disparities depending on the scenario the car is driving in). The specific algorithm seems more consistent as well, with a smaller difference between the maximum time/frame and the average.

Bearing in mind that all of these implementations are still sequential (I will experiment with parallelising some parts of them soon), there is hope for yet more improvement. We will also probably calculate the disparity map for only part of the image, since we only really need to know what’s straight ahead of the car.

I am also mindful that after we have the disparity map, we will need to do further calculations per frame to determine whether the map contains an object close enough to warrant action by the car. That is why it is so important to get the actual computation of the disparity map as fast as possible.
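To make that concrete, a rough sketch of both ideas (restricting the disparity computation to a central region, then checking it for close objects) might look like the following. The ROI size, disparity threshold and pixel-count threshold are all made-up numbers, not the values we will actually use.

```cpp
// Sketch: compute disparities only for the centre of the frame and flag the
// frame if too many pixels there are "close". Thresholds are assumed values.
#include <opencv2/opencv.hpp>

bool obstacleAhead(const cv::Mat& left, const cv::Mat& right, cv::StereoBM& bm) {
    // Only look at the middle of the frame, straight ahead of the car.
    cv::Rect roi(left.cols / 4, left.rows / 4, left.cols / 2, left.rows / 2);
    cv::Mat l = left(roi).clone();
    cv::Mat r = right(roi).clone();

    cv::Mat disparity;
    bm(l, r, disparity);                 // CV_16S output, disparities scaled by 16

    // Treat anything above ~40 "real" disparity levels as dangerously close
    // (assumed threshold; larger disparity = nearer object).
    int closePixels = cv::countNonZero(disparity > 40 * 16);

    // If more than 5% of the ROI is that close, it's time to brake.
    return closePixels > 0.05 * roi.area();
}
```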

Hassan

Installing Xubuntu in a Virtual Machine

In the previous post, I got the stereo camera to stream a left and right image, but there was clearly some distortion, and a massive vertical displacement between the two images (that’s what you get for using cheap cameras…). To implement stereo vision, though, the images need to be aligned vertically and only have horizontal displacement. That is why the cameras need to be calibrated.
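For reference, the calibration step boils down to something like the sketch below. It is illustrative only: in practice we used a ready-made program (linked further down), and gathering the chessboard corner points for each image pair is a step in its own right that is skipped here.

```cpp
// Sketch of stereo calibration and rectification; corner points are assumed
// to have been collected already from chessboard images.
#include <opencv2/opencv.hpp>
#include <vector>

void calibrateAndRectify(const std::vector<std::vector<cv::Point3f> >& objectPoints,
                         const std::vector<std::vector<cv::Point2f> >& leftCorners,
                         const std::vector<std::vector<cv::Point2f> >& rightCorners,
                         cv::Size imageSize) {
    cv::Mat M1, D1, M2, D2;        // per-camera intrinsics and distortion
    cv::Mat R, T, E, F;            // rotation/translation between the two cameras

    // Calibrate each camera individually, then the pair together.
    std::vector<cv::Mat> rvecs, tvecs;
    cv::calibrateCamera(objectPoints, leftCorners,  imageSize, M1, D1, rvecs, tvecs);
    cv::calibrateCamera(objectPoints, rightCorners, imageSize, M2, D2, rvecs, tvecs);
    cv::stereoCalibrate(objectPoints, leftCorners, rightCorners,
                        M1, D1, M2, D2, imageSize, R, T, E, F);

    // Compute the rectification transforms that remove the vertical displacement.
    cv::Mat R1, R2, P1, P2, Q;
    cv::stereoRectify(M1, D1, M2, D2, imageSize, R, T, R1, R2, P1, P2, Q);
    // R1/R2 and P1/P2 then feed initUndistortRectifyMap() + remap() per frame.
}
```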

OpenCV provides all of those functions out of the box, but instead of writing my own program, I came across this helpful post, complete with the code required for calibration. Unfortunately, it uses the legacy OpenCV library, and I couldn’t be bothered to incorporate the code into my own Visual Studio solution and muck about with the libraries, so I decided to install Linux in a VM and just compile the program via the provided Makefile. And as we’ve been instructed to essentially write an instruction manual for everything we do, here are the instructions for installing Linux in a VM:

  1. Download VMware Player. It is free software for running virtual machines, with pretty excellent support.
  2. Download your favourite flavour of Linux. We are using Xubuntu 12.10 (so far) for this project. It’s Ubuntu but without the annoying desktop shell.
  3. Start VMware Player, and follow the illustrated guide below.

Figure 1 – Main Screen – Click on “Create a New Virtual Machine”.

 

Figure 2 – New Virtual Machine Wizard – Browse for the Xubuntu ISO you have just downloaded, and click Next.

 

Figure 3 – Enter all of the required information.

 

Figure 4 – Give the virtual machine a name, and decide where to store it.

Figure 5 – Read all the text, and do as your heart tells you.

Figure 6 – The final screen! Review all the information provided and click Finish, and wait a long, long time.. I personally clicked Customize Hardware, and changed some parameters, but you don’t have to.

Figure 7 – After waiting a long, long time, the VM should start, and you should be good to go!

 

That’s it! God bless VMware for making the installation of virtual machines so easy. Next up is a guide on how to install OpenCV on Linux, which is applicable to both the Pandaboard and my VM install of Linux.

Hassan