Google Protobuf

Back from holidays!! I did some work on the project over the start of the break, so I'm going to share what I've done over the past two weeks. First of all, we had the problem of transferring multiple pieces of data from the server to the client. This is crucial because, for our real-time system, we want to inform the client about the speed and possibly other things as well. The approach we're looking at to solve this problem is serialization. According to Wikipedia, serialization is the process of converting a data structure or object state into a format that can be stored (for example, in a file or memory buffer, or transmitted across a network connection link) and "resurrected" later in the same or another computer environment.


In our case, we need serialization because we want to recreate the data on our client, which might be a different platform. For example, in our project we need to transfer data from the PandaBoard (ARM) server to a PC client (x86). I have yet to try whether the image survives intact without serialization. However, based on the result I got from trying a PC (x86) to an iPod Touch 3G (ARMv7), there seem to be some errors. My guess would be that the architectures differ in byte ordering or type sizes, or it's just poor coding on my part.

Having looked through multiple serialization frameworks, we have decided on using Google Protocol Buffers. There are others, such as Apache Thrift (used by Facebook) and Boost Serialization (part of the popular Boost library). The reason protocol buffers were chosen is that they were the fastest and cleanest of the ones we tried. Apache Thrift would be good for multi-platform support, but there isn't enough documentation on how to use it.

Google Protocol Buffer (Protobuf)

You can find the main page here ( ). It is a bit like XML and JSON. After installing, you will need to write a .proto file and run it through the protoc compiler (for C++, something like protoc --cpp_out=. packet.proto generates the classes). Just use any text editor and create the file, adding a .proto extension when you save it. The .proto file looks a bit like Java: you declare a message and define its fields. For our project we used something like this.

package tutorial;

option java_package = "com.example.tutorial";
option java_outer_classname = "AddressBookProtos";

message Packet {
  repeated int32 data = 1 [packed=true];
  optional int32 speed = 2;
  optional int32 size = 3;
}

I'm not completely sure of the exact definition of package, but to my understanding it works like a namespace, keeping message names from clashing between projects. A message is like a new type used in protobuf; inside it you declare fields depending on what you need. You set each field to either optional or required. Optional is good especially when you want to reuse the buffer but not send certain things. Repeated means you can store data in that field multiple times, accessed through an index; it is similar to arrays. The downside is that multi-dimensional arrays are not supported natively, so you will need to google around to find out how to do it.
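For reference, the usual workaround I've seen for the multi-dimensional case is to wrap each row in its own message (just a sketch, with made-up names):

```proto
message Row {
  repeated int32 values = 1 [packed=true];
}

message Matrix {
  repeated Row rows = 1;  // rows(i).values(j) acts like matrix[i][j]
}
```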

How to add it in your code:

	//Declare the protocol buffer
	tutorial::Packet packetSize;
	//Set a value for the size field in the protobuf
	packetSize.set_size(640 * 480); // e.g. an image size to report
	//Create an array of the serialized size of the message
	int arraysize = packetSize.ByteSize();
	char* id = new char[arraysize];
	//Serialize it to the array created
	packetSize.SerializeToArray(id, arraysize);
	//Send it over the network
	bytes = send(clientsock, id, arraysize, 0);
	delete[] id;

Installing on Xcode for iOS:

This is much more complicated than installing on a PC or Mac. I had to fiddle around a little bit.


I've managed to get it working between computers, and I have it coded for iOS as well. It seems slightly slow, but that may be because I've only managed to borrow an old iPod Touch 3G from a friend; I'm hoping to get something faster. It might not improve performance significantly since I've yet to implement concurrency, but I'm sure it will run faster than now because of the better network card.

Between PC and iOS :


Will update more when I’ve implemented more stuff. Happy New Year Everyone !!

PandaBoard Streaming

In order to see what the car is doing, we needed a way to stream data between the car and a server. Having googled around, two methods fit the bill so far.

First – C++ server → Websocket → HTML5 Client

This seems to be the best solution (if it works) for the project, as clients can have the comfort of their HTML5-supported browser (while using Facebook at the same time) without any extra installation. I decided to follow the method implemented here:

Streaming from a fixed video file works well: there wasn't any frame drop or lag, and it works for both MP4 and Ogg formats. To stream captured video, I used OpenCV to capture frames, then wrote every frame to a video container using the built-in OpenCV VideoWriter and its write() method. Ogg was used as the container in this case, as OpenCV doesn't have a codec to write MP4 files. As the writing happens, the file is streamed through the server mentioned above.

At the receiving end, the results don't look good at all: there is massive lag and delay in the received video. This is expected, as there is overhead from file I/O, encoding, and network delays. After some googling, there doesn't seem to be a better solution to this problem at the moment, as OpenCV doesn't allow writing frames to memory instead of files. On the bright side, the recording from the webcam is playable and appears to be real time. We might incorporate this method into the project as a way of recording the journey of the car.


Pros –

  • Video stream is in colour
  • Can be accessed through a browser
  • Supports many devices
  • Streaming is recorded

Cons –

  • Streaming is sluggish, with a massive delay of about 30 seconds

Second Method – C++ Server → Websocket → C++ Client

Anyway, moving on: instead of streaming encoded data, I decided to try streaming raw OpenCV data instead. The downside of this method is that the client end has to have OpenCV installed, instead of using the more widely available browsers. I managed to stream data using the method from below:


Pros –

  • Faster speed
  • Much less delay (only about 5 seconds)
  • Doesn't require a server to be set up

Cons –

  • At the moment, the image viewed is in grayscale (working on colour streaming)
  • The remaining delay still defeats the objective of real time

Verdict –

At the moment, the second method is better than the first at displaying video obtained from OpenCV. More options are still to be explored; hopefully one of them beats these two. Results to be uploaded soonish in the future.