How do I capture and process every frame of an image sequence using the CImg library?

Hi, I'm working on a project based on real-time image processing with the CImg library on a Raspberry Pi.

I need to capture images at a high frame rate (say at least 30 fps). When I use the built-in Raspicam commands, such as

sudo raspistill -o img_%d.jpg -tl 5 -t 1000 -a 512

/* -tl : time-lapse interval between captures, in msec
-t : total capture duration (1000 msec = 1 sec)
-a 512 : annotates each image with its frame number
*/

With this command, although it reports 34 frames per second, I could only capture a maximum of 4 frames/images (the rest of the frames are skipped).

sudo raspistill -o img_%d.jpg -tl 5 -t 1000 -q 5 -md 7 -w 640 -h 480 -a 512

With this second command I could capture at most 7-8 images per second, but only by reducing the resolution and quality of the images.

But I don't want to compromise on image quality, since I will be capturing an image, processing it immediately, and then deleting it to save memory.

Later I tried using the V4L2 (Video4Linux2) drivers to get the best performance out of the camera, but tutorials covering both V4L2 and CImg are quite scarce on the internet; I couldn't find one.

I have been using the following commands


# Capture a JPEG image
v4l2-ctl --set-fmt-video=width=2592,height=1944,pixelformat=3
v4l2-ctl --stream-mmap=3 --stream-count=1 --stream-to=somefile.jpg



(source : http://www.geeetech.com/wiki/index.php/Raspberry_Pi_Camera_Module)

but I couldn't find enough information about those parameters (--stream-mmap and --stream-count): what exactly do they do, and how do these commands help me capture 30 frames/images per second?

**CONDITIONS:**

1. Most importantly, I don't want to use OpenCV, MATLAB, or any other image-processing software, since my image-processing task is very simple (i.e. detecting fast LED light blinks, as if the LED were transferring data as 1s and 0s at high speed - Visible Light Communication). Also, my objective is to have a light weight tool to perform these operations at the cost of higher performance.

2. My code should be in either C or C++, but not in python or Java (since processing speed matters !)

3. Please note that my aim is not to record a video but to capture as many frames as possible and to process each image individually.
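To be concrete about how simple the processing is: once each frame has been reduced to a single mean-intensity value, decoding the LED's on/off states is just a threshold test. A minimal stdlib-only sketch of that decoding stage (the function name and the default threshold of 128 are illustrative assumptions, not from any library, and the threshold would need calibration against the real setup):

```cpp
#include <vector>

// Decode one bit per frame: a frame whose mean intensity exceeds the
// threshold is read as 1 (LED on), otherwise as 0 (LED off).
std::vector<int> decode_bits(const std::vector<double>& frame_means,
                             double threshold = 128.0)
{
    std::vector<int> bits;
    bits.reserve(frame_means.size());
    for (double m : frame_means)
        bits.push_back(m > threshold ? 1 : 0);
    return bits;
}
```

Everything hard about the task is getting the frames delivered fast enough; the decision step itself costs almost nothing per frame.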


To use CImg for this, I searched through a few docs in the reference manual, but I couldn't clearly understand how to use it for my purpose.

The class cimg_library::CImgList represents lists of cimg_library::CImg<T> images. It can be used for instance to store different frames of an image sequence.
(source : http://cimg.eu/reference/group__cimg__overview.html )

* I found the following example, but I'm not quite sure whether it suits my task:

Load a list from a YUV image sequence file.

CImgList<T>& load_yuv(const char *const filename,
                      const unsigned int size_x,
                      const unsigned int size_y,
                      const unsigned int first_frame = 0,
                      const unsigned int last_frame = ~0U,
                      const unsigned int step_frame = 1,
                      const bool yuv2rgb = true)

Parameters:
filename : Filename to read data from.
size_x : Width of the images.
size_y : Height of the images.
first_frame : Index of first image frame to read.
last_frame : Index of last image frame to read.
step_frame : Step applied between each frame.
yuv2rgb : Apply YUV to RGB transformation during reading.

But here, I need the RGB values from the image frames directly, without compression.
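One way to get at uncompressed RGB values without CImg or any decoder at all (a sketch, not tested on the Pi): stream raw frames into the program's stdin, for example with something like `raspividyuv --rgb -o -`, so each frame arrives as a fixed-size block of bytes. Reading one such frame then needs nothing beyond C standard I/O; the 640x480 defaults below are an assumption matching the resolution mentioned earlier:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Read one raw RGB24 frame (3 bytes per pixel, no header, no compression)
// from a stdio stream and return its mean channel value, or -1.0 if the
// stream ends before a full frame is available.
double mean_intensity(std::FILE* in, int width = 640, int height = 480)
{
    const std::size_t n = static_cast<std::size_t>(width) * height * 3;
    std::vector<std::uint8_t> buf(n);
    if (std::fread(buf.data(), 1, n, in) != n)
        return -1.0;                      // end of stream / truncated frame
    unsigned long long sum = 0;
    for (std::uint8_t b : buf)
        sum += b;
    return static_cast<double>(sum) / n;
}
```

From such a per-frame mean the on/off state of the LED can be thresholded directly, without ever building a full image object.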

Now I have the following code in OpenCV which performs my task, but I request your help in implementing the same using the CImg library (which is in C++), any other lightweight library, or something with V4L2.

#include <iostream>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

int main()
{
    VideoCapture capture(0); // Since you have your device at /dev/video0

    /* You can edit the capture properties with "capture.set(property, value);"
       or in the driver with "v4l2-ctl --set-ctrl=auto_exposure=1" */

    waitKey(200); // Wait 200 ms to ensure the device is open

    Mat frame; // Matrix where each new frame will be stored
    if (capture.isOpened()) {
        while (true) {
            capture >> frame;       // Put the new image in the Matrix

            imshow("Image", frame); // Show the image on the screen
            if (waitKey(1) >= 0)    // waitKey lets imshow refresh;
                break;              // press any key to quit
        }
    }
    return 0;
}

* I'm a beginner to programming and the Raspberry Pi; please excuse any mistakes in the above problem statement.

Awaiting your favorable suggestions and any help is really appreciated.

Thanks in advance

Best Regards
BLV Lohith Kumar
Questions:
1. Do you know if the hardware is actually capable of capturing 30 images per second?
2. If you just need to detect the blinking of a LED, why do you need more quality than the lowest quality the camera provides? As long as the camera can see the intensity and/or the color of the light, that should do it, right?
> my objective is to have a light weight tool to perform these operations at
> the cost of higher performance.
> but not in python or Java (since processing speed matters !)
decide yourself.


> sudo raspistill -o -img_%d.jpg -tl 5 -t 1000 -a 512
Instead of using an external command, try to find a C/C++ API.
For example http://www.uco.es/investiga/grupos/ava/node/40

Also, you may want raspivid instead of raspistill
Thank you so much for your suggestions and recommendations!

@hellos

Yes, the hardware is capable of that; there is no problem with the hardware (that is why it reports 34 frames per second). The problem is how to capture each and every frame and process it.

Regarding detection of the blinking LED (i.e. detecting fast LED light blinks, as if the LED were transferring data as 1s and 0s at high speed - Visible Light Communication): yes, as you mentioned, as long as the camera can detect the intensity of the light that's quite good. But in real time, due to background noise and the poor quality or lower resolution of the image, there is a possibility that the image processing might make a wrong decision.

@ne555

Hi, regarding the C/C++ API: I would like to know the pros and cons of using an external command versus an API. I guess even the APIs are designed on top of the same built-in camera interface, but I'm not quite sure about this; in any case, I would love to know more about this issue from you.

Once again thank you so much for your support.
I think I know what helios is getting at on this one. What model Raspberry Pi are you using? If you're planning on capturing a full 30 frames per second, and not a differential, at full resolution for any prolonged period of time, then remember that the Raspberry Pi 3 only has 1 GB of RAM, so write times to your secondary storage become an issue. And that's just talking about capture. If you're planning on doing any processing on top of that, what are you going to do, dedicate half of the usable RAM to each operation? You'll be thrashing to and from the disk like a bastard.

Using the RPi to capture the images and forward them to a beefier machine makes more logistical sense. But again, 30 full-resolution frames every second requires a lot of buffering, even for UDP.
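To put rough numbers on that bandwidth concern: an uncompressed 640x480 RGB24 stream at 30 fps is 640 * 480 * 3 * 30 = 27,648,000 bytes/s, about 26.4 MiB/s; that is sustainable to a ramdisk, but marginal at best for an SD card. A trivial helper to check the arithmetic (illustrative only):

```cpp
// Raw bandwidth of an uncompressed RGB24 (3 bytes/pixel) video stream.
long long raw_bytes_per_second(long long width, long long height, long long fps)
{
    return width * height * 3 * fps;
}
```

At full 2592x1944 sensor resolution the same arithmetic gives well over 400 MiB/s, which is why full-resolution, full-rate capture is not realistic on this hardware.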
@Computergeek01

Hi, I'm using a Raspberry Pi 2 Model B+. It need not be full resolution; 640 x 480 would do, and even a low-quality image is fine. I'm trying to store the images in a ramdisk instead of on the SD card, and they need not be full frames. My main idea is to detect an LED light which is blinking at a very high rate.

thank you.
Though using a camera is a valid approach, in effect you just want to capture a single bit of information. It might be interesting to consider connecting some other sensor such as a photodiode or phototransistor - you might need the odd resistor or capacitor as well, unless there is a ready-made light sensor to simply connect to the Pi. The software end of the project then becomes relatively trivial.