Data structure for storing and processing 3-dimensional data

Hi everybody,

I am currently doing some research with regard to the data structure that fits the following task:
I am reading in data either from a camera or from a video file:

    cv::VideoCapture capture;

    if (inputFromCamera) {
        // read from camera
    } else {
        // read from file
        int ex = static_cast<int>(capture.get(CV_CAP_PROP_FOURCC));     // get the codec type as an int
        cout << "Codec Type- Int form is " << ex << endl;
    }

Then, the live stream from the camera or from the video file is written to a cv::Mat:

    cv::Mat MatInput;
    capture >> MatInput;

For reference, I am using the OpenCV library (2.3.1-7). Not sure if that helps, but I am working on Ubuntu 12.04 and my IDE is Qt Creator 2.4.1.

After reading the data into the cv::Mat it gets a bit tricky: each pixel's change over the course of the video/stream is treated as a time series, and I need to perform the same discrete Fourier transform on every one of these series. Then a filter will be applied (a multiplication in the frequency domain) and the result will be transformed back to the time domain.
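In case it helps to make that per-pixel processing concrete, here is a minimal sketch of the pipeline on a single time series. It uses a naive O(N²) DFT purely for illustration (a real implementation would use cv::dft or FFTW), and the "filter" shown in the comments is a made-up one that simply zeroes the DC bin:

```cpp
#include <cmath>
#include <complex>
#include <vector>

typedef std::complex<double> cd;
const double PI = 3.141592653589793;

// Forward DFT of one real-valued pixel time series (naive O(N^2), illustration only).
std::vector<cd> dft(const std::vector<double>& x) {
    const std::size_t N = x.size();
    std::vector<cd> X(N);
    for (std::size_t k = 0; k < N; ++k)
        for (std::size_t n = 0; n < N; ++n)
            X[k] += x[n] * std::exp(cd(0.0, -2.0 * PI * k * n / N));
    return X;
}

// Inverse DFT; returns the real part of the reconstructed time series.
std::vector<double> idft(const std::vector<cd>& X) {
    const std::size_t N = X.size();
    std::vector<double> x(N);
    for (std::size_t n = 0; n < N; ++n) {
        cd s(0.0, 0.0);
        for (std::size_t k = 0; k < N; ++k)
            s += X[k] * std::exp(cd(0.0, 2.0 * PI * k * n / N));
        x[n] = s.real() / N;
    }
    return x;
}
```

Filtering is then an element-wise multiplication on the returned spectrum before calling idft; for example, setting `X[0] = 0` between the two calls removes the DC (constant) component of the pixel's series.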
For the realization of this algorithm I am looking for an efficient 3-dimensional data structure that stores ALL the frames coming from the video or from the camera. Once all the frames are available, the same processing (see above) will be applied to every time series. Of course, if the video is very long and has a very high resolution, the amount of data will be very big. Unfortunately, I think there is no way to avoid storing all the frames first: the Fourier transform will only deliver the required results if a complete time series (i.e. the complete data of one pixel) can be processed in one piece.
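One possible layout for this: append each incoming frame (flattened, row-major) to a growing list while capturing, and once capture ends, transpose the data so that each pixel's series is contiguous in memory and can be handed to the transform in one piece. A plain-C++ sketch (the frame contents and sizes are placeholders; real frames would come from the cv::Mat):

```cpp
#include <cstddef>
#include <vector>

// frames[t] is frame t, flattened row-major to `pixels` values.
// Returns one contiguous time series per pixel: series[p][t] = frames[t][p].
std::vector<std::vector<double> >
pixelSeries(const std::vector<std::vector<double> >& frames, std::size_t pixels) {
    std::vector<std::vector<double> > series(
        pixels, std::vector<double>(frames.size()));
    for (std::size_t t = 0; t < frames.size(); ++t)
        for (std::size_t p = 0; p < pixels; ++p)
            series[p][t] = frames[t][p];
    return series;
}
```

The transpose costs one extra copy of the whole data set, but afterwards every series is a contiguous block, which is what a DFT routine wants.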

I think that cv::MatND might be an option, but so far I could not find any example of how to use it.
I can create a cv::MatND by doing:

    cv::MatND testMatND;

But then, I have no clue how to copy the data from my cv::Mat into the cv::MatND. As I said, there are no examples, or at least I did not find them.

Then, there might also be a better option than cv::MatND. I don't know.
It's also possible that it makes more sense to generate one array for each time series. So, for example, if I use my webcam (resolution 352 x 640) this would mean generating 225280 arrays (one array per pixel) to store the data and then performing the Fourier transform on each of those arrays.
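If the one-array-per-pixel route is taken, a quarter of a million separate heap allocations can be avoided by packing all the series into one flat buffer, with pixel p's series occupying the contiguous range [p*T, p*T + T). Each series can still be handed to the transform in one piece, but there is only a single allocation. A sketch (the struct name is made up for illustration):

```cpp
#include <cstddef>
#include <vector>

// All pixel time series in one flat buffer instead of `pixels` separate arrays.
// series(p) points at a contiguous run of T samples for pixel p.
struct PixelSeriesStore {
    std::size_t pixels, T;
    std::vector<float> data;

    PixelSeriesStore(std::size_t pixels_, std::size_t T_)
        : pixels(pixels_), T(T_), data(pixels_ * T_) {}

    float* series(std::size_t p) { return &data[p * T]; }           // start of pixel p's series
    float& at(std::size_t p, std::size_t t) { return data[p * T + t]; }
};
```

For the webcam example above with, say, 1000 frames of floats, this buffer would be 225280 x 1000 x 4 bytes, i.e. roughly 900 MB, so the memory estimate is worth doing up front either way.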
As you see, I am not sure at all about the best approach for such a task.
So, if anybody has got an idea on how to solve this problem, please let me know.
Many thanks in advance!