c:\opencv2.4.3\build\include\opencv2\flann\logger.h(66): warning C4996: 'fopen': This function or variable may be unsafe. Consider using fopen_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help for details.
1>  c:\program files\microsoft visual studio 10.0\vc\include\stdio.h(234) : see declaration of 'fopen'
1>c:\users\safiyullaah\documents\visual studio 2010\projects\3d image reconstruction\3d image reconstruction\main.cpp(98): warning C4244: 'argument' : conversion from 'float' to 'unsigned int', possible loss of data
1>c:\users\safiyullaah\documents\visual studio 2010\projects\3d image reconstruction\3d image reconstruction\main.cpp(99): warning C4244: 'argument' : conversion from 'float' to 'unsigned int', possible loss of data
1>c:\users\safiyullaah\documents\visual studio 2010\projects\3d image reconstruction\3d image reconstruction\main.cpp(100): warning C4244: 'argument' : conversion from 'float' to 'unsigned int', possible loss of data
With std::cout, you don't have to have it all on one line like you have on line 118 - you can build it up in stages, putting in an std::endl or '\n' where required. I am also not a big fan of really long comments on one line.

You have camlen & camwidth as doubles, but everywhere else you have floats. You use these in the PtInRect function, but the arguments are floats, so you have unnecessary implicit casting. In general you have a lot of casting going on everywhere - see if you can cut it down a bit.

cout << threedimension.at<int>(1,1,2); // the value is 65231512 - something like this for a lot of them

at<int> isn't doing a cast; rather, I am guessing it should reflect the underlying type, which is float, so try this:

cout << threedimension.at<float>(1,1,2);
It says to set it to float, but the error still occurs. E.g. with v = vertexpoints[k].x I changed it to float(vertexpoints[k].x), but it still says unsigned int - I don't use unsigned ints.
How do I set the compiler warning options to max?
Most of the code in my program is variables.
The loop is actually just generating the 3D points to use for the projectPoints function. So if I have dimensions of x=10, y=10, z=10, I want to generate the 3D points 0,0,0 then 0,0,1, all the way up to 9,9,9.
Continuing on from the other topic: I'm still confused about the 3D array:

for (int l = 0; l < int(lenx); l++) {
    for (int w = 0; w < int(leny); w++) {
        for (int h = 0; h < int(lenz); h++) {
        }
    }
}

If the dimensions of this were all 10, there are 1000 values. With an image, wouldn't there only be 100 values? How can a camera image have a height as well as x,y? I am missing the whole point, I think.
The loop is actually just generating the 3D points to use for the projectPoints function. So if I have dimensions of x=10, y=10, z=10, I want to generate the 3D points 0,0,0 then 0,0,1, all the way up to 9,9,9.
This is 1000 values, which is different to 100 xyz values. This is what confuses me the most.
To compare equality for FP numbers, you have to take the absolute value of the difference between the 2 numbers, and see if that is less than a predetermined precision value (0.001, say):

const float PRECISION = 0.001f;
float a = 0.1f;           // really 0.0999997
float b = 10.0f * a;      // 0.9999997
float c = 1.0f;
if (std::abs(c - b) < PRECISION) {
    std::cout << "numbers c & b equal" << "\n";
} else {
    std::cout << "numbers c & b NOT equal" << "\n";
}
static_cast - I mentioned that earlier; did you try it?

There is a 3D array which contains 1s and 0s. This loop generates all the coordinates of the elements of that 3D array; a 1 represents a texture and a 0 represents a hole. I will be rendering the parts that contain 1s to produce a 3D model.

Mat threedimension(3, sz, CV_32F, Scalar::all(1)); // create 3-dim matrix, type CV_32F, filled with 1s
threedpoint.push_back(Point3f(l, w, h));

That is lenz times too much data. If lenz is 150, you have 150 times too much data - 150 z values for each xy value.
http://programmingexamples.net/wiki/OpenCV/WishList/ProjectPoints
I still think your concept of 3D points is not right. A 3D array is not the right container for 3D points. As you said before, the projectPoints function only needs the coordinates, but now you are talking about a texture as well. Even if you wanted to store a texture along with its 3D point, I still wouldn't use a 3D array.
There is not supposed to be one z value for each x,y value - there should be more, depending on the size of the object in the z direction. Adding a struct would not work, because a 3D array better represents the model than a vector of points.
Although, from your description of why it didn't work before, could it possibly be solved using an appropriate value for the scale part of the transformation? Large real world -> scale factor -> small image plane.
Even if a 3D point represented an individual pixel, and there were multiple pixels per xy point, using a 3D array is still massively wasteful IMO, and I still don't understand why you would do this for relatively large real-world coordinates. I realise that it is the texture data that is in the 3D array, but it still has real-world 3D coords, so what I am saying applies to it. It just seems like a recipe for a huge waste of memory.
To give an example from my work: I make 3D models all the time, so I can do contour plans and calculate volumes etc. The software I use certainly does not store them in 3D arrays; they are conceptually a vector of 3D points.
The camera is a virtual one in that scenario, although you seem to be using a real one. I am not sure how that part works either. If the mask is going to hide points projected from the 3D model, wouldn't the camera image have to be made from the model?