Need help with OpenCV

Hi,

I am doing a project which requires me to write a program that calibrates the camera.

I need help, as I have written the code but there seems to be some problem with it.
I would really appreciate any help.

The code is below.

#include<opencv\cv.h>
#include<opencv\highgui.h>
#include<stdlib.h>
#include<stdio.h>
#include<vector>

using namespace cv;

int main()
{
	int numBoards = 0;
	int numCornersHor;
	int numCornersVer;

	printf("Enter number of corners along width: ");
	scanf("%d", &numCornersHor);
	
	printf("Enter number of corners along height: ");
	scanf("%d", &numCornersVer);

	printf("Enter number of boards: ");
	scanf("%d", &numBoards);
	int numSquares = numCornersHor * numCornersVer;
	Size board_sz = Size(numCornersHor, numCornersVer);

	VideoCapture capture = VideoCapture(0);

	vector<vector<Point3f>> object_points;		//physical positions of the corners in 3D space; this has to be measured by us
	vector<vector<Point2d>> image_points;		//locations of the corners in the image (2D); once the program has the physical locations
												//and the image locations, it can calculate the relation between the two

	//create a list of corners to temporarily hold the current snapshot's chessboard corners, and declare a variable that keeps track
	//of how many chessboards we have successfully captured and saved into the lists declared above

	vector<Point2f> corners;
	int successes=0;

	Mat image;
	Mat gray_image;
	capture >> image;

	//Next, we do a little hack with object_points. Ideally, it should contain the physical position of each corner.
	//The most intuitive way would be to measure distances "from" the camera lens: the camera is the origin and
	//the chessboard has been displaced.

	//Instead, we can fix the chessboard on some plane (for example the XY plane, with Z = 0) and simply assign a constant
	//position to each corner.

	vector<Point3f> obj;
	for(int j=0; j<numSquares; j++)
		obj.push_back(Point3f(j/numCornersHor, j%numCornersHor, 0));		//this creates the list of coordinates (0,0,0), (0,1,0),
																			//(0,2,0) ... and so on; each one corresponds to a particular corner
	while(successes<numBoards)
	{
		//convert image to grayscale
		cvtColor(image,gray_image, CV_BGR2GRAY);

		bool found = findChessboardCorners(image, board_sz, corners, CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FILTER_QUADS);

		if(found)
        {
            cornerSubPix(gray_image, corners, Size(11, 11), Size(-1, -1), TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 30, 0.1));
            drawChessboardCorners(gray_image, board_sz, corners, found);
        }

		imshow("win1", image);
		imshow("win2", gray_image);

		capture>>image;

		int key = waitKey(1);

		if(key==27)		//Esc key quits
			return 0;

		if(key==' ' && found!=0)
		{
			image_points.push_back(corners);
			object_points.push_back(obj);
			printf("Snap stored!\n");

			successes++;

			if(successes>=numBoards)
				break;
		}

	}	
	//next we get ready to do the calibration, we declare variables that will hold the unknowns:

	Mat intrinsic = Mat(3, 3, CV_32FC1);
	Mat distCoeffs;
	vector<Mat> rvecs;
	vector<Mat> tvecs;

	//modify the intrinsic matrix with whatever we already know. The camera aspect ratio is 1 (that's usually the case; if not, change it as required)

	intrinsic.ptr<float>(0)[0] = 1;
	intrinsic.ptr<float>(1)[1] = 1;

	//(0,0) and (1,1) are focal lengths along x and y axis
	//now finally the calibration

	calibrateCamera(object_points, image_points, image.size(),intrinsic, distCoeffs, rvecs, tvecs);
	//after this you will have the intrinsic matrix, distortion coefficients and the rotation + translation vectors.
	//the intrinsic matrix and distortion coefficients are properties of the camera and lens, so as long as you don't change the
	//focal length (e.g. with a zoom lens) you can reuse them.

	//now that we have the distortion coefficients we can undistort the images; a loop will do this

	Mat imageUndistorted;
	while(1)
	{
		capture >> image;
		undistort(image, imageUndistorted, intrinsic, distCoeffs);

		imshow("win1", image);
		imshow("win2", imageUndistorted);

		waitKey(1);
	}

	//finally we'll release the camera and quit

	capture.release();

	return 0;
}


I get the following errors:
c:\...\basiccalibration\basiccalibration\calibrate.cpp(51): warning C4244: 'argument' : conversion from 'int' to 'float', possible loss of data

c:\...\basiccalibration\basiccalibration\calibrate.cpp(78): error C2664: 'void std::vector<_Ty>::push_back(_Ty &&)' : cannot convert parameter 1 from 'std::vector<_Ty>' to 'std::vector<_Ty> &&'
1>          with
1>          [
1>              _Ty=std::vector<cv::Point2d>
1>          ]
1>          and
1>          [
1>              _Ty=cv::Point2f
1>          ]
1>          and
1>          [
1>              _Ty=cv::Point2d
1>          ]
1>          Reason: cannot convert from 'std::vector<_Ty>' to 'std::vector<_Ty>'
1>          with
1>          [
1>              _Ty=cv::Point2f
1>          ]
1>          and
1>          [
1>              _Ty=cv::Point2d
1>          ]
1>          No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called


I am a beginner with OpenCV, so any help would be appreciated.
Thanks
You keep mixing up floats and doubles. Stop using floats, use only doubles. That means no Point2f, only Point2d, etc.

The error is because of this:
vector<vector<Point2d>> image_points;

//...

vector<Point2f> corners;

//...

image_points.push_back(corners);
You mismatched the types - use only doubles.
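In other words, the element type you push_back has to match the vector's declared element type exactly; there is no implicit conversion from vector<Point2f> to vector<Point2d>. A stripped-down sketch of just that mismatch (hypothetical names, not your full program):

#include <opencv2/core/core.hpp>
#include <vector>

int main()
{
    std::vector<std::vector<cv::Point2d>> image_points;   // a vector of vectors of double points
    std::vector<cv::Point2f> corners_f;                    // float points, as declared in your code
    std::vector<cv::Point2d> corners_d;                    // element type matches image_points

    //image_points.push_back(corners_f);   // error C2664: vector<Point2f> is not a vector<Point2d>
    image_points.push_back(corners_d);     // compiles, because the element types agree

    return 0;
}

Whichever point type you settle on, it has to be the same on both sides of that push_back.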
OK, so after converting Point2f and Point3f to Point2d and Point3d respectively, I still get an error:

1>calibrate.obj : error LNK2019: unresolved external symbol "double __cdecl cv::calibrateCamera(class cv::_InputArray const &,class cv::_InputArray const &,class cv::Size_<int>,class cv::_OutputArray const &,class cv::_OutputArray const &,class cv::_OutputArray const &,class cv::_OutputArray const &,int,class cv::TermCriteria)" (?calibrateCamera@cv@@YANABV_InputArray@1@0V?$Size_@H@1@ABV_OutputArray@1@222HVTermCriteria@1@@Z) referenced in function _main
1>calibrate.obj : error LNK2019: unresolved external symbol "void __cdecl cv::drawChessboardCorners(class cv::_OutputArray const &,class cv::Size_<int>,class cv::_InputArray const &,bool)" (?drawChessboardCorners@cv@@YAXABV_OutputArray@1@V?$Size_@H@1@ABV_InputArray@1@_N@Z) referenced in function _main
1>calibrate.obj : error LNK2019: unresolved external symbol "bool __cdecl cv::findChessboardCorners(class cv::_InputArray const &,class cv::Size_<int>,class cv::_OutputArray const &,int)" (?findChessboardCorners@cv@@YA_NABV_InputArray@1@V?$Size_@H@1@ABV_OutputArray@1@H@Z) referenced in function _main
1>C:\Users\Safiyullaah\documents\visual studio 2010\Projects\basiccalibration\Debug\basiccalibration.exe : fatal error LNK1120: 3 unresolved externals


Now this is just a very confusing error; it doesn't point to a line of code in my program, it just says there's an error.

Thanks
Now this is just a very confusing error; it doesn't point to a line of code in my program, it just says there's an error.

That's because it's a linker error. It doesn't relate to any particular line of code - it means that it's finished compiling the code and is trying to link the objects together. It's telling you that some of your code is using some symbols, and that those symbols are not defined anywhere in the objects you've told the linker to link together.

I don't know opencv, but I'd guess the problem is that you're not linking against the correct library.
Yes, a linker error means that your code is correct syntactically but you're missing some code that it needs. Are you properly linking to the OpenCV library?
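For what it's worth, the three unresolved symbols (calibrateCamera, findChessboardCorners, drawChessboardCorners) all live in OpenCV's calib3d module, so my guess would be a missing opencv_calib3d*.lib in the linker's Additional Dependencies. A sketch of one way to pull it in directly from code under MSVC (the version suffix below is a placeholder and has to match the libraries that shipped with your install; debug libraries end in "d"):

// Hypothetical library names: substitute the version number of your own OpenCV build.
#ifdef _DEBUG
#pragma comment(lib, "opencv_calib3d243d.lib")   // debug build of the calib3d module
#else
#pragma comment(lib, "opencv_calib3d243.lib")    // release build of the calib3d module
#endif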
Yes, I followed an online video tutorial which showed how to install OpenCV and create a basic project, and that worked nicely (a camera capture).

In the Debug configuration I listed the library files with the "d" appended (the d.lib versions),
and for Release I listed the libraries without the "d", i.e. just .lib.

I will try switching the two around and see if that makes any difference.

EDIT: I just switched the two property sheets, putting the debug one in the Release configuration and the release one in Debug, and it built successfully. That's strange.

Also, when I run it, it crashes; that's why the camera window is black.

I will try to implement it differently.

For camera calibration I think I should convert the image to grayscale, detect edges, and from the detected edges detect the corners and store those values in a vector (the vector storing the x and y values for each corner), instead of using findChessboardCorners, which takes a long time.

Any suggestions on how I can implement the camera calibration method?

Thanks
Which IDE do you use? By the way, not all tutorials on the Internet are accurate. Some of them are missing a lot of steps. I tried some of them until I gave up, but eventually I found an excellent installation guide that seemed to me a more standard way to link the header files and .obj files. You have to understand some notions to be able to at least understand the errors, let alone solve them. For example: what does the linker do? What files does the linker need? What are the source file, header file, .obj file, and .exe file? What is the difference between Release mode and Debug mode? What does the compiler do? These questions are crucial. If you don't understand them, then you have to follow every tutorial blindly, and as a result you will face a myriad of problems, not to mention that you will eventually give up.
I use Visual Studio 2010. Thanks.

By the way, does it crash because my computer is slow?
OK, so I deleted everything and started from fresh:

Mat image, gray_image;
string filename = "C:\\Project\\chessboard-1.gif";	//you need \\ in the path string, otherwise a single \ starts an escape sequence
image = imread(filename);
imshow("original", image);

cvtColor(image, gray_image, CV_BGR2GRAY);
imshow("processed", gray_image);

waitKey();

return EXIT_SUCCESS;


When I run it, an error dialog pops up saying:

Unhandled exception at 0x76bdfc16 in basiccalibration.exe: Microsoft C++ exception: cv::Exception at memory location 0x0023ef04

Just error after error; I don't know what to do.
This is the point at which you fall back on the programmer's best friend, a debugger. Find out where it's crashing, and that should help you discover why it's crashing.
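One cheap check worth adding while you're at it (just a guess on my part: imread hands back an empty Mat when it can't decode a file, and as far as I know GIF is not among the formats it reads, so everything downstream of it would throw). This is only a sketch meant to slot in right after the imread call in your snippet, reusing your image and filename variables:

image = imread(filename);
if (image.empty())                                       // imread signals failure with an empty Mat
{
    printf("Could not load %s\n", filename.c_str());     // e.g. an unsupported format such as .gif
    return EXIT_FAILURE;
}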
It was crashing at imshow.
Now that step is ruined, as this is a problem with the OpenCV library. Damn.
Is there any way to find out why an unhandled exception occurs?
I tested some code which runs, but when I place my chessboard in view it crashes with an unhandled exception, and on the console it says:

OpenCV Error: Assertion failed (ncorners >= 0 && corners.depth() == CV_32F) in unknown function, file \...\...\src\opencv\modules\imgproc\src\cornersubpix.cpp, line 257.

Either there's a bug in the opencv library, or you're using it incorrectly.

For an assertion to trigger, you must be using the debug build of the opencv library. Is that deliberate?
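I don't know the library, but the assertion text itself spells out what it is checking: the corner data passed to cornerSubPix must have depth CV_32F, which is the 32-bit float type, i.e. Point2f rather than Point2d. If you converted that vector to doubles earlier, a sketch of what the check appears to want (reusing the gray_image and board_sz names from your posted code) would be:

vector<Point2f> corners;    // cornerSubPix's assertion wants 32-bit float corner data
bool found = findChessboardCorners(gray_image, board_sz, corners,
                                   CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FILTER_QUADS);
if (found)
    cornerSubPix(gray_image, corners, Size(11, 11), Size(-1, -1),
                 TermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 30, 0.1));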
Hi,

thanks for replying.

I used the debug build because imshow was not working, but it still doesn't work.

It was not deliberate. If this is causing the exception, how do I change it, and should I change it to the release build?

Thanks a lot so far.
The reason I asked is because an assertion is a kind of check that developers put into code to test that nothing's wrong with the current state. Basically, you test for a certain condition, and if the test fails, then an exception is thrown, which usually causes the program to terminate.

If done right, assertions are only active in a debug build. In a release build, they don't do anything. The idea is that you use assertions to alert you immediately if something is wrong during development, so that you can fix it. In the libraries you release to your users, the program won't terminate due to an assertion, because that would be annoying for users, but the code should be robust enough that it will handle the error in a user-friendly way.
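If it helps to see the mechanism in miniature, the standard assert macro works the same way: the check runs in debug builds and compiles away entirely when NDEBUG is defined, which release builds normally do. A tiny standalone example (nothing to do with opencv):

#include <cassert>
#include <cstdio>

// The precondition is only checked in debug builds; in release (NDEBUG) the assert vanishes.
double ratio(double a, double b)
{
    assert(b != 0.0);   // fails loudly during development if a caller passes zero
    return a / b;
}

int main()
{
    std::printf("%f\n", ratio(10.0, 4.0));   // fine
    //ratio(10.0, 0.0);                      // would trip the assertion in a debug build
    return 0;
}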

Different developers use assertions in different ways. Some use them only to test the state of things that their own libraries are responsible for. They'll assert for things that indicate that something is wrong with their own code.

Others will use assertions to test that their users are using their libraries correctly. If you're using a debug build of a library, the assertion might indicate that you're doing something wrong in the way you're using the library.

I know nothing at all about opencv, so I have no experience that might hint one way or the other, I'm afraid. You might be best off contacting their technical support, or maybe looking online to see if there are any opencv user forums or mailing lists that might be able to help.
I have contacted the forums and still no response.

So I am not using the library properly.
Hmm, strange; I don't know what I'm doing wrong.

Thanks for the help anyway.
Topic archived. No new replies allowed.