MPI

Let's say I have a vector of vectors on my master processor. I want to assign each inner vector to its corresponding processor: for example, the first processor will deal with just the first vector, the second processor should receive the second vector, the third processor the third vector, and so on. But I get a stack error each time. Any help would be appreciated.

#include <mpi.h>
#include <vector>
#include <cstdio>   // for printf
int main(int argc, char *argv[])
{
	int num_procs, myrank;
	MPI_Init(&argc, &argv);
	MPI_Status status;
	MPI_Comm_size(MPI_COMM_WORLD, &num_procs);
	MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
	std::vector<int> vec_local;
	if (myrank == 0) {
		std::vector<std::vector<int>> vec = { {1,2},{2,1,3},{2,3,7,4},{1,7,4,6,4,2,4} };
		for (int i_proc = 0; i_proc < num_procs; i_proc++) {

			if (i_proc == 0) {
				// rank 0 keeps its own piece locally
				vec_local = vec[i_proc];
			}
			else {
				// send the size first, then the vector data itself
				int size = vec[i_proc].size();
				MPI_Send(&size, 1, MPI_INT, i_proc, 0, MPI_COMM_WORLD);
				MPI_Send(vec[i_proc].data(), vec[i_proc].size(), MPI_INT, i_proc, 0, MPI_COMM_WORLD);
			}
		}
	}
	else {
		// receive the size, resize the local buffer, then receive the data
		int size;
		MPI_Recv(&size, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
		vec_local.resize(size);
		MPI_Recv(vec_local.data(), size, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
	}

	printf("process %d prainting local_vector:\n", myrank);
	for (int i = 0; i < vec_local.size(); i++)
	{
			printf("%d \t", vec_local[i], myrank);
		printf("\n");
	}

	MPI_Finalize();
	return 0;
}
You haven't given the correct destination in your second MPI_Send call. Root (0) appears to be trying to send the vector to itself rather than i_proc. Compare the first send above.
It was a typo here, but in my code it is i_proc; I have fixed that point in the question now. I don't know why I'm getting a stack error. It seems that during the communication the variable "size" is changing.
Extremely hard to see how you get a typo when it's just cut and pasted.

Anyway, you forgot
MPI_Finalize();
if (i_proc == 0) {
}

If you don't need the 0, why don't you start the loop with 1?
It's not a copy and paste; it is a very simple case which I wrote here to present my question. Again, MPI_Finalize() exists in my code, but I didn't write it in this question. Let me ask my question in a different way. In this simple case we are sending the variable size and vec[i_proc] from the master rank to the others, and on the other processors we receive the size, resize vec_local, and then receive the data into vec_local. Now imagine we have several different vectors of vectors on the master rank and I want to do the same for all of them. Can using the same tag for all of these communications cause any trouble?
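
Something like this toy sketch is what I mean (not my real code; distribute, vec_a, and vec_b are made-up names just for illustration). The same tag 0 is reused for every message in both rounds:

#include <mpi.h>
#include <vector>
#include <cstdio>

// Rank 0 sends row i_proc of 'rows' to rank i_proc (size first, then the data);
// every other rank receives its row into 'local'. One tag is used for all messages.
static void distribute(std::vector<std::vector<int>>& rows, std::vector<int>& local,
                       int myrank, int num_procs, int tag)
{
	MPI_Status status;
	if (myrank == 0) {
		local = rows[0];                                  // rank 0 keeps its own row
		for (int i_proc = 1; i_proc < num_procs; i_proc++) {
			int size = (int)rows[i_proc].size();
			MPI_Send(&size, 1, MPI_INT, i_proc, tag, MPI_COMM_WORLD);
			MPI_Send(rows[i_proc].data(), size, MPI_INT, i_proc, tag, MPI_COMM_WORLD);
		}
	}
	else {
		int size = 0;
		MPI_Recv(&size, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
		local.resize(size);
		MPI_Recv(local.data(), size, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
	}
}

int main(int argc, char *argv[])
{
	MPI_Init(&argc, &argv);
	int num_procs, myrank;
	MPI_Comm_size(MPI_COMM_WORLD, &num_procs);
	MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

	// Two different vectors of vectors, built only on the master rank
	// (one inner vector per process; the contents are arbitrary).
	std::vector<std::vector<int>> vec_a, vec_b;
	if (myrank == 0) {
		for (int r = 0; r < num_procs; r++) {
			vec_a.push_back(std::vector<int>(r + 1, r));
			vec_b.push_back(std::vector<int>(2 * r + 1, -r));
		}
	}

	std::vector<int> local_a, local_b;
	distribute(vec_a, local_a, myrank, num_procs, 0);   // same tag (0) for both rounds;
	distribute(vec_b, local_b, myrank, num_procs, 0);   // message ordering keeps them separate

	printf("rank %d received %d ints from vec_a and %d from vec_b\n",
	       myrank, (int)local_a.size(), (int)local_b.size());

	MPI_Finalize();
	return 0;
}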
resabzr wrote:
Again, MPI_Finalize() exists in my code, but I didn't write it in this question.


@resabzr, think about it! You ask us to correct a problem with your code ... but then you don't present the code that has the problem! How on earth are we supposed to guess what is actually on your desktop?


Can using the same tag for all of these communications cause any trouble?

For the particular types of send and receive that you are using it doesn't matter if the tags are all the same: the messages will arrive in the right order.
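
To illustrate what "the right order" means, here is a tiny example of my own (arbitrary values, run with at least two processes): two blocking sends from the same source to the same destination on the same communicator and tag are non-overtaking, so the receives always match them in the order they were sent.

#include <mpi.h>
#include <cstdio>

int main(int argc, char *argv[])
{
	MPI_Init(&argc, &argv);
	int rank;
	MPI_Comm_rank(MPI_COMM_WORLD, &rank);

	const int tag = 0;                                    // the same tag for both messages
	if (rank == 0) {
		int first = 111, second = 222;
		MPI_Send(&first, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
		MPI_Send(&second, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
	}
	else if (rank == 1) {
		int a, b;
		MPI_Recv(&a, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
		MPI_Recv(&b, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
		// a is always 111 and b is always 222: the messages cannot overtake each other.
		printf("rank 1 received %d then %d\n", a, b);
	}

	MPI_Finalize();
	return 0;
}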



If I run the latest version of your code (PLEASE STOP CHANGING POSTS AT THE TOP OF THE THREAD) then I get the following output (which appears to be OK). Obviously, the way you have set up your vector of vectors you need to use 4 processors. It's probably not quite the output you intended because the statement
printf("%d \t", vec_local[i], myrank);
is a bit screwy.

C:\c++>"C:\Program Files\Microsoft MPI\bin"\mpiexec -n 4 test 
process 0 prainting local_vector:
1       
2       
process 1 prainting local_vector:
2       
1       
3       
process 3 prainting local_vector:
1       
7       
4       
6       
4       
2       
4       
process 2 prainting local_vector:
2       
3       
7       
4
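
If it helps, my guess at what that print loop was meant to do, with the stray myrank argument dropped, would be something like this (a drop-in replacement inside your main(), depending on whether you want one value per line or a single tab-separated line):

	printf("process %d printing local_vector:\n", myrank);
	// one value per line, as the current output shows:
	for (int i = 0; i < (int)vec_local.size(); i++)
		printf("%d\n", vec_local[i]);
	// or, for a single tab-separated line:
	// for (int i = 0; i < (int)vec_local.size(); i++)
	//     printf("%d\t", vec_local[i]);
	// printf("\n");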

I knew that in this simple case the code is right; that's why I asked my question in a different way in the last comment. The problem starts when we have several different vectors of vectors on the master processor and we aim to do the same for all of them. It seems to me that something gets mixed up there when I do that, which is why I suspected the tags. And yes, this example should run with four processors, but in my real code vec is constructed so that it always has one entry per processor, so in general it's not a problem when running with any number of processors. The real code is big; I tried to reduce my question to a simple case so as not to take up too much of the readers' time.

For the particular types of send and receive that you are using it doesn't matter if the tags are all the same: the messages will arrive in the right order.


For which types of send and receive can the tag cause a problem?
@resabzr
You said that your code crashed - to quote:
but I get a stack error each time
I ran it and it didn't crash. So what are we supposed to make of that?



I tried to reduce my question to a simple case so as not to take up too much of the readers' time.

No, you ended up completely wasting people's time.



For which types of send and receive can the tag cause a problem?

As long as you are careful that the send tag is the same as the receive tag (which the MPI system will look for to check validity), you should have no problem.
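
To spell out the failure mode: a blocking receive only matches a message whose tag (and source and communicator) satisfies what the receive asked for, so a mismatched tag means the receive never completes and the program hangs. A small example of my own (the tag value 5 is arbitrary, run with at least two processes):

#include <mpi.h>
#include <cstdio>

int main(int argc, char *argv[])
{
	MPI_Init(&argc, &argv);
	int rank;
	MPI_Comm_rank(MPI_COMM_WORLD, &rank);

	if (rank == 0) {
		int value = 42;
		MPI_Send(&value, 1, MPI_INT, 1, 5, MPI_COMM_WORLD);      // sent with tag 5
	}
	else if (rank == 1) {
		int value;
		// Tag 5 matches the send above, so this completes.  A receive posted with
		// a different tag (say 7) would never match the message and would block
		// forever; MPI_ANY_TAG would accept a message with any tag instead.
		MPI_Recv(&value, 1, MPI_INT, 0, 5, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
		printf("rank 1 got %d\n", value);
	}

	MPI_Finalize();
	return 0;
}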



Your question asked why your code crashed. But it didn't crash. So there was no point in asking the question. If you want to inquire why a particular code crashed then supply that code ... or the simplest possible variant of it that still provokes the problem.
Sorry, I didn't provide a good example for my question.
Topic archived. No new replies allowed.