[help] Data type in MKL examples.

Hi,

In the example at

https://software.intel.com/sites/products/documentation/doclib/mkl_sa/11...

the to-be-diagonalized matrix is declared as a dcomplex type

(struct _dcomplex { double re, im; };
typedef struct _dcomplex dcomplex;)

and initialized using:

dcomplex a[LDA*N] = {
{ 9.14, 0.00}, {-4.37, 9.22}, {-1.98, 1.72}, {-8.96, 9.50},
{ 0.00, 0.00}, {-3.35, 0.00}, { 2.25, 9.51}, { 2.57, -2.40},
{ 0.00, 0.00}, { 0.00, 0.00}, {-4.82, 0.00}, {-3.24, -2.04},
{ 0.00, 0.00}, { 0.00, 0.00}, { 0.00, 0.00}, { 8.44, 0.00}
};

When I try to supply my own matrix elements to a[LDA*N] using

dcomplex a[LDA*N];
for (int i=0; i<N; i++) {
    for (int j=0; j<LDA; j++) {
        int k = j + i*4*N;
        a[k] = {std::real(hamiltonian[i][j]), std::imag(hamiltonian[i][j])};
    } //j
} //i

It complains:

error: expected an expression
a[k] = {std::real(hamiltonian[i][j]),std::imag(hamiltonian[i][j])};
^

when I try to compile the code with icc using the MKL link line.

I guess the question is: how do I initialize/access the members of a[LDA*N] more freely? I don't know how to manipulate this data type.

As you can see, I mostly use vectors to store my data, so I am not familiar with other data types.



Thanks a lot,

Yue
I can't see why you should have that error message (although you have shown an absolutely minimal amount of code and your link is incomplete). Are these genuinely successive lines of code, or have you cut various bits from a larger routine and reassembled them here? This is the sort of error message you would get if something (like forgetting ';') went wrong on a previous line - one that you may have chosen not to show us here.

Try initialising with constants rather than the real and imaginary parts of some complex quantity (I hope you've included the header for that).

Are you sure that your limits on i and j are the right way round? Also, what's with the factor of 4 in int k = j + i*4*N;? If the limit on j were really LDA (and I'm not convinced) then I would expect that to be
int k=j+i*LDA;
Hi lastchance,

Thanks for the quick reply.
The link is
https://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mkl_lapack_examples/zheev_ex.c.htm

Sorry, the 4 is an error, and sorry for being unable to put up a stand-alone example demonstrating the situation better. The full code is 9,000+ lines long.

What I can see is that if I replace my block with the original one (the upper one), the code works fine.

Or I may replace
dcomplex a[LDA*N] = {
{ 9.14, 0.00}, {-4.37, 9.22}, {-1.98, 1.72}, {-8.96, 9.50},
{ 0.00, 0.00}, {-3.35, 0.00}, { 2.25, 9.51}, { 2.57, -2.40},
{ 0.00, 0.00}, { 0.00, 0.00}, {-4.82, 0.00}, {-3.24, -2.04},
{ 0.00, 0.00}, { 0.00, 0.00}, { 0.00, 0.00}, { 8.44, 0.00}
};
with
a[0]={9.14,0.00};
a[1]={-4.37,9.22};
...
and see what happens in the example (the code in the link)?

I will come back with the test result soon.

Thanks,

Yue
@RockAngel - I edited my post (and it may have crossed with your reply). Are you showing successive lines of code, or have you cut some out?

The error message definitely looks like one that might be a result of a line before the one that you flag and which you may have chosen not to show here.

BTW - I did once have a go at writing my own routine for this here:
http://www.cplusplus.com/forum/beginner/220486/2/#msg1014669
Hi lastchance,

I have replaced the initialization part with
a[0]={9.14,0.00};
a[1]={-4.37,9.22};
...
The code is appended.

The error persists for each assignment. (The error is down to the way the array elements are being assigned; the original example compiles fine.)
icc zheev_test00.cpp -mkl -o test
zheev_test00.cpp(93): error: expected an expression
a[0]={ 9.14, 0.00};
^

zheev_test00.cpp(94): error: expected an expression
a[1]={-4.37, 9.22};
^

zheev_test00.cpp(95): error: expected an expression
a[2]={-1.98, 1.72};
^

Thanks,
Yue


#include <stdlib.h>
#include <stdio.h>

/* Complex datatype */
struct _dcomplex { double re, im; };
typedef struct _dcomplex dcomplex;

/* ZHEEV prototype */
extern "C" {
void zheev( char* jobz, char* uplo, int* n, dcomplex* a, int* lda,
            double* w, dcomplex* work, int* lwork, double* rwork, int* info );
}
/* Auxiliary routines prototypes */
extern void print_matrix( char* desc, int m, int n, dcomplex* a, int lda );
extern void print_rmatrix( char* desc, int m, int n, double* a, int lda );

/* Parameters */
#define N 4
#define LDA N

/* Main program */
int main() {
    /* Locals */
    int n = N, lda = LDA, info, lwork;
    dcomplex wkopt;
    dcomplex* work;
    /* Local arrays */
    /* rwork dimension should be at least max(1,3*n-2) */
    double w[N], rwork[3*N-2];
    /*
    dcomplex a[LDA*N] = {
        { 9.14, 0.00}, {-4.37, 9.22}, {-1.98, 1.72}, {-8.96, 9.50},
        { 0.00, 0.00}, {-3.35, 0.00}, { 2.25, 9.51}, { 2.57, -2.40},
        { 0.00, 0.00}, { 0.00, 0.00}, {-4.82, 0.00}, {-3.24, -2.04},
        { 0.00, 0.00}, { 0.00, 0.00}, { 0.00, 0.00}, { 8.44, 0.00}
    };
    */
    dcomplex a[LDA*N];
    a[0]={ 9.14, 0.00};
    a[1]={-4.37, 9.22};
    a[2]={-1.98, 1.72};
    a[3]={-8.96, 9.50};
    a[4]={ 0.00, 0.00};
    a[5]={-3.35, 0.00};
    a[6]={ 2.25, 9.51};
    a[7]={ 2.57, -2.40};
    a[8]={ 0.00, 0.00};
    a[9]={ 0.00, 0.00};
    a[10]={-4.82, 0.00};
    a[11]={-3.24, -2.04};
    a[12]={ 0.00, 0.00};
    a[13]={ 0.00, 0.00};
    a[14]={ 0.00, 0.00};
    a[15]={ 8.44, 0.00};
    /* Executable statements */
    printf( " ZHEEV Example Program Results\n" );
    /* Query and allocate the optimal workspace */
    lwork = -1;
    zheev( "Vectors", "Lower", &n, a, &lda, w, &wkopt, &lwork, rwork, &info );
    lwork = (int)wkopt.re;
    work = (dcomplex*)malloc( lwork*sizeof(dcomplex) );
    /* Solve eigenproblem */
    zheev( "Vectors", "Lower", &n, a, &lda, w, work, &lwork, rwork, &info );
    /* Check for convergence */
    if( info > 0 ) {
        printf( "The algorithm failed to compute eigenvalues.\n" );
        exit( 1 );
    }
    /* Print eigenvalues */
    print_rmatrix( "Eigenvalues", 1, n, w, 1 );
    /* Print eigenvectors */
    print_matrix( "Eigenvectors (stored columnwise)", n, n, a, lda );
    /* Free workspace */
    free( (void*)work );
    exit( 0 );
} /* End of ZHEEV Example */

/* Auxiliary routine: printing a matrix */
void print_matrix( char* desc, int m, int n, dcomplex* a, int lda ) {
    int i, j;
    printf( "\n %s\n", desc );
    for( i = 0; i < m; i++ ) {
        for( j = 0; j < n; j++ )
            printf( " (%6.2f,%6.2f)", a[i+j*lda].re, a[i+j*lda].im );
        printf( "\n" );
    }
}

/* Auxiliary routine: printing a real matrix */
void print_rmatrix( char* desc, int m, int n, double* a, int lda ) {
    int i, j;
    printf( "\n %s\n", desc );
    for( i = 0; i < m; i++ ) {
        for( j = 0; j < n; j++ ) printf( " %6.2f", a[i+j*lda] );
        printf( "\n" );
    }
}
When I run
g++ -Wall -pedantic -Wextra test.cpp

on your code sample, all I get are the following warnings (which probably need dealing with, but aren't relevant to the problem)

test.cpp: In function 'int main()':
test.cpp:59:73: warning: ISO C++ forbids converting a string constant to 'char*' [-Wpedantic]
 zheev( "Vectors", "Lower", &n, a, &lda, w, &wkopt, &lwork, rwork, &info );
                                                                         ^
test.cpp:59:73: warning: ISO C++ forbids converting a string constant to 'char*' [-Wpedantic]
test.cpp:63:71: warning: ISO C++ forbids converting a string constant to 'char*' [-Wpedantic]
 zheev( "Vectors", "Lower", &n, a, &lda, w, work, &lwork, rwork, &info );
                                                                       ^
test.cpp:63:71: warning: ISO C++ forbids converting a string constant to 'char*' [-Wpedantic]
test.cpp:70:42: warning: ISO C++ forbids converting a string constant to 'char*' [-Wpedantic]
 print_rmatrix( "Eigenvalues", 1, n, w, 1 );
                                          ^
test.cpp:72:64: warning: ISO C++ forbids converting a string constant to 'char*' [-Wpedantic]
 print_matrix( "Eigenvectors (stored columnwise)", n, n, a, lda );


I don't have access to icc or to the mkl library, but it may be an issue with that.

Can you try compiling (but not linking) your code with a different compiler?

Are you sure that the mkl library and compiler are both up to date?


EDIT:
I have just tried on c++ shell with compiler features backdated to c++98.
What you are trying to do is only allowed from c++11.

Can you either upgrade your compiler, or set a -std=c++11 flag (or similar)?


Alternatively, and slightly painfully, you could set the two parts of each structure separately:
a[0]={ 9.14, 0.00};
becomes
a[0].re = 9.14; a[0].im = 0.00;
etc. This is painful. Better to either set a compiler flag (if possible) or upgrade the compiler.
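For what it's worth, with your original command line that would presumably just be (assuming your icc version supports the flag):

icc -std=c++11 zheev_test00.cpp -mkl -o test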
Hi lastchance,

I just saw your own code for diagonalizing a matrix.

Actually, the project at hand is a big one: the matrix is of the order of 100,000 by 100,000 and it has to be diagonalized multiple times for self-consistency. I'm stuck with using zheev before moving to a parallel scalapack in the future.

Any idea about how to better approach the problem is welcome.

Thanks :-)

Yue
Hello Yue, I edited my last post. (Sorry, it's a bad habit!)

Check out what I said about either:
- upgrading your compiler to c++11 (or later)
OR
- trying to set a compiler flag -std=c++11 (or later)
OR, AS A LAST RESORT,
- setting both components of the structure separately; e.g.
a[0].re = 9.14; a[0].im = 0.00;
Hi lastchance,

I think I will try the a[0].re and a[0].im way first, before asking the computing person to upgrade the compiler. I will also try the -std=c++11 flag; that one is pretty new to me too.

By the way, g++ doesn't work for me because it seems to be unable to link the acml libraries properly (it complains: zheev.c:(.text+0x98): undefined reference to `zheev').

I will let you know asap.

Any information about diagonalization techniques that could teach me more is welcome :-)

Thanks,

Yue
RockAngel wrote:
By the way, g++ doesn't work for me because it seems to be unable to link the acml libraries properly (it complains: zheev.c:(.text+0x98): undefined reference to `zheev').

It's probable that you haven't told the linker where to find the relevant library (and/or the compiler where to find the headers, usually .h files).
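An "undefined reference" at that stage usually means the library itself isn't on the link line at all. Purely as an illustration (I can't check the exact ACML library name or install path on your system), the link step would look something like:

g++ zheev.c -L/path/to/acml/lib -lacml

possibly with -lgfortran as well, depending on how ACML was built.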

RockAngel wrote:
Any information about diagonalization techniques that could teach me more is welcome :-)

https://www.amazon.co.uk/Numerical-Recipes-3rd-Scientific-Computing/dp/0521880688

I suspect that you are a university researcher, so ask (or get your supervisor to ask) your library.


Hi lastchance,

I tried to go down your list of 3 items.

It turns out that with a newer GCC 4.9 (g++-mp-4.9) on my laptop, it works, with 6 warnings - see below. As you said, c++11 is needed; in my case I also have to add the options -llapack and -lblas. But with g++-mp-4.9 the -fopenmp option is not supported on my laptop. I am kind of confused among these versions on different platforms...

However, on the small cluster at my university, where the GCC version is only 4.4, it complains "unrecognized command line option "-std=c++11"" and "undefined reference to `zheev'".

If it is possible, could you please let me know which .h file you included in the header section that allowed you to manage without those -llapack -lblas options.

I can see that things are really close to working.

Thanks,

Yue

g++ -Wall -pedantic -Wextra -llapack -lblas -std=c++11 zheev_test00.cpp 
clang: warning: libstdc++ is deprecated; move to libc++ with a minimum deployment target of OS X 10.9 [-Wdeprecated]
zheev_test00.cpp:113:16: warning: ISO C++11 does not allow conversion from string literal to 'char *' [-Wwritable-strings]
        zheev( "Vectors", "Lower", &n, a, &lda, w, &wkopt, &lwork, rwork, &info );
               ^
zheev_test00.cpp:113:27: warning: ISO C++11 does not allow conversion from string literal to 'char *' [-Wwritable-strings]
        zheev( "Vectors", "Lower", &n, a, &lda, w, &wkopt, &lwork, rwork, &info );
                          ^
zheev_test00.cpp:117:16: warning: ISO C++11 does not allow conversion from string literal to 'char *' [-Wwritable-strings]
        zheev( "Vectors", "Lower", &n, a, &lda, w, work, &lwork, rwork, &info );
               ^
zheev_test00.cpp:117:27: warning: ISO C++11 does not allow conversion from string literal to 'char *' [-Wwritable-strings]
        zheev( "Vectors", "Lower", &n, a, &lda, w, work, &lwork, rwork, &info );
                          ^
zheev_test00.cpp:124:24: warning: ISO C++11 does not allow conversion from string literal to 'char *' [-Wwritable-strings]
        print_rmatrix( "Eigenvalues", 1, n, w, 1 );
                       ^
zheev_test00.cpp:126:23: warning: ISO C++11 does not allow conversion from string literal to 'char *' [-Wwritable-strings]
        print_matrix( "Eigenvectors (stored columnwise)", n, n, a, lda );
Hello @RockAngel,

I didn't LINK the code because I simply don't have the linear-algebra packages that you need. I COMPILED it and got the same warning messages as you have above. Note that, although strict adherence to the standard is not met, I think that these are only warnings and your code would still run.

You would normally have two choices with both libraries and headers - either put them in the folders where your compiler expects to find them (the source folder, or the default library and include folders) or specify their folder locations as compile/link-time options: you would have to consult the compiler documentation for that.
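For example, with placeholder paths (substitute wherever your own LAPACK/BLAS or MKL installation actually lives):

g++ -std=c++11 -I/path/to/lapack/include -L/path/to/lapack/lib zheev_test00.cpp -llapack -lblas

where -I adds a header search directory, -L adds a library search directory, and -l names the library to link.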

Did simply specifying the individual members of the struct work with your original compiler?
An alternative still would be to just write a simple constructor for the struct (UNTESTED):
struct _dcomplex
{ 
   double re, im;
   _dcomplex( double r, double i ) : re(r), im(i) { } // simple constructor
};


Then set with, e.g. (note ROUND brackets (), not CURLY brackets {} ):
a[k] = dcomplex( std::real(hamiltonian[i][j]),std::imag(hamiltonian[i][j]) );
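One further caveat (again UNTESTED, and not part of the original Intel example): once the struct has a user-declared constructor, plain declarations such as dcomplex a[LDA*N]; and dcomplex wkopt; also need a default constructor, so you would probably want something like:

struct _dcomplex
{
   double re, im;
   _dcomplex() : re(0.0), im(0.0) { }                  // default constructor, keeps dcomplex a[LDA*N]; compiling
   _dcomplex( double r, double i ) : re(r), im(i) { }  // simple two-argument constructor
};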


I'm afraid that I can't help you with parallelisation using OpenMP. All the large-scale parallelisation that I do is with distributed-memory, using MPI.
Hi lastchance,

Yes, I understand that you didn't need to link, which suits me, because I work across 4 machines, each serving a different purpose. Having to figure out where the mathematical libraries live is driving me crazy, especially when I have to decide which compiler to use on the different machines ToT

The good news is that I just found that your suggested item 3, the a[0].re / a[0].im way, works without the c++11 option.

Now, please allow me to discuss with you how to parallelize.

I understand that you're using g++ together with Open MPI parallelization(?). I'm using OpenMP to parallelize the do loops. As for parallelizing the diagonalization, I still have no idea how to proceed; I only know that scalapack has useful tools for that. That said, I can switch to other parallelization schemes if necessary. What is your suggestion?

Thanks,

Yue
Hello @RockAngel,

Despite similarity in names, OpenMP and Open MPI are completely different things!!!

OpenMP is multi-threading and uses shared memory; it typically parallelises loops.

MPI uses distributed memory and tends to use domain decomposition (split up your grid into blocks, communicating between processors at boundaries).

To quote Wikipedia:
"OpenMP is used for parallelism within a (multi-core) node while MPI is used for parallelism between nodes."

I use MPI (not OpenMP) and on my desktop PC I use the MPICH version, not Open MPI. The application is computational fluid dynamics.

Some compilers and libraries will automatically set up OpenMP for you, but you would need to think harder yourself how to use MPI. If scalapack has routines that already use OpenMP then you should be able to just call them and they will do the parallelisation for you, but I have no experience of that.
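Just to give a flavour of loop-level OpenMP, here is a toy example with made-up array names (I don't use OpenMP myself, so check the details against your compiler's documentation):

#include <cstdio>
#include <vector>

int main() {
    const int n = 1000000;
    std::vector<double> x(n, 1.0), y(n, 2.0);
    const double a = 3.0;

    // The pragma asks OpenMP to split the loop iterations across threads.
    // Compile with the OpenMP flag (e.g. g++ -fopenmp); without it the pragma is simply ignored.
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        y[i] = a*x[i] + y[i];
    }

    std::printf( "y[0] = %f\n", y[0] );
    return 0;
}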

I'm glad that you managed to get around the C++11-dependent initialisation problems.


With regard to your application, if your matrices are Hermitian (quantum mechanics?) they will have quite considerable symmetry properties (and real eigenvalues, obviously). If they are coming from differential equations then they are probably also very sparse. If they are as large as you say then you should choose library routines that take account of their Hermitian form and sparse structure. Also, it is substantially faster to find eigenvalues than eigenvectors as well.
Hi lastchance,

Thanks for the detailed explanation.

What I have are 2 small Intel servers with 20+ cores, and access to a small cluster with 10 nodes, each having 20 cores. That means I should be using MPI (either Open MPI or Intel MPI) on the multi-node machine if the problem is larger, and OpenMP for smaller problems on my servers.

Regarding my problem, you're completely right about it: it's Hermitian and sparse. Unfortunately, I have to use the eigenvectors corresponding to certain eigenvalues (not the smallest). But a test problem can be as small as 8,000+ by 8,000+. What I am thinking is that scalapack may have some routine dedicated to this type of problem, and I guess those are probably the fastest choice. My surprise is that online resources are scarce, especially when it comes to c++ code.

BTW, the problem is not fully solved. With the replacement below (even in the 4-by-4 situation) it compiles, but the calculation exits when it gets there, with the error:

Segmentation fault (core dumped)

Using
dcomplex a[LDA*N] = {
           { 9.14,  0.00}, {-4.37,  9.22}, {-1.98,  1.72}, {-8.96,  9.50},
           { 0.00,  0.00}, {-3.35,  0.00}, { 2.25,  9.51}, { 2.57, -2.40},
           { 0.00,  0.00}, { 0.00,  0.00}, {-4.82,  0.00}, {-3.24, -2.04},
           { 0.00,  0.00}, { 0.00,  0.00}, { 0.00,  0.00}, { 8.44,  0.00}
        };

there is no problem. What on earth is going on with the initialization...

dcomplex a[LDA*N];
for (int i=0; i<N; i++) {
    for (int j=0; j<LDA; j++) {
        int k = j + i*N;
        a[k].re = std::real(hamiltonian[i][j]);
        a[k].im = std::imag(hamiltonian[i][j]);
    } //j
} //i


Thanks,

Yue
It usually means an out-of-bounds error.
Your index line, int k = j + i*N;, is not consistent with the limits of i and j as expressed in the do-loops unless N happens to be equal to LDA. If the loop limits are correct, then it should be
int k = j + i*LDA;
Alternatively, the i and j loop limits may be the wrong way round - only you can say this.

Beyond this, it may not reflect the size of your Hamiltonian array.
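If it helps, the whole copy loop would then look something like this (untested, and assuming the loop limits really are i < N and j < LDA, with 'hamiltonian' being your own std::complex-valued container and <complex> included):

dcomplex a[LDA*N];
for (int i = 0; i < N; i++) {
    for (int j = 0; j < LDA; j++) {
        int k = j + i*LDA;                          // stays within 0 .. LDA*N-1
        a[k].re = std::real(hamiltonian[i][j]);
        a[k].im = std::imag(hamiltonian[i][j]);
    } //j
} //i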