### Sparse Matrix determinant and inverse

Hi all, as in the previous post, I'm implementing a sparse matrix using `vector<map<index2, valueType>>` to save resources. Inside each map there is only the column index (used as key) and the corresponding value (of course, only for non-zero elements). I've now implemented the product (as you can see in the previous post).

Now I want to implement the inverse of the matrix and the determinant. Could you suggest some algorithms for doing this while saving as many resources as possible?

Here is the class interface and the code used for the product:

```cpp
template <typename T>
class SparseMatrix {
public:
    template <typename U>
    friend SparseMatrix<U> operator+(const SparseMatrix<U>& m1, const SparseMatrix<U>& m2);
    template <typename U>
    friend SparseMatrix<U> operator-(const SparseMatrix<U>& m1, const SparseMatrix<U>& m2);
    template <typename U>
    friend SparseMatrix<U> operator*(const SparseMatrix<U>& m1, const SparseMatrix<U>& m2);
    template <typename U>
    friend std::ostream& operator<<(std::ostream& out, const SparseMatrix<U>& m);
    template <typename U>
    friend SparseMatrix<U> dot(const SparseMatrix<U>& m1, const SparseMatrix<U>& m2);

public:
    // container type: one map of (column index -> value) per row
    using element_type = T;
    using data_type = std::vector<std::map<std::size_t, element_type>>;
    using it_rows   = typename std::map<std::size_t, element_type>::iterator;

    SparseMatrix(std::size_t rows, std::size_t cols)
        : rows{rows}, columns{cols}
    {
        data.resize(rows);
    }

    SparseMatrix(std::initializer_list<std::initializer_list<element_type>> l);
    SparseMatrix(const std::string);

    auto insert(std::size_t i, std::pair<std::size_t, element_type> p)
    {
        assert(i < rows && p.first < columns); // "Index out of bound"
        data.at(i).insert(p);
    }

    auto insert(std::size_t i, std::size_t j, element_type val)
    {
        assert(i < rows && j < columns);
        // ... (the rest of the interface was cut off in the original post)
```

The dot product:

```cpp
template <typename T>
SparseMatrix<T> dot(const SparseMatrix<T>& m1, const SparseMatrix<T>& m2)
{
    if (m1.columns != m2.rows) {
        throw std::runtime_error("Matrix sizes don't match for the dot-product");
    }
    SparseMatrix<T> result{m1.rows, m2.columns};
    for (std::size_t i = 0; i < result.rows; i++) {
        for (std::size_t j = 0; j < result.columns; j++) {
            for (std::size_t k = 0; k < m1.columns; k++) {
                if (m1.data.at(i).find(k) != m1.data.at(i).end() &&
                    m2.data.at(k).find(j) != m2.data.at(k).end()) {
                    result.data.at(i)[j] += m1.data.at(i).at(k) * m2.data.at(k).at(j);
                }
            }
        }
    }
    return result;
}
```
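A side note on the product itself: since each row already stores only its non-zero entries, the triple loop above can walk the maps directly instead of probing every `(k, j)` pair with `find`. A minimal sketch of that idea, assuming `double` values and the same `vector<map<column, value>>` layout as the post (`SparseRows` and `sparseDot` are made-up names, not part of the class above):

```cpp
#include <cstddef>
#include <map>
#include <vector>

// Row-major sparse storage, as in the post: one map of (column -> value) per row.
using SparseRows = std::vector<std::map<std::size_t, double>>;

// Multiply by iterating only the stored (non-zero) entries: for each non-zero
// a(i,k), combine it with the non-zeros of row k of b. The cost is proportional
// to the number of stored entries rather than rows * cols * inner.
SparseRows sparseDot(const SparseRows& a, const SparseRows& b)
{
    SparseRows result(a.size());
    for (std::size_t i = 0; i < a.size(); ++i) {
        for (const auto& [k, aik] : a[i]) {        // non-zeros of row i of a
            for (const auto& [j, bkj] : b.at(k)) { // non-zeros of row k of b
                result[i][j] += aik * bkj;
            }
        }
    }
    return result;
}
```

The result also stays sparse: `result[i][j]` only ever gets touched when both factors have a stored entry on that path.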

As I recall, the best way to get the det (which is useless, by the way) is to compute the eigenvalues/eigenvectors and build it directly from them: the determinant is the product of the eigenvalues. Unless it is a trivial size (2x2 or 3x3), the det is too much trouble to compute directly.

The inverse can be computed by row-reducing the matrix with an identity matrix appended (standard reduced row echelon form solving, i.e. Gauss-Jordan elimination). The appended identity block becomes the inverse.
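A minimal dense sketch of that augmented-identity idea, assuming `double` values and that you expand the sparse rows into a dense `vector<vector<double>>` first (`Dense` and `inverse` are made-up names, and the singularity threshold is an arbitrary choice):

```cpp
#include <cmath>
#include <cstddef>
#include <stdexcept>
#include <vector>

using Dense = std::vector<std::vector<double>>;

// Gauss-Jordan: augment A with the identity, reduce A to RREF with partial
// pivoting, and the right half of the augmented matrix becomes A^-1.
Dense inverse(Dense a)
{
    const std::size_t n = a.size();
    for (std::size_t i = 0; i < n; ++i) {        // append the identity block
        a[i].resize(2 * n, 0.0);
        a[i][n + i] = 1.0;
    }
    for (std::size_t col = 0; col < n; ++col) {
        std::size_t piv = col;                   // partial pivoting for stability
        for (std::size_t r = col + 1; r < n; ++r)
            if (std::abs(a[r][col]) > std::abs(a[piv][col])) piv = r;
        if (std::abs(a[piv][col]) < 1e-12)
            throw std::runtime_error("matrix is singular");
        std::swap(a[col], a[piv]);
        const double d = a[col][col];            // normalise the pivot row
        for (auto& x : a[col]) x /= d;
        for (std::size_t r = 0; r < n; ++r) {    // clear the column everywhere else
            if (r == col) continue;
            const double f = a[r][col];
            for (std::size_t c = 0; c < 2 * n; ++c) a[r][c] -= f * a[col][c];
        }
    }
    Dense inv(n, std::vector<double>(n));
    for (std::size_t i = 0; i < n; ++i)          // copy out the right half
        for (std::size_t j = 0; j < n; ++j) inv[i][j] = a[i][n + j];
    return inv;
}
```

Note the catch for a sparse class: the inverse of a sparse matrix is generally dense, so there may be little to gain from keeping the map representation during elimination.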

Both the RREF and the det work can exploit sparseness in various places: because of the zeros, there will be rows and columns where you have no work to do.

You may or may not be able to figure out a way to do this without expanding into full dense matrix storage (N x M sized storage); I am struggling to think of a way to do that. I do not know, but suspect, that you may be able to build the inverse if you have the det..?
Could you please explain better? Sounds interesting!
Explain which? Do you know linear algebra well? It's been 20 years for me... and all this stuff is online, but if you do an LU decomposition you can solve for the eigenvalues and back-fill a determinant from there. (In fact, LU gives you the determinant directly: it is the product of U's diagonal entries, with a sign flip for each row swap.)
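A minimal sketch of the determinant via elimination (an in-place LU reduction with partial pivoting), assuming `double` values and a dense input; `determinant` is a made-up name:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// det(A) by Gaussian elimination: reduce A to upper triangular form with
// partial pivoting. The determinant is the product of the resulting diagonal,
// with the sign flipped once per row swap.
double determinant(std::vector<std::vector<double>> a)
{
    const std::size_t n = a.size();
    double det = 1.0;
    for (std::size_t col = 0; col < n; ++col) {
        std::size_t piv = col;                     // pick the largest pivot
        for (std::size_t r = col + 1; r < n; ++r)
            if (std::abs(a[r][col]) > std::abs(a[piv][col])) piv = r;
        if (a[piv][col] == 0.0) return 0.0;        // singular matrix
        if (piv != col) { std::swap(a[col], a[piv]); det = -det; }
        det *= a[col][col];
        for (std::size_t r = col + 1; r < n; ++r) {// eliminate below the pivot
            const double f = a[r][col] / a[col][col];
            for (std::size_t c = col; c < n; ++c) a[r][c] -= f * a[col][c];
        }
    }
    return det;
}
```

This is O(n^3) like the inverse, but it never needs the augmented identity block, and the early return on a zero pivot column handles singular input for free.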

RREF works just like it does in an algebra 101 class; brute force works here, but if the numbers have numerical instability you have to normalize, pivot, and do other stuff. If the matrices are friendly, you can skip that. Trying to be more efficient usually leads to doing some other high-complexity algorithm first, such as... LU decomposition and eigenvalues :P

consider:
http://www.purplemath.com/modules/mtrxinvr.htm
Not the reduced row echelon form solving!!
See if those links make sense. The eigenvalue/det relationship would take 3 pages to explain, and to be honest I haven't looked at it in an extremely long time because, as I said above, the det is about worthless. By the time you have found it, you already know everything about that matrix, so it adds no value.

Any good LA textbook or web site can explain this better than I can.
Ironically, I do have code for it somewhere, but it's from the 90s and it's horrible.