Sparse Matrix determinant and inverse

Hi all, as in the previous post I'm implementing a sparse matrix using a vector of maps (one map per row) to save resources: inside each map there is only the column index (used as the key) and the corresponding value (of course, just for the non-zero elements). I've now implemented the product (as you can see in the previous post).

Now I want to implement the inverse of the matrix and the determinant. Could you suggest some algorithms for doing this while saving as many resources as possible?

Here are the class interface and the code used for the product:

#include <cassert>
#include <cstddef>
#include <initializer_list>
#include <map>
#include <ostream>
#include <stdexcept>
#include <string>
#include <utility>
#include <vector>

template <typename element_type>
class SparseMatrix {

  public:
    template <class T>
    friend SparseMatrix<T> operator+(const SparseMatrix<T>& m1, const SparseMatrix<T>& m2);

    template <class T>
    friend SparseMatrix<T> operator-(const SparseMatrix<T>& m1, const SparseMatrix<T>& m2);

    template <class T>
    friend SparseMatrix<T> operator*(const SparseMatrix<T>& m1, const SparseMatrix<T>& m2);

    template <class T>
    friend std::ostream& operator<<(std::ostream& out, const SparseMatrix<T>& m);

    template <class T>
    friend SparseMatrix<T> dot(const SparseMatrix<T>& m1, const SparseMatrix<T>& m2);

  public:
    // container type: one map per row, keyed by column index, holding only non-zero values
    using data_type = std::vector<std::map<std::size_t, element_type>>;
    using it_rows   = typename std::map<std::size_t, element_type>::iterator;

    SparseMatrix(std::size_t rows, std::size_t cols) : rows{rows}, columns{cols}
    {
        data.resize(rows);
    }

    SparseMatrix(std::initializer_list<std::initializer_list<element_type>> l);

    SparseMatrix(const std::string);

    auto insert(std::size_t i, std::pair<std::size_t, element_type> p)
    {
        assert(i < rows && p.first < columns);   // index out of bounds
        data.at(i).insert(p);
    }

    auto insert(std::size_t i, std::size_t j, element_type val)
    {
        assert(i < rows && j < columns);
        data.at(i)[j] = val;
    }

    auto identity() noexcept;

    auto diag(const element_type& v) noexcept;

    auto print() const noexcept;

    auto dataType() const noexcept;

    auto traspose() noexcept;

    auto printf() const noexcept;

    element_type operator()(std::size_t, std::size_t) noexcept;
    const element_type operator()(std::size_t, std::size_t) const noexcept;

  private:

    std::size_t rows;
    std::size_t columns;
    data_type   data;        // one map of (column, value) pairs per row
};


The dot product:
template <class T>
SparseMatrix<T> dot(const SparseMatrix<T>& m1, const SparseMatrix<T>& m2)
{
      if (m1.columns != m2.rows)
      {
            throw std::runtime_error("Matrix sizes don't match for the dot-product");
      }

      SparseMatrix<T> result{m1.rows, m2.columns};

      for (std::size_t i = 0; i < result.rows; i++)
      {
           for (std::size_t j = 0; j < result.columns; j++)
           {
                 // note: k runs over every column; the maps are only used to
                 // check whether both operands actually store a non-zero entry
                 for (std::size_t k = 0; k < m1.columns; k++)
                 {
                        if (m1.data.at(i).find(k) != m1.data.at(i).end() &&
                            m2.data.at(k).find(j) != m2.data.at(k).end())
                        {
                            result.data.at(i)[j] += (m1.data.at(i).at(k) * m2.data.at(k).at(j));
                        }
                 }
           }
      }
      return result;
}


As I recall, the best way to get the det (which is useless, by the way) is to compute the eigenvalues/vectors and build it directly from that. Unless it is a trivial size (2x2 or 3x3), the det is too much trouble to compute directly.
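
(For reference, the relationship being used there is that the determinant is the product of the eigenvalues, det(A) = λ1 · λ2 · ... · λn, counting multiplicity.)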

The inverse can be computed by solving the matrix with an identity matrix appended (standard reduced-row-echelon-form solving). The appended identity block becomes the inverse.
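
To make the "append an identity and row-reduce" idea concrete, here is a minimal Gauss-Jordan sketch. It deliberately uses plain dense std::vector<std::vector<double>> storage for clarity rather than the SparseMatrix class above, uses partial pivoting, and the function name invert is just for illustration:

#include <cmath>
#include <cstddef>
#include <stdexcept>
#include <utility>
#include <vector>

// Gauss-Jordan inversion: augment A with the identity, reduce the left block
// to the identity, and the right block ends up holding A^-1.
std::vector<std::vector<double>> invert(std::vector<std::vector<double>> a)
{
    const std::size_t n = a.size();

    // build the augmented matrix [A | I]
    for (std::size_t i = 0; i < n; ++i)
    {
        a[i].resize(2 * n, 0.0);
        a[i][n + i] = 1.0;
    }

    for (std::size_t col = 0; col < n; ++col)
    {
        // partial pivoting: pick the row with the largest entry in this column
        std::size_t pivot = col;
        for (std::size_t r = col + 1; r < n; ++r)
            if (std::abs(a[r][col]) > std::abs(a[pivot][col]))
                pivot = r;
        if (a[pivot][col] == 0.0)
            throw std::runtime_error("matrix is singular");
        std::swap(a[col], a[pivot]);

        // scale the pivot row so the pivot becomes 1
        const double p = a[col][col];
        for (std::size_t j = 0; j < 2 * n; ++j)
            a[col][j] /= p;

        // eliminate this column from every other row
        for (std::size_t r = 0; r < n; ++r)
        {
            if (r == col || a[r][col] == 0.0)   // zero entry: no work to do
                continue;
            const double factor = a[r][col];
            for (std::size_t j = 0; j < 2 * n; ++j)
                a[r][j] -= factor * a[col][j];
        }
    }

    // keep only the right block: that is the inverse
    std::vector<std::vector<double>> inv(n, std::vector<double>(n));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            inv[i][j] = a[i][n + j];
    return inv;
}

Each elimination step skips rows whose entry in the pivot column is already zero, which is exactly where a sparse layout would save work.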

Both the RREF and the det work can exploit sparseness in various places: simply because of the zeros, there will be places where you have no work to do.
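
For example, with the map-per-row layout in the class above, a single row-elimination step only has to visit the columns the pivot row actually stores. A hypothetical free-function helper (eliminate_row is not part of the posted class, just an illustration of the idea):

#include <cassert>
#include <cstddef>
#include <map>

// Hypothetical helper: row_r -= factor * pivot_row, visiting only the
// columns the pivot row actually stores (its non-zero entries).
template <class T>
void eliminate_row(std::map<std::size_t, T>& row_r,
                   const std::map<std::size_t, T>& pivot_row,
                   T factor)
{
    assert(&row_r != &pivot_row);           // erasing while iterating the same map would be unsafe
    for (const auto& entry : pivot_row)     // only the stored (non-zero) entries
    {
        T& cell = row_r[entry.first];       // creates a zero entry if the column was absent
        cell -= factor * entry.second;
        if (cell == T{})
            row_r.erase(entry.first);       // keep the row sparse: drop entries that became zero
    }
}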

You may or may not be able to figure out a way to do this without expanding into full-bore dense matrix storage (NxM-sized storage). I am struggling to think of a way to do that. I do not know, but I suspect you may be able to BUILD the inverse if you have the det..?
Could you please explain better? It sounds interesting!
Explain which? Do you know linear algebra well? It's been 20 years ... and all this stuff is online, but if you do an LU decomposition you can solve for the eigens and back-fill a determinant from there.

RREF works just like it does in an algebra 101 class; brute force works here, but if the numbers have numerical instability you have to normalize and do other stuff. If the matrices are friendly, you can skip that. Trying to be more efficient usually leads to doing some other high-complexity algorithm first, such as ... LU decomp and eigenvalues :P
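
To illustrate the determinant end of that (this is not code from the thread, just a sketch): with partial pivoting, LU factorization gives det(A) as the product of the pivots times the sign of the row permutation. A minimal dense version over std::vector<std::vector<double>>, with the hypothetical name determinant:

#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Determinant via LU decomposition with partial pivoting:
// det(A) = sign(P) * product of the diagonal of U.
double determinant(std::vector<std::vector<double>> a)
{
    const std::size_t n = a.size();
    double det = 1.0;

    for (std::size_t col = 0; col < n; ++col)
    {
        // pick the pivot row (largest magnitude in this column)
        std::size_t pivot = col;
        for (std::size_t r = col + 1; r < n; ++r)
            if (std::abs(a[r][col]) > std::abs(a[pivot][col]))
                pivot = r;
        if (a[pivot][col] == 0.0)
            return 0.0;                    // singular matrix
        if (pivot != col)
        {
            std::swap(a[col], a[pivot]);
            det = -det;                    // each row swap flips the sign
        }

        det *= a[col][col];                // the diagonal of U accumulates the determinant

        // eliminate below the pivot (this builds U; L is implicit and not stored)
        for (std::size_t r = col + 1; r < n; ++r)
        {
            const double factor = a[r][col] / a[col][col];
            if (factor == 0.0)             // zero entry: nothing to eliminate
                continue;
            for (std::size_t j = col; j < n; ++j)
                a[r][j] -= factor * a[col][j];
        }
    }
    return det;
}

The same zero-skipping applies here: where entries below the pivot are already zero there is nothing to eliminate, which is the kind of work a sparse representation avoids.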

consider:
https://www.coin-or.org/CppAD/Doc/eigen_det.cpp.htm
and
http://www.purplemath.com/modules/mtrxinvr.htm
Not the reduced row echelon form solving!!
See if those links make sense. The eigen/det relationship would take 3 pages to explain, and to be honest I haven't looked at it in an extremely long time because, as I said above, the det is about worthless. By the time you have found it, you already know everything about that matrix, so it adds no value.

Any good LA textbook or web site can explain this better than I can.
Ironically, I do have code for it somewhere, but it's from the 90s and it's horrible.