How does recursion work when defining a multidimensional template array?

This is the code I found online.

#include <iostream>

// Primary template: declared but never defined; only the partial
// specializations below are ever instantiated.
template<class T, unsigned... RestD> struct array;

// Base case: a single remaining dimension. ::type is a built-in array
// of PrimaryD elements of T.
template<class T, unsigned PrimaryD>
struct array<T, PrimaryD>
{
    typedef T type[PrimaryD];
    type data;
    T& operator[](unsigned i) { return data[i]; }
};

// Recursive case: peel off the first dimension (PrimaryD) and use the
// type built for the remaining dimensions (RestD...) as the element type.
template<class T, unsigned PrimaryD, unsigned... RestD>
struct array<T, PrimaryD, RestD...>
{
    typedef typename array<T, RestD...>::type OneDimensionDownArrayT;
    typedef OneDimensionDownArrayT type[PrimaryD];
    type data;
    OneDimensionDownArrayT& operator[](unsigned i) { return data[i]; }
};

int main()
{
    array<int, 2, 3>::type a4 = { { 1, 2, 3 }, { 1, 2, 3 } };
    array<int, 2, 3> a5{ { { 1, 2, 3 }, { 4, 5, 6 } } };
    std::cout << a5[1][2] << std::endl;

    array<int, 3> a6{ { 1, 2, 3 } };
    std::cout << a6[1] << std::endl;

    array<int, 1, 2, 3> a7{ { { { 1, 2, 3 }, { 4, 5, 6 } } } };
    std::cout << a7[0][1][2] << std::endl;
}



Could you explain what this code does exactly? I understand that recursion is used in some form here to create a multidimensional array, but I am a little confused about how this process works.

I am also confused about this line:
array<int, 2, 3>::type a4 = { { 1, 2, 3}, { 1, 2, 3} };

What is the ::type?
How bad would the memory glut be? And what do you mean when you talk about native types?

If the memory glut is indeed that severe, do you have an alternative suggestion for handling multidimensional data, where the number of dimensions is provided at compile time?

I am aware of an approach where a multidimensional array can be indexed into a 1D array using strided indexing, but that method seems overly cumbersome for what I am aiming to do: create a C++ equivalent of a NumPy array in Python.
@BreakingTheBadBread,
What you are looking at are variadic templates. The ... refers to an unspecified number of parameters, which are dealt with one by one by recursion at compile time. In Python you would just write *args.

The ::type is the nested typedef inside each specialization: for array<int, 2, 3> it names the built-in array type int[2][3], and data is the member declared with that type.
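As an illustration (not part of the original code), you can check what each ::type expands to by appending a few static_asserts after the template definitions above, using <type_traits>:

#include <type_traits>

// Base case: one dimension left, so ::type is a plain built-in array.
static_assert(std::is_same<array<int, 3>::type, int[3]>::value,
              "array<int, 3>::type is int[3]");

// Recursive case: array<int, 2, 3> peels off PrimaryD = 2 and reuses
// array<int, 3>::type (i.e. int[3]) as its element type.
static_assert(std::is_same<array<int, 2, 3>::type, int[2][3]>::value,
              "array<int, 2, 3>::type is int[2][3]");

// One more level of recursion: int[1][2][3].
static_assert(std::is_same<array<int, 1, 2, 3>::type, int[1][2][3]>::value,
              "array<int, 1, 2, 3>::type is int[1][2][3]");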

This may give you the indexing structure of numpy arrays, but it won't give you all the other features like elemental operations, striding, etc.

I'd still flatten to 1-d arrays!
@lastchance

What do you recommend to achieve the latter? The indexing structure is what made this option look enticing to me, but would you recommend a 1D array plus strided indexing for efficient NumPy-style operations? My end goal is to perform a variety of matrix operations, such as matrix multiplication, which is why the indexing appealed to me. Or do you have another suggestion?
It is much easier to do "whole-array" operations (the equivalent of NumPy's a=a+4 or b=np.sin(a) etc.) with 1-d arrays.
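For instance, a minimal sketch of what those whole-array operations look like on flat 1-d storage (using std::vector<double> as the storage; the function names here are just illustrative, not a fixed API):

#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

// a = a + 4: add a scalar to every element of the flat storage.
void add_scalar(std::vector<double>& a, double s)
{
    for (double& x : a) x += s;
}

// b = sin(a): apply a function elementwise, producing a new flat array.
std::vector<double> elementwise_sin(const std::vector<double>& a)
{
    std::vector<double> b(a.size());
    std::transform(a.begin(), a.end(), b.begin(),
                   [](double x) { return std::sin(x); });
    return b;
}

int main()
{
    std::vector<double> a{0.0, 1.0, 2.0, 3.0, 4.0, 5.0};   // logically 2 x 3
    add_scalar(a, 4.0);
    std::vector<double> b = elementwise_sin(a);
    std::cout << a[0] << ' ' << b[0] << '\n';               // 4 and sin(4)
}

The point is that the loops never need to know how many logical dimensions the array has.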

For a 2-d array you would just define a function index(i,j) to return i*nj + j (or amend it if you didn't want zero-indexed arrays). Matrix multiplication would then be
    c[index(i,j)] = sum over k of a[index(i,k)] * b[index(k,j)]
etc.
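A minimal sketch of that flattened approach, assuming row-major layout and passing the column count nj into index() explicitly (in practice it would likely be a data member of a matrix class; all names here are illustrative):

#include <cstddef>
#include <iostream>
#include <vector>

// Map 2-d indices (i, j) of an ni x nj matrix onto flat storage.
std::size_t index(std::size_t i, std::size_t j, std::size_t nj)
{
    return i * nj + j;
}

// c = a * b, where a is ni x nk and b is nk x nj, all stored flat.
std::vector<double> matmul(const std::vector<double>& a,
                           const std::vector<double>& b,
                           std::size_t ni, std::size_t nk, std::size_t nj)
{
    std::vector<double> c(ni * nj, 0.0);
    for (std::size_t i = 0; i < ni; ++i)
        for (std::size_t j = 0; j < nj; ++j)
            for (std::size_t k = 0; k < nk; ++k)
                c[index(i, j, nj)] += a[index(i, k, nk)] * b[index(k, j, nj)];
    return c;
}

int main()
{
    // 2x3 times 3x2 gives a 2x2 result.
    std::vector<double> a{1, 2, 3, 4, 5, 6};       // 2 x 3
    std::vector<double> b{7, 8, 9, 10, 11, 12};    // 3 x 2
    std::vector<double> c = matmul(a, b, 2, 3, 2);
    std::cout << c[index(0, 0, 2)] << ' ' << c[index(1, 1, 2)] << '\n';  // 58 154
}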