UML class diagram to code translation

You seem to be awfully interested again (see your other posts) in micro-optimising the allocation of large amounts of memory, whilst completely disregarding how long it would take (for example) just to write 100M numbers into your huge matrix.

Never mind how long it would take to do some meaningful calculation on it.

Saving a few ms on a calculation that might take hours - mmm, totally not worth the effort.

> To make my C code GPU compatible.
It's far easier to optimise a working program than it is to make an optimal program work.
I've seen nothing to convince me that you've gotten anything to work end-to-end yet.
Walk -> Run -> Fly.

> Is it possible to allocate 800MB in one chunk?
For sure, if you have a 64-bit OS and say 16GB of RAM.

Allocating it is one thing.
Whether it all remains in memory all the time is another.
https://docs.microsoft.com/en-us/windows/win32/procthread/process-working-set
I routinely allocate 4-5 GB chunks to read some large XML files into a single buffer. It's nothing even to my low-end work laptop, and a 'real' system (a server) is going to have TB of memory. A decent desktop or gaming system is in the 32-64 GB range.
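
Just to put a number on that, here is a minimal sketch (the sizes are only illustrative, not from your code) of grabbing roughly 800MB -- 100M doubles -- as one flat block, then touching every element, which is what actually pulls the pages into the working set:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* 100M doubles is roughly 800 MB, requested as a single flat block */
    size_t n = 100000000u;
    double *m = malloc(n * sizeof *m);
    if (!m) {
        fprintf(stderr, "allocation of %zu bytes failed\n", n * sizeof *m);
        return 1;
    }

    /* writing every element is what actually brings the pages into the working set */
    for (size_t i = 0; i < n; ++i)
        m[i] = 0.0;

    free(m);
    return 0;
}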

Realloc, or letting a vector resize itself, is a no-no for speed. Allocate what you need all at once, up front. Reallocation / resize-type operations are bad news for big matrix processing; just don't do it. This goes back to my #1 bullet on management of your memory: it's the #1 thing in linear algebra. Linear algebra is very kind about this: for every algorithm/routine/etc. you do, you know how big the inputs are when you get to the routine, and from that you know how big the intermediates will be, how big the result will be, and so on. The only thing that ever really changes size at random is a sparse matrix stored in a collapsed form. Those are a whole new ballgame, though. Are you doing that?!
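
To make the "sizes are known up front" point concrete, here is a hypothetical sketch (the function name and flat row-major layout are just my choices for the example): a multiply routine knows from its inputs that the result is m x p, so it allocates once and never resizes:

#include <stdlib.h>

/* The input sizes are known on entry, so the result (m x p) can be
   allocated once, up front -- no realloc anywhere in the loops. */
double *mat_mul(const double *A, const double *B, size_t m, size_t n, size_t p)
{
    double *C = malloc(m * p * sizeof *C);   /* one allocation, exact size */
    if (!C)
        return NULL;

    for (size_t i = 0; i < m; ++i)
        for (size_t j = 0; j < p; ++j) {
            double s = 0.0;
            for (size_t k = 0; k < n; ++k)
                s += A[i * n + k] * B[k * p + j];   /* row-major indexing */
            C[i * p + j] = s;
        }
    return C;
}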

Abuse of static etc. depends on the problem, scope, hardware, and more. I kept all my temporary objects in memory as statics and shared a huge pool of memory in my library, because the hardware was slow, the matrices were not huge, and I had more RAM than CPU cycles. But this needs to be tailored to your problem -- there are valid reasons to choose various approaches.
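
Purely as an illustration of that trade-off (the names are made up, not from any real library): a file-scope scratch pool that is sized once for the biggest problem and handed out to every routine, spending memory to avoid repeated malloc/free:

#include <stdlib.h>

/* One shared scratch pool, reused by every routine in the library
   instead of doing a malloc/free pair on every call. */
static double *scratch      = NULL;
static size_t  scratch_size = 0;

double *get_scratch(size_t n)
{
    if (n > scratch_size) {            /* grow only when a bigger request arrives */
        free(scratch);
        scratch = malloc(n * sizeof *scratch);
        scratch_size = scratch ? n : 0;
    }
    return scratch;
}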

Hi Jonnin,
Thanks for your reply. It's nice to have such a conversation. Yes, I am actually thinking of moving from ** to * pointers, both for the flexibility and to take advantage of GPU systems. Meanwhile, I am aware of upcoming technologies which support only raw pointers and build on top of C programming.
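
As a quick illustration of that switch (the function names are just for the example), the single-pointer form keeps the whole matrix in one contiguous block, which is what makes it easy to copy to the GPU as a single buffer:

#include <stdlib.h>

/* Pointer-to-pointer layout: one allocation per row, rows scattered in memory. */
double **alloc_2d(size_t rows, size_t cols)
{
    double **m = malloc(rows * sizeof *m);
    for (size_t i = 0; m && i < rows; ++i)
        m[i] = malloc(cols * sizeof **m);
    return m;
}

/* Flat single-pointer layout: one contiguous block, indexed as a[i * cols + j].
   This is the form that can be handed to a GPU as one buffer. */
double *alloc_flat(size_t rows, size_t cols)
{
    return malloc(rows * cols * sizeof(double));
}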

Let's talk another day about the matrix dimensions changing at every time step or iteration (Lagrangian approach). They can change because the numerical grid along the object interface changes.
https://www.youtube.com/watch?v=tokDQ_fPPd8

Great conversation. Thanks to everyone. I am gonna mark this question as solved.

Best regards
Shafiul


we can circle back to it, but even with advanced algorithms like that, you can still over-allocate to some extent, and even if you can't totally avoid a resize, you can avoid doing it frequently. If it's a 10x10 and you allocated a 100x100, and it changes to a 20x20 and then a 30x30, it still fits in the original space. But when you start talking gigantic matrices, you have to start making decisions about how to deal with the issue for your hardware and problem space.

even if you reallocate every 20 iterations instead of every iteration, it's a big lift.
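
A sketch of that idea (the struct and the 4x headroom factor are just illustrative choices): keep the capacity well above the current logical size, so most "resizes" only update the dimensions and realloc fires rarely:

#include <stdlib.h>

/* Grow-only matrix buffer, assumed zero-initialised before first use.
   Capacity is padded well past the current size, so 10x10 -> 20x20 -> 30x30
   reuses one block; realloc happens only on a genuinely bigger request. */
typedef struct {
    double *data;
    size_t  capacity;    /* elements allocated          */
    size_t  rows, cols;  /* current logical dimensions  */
} Matrix;

int matrix_resize(Matrix *m, size_t rows, size_t cols)
{
    size_t need = rows * cols;
    if (need > m->capacity) {
        size_t new_cap = need * 4;               /* over-allocate: 4x headroom */
        double *p = realloc(m->data, new_cap * sizeof *p);
        if (!p)
            return -1;                           /* old buffer is still valid */
        m->data = p;
        m->capacity = new_cap;
    }
    m->rows = rows;
    m->cols = cols;
    return 0;
}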