C++ to a whole new level

Pages: 123
If we were talking about several MB worth of memory, I'd agree.

But we're literally talking about less than half a KB. In reality, with the way allocations get rounded up to the allocator's granularity, it's likely that the few bytes of memory we're "wasting" would get "wasted" anyway.
closed account (3qX21hU5)
Lumpkin wrote:
I hate it how when someone writes inefficient code they're like "it doesn't matter, give it more hardware." Seriously, better machines are not an excuse to write inefficient code.


No one really said anything about "give it more hardware"; what they did say was that with the hardware available these days you don't need to worry as much about memory as you used to.

Which actually is a reason to write "inefficient code", as you call it. Yes, you can spend time trying to write everything as efficiently as you can, but in most cases that is wasted development time, because the user usually won't be able to tell the difference between the two. And on top of that, it makes your code more complex and harder to follow in most cases.

So if the user can't tell the difference between the two, and you can either write really efficient code that takes a month to complete or somewhat inefficient code that takes a week, which is the better choice? One takes four times as long and makes the code more complex with no added benefit to the end user, which just seems like a waste of time to me.

Of course there will be performance-critical parts of programs where efficiency matters a great deal, but that is a different story.

Basically, what I am trying to say is that when people say we don't need to worry too much about writing efficient code nowadays because of the hardware available, it is actually quite true. In general it usually isn't worth the added development time or the added complexity. Again, performance-critical parts are different.
closed account (S6k9GNh0)
It's another instance of quantity over quality. I don't think that's morally correct. Facebook, despite how much I hate it as a social service, does the more moral thing and actually goes out of its way to increase efficiency... and then takes it one step further and releases its optimization methods to the public. It seems to work out quite well for them.

Writing inefficient code in a hurry is clearly not something you'd usually want to do if you can help it. I'm sure some Minecraft fans wish the client/server didn't suck so much.
It's another instance of quantity over quality. I don't think that's morally correct.


I think it's getting blown way out of proportion here. The amount of memory spent on a small array of pointers is completely negligible.

It's one thing to be conservative with resources when programming. It's another to micro-optimize. There comes a point when optimizing such a minor detail simply is not worth it.


My beef with 2D vectors is less about memory issues and more about clunkiness. Vectors of vectors are more awkward to use than an encapsulated object which abstracts the concept of a grid.
@Disch: you should write a proposal for a new container adaptor in the standard library for encapsulating a container of containers.

Even I don't know if I am serious or not...
closed account (Dy7SLyTq)
@LB: do you mean a container specialized to hold other containers?
I thought I was clear given the context of the previous post: I mean a container to hold a container of containers (e.g. std::ranked_list<3, std::vector<std::list<std::deque<int>>>> or something)
closed account (Dy7SLyTq)
oh my bad
My proposal:

#include <cstddef>
#include <vector>

using std::size_t; // the class below uses unqualified size_t

template <typename T>
class vector2
{
public:
    vector2(size_t w = 0, size_t h = 0)
        : data(w*h)
        , wd(w)
        , ht(h)
    {}

    T& operator () (size_t x, size_t y)  { return data[ y*wd + x ]; }
    const T& operator () (size_t x, size_t y) const { return data[ y*wd + x ]; }

    size_t width() const { return wd; }
    size_t height() const { return ht; }

    void resize(size_t w, size_t h)
    {
        data.resize(w*h);
        wd = w;
        ht = h;
    }
private:
    std::vector<T> data;
    size_t wd, ht;
};
I ran `time` to check execution times. I found that 1D arrays and vectors have the same execution times, and 2D arrays and vectors have the same execution times. The real execution time was only larger because I was printing out the arrays/vectors.


1D Arrays/Vectors real 0m 0.006s   sys 0m 0.000s
2D Arrays/Vectors real 0m 0.022s   sys 0m 0.004s
Lumpkin wrote:
I hate it how when someone writes inefficient code they're like "it doesn't matter, give it more hardware." Seriously, better machines are not an excuse to write inefficient code.


@Lumpkin
The attitude isn't "give it more hardware"; hardware has improved and will keep steadily improving. If it didn't, I doubt this site would exist, as everyone would still be pounding out assembler code and Atari-like games and using Unix-only OSes with no GUI. You stress efficiency, memory, etc., but you are stressing the lesser of the C++ evils, as there are far more things that can ramp up memory use and drop efficiency than 1D vs. 2D vs. nth-D arrays.
closed account (N36fSL3A)
Yes, I know. I was exaggerating. It's just something that pokes at me every time I see it; I have a real problem.
@BHX in regards to your 2D vector on the previous page, if you start adding more elements (dynamically) to the inner vectors then they will need to move somewhere else in memory.
@Lumpkin
You are going to have to learn to be fine with inefficient code. It is fine that you want your code to be efficient, but in truth companies aren't going to care about the code being efficient. Don't get me wrong, there are companies out there that do care about getting it done right the first time, but more companies care about getting from concept to finished product by the deadline to make a profit. I recall Discreet releasing a version of 3D Studio Max for Vista and then having to patch it because their code crashed after a mandatory Vista update. Bethesda Softworks is notorious for that: Fallout 3 and 4, Elder Scrolls 4 and 5, and I believe Wet all were released and then had to be fixed afterward due to inefficient code. Microsoft is another.

You remind me of a guy from Allegro.cc who regularly goes on about the people he works with who don't do efficient code, don't comment, or do just braindead things in code. Even talks about how he has tried for years, unsuccessfully, to get them to change from Mercurial to Git.

I do have a point, the point is, to make it in this industry you have to be able to not be bothered by that sort of thing.
closed account (S6k9GNh0)
In my opinion, it's okay to be bothered by it, as long as you are able to put in the extra work to do what you want yourself; i.e. don't complain about someone else doing something inefficiently that you could do better yourself. I complain about Minecraft because its developer goes out of his way to obfuscate the code so it can't easily be decompiled, so I have little say in what it does to optimize.
@computerquip
I agree, you can be bothered by it, but don't waste your time trying to get it changed. A company can let you go if they think you are holding up production. It is better just to be annoyed and get the job done than to hold up production debating the efficiency of the code.
In my opinion...

With today's hardware, unless you are programming something that needs maximum efficiency, like a graphics card driver or some mathematical algorithm that needs to run as fast as possible, code can be inefficient. Most importantly, it needs to be smooth and reasonably fast for the user; but if spending a few megabytes, or even tens of megabytes, of RAM makes your program easier to write and easier to read, it is definitely worth making it a little more memory hungry.
Topic archived. No new replies allowed.