Adding a new empty .cpp file to a code base

The compiled executable gets a roughly 40 KB boost in size.

I've enabled all the newbie debugging flags, and gcc (Ubuntu 5.5.0) has me on a leash.
Just adding the empty module where doinputloopm will eventually be implemented means that my executable goes from 90 KB to 130 KB.
I suspect the debugging code.

Makefile follows:
CXX = g++-7
# -MMD/-MF write a .d dependency file alongside each object
CXXFLAGS += -std=c++17 -g -MT $@ -MP -MMD -MF $(@:.o=.d) \
            -pedantic-errors -Wall -Wextra

# note: the $(LINK.cc) recipe line must be indented with a tab
aa: $(patsubst %.cpp,%.o,$(wildcard *.cpp))
	$(LINK.cc) $(OUTPUT_OPTION) $^

-include $(patsubst %.cpp,%.d,$(wildcard *.cpp))
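(For anyone puzzled by the dependency flags: -MMD/-MF make g++ write a small makefile fragment next to each object, and the -include line pulls those fragments back in so that editing a header triggers a rebuild. A sketch of what one such fragment might contain -- the file names here are hypothetical:

main.o: main.cpp coupactuel.h
coupactuel.h:

The bare "coupactuel.h:" rule comes from -MP; it stops make from erroring out if that header is ever deleted or renamed.)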

I suspect that when the program is complete I will be in the multi-megabyte range :)

// sortie codes -- presumably defined in one of the project headers;
// sketched here only so the snippet stands on its own
enum { success1d, fin3d, analysed };

int main()
{
    int sortie = success1d; // note 1

    do // Note 2, 21-05-2018
    {
        // sortie = coupactuelGV.doinputloopm(); // under construction
        // NB: with the call commented out, sortie never changes,
        // so as posted this loop would never exit
    } while( ( sortie != fin3d ) && ( sortie != analysed ) ); // 27-06-2018

    return sortie;
}

The question is "is it normal for a compiled program to jump from being a 90 KB program all the way up to being a 130 KB program just from adding an empty module?"
I hope this makes sense to you.
Is there a question?
The question is "is it normal for a compiled program to jump from being a 90 KB program all the way up to being a 130 KB program just from adding an empty module?"
I hope this makes sense to you.
Yes, debugging can increase the size in odd ways that you wouldn't expect.
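A quick way to see how much of that growth is debug info is to strip a copy of the binary and compare (a sketch; "aa" is the target name from the makefile above):

$ ls -l aa                  # size with -g debug info included
$ strip -o aa-stripped aa   # write a copy with debug/symbol info removed
$ ls -l aa-stripped
$ size aa                   # text/data/bss -- the part that actually loads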
Meaning no disrespect, but there is a LOT of stuff that is "unexpected" (by this noob). For instance, compiling
a .c file of this very small code produces an 8.6 KB executable, while compiling the same code as C++17 with all the bells and whistles produces a whopper of 94 KB :)
Michel Chassey
https://softwareengineering.stackexchange.com/a/246181
The difference in file size is not due to the language itself, but to the libraries used. <iostream> creates global objects, and a bunch of other stuff gets inlined. Also, a program compiled with debug flags will be a lot larger than one compiled with release flags.

See also: https://stackoverflow.com/a/15314861/8690169
Use -Os to make gcc/g++ optimize for size.
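For example, a hypothetical single-file build done both ways (the file names are invented; exact sizes vary by system):

$ g++ -std=c++17 -g -O0 main.cpp -o aa-debug
$ g++ -std=c++17 -Os -s main.cpp -o aa-small   # -s strips symbols at link time
$ ls -l aa-debug aa-small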


The only real reason to worry about executable size these days is for embedded environments, imho.
At some point size does create performance hits, but it takes a lot more than a few KB.

Size is one of those black-art-of-optimization things. A bigger exe may run faster thanks to loop unrolling and inlining. It may run slower due to page faults and bus contention. You can't really predict it, and you generally end up building a few versions to see which is best (if you care). If you don't care about the performance, favoring small is good: it speeds up patching if nothing else. No one likes having to re-download a 5 GB DLL for an update.

C++ does bloat some vs. C; it's the price paid for the tools in the language. C bloats some over good assembly; it's the price paid for the tools in THAT language. Other languages bloat more than C++, the price of portability or whatever. /shrug. Do what you can when it matters, and don't worry about it the rest of the time. A cute example from days past: there used to be a full-GUI, high-performance OS with a web browser, text editor, and many other tools that you could boot off a disk smaller than 1.5 MB. This is a nod to the embedded reference above... if you NEED to, you can do a lot with very little.

I guess the first answer is: turn off the debugging mess and see what that nets you. With debugging off, adding an empty file should not change the size.
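A sketch of one way to wire that into the makefile above (DEBUG here is a made-up convention, not something make defines for you; -g would move out of the base CXXFLAGS):

ifdef DEBUG
CXXFLAGS += -g -O0
else
CXXFLAGS += -Os -s
endif

Then "make DEBUG=1" builds for debugging and a plain "make" builds small.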
Page faults? Just how large would an executable have to be to cause swapping?
Generally the issue with big executables is supposed to be cache efficiency, but honestly I'm not really convinced.
Pretty big these days, coupled with a lot of jumping around in the subroutines. I would guess 10 MB+ might start to see some issues if it was running alongside enough other stuff. By itself, I don't think it would be easy to make one have this issue anymore. Probably more of a theoretical or historical concern than a real one outside of really weird circumstances, for sure.




The only way I see the OS paging out executable pages is if a different process is using a huge amount of memory and pushing unused code out of RAM. But if that's the case then that means your code is yielding most of the time, in which case, who cares if it runs slow? I don't see any way an OS with a proper virtual memory implementation could swap out the executable memory of a process in a hot path.
I agree. I guess I am used to all the programs that are fighting each other being mine, and having to make them play nice with each other. And when this rears its ugly head, what I get are big context-switching fights. Not the same thing. /shrug I am willing to agree it's a non-issue!



I think I've seen somewhere ... found: https://www.centos.org/forums/viewtopic.php?p=271405#p271405

Same code. Same language. Same options. Same compiler family. Two OS versions (6 and 7). Three GCC versions (4.4, 4.8, 6.3). Four different (stripped) binary sizes:
# ls -gG foo*
-rwxr-xr-x. 1 4232 Oct  9 13:04 foo-6-44
-rwxr-xr-x. 1 6272 Oct  9 13:04 foo-6-63
-rwxr-xr-x. 1 6272 Oct  9 13:09 foo-7-44
-rwxr-xr-x. 1 6240 Oct  9 12:55 foo-7-48
-rwxr-xr-x. 1 6344 Oct  9 12:55 foo-7-63


In other words, there are multiple contributors that affect what ends up in the binary.