Let's say the .exe file of a program is 480 KB. When you run the program, does it use about 480 KB of memory?
Functions use memory, variables use memory, objects use memory...
For example, I wrote two .cpp files and built an exe from each. One of the exe files is very short, just 5,120 bytes. The other is longer, 497,644 bytes.
When I run these exe files and look at the Processes tab of Task Manager, the shorter exe seems to use 1,044 K (is this in KB?) and the longer one 1,056 K. Why is the difference so small? Also, why does the shorter exe seem to use 1,044 KB at all? Its .cpp file has only one function (main) with just one variable. The longer exe has about 6 functions and more variables, including arrays.
Most operating systems keep the executable file open while the program is running, and in some cases that means that the executable file is loaded into memory, but in most cases the size of the executable file is vastly different from the runtime memory your program uses.
Most operating systems keep the executable file open while the program is running, and in some cases that means that the executable file is loaded into memory
Sometimes the executable file is loaded into memory? How is the CPU supposed to execute it if it's not in memory? The executable file is always in memory, unless it's on a memory page that's been swapped onto the hard disk.
Other than that, LB is right. The memory usage of a program is the sum of the size of its executable file, stack and heap. The stack is usually the same size for all programs. The executable file varies quite a bit, and the heap varies a lot. The heap refers to memory that you allocate and deallocate with new and delete (or malloc and free). Your code may not be using it directly, but the functions you call probably are.
The executable file is always in memory, unless it's on a memory page that's been swapped onto the hard disk.
To be swapped out, the page has to be in memory first. And by default the OS does *not* load the whole executable into memory. Actually, it tries hard to avoid doing so as much as possible, e.g. by deferring dynamic linking. It loads code only when it actually needs to execute it. 80% of the time, programs execute only 20% of their code, so it is very likely that only that 20% is ever loaded into memory and the rest is *never* fetched from the HDD.
I guess this is a perfect definition of "sometimes" / "some cases" as L B said.
If the OS was reading more code from the disk every time a new (i.e. one which wasn't already loaded) function was called, wouldn't that be extremely slow? Are you sure it doesn't just load the whole file into memory? To clarify, I know it doesn't load the file "flatly" into memory, i.e., it reads the headers and loads the sections separately, but you seem to be describing it loading some code and then only loading more when it needs to.
The way I understand it is as rapidcoder described: the OS loads only the pieces of code it needs and pages others in and out as required. This works because of locality of reference, i.e. a program tends to execute within a given area of code for a good period of time.
The executable's format is usually parsed and validated by the kernel first, after it reads the headers from the storage drive, and if the kernel accepts the file as executable, the entire file is usually not loaded into memory at once. So, yes, chrisname was wrong.