Hello, I'm writing a numerical code and I need the results in a .dat file. My question is: is it faster to fprintf at each loop iteration, or to store the results in a matrix until the end of the calculations and then print the entire matrix to the file?
It doesn't seem that large: only about 250,000 elements. You didn't say how large the elements are, or exactly what you are printing with fprintf. It probably won't make much difference, but in general it's probably faster to write to the file all at once.
BTW, a ".dat" file is not really anything specific! Since you mention fprintf, it sounds like it's just a text file and you are therefore converting all your numbers to text. If it's a binary file, then it would make even more sense to store everything in memory and write it out in one big block.
If it's actually important, then you could just try it both ways.
fprintf is risky, though. Depending on how you specify the conversion to text, it may drop precision on floating-point types, so be aware of that.
Also there is a time penalty to convert to text, so fprintf is going to be slower; binary is the way to go for speed. fprintf has two extra slowdowns over binary: the conversion itself, and the extra bytes. An 8-byte double might take around 20 bytes as text (sign, exponent, decimal point, delimiter between numbers, possibly end-of-line characters on each row, which are 2 bytes each on many systems, and so on). The file is bigger on disk as well. You may not notice these slowdowns, but they are there.
There's always the "a, A" format, if you want accurate floating-point representations in a text file.
a, A (C99; not in SUSv2, but added in SUSv3) For a conversion, the double argument is converted to hexadecimal notation (using the letters abcdef) in the style [-]0xh.hhhhp±d; for A conversion the prefix 0X, the letters ABCDEF, and the exponent separator P are used. There is one hexadecimal digit before the decimal point, and the number of digits after it is equal to the precision. The default precision suffices for an exact representation of the value if an exact representation in base 2 exists and otherwise is sufficiently large to distinguish values of type double. The digit before the decimal point is unspecified for nonnormalized numbers, and nonzero but otherwise unspecified for normalized numbers.
You could also cast a pointer to the double's bytes and write them out as raw hex text (rather than formatting the value in pieces). I don't know what these kinds of ideas get you in practice (it's at best barely readable to humans, and now it's being converted and bloated into text format again). In practice I have found scientific notation (%e or %E) to be a reasonable text compromise if you need it readable, and binary if you don't need it readable.