I do not think that you are calculating the entropy
correctly... Look here to see how to do it
The stackoverflow.com snippet calculates the entropy in bytes, not bits, but it also explains the difference, so you can change that if you want.
That Hamming code section is nonsense. You need to throw it away. (Whoever gave it to you was not your friend.)
The responses over at DaniWeb were also correct: it is difficult to analyze code that uses some unknown, completely non-standard object system.
In both cases, however, the returned value is a single number.
For the file's entropy, your prototype should be something like:
double entropy( const std::string& filename );
Remember, a file is an array of char, not float. The return value is a floating point value in the range [0.0, 1.0].
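A minimal sketch of that prototype might look like the following -- it counts byte frequencies, computes the Shannon entropy in bits per byte, and divides by 8 to normalize into [0.0, 1.0]. (The names beyond `entropy` itself are just my choices here.)

```cpp
#include <cmath>
#include <cstddef>
#include <fstream>
#include <string>

// Shannon entropy of a file, normalized to [0.0, 1.0].
double entropy( const std::string& filename )
{
    std::ifstream f( filename, std::ios::binary );
    std::size_t counts[ 256 ] = {0};   // frequency of each byte value
    std::size_t total = 0;

    char c;
    while (f.get( c ))
    {
        ++counts[ static_cast<unsigned char>( c ) ];
        ++total;
    }
    if (!total) return 0.0;            // empty file: define entropy as 0

    double H = 0.0;                    // entropy in bits per byte
    for (std::size_t n : counts)
        if (n)
        {
            double p = static_cast<double>( n ) / total;
            H -= p * std::log2( p );
        }

    return H / 8.0;                    // scale bits-per-byte into [0, 1]
}
```

A file of identical bytes comes out as 0.0; a file using all 256 byte values equally often comes out as 1.0.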
For the Hamming encoding, I think you want to create a new file that is a Hamming-encoded copy of the original? In that case, for each input byte (char) in the original file, Hamming-encode it and then pack the results into the output file. You will have to choose how you want to do it, but I recommend Hamming(7,4) or Hamming(8,4). Keep in mind that this will double the size of your data. Your prototypes should be:
unsigned char hamming84( unsigned char fourbits );
void hamming84file( const std::string& inputfilename, const std::string& outputfilename );
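Here is one way those two functions could be fleshed out, assuming Hamming(8,4) (the extended code: three parity bits plus one overall parity bit) and one common bit layout -- other layouts are equally valid, so treat this as a sketch:

```cpp
#include <fstream>
#include <string>

// Encode the low 4 bits of the argument as one Hamming(8,4) codeword.
unsigned char hamming84( unsigned char fourbits )
{
    unsigned d1 = (fourbits >> 3) & 1;
    unsigned d2 = (fourbits >> 2) & 1;
    unsigned d3 = (fourbits >> 1) & 1;
    unsigned d4 =  fourbits       & 1;

    unsigned p1 = d1 ^ d2 ^ d4;
    unsigned p2 = d1 ^ d3 ^ d4;
    unsigned p3 = d2 ^ d3 ^ d4;
    unsigned p0 = p1 ^ p2 ^ d1 ^ p3 ^ d2 ^ d3 ^ d4;  // overall parity bit

    // One common layout: p1 p2 d1 p3 d2 d3 d4 p0 (MSB to LSB)
    return static_cast<unsigned char>(
        (p1 << 7) | (p2 << 6) | (d1 << 5) | (p3 << 4) |
        (d2 << 3) | (d3 << 2) | (d4 << 1) |  p0 );
}

// Each input byte becomes two codeword bytes, so the output file
// is exactly twice the size of the input file.
void hamming84file( const std::string& inputfilename,
                    const std::string& outputfilename )
{
    std::ifstream in(  inputfilename,  std::ios::binary );
    std::ofstream out( outputfilename, std::ios::binary );

    char c;
    while (in.get( c ))
    {
        unsigned char b = static_cast<unsigned char>( c );
        out.put( static_cast<char>( hamming84( b >> 4  ) ) );  // high nibble
        out.put( static_cast<char>( hamming84( b & 0xF ) ) );  // low nibble
    }
}
```

Decoding is the reverse: check the parity bits on each codeword (correcting a single-bit error if needed), extract the two nibbles, and glue them back into a byte.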
If what you are looking for is just to Hamming encode the file's entropy, then I recommend that you first convert the entropy to a string (like "0.47923") and Hamming-encode that, instead of encoding the floating point value directly. This is because different processors use different formats and endiannesses to represent FP numbers -- but everyone everywhere can convert to and from the string representation of a FP number.
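The conversion step is just ordinary text formatting -- something like this sketch (the function name and the five-digit precision are my choices, not anything required):

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>

// Format an entropy value as plain text, the portable form to encode.
std::string entropy_to_string( double e )
{
    char buf[ 32 ];
    std::snprintf( buf, sizeof buf, "%.5f", e );
    return buf;
}

// On the receiving end, parse the text back into a double.
double string_to_entropy( const std::string& s )
{
    return std::strtod( s.c_str(), nullptr );
}
```

You then feed the characters of that string through your Hamming encoder one byte at a time, exactly as you would the bytes of a file.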
Whew. Hope this helps.