Convert a vector of unsigned characters to an IEEE 754 float

Hello.

I have binary data represented as a vector of unsigned characters, and I need to convert it to an IEEE 754 float. Using this source (http://www.technical-recipes.com/2012/converting-between-binary-and-decimal-representations-of-ieee-754-floating-point-numbers-in-c/) as a reference, I tried to implement the following:

typedef unsigned char BYTE;

float bytesToFloatIEEE(std::vector<BYTE> b)
{
    float finalResult;

    // Convert the array of characters to a string
    std::string bString(b.begin(), b.end());

    // DEBUG!!!
    std::cout << "BITSET STRING: " << bString << std::endl;

    std::bitset<32> set(bString);

    int HexNumber = set.to_ulong();

    bool negative = !!(HexNumber & 0x80000000);
    int  exponent =   (HexNumber & 0x7f800000) >> 23;
    int  sign     = negative ? -1 : 1;

    // Subtract 127 from the exponent
    exponent -= 127;

    // Convert the mantissa into decimal using the last 23 bits
    int power = -1;
    float total = 0.0;
    for (int i = 0; i < 23; i++)
    {
        int c = b[i + 9] - '0';
        total += (float)c * (float)pow(2.0, power);
        power--;
    }
    total += 1.0;

    finalResult = sign * (float)pow(2.0, exponent) * total;

    return finalResult;
}


However, this code gives the following error:
std::invalid_argument: bitset string ctor has invalid argument

1) Is there a simpler way to convert a vector of unsigned characters to an IEEE 754 float?
2) Why does the bitset constructor complain about an invalid argument?

Thanks a lot!
The bitset constructor expects a string containing a series of '0' and '1' characters.
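For example, a minimal sketch of what the string constructor accepts (the byte value 0xC3 here is just the first byte from your example):

#include <bitset>
#include <iostream>
#include <string>

int main()
{
    // Passing raw bytes (e.g. the four bytes of the float) throws
    // std::invalid_argument, because they are not the characters '0' and '1'.

    // Accepted: the string spells out the bits as text.
    std::bitset<32> ok("11000011011100010000000000000000");
    std::cout << ok.to_ulong() << std::endl;          // prints the value as an unsigned integer

    // The bitset value constructor turns a raw byte into that textual form.
    std::bitset<8> byteBits(0xC3);
    std::cout << byteBits.to_string() << std::endl;   // prints 11000011
}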
OK. I see.

Changed the code a bit:

    std::string bString;

    // Build a textual bit string, one byte (8 bits) at a time
    for (std::size_t i = 0; i < b.size(); i++)
    {
        std::bitset<8> bitsetTemp = charToBits(b[i]);
        bString += bitsetTemp.to_string();
    }

    std::bitset<32> floatBitset(bString);

    std::cout << "BINARY STRING: " << bString << std::endl;


std::bitset<8> IGAByteConversion::charToBits(unsigned char byte)
{
    return std::bitset<8>(byte);
}


Now, the binary string is as follows:

11000011011100010000000000000000

which should produce a value of -241.

However, my output is 6016.

Obviously there is something wrong with the actual conversion code.

Help would be much appreciated!
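For what it's worth, decoding 11000011011100010000000000000000 by hand does give -241:

sign     = 1                   -> negative
exponent = 10000110 = 134      -> 134 - 127 = 7
mantissa = 1.1110001 (binary)  -> 1 + 1/2 + 1/4 + 1/8 + 1/128 = 1.8828125
value    = -1.8828125 * 2^7    = -241

One likely culprit: the mantissa loop in the first snippet still reads b[i + 9], i.e. the original four-byte vector rather than the 32-character bit string, so it runs past the end of the vector.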
OK. Solved using:

finalResult = reinterpret_cast<float&>(HexNumber);
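
For reference, a more portable way to reinterpret the bits is std::memcpy, which avoids the strict-aliasing issue of casting an integer lvalue to float&. A minimal sketch (assuming the vector holds the four bytes most-significant-byte first, as in the example above, and that float is a 32-bit IEEE 754 type on the target):

#include <cstdint>
#include <cstring>
#include <iostream>
#include <vector>

typedef unsigned char BYTE;

float bytesToFloatIEEE(const std::vector<BYTE>& b)
{
    // Assemble the 32-bit pattern, most significant byte first
    std::uint32_t bits = (std::uint32_t(b[0]) << 24) |
                         (std::uint32_t(b[1]) << 16) |
                         (std::uint32_t(b[2]) << 8)  |
                          std::uint32_t(b[3]);

    // Copy the bit pattern into a float without violating strict aliasing
    float result;
    std::memcpy(&result, &bits, sizeof result);
    return result;
}

int main()
{
    std::vector<BYTE> b = { 0xC3, 0x71, 0x00, 0x00 };   // 11000011 01110001 00000000 00000000
    std::cout << bytesToFloatIEEE(b) << std::endl;      // prints -241
}

Compilers typically turn the memcpy into a single move instruction, so there is usually no runtime cost compared to the cast.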