Hi,
Note: The question is essentially what I've written at the bottom of this text; everything above "Long story short" is just a bit of background information.
I am currently writing a program where I want to be able to switch the base data type (float / double / long double / custom) used for calculations with a simple preprocessor statement like
#define PRECISION 2
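To make that switch concrete, the intent is something along these lines (a minimal sketch; the names PRECISION and real_t are just illustrative, not from my actual code):

```cpp
// Select the base floating-point type with a single macro.
#define PRECISION 2

#if PRECISION == 1
typedef float real_t;        // single precision
#elif PRECISION == 2
typedef double real_t;       // double precision
#else
typedef long double real_t;  // extended precision
#endif
```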
The problem is that I need various constants throughout the code, like 0, 0.5, 1.0, 5.0e7, and so on. At first I did something like the following:

#if PRECISION == 1 // float
#define m_half 0.5f
#define m_one 1.0f
[...]
#endif
#if PRECISION == 2 // double
#define m_half 0.5
#define m_one 1.0
[...]
#endif
 
This list of course gets quite long very quickly, and it surely isn't a nice solution to the problem.
So I decided to try out a new C++11 feature and define a custom suffix, _V, which would do the work for me and convert all the constants at compile time:

template <char... Digits> constexpr
float operator "" _V() {
    return ( intPart<Digits...>() + fracPart<Digits...>() )
         * Power<10, expPart<Digits...>()>::value;
}
 
So if I write
float value = 123.456e7_V
, the intPart function computes 123.f, fracPart computes 0.456f, and expPart computes 7.
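For reference, the general technique behind those helpers looks roughly like this (my own hedged sketch, with a hypothetical helper name foldDigits and an integer-only suffix _I; it is not my actual implementation): a raw literal operator receives the token's characters as a template parameter pack and folds them into a value at compile time.

```cpp
// Base case: no digits left, return the accumulator.
constexpr unsigned long long foldDigits(unsigned long long acc) {
    return acc;
}

// Recursive case: shift the accumulator one decimal place
// and add the next digit.
template <typename... Rest>
constexpr unsigned long long foldDigits(unsigned long long acc,
                                        char digit, Rest... rest) {
    return foldDigits(acc * 10 + (digit - '0'), rest...);
}

// Raw literal operator: the compiler passes the characters
// '1', '2', '3' of the token 123 as template arguments.
template <char... Digits>
constexpr unsigned long long operator "" _I() {
    return foldDigits(0, Digits...);
}

static_assert(123_I == 123, "integer token parsed at compile time");
```

The floating-point case works the same way, except the fractional and exponent parts must also be split off and combined, which is where my rounding errors creep in.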
In theory this should work nicely, but in practice we're working with finite precision, so I get rounding errors. That would not be a big deal if the result were the same as typing
float value = 123.456e7f
, but my algorithm introduces additional rounding errors, so the result is less precise than the built-in conversion.
Long story short: What's the best way (or at least a better approach than mine) to convert a decimal scientific number to a floating-point value at compile time using C++11's user-defined literals? How does the compiler convert
123.456e7f
to the internal representation of a floating-point value?
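For comparison (my own sketch, not part of my current code): C++11 also allows a "cooked" literal operator that takes a long double, where the compiler performs the decimal-to-binary conversion itself before the operator runs, so no hand-written digit parsing is involved:

```cpp
// "Cooked" literal operator: the compiler parses 123.456e7 into a
// long double first; the operator only narrows it to the target type.
constexpr float operator "" _W(long double v) {
    return static_cast<float>(v);
}

// 123.456e7_W therefore goes through the compiler's own
// decimal-to-binary conversion path, unlike the raw form above.
```

I'd like to understand whether a raw (template) literal can match that precision, or whether the cooked form is the only way to get the compiler's own conversion.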