Calculating the function and checking its domain

Hello, my task is to calculate the following function:

cube_root(log5(x^2 - sqrt(x)))

for arguments x in [-5; 10], with increment 0.2. So in order for x to satisfy the function's domain, I guess we have to test it against two conditions:

// std::pow reports a domain error (NaN) for a negative base with a fractional
// exponent, so apply it to -x and flip the sign to get the odd root of a negative number
double myPow(double x, double y)
{
    if (x < 0.0)
        return -pow(-x, y);
    else
        return pow(x, y);
}
//...
for(double x = -5.0 ; x <= 10.0 ; x += 0.2)
{
    //...
    if(x >= 0.0 && (pow(x, 2.0) - sqrt(x)) > 0.0)
    {
        //myPow correctly calculates the cube root if x < 0 (see def. of std. pow)
        double sum = myPow(log(pow(x, 2.0) - sqrt(x))/log(5), 1/3.0);
        printf("%f ", sum);
    }
    //...
}


However, when x == 1.0, the expression in the second condition should evaluate to zero:

pow(1.0, 2.0) - sqrt(1.0) == 0.0

And it does when I write it by itself as above, but in my for loop it doesn't - it comes out as something like 0.00000000000000310862...

Why is this happening, and how can I fix it so that this case does not pass the if condition (and thus the logarithm's constraint is respected)?
This took me a bit to think of, but if I remember right it has to do with the double not really being equal to 0.2 but to something like 0.19999999999999999... instead.
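For illustration, here is a minimal sketch (mine, not from the original post) that prints the accumulated loop variable with full precision at the step where it should be exactly 1.0:

#include <cmath>
#include <cstdio>

int main()
{
    double x = -5.0;
    for (int step = 0; step < 30; ++step)   // 30 increments of 0.2 should land exactly on 1.0
        x += 0.2;

    std::printf("x             = %.17g\n", x);                      // slightly off from 1
    std::printf("x*x - sqrt(x) = %.17g\n", x * x - std::sqrt(x));   // small non-zero value
}

Because each "+= 0.2" adds a value that is only approximately 0.2 and rounds the result, the error accumulates and x never lands exactly on 1.0.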

everything below is copied from http://en.wikipedia.org/wiki/Machine_epsilon#Approximation_using_C.2B.2B

Approximation using C++

In languages such as C or C++, when you do something like while( 1.0 + eps > 1.0 ), the expression may be evaluated not with 64 bits (double) but with the processor's precision (80 bits or more, depending on the processor and compile options). The program below calculates it exactly with 32 bits (float) and 64 bits (double).

#include <iostream>
#include <stdint.h>
#include <iomanip>
 
template<typename float_t, typename int_t>
float_t machine_eps()
{
	union
	{
		float_t f;
		int_t   i;
	} one, one_plus, little, last_little;
 
	one.f    = 1.0;
	little.f = 1.0;
	last_little.f = little.f;
 
	while(true)
	{
		one_plus.f = one.f;
		one_plus.f += little.f;
 
		if( one.i != one_plus.i )
		{
			last_little.f = little.f;
			little.f /= 2.0;
		}
		else
		{
			return last_little.f;
		}
	}
}
 
int main()
{
	std::cout << "machine epsilon:\n";
	std::cout << "float: " << std::setprecision(18)<< machine_eps<float, uint32_t>() << std::endl;
	std::cout << "double: " << std::setprecision(18) << machine_eps<double, uint64_t>() << std::endl;
}


machine epsilon:
float: 1.1920928955078125e-07
double: 2.22044604925031308e-16

Although, instead of typing all that, you could just use the following code:

#include <iostream>
#include <limits>
 
int main()
{
	//using a built-in function to display the machine-epsilon given the data type
	std::cout << "The machine precision for double is : " << std::numeric_limits<double>::epsilon() << std::endl;
	std::cout << "The machine precision for long double is : " << std::numeric_limits<long double>::epsilon() << std::endl;
	return 0;
}

You can use std::setprecision(desiredPrecision) to display however many digits you want. Remember to #include <iomanip> before using std::setprecision!
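For example (a minimal snippet of my own, not part of the quoted page):

#include <iostream>
#include <iomanip>   // std::setprecision
#include <limits>

int main()
{
    // print epsilon with 18 significant digits instead of the default 6
    std::cout << std::setprecision(18)
              << std::numeric_limits<double>::epsilon() << std::endl;
}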
Non-integer data types are inherently prone to be *slightly* off due to round-off errors after arithmetic, especially when dealing with a function like sqrt(), where irrational results can only be stored so precisely.

One solution is to compare the equality against a threshold.
const double EXPECTED_ANSWER = 0.0;
double answer = pow(x, 2.0) - sqrt(x);
if (answer >  EXPECTED_ANSWER - 0.0001 &&
    answer < EXPECTED_ANSWER + 0.0001)
{ ... }
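The same idea can be wrapped in a small helper using std::fabs; this is just an equivalent sketch of mine (the name nearly_equal and the default tolerance are assumptions, not anything from the post above):

#include <cmath>   // std::fabs

// true when a and b differ by less than the chosen tolerance
bool nearly_equal(double a, double b, double tolerance = 0.0001)
{
    return std::fabs(a - b) < tolerance;
}

// usage, with the same x and EXPECTED_ANSWER as in the snippet above:
// if (nearly_equal(pow(x, 2.0) - sqrt(x), EXPECTED_ANSWER)) { ... }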
If I remember correctly, pow is very inefficient and prone to rounding errors. Much better to use x*x.
If I remember correctly, pow is very inefficient and prone to rounding errors.


So when should I use pow?
@ats15 If your pow(x,2) is less precise or less efficient than x*x, you should file a bug with the library vendor.

@jeremi02:

1) There's the function cbrt for cube root (technically only available in standard C++ since C++11, but on most systems it has been around for 15+ years in math.h as part of the C library); see the sketch after the loop below.

2) The value 0.2 is not exactly representable as a double. It lies between two doubles:
0.1999999999999999833466546306226518936455249786376953125, i.e. 7205759403792793 * 2^-55,
and
0.200000000000000011102230246251565404236316680908203125, i.e. 7205759403792794 * 2^-55.

It's a little closer to the second one, so when you write "0.2" in code, you're actually writing that second value, and that's what you're adding on each iteration of the loop.

Take that into account:

for(int step = 0; step <= 75; ++step)
{
    double x = -5.0 + step*0.2;
    // do your stuff
}
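Putting the two points together, the whole loop might look something like the sketch below (my own illustration, not tested against the original assignment):

#include <cmath>    // std::sqrt, std::log, std::cbrt
#include <cstdio>

int main()
{
    for (int step = 0; step <= 75; ++step)
    {
        double x = -5.0 + step * 0.2;   // recompute x instead of accumulating 0.2
        if (x < 0.0)
            continue;                   // sqrt(x) needs x >= 0
        double arg = x * x - std::sqrt(x);
        if (arg > 0.0)                  // the logarithm needs a strictly positive argument
        {
            // std::cbrt accepts negative arguments, so the sign trick in myPow is not needed
            double sum = std::cbrt(std::log(arg) / std::log(5.0));   // log base 5 via natural log
            std::printf("%f ", sum);
        }
    }
}

With x recomputed this way, step 30 gives exactly 1.0 on a typical IEEE-754 implementation, so x*x - sqrt(x) is exactly 0 and the domain check skips that point as intended.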

1)

@ats15 If your pow(x,2) is less precise or less efficient than x*x, you should file a bug with the library vendor.


OK, but then how do you respond to this post at http://stackoverflow.com/questions/16881749/different-results-on-using-sqrt-and-pow-functions/16881975#16881975 ?

pow is very difficult to implement correctly --- that is, so that things like pow(x*x, 0.5) returns x for those x where that's the right answer. Very few implementations (CRlibm being a notable exception) implement pow correctly.


2)

Take that into account:


I get it now: I have to compute the multiple of 0.2 in a single calculation rather than add it iteratively, in order to avoid cumulative rounding errors. And according to this page http://floating-point-gui.de/formats/binary/

Specifically, binary can only represent those numbers as a finite fraction where the denominator is a power of 2


So I should always check whether I can represent a specific number as a fraction whose denominator is a power of 2 - if I can, the number will be represented exactly in my variable, otherwise it won't?

3)

I have also a question regarding fragment on this page http://floating-point-gui.de/basic/

In other cases like 0.1 + 0.3, the result actually isn’t really 0.4, but close enough that 0.4 is the shortest number that is closer to the result than to any other floating-point number. Many languages then display that number instead of converting the actual result back to the closest decimal fraction.


The underlined sentence is a little bit vague; can you explain it to me?
pow is very difficult to implement correctly

It is, but it's been around for over a quarter century; the difficulties have been addressed.

I'll demonstrate:
double f1(double x)
{
        return x*x;
}
// clang++ -O3 -S
_Z2f1d:                                 # @_Z2f1d
        mulsd   %xmm0, %xmm0
        ret
//  g++ -O3 -S
_Z2f1d:
.LFB89:
        mulsd   %xmm0, %xmm0
        ret
double f2(double x)
{
        return std::pow(x, 2);
}
// clang++ -O3 -S
_Z2f2d:                                 # @_Z2f2d
        mulsd   %xmm0, %xmm0
        ret
//  g++ -O3 -S
_Z2f2d:
.LFB90:
        mulsd   %xmm0, %xmm0
        ret
Okay, and could you answer my two other questions? :)
So I should always test if I can represent a specific number as a fraction with denominator being a power of 2 - if I can, the number will be precisely represented in my variable, otherwise it won't?


You just take that into account when choosing your algorithms, exactly as you would take into account that 1/3 or 3/7 is never precisely represented as a decimal fraction. Would you program a calculator to add "0.3333" on every step and expect to reach exact zero having started at -5? That "x += 0.2" looked just as crazy to me.

(there are many more nuances with using computers for math, it's a whole field of study, http://en.wikipedia.org/wiki/Numerical_analysis ).
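A quick way to see the difference (an illustrative snippet of mine) is to print a power-of-two fraction and a non-power-of-two fraction with more digits than usual:

#include <cstdio>

int main()
{
    // 0.25 = 1/4 has a power-of-two denominator and is stored exactly;
    // 0.2 = 1/5 does not, so the stored value is only the nearest double.
    std::printf("%.20f\n", 0.25);   // 0.25000000000000000000
    std::printf("%.20f\n", 0.2);    // something like 0.20000000000000001110
}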

The underlined sentence is a little bit vague, can you explain this to me?

They are probably referring to how Python does output: instead of simply converting binary to decimal and rounding to the output precision (as the C and C++ output facilities do), it calculates the decimal fraction with the smallest number of digits that, when converted back to a floating-point value, gives exactly the binary value you have.
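If you want that "shortest round-trip" behaviour in C++, the C++17 std::to_chars overloads for floating point produce it; here is a small sketch (it assumes a standard library that ships the floating-point part of <charconv>):

#include <charconv>   // std::to_chars (C++17)
#include <cstdio>

int main()
{
    double d = 0.1 + 0.3;   // the stored sum is the double nearest to 0.4, not exactly 0.4
    char buf[64];

    // with no precision argument, to_chars emits the shortest decimal string
    // that converts back to exactly the same double
    auto res = std::to_chars(buf, buf + sizeof buf, d);
    *res.ptr = '\0';

    std::printf("shortest round-trip: %s\n", buf);    // typically prints 0.4
    std::printf("17 significant digits: %.17g\n", d); // typically prints 0.40000000000000002
}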