This is the code and it works, but I was wondering if someone can explain how it actually works. I understand that d is used to output the number to the 4th decimal place (except it doesn't work when the 4th digit is 0, and I'm wondering how that 0 can be displayed), but the main thing is I can't understand how the while loop actually works when answer = 1. I'm sorry, I know it's a dumb question, but can anyone break it down for me?
#include <iostream>
using namespace std;

int main() {
    double d = 0.0001;
    cout << "enter number greater than 1 to find square root" << endl;
    double num;
    cin >> num;
    double answer = 1;
    while (answer*answer <= num)
        answer += d;
    if (answer*answer > num)
        answer -= d;
    cout << "square root = " << answer;
    return 0;
}
Your 'root seeking' program is a very good example of something that makes me sad. It works perfectly and, for small numbers, is fast enough that there is no reason to change the code other than a gain in your knowledge (which no businessman will pay for). It makes me sad because it is one source of bad software.
The while loop

while (answer*answer <= num)
    answer += d;

with only one statement may be coded without braces. This is the essential part of your program. I assume you know the idea behind it:
answer = sqrt(num) // square both sides of the equation
answer*answer = num // is an equivalent formula
As long as that equation does not hold, the loop will repeatedly increment answer by the small amount d.
First, answer is set to 1, which is certainly not the result, since 1*1 = 1 but you were asked to enter a number > 1.
In the next iteration answer is 1.0001, so answer*answer = 1.00020001, which is a little closer to num than before.
That said, you may step through it on your own until the while condition is no longer met.
BTW, you may avoid the confusing if after the loop by changing its while condition.
Wow, thank you, I understand now! My professor actually wrote the code as a review for our exam. Honestly, most of the time it's hard for him to explain the code himself, and that's why it's so hard to actually comprehend what the code is doing. I'm taking intro to C++ right now so it's all very new. Can you advise me on how I can learn the right way to code in C++?
Just practice. Re-read your material from this class and see if you understand the early parts better now. Hopefully take the next class from a different teacher if possible.
I don't see anything 'wrong' with the code you posted here; it's just not the best algorithm (that has nothing to do with C++). Again, for practice or fun it may be interesting to see if you can do this in a less brute-force way.
#include <iostream>
#include <cmath>
using namespace std;

int main()
{
    double x;
    cin >> x;
    double guess = x * pow(0.3, log10(x)); // a well known first cut is based off the # of digits in the input... yes, it won't work for zero...
    for (int i = 0; i < 5; i++)
        guess = (guess + (x/guess)) * 0.5;
    cout << guess;
}
#include <iostream>
using namespace std;

int main()
{
    double x;
    int e;
    cin >> x;
    if (x > 0)
    {
        // double guess = x * pow(0.3, log10(x)); // a well known first cut is based off the # of digits in the input... yes, it won't work for zero...
        for (e = 1; x > 100; e *= 10)
            x = x / 100;
        double guess = x / 3;
        for (int i = 0; i < 5; i++)
            guess = (guess + (x / guess)) * 0.5;
        cout << guess * e;
    }
    else
        cout << "No result with this program for your input: " << x;
}
AFAIK the Babylonians iterated this formula at most two times, using "a good guess" (it is not yet known how they found it -- which, BTW, is still a task for optimisation today).
In addition, I doubt they had any idea of pow(0.3, log10(x)). I suggest reducing the input for the "Babylonian" loop (in fact it's Newton's method) to the range 0 < x <= 100 by dividing by 100 as often as needed and correcting the output accordingly. (Should also be done for input < 0.01)
Edit: Replace statement (Should also be done for input < 0.01) with (Should also be done for input < 1)
A human on paper can guess the root pretty well. The computer lacks that ability.
Even a child can get a ballpark by using the perfect squares they already know. An adult can run even the first 2 terms of Taylor against a known perfect square and get a ballpark for bigger numbers. I started to do it that way, but the number-of-digits trick is less trouble. The ancients would not have known Taylor, either :P
#include <iostream>
#include <cmath>
using namespace std;

double mySqrt( double x )
{
    double power = 1;
    for ( ; x > 100 ; x /= 100, power *= 10 );   // Conditioning
    for ( ; x < 0.01; x *= 100, power /= 10 );

    double root = 1, old = 2, eps = 1.0e-20;     // Set start and tolerance
    while ( abs( root - old ) > eps )
    {
        old = root;
        root = 0.5 * ( root + x / root );        // Newton-Raphson
    }
    return root * power;
}

int main()
{
    while ( true )
    {
        double x;
        cout << "Enter x ( <= 0 to stop ): "; cin >> x; if ( x <= 0.0 ) return 0;
        cout << "Square root is " << mySqrt( x ) << '\n';
    }
}
Enter x ( <= 0 to stop ): 1e100
Square root is 1e+50
Enter x ( <= 0 to stop ): 1e-100
Square root is 1e-50
Enter x ( <= 0 to stop ): 169
Square root is 13
Enter x ( <= 0 to stop ): 0.25
Square root is 0.5
Enter x ( <= 0 to stop ): 1
Square root is 1
Enter x ( <= 0 to stop ): 0
Faster? Sorry, no. I have dug through this and other sources for a long time, but if it's about 500...20'000 digits and more, none is faster than Newton's method -- at least according to my experience.
I am not a student attending a training, I am a hobbyist, so all is "allowed".
I think you can get it in a couple of log ops.
I doubt that they will be faster. But go ahead, show me how exp(log(num)/2) could be done with 20'000 digits. There is an arbitrary-precision calculator -- http://www.isthe.com/chongo/tech/comp/calc/index.html -- alas, I have not yet been able to use it as a library on Windows. And behind the scenes there will also be iterative methods.
I don't know that it would be faster. It hinges on what the library you use has and how it works. If you have an arbitrary-precision number class, how does the log function itself look for that? If that alone is iterative, you gain nothing, of course. How does the pow() function look for that -- is pow to 1/2 fast? How does the sqrt routine look? I can't comment on how fast or slow the various approaches will be. But for some problems iteration is best; for others, not. Sqrt iteration converges really fast, so I'll just leave it there... if you have a non-iterative solution, it will still beat it; if not, you will be hard pressed to find a faster convergence, and your time may be better spent looking for a better initial guess to drive down the iterations further.
A non-iterative solution of a function (of any function) that returns 20'000 digits or more? That is beyond my imagination, sorry. Even if your favourite pocket calculator shows a result for log, sin, sqrt, you-name-it before you lift your finger off the corresponding key, it is probably an iterative procedure behind the faceplate and rarely a table lookup. See CORDIC -- https://en.wikipedia.org/wiki/CORDIC
As I recall, the Babylonian method doubles the number of digits of accuracy with each loop.
Fast? Well, it is not, compared with some Pi-finding procedures such as the nonic convergence of one of Borwein's algorithms -- https://en.wikipedia.org/wiki/Borwein%27s_algorithm -- just to mention one of several.