Newton's method alone is not a particularly good way to find polynomial roots in the way you are trying to do here.
It has well-known failure modes even when starting near a root, and offers no guarantee of finding one. Furthermore, its convergence degrades from quadratic to linear near repeated roots.
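As a quick illustration of one failure mode, here is a minimal sketch (the polynomial and the `newton` helper are just for demonstration) where Newton's method cycles forever between two points and never converges, even though a real root exists:

```python
# Demo: Newton's method can cycle without converging.
# For f(x) = x^3 - 2x + 2 starting at x0 = 0, the iterates
# bounce between 0 and 1 indefinitely.
def newton(f, fprime, x0, steps):
    x = x0
    history = []
    for _ in range(steps):
        history.append(x)
        x = x - f(x) / fprime(x)   # standard Newton update
    return history

f = lambda x: x**3 - 2*x + 2
fp = lambda x: 3*x**2 - 2
print(newton(f, fp, 0.0, 6))   # [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
```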
If you have a range of x-values to search over, where there is a sign change in the function (which guarantees at least one root), you can use a bracketing method such as bisection: repeatedly halve the interval, keeping the half that still contains the sign change. This is robust but slow, and only gives one root.
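A minimal sketch of bisection (the `bisect` helper and its tolerance are my own illustration, not a library function):

```python
def bisect(f, a, b, tol=1e-12):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "interval must bracket a root (sign change required)"
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0:
            return m
        if fa * fm < 0:        # sign change in the left half
            b, fb = m, fm
        else:                  # sign change in the right half
            a, fa = m, fm
    return 0.5 * (a + b)

root = bisect(lambda x: x * x - 2, 0.0, 2.0)   # converges to sqrt(2)
```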
The standard technique is something like Brent's method (see Numerical Recipes in C, Section 9.3), which combines the robust bisection method with inverse quadratic interpolation to give much faster convergence. I believe there is also a section in that book describing an algorithm to generate the initial bracket if you have no idea of the bounds.
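To give the flavor of the idea, here is a simplified sketch of my own: an interpolation step (secant, for brevity) safeguarded by bisection. Real Brent's method uses inverse quadratic interpolation and more careful bookkeeping, so treat this as an illustration, not a replacement:

```python
def find_root(f, a, b, tol=1e-12, max_iter=200):
    """Hybrid root finder: fast interpolation steps, bisection as a safety net."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "interval must bracket a root"
    for i in range(max_iter):
        if abs(b - a) < tol:
            break
        if i % 2 == 0 and fb != fa:
            s = b - fb * (b - a) / (fb - fa)   # secant (interpolation) step
            if not (min(a, b) < s < max(a, b)):
                s = 0.5 * (a + b)              # step left the bracket: bisect
        else:
            s = 0.5 * (a + b)                  # periodic bisection safeguard
        fs = f(s)
        if fs == 0:
            return s
        if fa * fs < 0:
            b, fb = s, fs
        else:
            a, fa = s, fs
    return 0.5 * (a + b)

# Classic test problem: x^3 - 2x - 5 has a root near 2.0945514815 in [2, 3].
root = find_root(lambda x: x**3 - 2*x - 5, 2.0, 3.0)
```

The safeguard matters: pure secant steps can stall with one endpoint pinned, while the interleaved bisection guarantees the bracket shrinks.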
Once you have found one root, you can factor it out of the polynomial by synthetic division (http://en.wikipedia.org/wiki/Horner_scheme#Example_2) and search the deflated polynomial for another root, repeating until the "remainder" polynomial is degree 1 or there are no more roots to find.
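The deflation step by Horner's scheme is only a few lines; here is a sketch (the `deflate` helper name is my own):

```python
def deflate(coeffs, r):
    """Divide a polynomial (coefficients highest degree first) by (x - r)
    using Horner's scheme. Returns (quotient_coeffs, remainder);
    the remainder is ~0 when r is actually a root."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])    # Horner recurrence
    return out[:-1], out[-1]

# (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6; divide out the root x = 1:
q, rem = deflate([1, -6, 11, -6], 1.0)
# q is [1, -5.0, 6.0], i.e. x^2 - 5x + 6, and rem is 0.0
```

Note that with inexact roots, deflation accumulates rounding error, so each successive root gets a little less accurate; polishing each root against the original polynomial helps.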
If you actually need to know ALL the roots, then the best way I know of is to compute the eigenvalues of the polynomial's companion matrix (http://en.wikipedia.org/wiki/Companion_matrix).
This will give you exactly n complex roots, where n is the degree of the polynomial (http://en.wikipedia.org/wiki/Fundamental_theorem_of_algebra).
If you only need the real roots, reject those whose imaginary component exceeds a small tolerance (rounding error means a real root will rarely come out with an imaginary part of exactly zero).
For this you will obviously need a function to compute the eigenvalues of a matrix. It is a pain to write your own, so I suggest you look for a linear algebra library that supports it.
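With NumPy as the linear algebra library, a sketch looks like this (the `poly_roots` helper is my own; the companion matrix layout follows the Wikipedia article linked above):

```python
import numpy as np

def poly_roots(coeffs):
    """All complex roots of a polynomial via its companion matrix.
    coeffs: highest degree first, e.g. [1, -6, 11, -6] for x^3 - 6x^2 + 11x - 6."""
    c = np.asarray(coeffs, dtype=float)
    c = c / c[0]                          # normalize to a monic polynomial
    n = len(c) - 1
    companion = np.zeros((n, n))
    companion[1:, :-1] = np.eye(n - 1)    # ones on the sub-diagonal
    companion[:, -1] = -c[:0:-1]          # last column: negated coefficients
    return np.linalg.eigvals(companion)

roots = poly_roots([1, -6, 11, -6])                 # close to {1, 2, 3}
real_roots = roots[abs(roots.imag) < 1e-9].real     # keep only real roots
```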
This latter method is the one used by MATLAB's roots function and other numerical computing packages, and it doesn't rely on arbitrary tolerances or iteration limits.
Edit: Added link to polynomial division example by Horner's method