There are quite a few arbitrary-precision arithmetic libraries that already exist. If you're on Windows, grab MPIR (www.mpir.org); on *nix, use GMP (www.gmplib.org). (They're essentially the same library; MPIR was forked off of GMP for Windows development.)
Anyway, MPIR/GMP can handle an outrageously large number of digits. As far as I'm aware (though I may be wrong), you're limited only by the amount of memory your computer has.
All the maths I've ever done involving numbers larger than an unsigned int can hold, I've done with MPIR, and it's worked flawlessly (especially useful for things like Project Euler).
Actually, I'm pretty certain that Cire's interpretation is correct: it's the result which will have 20 digits, not the number itself.
Using type unsigned long long, which is (at least) 64 bits, the biggest factorial which can be calculated exactly is 20! = 2432902008176640000. That result has just 19 digits.
Well, since factorials have a few trailing zeros, it is possible to go a little bit further, up to about 23!, by dividing those zeros out to make full use of the available precision. Beyond that, as others have suggested, you'd need to use special large-number processing.
If we take the alternative interpretation, that the number itself has 20 digits and we want its factorial, let's consider how many digits the result would have.
Using Stirling's approximation, if the number is 100000000000000000000 (that is, 10^20), then the factorial would have about 1956570551809674817246 digits.
Suppose the computer could produce another million digits of the result every second; it would still take about 62 million years to calculate. And storing the result at one digit per byte on 2 TB hard drives would require 978285275 such drives.
If instead the result were printed out, using digits 1 mm high and 1 mm wide, it would require a piece of paper with an area of 1956570551 square km, which is roughly 4 times the surface area of planet Earth.