I've noticed that not many people use the long double data type. In some environments it's the same as a double, but in others it has more precision (in one I tested it was 12 bytes in size). Is there a reason that just plain double gets used instead of long double? Or is it just less commonly known?
It depends entirely on context. Float is typically precise enough to do what you want.
A lot of modern hardware supports doubles natively and defaults to using them for internal calculations. Double is a natural choice in that case.
x86 systems have had, for a long time, the 'extended' (80-bit) data type, which (IIRC) was more-or-less recently ratified as an IEEE standard format. Also, IIRC, x86 systems use extended for internal calculations. If your compiler supports it, long double is the same as the extended type. (But your compiler may not, alas, and may silently fall back to plain double without telling you.)
The choice of a higher-precision floating-point type is typically driven by need. Scientific applications, like the ones they run at the observatory, will typically want something with better precision than 'float'. Graphics applications, like the next first-person shooter, won't need the precision, but may use double anyway, depending on profiled performance.