Differences between data types

First off, I'm still relatively new to programming. So far I've only used int, float, and char. I know that there are other data types out there. So my question is in what kind of situation would I apply other data types rather than just using int, float, char? (Ex: using long int rather than int)
Suppose you want to store how many people live on Earth. The maximum value a typical 32-bit unsigned int can store is 4,294,967,295. Since there are more people than that, you can use an unsigned long long, which holds values up to 18,446,744,073,709,551,615.

Suppose you want to compute 10.0 / 3.0. A float uses 4 bytes of memory, so it can only hold about 7 significant decimal digits (roughly 3.333333). A double uses 8 bytes, so it can store about 15 to 16 significant digits (roughly 3.33333333333333).
A more practical application could be storing pi with more precision, and therefore computing more accurate circumferences.
Thank you, that clears it up. Now my next question is: do the bytes matter?
What do you mean? What bytes in what context?
I'm guessing that the more bytes a data type has, the more accurate it would be. For example, using float to hold pi wouldn't be as practical as using double, because double would give you a more precise answer.
For the same number of bytes, an integer type represents every value in its range exactly, while a floating point type covers a much larger range but with limited precision.
If you're thinking about memory usage, you don't need to worry about it in most cases; a modern computer has gigabytes of RAM.
Choosing the best type is something that will become easier as you gain experience and see more code.
> So my question is in what kind of situation would I apply other data types rather than just using int, float, char?

Use int as the default integer type, double as the default floating point type, char as the default character type, and std::string as the default string type.

Consider using some other type in place of these only if there are externally imposed constraints that make int, double, char or std::string unsuitable.

In particular, the logic that goes something like: 'an int can represent a maximum value of x, so if you want larger values, use long' is fundamentally unsound. At best, it provides a non-portable solution.

If you want fixed or minimum width integer types, use the types in <cstdint> (for example std::int64_t or std::int_least64_t).

If you want a signed integer type that can hold 12 decimal digits, use something like:
decltype(999999999999) i = 0 ;