Sams Teach Yourself C++, Lesson 3

Hello everybody,

I have recently started learning programming with the Sams book, but I've really hit a wall on the last exercise of Lesson 3. Can anyone help me through it?

1. Modify enum Yourcards in quiz question 4 to demonstrate that the value of Queen can be 45.
2. Write a program that demonstrates that the size of an unsigned integer and a normal integer are the same, and that both are smaller in size than a long integer.

Thanks in advance,

Blake
1. What the hell are you talking about? Do you expect us to read that particular book to help you?
2. That's wrong: https://en.cppreference.com/w/cpp/language/types
Thank you for the link, and no need to be rude. I was hoping someone had read the book and could help, since I'm new to this. Have a good day.
Care to explain the problem, post some code, etc.?
@ne555 why is that wrong?

For the second one you need to use sizeof(), and for the first one, can you show us the entire question?
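For the sizeof() part, a minimal sketch might look like this (it just reports what your compiler happens to do; as discussed below, the exercise's premise isn't actually guaranteed by the standard):

#include <iostream>

int main()
{
    // sizeof reports the size in bytes (strictly, in units of char)
    std::cout << "sizeof(int)          = " << sizeof(int) << '\n';
    std::cout << "sizeof(unsigned int) = " << sizeof(unsigned int) << '\n';
    std::cout << "sizeof(long)         = " << sizeof(long) << '\n';
}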
@Nwb, it's wrong because it assumes a long will always be bigger than a normal int, but all the standard guarantees is that long won't be shorter than int (and that it'll be at least 32 bits).
For example, on an x86_64 machine (i.e., one following the SysV ABI), sizeof(int) == sizeof(long) when running 32-bit code, and 64-bit Windows makes them equal as well. So the assumption isn't even usually true.
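To put that in code, these are the only relationships the standard actually promises; a sketch using static_assert that should compile on any conforming implementation:

#include <climits>  // for CHAR_BIT

// Guaranteed: long is never shorter than int...
static_assert(sizeof(long) >= sizeof(int), "long >= int");
// ...and long has at least 32 bits. Nothing says it must be wider than int.
static_assert(sizeof(long) * CHAR_BIT >= 32, "long is at least 32 bits");

int main() {}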
Last edited on
@tpb oh, I didn't know that.. So within the limits of int, a long int would occupy the same space?
Then wouldn't it make sense to always use long long int and double instead of int and float, respectively?

When it actually matters, it's useful to have the different sizes available. I certainly wouldn't use long long all the time! I'm all for using double instead of float unless there's a good reason to use float (e.g., to save space in large arrays).
So technically it makes no difference, right? Using long long int won't be any different from using int, except that long long int has a higher range?
I don't know how you could have got that from what I said. Pretty much the exact opposite. Forget it.
I think best practice is to use the types from <cstdint> (int32_t, int64_t, ...) so it's clear what we're dealing with.
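For instance, a hypothetical record layout (the names are made up) where the fixed widths carry real meaning:

#include <cstdint>

// Hypothetical on-disk record: the fixed-width types document the
// exact layout, which plain int/long could not promise.
struct Record
{
    std::int32_t id;        // exactly 32 bits wherever the type exists
    std::int64_t timestamp; // exactly 64 bits wherever the type exists
};

int main() {}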
I got confused, but it seems long int always occupies at least 32 bits, and int occupies at least 16 bits but can also have more than 32 bits. Correct?

How would you predict how many bits a variable would occupy?

Oh, and sizeof() shows 4 for both int and long int. This is confusing as heck. Oh wait, are those bytes? That makes more sense, but why is int foo = 1 taking 4 bytes (32 bits)!

And int foo = INT_MAX also takes just 4 bytes..

Even tried unsigned, but int (not long) always seems to take 32 bits.. why? And long int takes 32 bits all the time too.. Even LONG_MAX takes 4 bytes.

Only long long int takes 8 bytes (64 bits).

Okay, if long int takes the same space as int, then why don't we use long int over int? By the way, I'm using Microsoft Visual Studio 2017.
why don't we use long int over int?
Why should we?

If the number of bits/bytes is relevant, do as Thomas1965 suggests: use std::int32_t, std::int64_t, ...

int is usually the most processor-friendly type, so I suggest you use int unless there is a good reason to use something else.
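Roughly, the usual convention looks like this (a sketch, with a made-up example value):

#include <cstdint>

int main()
{
    int total = 0;                 // plain int for everyday arithmetic
    for (int i = 1; i <= 10; ++i)
        total += i;

    // reach for a sized type only when the width actually matters:
    std::int64_t big = 5000000000; // too large to be safe in a plain int
    (void)total; (void)big;        // silence unused-variable warnings
}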
But do int and long take the same amount of memory? Ugh, why are int32_t and int64_t better? I don't get it..

Is there some reference I can read to get more information on such things? I would appreciate it.. there are just so many data types and I just don't get why there are so many!
But do int and long take the same amount of memory?
You cannot rely on their memory consumption behavior.

Ugh, why are int32_t and int64_t better?
You can rely on their memory consumption behavior.

Is there some reference I can read to get more information on such things?
See this:
https://en.cppreference.com/w/cpp/types/integer

Ugh, why are int32_t and int64_t better?
^^ This also tells the reader of the program (not just the original author; think of a team of people) at least three things. It says that a specific size is desired (implying that something like auto is not acceptable here). It says what that size is. And it hints that the reader can likely expect bitwise or byte-oriented code ahead that requires the fixed sizes. There is a lot of information packed into this self-documenting type.
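For example, here's the kind of byte-oriented code where a fixed-width type documents intent (a made-up sketch, not from any particular codebase):

#include <cstdint>

// Split a 32-bit value into its four bytes, most significant first.
// std::uint32_t tells the reader the shifts below assume exactly
// 32 bits; a plain unsigned int would not say that.
void split_bytes(std::uint32_t value, std::uint8_t out[4])
{
    out[0] = static_cast<std::uint8_t>(value >> 24);
    out[1] = static_cast<std::uint8_t>(value >> 16);
    out[2] = static_cast<std::uint8_t>(value >> 8);
    out[3] = static_cast<std::uint8_t>(value);
}

int main()
{
    std::uint8_t bytes[4];
    split_bytes(0xDEADBEEF, bytes); // bytes = {0xDE, 0xAD, 0xBE, 0xEF}
}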

The reference may not list (I didn't look) the literally over 100 integer types that are defined in older and library-specific code; e.g., writing Windows code in Windows-speak will have you using junk like BYTE (a typedef for unsigned char) or WORD (who knows what this is anymore; it used to be the machine word, i.e., the register size) or whatever other goofy types. When you encounter these and you're doing anything that needs to know the byte size of the type, you need to stop and look it up, write a quick cout of sizeof(blah), or trace it back to its typedef statement to see what it really IS. Older code has tons of this stuff; newer code is finally breaking away from it somewhat.
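That quick cout check is just this (BYTE and WORD are defined locally here as stand-ins so the sketch compiles anywhere; the real typedefs come from <windows.h>):

#include <iostream>

typedef unsigned char BYTE;   // local stand-in for the Windows typedef
typedef unsigned short WORD;  // likewise

int main()
{
    std::cout << "sizeof(BYTE) = " << sizeof(BYTE) << '\n';  // prints 1
    std::cout << "sizeof(WORD) = " << sizeof(WORD) << '\n';  // prints 2 here
}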

So, say, assigning an int variable's contents, or a long int variable's contents, to an int32_t variable wouldn't count as a data-type mismatch?

What do these phrases indicate: "fastest signed integer", "smallest unsigned integer"?
They all say "at least so-and-so bits"; does that mean an int8_t can occupy more than 8 bits?
You didn't read carefully enough. The "at least" phrase does not appear in the int8_t section, only in the int_fast8_t and int_least8_t sections. The int8_t description says that it will have exactly 8 bits.

That's the whole point of this discussion.


There is no fixed definition for the size of an int (the standard only requires it to be at least 16 bits). What we know is:

sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long).

If you need exact sizes, you need to use the intX_t family of types. Other types may be easier for the computer to access, but with the intX_t types you are guaranteed exact sizes.
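You can even have the compiler enforce that guarantee; a quick sanity-check sketch (the byte counts assume the usual 8-bit char):

#include <cstdint>

// Exact-width types are pinned down, so on a platform that provides
// them (and has 8-bit chars) these checks can never fire.
static_assert(sizeof(std::int32_t) == 4, "int32_t is exactly 32 bits");
static_assert(sizeof(std::int64_t) == 8, "int64_t is exactly 64 bits");

int main() {}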
Not having read whatever you pulled that text from, I'll offer this:
Your computer physically has a group of wires called a 'bus' that has a 'width'. For example, if it had 8 wires, it could move 8 bits per clock cycle, so a 64-bit int would need to move 8 times, and moving 8 times is slower than moving once. All that boils down to this: the machine's native word size is the fastest int; integer types smaller than that are either equally fast or slightly slower, and integer types larger than it are almost always slower. Most machines today are 32 or 64 bits; 32-bit and even 16-bit machines are either older (even ancient now) hardware or special-purpose hardware (there's still some 16-bit special-purpose stuff around).

The smallest type on almost all systems is 1 byte.
I am unaware of any way an int8_t could be more than 1 byte, and if the hardware can't support it, the type simply doesn't exist and code using it won't compile on that compiler. See below.


So, putting it together, this is what I see in the online reference:
int8_t, int16_t, int32_t, int64_t (optional)
signed integer type with width of exactly 8, 16, 32 and 64 bits respectively,
with no padding bits and using 2's complement for negative values
(provided only if the implementation directly supports the type) (typedef)

int_fast8_t, int_fast16_t, int_fast32_t, int_fast64_t
fastest signed integer type with width of at least 8, 16, 32 and 64 bits respectively (typedef)

And what that tells me is that int8_t is always 1 byte (note the word EXACTLY), but the fast version COULD BE 32 bits: if, on that machine or hardware family, 32 bits is faster (see the bus paragraph I started with), the typedef is allowed to use the bigger, faster type (note the AT LEAST wording).
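A quick way to see that difference on your own machine (the exact output varies by platform, which is the point; the fast variants are free to be wider):

#include <cstdint>
#include <iostream>

int main()
{
    // the exact-width type is pinned down...
    std::cout << "sizeof(int8_t)       = " << sizeof(std::int8_t) << '\n';
    // ...but the "fast" variants may be wider if wider is faster here
    std::cout << "sizeof(int_fast8_t)  = " << sizeof(std::int_fast8_t) << '\n';
    std::cout << "sizeof(int_fast32_t) = " << sizeof(std::int_fast32_t) << '\n';
}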

Don't get too worked up about all this. Mostly you'll use int or auto, unless you work at a very low level, with hardware interfaces or the like. But DO read the reference carefully: they don't ALL say "at least"; some say "exactly".