Use of unsigned int

I've been reading a bit of Google's coding guidelines, and while they may not be perfect for everyone or everything, I noticed they mention the use of unsigned ints. They suggest not using unsigned ints, but rather using assertions to check for a negative value. What are your thoughts on this?

Use normal ints with assertions to check for negative values, or just use unsigned int?

I typically use unsigned ints for things like a game window width and height. Those two values should never be negative, because what's a window with a negative width or height? So in this case I'd typically use an unsigned int.
This sounds like advice you should not listen to. If it's unsigned, make it unsigned: this is self-documenting and describes your intent. Also, unsigned holds a larger max value, and some compilers complain about bit-logic operations on signed values.

Asserts are removed in production code (when NDEBUG is defined), so if it's a user-entered value, the check won't even run.



Here is where I read that bit, in case you're curious.
http://wesmckinney.com/blog/avoid-unsigned-integers/

"In particular, do not use unsigned types to say a number will never be negative. Instead, use assertions for this." - Google C++ Style Guide
> In particular, do not use unsigned types to say a number will never be negative.

Yes. In an interface, all that an unsigned int says is that the number will never be interpreted as having a negative value, even if a negative value was passed because of a programming error. If there is a requirement that this must be checked, use a signed integer type.

#include <iostream>

void foo( unsigned int arg )
{
    if( arg < 0 ) std::cout << "error: invalid arg\n" ; // unsigned int, ergo dead code

    std::cout << arg << '\n' ;
}

int main()
{
    foo(-123456789) ; // silently converted to a huge unsigned value
}
/shrug I am from the older school of programming. I think all that extra junk in the code to protect people who don't know what they are doing is ridiculous.
At first I thought it was dumb, but now I sort of agree with JLBorges. Since the compiler won't catch it when you pass -100 to a function expecting an unsigned char, you have to check it yourself to be sure, or have a wrapping interface that does the checking.

But I also agree using unsigned is more self-documenting. Obviously, I'm not going to start replacing size_t with signed long long, that's just silly. Overall, I don't see it as a big deal.

PS: Just because Google is big doesn't mean that their style guide is inherently better than anything else. Internally, for Google, it's a nice way of keeping their company's code consistent across development teams.
I've been programming for decades and I can't recall a single time when confusion over signed vs. unsigned numbers made it into production. If you use a negative value where an unsigned is expected, the code usually fails spectacularly.

On the other hand, beginners here frequently ask "how can I ensure that the user doesn't enter a negative number?" The easy way is to let the compiler do it: unsigned val; if (!(cin >> val)) { error }
Google's style guide used to be quite atrocious, banning such things as streams, assignment operators, and even function overloading, but it has improved a lot in the last four years or so (I suspect its ownership changed from someone who hated C++ to someone who cares about it). It's best to cross-check against other sets of guidelines, such as Bjarne Stroustrup's and Herb Sutter's current C++ Core Guidelines, which say, on the subject:

ES.101: Use unsigned types for bit manipulation
ES.102: Use signed types for arithmetic
ES.106: Don't try to avoid negative values by using unsigned
ES.107: Don't use unsigned for subscripts, prefer gsl::index

(see https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#Res-unsigned and down from that)

There are multiple arguments both for and against the use of unsigned integers in arithmetic, indexing, and function interfaces, but it seems signed is winning in the minds of language maintainers and, more importantly, compiler authors (I believe both Clang's and GCC's maintainers have expressed pro-signed viewpoints).
I would agree with that. Unsigned is for bit logic, the occasional case where the values fit in 2x but not in x (where x is the max value of the signed type), and possibly self-documentation of programmer intent. I'm not a fan of gsl::index; I would prefer we go the other way and have just two integer types per bit or byte length. Dig down and gsl::index is going to be an integer type redefined somewhere, and I don't agree that it brings anything to the table. I kid you not, I think today I could express 'int' fifty distinct ways in the current language. That is not helpful.
Topic archived. No new replies allowed.