Why is int rather than unsigned int used for C and C++ for loops?


Solution 1

This is part of a more general phenomenon: people often don't use the correct types for their integers. Modern C has semantic typedefs that are much preferable to the primitive integer types. E.g. everything that is a "size" should just be typed as size_t. If you use the semantic types systematically for your application variables, the types of your loop variables follow naturally from them, too.

And I have seen several hard-to-detect bugs that came from using int or the like: code that suddenly crashed on large matrices and so on. Just coding correctly with the correct types avoids that.
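
For example, a minimal sketch of what that looks like in practice (the matrix-summing function here is just illustrative, not from any particular code base):

#include <cstddef>   // std::size_t
#include <vector>

// Using size_t for the loop variables matches the type returned by .size(),
// so the code keeps working even when the matrix has more rows than INT_MAX.
double sum(const std::vector<std::vector<double>>& matrix) {
    double total = 0.0;
    for (std::size_t r = 0; r < matrix.size(); ++r) {
        for (std::size_t c = 0; c < matrix[r].size(); ++c) {
            total += matrix[r][c];
        }
    }
    return total;
}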

Solution 2

Using int is more correct from a logical point of view for indexing an array.

The unsigned semantics in C and C++ don't really mean "not negative"; they are closer to "bitmask" or "modulo integer".

To understand why unsigned is not a good type for a "non-negative" number, consider these totally absurd statements:

  • Adding a possibly negative integer to a non-negative integer gives a non-negative integer
  • The difference of two non-negative integers is always a non-negative integer
  • Multiplying a non-negative integer by a negative integer gives a non-negative result

Obviously none of the above statements makes any sense... but that is exactly how unsigned arithmetic in C and C++ works.
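
To see how those statements play out in practice, here is a small sketch (the printed values assume a typical 32-bit unsigned int):

#include <iostream>

int main() {
    unsigned int zero = 0;
    int minus_one = -1;

    // "non-negative" + possibly negative -> still "non-negative":
    // -1 is converted to unsigned, so the sum is 4294967295, not -1.
    std::cout << zero + minus_one << '\n';

    // The difference of two "non-negative" integers is "non-negative":
    unsigned int a = 2, b = 3;
    std::cout << a - b << '\n';    // 4294967295, not -1

    // Multiplying a "non-negative" integer by a negative one stays "non-negative":
    std::cout << 5u * -1 << '\n';  // 4294967291, not -5

    return 0;
}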

Actually, using an unsigned type for the size of containers is a design mistake of C++, and unfortunately we're now doomed to live with this wrong choice forever (for backward compatibility). You may like the name "unsigned" because it sounds similar to "non-negative", but the name is irrelevant and what counts is the semantics... and unsigned is very far from "non-negative".

For this reason, when coding most loops over vectors, my personally preferred form is:

for (int i=0,n=v.size(); i<n; i++) {
    ...
}

(of course assuming the size of the vector doesn't change during the iteration and that I actually need the index in the body; otherwise for (auto& x : v)... is better).

Running away from unsigned as soon as possible and using plain integers has the advantage of avoiding the traps that are a consequence of the unsigned size_t design mistake. For example, consider:

// draw lines connecting the dots
for (size_t i=0; i<pts.size()-1; i++) {
    drawLine(pts[i], pts[i+1]);
}

The code above will have problems if the pts vector is empty, because pts.size()-1 is then a huge nonsensical number. Dealing with expressions where a < b-1 is not the same as a+1 < b, even for commonly used values, is like dancing in a minefield.
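
One way to defuse that particular mine, assuming the same pts and drawLine as above, is to make sure nothing is ever subtracted from an unsigned value:

// Signed form: with an empty vector n is 0, the condition 0 < -1 is false
// and the loop simply doesn't run.
for (int i = 0, n = (int)pts.size(); i < n - 1; i++) {
    drawLine(pts[i], pts[i + 1]);
}

// If you prefer to stay with size_t, move the +1 to the left-hand side:
// i + 1 < pts.size() is false for an empty vector, with no wraparound involved.
for (size_t i = 0; i + 1 < pts.size(); i++) {
    drawLine(pts[i], pts[i + 1]);
}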

Historically, the justification for making size_t unsigned was to be able to use the extra bit for the values, e.g. to have arrays of up to 65535 elements instead of just 32767 on 16-bit platforms. In my opinion, even at that time the extra cost of this wrong semantic choice was not worth the gain (and if 32767 elements are not enough now, then 65535 won't be enough for long anyway).

Unsigned values are great and very useful, but NOT for representing container sizes or indexes; for sizes and indexes, regular signed integers work much better because the semantics are what you would expect.

Unsigned values are the ideal type when you need the modulo arithmetic property or when you want to work at the bit level.
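
For example (a sketch with made-up names, just to illustrate the two legitimate uses):

#include <cstdint>

// Modulo arithmetic: a free-running tick counter where wraparound is intended.
// The subtraction gives the elapsed ticks correctly even across the wrap,
// because unsigned arithmetic is defined modulo 2^32.
uint32_t elapsed_ticks(uint32_t start, uint32_t now) {
    return now - start;
}

// Bit-level work: packing flags into a mask.
enum : uint32_t { FLAG_READ = 1u << 0, FLAG_WRITE = 1u << 1 };

bool can_write(uint32_t flags) {
    return (flags & FLAG_WRITE) != 0;
}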

Solution 3

Not much difference. One benefit of int is that it is signed, so a test like i < 0 makes sense for an int i, while for an unsigned i it is always false.

If indexes are calculated, that may be beneficial (for example, if some computed bound turns out to be negative, you simply never enter the loop).
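
A small sketch of that effect (items_to_process is a hypothetical helper that may return a negative count):

#include <cstdio>

// Hypothetical helper: after subtracting a border, the count can be negative.
int items_to_process(int total, int border) {
    return total - 2 * border;
}

int main() {
    int count = items_to_process(3, 2);     // -1

    // With a signed index the loop body is simply skipped:
    for (int i = 0; i < count; i++) {
        std::printf("processing %d\n", i);  // never runs
    }

    // With an unsigned index, count would be converted to a huge value
    // and the loop would attempt billions of iterations:
    // for (unsigned i = 0; i < (unsigned)count; i++) { ... }
    return 0;
}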

And yes, it is less to write :-)

Solution 4

It's purely laziness and ignorance. You should always use the right types for indices, and unless you have further information that restricts the range of possible indices, size_t is the right type.

Of course, if the dimension was read from a single-byte field in a file, then you know it's in the range 0-255, and int would be a perfectly reasonable index type. Likewise, int would be okay if you're looping a fixed number of times, like 0 to 99. But there's still another reason not to use int: if you use i%2 in your loop body to treat even/odd indices differently, i%2 is a lot more expensive when i is signed than when i is unsigned, because signed % must round toward zero and so cannot be reduced to a simple bit test when the value might be negative...
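
A sketch of why that is (the function names are just illustrative):

// For unsigned i, i % 2 is exactly the low bit, so the compiler can emit
// a single AND instruction.
unsigned parity_u(unsigned i) {
    return i % 2;   // same as i & 1
}

// For signed i, i % 2 must be -1 when i is negative (C and C++ round
// toward zero), so the compiler has to emit extra instructions to fix up
// the sign before it can use the bit trick.
int parity_s(int i) {
    return i % 2;   // -1, 0 or 1
}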

Solution 5

Using int to index an array is legacy, but still widely adopted. int is just a generic number type and does not necessarily correspond to the addressing capabilities of the platform. If it happens to be shorter or longer than that, you may encounter strange results when trying to index a very large array that goes beyond its range.

On modern platforms, off_t, ptrdiff_t and size_t guarantee much more portability.

Another advantage of these types is that they give context to someone who reads the code. When you see the above types you know that the code will do array subscripting or pointer arithmetic, not just any calculation.

So, if you want to write bullet-proof, portable and context-sensible code, you can do it at the expense of a few keystrokes.
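
A sketch of that context (the function names are made up; only <cstddef> is assumed):

#include <cstddef>   // std::size_t, std::ptrdiff_t

// size_t says "this is an element count / subscript":
double sum(const double* data, std::size_t count) {
    double total = 0.0;
    for (std::size_t i = 0; i < count; i++) {
        total += data[i];
    }
    return total;
}

// ptrdiff_t says "this is the result of pointer arithmetic":
std::ptrdiff_t distance(const double* first, const double* last) {
    return last - first;
}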

GCC even supports a typeof extension which relieves you from typing the same typename all over the place:

typeof(arraySize) i;

for (i = 0; i < arraySize; i++) {
  ...
}

Then, if you change the type of arraySize, the type of i changes automatically.


Comments

  • Elpezmuerto
    Elpezmuerto almost 2 years

    This is a rather silly question but why is int commonly used instead of unsigned int when defining a for loop for an array in C or C++?

    for (int i = 0; i < arraySize; i++) {}
    for (unsigned int i = 0; i < arraySize; i++) {}
    

    I recognize the benefits of using int when doing something other than array indexing and the benefits of an iterator when using C++ containers. Is it just because it does not matter when looping through an array? Or should I avoid it all together and use a different type such as size_t?

  • Admin
    Admin over 12 years
    Though to be fair, on all but the most obscure 32- and 64-bit platforms, you'd need at least 4 billion elements for such issues to show up. And platforms with smaller ints typically have far less memory as well, making int sufficient in general.
  • Blagovest Buyukliev
    Blagovest Buyukliev over 12 years
    But then, if arraySize is variable and you want to write bullet-proof code, off_t, ptrdiff_t and size_t still carry some significance.
  • Shahbaz
    Shahbaz over 12 years
    Yes, that is absolutely necessary if you MAY have such super huge arrays, but since people normally don't, then they just use the simple-to-write int. For example, if you are sorting an array of int with O(n^2), you basically have to wait forever for the array to be sorted if there are more than 2M elements, that is given if you even have 8GB of memory. So you see, usually even if you make the indexing right, most programs are useless when given input THAT large. So why make them bullet-proof?
  • Blagovest Buyukliev
    Blagovest Buyukliev over 12 years
    Not using an int also gives more context about the variable and can be regarded as self-documenting code. Also have a read here: viva64.com/en/a/0050
  • R.. GitHub STOP HELPING ICE
    R.. GitHub STOP HELPING ICE over 12 years
    @delnan: It's not so simple. This kind of reasoning has led to very serious vulnerabilities in the past, even by folks who think of themselves as security gods like DJB...
  • R.. GitHub STOP HELPING ICE
    R.. GitHub STOP HELPING ICE over 12 years
    @Shahbaz: Most of us would find it just unfortunate if passing a giant array made the sort take weeks to complete, but would find it completely unacceptable when passing a giant array yields a root shell.
  • Admin
    Admin over 12 years
    typedef unsigned us; and it's more to write.
  • Shahbaz
    Shahbaz over 12 years
    @R.. don't get me wrong, I'm not saying this is good, I'm answering the question that asks why people use int all the time.
  • R.. GitHub STOP HELPING ICE
    R.. GitHub STOP HELPING ICE over 12 years
    I was responding to your most recent comment.
  • chacham15
    chacham15 over 12 years
    see #3 in my answer, it's not "purely" laziness/ignorance
  • R.. GitHub STOP HELPING ICE
    R.. GitHub STOP HELPING ICE over 12 years
    That doesn't change the fact that the code is wrong. Here's one way to fix it: for (size_t i=10; i-->0; )
  • littleadv
    littleadv over 12 years
    @WTP - you're one of those who will not understand sarcasm even with the ":-)" right next to it? Well, I guess no cure there...
  • Tony Delroy
    Tony Delroy almost 11 years
    Negative values are a key point, and yours is the only answer that makes more than a token mention of that. But, sadly there're implicit Standard conversions between signed and unsigned parameter types that mean mixing them can just stuff up rather than the inconvenient but safe scenario you describe of "having to add a bunch of typecasts". And "As we move to 64-bit platforms, the unsigned range of a signed integer..." isn't actually growing for most compiler/OSes - ints still tend to be 32 bits, with longs moving from 32 to 64.
  • 6502
    6502 almost 10 years
    The correct type for a size is size_t; unfortunately size_t has itself been defined with the wrong type (unsigned), and this is the source of a large number of bugs. I prefer using semantically correct types for the code (e.g. int) over formally correct but semantically wrong types. With ints you may run into bugs for very large (incredibly large) values... with unsigned values the crazy behavior is much closer to everyday use (0).
  • Jens Gustedt
    Jens Gustedt almost 10 years
    @6502, opinions seems to vary a lot on that. You could have a look at my blog post about that: gustedt.wordpress.com/2013/07/15/…
  • 6502
    6502 almost 10 years
    @JensGustedt: that the semantics are wrong is not an opinion, unless you think it's correct that a.size() - b.size() should be about four billion when b has one element and a has none. You're correct that some think unsigned is a fantastic idea for non-negative numbers, but my impression is that they put too much weight on the name rather than on the real meaning. Among those who think that unsigned is a bad idea for counters and indexes is Bjarne Stroustrup... see stackoverflow.com/q/10168079/320726
  • Jens Gustedt
    Jens Gustedt almost 10 years
    @6502, as I said, opinions vary a lot. And SO shouldn't be a place to discuss opinions, especially of people not involved in the discussion themselves. Stroustrup is certainly a reference for many things, but not for C.
  • carlos
    carlos over 9 years
    I think you are right, because Java (an "improved" C++) does not support unsigned int. Also, I think the correct way to write that line is: size_t arr_index; for (size_t i=1; i<=pts.size(); i++) { arr_index = i - 1; }
  • 6502
    6502 over 9 years
    @carlos: No. That would be the correct way if size_t had been defined correctly. Unfortunately a design error made size_t unsigned, and therefore those values ended up having bitmask semantics. Unless you think it's correct that the size of a container is a bitmask, using size_t is the wrong choice. A choice that unfortunately was made by the standard C++ library, but no one forces me to repeat the same error in my code. My suggestion is to just run away from size_t and use regular ints as soon as you can, instead of bending the logic so that it also works with size_t.
  • Ruslan
    Ruslan almost 9 years
    It's not just 16-bit platforms. With the current size_t you can use e.g. a vector<char> of size 2.1G on IA-32 Linux with a 3G/1G memory split. If size_t were signed, what would happen if you increased your vector from <2G to a bit more? Suddenly the size would become negative. This just doesn't make any sense. The language should not impose such artificial limits.
  • Miles Rout
    Miles Rout about 6 years
    @6502 Sorry, but the semantics you think are correct are not. size_t - size_t should be off_t, not size_t.
  • Miles Rout
    Miles Rout about 6 years
    A negative size or negative index makes no sense
  • 6502
    6502 about 6 years
    @MilesRout: not sure what you're talking about. Are you saying that it is logically correct that a.size()-b.size() should be four billion when a has two elements and b has three? This happens in C++ simply because size_t is an unsigned type, and choosing such a type for container size was a logical mistake that cannot be fixed now. If you say that the difference of two "non-negative" values should be a "possibly negative" value then I agree... but non-negative and unsigned are very different concepts, and in C++ the difference of two unsigned values is unsigned.
  • Miles Rout
    Miles Rout about 6 years
    @6502 No, I'm saying that it's absurd to claim that 'unsigned' and 'nonnegative' are different concepts. They're the same concept. The issue is not unsigned. The issue is that the subtraction of two unsigned values should be signed. a.size() should be size_t, but a.size() - b.size() should be ptrdiff_t, just as the subtraction of two pointers doesn't give you a pointer, but a ptrdiff_t. A pointer is, after all, basically the same as a size_t.
  • 6502
    6502 about 6 years
    @MilesRout: unsigned has a very precise meaning in C++ and that meaning is unrelated to "non-negative". Maybe you like the name of the type, but the name is irrelevant and what counts is the semantics. unsigned means "modulo integer" or "bitmask"... and saying that the size of a container should be a modulo integer or a bitmask is the error that was made long ago and that unfortunately there's no way to fix now. More details on why unsigned is different from "non-negative" in this video... youtu.be/4afySnY-XgY
  • 6502
    6502 about 6 years
    @Ruslan: it's amazing how this very weak argument can stick even with reasonably good programmers: the idea of a single array of single bytes eating up most of your address space is totally absurd and surely not something that comes up often, yet apparently it is considered very important by "unsigned for size" zealots. It would have been nice to have a data type with the ability to use all the bits and with the semantics of a "non-negative" integer, but unfortunately no such type exists in C++ and using unsigned instead of that is nonsense.
  • Ruslan
    Ruslan about 6 years
    @6502 see this code for an example of hack one needs to implement when an offset type is signed (here off_t/off64_t – not exactly C++, but related: POSIX). Not too inefficient, of course, but ugly. And this code is not of theoretical need there: it was implemented out of necessity.
  • 6502
    6502 about 6 years
    @Ruslan: you mean that code was used to seek into a file bigger than 8 exabytes and so they found a bug?
  • Ruslan
    Ruslan about 6 years
    @6502 18 exabytes actually. You see, /proc/<pid>/mem file on x86_64 Linux for 64-bit processes does have valid pages in the "negative" address range – namely, [vsyscall] at 0xffffffffff600000. See the original EDB issue that was fixed by the code.
  • 6502
    6502 about 6 years
    @Ruslan: using an unsigned type of the exact size for addresses in an address space may make sense indeed. However it does NOT make sense having an unsigned type to represent the number of elements inside a container (what std:: containers do).
  • supercat
    supercat almost 6 years
    @MilesRout: An attempt to operate on a negative number of items will generally have different implications from an attempt to operate on a really large positive number of items. If a function which is supposed to operate on all but the last item of a collection is passed a collection with no items, having the number of items to process be recognizable as -1 seems cleaner than having it be SIZE_MAX.