I have just tried implementing a class where numerous length/count properties, etc. are uint instead of int. However, while doing so I noticed that it's actually painful, as if no one actually wants you to do that.
Nearly everything that hands out an integral type returns an int, therefore requiring casts at several points. I wanted to construct a StringBuilder with its buffer length defaulted to one of the fields in that class. That requires a cast too.
So I wondered whether I should just revert to int here. I'm certainly not using the entire range anyway. I just thought that since what I'm dealing with simply can't be negative (if it were, it would be an error), it would be a nice idea to actually use uint.
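To make the friction concrete, here is a minimal sketch of the situation described (the variable names are made up for illustration):

```csharp
using System.Text;

// A length stored as uint, because a capacity can never legitimately be negative.
uint capacity = 256;

// StringBuilder's constructor takes an int, so an explicit cast is unavoidable.
var sb = new StringBuilder((int)capacity);

// string.Length is int as well, so comparisons against the uint need casts too.
string s = "hello";
bool fits = (uint)s.Length <= capacity;
```

Every boundary with the int-based framework APIs costs one of these casts.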
P.S.: I saw this question, and this at least explains why the framework itself always uses int, but even in my own code it's actually cumbersome to stick to uint, which makes me think it apparently isn't really wanted.
While strictly speaking you should use uint for variables that hold a non-negative integer, you have come across one of the reasons why it's not always practicable.
In this case I don’t think the reduction in readability that comes with having to do casts is worth it.
I'll add to the other answers that using uint as the type of a public field, property, method, parameter, and so on, is a violation of the Common Language Specification rules and is to be avoided when possible.
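For example, with CLS compliance checking turned on, the C# compiler warns about publicly exposed uint members (a minimal sketch; the class name is made up):

```csharp
using System;

[assembly: CLSCompliant(true)]

public class Measurements
{
    // warning CS3003: type of 'Measurements.Length' is not CLS-compliant,
    // because languages like VB.NET historically had no unsigned types
    public uint Length { get; set; }

    // int is CLS-compliant, so this property compiles without a warning
    public int Count { get; set; }
}
```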
A negative value is often used to signal an error condition, and the size of an operation is often returned by a function call; a negative value can therefore signal an error without resorting to an exception mechanism. Also note that .NET often builds upon straight C libraries, so it is sensible to continue this convention. If you need a larger index space, you can break the convention and use a different error-signalling mechanism.
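String.IndexOf is a familiar instance of this convention: it returns -1 rather than throwing when nothing is found, which only works because its return type is int:

```csharp
string text = "hello";
int found = text.IndexOf('e');    // 1
int missing = text.IndexOf('z');  // -1 signals "not found" in-band
// With a uint return type there would be no spare value left for the error case.
```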
My personal feeling is that you should probably just stick to int. It’s not worth adding a cast to pretty much every single property access just to reclaim a numeric range that .NET’s unlikely to let you use anyway.
Using an int is also helpful for detecting integer overflow in operations, since a calculation that underflows shows up as a negative value instead of wrapping around to a large positive one.
IMO, the drawback of using uint is that it obscures error conditions. The equivalents of the following code aren't as nice:
if (len < 0) terminate_program("length attained impossible value.");
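A sketch of why: with int, a miscalculation surfaces as a negative value that a check like the one above catches, while the same bug with uint wraps around to a large, plausible-looking count:

```csharp
int count = 3, used = 5;      // a bug: we 'used' more than we had
int len = count - used;       // -2: obviously wrong, easy to test for

uint uCount = 3, uUsed = 5;
uint uLen = uCount - uUsed;   // silently wraps to 4294967294
```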
Of course your programs should not miscalculate anything to begin with, but I feel they should also be written to detect numerical errors rapidly, without propagation. In cases where a MaxValue of 2^31 - 1 is enough, I say use int along with proper use of System.Diagnostics.Debug.Assert() and corresponding error checks as exemplified above.
If you do use uint, use it along with checked to prevent underflow and get the same result. However, I have found that checked is a little difficult to apply to existing code that uses casts for some purpose.
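For example, wrapping the arithmetic in a checked block turns the silent uint wraparound into an OverflowException (a minimal sketch):

```csharp
using System;

uint len = 0;
bool caught = false;
try
{
    checked
    {
        len -= 1; // 0 - 1 would wrap to uint.MaxValue; checked throws instead
    }
}
catch (OverflowException)
{
    caught = true; // the underflow is detected immediately, not propagated
}
```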
Don't swim upstream if you don't have to. Not littering your code with casts makes it more readable. Further, if your possible values fit within an int, then using an int is not a problem. If you're afraid you might overflow an int, then by all means use uint... but don't prematurely optimize.
I would say the improved readability of minimizing casts outweighs the slightly elevated risk of a bug with using int.
If you want to check that a value is positive, a better way is probably just to use an assert (note that this is only a debugging technique: asserts are compiled out of release builds, so you should ensure the condition can never occur in the final code).
using System.Diagnostics;
...
Debug.Assert(i > 0);