I think that first 400 * 400 = 160000 is converted to 28928 by starting from 0 and going around 160000 times in a circular fashion for the int type (say sizeof(int) == 2 bytes). Then 28928 is divided by 400, the floor of which gives 72, and the result varies with the type of the variable. Is my assumption correct, or is there some other explanation?
Assuming you’re using a compiler old enough that int is only 16 bits, then yes, your analysis is correct.*
    400 * 400 = 160000
    160000 % 2^16 = 28928   // integer overflow wrap-around
    28928 / 400 = 72        // integer division (rounded down)
Of course, for larger data types this overflow won’t happen, so you’ll get back the expected 400.
*This wrap-around behavior is guaranteed only for unsigned integer types. For signed integers, it is technically undefined behavior in C and C++.
In many cases, signed integers will still exhibit the same wrap-around behavior. But you just can’t count on it. (So your example with a signed 16-bit integer isn’t guaranteed to hold.)
Although rare, here are some examples of where signed integer overflow does not wrap around as expected:
It certainly seems like you guessed correctly.
If int is a 16-bit type, then it’ll behave exactly as you described. The operations happen sequentially: 400 * 400 produces 160000, which is 10 0111 0001 0000 0000 in binary. When you store this in a 16-bit register, the top “10” gets chopped off and you end up with 0111 0001 0000 0000 (28,928)… and you guessed the rest.
Which compiler/platform are you building this on? A typical desktop would have an int of at least 32 bits, so you wouldn’t see this issue.
NOTE: This is what explains the behavior with YOUR specific compiler. As so many others were quick to point out, do NOT take this answer to mean that all compilers behave this way. But YOUR specific one certainly does.
To complete the answer from the comments below: the reason you see this behavior is that most major compilers optimize for speed in these cases and do not add safety checks after simple arithmetic operations. So, as outlined above, the hardware simply doesn’t have room to store those extra bits, and that’s why you see the “circular” behavior.
The first thing you have to know is, in C, integer overflows are undefined behavior.
(C99, 6.5.5p5) “If an exceptional condition occurs during the evaluation of an expression (that is, if the result is not mathematically defined or not in the range of representable values for its type), the behavior is undefined.”
C says it very clearly and repeats it here:
(C99, 3.4.3p3) “EXAMPLE An example of undefined behavior is the behavior on integer overflow.”
Note that integer overflow only concerns signed integers, as unsigned integers never overflow:
(C99, 6.2.5p9) “A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type.”
Your declaration is this one:

    int i = 400 * 400 / 400;

int is 16-bit on your platform and the signed representation is two’s complement, so 400 * 400 is equal to 160000, which is not representable as an int, the INT_MAX value being 32767. We are in the presence of an integer overflow, and the implementation can do whatever it wants.
Usually, in this specific example, the compiler will do one of the two things below:

- Consider that the overflow wraps around modulo the word size, as with unsigned integers; the result of 400 * 400 / 400 is then 72.
- Take advantage of the undefined behavior to reduce the expression 400 * 400 / 400 to 400. Good compilers usually do this when optimization options are enabled.
Note that the fact that integer overflow is undefined behavior is specific to the C language. In most languages (Java, for instance), it wraps around modulo the word size, like unsigned integers in C.
To have overflow always wrap with gcc, there is the
-fno-strict-overflow option that can be enabled (it is disabled by default). This is, for example, the choice of the Linux kernel: they compile with this option to avoid bad surprises. But this also constrains the compiler and prevents some of the optimizations it could otherwise perform.