How is 256 stored in a char variable and an unsigned char?


Solution 1

Your guess is correct. Conversion to an unsigned type uses modular arithmetic: if the value is out of range (either too large, or negative) then it is reduced modulo 2^N, where N is the number of bits in the target type. So, if (as is often the case) char has 8 bits, the value is reduced modulo 256, so that 256 becomes zero.

Note that there is no such rule for conversion to a signed type - out-of-range values give implementation-defined results. Also note that char is not specified to have exactly 8 bits, and can be larger on less mainstream platforms.

Solution 2

Yes, that's correct. 8 bits can hold 0 to 255 unsigned, or -128 to 127 signed. Above that you've hit an overflow situation, and bits will be lost.

Does the compiler give you a warning on the above code? You might be able to increase the warning level and see something. It won't warn you when the assigned value can't be determined statically (before execution), but in this case it's clear at compile time that you're assigning something too large for the size of the variable.

Solution 3

On your platform (as well as on any other "normal" platform) unsigned char is 8 bit wide, so it can hold numbers from 0 to 255.

Trying to assign 256 (which is an int literal) to it results in an unsigned integer overflow, which is defined by the standard to result in "wraparound". The result of u = n, where u is an unsigned integral type and n is an integer outside its range, is u = n % (max_value_of_u + 1).

This is just a convoluted way to say what you already said: the standard guarantees that in these cases the assignment is performed keeping only the bits that fit in the target variable. This rule exists because most platforms already implement this behavior at the assembly language level (unsigned integer overflow typically results in this behavior, plus some kind of overflow flag being set to 1).

Notice that all this does not hold for signed integers (as plain char often is): signed integer overflow is undefined behavior.

Author

Siva Kannan
https://sivakannan.in https://github.com/shivashanmugam/

Updated on June 07, 2022

Comments

  • Siva Kannan
    Siva Kannan about 2 years

    Up to 255, I can understand how the integers are stored in char and unsigned char:

    #include <stdio.h>
    int main(void)
    {
        unsigned char a = 256;
        printf("%d\n", a);
        return 0;
    }
    

    In the code above I have an output of 0 for unsigned char as well as char.

    For 256, I think this is how the integer is stored (this is just a guess):

    First, 256 is converted to its binary representation, which is 100000000 (9 bits in total).

    Then the leftmost bit (the bit which is set) is removed, because the char datatype only has 8 bits of memory.

    So it is stored in memory as 00000000, which is why it prints 0 as output.

    Is my guess correct, or is there another explanation?