The unsigned char type uses all of its bits to represent a binary number. Therefore, for example, if unsigned char is 8 bits long, then the 256 possible bit patterns of an unsigned char object represent the 256 different values {0, 1, ..., 255}. The number 42 is guaranteed to be represented by the bit pattern 00101010.
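A minimal sketch of this guarantee, assuming CHAR_BIT is 8 (the standard only requires it to be at least 8):

```cpp
// Prints the bit pattern of the value 42 stored in an unsigned char,
// assuming an 8-bit byte (CHAR_BIT == 8 on virtually all modern platforms).
#include <bitset>
#include <climits>
#include <iostream>

int main() {
    unsigned char c = 42;
    std::bitset<CHAR_BIT> bits(c);
    std::cout << bits << '\n';   // prints 00101010 when CHAR_BIT is 8
}
```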
The signed char type has no padding bits, i.e., if signed char is 8 bits long, then it has 8 bits of capacity to represent a number.
Note that these guarantees do not apply to types other than narrow character types.
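Both guarantees can be checked at compile time through std::numeric_limits, which reports the number of value bits of each type; a sketch:

```cpp
// Compile-time checks of the "no padding bits" guarantee for the narrow
// character types: value bits (plus the sign bit for signed char) must
// account for every bit of the object.
#include <climits>
#include <limits>

static_assert(std::numeric_limits<signed char>::digits + 1 == CHAR_BIT,
              "signed char has no padding bits");
static_assert(std::numeric_limits<unsigned char>::digits == CHAR_BIT,
              "unsigned char uses every bit as a value bit");

int main() {}
```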
The unsigned integer types use a pure binary system, but may contain padding bits. For example, it is possible (though unlikely) for unsigned int to be 64 bits long but only be capable of storing integers between 0 and 2^32 − 1, inclusive. The other 32 bits would be padding bits, which should not be written to directly.
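A small sketch that reports how many padding bits (if any) unsigned int has on the current implementation, by comparing its value bits against its object size:

```cpp
// Compares the number of value bits of unsigned int with the number of bits
// in its object representation; any difference is padding.
#include <climits>
#include <iostream>
#include <limits>

int main() {
    const int value_bits  = std::numeric_limits<unsigned int>::digits;
    const int object_bits = static_cast<int>(sizeof(unsigned int) * CHAR_BIT);
    std::cout << "value bits:   " << value_bits  << '\n'
              << "object bits:  " << object_bits << '\n'
              << "padding bits: " << object_bits - value_bits << '\n';
}
```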
The signed integer types use a binary system with a sign bit and possibly padding bits. Values that belong to the common range of a signed integer type and the corresponding unsigned integer type have the same representation. For example, if the bit pattern 0001010010101011 of an unsigned short object represents the value 5291, then it also represents the value 5291 when interpreted as a short object.
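A sketch of this guarantee, assuming a 16-bit unsigned short so that the bit pattern above applies, using memcpy to reinterpret the same object representation as a short:

```cpp
// Copies the object representation of an unsigned short holding 5291 into a
// short; because 5291 lies in the common range of both types, the short ends
// up holding the same value.
#include <cstring>
#include <iostream>

int main() {
    unsigned short u = 0x14AB;         // bit pattern 0001010010101011, value 5291
    short s;
    std::memcpy(&s, &u, sizeof s);     // reinterpret the same bits as a short
    std::cout << u << ' ' << s << '\n';   // prints "5291 5291"
}
```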
It is implementation-defined whether a two's complement, one's complement, or sign-magnitude representation is used, since all three systems satisfy the requirement in the previous paragraph.
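One common way to observe which system is in use is to inspect the low two bits of -1, which differ across the three representations; a hedged sketch:

```cpp
// In two's complement -1 is all ones, in one's complement it is all ones
// except the lowest bit, and in sign-magnitude it is the sign bit plus a 1,
// so the low two bits distinguish the three systems.
#include <iostream>

int main() {
    switch (-1 & 3) {
        case 3: std::cout << "two's complement\n"; break;
        case 2: std::cout << "one's complement\n"; break;
        case 1: std::cout << "sign-magnitude\n";   break;
    }
}
```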
The value representation of floating point types is implementation-defined. Most commonly, the float and double types conform to IEEE 754 and are 32 and 64 bits long (so, for example, float would have a 23-bit fraction field following 8 exponent bits and 1 sign bit). However, the standard guarantees none of this. Floating point types may also have "trap representations", which cause errors when they are used in calculations.
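Whether float actually follows IEEE 754 can be queried through std::numeric_limits<float>::is_iec559; the sketch below also pulls apart the sign, exponent, and fraction fields of 1.5f, assuming a 32-bit float:

```cpp
// Reports whether float conforms to IEEE 754 (IEC 559) and, assuming a
// 32-bit float, splits the object representation of 1.5f into its fields.
#include <cstdint>
#include <cstring>
#include <iostream>
#include <limits>

int main() {
    std::cout << "float is IEEE 754: " << std::boolalpha
              << std::numeric_limits<float>::is_iec559 << '\n';

    float f = 1.5f;
    std::uint32_t bits;
    static_assert(sizeof f == sizeof bits, "expects a 32-bit float");
    std::memcpy(&bits, &f, sizeof bits);

    std::cout << "sign:     " << (bits >> 31) << '\n'          // 0
              << "exponent: " << ((bits >> 23) & 0xFF) << '\n' // 127
              << "fraction: " << (bits & 0x7FFFFF) << '\n';    // 4194304
}
```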