In the following code let us assume for simplicity that float
and uint32_t
have the same size.
void fun(uint32_t* u, float* f) {
float a = *f;
*u = 22;
float b = *f;
printf("%g should equal %g\n", a, b);
}
u
and f
have different base types, and thus the compiler may assume that they point to different objects. There is no possibility that *f
could have changed between the initializations of a
and b
, and so the compiler may optimize the code to something equivalent to
void fun(uint32_t* u, float* f) {
float a = *f;
*u = 22;
printf("%g should equal %g\n", a, a);
}
That is, the second load operation of *f
can be optimized out completely.
If we call this function "normally",
float fval = 4;
uint32_t uval = 77;
fun(&uval, &fval);
all goes well and something like
4 should equal 4
is printed. But if we cheat and pass the same pointer, after converting it,
float fval = 4;
uint32_t* up = (uint32_t*)&fval;
fun(up, &fval);
we violate the strict aliasing rule, and the behavior becomes undefined. The output could be as above, if the compiler has optimized away the second access, or something completely different; in any case, your program ends up in an unreliable state.