About additional precision and unexpected behaviour...
Earlier today someone posted the following code:
float a = 0.12f;
float b = a * 100f;
Console.WriteLine((int)b);          // prints 12
Console.WriteLine((int)(a * 100f)); // prints 11 !!!!!!!!
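To see where the 11 comes from, widen the values to double and look at what the float actually stores. A minimal sketch (assuming a .NET console app; the digits in the comments are what current .NET prints for these exact binary values, older runtimes may format them slightly differently):
float a = 0.12f;
// 0.12 has no exact binary representation; the nearest float is slightly below it.
Console.WriteLine(((double)a).ToString("R"));         // 0.11999999731779099
// The full-precision product is therefore just under 12...
Console.WriteLine(((double)a * 100.0).ToString("R")); // 11.999999731779099
// ...so truncating it yields 11, while rounding it to a 32-bit float
// first snaps it to exactly 12.
Console.WriteLine((float)((double)a * 100.0));        // prints 12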
An (extensive) explanation of this strange behaviour can be found in CLR and floating point: Some answers to common questions… In short, the runtime is permitted to keep intermediate results at a higher precision than float (for instance in 80-bit x87 registers): the inline expression a * 100f is truncated while still carrying that extra precision, so the not-quite-12 product becomes 11, whereas storing it in b first forces it to be rounded to 32 bits, giving exactly 12. A possible way to force the compiler and runtime to get rid of the additional precision is an explicit cast back to float:
Console.WriteLine((int)(float)(a * 100f)); // prints 12
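If the goal is simply "multiply by 100 and get the nearest whole number", rounding instead of truncating sidesteps the precision issue altogether. A small sketch of that alternative (my suggestion, not from the linked article):
float a = 0.12f;
// Math.Round operates at double precision; a product that lands a hair
// below 12 is rounded up to 12.0 before the cast truncates it.
Console.WriteLine((int)Math.Round(a * 100f)); // prints 12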