Loktight 23 points ago +23 / -0

Absolutely correct, and in some programming languages you can get surprising results when you multiply an integer by a decimal number, because the integer might not be promoted to a higher-precision type where you expect, or the fractional part may be silently truncated when the result is stored back into an integer.

I’m more of a hobbyist programmer, but I’ve been at it for 40 years and I can’t think of any reason you would use a double to store what is clearly an integer value. Unless, of course, you might need to do some sort of fractional multiplication on that value later...