Why does 5/2 result in '2' even when I use a float?


Solution 1

5 is an int and 2 is an int. Therefore, 5/2 will use integer division. If you replace 5 with 5.0f (or 2 with 2.0f), making one of the ints a float, you will get floating point division and get the 2.5 you expect. You can also achieve the same effect by explicitly casting either the numerator or denominator (e.g. ((float) 5) / 2).

Solution 2

Why does 5/2 result in '2' even when I use a float?

Because you do not use a float in the division itself. 5/2 is an integer division; only its result (2) gets implicitly converted to a float, becoming 2.0 (mind the dot).

Author: Alex Lord

Currently studying Computing at UWE (University of the West of England) in Bristol. Originally from Newport, South Wales. Work, work and more work during the weekdays, and I like spending time with my girlfriend and friends on the weekends - not amazingly exciting! (What did you expect when you started to read this?!)

Updated on July 05, 2022

Comments

  • Alex Lord, almost 2 years

    I entered the following code (and had no compiling problems or anything):

    float y = 5/2;
    printf("%f\n", y);
    

    The output was simply: 2.000000

    My math isn't wrong, is it? Or am I wrong about the / operator? It means divide, doesn't it? And shouldn't 5/2 equal 2.5?

    Any help is greatly appreciated!