C/C++ rounding up decimals with a certain precision, efficiently


Solution 1

I wrote an integer square root subroutine with nothing more than a couple dozen lines of ASM, with no API calls whatsoever - and it still could only do about 50 million SqRoots/second (this was about five years ago ...).

The point I'm making is that if you're going for billions of calls, even today's technology is going to choke.

But if you really want to make an effort to speed it up, remove as many API usages as humanly possible. This may require you to perform API tasks manually, instead of letting the libraries do it for you. Specifically, remove any type of stream operation. Those are slower than dirt in this context. You may really have to improvise there.
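
For example, the stringstream round trips in the question's code can be replaced with a single snprintf call. A minimal sketch of that idea (format_fixed is a name I made up, not part of the question):

#include <cstdio>
#include <string>

// Format a double to p decimal places with no iostream machinery.
// snprintf avoids the stream-state and locale overhead of stringstream.
std::string format_fixed(double a, int p) {
    char buf[64]; // ample for a fixed-format double with p <= 7
    std::snprintf(buf, sizeof(buf), "%.*f", p, a);
    return std::string(buf);
}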

The only thing left to do after that is to replace as many lines of C++ as you can with custom ASM - but you'll have to be a perfectionist about it. Make sure you are taking full advantage of every CPU cycle and register - as well as every byte of CPU cache and stack space.

You may consider using integer values instead of floating-point ones, as these are far more ASM-friendly and much more efficient. You'd have to multiply the number by 10^7 (or 10^p, depending on how you decide to form your logic) to move the decimal all the way over to the right. Then you could safely convert the floating-point value into a basic integer.
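
A minimal sketch of that conversion, assuming p <= 7 and that the scaled value fits in 64 bits (to_scaled_int is an invented name):

#include <cstdint>
#include <cmath>

// Shift p decimal digits to the left and round to a 64-bit integer,
// so all later comparisons are plain integer work.
int64_t to_scaled_int(double a, int p) {
    static const double pow10[] = {1e0, 1e1, 1e2, 1e3,
                                   1e4, 1e5, 1e6, 1e7};
    return std::llround(a * pow10[p]);
}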

You'll have to rely on the computer hardware to do the rest.

<--Microsoft Specific-->
I'll also add that C++ identifiers (including static ones, as Donnie DeBoer mentioned) are directly accessible from ASM blocks nested into your C++ code. This makes inline ASM a breeze.
<--End Microsoft Specific-->
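
A tiny fragment for illustration (MSVC, 32-bit x86 only; the x64 compiler does not accept __asm blocks, and this would sit inside a function body):

int value = 41;
int result;
__asm {
    mov eax, value   // read the C++ local directly by name
    inc eax
    mov result, eax  // write back into another C++ local
}
// result is now 42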

Solution 2

You could save some major cycles in your posted code by just making that double t[] static, so that it's not allocating it over and over.
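
Applied to the posted function, that is a one-line change:

// built once, reused on every subsequent call
static const double t[] = {0.2, 0.02, 0.002, 0.0002,
                           0.00002, 0.000002, 0.0000002, 0.00000002};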

Solution 3

Depending on what you want the numbers for, you might want to use fixed point numbers instead of floating point. A quick search turns up this.
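
If a whole library is overkill, the core idea fits in a few lines. A hedged sketch (Fixed4 is an invented type, not the linked library): store everything as integer ten-thousandths, so 0.45 is exactly 4500 and comparisons are exact.

#include <cstdint>

struct Fixed4 {
    int64_t raw; // the value scaled by 10^4, e.g. 0.45 -> 4500
};

// No binary rounding error is possible in the comparison itself.
inline bool operator==(Fixed4 a, Fixed4 b) { return a.raw == b.raw; }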

Solution 4

I think you can just add 0.005 for precision to hundredths, 0.0005 for thousandths, etc., snprintf the result with something like "%1.2f" (hundredths; "%1.3f" for thousandths, etc.), and compare the strings. You should be able to table-ize or parameterize this logic.
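
A sketch of that approach, with the truncation step made explicit as Matt J reads it in the comments below (the tables and the helper name are my own):

#include <cstdio>
#include <cstring>
#include <cmath>

// Nudge a up by half a unit at the target precision, truncate to
// p places, print with "%.*f", and compare against b.
bool match_snprintf(double a, const char* b, int p) { // p in [0, 7]
    static const double half[]  = {0.5, 0.05, 0.005, 0.0005,
                                   0.00005, 0.000005, 0.0000005, 0.00000005};
    static const double scale[] = {1e0, 1e1, 1e2, 1e3, 1e4, 1e5, 1e6, 1e7};
    double truncated = std::floor((a + half[p]) * scale[p]) / scale[p];
    char buf[64];
    std::snprintf(buf, sizeof(buf), "%.*f", p, truncated);
    return std::strcmp(buf, b) == 0;
}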

Solution 5

Try this instead:

#include <cmath>

double setprecision(double x, int prec) {
    const double scale = pow(10, (double)prec); // 10^prec, computed once
    return ceil(x * scale - .4999999999999) / scale;
}

It's probably faster. You might also try inlining it, but measure the result; inlining can hurt as easily as it helps here.

Example of how it works:

2.345 * 100 (10 to the 2nd power) = 234.5
234.5 - 0.4999999999999 = 234.0000000000001
ceil(234.0000000000001) = 235
235 / 100 (10 to the 2nd power) = 2.35

The .4999999999999 was chosen because of the precision of a C++ double on a 32-bit system. If you're on a 64-bit platform you'll probably need more nines. If you increase the nines further on a 32-bit system, it overflows and rounds down instead of up, i.e. 234.00000000000001 gets truncated to 234 in a double in (my) 32-bit environment.
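
A quick check, assuming the setprecision function above is in scope (expected output in the comments):

#include <cstdio>

int main() {
    std::printf("%.2f\n", setprecision(2.345, 2)); // 2.35
    std::printf("%.1f\n", setprecision(0.45, 1));  // 0.5
    std::printf("%.2f\n", setprecision(0.774, 2)); // 0.77
}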

Author: Milan
Updated on June 04, 2022

Comments

  • Milan
    Milan almost 2 years

    I'm trying to optimize the following. The code below does this:

    If a = 0.775 and I need precision 2 dp then a => 0.78

    Basically, if the last digit is 5, it rounds the next digit upwards; otherwise it doesn't.

    My problem was that 0.45 doesn't round to 0.5 at 1 decimal point, as the value is saved as 0.44999999343... and setprecision rounds it to 0.4.

    That's why setprecision is forced higher, to setprecision(p+10), and then, if the value really ends in a 5, a small amount is added in order to round up correctly.

    Once done, it compares a with string b and returns the result. The problem is that this function is called a few billion times, making the program crawl. Any better ideas on how to rewrite/optimize this, and which functions in the code are so heavy on the machine?

    #include <string>
    #include <sstream>
    #include <iomanip>
    using namespace std;
    
    bool match(double a, string b, int p) { // p = precision, no greater than 7 dp
    
        // offsets that force the round-up, indexed by precision p
        double t[] = {0.2, 0.02, 0.002, 0.0002, 0.00002, 0.000002, 0.0000002, 0.00000002};
    
        stringstream buff;
        string temp;
    
        buff << setprecision(p + 10) << setiosflags(ios_base::fixed) << a; // p + 10 decimals
        buff >> temp;
    
        // the digit 10 places from the end is the first of the extra digits
        if (temp[temp.size() - 10] == '5') a += t[p]; // help to round upwards
    
        ostringstream test;
        test << setprecision(p) << setiosflags(ios_base::fixed) << a;
        temp = test.str();
    
        return b.compare(temp) == 0;
    }
    
  • Giffyguy
    Giffyguy almost 15 years
    But that would make it round up no matter what. I think he is trying to round down when appropriate as well.
  • Giffyguy
    Giffyguy almost 15 years
    This is excellent advice as well. Static vars can definitely speed things up when used in the correct context.
  • Giffyguy
    Giffyguy almost 15 years
    In fact that array is a perfect candidate for "static const"
  • Giffyguy
    Giffyguy almost 15 years
    static const will cause your array to not exist at all in the final code. The compiler will directly inject the individual constant values when they are used, instead of having to dereference the pointer and perform the pointer arithmetic to access the array.
  • Donnie DeBoer
    Donnie DeBoer almost 15 years
    Though I originally suggested that multiplying by 10^p would work, it doesn't (at least not in general). Floating-point numbers are inherently imprecise. When you store a value as a floating point, you lose information about the original value. If the F.P. value is 0.4449999, you cannot reasonably assume that represents 0.45. Therefore, multiplying to move digits around doesn't actually "fix" anything; it introduces additional imprecision (e.g. multiplying by 100, which actually may be stored as 100.0124) that turns out looking like what you expected, but it's not a general solution. (See the short demonstration after these comments.)
  • Giffyguy
    Giffyguy almost 15 years
    Another excellent point. This is something we have to take into account, considering that floating-point operations need to be avoided if the code is going to run any faster. I generally believe that this is the point where the term "approximation" really takes meaning. What are the chances of the number being so close to .5 that it could round the wrong way? Floating-point numbers are not 100% precise, and there's generally no way to get around that fact. I believe that at this point we just have to assume that it will round correctly 99.9% of the time, and be satisfied with that statistic.
  • Giffyguy
    Giffyguy almost 15 years
    Good thinking! See my comment on Donnie DeBoer's second answer.
  • MSalters
    MSalters almost 15 years
    Doesn't hurt of course, but it's no guaranteed win either. The compiler/optimizer can already see it's effectively static const anyway.
  • Matt J
    Matt J over 14 years
    Actually, I think this works (don't think it will be fast though). For example, to round to the nearest hundredth, 1.014 would round down to 1.01 (1.014 + .005 = 1.019, truncate to 2 decimal places), but 1.017 would round up to 1.02 (1.017 + .005 = 1.022, truncate to 2 decimal places).
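
A quick way to see the imprecision Donnie DeBoer describes (a standalone sketch; the printed digits are what a typical IEEE-754 double actually stores):

#include <cstdio>

int main() {
    std::printf("%.17g\n", 0.45);          // 0.45000000000000001
    std::printf("%.17g\n", 2.675);         // 2.6749999999999998
    // "moving the digits" inherits and compounds that error:
    std::printf("%.17g\n", 2.675 * 100.0); // 267.49999999999997
}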