Performance wise, how fast are Bitwise Operators vs. Normal Modulus?

38,751

Solution 1

Unless you're using an ancient compiler, it can already handle this level of conversion on its own. That is to say, a modern compiler can and will implement i % 2 using a bitwise AND instruction, provided it makes sense to do so on the target CPU (which, in fairness, it usually will).

In other words, don't expect to see any difference in performance between these, at least with a reasonably modern compiler with a reasonably competent optimizer. In this case, "reasonably" has a pretty broad definition too--even quite a few compilers that are decades old can handle this sort of micro-optimization with no difficulty at all.

Solution 2

TL;DR Write for semantics first, optimize measured hot-spots second.

At the CPU level, integer modulus and division are among the slowest operations. But you are not writing at the CPU level; instead you write in C++, which your compiler translates to an Intermediate Representation, which is finally translated into assembly for the model of CPU you are compiling for.

In this process, the compiler will apply Peephole Optimizations, among which figure Strength Reduction Optimizations such as (courtesy of Wikipedia):

Original Calculation  Replacement Calculation
y = x / 8             y = x >> 3
y = x * 64            y = x << 6
y = x * 2             y = x << 1
y = x * 15            y = (x << 4) - x

The last example is perhaps the most interesting one. Whilst multiplying or dividing by powers of 2 is easily converted (manually) into bit-shift operations, the compiler is generally taught to perform even smarter transformations that you would probably not think of on your own and which are not as easily recognized (at the very least, I do not personally immediately recognize that (x << 4) - x means x * 15).

Solution 3

This is obviously CPU dependent, but you can expect that bitwise operations will never take more, and typically take less, CPU cycles to complete. In general, integer / and % are famously slow, as CPU instructions go. That said, with modern CPU pipelines having a specific instruction complete earlier doesn't mean your program necessarily runs faster.

Best practice is to write code that's understandable, maintainable, and expressive of the logic it implements. It's extremely rare that this kind of micro-optimisation makes a tangible difference, so it should only be used if profiling has indicated a critical bottleneck and this is proven to make a significant difference. Moreover, if on some specific platform it did make a significant difference, your compiler optimiser may already be substituting a bitwise operation when it can see that's equivalent (this usually requires that you're /-ing or %-ing by a constant).

For whatever it's worth, on x86 specifically - when the divisor is a runtime-variable value, and so can't be trivially optimised into e.g. bit-shifts or bitwise-ANDs - the time taken by / and % operations in CPU cycles can be looked up here. There are too many x86-compatible chips to list, but as an arbitrary example of a recent CPU: in Agner's "Sunny Cove (Ice Lake)" (i.e. 10th-gen Intel Core) data, DIV and IDIV instructions have a latency of between 12 and 19 cycles, whereas bitwise-AND has a latency of 1 cycle. On many older CPUs DIV can be 40-60x worse.

Solution 4

By default you should use the operation that best expresses your intended meaning, because you should optimize for readable code. (Today most of the time the scarcest resource is the human programmer.)

So use & if you extract bits, and use % if you test for divisibility, i.e. whether the value is even or odd.

For unsigned values both operations have exactly the same effect, and your compiler should be smart enough to replace the division by the corresponding bit operation. If you are worried you can check the assembly code it generates.

Unfortunately, integer division is slightly irregular on signed values: it rounds towards zero, and the sign of the result of % follows the sign of the first operand. A right shift, on the other hand, always rounds down. So the compiler cannot simply replace the division by a bit operation. Instead it may either call a routine for integer division, or replace it with bit operations plus additional logic to handle the irregularity. This may depend on the optimization level and on which of the operands are constants.

This irregularity at zero may even be a bad thing, because it is a nonlinearity. For example, I recently had a case where we used division on signed values from an ADC, which had to be very fast on an ARM Cortex M0. In this case it was better to replace it with a right shift, both for performance and to get rid of the nonlinearity.

Solution 5

C operators cannot be meaningfully compared in terms of "performance". There's no such thing as "faster" or "slower" operators at language level. Only the resultant compiled machine code can be analyzed for performance. In your specific example the resultant machine code will normally be exactly the same (if we ignore the fact that the first condition includes a postfix increment for some reason), meaning that there won't be any difference in performance whatsoever.

Author: Maven

Updated on July 31, 2022

Comments

  • Maven
    Maven almost 2 years

    Does using bitwise operations in normal flow or conditional statements like for, if, and so on increase overall performance and would it be better to use them where possible? For example:

    if(i++ & 1) {
    
    }
    

    vs.

    if(i % 2) {
    
    }
    
  • legends2k
    legends2k over 10 years
+1 for stressing readability; usually optimization comes in the last phase, and at best we can try not to make our code extremely alien.
  • Potatoswatter
    Potatoswatter over 10 years
    ! has higher precedence than &, and Boolean values are numbers equal to zero or one, so (! i & 1) is just the same as !i.
  • yosim
    yosim over 10 years
The (!i) will invert all bits. The bitwise &1 will check if the least significant bit is on. Therefore if i was even, its least significant bit will be 0, the least significant bit of (!i) will be 1, and therefore performing bitwise (&1) will be true.
  • Potatoswatter
    Potatoswatter over 10 years
    You're thinking of ~, not !.
  • rici
    rici over 10 years
    @yosim: !i is bool, returning 0 or 1. It's not bitwise. You may be thinking of ~.
  • supercat
    supercat over 10 years
    If all values are signed integer types, the expression a=b & 1; will on every standards-compliant implementation I know evaluate faster than a=b % 2;, since the latter expression is equivalent to a= b < 0 ? -(b & 1) : b & 1;. If the only thing done with the result is testing for zero, an optimizer may be able to recognize that the b<0 and b>=0 cases are equivalent, but I wouldn't particularly expect that optimization.
  • supercat
    supercat over 10 years
I wonder how much existing code relies upon which aspects of the signed-number behavior of / and %? The only use I know of for negative results from % is for code that wants to subtract one from the result of / when % yields a negative number. In multiple decades of programming I think I've once encountered a case where the symmetry of truncate-toward-zero was actually useful, and many where periodic behavior would have been better. What's especially ironic is that ANSI started mandating truncate-toward-zero around the time when...
  • supercat
    supercat over 10 years
    ...it probably ceased being the faster behavior in the majority of common cases (since newer processors can perform long multiplications quickly, and floored division by constants can be evaluated using long multiplication more efficiently than truncated division).
  • Cody Gray
    Cody Gray almost 8 years
It is unfortunate that, though this is the only correct answer, it appears at the very bottom of the page. Too bad my single +1 won't solve that problem. The "always trust your compiler to do the most optimal thing" mantra is not altogether wrong, but it can be very misleading when followed blindly.
  • phuclv
    phuclv over 6 years
    which architecture is this? and did you compile with optimizations on?
  • user9164692
    user9164692 over 6 years
    This was generated on IBM z/OS using GCC 4.6 compiler. -O3 optimization level
  • phuclv
    phuclv over 6 years
I have no idea what those instructions are. They're nowhere near as common as x86 or even PowerPC, and the mnemonics aren't even readable. Besides, counting the number of instructions is not a good metric, because a snippet with more instructions might run faster than a shorter one. What's important is how heavy an instruction is, like how div's throughput and latency compare to shifts. This is not a good answer; it lacks explanation and doesn't provide any information to readers.
  • potato
    potato almost 4 years
    @supercat There is a way to avoid branching here, just add the sign bit to the mask: a = b & 0b1000....0001
  • supercat
    supercat almost 4 years
    @potato: That wouldn't yield correct results on a two's-complement machine. There are branchless ways to achieve the result, and compilers in fact generate them, but they're more complicated than the code for b & 1.