Time Complexity of bit operations

  1. The leading 0 bits have to be evaluated, unless you store the index of the MSB and write your own routine to do the comparison.

  2. Bit-shifting is an N-bit operation.

  3. Masking is also an N-bit operation.

  4. It depends on how you represent it internally. It's relatively easy to jump to the correct byte, but high-level languages (if you're using one) usually don't let you address a specific bit directly; you'll need some operation (e.g. a bit-shift of that byte only) to extract it. See the sketch right after this list.

  5. Concatenating strings takes O(m+n), where m and n are the lengths of the respective strings. Every string implementation I know of is laid out sequentially in memory, and you don't necessarily have access to the memory after either string (unless you enforce this by over-allocating), so both must be copied to a new location. Though nothing is stopping you from creating your own data structure (a rope, for example) to avoid the copy.
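
To make point 4 concrete, here is a minimal C sketch, assuming the number is stored as a little-endian array of bytes (that representation, and the helper name, are illustrative assumptions, not anything from the question):

    #include <stdint.h>
    #include <stddef.h>

    /* Read bit i of a number stored as a little-endian byte array.
       Indexing jumps straight to the right byte in O(1); only a short
       shift within that single byte is needed, not a scan over all
       preceding bits. */
    static int get_bit(const uint8_t *bits, size_t i)
    {
        return (bits[i >> 3] >> (i & 7)) & 1;
    }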

So ... just a straight bit-by-bit or byte-by-byte comparison (possibly with the starting-position optimization mentioned in 1; see the comparison sketch below) is probably the best you're going to get.
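
For illustration, that comparison with the stored-MSB optimization might look something like this in C; the struct layout and function names are my own assumptions, and it presumes each value is non-zero and keeps its MSB index up to date:

    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        const uint8_t *bits; /* little-endian byte array     */
        size_t msb;          /* index of the highest set bit */
    } bignum;

    static int get_bit(const uint8_t *bits, size_t i)
    {
        return (bits[i >> 3] >> (i & 7)) & 1;
    }

    /* Returns <0, 0 or >0, like memcmp. Differing MSB indices settle
       the comparison immediately; otherwise scan downward from the
       shared MSB until the bits differ (worst case still O(N)). */
    static int bignum_cmp(const bignum *a, const bignum *b)
    {
        if (a->msb != b->msb)
            return a->msb < b->msb ? -1 : 1;
        for (size_t i = a->msb + 1; i-- > 0; ) {
            int d = get_bit(a->bits, i) - get_bit(b->bits, i);
            if (d != 0)
                return d;
        }
        return 0;
    }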

PS - Any posted research should include sufficient proof (in some form or another) of why it is better than the alternatives.


Comments

  • Rhuaidhri Tynan almost 2 years

    As a result of some original research, and the need to develop tools for it, I've come up with some new and (I hope) better/quicker ways of performing certain mathematical operations. At the moment I'm working on the pseudocode so I can post them on the site in response to questions which have already been asked.

    However, before I do that I want to optimise them as much as possible, so I need someone to clarify how bit operations work from a time-complexity perspective.

    For example, say I want to evaluate which of two 8-bit integers is larger. I'm using 8 bits as an example, but in reality they could be much larger.

    10000100

    10100000

    As it stands, the relational operator can be evaluated after comparing only the leading bits (these two first differ at the third MSB). Notionally I could subtract 10000000 from both without affecting the inequality.

    00000100

    00100000

    • Q1. This would speed up the evaluation of the relational operator if the reading starts from the MSB, but does it? Or do the leading 0's have to be evaluated anyway? Obviously the subtraction isn't worth doing, since subtracting 10000000 is itself an 8-bit operation, but if instead I could set or clear the MSB or specific bits using a one- or two-bit operation, then it could be worthwhile.
    • Q2. Two methods I can think of that might fit the bill are bit-shifting left and then right to destroy the leading 1, or using a mask (see the sketch after these questions), but are there other methods? I'm particularly interested in ones which might let you set any bit, not just the leading bits. If it's specific to a certain language, just let me know that please.
    • Q3. Since masks are N bits, is using a mask not an N-bit operation itself?
    • Q4. What about evaluating a specific bit: what methods exist, and how time-complex is the operation? Do all preceding bits have to be read in first, making it an N-bit operation, or can you "jump" to a certain bit?
    • Q5. Two strings being concatenated, from a time-complexity perspective: does that happen by associating the two memory addresses with each other, or does one string get read and copied into the other, so that it's an O(string length) operation?

    Thanks.
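
    As a concrete reading of Q2, here is how both candidates look in C on a fixed-width 8-bit value (the function names are mine; note that, per the answer above, both still operate on all N bits of the word):

        #include <stdint.h>

        /* Clear the top k bits of an 8-bit value (0 <= k <= 8), two ways.
           Both are whole-word operations: every bit of the byte
           participates, so each is still an N-bit operation. */
        static uint8_t clear_top_shift(uint8_t x, unsigned k)
        {
            return (uint8_t)((uint8_t)(x << k) >> k); /* shift up, then back */
        }

        static uint8_t clear_top_mask(uint8_t x, unsigned k)
        {
            return x & (uint8_t)(0xFFu >> k);         /* AND with a low mask */
        }

    For example, clear_top_shift(0x84, 1) and clear_top_mask(0x84, 1) both turn 10000100 into 00000100.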

    Further clarification. I've been rereading the notes I pulled from a few places, and although Dukeling confirmed what I thought should be the case, I need to triple-check. Take, for example, the simple multiplication algorithm. I see in numerous places the claim that this is a log^2(N) operation, the reason given being that it consists of log(N) additions of log(N)-bit numbers. The problem I have with this is that, although true, it ignores the fact that each of those log(N) numbers is the result of a bit-shift, and by the end we will have bit-shifted at least log(N) times. As a bit-shift by 1 is itself a log(N) operation, the bit-shifts considered alone give us log^2(N) operations (see the sketch below). It therefore makes no sense to me when I see it further claimed that in practice multiplication doesn't use log^2(N) operations because various methods can reduce the number of required additions: since the bit-shifting alone gives us log^2(N), I'm left confused as to how that claim can be true.
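
    For reference, this is the schoolbook shift-and-add multiplication being discussed, sketched in C on fixed-width integers (so it computes a*b modulo 2^64; on a big-integer type each shift and each addition would itself be an O(log N)-bit operation):

        #include <stdint.h>

        /* Schoolbook binary multiplication: one 1-position shift per bit
           of b, plus one addition for each set bit of b. With log(N) bits
           that is log(N) shifts and up to log(N) additions; on big
           integers each is itself an O(log N)-bit operation, giving
           O(log^2 N) bit operations in total. */
        static uint64_t mul_shift_add(uint64_t a, uint64_t b)
        {
            uint64_t result = 0;
            while (b) {
                if (b & 1)          /* current bit of b is set, so  */
                    result += a;    /* add the shifted multiplicand */
                a <<= 1;            /* shift multiplicand one place */
                b >>= 1;            /* move on to the next bit of b */
            }
            return result;
        }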

  • Rhuaidhri Tynan almost 11 years
    Thanks Dukeling, that almost clears it up for me. Given that is the case, I can't see why so many people seem to suggest that these operations significantly reduce complexity when used in place of subtraction etc. Just to double-check: when you say bit-shifting is N bits, is N the bit-size or the number of shifts? In my example, would the left-then-right shift be 2 bits or 16 (since all 8 bits have to move both ways, even if we're only moving them one spot)? I'll use the starting-position idea, as it seems to be the best way to reduce the number of bit-by-bit comparisons.
  • Bernhard Barker almost 11 years
    Bit-wise operations (bit-shift, AND, OR, XOR) are slightly faster than arithmetic ones (addition, subtraction, multiplication, division) because of how computers work, but they are still N-bit operations, and using a bit-wise operation is generally a micro-optimization (meaning, in general, not worth the effort). All 8 bits will have to get shifted; the processor doesn't know which bits are 0 and which are 1.
  • Rhuaidhri Tynan almost 11 years
    Edited the post with an example I'm hoping you can clarify. Thanks.
  • Bernhard Barker almost 11 years
    If all bits are set, the shifting amounts to log^2 n bit operations in total, plus log n additions (each O(log n)), resulting in O(log^2 n). Now let's say we scan the one number (O(log n)) and see only one bit is set. Now we only need one shift (O(log n)) and no additions, resulting in O(2 log n) = O(log n) performance. I'm not sure what actually happens in practice, but it's a possibility. There are also more complex multiplication algorithms, but these may be too complex to be more efficient here.
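
    That single-set-bit shortcut might look like this in C (a sketch under the same assumptions as above; whether real big-integer libraries take exactly this path is not something the thread establishes):

        #include <stdint.h>

        /* If b is a power of two (exactly one set bit), a * b is just a
           shifted by that bit's position: one scan plus one shift and no
           additions, i.e. O(log n) instead of O(log^2 n) on big integers. */
        static uint64_t mul_with_fast_path(uint64_t a, uint64_t b)
        {
            if (b != 0 && (b & (b - 1)) == 0) {       /* power-of-two test   */
                unsigned k = 0;
                while (!(b & 1)) { b >>= 1; ++k; }    /* position of the bit */
                return a << k;
            }
            uint64_t result = 0;                      /* general fallback:   */
            while (b) {                               /* shift-and-add loop  */
                if (b & 1) result += a;
                a <<= 1;
                b >>= 1;
            }
            return result;
        }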