Generate an integer that is not among four billion given ones


Solution 1

Assuming that "integer" means 32 bits: 10 MB of space is more than enough for you to count how many numbers there are in the input file with any given 16-bit prefix, for all possible 16-bit prefixes in one pass through the input file. At least one of the buckets will have be hit less than 216 times. Do a second pass to find of which of the possible numbers in that bucket are used already.

If it means more than 32 bits, but still of bounded size: Do as above, ignoring all input numbers that happen to fall outside the (signed or unsigned; your choice) 32-bit range.

If "integer" means mathematical integer: Read through the input once and keep track of the largest number length of the longest number you've ever seen. When you're done, output the maximum plus one a random number that has one more digit. (One of the numbers in the file may be a bignum that takes more than 10 MB to represent exactly, but if the input is a file, then you can at least represent the length of anything that fits in it).

Solution 2

Statistically informed algorithms solve this problem using fewer passes than deterministic approaches.

If very large integers are allowed, then one can generate a number that is likely to be unique in O(1) time. A pseudo-random 128-bit integer such as a GUID will collide with one of the existing four billion integers in the set in fewer than one out of every 64 billion billion billion cases (the collision probability is roughly 2^32 / 2^128 = 2^-96).

If integers are limited to 32 bits, then one can generate a number that is likely to be unique in a single pass using much less than 10 MB. The odds that a pseudo-random 32-bit integer will collide with one of the 4 billion existing integers are about 93% (4e9 / 2^32). The odds that 1000 pseudo-random integers will all collide are about one in 8,000 billion billion billion (odds-of-one-collision ^ 1000). So if a program maintains a data structure containing 1000 pseudo-random candidates and iterates through the known integers, eliminating matches from the candidates, it is all but certain to find at least one integer that is not in the file.
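
A minimal sketch of that idea, again assuming the hypothetical binary file of raw unsigned 32-bit integers used earlier; rand() stands in for a better PRNG, and the linear scan over candidates stands in for the sorted array or hash set a production version would use:

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define CANDIDATES 1000

int main(void) {
    uint32_t cand[CANDIDATES];
    int alive[CANDIDATES];
    uint32_t n;
    FILE *f = fopen("numbers.bin", "rb");

    for (int i = 0; i < CANDIDATES; i++) {
        cand[i] = ((uint32_t)rand() << 16) ^ (uint32_t)rand();  /* crude 32-bit draw */
        alive[i] = 1;
    }

    while (fread(&n, sizeof n, 1, f) == 1)        /* one pass over the file */
        for (int i = 0; i < CANDIDATES; i++)
            if (alive[i] && cand[i] == n)
                alive[i] = 0;                     /* candidate eliminated */
    fclose(f);

    for (int i = 0; i < CANDIDATES; i++)
        if (alive[i]) {
            printf("%u\n", cand[i]);              /* almost surely reached */
            return 0;
        }
    return 1;  /* astronomically unlikely: retry with fresh candidates */
}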

Solution 3

This problem is discussed in detail in Jon Bentley, "Column 1: Cracking the Oyster", Programming Pearls, Addison-Wesley, pp. 3-10.

Bentley discusses several approaches, including external sorting and a merge sort over several external files, but the method he recommends is a single-pass algorithm using bit fields, which he humorously calls "Wonder Sort" :) Coming to the problem, 4 billion numbers can be represented in:

4 billion bits = (4,000,000,000 / 8) bytes = 500 MB, or about 0.466 GB

The code to implement the bitset is simple (taken from the book's solutions page):

#define BITSPERWORD 32
#define SHIFT 5                      /* log2(BITSPERWORD) */
#define MASK 0x1F                    /* BITSPERWORD - 1 */
#define N 10000000                   /* number of bits (Bentley's original size) */
int a[1 + N/BITSPERWORD];

void set(int i) {        a[i>>SHIFT] |=  (1<<(i & MASK)); }  /* set bit i */
void clr(int i) {        a[i>>SHIFT] &= ~(1<<(i & MASK)); }  /* clear bit i */
int  test(int i){ return a[i>>SHIFT] &   (1<<(i & MASK)); }  /* test bit i */

Bentley's algorithm makes a single pass over the file, setting the appropriate bit in the array for each number, and then examines this array using the test function above to find a missing number.
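
By way of illustration, a driver for that single pass might look like the sketch below. It reuses set and test from above, keeps Bentley's N, and assumes one decimal integer per line in a hypothetical "numbers.txt":

#include <stdio.h>

int main(void) {
    FILE *f = fopen("numbers.txt", "r");
    int i;

    while (fscanf(f, "%d", &i) == 1)   /* single pass: mark each number seen */
        set(i);                        /* assumes 0 <= i < N */
    fclose(f);

    for (i = 0; i < N; i++)
        if (!test(i)) {                /* first bit still clear */
            printf("%d\n", i);
            break;
        }
    return 0;
}

For the full 32-bit range, N would grow to 2^32 and the array to the 0.466 GB computed above (beyond a plain int index, so real code would switch to unsigned types).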

If the available memory is less than 0.466 GB, Bentley suggests a k-pass algorithm, which divides the input into ranges depending on the available memory. To take a very simple example, if only 1 byte (i.e. memory to handle 8 numbers) were available and the range were 0 to 31, we would divide this into the ranges 0-7, 8-15, 16-23, and 24-31, and handle one range in each of 32/8 = 4 passes.
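
Under the same hypothetical binary-file format as the earlier sketches, the k-pass variant might look like this, with RANGE shrunk to fit the memory budget (an 8 MB bitmap covering 2^26 numbers per pass gives 2^32 / 2^26 = 64 passes within the 10 MB limit):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define RANGE (1u << 26)   /* numbers per pass: an 8 MB bitmap, 64 passes */

int main(void) {
    static uint8_t bits[RANGE / 8];
    uint32_t n;
    FILE *f = fopen("numbers.bin", "rb");

    for (uint64_t lo = 0; lo < (1ull << 32); lo += RANGE) {
        memset(bits, 0, sizeof bits);  /* reuse the bitmap for each range */
        rewind(f);
        while (fread(&n, sizeof n, 1, f) == 1)
            if (n >= lo && n < lo + RANGE) {
                uint32_t off = (uint32_t)(n - lo);
                bits[off >> 3] |= (uint8_t)(1u << (off & 7));
            }
        for (uint32_t off = 0; off < RANGE; off++)
            if (!(bits[off >> 3] & (1u << (off & 7)))) {
                printf("%llu\n", (unsigned long long)(lo + off));
                fclose(f);
                return 0;
            }
    }
    fclose(f);
    return 1;
}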

HTH.

Solution 4

Since the problem does not specify that we have to find the smallest possible number that is not in the file, we could just generate a number that is longer than the input file itself. :)

Solution 5

For the 1 GB RAM variant you can use a bit vector. You need to allocate 4 billion bits == a 500 MB byte array. For each number you read from the input, set the corresponding bit to '1'. Once you're done, iterate over the bits and find the first one that is still '0'. Its index is the answer.


Comments

  • SecureFish
    SecureFish almost 2 years

    I have been given this interview question:

    Given an input file with four billion integers, provide an algorithm to generate an integer which is not contained in the file. Assume you have 1 GB memory. Follow up with what you would do if you have only 10 MB of memory.

    My analysis:

    The size of the file is 4 × 10^9 × 4 bytes = 16 GB.

    We can do external sorting, thus letting us know the range of the integers.

    My question is what is the best way to detect the missing integer in the sorted big integer sets?

    My understanding (after reading all the answers):

    Assuming we are talking about 32-bit integers, there are 2^32 ≈ 4 × 10^9 distinct integers.

    Case 1: we have 1 GB = 1 × 10^9 × 8 bits = 8 billion bits of memory.

    Solution:

    If we use one bit to represent each distinct integer, that is enough. We don't need to sort.

    Implementation:

    import java.io.FileNotFoundException;
    import java.io.FileReader;
    import java.util.Scanner;

    int radix = 8;
    byte[] bitfield = new byte[1 << 29];   // 2^32 / 8 bytes = 512 MB, fits in 1 GB
    void F() throws FileNotFoundException{
        Scanner in = new Scanner(new FileReader("a.txt"));
        while(in.hasNextInt()){
            long n = in.nextInt() & 0xFFFFFFFFL;   // treat as unsigned 32-bit
            bitfield[(int)(n/radix)] |= (1 << (int)(n%radix));
        }
    
        for(int i = 0; i < bitfield.length; i++){
            for(int j = 0; j < radix; j++){
                if( (bitfield[i] & (1<<j)) == 0 ){
                    System.out.println((long)i*radix + j);   // a missing number
                    return;
                }
            }
        }
    }
    

    Case 2: 10 MB memory = 10 × 10^6 × 8 bits = 80 million bits

    Solution:

    There are 2^16 = 65536 possible 16-bit prefixes, so we need to build 65536 buckets. Each bucket needs a 4-byte counter to hold all possibilities, because in the worst case all 4 billion integers belong to the same bucket; that is 2^16 × 4 × 8 = 2 million bits in total.

    1. Build the counter of each bucket during the first pass through the file.
    2. Scan the buckets and find the first one that has fewer than 65536 hits.
    3. During a second pass through the file, build new buckets (suffix counters) for only those numbers whose high 16-bit prefix is the one found in step 2.
    4. Scan the buckets built in step 3 and find the first one that doesn't have a hit.

    The code is very similar to above one.

    Conclusion: we decrease memory by increasing the number of passes through the file.


    A clarification for those arriving late: The question, as asked, does not say that there is exactly one integer that is not contained in the file—at least that's not how most people interpret it. Many comments in the comment thread are about that variation of the task, though. Unfortunately the comment that introduced it to the comment thread was later deleted by its author, so now it looks like the orphaned replies to it just misunderstood everything. It's very confusing, sorry.

  • Ryan Amos
    Ryan Amos almost 13 years
    One crucial part is that you should be doing it as you go, that way you only have to read once. Accessing physical memory is slow.
  • ratchet freak
    ratchet freak almost 13 years
    @ryan external sort is in most cases a merge sort so on the last merge you can do the check :)
  • dty
    dty almost 13 years
    0.5 Gb, unless you've redefined byte to be 4 bits ;-)
  • Tony Ennis
    Tony Ennis almost 13 years
    If the data is on disk, it will have to be loaded into memory. This happens automatically by the file system. If we have to find one number (the problem does not state otherwise) then reading the sorted file a line at a time is the most efficient method. It uses little memory and is no slower than anything else - the file must be read.
  • Mark Ransom
    Mark Ransom almost 13 years
    The range of numbers in the input isn't specified. How does this algorithm work if the input consists of all the even numbers between 8 billion and 16 billion?
  • hmakholm left over Monica
    hmakholm left over Monica almost 13 years
    @Mark, just ignore inputs that are outside the 0..2^32 range. You're not going to output any of them anyway, so there's no need to remember which of them to avoid.
  • hmakholm left over Monica
    hmakholm left over Monica almost 13 years
    ... and how will that fit into 10 MB?
  • corsiKa
    corsiKa almost 13 years
    @dty I think he means "comfortably", as in there will be lots of room in the 1Gb.
  • corsiKa
    corsiKa almost 13 years
    Perfect. Your first answer requires only 2 passes through the file!
  • dty
    dty almost 13 years
    Here's a solution that works... except when it doesn't. Useful! :-)
  • corsiKa
    corsiKa almost 13 years
    @Mark whatever algorithm you use to determine how a 32 bit string maps to a real number is up to you. The process is still the same. The only difference is how you print it as a real number to the screen.
  • brady
    brady almost 13 years
    I'm pretty sure that "integer" means 32-bit int, otherwise, you could just track the maximum value and return max + 1. Also, the OP's analysis states that integers are 32 bits. But maybe he misunderstood the interview question.
  • Mark Ransom
    Mark Ransom almost 13 years
    A 10 MB bignum? That's pretty extreme.
  • flolo
    flolo almost 13 years
    I dont know the book, but no reason to call it "Wonder Sort", as it is just a bucketsort, with a 1-bit counter.
  • vine'th
    vine'th almost 13 years
    In the book, Jon Bentley is just being humorous...and calls it "Wonder Sort"
  • Adrian Petrescu
    Adrian Petrescu almost 13 years
    I'm pretty sure the integers are bounded. If they weren't, then even a beginner programmer would think of the algorithm "take one pass through the data to find the maximum number, and add 1 to it"
  • Christoph Wurm
    Christoph Wurm almost 13 years
    @Henning: In the second part, what do you mean with ignoring numbers outside the 32-bit range?
  • Jack V.
    Jack V. almost 13 years
    Good answer. Since the output format wasn't specified, for the "arbitrary integers in some bignum encoding" case, I was just going to suggest printf("googolplex\n"), which is not really an algorithm, but works :)
  • Petr Peller
    Petr Peller almost 13 years
    What if the MAXINT is included in the file?
  • starblue
    starblue almost 13 years
    Instead of iterating yourself you can use bitSet.nextClearBit(0): download.oracle.com/javase/6/docs/api/java/util/…
  • hmakholm left over Monica
    hmakholm left over Monica almost 13 years
    @Legate, just skip overlarge numbers and do nothing about them. Since you're not going to output an overlarge number anyway, there's no need to keep track of which of them you've seen.
  • Jonathan Dickinson
    Jonathan Dickinson almost 13 years
    How about: limit the depth of the BTree to something that would fit into 10MB; this would mean you would have results in the set { false positive | positive } and you could iterate through that and use other techniques to find values.
  • leo7r
    leo7r almost 13 years
    It would be useful to mention that, regardless of the range of the integers, at least one bit is guaranteed to be 0 at the end of the pass. This is due to the pigeonhole principle.
  • oosterwal
    oosterwal almost 13 years
    @Petr Peller: A BIGINT library would essentially remove limitations on integer size.
  • Jonathan Dickinson
    Jonathan Dickinson almost 13 years
    Such an awesome answer. This would actually work; and has guaranteed results.
  • Yousf
    Yousf almost 13 years
    The good thing about Solution 1, is that you can decrease the memory by increasing passes.
  • PearsonArtPhoto
    PearsonArtPhoto almost 13 years
    This would work unless the max int was in the file, which is entirely possible...
  • Rag
    Rag almost 13 years
    Literally guessing a random output probably won't get you many points on an interview
  • Rag
    Rag almost 13 years
    Although more portable, this code will be annihilated by code written to utilize hardware-supported vector instructions. I think gcc can automatically convert code to using vector operations in some cases though.
  • jcolebrand
    jcolebrand almost 13 years
    @Brian I think this is more like "super naive code" which is often the most debuggable, but never the most optimized.
  • jcolebrand
    jcolebrand almost 13 years
    @Adrian that was my solution ;) requires one pass through the file and meets the requirements.
  • jcolebrand
    jcolebrand almost 13 years
    Or if you don't want to use a BitSet ... try an array of bits. Does the same thing ;)
  • jcolebrand
    jcolebrand almost 13 years
    When memory is the constraint and not time, this is probably the best method. Keep re-reading the file over and over
  • user606723
    user606723 almost 13 years
    unless you have memory the same speed as your processor, Henning Makholm's solution is better no matter the amount of ram.
  • Mark Ransom
    Mark Ransom almost 13 years
    @Adrian, your solution seems obvious (and it was to me, I used it in my own answer) but it's not obvious to everybody. It's a good test to see if you can spot obvious solutions or if you're going to over-complicate everything you touch.
  • Admin
    Admin almost 13 years
    I am a bit confused at solution 1. Are you saying ignore the high 16bits and count how many times you see the low 16bits? that would require (2^16(bytes for low 16bits) * 3(bytes required to count)) which is 192k so that works but i just wanted to make sure i didnt get it vastly confused.
  • hmakholm left over Monica
    hmakholm left over Monica almost 13 years
    @acidzombie: Actually I was saying ignore the low 16 bits and count the number of times you see each possibility for the high 16 bits. But doing it the other way around works equally well.
  • David Heffernan
    David Heffernan almost 13 years
    @brian I don't think Jon Bentley was allowing such things into his book on algorithms.
  • Rag
    Rag almost 13 years
    Nice! This is clearly the correct solution to the interviewer's question. I feel like the question was trying to get you to come up with a solution which holds up even when you keep going lower: "What about 10MB? What about 1MB? What about just static constants and two int32 pointers I give you?" I think your answer holds up even under the last constraint.
  • Pete
    Pete almost 13 years
    The rules does not specify that it is 32bit or 64bit or anything, so according to the rules specified, there is no max int. Integer is not a computer term, it is a math term identifying positive or negative whole numbers.
  • PearsonArtPhoto
    PearsonArtPhoto almost 13 years
    True enough, but one cannot assume that it is a 64 bit number, or that someone wouldn't just sneak in the max int number just to confuse such algorithms.
  • Pete
    Pete almost 13 years
    The entire notion of "max int" is not valid in the context if no programming language has been specified. e.g. look at Python's definition of a long integer. It is limitless. There is no roof. You can always add one. You are assuming it is being implemented in a language that has a maximum allowed value for an integer.
  • Nakilon
    Nakilon almost 13 years
    @oosterwal, if this answer was allowed, than you don't even need to read the file – just print as big number as you can.
  • oosterwal
    oosterwal almost 13 years
    @Nakilon: While that might work for almost every single case, there is no guarantee that the randomly huge number that was printed was not in the file.
  • Rune Jeppesen
    Rune Jeppesen almost 13 years
    Using this technique, I need only 2^16 bits. That's for 2^16 buckets and one bit per bucket. Since we know exactly one number is missing, one of the buckets will have an odd number of hits; the others will have an even number. Initialize the bit array to all zeros. For every hit, toggle the corresponding bit. When you are done, look for the 1 and then scan through that bucket (reusing the bit array) to find the missing number.
  • Nakilon
    Nakilon almost 13 years
    @oosterwal, if your random huge number was the largest you could print, and it was in file, then this task couldn't be solved.
  • oosterwal
    oosterwal almost 13 years
    @Nakilon: +1 Your point is taken. It's roughly equivalent to figuring the total number of digits in the file and printing a number with that many digits.
  • George
    George almost 13 years
    It's not specified to be sorted.
  • ammaciel
    ammaciel almost 13 years
    @Brian - if you have 2^32 bits, and 4 billion integers, you can map one bit to each of the integers, and you're guaranteed to have a slot left over, since you run out of integers before you run out of bits, you would have to map two bits to one integer, which is a contradiction. So, assuming 4B really means 2^32, as I suspect Raf did, he is correct.
  • Brooks Moses
    Brooks Moses almost 13 years
    @Brian: Will it? I think that entirely depends on how many times you plan to run the program, and how much your programming (and debugging!) time is worth. If you only need to run it once, you're better off writing something simple that's less likely to have bugs.
  • Frank Farmer
    Frank Farmer almost 13 years
    As the number of ints in the file increases, the feasibility of the pseudorandom approach decreases. If the file contains, say, (2^32 - 5) ints, trying to find one of the handful of remaining ints randomly is impractical.
  • rfrankel
    rfrankel almost 13 years
    Why would you sort it if you just want to find the max?
  • Christopher Creutzig
    Christopher Creutzig almost 13 years
    @Barry: The question above does not indicate that there is exactly one number missing. It doesn't say the numbers in the file don't repeat, either. (Following the question actually asked is probably a good idea in an interview, right? ;-))
  • Dave Kirby
    Dave Kirby almost 13 years
    @Brian: the book was written in 1986, and made up of columns previously written for the CACM journal. I don't think they had hardware-supported vector instructions in those days.
  • Potatoswatter
    Potatoswatter almost 13 years
    I'm glad to see someone else thought of this, but why is it way down here at the bottom? This is a 1-pass algo… 10 MB is enough for 2.5 M guesses, and 93%^2.5M ≈ 10^-79000 is truly a negligible chance of needing a second scan. Due to the overhead of binary searching, it goes faster if you use fewer guesses! This is optimal in both time and space.
  • Christopher Creutzig
    Christopher Creutzig almost 13 years
    With probably over 90% of the possible values set, your Bloom Filter would probably need to degenerate to the bitfield so many answers already use. Otherwise, you'll just end up with a useless completely filled bitstring.
  • Peter Gibson
    Peter Gibson almost 13 years
    I see this has already been suggested :) upvotes for those people
  • KriptSkitty
    KriptSkitty almost 13 years
    I'm probably missing something here, but how can solution number one work unless we know that there are no duplicates in the file, and I don't see that in the question?
  • Klas Lindbäck
    Klas Lindbäck almost 13 years
    How will you sort 4 billion integers when you only have 1 GB of memory? If you use virtual memory it will take a loooong time as memory blocks get paged in and out of physical memory.
  • Andrew Dalke
    Andrew Dalke almost 13 years
    @Dave: supercomputers had vector instructions since the 1970s. Wikipedia (en.wikipedia.org/wiki/Vector_processor) claims the first work was done in the early 1960s.
  • Klas Lindbäck
    Klas Lindbäck almost 13 years
    How would you sort 4 billion integers using no more than 1 GB of memory?
  • leftaroundabout
    leftaroundabout almost 13 years
    @Potatoswatter: good you mentioned binary search. That's probably not worth the overhead when using only 5 guesses, but it certainly is at 10 or more. You could even do the 2 M guesses, but then you should store them in a hash set to get O(1) for the searching.
  • Richard H
    Richard H almost 13 years
    @Brian: I think this solution is both imaginitive and practical. I for one would give much kudos for this answer.
  • hmakholm left over Monica
    hmakholm left over Monica almost 13 years
    @Thomas, I don't see any reason why having duplicates would be a problem for the solution. Notice that the question does not ask for us to identify all missing numbers, just to find at least one of them. Could you please elaborate?
  • Daniel
    Daniel almost 13 years
    Great answer, but... Roar. I'm the worst case monster -- annoying but vaguely relevant. Worst case: O(N^2), assuming you check for doubles in your random thing. Still, upboats.
  • Aleks G
    Aleks G almost 13 years
    The question was exactly about the algorithm for isNotInFile function. Please make sure you understand the question before answering.
  • Johann du Toit
    Johann du Toit almost 13 years
    Love It! Even Though it's not quite the answer he was looking for... :D
  • ratchet freak
    ratchet freak almost 13 years
    @klas merge sort is designed for that
  • carboleda
    carboleda almost 13 years
    @Christopher My understanding of Bloom filters is that you don't get a filled bitarray until you reach 100%
  • carboleda
    carboleda almost 13 years
    ...otherwise you'd get false negatives.
  • Kevin
    Kevin almost 13 years
    @brian, If even a one in 64 billion billion billion chance is too high for you, just "guess" your number and then make one pass through to verify it doesn't exist.
  • deg
    deg almost 13 years
    no, the question was "which integer is not in the file", not "is integer x in the file". a function to determine the answer to the latter question could for example just compare every integer in the file to the integer in question and return true on a match.
  • Rag
    Rag almost 13 years
    @Potatoswatter Ben Haley's equivalent answer is up near the top
  • Rag
    Rag almost 13 years
    @Nakilon and TheDayTurns have pointed this out in the comments to the original question
  • hmakholm left over Monica
    hmakholm left over Monica almost 13 years
    +1 for creativity (and the smallest worst-case output for a single-pass solution yet).
  • hmakholm left over Monica
    hmakholm left over Monica almost 13 years
    It's what I started with, before I began optimizing for the given parameters.
  • Rag
    Rag almost 13 years
    But there aren't 4 billion bits to diagonalize on, there are only 32. You'll just end up with a 32 bit number which is different from the first 32 numbers in the list.
  • Rag
    Rag almost 13 years
    @Henning It's hardly a single pass; you still have to convert from unary to binary. Edit: Well I guess it's one pass over the file. Nevermind.
  • hammar
    hammar almost 13 years
    @Henning: Cool. It's a nice example of an algorithm where it's easy to tweak the space-time tradeoff.
  • Rag
    Rag almost 13 years
    I think this is a legitimate answer. Except for I/O you only need one integer and a bool flag.
  • hmakholm left over Monica
    hmakholm left over Monica almost 13 years
    @Brian, where's there something "unary" here? The answer is constructing a binary answer one bit a time, and it only reads the input file once, making it single pass. (If decimal output is required, things get problematic -- then you're probably better off constructing one decimal digit per three input numbers and accept a 10% increase in the log of the output number).
  • hmakholm left over Monica
    hmakholm left over Monica almost 13 years
    @Brian (first comment), Jonathan explicitly assumed arbitrarily large integers.
  • James Waldby - jwpat7
    James Waldby - jwpat7 almost 13 years
    Yes, it's easy to prove that it works when exactly one integer is missing, but it frequently fails if more than one is missing. For example, 0 ^ 1 ^ 3 ^ 4 ^ 6 ^ 7 is 0. [Writing 2**x for 2 to the x'th power and a ^ b for a xor b, the xor of all k < 2**x is zero: k ^ ~k = 2**x - 1 for k < 2**(x-1), and k ^ ~k ^ j ^ ~j = 0 when j = k + 2**(x-2). So the xor of all but one number is the value of the missing one.]
  • Simon Mourier
    Simon Mourier almost 13 years
    @Aleks G - I don't see why this is marked as wrong. We all agree it's the slowest algorithm of all :-), but it works and just needs 4 bytes to read the file. The original question does not stipulate the file can only be read once for example.
  • Aleks G
    Aleks G almost 13 years
    @Simon I never said the file needs to be read once. I stated that the question includes the requirement for an algorithm to check if a number is in the file.
  • Rag
    Rag almost 13 years
    @Henning The problem doesn't make sense for arbitrarily large integers because, as many people have pointed out, it's trivial to just find the largest number and add one, or construct a very long number out of the file itself. This diagonalization solution is particularly inappropriate because rather than branching on the ith bit you could just output 1 bits 4 billion times and throw an extra 1 on the end. I'm OK with having arbitrarily large integers in the algorithm but I think the problem is to output a missing 32-bit integer. It just doesn't make sense any other way.
  • Simon Mourier
    Simon Mourier almost 13 years
    @Aleks G - Right. I never said you said that either. We just say IsNotInFile can be trivially implemented using a loop on the file: Open;While Not Eof{Read Integer;Return False if Integer=i;Else Continue;}. It needs only 4 bytes of memory.
  • Rag
    Rag almost 13 years
    @Henning (first comment) For some reason I thought that the ones of the 2^32 bit output would have some meaning as a unary number converted to a 32 bit binary number. That's just the population count though :X
  • James Waldby - jwpat7
    James Waldby - jwpat7 almost 13 years
    The problem does not say "one number is missing", it says to find a number not included in the 4 billion numbers in the file. If we assume 32-bit integers, then about 300 million numbers may be missing from the file. The likelihood of the xor of the numbers present matching a missing number is only about 7%.
  • James Waldby - jwpat7
    James Waldby - jwpat7 almost 13 years
    As I mention in a comment on ircmaxell's reply: The problem does not say "one number is missing", it says to find a number not included in the 4 billion numbers in the file. If we assume 32-bit integers, then about 300 million numbers may be missing from the file. The likelihood of the xor of the numbers present matching a missing number is only about 7%.
  • Lee Netherton
    Lee Netherton almost 13 years
    This is the answer I was thinking of when I initially read the question, but on closer inspection I think the question is more ambiguous than this. FYI, this is the question I was thinking of: stackoverflow.com/questions/35185/…
  • KriptSkitty
    KriptSkitty almost 13 years
    @Henning: Assume the file contains 65536 instances of each 32-bit multiple of 65536, that is, the numbers with 16 zeros as suffix. Each counter will then show 65536. -- But wait. I think I see what I missed. That would mean there are 4294967296 numbers in the file, and it was only exactly 4 billion?
  • Rune Jeppesen
    Rune Jeppesen almost 13 years
    @Christopher I see now. I based my solution on some of the other questions/comments in the threads here, which distracted me from the wording of the original question.
  • Klas Lindbäck
    Klas Lindbäck almost 13 years
    @Brian lol. As interview questions go, not understanding where the complexity lies is usually worse than not coming up with a good solution. After all, understanding the problem is usually more than halfway of solving it.
  • ataylor
    ataylor almost 13 years
    @Paul a filled bit array gives you false positives, which are allowed. In this case the bloom filter would most likely degenerate to the case where the solution, which would be negative, returns a false positive.
  • Christopher Creutzig
    Christopher Creutzig almost 13 years
    @Paul: You can get a filled bitarray as soon as the number of hash functions multiplied by the number of entries is as large as the length of your field. Of course, that would be an exceptional case, but the probability will rise pretty quickly.
  • Mark
    Mark almost 13 years
    ah here lies the line between engineers and scientists. Great answer Ben!
  • mrBorna
    mrBorna almost 13 years
    I think I may be missing something. Can someone explain to me how to keep count? Consider the scenario where every single number in the file is n. If we are keeping count, wouldn't there be 4 billion n's requiring us to allocate 2^32 (overhead to represent 4b,max count)* 2^16(number of Hi-bit permutations) == total 2^48 bits == 32768GB? what am i missing?
  • hmakholm left over Monica
    hmakholm left over Monica almost 13 years
    @mrBorna, even a counter that can reach a value of 4 billion takes up only 32 bits, not a bit for each value it can reach. And you don't even have to keep precise track -- once a bucket counter reaches ceil(4.0e9/65536)=61036 you can stop increasing it because then it cannot possibly be the counter with the least value when the first pass is over. Thus you really only need a 16-bit counter for each bucket.
  • mrBorna
    mrBorna almost 13 years
    @Henning Ok... So based on your clarification (and thank you for that) , each bucket (of the 2^16 buckets) has a 2^16 bit counter. Thats a total of 512MB in memory total for a 1-run count (though less if we split it up into multiple runs). So this is not a 2-run solution for 10MB memory, correct?
  • hmakholm left over Monica
    hmakholm left over Monica almost 13 years
    @mrBorna, wrong! Each bucket has a two-byte counter (sixteen bits, capable of counting up to 65535), making the entire table take up 128 kilobytes.
  • mrBorna
    mrBorna almost 13 years
    Gah I messed up it totally makes sense. I don't know why I kept saying 2^16 when I meant 16 bits for the counter lol. thanks a lot @Henning!
  • vsz
    vsz almost 13 years
    And how would you print the result? You can't put it in a file, and printing on the screen would take a few billion years. Not an uptime likely to be achieved with today's computers.
  • Michael Sagalovich
    Michael Sagalovich almost 13 years
    it is never said we need to print the result anywhere, just 'generate' it. so it depends on what you mean by generate. anyway, my answer is just a trick to avoid working out a real algorithm :)
  • Jürgen Strobel
    Jürgen Strobel almost 13 years
    Unless it's been quoted improperly, the question didn't place a bound on type of integer, or even on language used. Many modern languages have integers only bounded by available memory. If the largest integer in the file is > 10MB, tough luck, task impossible for the second case. My favourite solution.
  • Alcott
    Alcott almost 13 years
    @vine'th, I don't quite understand what did you mean if there is only one byte memory available? how to do the job with one byte? Can you be more specific?
  • Alcott
    Alcott almost 13 years
    @vine'th, by 4 passes, did you mean read the file 4 times to finish the job?
  • Alcott
    Alcott almost 13 years
    @HenningMakholm, why the entire table is 128 kb? What I think is that, there are 2^16 buckets which will take up 2^16/8=8192b, and there a 2-byte counter for each bucket, so 2*2^16=131072b is taken. Then I think 8192+131072=139.3kb will be taken, right?
  • Alcott
    Alcott almost 13 years
    @HenningMakholm, the second thing is, if there are duplicates, say there are 65535 duplicates for a bucket, no other numbers for this bucket, will the solution still detect the missing numbers?
  • hmakholm left over Monica
    hmakholm left over Monica almost 13 years
    @Alcott: (1) it is not clear to me what you need the 2^16 single bits (8192 bytes) you speak about for. The counters themselves should be enough. (2) The algorithm does not purport to find all missing numbers. All it needs to do is locate one among the 32-bit numbers that are not in the file. A bucket that is hit by 65535 duplicates of a single number will just be ignored in favor of a less-hit bucket.
  • Alcott
    Alcott almost 13 years
    @hammar, but what if there are those numbers which appear more than once?
  • vine'th
    vine'th almost 13 years
    @Alcott: The one byte example was just an very basic example. Yes by 4 passes I meant read the file 4 times. To be more realistic, if we have memory for 1 Billions bits available, we need 4 passes to find the missing number among the 4 billion numbers in the above example.
  • Alcott
    Alcott over 12 years
    @dr jimbob, what if there is only one number in a bin, and that single number has 65535 duplicates? If so, the bin will still counts 65536, but all of the 65536 numbers are the same one.
  • dr jimbob
    dr jimbob over 12 years
    @Alcott - I assumed you had 2^32-1 (or fewer) numbers, so by the pigeonhole principle you are guaranteed to have one bin with fewer than 65536 counts to check in more detail. We are trying to find just one missing integer, not all of them. If you had 2^32 or more numbers, you can't guarantee a missing integer and would not be able to use this method (or have a guarantee from the outset there is a missing integer). Your best bet then would be brute force (e.g., read through the array 32 times; checking the first 65536 #s the first time; and stopping once an answer was found).
  • Justin Morgan
    Justin Morgan about 12 years
    @oosterwal - There is a guarantee if the randomly huge number was too big to be represented in a file that size. I came to a similar conclusion and discussed it in my answer.
  • Ian
    Ian about 12 years
    @BrianGordon, the time spent in ram will be negligible compared to the time spent reading the file. Forget about optimizing it.
  • supercat
    supercat about 11 years
    I like this approach, but would suggest a memory-saving improvement: if one has N bits of indexed storage available, plus some constant storage, define a configurable reversible 32-bit scrambling function (permutation), pick an arbitrary permutation, and clear all indexed bits. Then read each number from the file, scramble it, and if the result is less than N, set the corresponding bit. If any bit isn't set at the end of the file, reverse the scramble function on its index. With 64KB of memory, one could effectively test over 512,000 numbers for availability in a single pass.
  • supercat
    supercat about 11 years
    If the scrambling function is randomly selected from a set of 2^32 permutations which are equivalent to random distributions, it may be possible for a carefully-constructed list to include all 512,000 numbers for some of those distributions, but not very many. Realistically, even with data that was deliberately designed to be evil, making a truly random one-in-2^32 selection would probably give a better than 99.9999% chance of success in one pass.
  • Cong Hui
    Cong Hui almost 11 years
    I don't see how the second pass would guarantee it would yield a new number. Each slot keeps track of the number of integers in the file with the 16-bit prefix. I see in the second pass if the counter in a slot is either 0 or 1, when we map to it, we can find a new number by just adding 1 to the prefix or adding 1 to the current integer in the file under the scanning pointer. However, when the counter is greater than 1, how does it yield a new number in the second pass just by looking at the 16 bit prefix counter? Someone sheds some light please.
  • justhalf
    justhalf almost 11 years
    @CongHui: in the second pass, we only consider those numbers with the specific 16-bit prefix we identified in the first pass, which we already know that there must be less than 2^16 such numbers. Then we record the count of each 16-bit suffix for numbers with the specified 16-bit prefix. There must be a 16-bit suffix which is not in the file. So the number formed by the 16-bit prefix and 16-bit suffix is not in the file, we can output that.
  • Cong Hui
    Cong Hui almost 11 years
    @justhalf Gotcha, that's just brilliant! Thank you
  • Emmet
    Emmet over 10 years
    Why not just Console.Write( 1 << bitcount ) instead of the loop? If there are n bits in the file, then any (_n_+1)-bit number with a leading 1 is absolutely guaranteed to be bigger.
  • hmakholm left over Monica
    hmakholm left over Monica over 10 years
    Why do you think a job interviewer would be expecting a "single best solution"? Also you're solving a different problem than the one that was asked -- it's not about checking whether some given number occurs in the file, it's about producing one that doesn't occur there.
  • KBusc
    KBusc over 10 years
    Unless the largest number in the file is max int then you will just overflow
  • justhalf
    justhalf about 10 years
    @Harshdeep, what we meant by a "pass" here is the process of reading the numbers in the input file, not the memory. Of course after the second pass (that is, the second time reading the input), we need to loop through the memory to find which 16-bit suffix has not been set.
  • Justin Morgan
    Justin Morgan about 10 years
    @Emmet - That would just cause integer overflow, unless the file were smaller than the size of an int (4 bytes in C#). C++ might let you use something bigger, but C# doesn't seem to allow anything but 32-bit ints with the << operator. Either way, unless you roll your own gigantic integer type, it's going to be a very small file size. Demo: rextester.com/BLETJ59067
  • Michael
    Michael over 9 years
    I've got to disagree with this answer, as well as with the comments on its practicality. I have seen on multiple occasions disastrous effects of allegedly statistically improbably collisions. One of the reasons is that the probability of collision within K numbers randomly chosen out of N possibilities, with K << N, is roughly proportional to K^2/N, NOT K/N, see Birthday Paradox. Second reason is in pseudo-randomness shortcomings. You single-pass check definitely helps, but when K is close to sqrt(N) you'd need numerous guesses and passes.
  • Michael
    Michael over 9 years
    Wonderful answer! And it scales very well to larger fixed size numbers, such as 4 passes for 64-bit integers, etc.
  • Michael
    Michael over 9 years
    What would be the size of that file in a real World program that may need to generate a new integer and append it to the "used integers" file 100 times?
  • Ben Haley
    Ben Haley over 9 years
    @Michael you are right that pseudo-randomness can have shortcomings. In this case the birthday paradox does not apply. But your more general point that extra assumptions lead to extra failure modes is well taken. My answer has value for those situations where the speed improvement is worth the extra implementation and debugging work.
  • user253751
    user253751 over 9 years
    Or the length of the length of the longest number! Or the length of that! Or the length of that! Or the length of the number of times you took the length of the longest number! (Or the length of the Knuth up-arrow notation of the longest number! Or ...)
  • derek
    derek over 9 years
    The solution for 2nd case is wrong. Just suppose the following case: 1st bucket: all 1s.
  • hmakholm left over Monica
    hmakholm left over Monica over 9 years
    @deryk: Is "1st bucket: all 1s" meant to be a description of a counterexample? I don't understand exactly what counterexample that would be; could you perhaps attempt to extend your description to one or more complete sentences?
  • derek
    derek over 9 years
    @Henning Makholm. There must be something wrong with this website. I wrote a lot and only a little of it is left. Anyway, my point is this: if each bucket only has duplicates of a single number, the first pass will tell you no number is missing, which is wrong! For example, the 1st bucket has 65536 copies of 10, the 2nd bucket has 65536 copies of 65536+10, the 3rd bucket has 65536 copies of 2*65536+10, ..., the last bucket has 65536 copies of 65535*65536+10. The first pass will tell you each bucket counts 65536 times and so will report no number is missing. Please correct me if this counterexample is wrong.
  • hmakholm left over Monica
    hmakholm left over Monica over 9 years
    @deryk: 65536 buckets with 65536 copies in each would require the input file to contain 65536*65536 = 4294967296 entires -- and it is said explicitly in the original interview question that there are only 4 billion numbers there, which is less than that. (Reasonable minds disagree on whether e.g. "gigabyte" ought to mean 1024^3 or 10^9 bytes, but "billion" is unambiguously 10^9 in English).
  • derek
    derek over 9 years
    @Henning Makholm, what do you mean "65536 buckets with 65536 copies in each"? Does that mean each bucket has 65536 bits? or in other words, do you allocate a 2-d array of buckets[65536][65536/8]? if so, the total size is way larger than 10MB.
  • hmakholm left over Monica
    hmakholm left over Monica over 9 years
    @deryk: No, I'm just saying that your claimed counterexample seems to describe an input file with 65536*65536 integers in it. That doesn't match the description in the question, which is about a file with only 4 billion integers in it. (In the solution I describe each of the 65536 buckets is represented by one 16-bit counter).
  • derek
    derek over 9 years
    @Henning Makholm, Or are you saying you read in the file 65536 times and each time you only count the numbers in [k*65536, k*65536+65535]. Am my understanding correct?
  • derek
    derek over 9 years
    @Henning Makholm, a little confused. Any differences between "65536*65536 integers in a file" and "4 billion integers in a file"? 65536*65536 is about 4 billion.
  • hmakholm left over Monica
    hmakholm left over Monica over 9 years
    @deryk: (1) I read the file once and count how many numbers there are in each length-65536 interval. Then I choose the interval that was hit the fewest times and read the file once again, now keeping track of which of those particular 65536 numbers were hit. (2) Yes, the difference is 65536*65536-4000000000 == 294967296. Your file has about 295 million more numbers in it than the 4 billion it's supposed to have.
  • derek
    derek over 9 years
    @Henning Makholm: OK, my question is back. For each 16-bit counter, if duplicates exist, do you count them once or multiple times? For example, for the 1st counter, you see 1,1,1, do you count once or three times?
  • hmakholm left over Monica
    hmakholm left over Monica over 9 years
    @deryk: You count the number of lines in the file that contain a number in the interval covered by the bucket. It is immaterial whether these numbers are the same or different (since you can't let anything depend on whether there are duplicates without using too much memory).
  • derek
    derek over 9 years
    @Henning Makholm: now I understand your solution completely. Since the total integers in file is 4 billion which is less than 2^32, for 65536 buckets, there will always be at least a bucket that will be hit less than 65536 times. Good solution! I dig it a little bit more and found It is called pigeonhole principle.
  • imallett
    imallett about 9 years
    I was thinking this. Assuming int is 32 bits, just output 2^64-1. Done.
  • Peter Cordes
    Peter Cordes almost 9 years
    @BrianGordon: What are you going to do with vectors? Remember that your input isn't sorted, so for each input element, you need to set a different bit in memory, and those bits won't be contiguous. x86 AVX2 has gathered loads (of 32bit elements). It doesn't have scattered stores of ints, let alone bytes, let alone bits into a bitfield!
  • Peter Cordes
    Peter Cordes almost 9 years
    @BrianGordon: Or were you talking about the loop at the end to find the first unset bit? Yes, vectors will speed that up, but looping over the bitfield with 64bit integers, looking for one that's != -1 will still saturate memory bandwidth running on a single core (this is SIMD-within-a-register, SWAR, with bits as elements). (For recent Intel/AMD designs). You only have to figure out which bit is unset after you find the 64bit location containing it. (And for that you can not / lzcnt.) Fair point that looping over a single-bit test may not get optimized well.
  • Peter Cordes
    Peter Cordes almost 9 years
    The clever upper-16 / lower-16 method was posted earlier by Henning: stackoverflow.com/a/7153822/224132. I loved the add-them-up idea for a unique set of integers missing exactly one member, though.
  • Peter Cordes
    Peter Cordes almost 9 years
    @Alcott: then the algorithm will pick the denser bin instead of the sparser bin, but by the pigeonhole principle, it can't ever pick a completely full bin. (The smaller of the two counts will always be less than the bin range.)
  • Peter Cordes
    Peter Cordes almost 9 years
    If you have a contiguous-but-missing-one range that isn't zero-based, add instead of xor. sum(0..n) = n*(n+1)/2. So missing = nmax*(nmax+1)/2 - nmin*(nmin+1)/2 - sum(input[]). (sum idea from @hammar's answer.)
  • Peter Cordes
    Peter Cordes almost 9 years
    The birthday paradox makes this kind of solution not worth the risk, without checking the file to see if your random guess was actually an valid answer. (Birthday paradox doesn't apply in this case, but repeatedly calling this function to generate new unique values does create a birthday paradox situation.)
  • Peter Cordes
    Peter Cordes almost 9 years
    Is this one of the solutions that only works for one missing integer and no duplicates (meaning there's exactly one possible answer)? If so, it looks like it works for the same reason as the much-less-laborious xor method.
  • Peter Cordes
    Peter Cordes almost 9 years
    This duplicates all the years-older bitmap answers. Also, std::vector<bool> doesn't have a fast way to scan for an unset bit. Neither does std::bitset, though. (Testing 64bits at a time against (long)-1 is way faster, unless the compiler is really clever and sees through a bit-at-a-time loop.)
  • dr jimbob
    dr jimbob almost 9 years
    @PeterCordes - Yes, Henning's solution predates mine, but I think my answer is still useful (working through several things in more detail). That said, Jon Bentley in his book Programming Pearls suggested a multi-pass option for this problem (see vine'th's answer) way before stackoverflow existed (not that I am claiming either of us consciously stole from there or that Bentley was the first to analyze this problem -- it is a fairly natural solution to develop). Two passes seems most natural when the limitation is you no longer have enough memory for a 1 pass solution with a giant bit array.
  • Peter Cordes
    Peter Cordes almost 9 years
    Tested this on x86; gcc 4.9.2 generates terrible one-bit-at-a-time loops. clang generates worse loops for setting a sequence of bits, but slightly better loops (using bt r, r) for testing a bit at a time. Both are still ~100 times slower than checking 64bits at a time with cmp r, -1
  • Peter Cordes
    Peter Cordes almost 9 years
    Yup, I agree it's a nice answer for the analysis/explanation. Comparing to other answers, I see the add-them-up idea is similar to the XOR-them-all answers.
  • Peter Gibson
    Peter Gibson almost 9 years
    @PeterCordes Randomly generated 128bit numbers is precisely how UUIDs work - they even mention the birthday paradox when calculating the probability of a collision in the Wikipedia UUID page
  • rents
    rents almost 9 years
    This programming pearls book is gold. +1 for the reference to it. Why didn't I know of this before...!
  • Dewi Morgan
    Dewi Morgan almost 9 years
    Of course, with this algorithm, the worst case is one where the numbers were created by the same random number generator that you are using. Assuming you can guarantee that's not the case, your best tactic is to use a linear congruential random number generator to generate your list, so that you will go through the number space in a pseudorandom way. That means if you somehow fail, you can continue generating numbers until you have covered the whole range of ints (of have found a gap), without ever duplicating your effort.
  • leftaroundabout
    leftaroundabout almost 9 years
    @DewiMorgan: right, though that's a really bad worst case (as in, bad luck).
  • Phil
    Phil over 8 years
    Variant: Find max in set, add 1.
  • Shon
    Shon over 6 years
    If it's one int per line: tr -d '\n' < nums.txt > new_num.txt :D
  • ctrl-alt-delor
    ctrl-alt-delor over 5 years
    for unbounded: one pass, remove delimiters (everything that is not a digit).
  • Solomon Ucko
    Solomon Ucko about 5 years
    If the integers are unbounded, it's not always possible to store a finite integer larger than all of them (or equivalently, their lengths) in a finite amount of space.
  • vine'th
    vine'th almost 5 years
    Could you pls elaborate with an example with lesser size, ex 4 bit integers? unable to clearly understand, my calculations are off. @corsiKa
  • Level 42
    Level 42 over 4 years
    I would quicksort the original array (no additional storage.) then march through the array and report the first 'skipped' integer. Done. Answered the question.
  • kaya3
    kaya3 over 2 years
    The birthday paradox is irrelevant here because you are checking whether any of your random numbers match the ones in the file, not whether any two of your random numbers match each other. Also, for the second algorithm, if none of the 1,000 numbers generated are unique, then you can just generate another 1,000 and try again. The expected number of passes is imperceptibly greater than 1.
  • AminM
    AminM about 2 years
    this is more like quicksort than binary search. But while going through ram NLgN times is no big deal with files it's pretty bad. So a good solution would go through the file O(1) times at most. Otherwise you're really going to be bogged down.