Where does the "2" in 2^n come from when computing max memory size? (n = number of bits)


Solution 1

The 2 comes from the nature of binary numbers, where each digit has exactly 2 possible states.

When calculating the number of values a given number of digits can contain, the calculation is always Options^Instances. Options represents the number of possible choices each digit can have, and Instances represents the number of digits being used (length, width, and size are common synonyms).

Likewise, the range of values that can be stored is 0 to (Options^Instances) - 1.

Note that digit values are always natural numbers, so we're not worried about negative numbers or decimals or anything more exotic. Those concepts build atop digit values to augment their meaning, but the underlying value representation is unchanged. 3, -3, and 3.3 all express different meanings, but they all use the digit value 3 in the same way with the same rules.

So a 2-bit number can express 4 possible values, ranging from 0 to 2^2 - 1 (0-3). I.e., the set of possible values is {00, 01, 10, 11}.
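
As a quick illustration (a sketch in Python, chosen only because the answer has no code of its own), you can enumerate those values directly:

    from itertools import product

    # Every combination of the 2 options ('0' and '1') across 2 digit positions.
    two_bit_values = [''.join(bits) for bits in product('01', repeat=2)]
    print(two_bit_values)       # ['00', '01', '10', '11']
    print(len(two_bit_values))  # 4, i.e. 2**2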

A binary digit has 2 options, so binary is base-2. Most common number systems in use these days have 10 options per digit (0-9), so they are base-10. Other common bases include octal (base-8) and hexadecimal (base-16).
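
To make the base idea concrete, here is a small Python sketch (the value 42 is arbitrary) showing the same number written in those bases:

    value = 42
    print(bin(value))  # 0b101010 -> base-2, digits 0-1
    print(oct(value))  # 0o52     -> base-8, digits 0-7
    print(value)       # 42       -> base-10, digits 0-9
    print(hex(value))  # 0x2a     -> base-16, digits 0-9 and a-f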

This concept isn't even limited to numbers, but applies to any well designed set of values. If I wanted to calculate the number of possible 8-character passwords consisting of all lower case Latin letters, it would be 26^8. If I added capital letters, it would be 52^8. If I then added numbers, it would be 62^8. For binary numbers, however, since each digit can only be 0 or 1, it's always 2^n.
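
A minimal Python sketch of that Options^Instances counting, using the character sets named above:

    import string

    length = 8  # "Instances": the number of characters (or digits)

    for name, options in [
        ("lower-case letters", string.ascii_lowercase),                # 26 options
        ("upper and lower case", string.ascii_letters),                # 52 options
        ("letters and digits", string.ascii_letters + string.digits),  # 62 options
        ("binary digits", "01"),                                       # 2 options
    ]:
        print(f"{name}: {len(options)}^{length} = {len(options) ** length}")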

Bit-size refers to the number of bits used to store a value (the "Instances" variable). For a real-world example, the game No Man's Sky uses a 32-bit number to represent money, so you can never get more than 4,294,967,295 money. That's because it's the maximum value you can express in 32 bits (as an unsigned integer).
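
As a sanity check (just illustrative arithmetic, not code from the game), that ceiling is exactly 2^32 - 1:

    bits = 32
    print(2 ** bits - 1)  # 4294967295, the largest unsigned value 32 bits can hold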

Solution 2

Here's an attempt at an "explain it like I'm 5" type of answer.

A single bit has two states: 0 and 1. Using a single bit I can store two values:

0
1

By adding a single bit we can store four values. Let's ignore that they are binary numbers, just keep in mind that they are distinct values:

00
01
10
11

Add another one and we have eight values:

000
001
010
011
100
101
110
111

Why are they doubling? Imagine you're prepending the new bit on the left. If the bit is 0, you get the previous set of four values prefixed by 0. If it's 1, you get the previous set of four values prefixed by 1. That's 8 total: 4 previous values times 2 possible states of the added bit.

 previous bits  |    previous bits
 ↓↓             |    ↓↓
000             |   100
001             |   101
010             |   110
011             |   111
↑               |   ↑
new '0' bit     |   new '1' bit

Here's a graphical version, if you don't like ASCII art:

Original four values (two bits each), one per row, are duplicated. Then a 0 is prepended to each value in the first group of four and a 1 is prepended to each value in the second group.
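
The same doubling trick as a small Python sketch (again, Python is only for illustration): start from the four 2-bit values and prepend each possible state of the new bit.

    values = ['00', '01', '10', '11']  # the previous four 2-bit values

    # Prepend each possible state of the new bit to every existing value.
    new_values = [state + v for state in '01' for v in values]

    print(new_values)       # ['000', '001', '010', '011', '100', '101', '110', '111']
    print(len(new_values))  # 8 = 4 previous values x 2 states of the new bit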

If we had three possible states for the prepended "bit" ("trit"?), let's say A, B and C, we'd triple the number of possible values:

 previous bits  |    previous bits  |    previous bits
 ↓↓             |    ↓↓             |    ↓↓
A00             |   B00             |   C00
A01             |   B01             |   C01
A10             |   B10             |   C10
A11             |   B11             |   C11
↑               |   ↑               |   ↑
new 'A' bit     |   new 'B' bit     |   new 'C' bit

An image similar to the previous one, but now there are three copies of the original four values. The first four are prepended with state 'A', the next four with 'B' and the final four with 'C'.

So adding a new bit to the value multiplies the number of possible values by the number of states this new bit can have. The first bit has 2 states (0 and 1), so a 1-bit number has 2 values. The second bit has two states:

2 × 2 = 4
↑   ↑
↑    number of 2nd bit's states
↑
 number of 1st bit's states

The third bit has two states too:

4 × 2 = 8
↑   ↑
↑    number of 3rd bit's states
↑
 number of previous values

Same with the fourth bit:

8 × 2 = 16
↑   ↑
↑    number of 4th bit's states
↑
 number of previous values

We can expand the 8 in this formula to our previous calculations:

((2 × 2) × 2) × 2 = 16
  ↑   ↑    ↑    ↑
  ↑   ↑    ↑     number of 4th bit's states
  ↑   ↑    ↑
  ↑   ↑     number of 3rd bit's states
  ↑   ↑
  ↑    number of 2nd bit's states
  ↑
   number of 1st bit's states

As you can see, to get the number of possible values, you have to multiply the numbers of states of the individual bits. Since all of our bits have 2 states, multiplying 2 together n times simplifies to 2^n.
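
That repeated multiplication, as a tiny Python sketch:

    states_per_bit = 2
    n_bits = 4

    total = 1
    for bit in range(1, n_bits + 1):
        total *= states_per_bit  # multiply in the states of this bit
        print(f"after bit {bit}: {total} possible values")

    print(total == states_per_bit ** n_bits)  # True: multiplying 2 together n times is 2**n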

Solution 3

Let me complement the existing answers with an analogy: How many different numbers (let's call them addresses) can you build with an n-digit number?

Let's try:

  • 2 digits can build addresses from 00 to 99, which are 100 addresses.
  • 3 digits can build addresses from 000 to 999, which are 1000 addresses.
  • ...
  • In general, n digits can build 10^n addresses.

That's because one (decimal) digit has 10 possible states (0-9; Latin decem = ten).

Bits are like digits, except that they only have 2 states (0 and 1). Hence, n bits can build 2^n addresses.
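
The analogy as a minimal Python sketch (nothing here beyond the arithmetic above):

    for n in (2, 3, 4):
        print(f"{n} decimal digits -> {10 ** n} addresses (from {'0' * n} to {10 ** n - 1})")

    for n in (2, 3, 16):
        print(f"{n} bits -> {2 ** n} addresses")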

Solution 4

CPUs talk to RAM through their address and data pins. Here's an example from an old 8-bit CPU called a 6502.

[Image: pinout diagram of the 6502, showing its address (A) and data (D) pins]

The A pins are the address pins and the D pins are the data pins.

There are 16 A pins, numbered 0 through 15.

Each pin can be ON or OFF (nothing in between), so that's 2 possible states per pin.

So there are 2^16 possible combinations of pin states, or 65,536 possible addresses.

The D pins are used to send (write) or get (read) the data at an address. Since there are 8 of them, you can read/write 2^8 possible values (0-255).
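
A small sketch of that pin arithmetic (the pin counts come from the answer above; the Python is just for illustration):

    address_pins = 16   # A0 through A15
    data_pins = 8       # D0 through D7
    states_per_pin = 2  # each pin is either ON or OFF

    print(states_per_pin ** address_pins)  # 65536 possible addresses
    print(states_per_pin ** data_pins)     # 256 possible data values (0-255)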

Modern CPUs are more complicated because they talk to the RAM with multiple channels and signaling for modern RAM is more complicated due to its speed, but it's essentially the same concept.

Solution 5

A lot of the answers are trying to explain the "binary" side of this, but it may not be clear how that connects to the computer architecture. Computers work with bits in chunks that are called "words". In modern computers, a word is typically 64 bits, and for a long time the standard word size was 32 bits. When you have an integer or unsigned integer data type, one word of bits is all you get. You can work with larger numbers if you write software to do the arithmetic, but that's not built into the machine like single-word integer types are.

This is important because every memory location has an address which is just a number. Suppose you were working with a 1980s machine with 16-bit words. For unsigned integers, you can represent 2^16 different integers, and so those are all the memory addresses you can have. It's like noticing that there can only be ten billion phones in the US and Canada because phone numbers only have ten digits. (Fewer really, because of limits on valid phone numbers.)
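
A rough sketch of both counts mentioned above (word size and phone numbers), assuming plain unsigned arithmetic:

    word_bits = 16
    print(2 ** word_bits)      # 65536 distinct addresses on a 16-bit machine

    phone_digits = 10
    print(10 ** phone_digits)  # 10000000000 -- ten billion possible ten-digit phone numbers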

With 64-bit machines, you can address far more memory than you could ever provide, but for a long time address space was a real limiting factor.

This oversimplifies in places, but I hope it gives the right overall idea.


Comments

  • Admin
    Admin almost 2 years

    So I was reading up on address buses and max memory sizes, so my question is, when computing max memory size for any architecture, where does the 2 in 2^n where n is the address bus bit size come from? Also what does a "bit size" even mean? I'm so confused.

  • Mokubai
    Mokubai about 4 years
    It should probably be stated that this is the numerical "base" as in binary is base-2, decimal is base-10, and so on. This may give hints for future research and learning about numerical systems and the reason for the 2^n and 10^n.
  • Michael
    Michael about 4 years
Fun fact: Ternary computers actually existed and Donald Knuth once said he thought they'd make a comeback.
  • Reversed Engineer
    Reversed Engineer about 4 years
    ...and that the word "bit" is a contraction of "binary digit", which is either 0 or 1
  • Ross Presser
    Ross Presser about 4 years
Another fun fact that is marginally related: you can express complex numbers using only the digits 0,1,2,3 in the quater-imaginary base 2i
  • Stewart
    Stewart about 4 years
    This answer is outstanding. Step-by-step. Care and effort was put into this answer.
  • SiHa
    SiHa about 4 years
    I believe that's the maximum number of studs in the Wii 'Lego' games, too. I'd never really considered that it was 32-bit limitation.
  • Daniel R Hicks
    Daniel R Hicks about 4 years
Ah, but the wave of the future is TERNARY!!
  • EvilTak
    EvilTak about 4 years
    I'm nit picking here, but a signed 32-bit 2's complement integer can represent values from -2^31 to 2^31-1, not -2^32 to 2^32-1. An unsigned 32-bit integer can only represent values up to 2^32-1.
  • Frank Thomas
    Frank Thomas about 4 years
    +1 for covering words
  • David Spillett
    David Spillett about 4 years
@EvilTak - ah yes, typo on my part, thanks for spotting, will correct.
  • SirTechSpec
    SirTechSpec about 4 years
    You say "This concept isn't even limited to numbers, but any well designed set of values." I think you mean well designated, as in, "it has to be one of these possibilities". The possibilities can be as poorly designed as you like, so long as they're consistent. ;)
  • Frank Thomas
    Frank Thomas about 4 years
Well, to calculate a range, ordinality is required, so that is part of my definition of well-designed, but yeah, that is correct.
  • 0xACE
    0xACE about 4 years
This is good at explaining what bit size means but leaves the "2" question unanswered. The "2" is due to the "bit states", as another answer (@Akina; more votes) mentions.