What happens when AUTO_INCREMENT on an integer column reaches the maximum value in a database?


Solution 1

Jim Martin's comment from §3.6.9. "Using AUTO_INCREMENT" of the MySQL documentation:

Just in case there's any question, the AUTO_INCREMENT field /DOES NOT WRAP/. Once you hit the limit for the field size, INSERTs generate an error. (As per Jeremy Cole)

A quick test with MySQL 5.1.45 results in an error of:

ERROR 1467 (HY000): Failed to read auto-increment value from storage engine

You could test for that error on insert and take appropriate action.
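To make that concrete without needing a live MySQL server, here is a hedged sketch that simulates the non-wrapping behavior: the counter sticks at the column's maximum and further INSERTs fail, mirroring the error 1467 shown above (the `Table` and `AutoIncrementError` names are illustrative stand-ins, not real driver API):

```python
# Simulation of MySQL's non-wrapping AUTO_INCREMENT behavior: once the
# counter passes the column's maximum, inserts raise instead of wrapping.

TINYINT_SIGNED_MAX = 127  # smallest MySQL integer type, for a quick demo

class AutoIncrementError(Exception):
    """Stand-in for MySQL ERROR 1467 (HY000)."""

class Table:
    def __init__(self, max_id):
        self.max_id = max_id
        self.next_id = 1
        self.rows = []

    def insert(self, value):
        if self.next_id > self.max_id:
            # MySQL does NOT wrap back to 1; it raises an error instead.
            raise AutoIncrementError(
                "Failed to read auto-increment value from storage engine")
        self.rows.append((self.next_id, value))
        self.next_id += 1

t = Table(TINYINT_SIGNED_MAX)
for i in range(TINYINT_SIGNED_MAX):
    t.insert(f"row-{i}")

try:
    t.insert("one too many")
except AutoIncrementError as e:
    print("caught:", e)  # take appropriate action here
```

In a real application the same structure applies: catch the driver's exception around the INSERT and check for error code 1467.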

Solution 2

Just to calm the nerves, consider this:

Suppose you have a database that inserts a new row every time a user executes a transaction on your website.

With a 64-bit integer as the ID, consider the overflow condition: with a world population of 6 billion, if every human on earth executed one transaction per second, every day of every year without rest, it would still take roughly 97 years for an unsigned 64-bit ID to wrap around (about 48 years for a signed one).

I.e., only Google needs to vaguely consider this problem, occasionally, during a coffee break.
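The back-of-the-envelope arithmetic above is easy to verify (assuming 6 billion users at one transaction per second, nonstop):

```python
# Years until a 64-bit AUTO_INCREMENT column is exhausted, assuming
# 6 billion users each inserting one row per second, around the clock.
SECONDS_PER_YEAR = 86_400 * 365
users = 6_000_000_000
rows_per_year = users * SECONDS_PER_YEAR

years_unsigned = 2**64 / rows_per_year  # ~97.5 years
years_signed = 2**63 / rows_per_year    # ~48.7 years

print(f"unsigned BIGINT: {years_unsigned:.1f} years")
print(f"signed BIGINT:   {years_signed:.1f} years")
```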

Solution 3

You will know when it is about to overflow by monitoring the largest ID, and you should change the column type well before any exception comes close to being thrown.

In fact, you should design with a large enough datatype to begin with: your database performance will not suffer even if you use a 64-bit ID from the start.
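A minimal sketch of that monitoring idea (the current maximum ID and insert rate below are hypothetical sample numbers; in practice you would read `MAX(id)` and your actual insert rate from your own tables):

```python
# Estimate how close an AUTO_INCREMENT column is to its limit and how
# long the remaining headroom lasts at the current insert rate.
INT_SIGNED_MAX = 2**31 - 1  # MySQL signed INT maximum (2147483647)

def headroom(current_max_id, limit, rows_per_day):
    used = current_max_id / limit
    days_left = (limit - current_max_id) / rows_per_day
    return used, days_left

# Hypothetical numbers: 1.9 billion rows so far, 1 million inserts/day.
used, days_left = headroom(1_900_000_000, INT_SIGNED_MAX, 1_000_000)
print(f"{used:.0%} of the range used, ~{days_left:.0f} days of headroom")
```

At those sample numbers you would have well under a year to migrate to BIGINT, which is exactly the kind of warning you want long before the error fires.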

Solution 4

The answers here state what happens, but only one says how to detect the problem (and then only after the error has already occurred). It is generally better to catch these things before they become a production issue, so I wrote a query that detects when an overflow is about to happen:

SELECT
  c.TABLE_CATALOG,
  c.TABLE_SCHEMA,
  c.TABLE_NAME,
  c.COLUMN_NAME
FROM information_schema.COLUMNS AS c
JOIN information_schema.TABLES AS t USING (TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME)
WHERE c.EXTRA LIKE '%auto_increment%'
  AND t.AUTO_INCREMENT / CASE c.DATA_TYPE
      WHEN 'TINYINT' THEN IF(c.COLUMN_TYPE LIKE '% UNSIGNED', 255, 127)
      WHEN 'SMALLINT' THEN IF(c.COLUMN_TYPE LIKE '% UNSIGNED', 65535, 32767)
      WHEN 'MEDIUMINT' THEN IF(c.COLUMN_TYPE LIKE '% UNSIGNED', 16777215, 8388607)
      WHEN 'INT' THEN IF(c.COLUMN_TYPE LIKE '% UNSIGNED', 4294967295, 2147483647)
      WHEN 'BIGINT' THEN IF(c.COLUMN_TYPE LIKE '% UNSIGNED', '18446744073709551615', 9223372036854775807) # quoted: the literal exceeds the signed BIGINT range
      ELSE 0
    END > .9; # flag columns that have used more than 90% of their range
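The same threshold logic can also live in application-side monitoring; here is a sketch mirroring the query's CASE expression (the type names and limits come straight from the query above; the function name is mine):

```python
# Maximum AUTO_INCREMENT value per MySQL integer type, mirroring the
# CASE expression in the query above: (signed_max, unsigned_max).
INT_LIMITS = {
    "tinyint":   (127, 255),
    "smallint":  (32767, 65535),
    "mediumint": (8388607, 16777215),
    "int":       (2147483647, 4294967295),
    "bigint":    (9223372036854775807, 18446744073709551615),
}

def near_overflow(next_auto_increment, data_type, unsigned, threshold=0.9):
    """True when the next AUTO_INCREMENT value exceeds `threshold`
    of the column's range (the query's `> .9` condition)."""
    signed_max, unsigned_max = INT_LIMITS[data_type.lower()]
    limit = unsigned_max if unsigned else signed_max
    return next_auto_increment / limit > threshold

print(near_overflow(2_000_000_000, "INT", unsigned=False))  # True: past 90%
print(near_overflow(2_000_000_000, "INT", unsigned=True))   # False: under 50%
```

Feed it the `AUTO_INCREMENT` value from `information_schema.TABLES` and you get the same alert without running the SQL by hand.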

Hope this helps someone somewhere.



Author: Jonas, a passionate software developer interested in distributed systems.

Updated on July 05, 2022

Comments

  • Jonas
    Jonas almost 2 years

    I am implementing a database application and I will use both JavaDB and MySQL as the database. I have an ID column in my tables typed as integer, and I use the database's auto_increment function for its value.

    But what happens when I get more than 2 (or 4) billion rows and an integer is not enough? Does the integer overflow and continue, or is an exception thrown that I can handle?

    Yes, I could change to long as the datatype, but how do I check when that is needed? And I think there may be a problem with the last_inserted_id() functions if I use long as the datatype for the ID column.

  • tobia.zanarella
    tobia.zanarella almost 12 years
    Sorry, just to be correct, it would take somewhat more than 9 years :) After 1 minute, 360 billion transactions will have happened; after 1 hour, 21,600 billion; in 1 day, 518,400 billion; in 1 year, 1,892,160 thousand billion. After 8 years, 15,137,280 thousand billion rows will be saved in the db. The limit of UNSIGNED BIGINT is 18,446,744 thousand billion.
  • lockedscope
    lockedscope about 11 years
    You have one extra zero in the 1-year figure, so it would be approximately 97 years.
  • Anestis Kivranoglou
    Anestis Kivranoglou over 10 years
    And what if I have 2000 proxies making an insert attack, launching multiple threads/requests? Or, for example, I have a history upload function with 60,000 entries, so uploading that history will require fewer requests.
  • christianleroy
    christianleroy over 8 years
    I've been wondering about this for a long time now (like, 2 years, LOL). What's the solution if it happens? What if the supposedly 1 billion active users of Facebook comment at least 3 times a day, and these comments are all in one table with its ID set as int(11); what's the solution when it reaches the column's maximum value? It seems like a problem worth thinking about, for Facebook at least.
  • majidarif
    majidarif about 7 years
    Well, for me, with int(10) unsigned, the id column has reached 237836414 in just 2 months, or 5.54% of UINT_MAX. So it's a problem.
  • Franck
    Franck almost 7 years
    @majidarif at this rate, if you used a BIGINT instead, it would take you an extra 77,560,638,270 months to use all the possible values, or 6.5 billion years, give or take a couple of million years.
  • William T Froggard
    William T Froggard over 6 years
    Bottom line: when the upper bound is not knowable, just use BIGINT (i.e., 64-bit integers for programmers). That's a basic rule of thumb for everything in computing. Still bounded, but realistically irrelevant.
  • Victor
    Victor over 5 years
    In my tests, the generated increment number is the same each time (the maximum value) after the limit is reached. I do not receive a SQL error as I expected.
  • hygull
    hygull almost 5 years
    A good option would be to use unique strings like '6e894c6a-a02a-46ba-b2aa-de0d66d13293' or '446a571b-d61f-4dae-bc6d-df1cc2ab52c2'. In Python, the uuid module can generate these, e.g. using str(uuid.uuid4()).