PostgreSQL: out of shared memory?


Solution 1

When you create a table, you get an exclusive lock on it that lasts until the end of the transaction, even if you go ahead and drop it afterwards.

So if I start a transaction and create a temp table:

steve@steve@[local] *=# create temp table foo(foo_id int);
CREATE TABLE
steve@steve@[local] *=# select * from pg_locks where pid = pg_backend_pid();
   locktype    | database | relation  | page | tuple | virtualxid | transactionid | classid |   objid   | objsubid | virtualtransaction |  pid  |        mode         | granted 
---------------+----------+-----------+------+-------+------------+---------------+---------+-----------+----------+--------------------+-------+---------------------+---------
 virtualxid    |          |           |      |       | 2/105315   |               |         |           |          | 2/105315           | 19098 | ExclusiveLock       | t
 transactionid |          |           |      |       |            |        291788 |         |           |          | 2/105315           | 19098 | ExclusiveLock       | t
 relation      |    17631 |     10985 |      |       |            |               |         |           |          | 2/105315           | 19098 | AccessShareLock     | t
 relation      |    17631 | 214780901 |      |       |            |               |         |           |          | 2/105315           | 19098 | AccessExclusiveLock | t
 object        |    17631 |           |      |       |            |               |    2615 | 124616403 |        0 | 2/105315           | 19098 | AccessExclusiveLock | t
 object        |        0 |           |      |       |            |               |    1260 |     16384 |        0 | 2/105315           | 19098 | AccessShareLock     | t
(6 rows)

These 'relation' locks aren't dropped when I drop the table:

steve@steve@[local] *=# drop table foo;
DROP TABLE
steve@steve@[local] *=# select * from pg_locks where pid = pg_backend_pid();
   locktype    | database | relation  | page | tuple | virtualxid | transactionid | classid |   objid   | objsubid | virtualtransaction |  pid  |        mode         | granted 
---------------+----------+-----------+------+-------+------------+---------------+---------+-----------+----------+--------------------+-------+---------------------+---------
 virtualxid    |          |           |      |       | 2/105315   |               |         |           |          | 2/105315           | 19098 | ExclusiveLock       | t
 object        |    17631 |           |      |       |            |               |    1247 | 214780902 |        0 | 2/105315           | 19098 | AccessExclusiveLock | t
 transactionid |          |           |      |       |            |        291788 |         |           |          | 2/105315           | 19098 | ExclusiveLock       | t
 relation      |    17631 |     10985 |      |       |            |               |         |           |          | 2/105315           | 19098 | AccessShareLock     | t
 relation      |    17631 | 214780901 |      |       |            |               |         |           |          | 2/105315           | 19098 | AccessExclusiveLock | t
 object        |    17631 |           |      |       |            |               |    2615 | 124616403 |        0 | 2/105315           | 19098 | AccessExclusiveLock | t
 object        |    17631 |           |      |       |            |               |    1247 | 214780903 |        0 | 2/105315           | 19098 | AccessExclusiveLock | t
 object        |        0 |           |      |       |            |               |    1260 |     16384 |        0 | 2/105315           | 19098 | AccessShareLock     | t
(8 rows)

In fact, it added two more locks... It seems that if I continually create and drop that temp table, it adds three locks each time.

So I guess one answer is that you will need enough locks to cope with all these tables being added and dropped throughout the transaction. Alternatively, you could reuse the temp tables between queries, simply truncating them to remove all the temp data (see the sketch below).
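A minimal sketch of that reuse pattern with psycopg2 (the connection string, table, and batch data are invented for illustration):

import psycopg2

conn = psycopg2.connect("dbname=mydb")  # hypothetical connection string
cur = conn.cursor()

# Create the scratch table once, so its locks are taken only once.
cur.execute("CREATE TEMP TABLE scratch (id int, payload text)")

batches = [[(1, "a"), (2, "b")], [(3, "c")]]  # stand-in for the real batches
for batch in batches:
    # TRUNCATE empties the table without dropping it; the transaction
    # already holds the lock on scratch, so no new lock entries pile up.
    cur.execute("TRUNCATE scratch")
    cur.executemany("INSERT INTO scratch (id, payload) VALUES (%s, %s)", batch)
    # ... run the extensive queries against scratch here ...

conn.commit()

The other lever is the lock table itself: PostgreSQL sizes it for roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) entries, so raising max_locks_per_transaction in postgresql.conf (which requires a server restart) also buys headroom.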

Solution 2

Did you create multiple savepoints with the same name without releasing them?

I followed these instructions, repeatedly executing SAVEPOINT savepoint_name but without ever executing any corresponding RELEASE SAVEPOINT savepoint_name statements. PostgreSQL was just masking the old savepoints, never freeing them, and it kept track of each one until it ran out of memory for locks. My PostgreSQL memory limits were apparently much lower; it only took ~10,000 savepoints for me to hit max_locks_per_transaction.
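A minimal psycopg2 sketch of the fix (the loop body and names are placeholders): pair every SAVEPOINT with a RELEASE SAVEPOINT once you no longer need to roll back to it, so shadowed savepoints don't pile up.

import psycopg2

conn = psycopg2.connect("dbname=mydb")  # hypothetical connection string
cur = conn.cursor()

for i in range(10000):
    cur.execute("SAVEPOINT sp")  # reusing the name only shadows the old one
    try:
        cur.execute("SELECT 1")  # stand-in for the real per-item statement
    except psycopg2.Error:
        cur.execute("ROLLBACK TO SAVEPOINT sp")
    # Without this, every shadowed savepoint survives until commit,
    # and what it holds keeps counting against max_locks_per_transaction.
    cur.execute("RELEASE SAVEPOINT sp")

conn.commit()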

Solution 3

Well, are you running the entire create + queries inside a single transaction? That would perhaps explain the issue. The fact that it happened when you were dropping tables would not necessarily mean anything; that may just happen to be the point when the pool of free locks ran out.

Using a view might be an alternative to a temporary table, and it would definitely be my first pick if you're creating this thing and then immediately removing it (see the sketch below).
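For instance, a rough psycopg2 sketch with invented names; since a view is just a stored query, nothing is materialized and there is no create/drop lock churn per batch:

import psycopg2

conn = psycopg2.connect("dbname=mydb")  # hypothetical connection string
cur = conn.cursor()

# Define the view once; later queries can read from it directly.
cur.execute("""
    CREATE OR REPLACE VIEW candidate_rows AS
    SELECT id, payload
    FROM big_table            -- hypothetical source table
    WHERE payload IS NOT NULL
""")

cur.execute("SELECT * FROM candidate_rows")
rows = cur.fetchmany(1000)
# ... process rows ...
conn.commit()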


Comments

  • Claudiu, almost 2 years ago

    I'm running a bunch of queries using Python and psycopg2. I create one large temporary table with about 2 million rows, then I get 1000 rows at a time from it using cur.fetchmany(1000) and run more extensive queries involving those rows. The extensive queries are self-sufficient, though - once they are done, I don't need their results anymore when I move on to the next 1000.

    However, about 1000000 rows in, I got an exception from psycopg2:

    psycopg2.OperationalError: out of shared memory
    HINT:  You might need to increase max_locks_per_transaction.
    

    Funnily enough, this happened when I was executing a query to drop some temporary tables that the more extensive queries created.

    Why might this happen? Is there any way to avoid it? It was annoying that this happened halfway through, meaning I have to run it all again. What might max_locks_per_transaction have to do with anything?

    NOTE: I'm not doing any .commit()s, but I'm deleting all the temporary tables I create, and I'm only touching the same 5 tables anyway for each "extensive" transaction, so I don't see how running out of table locks could be the problem...
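
    Since locks are only released when the transaction ends, one hedged sketch of a workaround (the connection string and queries are placeholders) is to stream the big table through a named cursor opened with withhold=True and commit after each batch; temp tables persist across commits by default:

    import psycopg2

    conn = psycopg2.connect("dbname=mydb")  # hypothetical connection string

    # A named (server-side) cursor; withhold=True lets it survive commits.
    big = conn.cursor(name="big_scan", withhold=True)
    big.execute("SELECT * FROM big_temp_table")  # hypothetical 2M-row source

    work = conn.cursor()
    while True:
        rows = big.fetchmany(1000)
        if not rows:
            break
        # ... run the extensive queries for these rows using `work` ...
        # Committing ends the transaction and frees every lock the
        # batch accumulated, so the count never grows across batches.
        conn.commit()

    big.close()
    conn.close()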