How do I speed up counting rows in a PostgreSQL table?

Solution 1

For a very quick estimate:

SELECT reltuples FROM pg_class WHERE relname = 'my_table';

There are several caveats, though. For one, relname is not necessarily unique in pg_class. There can be multiple tables with the same relname in multiple schemas of the database. To be unambiguous:

SELECT reltuples::bigint FROM pg_class WHERE oid = 'my_schema.my_table'::regclass;

If you do not schema-qualify the table name, the cast to regclass observes the current search_path to pick the best match. If the table does not exist (or cannot be seen) in any schema of the search_path, you get an error. See Object Identifier Types in the manual.
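To illustrate, using the same table as above:

SELECT 'my_table'::regclass;            -- resolved via the current search_path
SELECT 'my_schema.my_table'::regclass;  -- schema-qualified, unambiguous

The first form raises an error (relation "my_table" does not exist) if no schema in the search_path contains a visible my_table.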

The cast to bigint formats the real number nicely, especially for big counts (reltuples is of type real, which displays large values in scientific notation).
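For example, with a table of a few million rows the raw real value may display in scientific notation, while the cast yields a plain integer (output values are illustrative):

SELECT reltuples FROM pg_class WHERE oid = 'my_schema.my_table'::regclass;
-- 4.525103e+06

SELECT reltuples::bigint FROM pg_class WHERE oid = 'my_schema.my_table'::regclass;
-- 4525103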

Also, reltuples can be more or less out of date. There are ways to make up for this to some extent; see this later answer with new and improved options.

And a query on pg_stat_user_tables is many times slower (though still much faster than a full count), as that's a view on a couple of tables.
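You can inspect its definition yourself; pg_views lists the definition of every view, including system views:

SELECT definition FROM pg_views WHERE viewname = 'pg_stat_user_tables';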

Solution 2

Count is slow for big tables, so you can get a close estimate this way:

SELECT reltuples::bigint AS estimate
FROM pg_class
WHERE relname = 'tableName';

and it's extremely fast. Thanks to the ::bigint cast the result is an integer rather than a float, but it is still only a close estimate.

  • reltuples is a column of the pg_class table; it holds the "number of rows in the table. This is only an estimate used by the planner. It is updated by VACUUM, ANALYZE, and a few DDL commands such as CREATE INDEX" (manual); so you can force a refresh yourself, as shown after this list
  • "The catalog pg_class catalogs tables and most everything else that has columns or is otherwise similar to a table. This includes indexes (but see also pg_index), sequences, views, composite types, and some kinds of special relation" (manual)
  • "Why is SELECT count(*) FROM bigtable; slow?": http://wiki.postgresql.org/wiki/FAQ#Why_is_.22SELECT_count.28.2A.29_FROM_bigtable.3B.22_slow.3F

Solution 3

Aside from running COUNT() on an indexed field (which hopefully 'id' is), the next best thing would be to cache the row count in some table, maintained by a trigger on INSERT. Naturally, you would then check the cache instead of counting. (Solution 4 below implements exactly this.)

For an approximation you can try this (from https://wiki.postgresql.org/wiki/Count_estimate):

select reltuples from pg_class where relname='tablename';
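The same wiki page also shows a way to estimate the result of an arbitrary query (including a WHERE clause) by pulling the planner's row estimate out of EXPLAIN; a sketch along those lines (tablename and the flag column are placeholders):

CREATE OR REPLACE FUNCTION count_estimate(query text)
RETURNS integer AS
$func$
DECLARE
    rec  record;
    rows integer;
BEGIN
    -- run EXPLAIN on the given query and extract "rows=N" from the top plan line
    FOR rec IN EXECUTE 'EXPLAIN ' || query LOOP
        rows := substring(rec."QUERY PLAN" FROM ' rows=([[:digit:]]+)');
        EXIT WHEN rows IS NOT NULL;
    END LOOP;
    RETURN rows;
END
$func$ LANGUAGE plpgsql;

SELECT count_estimate('SELECT 1 FROM tablename WHERE flag = true');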

Solution 4

You can maintain the exact count in the table by using an AFTER INSERT OR DELETE trigger. Something like this:

CREATE TABLE tcounter(id serial primary key, table_schema text, table_name text, count bigint);

insert into tcounter(table_schema, table_name, count) select 'my_schema', 'my_table', count(*) from my_schema.my_table;

and use a trigger:

CREATE OR REPLACE FUNCTION ex_count()
RETURNS trigger AS
$BODY$
BEGIN
    IF (TG_OP = 'INSERT') THEN
        UPDATE tcounter SET count = count + 1
        WHERE table_schema = TG_TABLE_SCHEMA::text
        AND   table_name   = TG_TABLE_NAME::text;
    ELSIF (TG_OP = 'DELETE') THEN
        UPDATE tcounter SET count = count - 1
        WHERE table_schema = TG_TABLE_SCHEMA::text
        AND   table_name   = TG_TABLE_NAME::text;
    END IF;
    RETURN NULL;  -- the return value of an AFTER row-level trigger is ignored
END
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;

CREATE TRIGGER tg_counter  AFTER INSERT OR DELETE
  ON my_schema.my_table  FOR EACH ROW  EXECUTE PROCEDURE ex_count();

And ask for the count:

select * from tcounter where table_schema = 'my_schema' and table_name = 'my_table';

Note that the initial insert above runs count(*) once, to seed the counter with the table's current row count.
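A quick sanity check (the id value is hypothetical):

DELETE FROM my_schema.my_table WHERE id = 1;

SELECT count FROM tcounter WHERE table_schema = 'my_schema' AND table_name = 'my_table';

Since the trigger fires FOR EACH ROW, the counter moves by exactly the number of rows inserted or deleted. Two caveats: every writing transaction now contends for the same tcounter row, and TRUNCATE does not fire row-level triggers, so the counter goes stale after a TRUNCATE unless you also reset it with an AFTER TRUNCATE statement trigger.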

Solution 5

You can get an estimate from the system view pg_stat_user_tables.

select schemaname, relname, n_live_tup 
from pg_stat_user_tables 
where schemaname = 'your_schema_name'
and relname = 'your_table_name';

Comments

  • Juan Carlos Coto, almost 3 years ago:

    We need to count the number of rows in a PostgreSQL table. In our case, no conditions need to be met, and it would be perfectly acceptable to get a row estimate if that significantly improved query speed.

    Basically, we want select count(id) from <table> to run as fast as possible, even if that implies not getting exact results.

  • varunvlalan, about 9 years ago:
    Nice, +1. Any idea how we can speed up counting rows in Postgres when the query has a search condition?
  • Erwin Brandstetter, about 9 years ago:
    @varunvlalan: Please ask the question as a question (with the necessary detail to make it clear). Comments are not the place. You can always link to this one for context.
  • rogerdpack, over 8 years ago:
    Nice, the ::bigint part avoids some overflow with table sizes > 4B or some threshold like that, thanks!
  • Naga, over 8 years ago:
    Is it possible to add a WHERE clause if I want to?
  • Giox, about 8 years ago:
    This query gives an estimate. What if I need to build reports with an exact count of records, run on a table of several hundred thousand records? I've seen Postgres take up to 13s to count 20,000 records! I can't believe it.
  • Erwin Brandstetter, about 8 years ago:
    @Giox: I can't believe it either. 13s for 20k rows is far beyond normal, indicating a problem in your DB; it shouldn't take more than a couple of ms. Please ask a question providing the necessary details. You can always link to this one for context. Comments are not the place. You can drop a short comment here (or maybe here?) to link to the related question and get my attention.
  • Nikolay Kuznetsov, about 7 years ago:
    This query gave me 415K; however, COUNT(*) returns 8M.
  • Erwin Brandstetter
    Erwin Brandstetter about 7 years
    @NikolayKuznetsov: This can happen immediately after big writes to the table, or if autovacuum is not running or badly misconfigured or for temporary tables, which are not covered by autovacuum. Run ANALYZE tbl; and then try again. Then find the reason why the table wasn't analyzed automatically. Follow the linked answer above for more.