postgres vacuum
I have hit this problem before. Below are my notes on how I fixed it.
You need to fix the problem by running a vacuum, as the error conveniently hints. To do this, first shut down postgres:
sudo /sbin/service postgresql stop
Now you'll need to go through each database and perform a vacuum with the standalone backend. As the postgres user, run:
postgres -D /opt/pgsql/data/ postgres
Where /opt/pgsql/data is the path to postgres's data directory, and postgres is the name of the database you want to fix.
You will get a prompt like:
PostgreSQL stand-alone backend 8.1.18
backend>
Simply enter VACUUM at the backend> prompt.
You will now most likely get something like:
WARNING: database "template1" must be vacuumed within 999407 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in "template1".
Just keep repeating these steps, changing the database name each time, until you no longer get the warnings. When that happens, you can restart postgres and all will be right again.
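The repeat-per-database loop above can be sketched as a small shell script. The data directory and the database list are assumptions for illustration (your cluster will have its own set of databases); with DRY_RUN=1 it only prints what it would do, so the sketch is safe to run as-is:

```shell
#!/bin/sh
# Sketch: vacuum each database via the standalone backend.
# PGDATA and the database names below are assumptions -- adjust for your install.
PGDATA=/opt/pgsql/data
DRY_RUN=1   # set to 0 to actually invoke the standalone backend

for DBNAME in template1 postgres opennms; do
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "would run: echo VACUUM | postgres -D $PGDATA $DBNAME"
  else
    # Feed VACUUM to the standalone backend on stdin.
    echo "VACUUM;" | postgres -D "$PGDATA" "$DBNAME"
  fi
done
```

As noted above, this has to run as the postgres user and with the regular server stopped, since the standalone backend wants exclusive access to the data directory.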
Edit: I should also mention that after we went through this process a few times, we decided to enable the autovacuum process: http://www.postgresql.org/docs/8.1/static/maintenance.html#AUTOVACUUM
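For reference, on 8.1 the autovacuum daemon is off by default and depends on the row-level statistics collector, so the relevant postgresql.conf fragment looks roughly like this (check the linked manual page for the exact settings in your version):

```
# postgresql.conf (PostgreSQL 8.1) -- autovacuum needs the stats collector
stats_start_collector = on
stats_row_level       = on
autovacuum            = on
```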
Benny B
Updated on September 18, 2022

Comments
- Benny B, over 1 year ago:
Ok guys, I've moved to a new company and I see they have a single PostgreSQL database here to run OpenNMS.
I'm a MySQL person, not familiar at all with PostgreSQL, but I've at least learned how to run psql and postgres -D to get into single-user mode. Anyway, OpenNMS crashed. I got in and tracked it down to PostgreSQL giving this error:
WARNING: database "template1" must be vacuumed within 965550 transactions HINT: To avoid a database shutdown, execute a full-database VACUUM in "template1".
It actually started at 1 million and I'm down to 965550 as you can see. I've gotten to this point by logging in single-user (postgres -D) and running VACUUM FULL. Now I expected this to clean it up, but the result is exactly 55 rows of:
WARNING: database "template1" must be vacuumed within 938861 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in "template1".
WARNING: database "template1" must be vacuumed within 938860 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in "template1".
(and so on, the count alternating between 938861 and 938860)
So the question I have is, do I just need to keep running this until I reach 0 or am I doing it wrong?
- Benny B, almost 13 years ago:
Update: I've determined I was on my way to a bad thing. Once it hits 0, it's all over. So now I run postgres -D /path/ template1 and things appear a little different. I run just "VACUUM" there and my error is:
ERROR: could not access status of transaction 711058365
DETAIL: could not open file "pg_clog/02A6": No such file or directory
I don't have a problem dumping my opennms data and starting over, but I don't know how to do it without fixing template1.
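One workaround sometimes used for a missing pg_clog segment is to create a zero-filled segment file of the matching size (clog segments are 256 kB) so the vacuum can proceed past the missing file. Rows referencing those transactions may then be treated as never committed, so this is a last resort, reasonable only when (as here) the data can be dumped and re-created. A sketch, writing to a scratch directory rather than a live cluster; the segment name 02A6 comes from the error message above:

```shell
#!/bin/sh
# Sketch only: create a zero-filled 256 kB clog segment.
# Point CLOG_DIR at $PGDATA/pg_clog on the real cluster (server stopped,
# backup taken); here it is a scratch directory for illustration.
CLOG_DIR=./pg_clog_scratch
mkdir -p "$CLOG_DIR"
dd if=/dev/zero of="$CLOG_DIR/02A6" bs=256k count=1 2>/dev/null
ls -l "$CLOG_DIR/02A6"
```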
- mdpc, almost 13 years ago:
You might wish to reindex (i.e., rebuild the indexes of) the database after the vacuum as well.