PostgreSQL In-Memory Database


Solution 1

I'm now using streaming replication, which is asynchronous. This means my MASTER could be running entirely in memory, with a separate SLAVE instance using traditional disk.

A machine restart would involve stopping the SLAVE, copying the PostgreSQL data back onto the ramdisk, and then restarting the MASTER followed by the SLAVE. This is an interesting possibility that compares well with something like Redis, but with the advantages of redundancy / hot standby / backups / SQL / a rich toolset, etc.
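
To make that concrete, here is a minimal sketch of the restart procedure, assuming FreeBSD (to match the question's mdmfs command), a ramdisk primary at /mnt/pgdata, and an on-disk standby at /disk/pg/standby; the paths, the ramdisk size, and the standby.signal step are my own assumptions for illustration, not part of the original setup:

# Stop the on-disk SLAVE cleanly so its data directory is consistent.
pg_ctl stop -D /disk/pg/standby -m fast

# Recreate the ramdisk and copy the standby's data back into it.
mdmfs -s 8192m md2 /mnt
mkdir -p /mnt/pgdata
rsync -a /disk/pg/standby/ /mnt/pgdata/
chown -R postgres:postgres /mnt/pgdata && chmod 700 /mnt/pgdata

# The copy must start as a primary, not a standby (PostgreSQL 12+; older
# versions would remove recovery.conf instead).
rm -f /mnt/pgdata/standby.signal

# Start the in-memory MASTER, then the SLAVE so it resumes replicating.
pg_ctl start -D /mnt/pgdata
pg_ctl start -D /disk/pg/standby

The on-disk standby is the only durable copy in this setup, so it has to be stopped cleanly before being copied back.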

Solution 2

Have you seen the Server Configuration chapter of the manual? Check it out, then google "postgresql memory tuning".
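
For a rough idea of what that tuning looks like on a dedicated 24 GB box, here is a sketch; the values are illustrative starting points to benchmark against your own workload, the data directory path is assumed, and ALTER SYSTEM needs PostgreSQL 9.4+ (edit postgresql.conf directly on older versions):

psql -U postgres <<'SQL'
ALTER SYSTEM SET shared_buffers = '6GB';         -- ~25% of RAM is a common starting point
ALTER SYSTEM SET effective_cache_size = '18GB';  -- planner hint: total memory available for caching
ALTER SYSTEM SET work_mem = '64MB';              -- per sort/hash operation, per connection
ALTER SYSTEM SET maintenance_work_mem = '1GB';   -- VACUUM, CREATE INDEX, etc.
SQL

# shared_buffers only takes effect after a restart.
pg_ctl restart -D /var/db/postgres/data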

Solution 3

The answer is caching. Look into adding memory to the server, then tuning PostgreSQL to make the most of it. The file system cache also helps, doing part of this automatically. Performance will approach that of an in-memory database, except for the first hit on each page, without you having to manage the cache yourself, and the database can still be larger than physical memory.
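
If you want to pay that "first hit" cost up front rather than on demand, one option worth mentioning (my addition, not part of the answer above) is the contrib extension pg_prewarm, available since PostgreSQL 9.4; the database, table, and index names below are made up:

psql -U postgres -d mydb <<'SQL'
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('orders');        -- read the whole table into shared_buffers
SELECT pg_prewarm('orders_pkey');   -- indexes can be prewarmed as well
SQL

Run it after each restart (or from a startup script) and the hot tables are already cached before the first query arrives.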

Solution 4

I have to believe that Postgres is written in such a way as to take full advantage of available RAM in the server. As you may have guessed by now, there's no reliable way to do this outside of Postgres.

Within Postgres, transactions ensure that all operations are atomic, so if the power goes down while you are writing to a Postgres database, you will only lose that particular operation, and not the entire database.
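
A tiny example of that guarantee, with a made-up table: if the power fails at any point before the COMMIT below returns, neither UPDATE is applied; once it returns, both changes are on disk in the write-ahead log.

psql -d mydb <<'SQL'
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;  -- durable only once this returns (flushed to the WAL)
SQL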


Comments

  • David Barnes about 2 years

    I want to run my PostgreSQL database server from memory. The reason is that on my new server, I have 24 GB of memory, and hardly any of it is used.

    I know I can run this command to make a ramdisk:

    mdmfs -s 1024m md2 /mnt
    

    And I could theoretically have PostgreSQL store its data there. But the problem with this is that if the server crashes or reboots, the data will be gone.

    Basically, I want the database to be loaded in memory at all times so that it does not have to go to the hard disk drive to read every record, since I have TONS of memory and since memory is faster than hard disk drives.

    Is there a way to do this while also having PostgreSQL write to disk so I don't lose any data in case the server goes down? Or is there a way to cache all data in memory?

  • plundra over 12 years
    This is probably even more feasible now with synchronous replication, doing what the OP wants.
  • Abdul Saqib over 11 years
    I think what the question is asking is related to the persistence of the ramdisk being gone after shutdown, not the atomic nature of a transaction in Postgres.
  • Yinda Yin over 11 years
    @JustBob: This question is over three years old, and the highest voted answer says "Google it." Post an answer if you think you can do better (that shouldn't be too hard).
  • Abdul Saqib over 11 years
    Thanks for the constructive input. I just found this question and thought adding a little detail to a given answer wouldn't hurt anybody.
  • user454322 almost 8 years
    If synchronous replication is used, it should be to another in-memory file system, otherwise there would be no performance gain. When requesting synchronous replication, each commit of a write transaction will wait until confirmation is received that the commit has been written to the transaction log on disk of both the primary and standby server. postgresql.org/docs/9.4/static/…
  • Abhijit Gujar about 6 years
    I was thinking of the same idea.
  • code_dredd over 4 years
    OP is not asking about tuning memory usage; he's asking about having Postgres use system memory as if it were the traditional secondary disk-based storage.