Swap partition size on a 64 GB RAM computer for memory-intensive work

39,223

Solution 1

You probably only need a small amount of swap. When you have sufficient RAM for your computer's typical working set, which I'm pretty sure you do, you only need swap for two things:

  1. You need swap to move information that will likely never be accessed again out of RAM, freeing up more space for the disk cache. Many applications run at system startup and are never touched again. You don't want the pages they dirtied stuck in RAM forever, so you need swap to hold them.

  2. You need swap to cover allocations that will never be filled. This space simply has to be available, even though it will not be used. Without it, the system will have to refuse to allocate memory even when it has plenty of free physical RAM because it has insufficient backing store to permit all of its allocations to be used at once.

Neither of these requires a large amount of swap. 16 GB, for example, should be more than enough. The purpose is not to let you run bigger working sets at the cost of speed; it is to let you use your 64 GB effectively, without clogging it with junk or reserving it for edge cases that will never happen.
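If you want to see what a machine actually does with swap, the live numbers are easy to pull: `swapon --show` and `free -h` give a human-readable view, and `/proc/meminfo` has the raw figures. A minimal sketch reading the kernel's numbers directly (Linux-only; field names are standard):

```shell
# RAM and swap totals straight from the kernel, converted from kB to MiB.
awk '/^(MemTotal|SwapTotal|SwapFree):/ {printf "%-10s %8.0f MiB\n", $1, $2/1024}' /proc/meminfo
```

Watching `SwapFree` over time tells you whether your chosen size is ever actually exercised.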

(I agree with Bert that 4GB is quite likely to be sufficient.)

Solution 2

Red Hat recommends 4 GB of swap on a machine with 64 GB of RAM.

However, sizing swap is more of an art than a science. It depends on what the machine is being used for, how much disk space and memory you have, and other factors. Remember, you can always add more swap later.

The old rule of 2x physical memory is outdated given the amount of memory systems have these days. But running with zero swap is not recommended unless you know what you are doing. The recommended 4 GB is a good starting point.
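Red Hat's published guidance scales with RAM. As a rough sketch (the tiers below are my paraphrase of that guidance, so treat the exact cutoffs as illustrative, not authoritative):

```shell
# Rough swap recommendation by RAM size, loosely following Red Hat's tiers:
#   <= 2 GB RAM -> 2x RAM;  2-8 GB -> equal to RAM;  > 8 GB -> at least 4 GB.
recommend_swap_gb() {
  ram_gb=$1
  if   [ "$ram_gb" -le 2 ]; then echo $((ram_gb * 2))
  elif [ "$ram_gb" -le 8 ]; then echo "$ram_gb"
  else echo 4
  fi
}

recommend_swap_gb 64   # -> 4
```

Add more on top of this if you want hibernation (at least RAM size) or expect sustained overcommit.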

Solution 3

On Linux, you need enough swap so that the total virtual memory available (RAM + SWAP) is enough for all the processes you want to run at once and their maximum virtual footprint.

If you have less swap than this, or no swap at all, the following happens: the system runs out of memory while trying to allocate a page. This is still a soft failure even with no swap, because the system has plenty of "victim" pages it can evict to make room: namely, the pages of all file-backed memory mappings, such as executables and shared libraries!

As your system demands more and more space for data (which cannot be swapped out), it will increasingly evacuate the executable code (shared libraries and executables), leading to terrible thrashing, as the working set is trimmed into a tighter and tighter set of pages.

Swap space softens this problem by providing a place for anonymous (not file mapped) pages to be swapped out: the pages used for memory allocations, so that executable code can stay in memory.
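You can watch this split on a live system: `AnonPages` in `/proc/meminfo` is the swap-backed anonymous memory described above, while `Mapped` covers file-backed mappings that can simply be dropped and re-read from disk. Per process, `VmSwap` shows how much of its memory currently sits in swap (a quick inspection sketch, Linux-only):

```shell
# System-wide split: anonymous (swap-backed) memory vs. file-backed mappings.
grep -E '^(AnonPages|Mapped):' /proc/meminfo

# Per-process view: VmSwap is how much of this process's memory sits in swap.
grep -E '^(VmRSS|VmSwap):' /proc/self/status
```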

Even so, if you don't frequently run memory-intensive tasks, you may be able to get away with running swapless most of the time, and manually configure a swap file (instead of a dedicated partition) when you need it. To make a swap file on the fly, become root and:

dd if=/dev/zero of=/path/to/swapfile bs=$((1024 * 1024)) count=32768  # 32 GiB
chmod 600 /path/to/swapfile  # swap files should not be world-readable
mkswap /path/to/swapfile
swapon /path/to/swapfile

When you don't need it any more:

swapoff /path/to/swapfile
rm /path/to/swapfile

Notes:

  1. You definitely do not need to configure at least as much swap as you have RAM. This rule of thumb dates back to operating systems where it was a hard requirement due to the way swapping was designed.

  2. There are ways to make Linux fail hard when no memory is available, namely via manipulating the values of these sysctl entries:

    vm.overcommit_memory
    vm.overcommit_ratio
    
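For example, a strict no-overcommit policy can be made persistent with a sysctl drop-in file (the filename here is arbitrary and the 80% ratio is only an illustration; mode 2 makes the kernel refuse allocations beyond swap plus `overcommit_ratio` percent of RAM instead of overcommitting and relying on the OOM killer):

```
# /etc/sysctl.d/90-overcommit.conf (hypothetical example)
vm.overcommit_memory = 2
vm.overcommit_ratio = 80
```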

Solution 4

There are more considerations. If you want suspend-to-disk (hibernation) to work, you need swap at least the size of your RAM, and then some. However, that sounds unlikely to matter here, since you seem to be building mainly a computational workhorse.

In that case, consider using a swap file instead of a partition. You don't need to worry much about sizing, and removing it or enlarging it later doesn't require any repartitioning. There is no noticeable performance penalty for using a file over a partition. If you ever do end up needing it, look at how much is actually used; that will also give you good hints for sizing.

Solution 5

A much better idea than having "a lot of swap" is (re-)organizing your work so that the working sets fit in memory, then using the file-system to store and retrieve the work you do. I.e., instead of forcing the OS to guess what your memory usage patterns will be, use what you know about your problems to control your memory usage patterns.

As a random example that is immediately relevant to me this summer... In implementing the quadratic sieve, one needs a large (apparently) contiguous array to mark up (with some complicated algorithm whose details actually don't matter for this example). The array needs to be ~100 Giga-entries, so easily in the 1 TB range. I could pretend to allocate that and let the OS do an amazing amount of inefficient swapping to get pages in and out of RAM to support all the sequential writes through the array.

Instead of doing something that boneheaded, I have arranged to allocate a much smaller array that exactly fits in memory and then reuse that little array to iteratively cover the rest of the big array in slices. I've also stripped the OS, stripped the running set of services, replaced the shell, and customized two layers of memory allocators to do their darnedest to keep as much of the address space available to my process as close to contiguous as possible.
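The same slicing idea can be sketched in shell terms: instead of handing the whole dataset to one pass, walk it in fixed-size slices that each fit comfortably in RAM. Everything here is a placeholder; the slice size and the `wc -c` stand-in for the real per-slice computation are both illustrative:

```shell
# Process a large file in fixed-size slices rather than mapping it all at once.
# 'wc -c' stands in for the real per-slice work.
slice_file() {
  file=$1
  slice=$2                      # slice size in bytes; pick one that fits in RAM
  size=$(stat -c %s "$file")    # GNU stat; use 'stat -f %z' on BSD
  offset=0
  while [ "$offset" -lt "$size" ]; do
    dd if="$file" bs="$slice" skip=$((offset / slice)) count=1 2>/dev/null | wc -c
    offset=$((offset + slice))
  done
}
```

For a 10-byte file and a 4-byte slice, this visits three slices of 4, 4, and 2 bytes; no slice ever exceeds the budget you set, so the OS never has to guess your access pattern.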

SSD may be fast, but it is not nearly as fast as organizing your computation to do the same set of operations without ever stalling to swap.

Author: wrwt

Updated on September 18, 2022

Comments

  • wrwt
    wrwt over 1 year

    I have 64 GB RAM and a 240 GB SSD in my computer, which I'm going to use for memory-intensive calculations (machine learning, data mining, etc.). Most of the advice I found on the Internet is about computers with 2, 4, or 8 GB of RAM, and it recommends a swap partition of 2x the size of RAM (so 128 GB in my case).

    Is it reasonable to make a 128 GB swap partition? What advantages do I get by making a huge swap partition?

    Do I understand correctly that, in case I run out of physical RAM:

    1. If I have no swap, I get an 'out of memory' error
    2. If I do have swap, some of RAM pages will be copied to swap partition, and the program will continue to run (although more slowly).

    Some people say it's a bad idea to put swap on an SSD, since it has a limited number of write cycles. How quickly will using swap exhaust the SSD's write cycles (as far as I know, it's about 50,000 write IOPS)?

    I'm using Linux (Ubuntu 14.04 (Trusty Tahr)).

    Going to set a 16 GB swap for now, as it should be surely enough (for example, RedHat suggests 4 GB), and 16 GB of disk space isn't actually a big deal.

  • Dan Is Fiddling By Firelight
    Dan Is Fiddling By Firelight almost 10 years
    +1 for the last paragraph. The 2x recommendation dates back to when most computers didn't have enough RAM to avoid swapping in normal use. Subjectively, from using computers then, the 2x limit appears to have been selected as a number big enough that the computer would become unusably slow before running out of swap.
  • Jason C
    Jason C almost 10 years
    @wrwt Put your swap partition at the end of the drive (or at least after your data partition), it will make resizing it quicker and less write-intensive should you ever choose to adjust it (more specifically it will make resizing the data partition to accommodate it simpler, since you don't have to move the start). There is no link between position and performance on SSDs as there sometimes is on mechanical drives.
  • Aviator45003
    Aviator45003 almost 10 years
    +1 for not swapping to SSD, -1 for swapping to a component that has a very short life span when used like that.
  • Soren
    Soren almost 10 years
    +1 for actually referring to kernel configuration parameters -- The key is in the part of the question If I have no swap, I get an 'out of memory' error -- which is false -- the truth is that when you run out of swap space the out-of-memory killer will kick in and kill a random process to free up space -- so the amount of swap space needed depends on how your application is written.
  • Soren
    Soren almost 10 years
    While this answer probably suffices for most hobbyists, it is bad advice for real servers -- the answer depends on how the application is written, because running out of swap space will cause the out-of-memory killer to kick in and terminate a process at random (yes, you read that right: "random")
  • David Schwartz
    David Schwartz almost 10 years
    @Soren This is superuser, not serverfault. ;) It's certainly true that setting the swap space is not the only decision you need to make for "real servers". You also need to make decisions about things like overcommit, you may need to tune the OOM killer, and so on. (And the answers get much more complicated if you expect your working set to exceed physical RAM. But almost nobody operates machines that way anymore.)
  • wrwt
    wrwt almost 10 years
    @Soren It's likely that most of the RAM will be filled with actual data, so the out-of-memory killer won't make much difference. Thx for 'the truth' anyway.
  • amalloy
    amalloy almost 10 years
    @Kaz I think you're talking about something different than kaste is. kaste is saying that if you want to be able to suspend/hibernate your computer, turn it off, and pick up where you left off later, you need enough swap space to store all your RAM (else where would it go?).
  • Maciej Piechotka
    Maciej Piechotka almost 10 years
    @JasonC - other option is just put it in LVM and don't bother with such details as 'where to put partition to resize it later'.
  • David Yates
    David Yates almost 10 years
    @T.C. is right, ArmanX - if you're trying to avoid using flash (SSD), why would you use flash on USB? That's irrational.
  • Damon
    Damon almost 10 years
    @T.C.: Not using SSD for swap because of wearing down the medium is an unjustified urban legend. Swapping does not happen "all the time", but rarely. Also, this is something that has been extensively researched at Microsoft after the Win7 release with the result that the typical access patterns of swapping are quite acceptable for SSD (that's Windows, not Ubuntu, but it's likely that Linux does not perform much worse). You have a hundred (or thousand) times more write operations wearing down your SSD due to silly log files that nobody is ever reading (usually syncing every line).
  • David Schwartz
    David Schwartz almost 10 years
    @Ruslan When this answer was written, the OS hadn't been disclosed. In any event, the second point is still correct. Linux will overcommit but it will also refuse allocations it doesn't need to refuse. (Just a bit later, plus you also may find your existing processes getting killed. So it's actually, at least potentially, worse.)
  • Agent_L
    Agent_L almost 10 years
    The logic is flawed: if the thumbdrive is indeed as fast as SSD, why is it cheaper?
  • Palec
    Palec almost 10 years
    On my Debian, if I got it correctly, swap is used to store RAM contents when hibernating to HDD. I think this is a big reason to have swap at least the size of RAM. But in this scenario, hibernation is probably not a common use case.
  • NPSF3000
    NPSF3000 almost 10 years
    @ArmanX you're much more unlikely to wear out an SSD compared to a flash drive, and if you did, the odds are you'd have worn out 5-10 flash drives by the time you wore out the SSD.
  • Koray Tugay
    Koray Tugay almost 10 years
    Sorry my English is not good enough, what does this mean: You need swap to get information that will likely never be accessed out of RAM to free up more space for disk cache.
  • David Schwartz
    David Schwartz almost 10 years
    @KorayTugay When your computer first starts up, lots of services load. They allocate and dirty a bunch of memory as they read configuration, load libraries, and so on. Many of those services will never run again because they're things you aren't using. The operating system moves that junk to swap to free up precious physical memory to increase the size of the disk cache which helps reduce I/O and improve performance.