How important is the 1GB RAM per 1TB disk space rule for ZFS?


The only reason you would need that ratio of RAM to storage space is if you decide to use data deduplication. The 1 GB per 1 TB figure is a recommendation, not a requirement.

According to the Wikipedia article on ZFS:

Effective use of deduplication may require large RAM capacity; recommendations range between 1 and 5 GB of RAM for every TB of storage. Insufficient physical memory or lack of ZFS cache can result in virtual memory thrashing when using deduplication, which can either lower performance or result in complete memory starvation. Solid-state drives (SSDs) can be used to cache deduplication tables, thereby speeding up deduplication performance.




Author: Sebastian_H

Updated on September 18, 2022

Comments

  • Sebastian_H
    Sebastian_H almost 2 years

    I'm planning on building my first NAS box, and currently I'm considering FreeNAS and ZFS for it. I read up on ZFS, and its feature set sounds interesting, although I will probably use only a fraction of it.

    Most guides say that the recommended rule of thumb is 1 GB of (ECC) RAM for every TB of disk space in your pool. So my question is: what is the actual (expected) impact of ignoring this rule?

    Here is a setup of someone who built a 71 TiB NAS with ZFS and 16 GB of RAM. According to him, it runs like a charm. He uses Linux, however (in case that makes a difference).

    So apparently you don't actually need 96 or even 64 gigs of RAM to run such a large pool. But the rule must be there for a reason. So what happens if you do not have the recommended amount of RAM? Is it just a bit slower, or do you run the risk of losing data or of accessing your data only at a snail's pace?


    I realize that this has also a lot to do with the features that will be used, so here are the parameters I'm considering:

    • It's a home system
    • 16GB ECC RAM (the maximum supported by the setup I have in mind)
    • No deduplication, no ZIL, no L2ARC
    • Probably with compression enabled
    • Will store mostly media files of various sizes
    • Will probably run BitTorrent or similar services (frequent smaller reads/writes)
    • 4 disks, probably 5 TB each
    • Actual pool setup will probably be part of another question, but I am thinking no RAIDZ (although I would be interested to know whether it actually makes a difference in this context); probably two pools with two disks each (for 10 TB net storage; see the capacity sketch after this list), one acting as backup
    • Daniel B
      Daniel B over 8 years
      Of course you don't need that much memory, unless you're using dedup. That'll seriously bite you in the butt. Of course, performance might not be optimal.
    • Ramhound
      Ramhound over 8 years
      It's a recommendation. There are very few hardware configurations that would even support 96 GB of memory. In most cases that requires a multi-processor configuration to achieve memory density that large. Even if it were required, your system, by your own specifications, does not support 20 GB of memory. The current 6th-generation Intel processors only support 64 GB of DDR4. I realize there are systems with several TBs' worth of memory, but we are talking about consumer hardware, not huge servers.
    • Ramhound
      Ramhound over 8 years
      Before somebody says I am wrong: keep the context of this question and the scope of Super User in mind.
    • Sebastian_H
      Sebastian_H over 8 years
      @Ramhound The 16 GB limit is the reason I asked the question. I haven't bought it yet so switching to a machine that can support 32 GB would be possible but make the entire thing more expensive. If it would just be about another stick of memory I wouldn't mind. But I don't want to invest the extra money unless it's absolutely necessary.
    • Sebastian_H
      Sebastian_H over 8 years
      @DanielB "performance might not be optimal" - that is the part I'm interested in for an answer. I realize that insufficient memory may cause the system to lose performance. But what order of magnitude are we talking about? Are we talking about a "can't always saturate a 1-gigabit Ethernet connection" or a "a 64k modem is faster than your system" level of performance loss?
    • Daniel B
      Daniel B over 8 years
      Considering how a single disk can more or less max out a 1 Gbps connection... ;) (a rough comparison is sketched after this thread) I can only relay my experience: a 6×3 TB RAIDZ2 runs fine with 8 GiB of RAM, even when other programs are running.
    • code_dredd
      code_dredd almost 5 years
      This article might be a good read on considerations for ZFS Deduplication and how to calculate some things. Also this one.
  • Sebastian_H
    Sebastian_H over 8 years
    Interesting. I actually just saw that the older FreeNAS documentation for version 9.2.2 contains this sentence: "If you plan to use your server for home use, you can often soften the rule of thumb of 1 GB of RAM for every 1 TB of storage[...]. The sweet spot for most users in home/small business is 16GB of RAM." There is no such sentence in the documentation of the current version, however. Together with the replies in the FreeNAS forum about memory, it often sounds like an ironclad rule that must be followed, or else your NAS slows down to 64K or something.
  • qasdfdsaq
    qasdfdsaq over 8 years
    @Sebastian_H: It's a load of rubbish, basically. A rule of thumb is a rule of thumb: approximate at best, totally irrelevant at worst. Anything over 4 GB is fine, even for a 100 TB array.