Can SSD RAID be faster than a RAM disk?


Solution 1

So a typical SSD will have a read speed of 250-500 MB/s, and RAM will be about 10x faster than that.

Just what kind of RAM are you referring to? Certainly not something that has been in common use in PCs recently, it would seem.

DDR3 SDRAM can trivially give you a transfer rate of around 10 GB/s (you need DDR3-1333 for that) and currently tops out at about 17 GB/s for DDR3-2133.

Let's say you stripe four SSDs capable of delivering 500 MB/s and the overall system is capable of handling that (no bus contention, the system is still I/O-bound, etc.). This gives you a theoretical maximum throughput of 2 GB/s. The 4xSSD loses out by nearly a factor of 10.
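A quick back-of-the-envelope check of the bandwidth arithmetic, using the DDR3 and SSD figures quoted above (these are the answer's assumed numbers, not measurements):

```python
# Bandwidth figures from the answer above (assumptions, not measurements).
ddr3_2133_gbs = 17.0   # DDR3-2133 peak transfer rate, GB/s
ssd_mbs = 500          # one fast SATA SSD, MB/s
drives = 4             # RAID-0 stripe, assuming ideal scaling and no contention

stripe_gbs = drives * ssd_mbs / 1000   # theoretical maximum of the stripe
print(stripe_gbs)                      # 2.0 GB/s
print(ddr3_2133_gbs / stripe_gbs)      # 8.5 -- "nearly a factor of 10"
```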

DDR3 SDRAM has a latency in the 10 ns region. A good SSD might give you 100k IOPS, tops, which translates to a latency of 10,000 ns. (For example, the Intel 530 specifies 41k IOPS for random 4k reads, which works out to a latency of nearly 25,000 ns.)

Stripe four fast SSDs and ignore all overhead, and you might get 400k IOPS, or 2,500 ns latency. The 4xSSD loses out by a factor of 250.
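The IOPS-to-latency conversion used above is just the reciprocal (average per-request service time at queue depth 1); a small sketch:

```python
def implied_latency_ns(iops):
    """Average per-request service time, in ns, implied by an IOPS figure."""
    return 1e9 / iops

print(implied_latency_ns(100_000))  # 10000.0 ns -- a 100k-IOPS SSD
print(implied_latency_ns(41_000))   # ~24390 ns  -- Intel 530, random 4k reads
print(implied_latency_ns(400_000))  # 2500.0 ns  -- ideal four-drive stripe

dram_ns = 10                        # DDR3 SDRAM latency, roughly
print(implied_latency_ns(400_000) / dram_ns)  # 250.0 -- the factor above
```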

The data from the SSD has to go to somewhere, and that "somewhere" will be RAM. The CPU can grab it from there, but it does not talk directly to the SSD any more than it talks directly to a spinning-platter hard disk drive.

If we assume that you don't suffer from bus contention on the SSD side, it only makes sense to assume the same for the RAM. The conclusion: by either of these metrics, an SSD is horrendously slow compared to DDR3 SDRAM.

RAM has other drawbacks. Even compared to SSDs it is very expensive per gigabyte, and it needs constant power to retain its contents. Also, a RAM disk does not function quite the same as RAM: it is a software construct in the operating system. You still get most of the performance benefit of RAM, but the memory you dedicate to the RAM disk is no longer available to the rest of the system, which may force it to swap more often (a death sentence for performance), and it likely won't give quite the same performance as raw RAM would.

Solution 2

The real reason the answer is "no" is not just the raw speed difference between the two: the OS will eventually cache the data read from the SSD, and it caches it in your RAM. Thus a block read from the SSD implies a block write to RAM as well. Always.

This is why RAM disks remain faster even in very unusual hardware combinations (e.g. a very fast SSD paired with very slow RAM).
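This caching effect is easy to observe from user space. A rough sketch (timings vary; note that the first read here may already be warm because we just wrote the file, so a real benchmark would drop the page cache between runs, which on Linux needs root):

```python
import os
import tempfile
import time

# Write ~64 MiB of data, then read it back twice. The second read is
# served from the OS page cache in RAM; the first one may be too, since
# we only just wrote the file -- a proper benchmark drops caches first.
path = os.path.join(tempfile.mkdtemp(), "blob")
with open(path, "wb") as f:
    f.write(os.urandom(64 * 1024 * 1024))

def timed_read(p):
    """Read the whole file, returning (elapsed seconds, byte count)."""
    start = time.perf_counter()
    with open(p, "rb") as f:
        data = f.read()
    return time.perf_counter() - start, len(data)

first, n1 = timed_read(path)
second, n2 = timed_read(path)
print(n1 == n2)          # True: identical data either way
os.remove(path)
```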

Solution 3

To answer your actual question: No. RAM has an order of magnitude more bandwidth and an order of magnitude less latency. It's not even close.

To answer the question you're not asking: What you're planning to do is a bad idea, unless you have a very specific use case that requires that kind of bandwidth. If you really do need that kind of speed, getting a PCIe-based SSD (like an ioDrive or an Intel 910) is going to be much faster than a bag of SATA SSDs in RAID-0.

Any current SSD is fast enough that, for consumer and enthusiast workloads, you're going to be the bottleneck.

Solution 4

RAM will always be faster than whatever peripheral bus-system (like SATA) can deliver.

But when you are dealing with a RAM-disk you are not dealing purely with RAM alone. There is also a layer of software (the file system and device drivers) that "converts" the raw RAM into something the OS will see as disk storage.

How fast a RAM-disk really is depends entirely on the quality of that software.
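On Linux, for instance, that software layer is typically tmpfs, and a RAM-disk can be declared in a single configuration line (the mount point and size here are placeholders):

```
# /etc/fstab -- mount a 2 GiB tmpfs RAM-disk at boot (sketch)
tmpfs  /mnt/ramdisk  tmpfs  size=2G,mode=1777  0  0
```

Note that tmpfs is backed by the page cache and can even be swapped out under memory pressure, which is one reason a RAM-disk does not always behave like raw RAM.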

Having said that: Normally it should still be several times faster than the fastest storage solution you can attach to the motherboard.

There may be some corner-cases where specific usage-patterns will make the difference smaller or almost zero, but without more detail on what you are going to do with such a system it is impossible to say if that will apply to your situation.

P.S. Bear in mind that a RAM-disk is volatile. After booting the machine you will have to load it with data, and on shutdown you will have to save its contents (if you need them for the next run). If the system crashes, you lose the RAM-disk without being able to save anything.
This is something you will have to take into account, especially if you expect frequent reboots. Saving and reloading the RAM-disk content may be less than trivial.
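A minimal sketch of that save/reload step in Python (the paths and layout are hypothetical; a real setup would hook these into systemd units or init scripts):

```python
import os
import shutil

def save_ramdisk(ramdisk, backing):
    """Run before shutdown: copy the RAM-disk contents out to persistent
    storage. Anything not copied is gone the moment power drops."""
    shutil.copytree(ramdisk, backing, dirs_exist_ok=True)

def load_ramdisk(backing, ramdisk):
    """Run after boot: seed the freshly mounted (and empty) RAM-disk
    from the last saved copy, if one exists."""
    if os.path.isdir(backing):
        shutil.copytree(backing, ramdisk, dirs_exist_ok=True)
```

(`dirs_exist_ok` requires Python 3.8+; a crash between saves still loses everything written since the last `save_ramdisk` call.)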



Author: Koray Tugay

Updated on September 18, 2022

Comments

  • Koray Tugay
    Koray Tugay over 1 year

    So a typical SSD will have a read speed of 250-500 MB/s, and RAM will be about 10x faster than that.

    My question is: Can 4 SSDs in RAID-0 be faster than a single block of RAM for some reason?

    I am either going to go with lots of RAM and a ramdisk, or 4 SSDs in RAID-0. Which is faster?

  • Koray Tugay
    Koray Tugay over 10 years
    ramdisk it is then..
  • user
    user over 10 years
    Actually, DDR3 latency is not an order of magnitude lower than that of a SSD. It's three to four orders of magnitude lower. See my answer for some actual numbers. (The bandwidth difference is definitely in the ballpark, however.)
  • user
    user over 10 years
    @KorayTugay If you need the absolute most in performance, you should go with a RAM disk, but keep in mind that a RAM disk does not function quite the same as RAM and has its own set of drawbacks, not the least of which being price. (Also, while I appreciate the accept, I encourage you to wait a day or so before accepting an answer in case someone else provides an even better answer.)
  • Koray Tugay
    Koray Tugay over 10 years
    I will keep my database in ram.
  • afrazier
    afrazier over 10 years
    @KorayTugay: Generally speaking, the answer to "Do I need a RAMDisk?" is almost always "No. If you really needed it, you'd know enough that you wouldn't have to ask." What are you doing that makes you think you'll need it?
  • Koray Tugay
    Koray Tugay over 10 years
    @afrazier Development/testing with a huge database; I access it many times and a lot of data is fetched from it. I will keep the db in RAM. I do this for 8-10 hours a day, and waiting 4-5 seconds starts to seem too long after a while. I need to hit the db faster.
  • afrazier
    afrazier over 10 years
    Any competent DBMS will keep as much of the DB in RAM as possible. If your DB is tiny enough to fit in a RAMdisk, it's tiny enough to just fit in RAM. Properly tuning your DBMS with an SSD will give you the performance you're looking for -- for local development, you can even take advantage of things like write back caching and worrying less about ACID. A RAMdisk is fighting the system.
  • ChrisInEdmonton
    ChrisInEdmonton over 10 years
    This is actually the best answer.
  • peterh
    peterh over 10 years
    Latency isn't important, because the IO happens in blocks. At least in 4K blocks. Bandwidth is important.
  • afrazier
    afrazier over 10 years
    @PeterHorvath: Latency is still a big deal with SSDs. Look at all the research and benchmarks into consistency and the stuttering problems that early SSDs had. And as Michael Kjörling mentioned, RAM latency is several orders of magnitude lower than SSD. Latency on Lx caches is significantly lower still, and CPU register latency is even lower yet. And it all matters for performance.
  • user
    user almost 10 years
    sudo zfs set primarycache=none tank There goes your RAM-based cache. (Assuming of course you're using ZFS.)
  • Curt
    Curt over 8 years
    I pretty much agree with the above, with the exception of transaction logs, which MUST be written to disk. For the highest-performance transactional systems, I often specify that logs are maintained on flash-backed RAM drives.
  • peterh
    peterh over 7 years
    @MichaelKjörling That's right, and even on non-ZFS setups there are ways to turn off the read cache. But even then, a `read()` call is still a copy from the SSD into RAM. You can't avoid this, unless you could somehow read the data directly into the CPU cache (which would also mean bypassing DMA). AMD CPUs (and maybe Intel's) have an integrated memory controller; I think a slightly modified version could perhaps do that, but even so it would require modifying the CPU.