Difference between 'sync' and 'async' mount options

Solution 1

async is the opposite of sync, which is rarely used. async is the default; you don't need to specify it explicitly.

The sync option means that all changes to the filesystem in question are flushed to disk immediately; the calling process waits for each write operation to complete. For mechanical drives this means a huge slowdown, since the system has to move the disk heads to the right position; with sync, the userland process has to wait for the operation to complete. In contrast, with async the system buffers the write operations and optimizes the actual writes; meanwhile, instead of being blocked, the userland process continues to run. (If something goes wrong, close() returns -1 with errno = EIO.)
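
For illustration, a minimal sketch of switching a mount between the two behaviours (the device and mountpoint here are hypothetical):

    # mount with the default buffered behaviour, then switch to synchronous writes
    mount -o async /dev/sdb1 /mnt/data
    mount -o remount,sync /mnt/data
    grep ' /mnt/data ' /proc/mounts    # the active options should now include "sync"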

SSD: I don't know exactly how fast SSD memory is compared to RAM, but it is certainly not faster, so sync is likely to impose a performance penalty, although not as bad as with mechanical disk drives. As for lifetime, the wisdom is still valid: writing a lot to an SSD "wears" it out. The worst scenario would be a process that makes a lot of changes to the same place; with sync each of them hits the SSD, while with async (the default) the SSD won't see most of them thanks to kernel buffering.
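
A rough way to feel the penalty is to compare per-write flushing against buffered writes with dd (the target path is hypothetical, and the numbers will vary wildly by hardware):

    # force every 4 KiB block to stable storage before the next write...
    time dd if=/dev/zero of=/mnt/data/testfile bs=4k count=1000 oflag=sync
    # ...versus letting the kernel buffer and coalesce the writes
    time dd if=/dev/zero of=/mnt/data/testfile bs=4k count=1000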

At the end of the day, don't bother with sync; most likely you're fine with async.

Solution 2

Words of caution: using the async mount option might not be the best idea if you have a mount that is constantly being written to (e.g. valuable logs, security camera recordings) and you are not protected from sudden power outages. It might result in missing records or incomplete (useless) data. A not-so-smart example: imagine a thief getting into a store and immediately cutting the camera's power cable. The video of the break-in was recorded, but it might not have been flushed/synced to the disk; it (or parts of it) might have been buffered in memory instead, and thus was lost when the camera lost power.
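
If you have such a mount, one option is to make just that mount synchronous rather than the whole system; a sketch of an /etc/fstab entry (the device and mountpoint are hypothetical):

    # flush camera recordings to disk on every write
    /dev/sdc1  /var/recordings  ext4  sync  0  2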

Solution 3

For what it's worth, as of 2022 and RHEL 7.9:

Servers using self-encrypting SSDs, or a few with Dell BOSS M.2 for the Linux operating system, going over 100 Gb/s HDR InfiniBand: by default NFS connects as sync under version 4.1 and proto=tcp. I cannot get NFS v4.2 to work even though cat /proc/fs/nfsd/versions shows +4.2, but I don't know how much better NFS 4.2 might be over 4.1.

I tried /etc/exports with /scratch *(rw), which implies sync, and also with /scratch *(rw,async), and saw no difference in an rsync --progress <source> <dest> for a single NFS copy of a 5 GB tar file, which averaged 460 MB/s (max burst of 480). A local copy of the same file to another folder on the same server (not over the network) averaged 435 MB/s. For reference, I always get a solid 112 MB/s with scp over traditional 1 Gb/s copper.
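
For anyone reproducing the test: toggling sync/async on the server side does not require restarting the NFS server. A minimal sketch, using the /scratch export from the configs below:

    # on the NFS server, after editing the options in /etc/exports:
    exportfs -ra    # re-read /etc/exports and re-export everything
    exportfs -v     # verify whether sync or async is now active for the export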

/etc/exports   on rhel-7.9 nfs-server

    /scratch *(rw,no_root_squash)

exportfs -v  on rhel-7.9 nfs-server

    /scratch         <world>(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)

mount    on rhel 7.9 nfs-client

    server:/scratch on /scratch type nfs4 (rw,nosuid,noexec,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.2,local_lock=none,addr=192.168.1.1,_netdev)

/etc/fstab      on rhel 7.9 nfs-client

    192.168.1.1:/scratch   /scratch   nfs4   _netdev,defaults,nosuid,noexec 0 0

See also: https://www.admin-magazine.com/HPC/Articles/Useful-NFS-Options-for-Tuning-and-Management (no date on the article; it makes no mention of NFS v3 vs v4). The relevant passage:

Most people use the synchronous option on the NFS server. For synchronous writes, the server replies to NFS clients only when the data has been written to stable storage. Many people prefer this option because they have little chance of losing data if the NFS server goes down or network connectivity is lost.

Asynchronous mode allows the server to reply to the NFS client as soon as it has processed the I/O request and sent it to the local filesystem; that is, it does not wait for the data to be written to stable storage before responding to the NFS client. This can save time for I/O requests and improve performance. However, if the NFS server crashes before the I/O request gets to disk, you could lose data.

Synchronous or asynchronous mode can be set when the filesystem is mounted on the clients by simply putting sync or async on the mount command line or in the file /etc/fstab for the NFS filesystem. If you want to change the option, you first have to unmount the NFS filesystem, change the option, then remount the filesystem.

If you are choosing to use asynchronous NFS mode, you will need more memory to take advantage of async, because the NFS server will first store the I/O request in memory, respond to the NFS client, and then retire the I/O by having the filesystem write it to stable storage. Therefore, you need as much memory as possible to get the best performance.

The choice between the two modes of operation is up to you. If you have a copy of the data somewhere, you can perhaps run asynchronously for better performance. If you don't have copies or the data cannot be easily or quickly reproduced, then perhaps synchronous mode is the better option. No one can make this determination but you.
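
A minimal sketch of the client-side change the article describes, using the export from this thread:

    # on the NFS client: unmount, then remount with the new option
    umount /scratch
    mount -t nfs4 -o sync,nosuid,noexec 192.168.1.1:/scratch /scratch
    # or equivalently in /etc/fstab:
    #   192.168.1.1:/scratch  /scratch  nfs4  _netdev,defaults,nosuid,noexec,sync  0 0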

Comments

  • Admin
    Admin over 1 year

    What is the difference between the sync and async mount options from the end-user point of view? Does a filesystem mounted with one of these options work faster than with the other? Which option is the default if neither is set?

    man mount says that the sync option may reduce the lifetime of flash memory, but that may be obsolete conventional wisdom. Anyway, this concerns me a bit, because my primary hard drive, where the / and /home partitions are placed, is an SSD.

    The Ubuntu installer (14.04) specified neither sync nor async for the / partition, but set async for /home via the defaults option. Here is my /etc/fstab; I added some additional lines (see the comment), but did not change anything in the lines written by the installer:

    # / was on /dev/sda2 during installation
    UUID=7e4f7654-3143-4fe7-8ced-445b0dc5b742 /     ext4  errors=remount-ro 0  1
    # /home was on /dev/sda3 during installation
    UUID=d29541fc-adfa-4637-936e-b5b9dbb0ba67 /home ext4  defaults          0  2
    # swap was on /dev/sda4 during installation
    UUID=f9b53b49-94bc-4d8c-918d-809c9cefe79f none  swap  sw                0  0
    
    # here goes part written by me:
    
    # /mnt/storage
    UUID=4e04381d-8d01-4282-a56f-358ea299326e /mnt/storage ext4 defaults  0  2
    # Windows C: /dev/sda1
    UUID=2EF64975F6493DF9   /mnt/win_c    ntfs    auto,umask=0222,ro      0  0
    # Windows D: /dev/sdb1
    UUID=50C40C08C40BEED2   /mnt/win_d    ntfs    auto,umask=0222,ro      0  0
    

    So if my /dev/sda is an SSD, should I - for the sake of reducing wear - add the async option to the / and /home filesystems? Should I set sync or async for the additional partitions I defined in my /etc/fstab? What is the recommended approach for SSD and HDD drives?

  • HellishHeat
    HellishHeat almost 9 years
    In the case that a local application is deleting and writing to the mounted drive (pointing to an external Windows box), is there potential that the default async mode is unsafe? The scenario is a polling app, looking in one folder on the mount, processing the subfolders, then deleting them.
  • countermode
    countermode over 6 years
    @HellishHeat You should ask this as a separate question with sufficient details of the scenario you have in mind.
  • Brian Bulkowski
    Brian Bulkowski about 6 years
    What is the speed of the different storage layers? RAM is nanoseconds; flash is microseconds (tens for writes, about 100 for reads); rotational disk is milliseconds (5 ms best case, 10 to 100 ms if the disk queue is backed up and the accesses are random). Writes to a single location on a flash device may go to a capacitor-backed SRAM and not be written all the way to NAND. Thus it is hard to determine either the wear or the speed impact.
  • tonioc
    tonioc almost 6 years
    Modern servers have battery-backed disk caches in their RAID controllers, which prevent data loss even in case of a power failure.
  • CMCDragonkai
    CMCDragonkai over 5 years
    Does this mean one doesn't need to call the sync, fsync, or fdatasync syscalls on a sync-mounted fs?
  • Ini
    Ini over 5 years
    What are the differences in an unwanted shutdown/restart scenario?
  • Ini
    Ini over 5 years
    So async may not write anything for many seconds? How many seconds, approximately?
  • countermode
    countermode over 5 years
    @ini You may risk a loss of data with async. Yet, if this is an issue, then sync is not the answer - the performance penalty of sync is simply prohibitive.
  • Ini
    Ini over 5 years
    OK, but you will only lose data from the last - how many seconds at maximum?
  • countermode
    countermode over 5 years
    @ini there is no generic answer. It depends on the particular file system and the tuning parameters. I can only repeat: if data integrity is such an issue, then sync is not the answer.
  • bjd2385
    bjd2385 over 5 years
    @Ini It seems to depend on the filesystem being used, I believe.
  • Ini
    Ini over 5 years
    The OS should in any case ensure that everything gets written to the SSD/HDD when you shut down. In the case of a power outage, though, you might lose some data. Is what I'm saying correct?
  • Cray
    Cray about 5 years
    Battery-backed cache in some disks is really not a reason not to optimize for power loss: 1) it exists only in expensive professional servers, which not all users will have; 2) it will only save you when the data has actually reached the disk controller at all. In many cases the data will be stuck in the OS cache, long before the controller ever sees it - and that will be lost in the event of a power failure.
  • Kolay.Ne
    Kolay.Ne almost 3 years
    @Ini, I'm not 100% sure, so I'd appreciate confirmation or refutation, but as far as I know there are no time limits in the Linux caching/buffering algorithms, only memory limits. The OS does not care how long data has been buffered; it cares how big the buffer is. So the flush to the filesystem happens when the buffer gets filled with changes (or when a partition is unmounted, the machine is shut down or hibernated, etc.).
  • user30747
    user30747 almost 3 years
    At least for NFS, this answer appears to be outdated. man exports says about async: "This option allows the NFS server to violate the NFS protocol and reply to requests before any changes made by that request have been committed to stable storage (e.g. disc drive)." It also says: "In releases of nfs-utils up to and including 1.0.0, the async option was the default. In all releases after 1.0.0, sync is the default, and async must be explicitly requested if needed."