EXT4-fs (nvme1n1p1): mounted filesystem with ordered data mode. Opts: (null)


TL;DR: A simple systemctl daemon-reload followed by mount -a should fix this.
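
Concretely (device and mount point taken from your question):

$ sudo systemctl daemon-reload
$ sudo mount -a

If you would rather not remount everything from fstab, an explicit mount of the new disk should also work once the stale unit has been regenerated:

$ sudo mount /dev/nvme1n1p1 /mnt/ssd-high-NVMe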

There should be a systemd mount unit called mnt-ssd\x2dhigh\x2dNVMe.mount (each \x2d is an escaped - from your path), which you can check with:

# systemctl status mnt-ssd\x2dhigh\x2dNVMe.mount
● mnt-ssd\x2dhigh\x2dNVMe.mount - /mnt/ssd-high-NVMe
   Loaded: loaded (/etc/fstab; generated)
   Active: inactive (dead) since Thu 2021-08-12 09:00:04 CEST; 49s ago
    Where: /mnt/ssd-high-NVMe
     What: /dev/disk/by-uuid/UUID_OF_OLD_DISK
     Docs: man:fstab(5)
           man:systemd-fstab-generator(8)
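
If you are unsure how a path escapes into a unit name, systemd-escape can compute it for you:

$ systemd-escape -p --suffix=mount /mnt/ssd-high-NVMe
mnt-ssd\x2dhigh\x2dNVMe.mount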

The important part here is the What line, which will probably show the UUID of the old disk.
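
To confirm the mismatch, compare that UUID against the disk that is currently attached, e.g. with blkid:

$ sudo blkid /dev/nvme1n1p1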

I assume that no reboot took place between unmounting the old NVMe disk and mounting the new one, as the generated mount units would have been recreated on reboot.

The problem is that the systemd mount unit - for reasons unknown to me - seems to force the use of the device defined in the unit, even when mount is called with an explicit device and mount path.
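
To see just the device a unit is pinned to, systemctl show works as well (unit name as above):

$ systemctl show -p What -p Where 'mnt-ssd\x2dhigh\x2dNVMe.mount'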


In my case I was able to mount the new disk at first; only after I hot-detached the old disk from the VM could I no longer mount any other disk there. Systemd even automatically unmounted the new disk from the mount point when I detached the old disk.

I assume it is a compatibility issue with manual (u)mount. Systemd probably sees the old disk being removed - a disk that is still referenced in the mount unit - marks the mount point as failed (or at least as inactive), and does some cleanup that makes sure nothing is left mounted on that path. Why it then becomes impossible to mount another disk there is unclear to me.
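
If you want to verify that this is what happened on your machine, the unit's journal for the current boot should show those state changes around the time the old disk was detached:

$ journalctl -b -u 'mnt-ssd\x2dhigh\x2dNVMe.mount'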


Comments

  • Tim He, over 1 year ago:

    I just want to mount my NVMe SSD at /mnt/ssd-high-NVMe:

    $ sudo rm -rf /mnt/ssd-high-NVMe
    $ sudo rm -rf /mnt/ssd-high-NVME
    
    $ sudo mkdir /mnt/ssd-high-NVMe
    $ sudo mkdir /mnt/ssd-high-NVME
    
    $ ls -lh
      drwxr-xr-x 2 root root 4.0K  Jan 20 22:58 ssd-high-NVMe
      drwxr-xr-x 2 root root 4.0K  Jan 20 22:42 ssd-high-NVME
    
    $ sudo mount /dev/nvme1n1p1 /mnt/ssd-high-NVMe
    
    $ df -h
      tmpfs           6.3G  2.3M  6.3G    1% /run
      /dev/sdb3       110G   45G   59G   44% /
      tmpfs            32G   95M   32G    1% /dev/shm
      tmpfs           5.0M  4.0K  5.0M    1% /run/lock
      tmpfs           4.0M     0  4.0M    0% /sys/fs/cgroup
      /dev/sdb2       512M  7.8M  505M    2% /boot/efi
      tmpfs           6.3G  180K  6.3G    1% /run/user/1000

    The mount command returns without error, but note that /dev/nvme1n1p1 does not appear anywhere in this output.
    
    $ sudo dmesg
       [43391.301050] EXT4-fs (nvme1n1p1): mounted filesystem with ordered data mode. Opts: (null)
    
    $ sudo e2fsck /dev/nvme1n1p1
       e2fsck 1.45.6 (20-Mar-2020)
       NVMe-SSD:No problem,22967/30531584 files,34978829/122096384 blocks
    
    $ sudo nvme smart-log /dev/nvme1n1p1 
      Smart Log for NVME device:nvme1n1p1 namespace-id:ffffffff
      critical_warning          : 0
      temperature               : 35 C
      available_spare               : 100%
      available_spare_threshold     : 10%
      percentage_used               : 0%
      endurance group critical warning summary: 0
      data_units_read               : 1,665,126
      data_units_written            : 2,815,185
      host_read_commands            : 53,190,654
      host_write_commands           : 83,501,433
      controller_busy_time          : 368
      power_cycles              : 27
      power_on_hours                : 25
      unsafe_shutdowns          : 11
      media_errors              : 0
      num_err_log_entries           : 0
      Warning Temperature Time      : 0
      Critical Composite Temperature Time   : 0
      Temperature Sensor 1           : 35 C
      Temperature Sensor 2           : 40 C
      Thermal Management T1 Trans Count : 0
      Thermal Management T2 Trans Count : 0
      Thermal Management T1 Total Time  : 0
      Thermal Management T2 Total Time  : 0
    
    $ sudo vim /etc/fstab
      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point>   <type>  <options>       <dump>  <pass>
      # / was on /dev/sda3 during installation
      UUID=49b55adc-d909-470d-8a6b-87401c8ae63d /               ext4    errors=remount-ro 0       1
      # /boot/efi was on /dev/sda2 during installation
      UUID=5624-9AA0  /boot/efi       vfat    umask=0077      0       1
      /swapfile                                 none            swap    sw              0       0
    
      /dev/disk/by-uuid/6a4437ab-8812-484d-b799-4fd007593db4 /mnt/ssd-high-NVME auto rw,nosuid,nodev,relatime,uhelper=udisks2,x-gvfs-show 0 0
    

    HOWEVER, when I change the mount point to another directory (from 'ssd-high-NVMe' to 'ssd-high-NVME'), everything is OK.

    $ sudo mount /dev/nvme1n1p1 /mnt/ssd-high-NVME
    
    $ df -h
      tmpfs           6.3G  2.3M  6.3G    1% /run
      /dev/sdb3       110G   45G   59G   44% /
      tmpfs            32G   95M   32G    1% /dev/shm
      tmpfs           5.0M  4.0K  5.0M    1% /run/lock
      tmpfs           4.0M     0  4.0M    0% /sys/fs/cgroup
      /dev/sdb2       512M  7.8M  505M    2% /boot/efi
      tmpfs           6.3G  180K  6.3G    1% /run/user/1000
      /dev/nvme1n1p1  458G  126G  312G   29% /mnt/ssd-high-NVME  <------ SUCCESS!
    

    One thing may matter: I had used /mnt/ssd-high-NVMe as the mount point for /dev/nvme1n1p1 before, and at some point I did something bad to the raw /dev/nvme1n1p1 that corrupted it while it was still mounted. After that, I completely reformatted /dev/nvme1n1p1 (I am sure the disk itself is healthy). I think my problem is related to this. But how do I fix it? What further information should I provide?

    Thanks!

    Additional information

    $ sudo gdisk -l /dev/nvme1n1
    GPT fdisk (gdisk) version 1.0.5
    
    Partition table scan:
      MBR: MBR only
      BSD: not present
      APM: not present
      GPT: not present
    
    
    ***************************************************************
    Found invalid GPT and valid MBR; converting MBR to GPT format
    in memory. 
    ***************************************************************
    
    Disk /dev/nvme1n1: 976773168 sectors, 465.8 GiB
    Model: Samsung SSD 980 PRO 500GB               
    Sector size (logical/physical): 512/512 bytes
    Disk identifier (GUID): 54BF3843-FF55-41C5-8FD5-25BF87B4DEEA
    Partition table holds up to 128 entries
    Main partition table begins at sector 2 and ends at sector 33
    First usable sector is 34, last usable sector is 976773134
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 2029 sectors (1014.5 KiB)
    
    Number  Start (sector)    End (sector)  Size       Code  Name
       1            2048       976773119   465.8 GiB   8300  Linux filesystem