NFS Mount fails on startup
Solution 1
Here's what I did as a workaround, in case anyone else runs into this problem and comes looking for a solution:
Create a script (mountall.sh) in /etc/init.d/:
#!/bin/bash
# Mount the NFS shares after boot. The fstab entries fail because the
# network isn't up yet when mountall runs, so we do it from an init script.
mount -r NFSSERVER-priv:/vol/vol1_isp/eshowcase/sites /var/www
mount NFSSERVER-priv:/vol/vol1_isp/vusers /var/users
Make the system aware of the new script:
update-rc.d mountall.sh defaults
The option “defaults” puts a link to start mountall.sh in run levels 2, 3, 4 and 5 (and a link to stop it in run levels 0, 1 and 6).
Make the file executable:
chmod +x mountall.sh
Now when you reboot (init 6) you should have your mount points. It's also a good idea to add a comment in your fstab so people know where everything is actually being mounted from, since that's the first place they'll look.
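A sketch of the breadcrumb comment I mean. Here it's written to a scratch file (fstab.demo) standing in for /etc/fstab; don't blindly append to the real file.

```shell
# Leave a note in fstab pointing at the init script that really does the
# mounting, so the next admin isn't confused by the missing NFS entries.
# "fstab.demo" is a stand-in for /etc/fstab.
cat > fstab.demo <<'EOF'
# NOTE: /var/www and /var/users are NFS mounts from NFSSERVER-priv.
# They are mounted at boot by /etc/init.d/mountall.sh, not by entries here.
EOF
grep -q 'mountall.sh' fstab.demo && echo "comment in place"
```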
Solution 2
Not sure if this is applicable to you, but the problem I was having was that the directory I was trying to mount onto was not available at boot. Mounting to /mnt instead worked.
Solution 3
I was having the same issue after upgrading Ubuntu 14.04 to 14.10. Here is what solved the problem for me:
Edit /etc/default/nfs-common
and make sure it says:
NEED_STATD=yes
After restarting, my NFS mounts worked.
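A sketch of the check this solution depends on, run against a scratch copy (nfs-common.demo) standing in for /etc/default/nfs-common:

```shell
# NEED_STATD=yes tells the nfs-common init scripts to start rpc.statd,
# which mount.nfs requires for remote locking (see the boot.log errors).
# "nfs-common.demo" is a stand-in for /etc/default/nfs-common.
printf 'NEED_STATD=yes\n' > nfs-common.demo
if grep -q '^NEED_STATD=yes' nfs-common.demo; then
    echo "rpc.statd will be started at boot"
fi
```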
Scott Rowley
SAS Administrator & Freelance Web / iOS Developer.
Updated on September 18, 2022

Comments
-
Scott Rowley over 1 year
I have multiple Ubuntu servers. Recently I've installed a few 11.04 servers (and 1 desktop), and I've just found that upon rebooting the NFS mounts will not mount.

I've tried upgrading nfs-common to the latest version (I'm only one small revision behind), but that just slightly changes my errors. All of the servers having the issue are clones (VMware) from a server template I made a while back, so I thought maybe it was an issue with the template and therefore all of its clones. I then tried the same mount on the 11.04 desktop, but I had the same issues. About half the time I am able to press "S" to skip, but the other half of the time the server freezes (and I restore from a recent snapshot).

What's odd is that if I am able to get into the system, I can run "mount -a" just fine and it will mount everything. This makes me think NFS isn't waiting for a network to be present before trying to mount. Something else that suggests this is that I get an "unable to resolve host" error (for an NFS host), even though that host is in /etc/hosts.
Here is my /var/log/boot.log
fsck from util-linux-ng 2.17.2
fsck from util-linux-ng 2.17.2
/dev/sda1 was not cleanly unmounted, check forced.
/dev/mapper/php53x-root: clean, 75641/1032192 files, 492673/4126720 blocks (check in 5 mounts)
init: portmap-wait (statd) main process (373) killed by TERM signal
init: statd main process (402) terminated with status 1
init: statd main process ended, respawning
init: statd-mounting main process (355) killed by TERM signal
mount.nfs: Failed to resolve server NFSSERVER-priv: Name or service not known
init: statd-mounting main process (416) killed by TERM signal
mount.nfs: Failed to resolve server NFSSERVER-priv: Name or service not known
init: statd main process (435) terminated with status 1
init: statd main process ended, respawning
init: statd main process (459) terminated with status 1
init: statd main process ended, respawning
mountall: mount /var/www [410] terminated with status 32
mountall: mount /var/users [436] terminated with status 32
init: statd-mounting main process (448) killed by TERM signal
init: statd main process (468) terminated with status 1
init: statd main process ended, respawning
init: statd main process (498) terminated with status 1
init: statd main process ended, respawning
/dev/sda1: 226/124496 files (1.3% non-contiguous), 39133/248832 blocks
mountall: fsck /boot [268] terminated with status 1
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
mountall: mount /var/users [583] terminated with status 32
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
mountall: mount /var/www [575] terminated with status 32
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
mountall: mount /var/www [638] terminated with status 32
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
mountall: mount /var/users [645] terminated with status 32
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
mountall: mount /var/www [724] terminated with status 32
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
mountall: mount /var/users [729] terminated with status 32
Skipping /var/www at user request
 * Starting AppArmor profiles                                    [ OK ]
 * Starting Name Service Cache Daemon nscd                       [ OK ]
FATAL: Module vmhgfs not found.
FATAL: Module vmsync not found.
FATAL: Module vmblock not found.
 * Loading open-vm-tools modules                                 [ OK ]
 * Starting open-vm daemon vmtoolsd                              [ OK ]
Sorry for the long post; I just wanted to convey as much information as possible. Does anyone have any suggestions? I've been googling all day and have tried things with _netdev as well as changing the configuration for statd, but nothing has worked. I have 6 servers this is affecting. :\
/etc/fstab: (problem lines only - removing these will allow the rest of nfs to mount)
NFSSERVER-priv:/vol/vol1_isp/eshowcase/sites /var/www nfs ro,defaults 0 0
NFSSERVER-priv:/vol/vol1_isp/vusers /var/users nfs defaults 0 0
/etc/hosts (relevant entry):
10.1.1.43 NFSSERVER-priv
-
Bruno Pereira over 12 years
What are your fstab mount lines looking like?
-
Scott Rowley over 12 years
The problem lines are (they work just fine on other Ubuntu servers on older OSes):
NFSSERVER-priv:/vol/vol1_isp/eshowcase/sites /var/www nfs ro,defaults 0 0
NFSSERVER-priv:/vol/vol1_isp/vusers /var/users nfs defaults 0 0
-
agc93 over 10 yearsIf you read the question, he was asking about NFS shares, not NTFS shares. Therefore, Windows has nothing to do with this issue at all.