Give user permissions to all files and folders


Solution 1

Just adding the user backups to the group sudo does not automatically give the account access to all files on the system; it only gives the user permission to run the sudo command.

Since you are using public key authentication (presumably without a passphrase), I would approach this with security and ease of implementation in mind. Using ssh allows you to restrict the user to execute only very specific commands. In this case, you can allow the user backups to execute rsync with superuser permissions.

You have already performed the key exchange and verified that authentication is successful. In the authorized_keys file on the remote host that you are backing up the /home directory from, you can add a command= directive to the key used by the user backups. This directive allows only that command to be run when that key is used for authentication. So the first field of the key would look similar to this:

command="/path/to/sudo /path/to/rsync -az /home /local/folder" ssh-rsa AAAAB3NzaC1yblahblahblah

You can go even further and add more options to the key, such as from=myhost,no-pty,no-X11-forwarding.
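Combined with those options, the full key entry would look something like this (the paths, hostname, and key material are placeholders following the answer's example; note that authorized_keys options are comma-separated with no spaces between them):

```
command="/path/to/sudo /path/to/rsync -az /home /local/folder",from="myhost",no-pty,no-X11-forwarding ssh-rsa AAAAB3NzaC1yblahblahblah
```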

This should give you decent security without requiring you to modify the underlying file system permissions. You will probably need to play with the command you place in the authorized_keys file until it works as you expect; it may take a bit to wrap your brain around. The command specified in authorized_keys essentially overrides the rsync options you pass from the connecting host.

Lots of good information in man sshd. You want to specifically read the AUTHORIZED_KEYS FORMAT section.

Solution 2

One option is to run an rsync server on the remote system.

Basically, create an /etc/rsyncd.conf with uid = root, start the rsync daemon, then pull with rsync -az rsync://domain.com/module /local/folder (where module is a module name defined in the config).

See http://pastebin.com/5hQx1mRV for an example someone else wrote and Configuring the rsync daemon for some basics of enabling a server in Ubuntu.

Note that you have to consider the security of this. A better approach is probably to make the remote system push to your local system.

Solution 3

You could perhaps use ACLs, if the underlying filesystem supports them.

You should add the group backup to the ACL of each file and directory. For it to be added automatically to newly created files and directories, first set a default ACL on all existing directories. So, on the remote server:

sudo find /home -type d -print0 | xargs -0 setfacl -d -m group:backup:r-x

You still need r-x permission on all existing directories and read permission on all existing files. You can grant these with two commands (without the -d this time):

sudo setfacl -R -m group:backup:r-- /home
sudo find /home -type d -print0 | xargs -0 setfacl -m group:backup:r-x

This gives the group backup the extra permission to read the files without changing the existing user and group that own each file.

Note: to speed up the find | xargs commands, you can add the option -P n to xargs, where n is the number of parallel processes. A reasonable value is the number of CPUs on your machine plus one.
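For example, on a 4-CPU machine the default-ACL pass above could be parallelized like this (the CPU count is an assumption; adjust -P to your own machine):

```shell
# Run up to 5 setfacl processes at once (4 CPUs + 1)
sudo find /home -type d -print0 | xargs -0 -P 5 setfacl -d -m group:backup:r-x
```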



Author: Thomas Clayson

Updated on September 18, 2022

Comments

  • Thomas Clayson over 1 year

    I want to have a user who has access to all the files and folders on the system. This is for the purpose of using RSYNC on a local machine to backup a remote machine.

    At the moment we are using the user backups, and although we have added this user to the groups sudo and admin, rsync still returns messages like:

    rsync: opendir "/location/to/folder" failed: Permission denied (13)

    rsync: send_files failed to open "/location/to/file": Permission denied (13)

    Any idea how we give the user backups permission to access everything (short of adding the user to all groups ever - the remote server we are trying to backup is a dedicated hosting server where every account has its own user on the system).

    Thanks for any help.

    • Phil about 12 years
      Can you give examples of paths? If it's ones in /proc,/dev etc then errors are to be expected.
    • Vital Belikov about 12 years
      Adding user backups to sudo and admin does not change anything.
    • Thomas Clayson about 12 years
      @Phil I am trying to back up the whole /home directory. Files/directories that are failing are ones such as /home/a/c/account/users/mail:username which have permissions like drwx------ and are owned by mail:4096. I assume that because user backups is in the group sudo or admin it doesn't have access because there are no access permissions for groups? I dunno, but it's erroring on folders/files like that.
    • Thomas Clayson about 12 years
      @favadi I've already done that... still not working.
    • Phil about 12 years
      You'll have to run the rsync command with sudo.
    • Thomas Clayson about 12 years
      This is basically the rsync command we are running: rsync -e ssh -az [email protected]:/home /location/of/local/folder
    • George M about 12 years
      Since you are running rsync with ssh, are you using public key authentication?
    • Mikel about 12 years
      The sudo group is only useful for running sudo. I'm not sure if the admin group is used for anything. For this to work the way you want, you need to actually be the root user on the remote system.
  • Thomas Clayson about 12 years
    That looks pretty good thanks. Will that then rsync everything from the /backup/lc folder on the remote server? Do I change the path bit to be /home to get that to work?
  • Mikel about 12 years
    Parallel xargs won't speed things up in this case, because the bottleneck is I/O, not CPU.
  • Huygens about 12 years
    Linux uses in-memory caches before committing changes to disk, so it does feel faster from a user point of view. And even a small notebook hdd could handle the load of setting multiple ACLs in parallel. It's not that much I/O. Try it :)
  • Thomas Clayson about 12 years
    Thanks for the edit dude. The only problem with that is that we cannot guarantee a static IP address where we are, and we want to do this via cron so that it just happens all the time. If we "push" it could stop working if our IP address changes. :/
  • Thomas Clayson about 12 years
    Ok, I'm following I think. Adding that directive to the authorized_keys file will allow you to run a remote command with superuser permissions right? How would I change the command rsync -e ssh -az [email protected]:/home /location/of/local/folder in order to instruct it to run as superuser on the remote machine? Or will it do it automatically? Thanks for your time and help. :)
  • George M about 12 years
    The directive doesn't automatically allow you to run the command as superuser. When that specific key is used for authentication, it restricts the user to ONLY the command specified. The superuser permissions come from "sudo". Technically, you could put your whole "rsync" command in the authorized_keys file and just initiate the ssh connection from the backup host. The whole process would then start automatically after authentication completes. I would start the connection with something like "ssh backups@remotehost sudo rsync"
  • George M about 12 years
    It does take a little bit to wrap your brain around, but it's pretty cool when it finally clicks and you realize what you can do.
  • Warren Young about 12 years
    @ThomasClayson: That's what dynamic DNS is for. :)
  • Huygens about 12 years
    I've just read here superuser.com/a/422954/53206 that you could simply use: sudo setfacl -Rm d:group:backup:r-X,group:backup:r-X /home :) much simpler!