How to set default file permissions for all folders/files in a directory shared through sshfs?

I want to share a folder between two computers and make sure I have read and write access from both computers.

I’ve shared a folder with sshfs between two computers, and I’ve run sudo chmod -R ugo+rw /mnt/storage to change the permissions for all existing files and folders. But how do I make sure all new files and folders get those permissions too?

I think it would be something like this, but it isn’t clear to me what that does, so I don’t know how to adapt it for two computers.

You should explain what you are trying to achieve first.

Are you sure you’re using sshfs? An sshfs mount is usually bound to a single user by ssh, not really “shared”.

Ideally, you would create a group, add the two users to this group and use the solution from your linked post to make sure all files/folders belong to this group.
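A rough sketch of that, with hypothetical names (storage-users for the group, alice and bob for the two accounts):

sudo groupadd storage-users              # hypothetical shared group
sudo usermod -aG storage-users alice     # add the first user (they need to log out/in again)
sudo usermod -aG storage-users bob       # add the second user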

There are two mechanisms, but they are not equivalent.

The first mechanism is changing your umask, but I wouldn’t recommend this, given that you’re aiming for world-writable permissions. See… :arrow_down:

… and… :arrow_down:

man umask
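Just to illustrate what that mechanism looks like (not a recommendation here, for the reason above):

umask          # print the current mask, typically 022
umask 0002     # files/dirs created in this shell session become group-writable

It only affects the current shell session; to persist it, the line would have to go into a shell profile such as ~/.profile.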

The second (and better) method, possibly in conjunction with temporarily changing your umask, is setting the SGID flag on the directory. This too is explained in the above-linked tutorial.
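A minimal sketch of that, reusing the hypothetical storage-users group from above:

sudo chgrp -R storage-users /mnt/storage              # hand the whole tree to the shared group
sudo find /mnt/storage -type d -exec chmod g+s {} +   # setgid on directories: new files inherit the directory’s group
sudo chmod -R g+rw /mnt/storage                       # and make the existing content group-writable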

That all said, a better way to handle the permissions is to set up NFS instead of sharing the directory with sshfs. That way you can fine-tune the permissions without having to make the share world-writable. :arrow_down:

NFS at the Arch Wiki
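For comparison, a server-side NFS export is a single line in /etc/exports; the subnet and options below are only assumptions to show the shape of it (see the Arch Wiki page for the details):

/mnt/storage  192.168.1.0/24(rw,sync,no_subtree_check)   # assumed LAN subnet
sudo exportfs -ra                                        # re-read /etc/exports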

That can be done… but note that one machine is the server and the other is the client…

Well, why would you need that? Only o-rw (taking read/write away from everyone else) would make sense…

The problem here is not the permissions, but the owner. The owner can be mapped on the client side. Example:

sshfs user@xxx.xxx.xxx.xxx:/storage /mnt/storage -o idmap=user

That way it will map the remote owner to the user who is actually mounting it, so to you. On the remote side nothing changes, but on the client side every file is mapped to your user account.

I want to share a folder between two computers and make sure I have read and write access from both computers.

I tried NFS before sshfs, and whenever the server went offline it froze the programs on the client computer that were using the shared folder. I couldn’t fix it, and I got tired of trying, so I won’t go back to it.

I’ll take a look at the second option.

I have the following line in the client’s /etc/fstab

user@server-ip:/mnt/storage /mnt/storage fuse.sshfs IdentityFile=/home/user/.ssh/id_rsa,uid=1000,gid=1000,allow_other,default_permissions,_netdev,follow_symlinks,ServerAliveInterval=45,ServerAliveCountMax=2,reconnect,noatime,auto 0 0

Should I replace something with idmap=user, or just add it?

I guess you just need to add it. These are my options in fstab:

noauto,x-systemd.automount,_netdev,user,idmap=user,follow_symlinks,identityfile=/home/user/.ssh/id_rsa,allow_other,default_permissions,uid=1000,gid=1000,X-mount.mkdir=0755

I like the behavior that it only mounts the sshfs if I open the folder. Otherwise it stays unmounted.
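Putting the two together, the earlier fstab line with idmap=user added would look roughly like this (untested sketch, just the line quoted above plus the new option):

user@server-ip:/mnt/storage /mnt/storage fuse.sshfs idmap=user,IdentityFile=/home/user/.ssh/id_rsa,uid=1000,gid=1000,allow_other,default_permissions,_netdev,follow_symlinks,ServerAliveInterval=45,ServerAliveCountMax=2,reconnect,noatime,auto 0 0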


That’s because you were using NFS over TCP/IP. The current NFS implementation also supports UDP, which is the better option if the server isn’t always online.
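Just to illustrate what such a client mount could look like in /etc/fstab (NFSv3, since NFSv4 is TCP-only, and only if the kernel still allows NFS over UDP; every value here besides the share path is an assumption):

server-ip:/mnt/storage /mnt/storage nfs vers=3,proto=udp,soft,timeo=50,retrans=2,_netdev 0 0   # soft + timeo keep clients from hanging when the server disappears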

To accomplish that you can use:
sudo setfacl -R -m d:o:rwX /mnt/storage
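If that runs cleanly, the new default entries can be checked with getfacl (from the same acl package); the output should contain a line along these lines:

getfacl /mnt/storage
# ...
# default:other::rwx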


I used this tutorial. It would be nice if you added how to use NFS with UDP.

I’m getting a bunch of “Operation not supported” messages, one for each folder. Is that normal?

That command will only work on filesystems that support extended attributes, so if you get such errors it means the filesystem you mounted doesn’t support them…
(It should have given the same error for files as well, not just folders :no_good_woman:)

Edit: Ohhhh, wait a sec, I just read that you are mounting with sshfs; that’s not a local filesystem, maybe that’s why.
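One quick way to check what the mount point actually is (just a sketch, using the path from this thread):

findmnt -T /mnt/storage -o TARGET,FSTYPE,OPTIONS
# fuse.sshfs (or nfs) here means the ACLs have to be set on the server’s local filesystem instead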

This topic was automatically closed 2 days after the last reply. New replies are no longer allowed.