Timeout when sshfs mount not reached

I'm using some mounted sftp shares via sshfs, which works nicely so far. These shares are located on my local LAN.

But when I'm not connected to my local LAN and open applications, they either don't start at all or only after several minutes (8 to 12 minutes). These applications are in no way connected to the sftp shares and, at least in my opinion, should start without any problems even when the LAN connection is not available.

If I unmount the sftp shares beforehand, I don't have that issue.

Any idea what I can do? I know I can't use resources on the sftp shares when I'm not connected to my LAN, but at least simple things like a bash script should start without any issue when they have nothing to do with the sftp shares.

Yes. I suggest you set them up to automount using systemd. See here for details:

Followed by:

Hope this helps!


Will that also work when the LAN connection is available during boot and I later switch to another LAN where the share is not available?

No. That will not work.

If you want it to work in two different geographical locations - two different networks - it will not work.

It will only work when the ssh service is available.

It is possible to configure a setup where it works anywhere - but that will require extra configuration of DNS, hardened ssh security, and port forwarding in the router's firewall at the service location.

Oh, that I can't access the share is fine for me. I'm only talking about the timeout for applications and scripts that have nothing to do with the share. Maybe I should give a more practical example:

I have a bash script which backs up files to a USB stick. This stick is mounted, for example, as /mnt/usbstick. The mount comes from an entry in /etc/fstab and happens automatically during boot. Also in fstab is an entry which mounts an sftp share to /mnt/sftpdrive. When the laptop starts, I'm on my local LAN and /mnt/sftpdrive is mounted fine. Access is fast, and starting the bash script to back up things from, let's say, /home/username/ to /mnt/usbstick is as fast as it should be.

If I now disconnect my WiFi connection, I also lose the connection to /mnt/sftpdrive, which is the expected behavior. But I don't lose the connection to /mnt/usbstick. If I now start the bash script to back up from /home/username to /mnt/usbstick, the start of this script takes 10 minutes or longer. Once it has started, the whole backup process runs fine; only the start of the script takes ages. And this happens with almost all applications. This is completely unexpected, because the bash script doesn't make use of the disconnected /mnt/sftpdrive, nor do I back up to or from this share…

There seem to be no entries in the log files, and I have no idea where exactly I should look for them.

But I will try your suggestion. Maybe an automount via systemd is better than an entry in /etc/fstab…

OK, I can report: also with the systemd mount, I have the same behavior. If I unmount /mnt/sftpdrive before I start the applications, all is fine and they start as fast as before. At least for me, this is completely unexpected behavior.

Are you using these mount options?

noauto,nofail,_netdev
This is my line in fstab:

username@   /mnt/sftpshare   fuse.sshfs   IdentityFile=/username/.ssh/keyfile,users,x-systemd.automount,allow_other 0 0

Is it better to additionally use your options, and why?

OK, I tried it now with:

username@   /mnt/sftpshare   fuse.sshfs   IdentityFile=/username/.ssh/keyfile,users,x-systemd.automount,allow_other,noauto,nofail,_netdev 0 0

After the change in fstab, I unmounted the share and mounted it again with sudo mount -a. I tried to access the share, with success - all is working, I can access all files. Then I disconnected from WiFi and changed to another WiFi. I started some random application. And… nothing changed. Yes, the application starts, but only after 10 minutes of waiting…

If I unmount the share before the change, the application starts immediately. The same is true when the share is mounted and I'm on the right WiFi. I have absolutely no idea why it takes so long to start the application in the other case. There is nothing on the sftpdrive that the application needs…

You need to build in some condition and error handling.

When you rely on a script on a USB stick to execute a backup to a network share mount without any kind of handling, you are asking for trouble.

When you execute your script on the USB stick, it will back up whether your network share is available or not - because the mount point will always exist, but the mounted share will not.

Your script will then write to the mount point, and the data will be stored on your system disk and not on the share as you expect.

One way to run a conditional check is to use the mount command and search the output to verify that the mount actually exists.
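A minimal sketch of such a pre-flight check, using the mount point from this thread (the helper name and paths are just examples - adjust them to your setup):

```shell
#!/bin/sh
# Pre-flight check before writing to a network share mount point.
share_is_mounted() {
    # `mountpoint -q` exits 0 only when the directory is an active
    # mount; grepping the output of `mount` achieves the same thing:
    #   mount | grep -q " on $1 "
    mountpoint -q "$1"
}

if share_is_mounted /mnt/sftpshare; then
    echo "share is mounted - safe to back up"
else
    echo "share is NOT mounted - refusing to write to the bare directory" >&2
fi
```

This way the script aborts instead of silently filling your system disk behind the empty mount point.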

When you mount using fstab, your system expects those mounts to be present at all times.

If the mount is not available at boot time, your system will hang for 90 seconds while waiting for the entry to become available.

The absence of the device will cause unexpected issues for your system - after all, fstab contains system-critical information.

You can - partly - work around those with a variety of options in fstab; some will remedy the boot delay, but the absence of the device will still cause issues for your system.
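As an illustration only (host, remote path and key file are placeholders, and the timeout value is just an example), an fstab line combining such options might look like this - nofail keeps the boot from failing on the missing share, and x-systemd.mount-timeout caps the wait:

```
# /etc/fstab - placeholders; adjust host, paths and key file
username@<host>:/remote/path  /mnt/sftpshare  fuse.sshfs  noauto,nofail,_netdev,x-systemd.automount,x-systemd.mount-timeout=10,IdentityFile=/home/username/.ssh/keyfile,allow_other  0  0
```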

Believe me - years back I tried hard to make my system behave using fstab and a variety of options - but no matter how I got around it - I kept banging my head on obstacles.

By coincidence I began looking into mount units and automount units - and finally I got it working - no boot delays - navigating a folder made it available on demand.

There's only one thing you will always need to accommodate for when scripting tasks - condition handling.

If your script depends on a network share or a specific mount point - always ask the system before assuming the share is available - otherwise you will likely write the data to a location on your system disk.

Thanks for your input, but I guess we are still talking past each other. The backup script was just an example. The problem affects absolutely all scripts, applications, etc. - everything I can start. I just wanted to clarify that neither the backup script nor the other applications have anything to do with the SFTP share. They don't access it, they don't start from it, and they don't read from or write to it. Nevertheless, every startup is extremely delayed - on average 12 to 15 minutes.

If access to the SFTP share works, all applications start immediately - that is, if I am connected to the correct LAN/WiFi. As soon as I disconnect, all applications start extremely delayed, and I can't find any error message anywhere. If the SFTP share is not mounted, all applications start immediately, no matter whether the correct WiFi connection is used or not.

The idea I have now is to create some kind of systemd service which unmounts the SFTP share when the connection to a specific LAN/WiFi is lost. However, I have no idea what to use as the trigger, or what the condition should be. The service should then automatically re-mount the share when the connection to the correct LAN/WiFi is re-established. From my point of view this is currently the only workaround.
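If NetworkManager manages the WiFi, its dispatcher mechanism can serve as exactly that trigger: it runs every executable in /etc/NetworkManager/dispatcher.d/ with the interface name and the action as arguments. A sketch under that assumption (file name and mount point are examples):

```shell
#!/bin/sh
# /etc/NetworkManager/dispatcher.d/50-sftpshare (name is an example).
# NetworkManager invokes this with $1 = interface, $2 = action.
SHARE=/mnt/sftpshare

case "$2" in
    down)
        # Lazy unmount (-l) so a dead sshfs connection cannot hang us.
        if mountpoint -q "$SHARE"; then
            umount -l "$SHARE"
        fi
        ;;
    up)
        # With a systemd automount unit there is nothing to do here:
        # the next access to $SHARE triggers the mount again.
        ;;
esac
```

The script must be root-owned and executable for NetworkManager to run it.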

I understand you quite well - I have been there … I know exactly what you are referring to …

@Mirdarthos already linked you to the solution - the articles I wrote are battle tested - and they work.

You can create a system service to do the lifting - it will mount when you access the mountpoint - detach yourself from the thought that fstab is the only way.

Comment out the entry in fstab and unmount the share before you test the suggestion below.


/etc/systemd/system/mnt-sftpshare.mount (host, remote path and key file are placeholders - fill in your own values):

[Unit]
Description=sftp share on <host>

[Mount]
What=username@<host>:/remote/path
Where=/mnt/sftpshare
Type=fuse.sshfs
Options=IdentityFile=/home/username/.ssh/keyfile,allow_other,_netdev

[Install]
WantedBy=multi-user.target

/etc/systemd/system/mnt-sftpshare.automount:

[Unit]
Description=sftpshare on <host>

[Automount]
Where=/mnt/sftpshare
TimeoutIdleSec=30

[Install]
WantedBy=multi-user.target

sudo systemctl enable --now mnt-sftpshare.automount