You might be able to create a custom .target or pre-service that acts as a condition for anything involving your NAS server.
It would explicitly verify that 192.168.2.102 is reachable.
Then it could be declared as a dependency for your mounts and services, possibly.
When I look at my logs for my rsync task (to the NAS server), sometimes I will see two or three messages of “NAS unreachable! Retrying!” before it actually starts the rsync task.
This tells me that it took several seconds and a few retries, even though the “network” was available. Had I not used that logic (pasted above), then my rsync task would fail, even if my network was online.
Oh, and I’m using WIRED ETHERNET, so it’s not like I have to wait for a WiFi connection to establish.
Oh! Try this! I haven’t tested it myself, but this might be the cleaner method, since it will be granular to your NAS server, and not some ambiguous “network connection”.
Create and enable a oneshot service named /etc/systemd/system/reachable-nas.service
The contents of reachable-nas.service might look like this:
[Unit]
Description=Make sure TrueNAS server is reachable
[Service]
Type=oneshot
ExecStart=/bin/bash -c 'while ! ping -c1 -W1 192.168.2.102; do sleep 2s; done'
TimeoutStartSec=30s
[Install]
WantedBy=multi-user.target
Now enable it:
sudo systemctl enable reachable-nas.service
And add Requires= and After= entries for it under the [Unit] section of your mnt-nas-TRUENAS.mount.
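For example, the [Unit] dependency lines might look like this (using the service name from above):

```ini
[Unit]
Requires=reachable-nas.service
After=reachable-nas.service
```

With Requires=, the mount fails if the reachability check fails; After= makes sure the check finishes first.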
I mean, all I really need to do is use the .automount and then a simple script that will ls all my mounts after logon.
It's just really a stupid thing to have to do.
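That "ls all my mounts" workaround could be as small as this sketch (the /mnt/nas base path and the function name are assumptions, not your actual setup):

```shell
#!/bin/bash
# Hypothetical helper: list every directory under a base path after login,
# so that any x-systemd.automount units covering them get triggered.
trigger_automounts() {
    local base="$1"
    for mountpoint in "$base"/*/; do
        [ -d "$mountpoint" ] || continue
        # A plain directory listing is enough to wake an automount unit.
        ls "$mountpoint" >/dev/null && echo "triggered: $mountpoint"
    done
}

trigger_automounts "${1:-/mnt/nas}"
```

You could run it from a login script or a user-level autostart entry.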
But the method I wrote above (while you were still typing your post) might work out as a cleaner approach, and it can be modular, since all you need to do is add the same Requires=/After= dependency to any .mount or .service that uses the NAS server.
You're basically creating a more nuanced condition that systemd and other logic fails to address: a truly reachable NAS server on your local network.
EDIT: Not to sound blunt, but the users on Reddit are not taking this into account:
They sound like “Well it works for me, so surely it should work for you.”
As for the claim, then explain this:
mount | grep /mnt/nas/downloads
//192.168.1.101/downloads on /mnt/nas/downloads type cifs (rw,relatime,vers=3.1.1,cache=loose,username=winnie,domain=workgroup,uid=1000,forceuid,gid=1000,forcegid,addr=192.168.1.101,file_mode=0600,dir_mode=0700,iocharset=utf8,soft,nounix,serverino,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=60,x-systemd.automount,user=winnie)
This would also work without using the automount method. (Mount unit only.)
There's nothing in my fstab, yet any use of this mount point, whether through the mount command or any application/software, works flawlessly.
The problem from the get-go is not “fstab vs systemd-mount”. It’s that your NAS server is not immediately reachable upon system bootup.
However, using discrete .mount units, you can easily create a modular approach (such as the custom pre-condition of an available NAS server) without long fstab entries that spell out every single x-systemd option.
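A discrete unit for that mount point might look something like this sketch; the share name, credentials file, and mount options here are assumptions, not the actual configuration (note the unit filename must match the Where= path, so /mnt/nas/TRUENAS becomes mnt-nas-TRUENAS.mount):

```ini
# Hypothetical /etc/systemd/system/mnt-nas-TRUENAS.mount
[Unit]
Description=TrueNAS CIFS share
Requires=reachable-nas.service
After=reachable-nas.service

[Mount]
What=//192.168.2.102/share
Where=/mnt/nas/TRUENAS
Type=cifs
Options=credentials=/etc/nas-credentials,uid=1000,gid=1000

[Install]
WantedBy=multi-user.target
```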
Well, I suspect you are right that a reinstall of Linux will not help, but I have a multi-boot system (even have Windows in there somewhere; should really nuke that from orbit some time), so another partition with another copy of a fresh Manjaro KDE won't hurt, and it will show whether it's almost certainly a bug in the kernel or systemd or somewhere, but nothing to do with me. If I knew enough I might even contact the devs, but I don't understand systemd enough to communicate with them. I'm pretty sure that they already know, though; I mean, I can find the exact same problem from 4 years ago!