Two Issues With Kernel 6.10rc1-2 (ethernet, device specs)

I’ve noticed a couple of issues with kernel 6.10 today:

Issue #1: Ethernet is offline, though wifi still works. This is a bummer: I normally depend on ethernet to connect the computers in my bedroom, since it’s faster and it seems silly to use radio to communicate over a distance of half a meter. I hope this gets fixed.

Issue #2: Device specifications have changed. For example, compare storage-usage reports for my system between kernels 6.9 and 6.10:

using kernel 6.9:

Partition           Total  Used     Avail Use% Mnt
/dev/nvme0n1p2      419GB 142GB     256GB  36% /
/dev/nvme1n1p1     1968GB 870GB     999GB  47% /home
Max usage = 47%

using kernel 6.10:

Partition           Total  Used     Avail Use% Mnt
/dev/nvme1n1p2      419GB 142GB     256GB  36% /
/dev/nvme0n1p1     1968GB 870GB     999GB  47% /home
Max usage = 47%

So, the latest kernel has swapped the device numbers on my two m.2 modules. Is this considered “acceptable behavior”? If so, we can’t rely on device specifications being the same from day to day. This has already caused me some annoyance this morning, because I had to rewrite my “storage usage tracker” shell script to use mountpoints instead of device specs. (I would have used UUIDs, but unlike lsblk, df doesn’t provide those.)
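As an aside, since df won’t print UUIDs, one workaround is to resolve a mountpoint to its backing device with findmnt and then ask lsblk for that device’s UUID. A rough sketch (the uuid_for_mount helper name is my own invention):

```shell
# Hypothetical helper: map a mountpoint (or any path) to its filesystem UUID.
# findmnt -T resolves the path to its backing device; lsblk reads the UUID.
uuid_for_mount() {
    lsblk -no UUID "$(findmnt -no SOURCE -T "$1")"
}

# e.g.:  uuid_for_mount /home
```

Unlike /dev/nvmeXnYpZ names, the UUID stays stable across reboots and kernel upgrades.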

You could never rely on things like device names.
UUID is the way to go.

I don’t know what you wrote or how, but df is just a utility for reading sizes and such.
You can very well use UUIDs, for example:

df -h /dev/disk/by-uuid/*

As to any other 6.10 release candidate issues:
Well, it’s not an LTS, it’s still an RC, and there are half a dozen other supported kernels.
Don’t use it for now, or keep testing it from time to time.


So that’s per-session only, I take it?

In this case, I used mount points, as that’s really what I’m interested in. (One of my m.2 cards contains “/” and “/boot/efi”, and the other contains “/home”. So to check disk use I need only check “/” and “/home”, as “/boot/efi” is tiny.)

Interesting! But it doesn’t actually print the UUIDs:

Filesystem      Size  Used Avail Use% Mounted on
dev              32G     0   32G   0% /dev
/dev/nvme1n1p1  1.8T  809G  931G  47% /home
dev              32G     0   32G   0% /dev
/dev/nvme0n1p1  300M  4.1M  296M   2% /boot/efi
/dev/nvme0n1p2  390G  132G  238G  36% /

I wrote a script to display storage usage. Until today, it displayed info for “/dev/nvme0n1p2” and “/dev/nvme1n1p1”. But since I’ve discovered that “/dev/xxxxx” addresses are unreliable, I’m now using mount points “/” and “/home” instead:

# Report storage usage for the / and /home filesystems, by mount point.
raw=$(/usr/bin/df -l -BGB --output=target,size,used,avail,pcent)
use=$(echo -e "$raw" | awk '/^\/ +/ || /^\/home +/ {print}')
rms=$(echo -e "$use" | sed 's/   */  /g')    # squeeze runs of blanks
max=$(echo -e "$rms" | awk '/[0-9]+%/{print $5}' | cut -d "%" -s -f 1 | sort -n | tail -n 1)
msu="Max Storage Usage = $max%"
# Include the data rows after the header, so the log and email get them too.
mes="\nMount-point  Size   Used  Avail  Percent-Used\n$rms"
echo -e "$mes"
echo -e "$mes" >> '/home/aragorn/Data/Celephais/Captain’s-Den/Reports/Storage-Usage/Storage-Usage.log'
sendEmail -s "$EMAIL_SRV" -f "$EMAIL_FRM" -t "$EMAIL_TOO" -u "$msu" -m "$mes" -o tls=no

Not the most eloquent shell script, but it gets the job done.
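For what it’s worth, the filtering and the max computation can also be collapsed into a single df/awk pipeline. A sketch, assuming GNU df and the same two mountpoints (df accepts paths as operands, so it reports only those filesystems):

```shell
# Sketch: ask df for just the two mountpoints and let awk find the max Use%.
# NR > 1 skips df's header line; gsub strips the trailing "%".
max=$(df --output=target,pcent / /home |
      awk 'NR > 1 { gsub(/%/, "", $2); if ($2 + 0 > m) m = $2 + 0 } END { print m }')
echo "Max Storage Usage = ${max}%"
```

This sidesteps the column-squeezing sed entirely, since awk splits on any run of whitespace.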

I wouldn’t expect it to, but it was pointed at the UUIDs in the path.

So you could also print those while running df on each. Something like:

for duuid in /dev/disk/by-uuid/*; do echo -e "\nUUID: ${duuid##*/}" && df -h "$duuid"; done; echo