Hi friends. I have had an external drive mounted without a LABEL for some time, working nicely in Dolphin, but I wanted to give it a label so it shows up under a nice name instead of numbers…
I ran:
sudo xfs_repair -n /dev/sda1
[sudo] password for consultor:
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
ignored because the -n option was used. Expect spurious inconsistencies
which may be resolved by first mounting the filesystem to replay the log.
- scan filesystem freespace and inode maps...
- found root inode chunk
Phase 3 - for each AG...
- scan (but don't clear) agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
would have reset inode 140119303 nlinks from 1 to 2
No modify flag set, skipping filesystem flush and exiting.
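If I read that ALERT in Phase 2 correctly, the dirty log would normally be replayed by simply mounting and unmounting the filesystem, and only then re-running the check; something like this (the mount point /mnt is just an example):

sudo mount /dev/sda1 /mnt
sudo umount /mnt
sudo xfs_repair -n /dev/sda1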
The current mount points are:
lsblk -a
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 931.5G 0 disk
`-sda1 8:1 0 931.5G 0 part
sdb 8:16 1 29.9G 0 disk
`-sdb1 8:17 1 29.9G 0 part /run/media/consultor/241468c4-9de4-4078-9999-f61efe700020
sr0 11:0 1 2K 0 rom
nvme0n1 259:0 0 447.1G 0 disk
|-nvme0n1p1 259:1 0 300M 0 part /boot/efi
|-nvme0n1p2 259:2 0 429.8G 0 part /
`-nvme0n1p3 259:3 0 17G 0 part [SWAP]
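As far as I know, the kernel's own view of the mount (and the partition's current LABEL and UUID) can be cross-checked like this:

findmnt /dev/sda1        # where (or whether) the kernel thinks it is mounted
lsblk -f /dev/sda1       # shows FSTYPE, LABEL and UUID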
So I did set a LABEL on the filesystem, but because I had default paths defined in Plasma's Settings, I could not access all the files: too many of those paths still pointed to the drive's old location.
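For context, the standard way to label an XFS partition, as far as I know, is xfs_admin on the unmounted device; the label name below is only an example (XFS labels are limited to 12 characters):

sudo umount /dev/sda1
sudo xfs_admin -L ExternalHDD /dev/sda1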
So I put it back, changed the default paths, but somewhere in the process I got an error.
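As far as I know, the default paths managed in Plasma's Settings end up in ~/.config/user-dirs.dirs; its entries look roughly like this (the paths below are only examples):

XDG_DOCUMENTS_DIR="$HOME/Documents"
XDG_DOWNLOAD_DIR="/run/media/consultor/ExternalHDD/Download"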
Now I have a device that is listed as mounted (/dev/sda1) but has no mount point, and I also cannot remove it from the mount points. So I tried to check the filesystem of the disk, but xfs_repair asks me to mount the drive to replay the log, or else to run it with the -L option, which seems to be a dangerous operation. I am quite sure no important data changes were made during the last session, when the error happened, so what I really need is an estimate of how bad it is to use xfs_repair with the -L option. I think the best formulation of the question I need answered is:
Will the xfs_repair -L option destroy the metadata needed to access everything that has been stored on my external HDD, or only the metadata touched during the session in which the HDD failure happened?
Thank you for any help!