Will the xfs_repair -L option remove the metadata for everything ever stored on the HDD?

Hi friends. I have had an external drive mounted without a LABEL for some time, working nicely in Dolphin, but I wanted to give it a label so it shows up under a nice name instead of numbers…
I ran:

sudo xfs_repair -n /dev/sda1
[sudo] password for consultor: 
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
ignored because the -n option was used.  Expect spurious inconsistencies
which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
would have reset inode 140119303 nlinks from 1 to 2
No modify flag set, skipping filesystem flush and exiting.

current results of mount points are

lsblk -a
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda           8:0    0 931.5G  0 disk 
`-sda1        8:1    0 931.5G  0 part 
sdb           8:16   1  29.9G  0 disk 
`-sdb1        8:17   1  29.9G  0 part /run/media/consultor/241468c4-9de4-4078-9999-f61efe700020
sr0          11:0    1     2K  0 rom  
nvme0n1     259:0    0 447.1G  0 disk 
|-nvme0n1p1 259:1    0   300M  0 part /boot/efi
|-nvme0n1p2 259:2    0 429.8G  0 part /
`-nvme0n1p3 259:3    0    17G  0 part [SWAP]

So I set a LABEL on the filesystem, but because I had default paths defined in Plasma's Settings, I could not access all the files: too many paths still pointed at the old location.
So I put it back and changed the default paths, but somewhere in the process I got an error.
Now I have a device listed as /dev/sda1 but with no mount point, and I also cannot remove it from the mount points. I tried to check the filesystem of the disk, but it asks me to mount the drive or to run xfs_repair with the -L option, which seems to be a dangerous operation, though I am quite sure no important data changes were made during the last session, when the error happened. So my real question is how bad it really is to use xfs_repair with the -L option. I think the best formulation of the question I need answered is:
Will the xfs_repair -L option remove the metadata for everything ever stored on my external HDD, or only for what was written during the session in which the HDD failure happened?

Thank you for any help!

Even after reading man xfs_repair I don’t know what the -L option will actually do.
But I’m quite sure it doesn’t have anything to do with giving the file system a label.

https://linux.die.net/man/8/xfs_repair

Labeling can be done with xfs_admin, it seems:

https://medium.com/shehuawwal/how-to-label-ext4-and-xfs-file-system-in-linux-356f56e4cae2
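
From that page it would be something like this (just a sketch - “mydisk” is an example label I made up, and the filesystem must not be mounted while you do it):

sudo xfs_admin -L mydisk /dev/sda1   # set the label on an unmounted XFS filesystem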

The command you ran:
xfs_repair -n /dev/sda1
did nothing to the disk - it was a “dry run” (-n means “no modify”)

If you run that with the -L option, you’ll likely incur some damage.

It is the wrong tool for the job.

All this is just from reading the manual pages and three minutes of googling.
I have never ever actually used xfs.


Removed tag and moved to Support, as the question is not Plasma-specific


Thank you for the feedback. The issue is not so much with labeling; there are several issues. For the moment I am not interested in labeling the disk. First, the mount point /dev/sda1 seems occupied and I cannot remove it; I cannot umount it. Secondly, I wish to know whether -L will remove all metadata of the whole drive or only of the last session… That was not clear to me from the manual pages.

/dev/sda1 is NEVER to be used as a mount point to mount a disk!

/dev/xxxxx is the name of the device itself, or of some partition of the device. So it is correct that you cannot remove it :wink:
:footprints:
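
If you want to mount it by hand: the mount point is a directory, and the device gets mounted onto it - roughly like this (/mnt/usb is just an example name, create it first):

sudo mkdir -p /mnt/usb          # the mount point is a directory you choose
sudo mount /dev/sda1 /mnt/usb   # device gets mounted onto the directory
sudo umount /mnt/usb            # unmount via the directory (or the device)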

Please show the command you tried to run, along with its full error messages.


Like @andreas85 said:
/dev/sda1 is the device partition itself
There is no mount point for it visible in your output - it is therefore not mounted.

/dev/sdb1 is mounted - it is mounted to /run/media
(which is how and where external devices are attached when you mount them through the GUI)
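
If you want to double-check, findmnt can tell you: it prints the mount entry for a device, and prints nothing if the device is not mounted:

findmnt /dev/sda1   # no output here means: not mounted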

lsblk -f
is more useful - as it will also show the file system on each of the devices

I cannot conclusively clear that up, because I have never ever used xfs, although:

xfs_repair -L /dev/sda1

is a very different command than:

xfs_admin -L myname /dev/sda1

both have an -L option, but that -L means different things with different commands

It looks like your command will zero out … something - and something will get lost:

… and would have moved some files to lost+found

reading the man page tells me that
changes to the metadata (in the log) will get lost
not all metadata - just the changes
… whatever that might mean in reality for your data … :man_shrugging:
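
Going by the alert in your own xfs_repair -n output, the safer order seems to be: mount once so the kernel replays the log, then repair without -L. Something like this (a sketch - /mnt is just an example mount point):

sudo mount /dev/sda1 /mnt    # mounting replays the journal, if it can
sudo umount /mnt
sudo xfs_repair /dev/sda1    # no -L needed once the log is clean

The man page presents -L (zeroing the log) as a last resort for when that mount fails - and it can lose whatever changes were still sitting in the log.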


I feel that some documentation is warranted… :stuck_out_tongue:

:point_down:
