Hi friends. I have had an external drive mounted without a LABEL for some time, working nicely in Dolphin, but I wanted to give it a label so it shows up with a nice name instead of numbers…
I ran:
sudo xfs_repair -n /dev/sda1 ✔ 5m 51s
[sudo] password for consultor:
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
ignored because the -n option was used. Expect spurious inconsistencies
which may be resolved by first mounting the filesystem to replay the log.
- scan filesystem freespace and inode maps...
- found root inode chunk
Phase 3 - for each AG...
- scan (but don't clear) agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
would have reset inode 140119303 nlinks from 1 to 2
No modify flag set, skipping filesystem flush and exiting.
The current mount points look like this:
lsblk -a INT ✘
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 931.5G 0 disk
`-sda1 8:1 0 931.5G 0 part
sdb 8:16 1 29.9G 0 disk
`-sdb1 8:17 1 29.9G 0 part /run/media/consultor/241468c4-9de4-4078-9999-f61efe700020
sr0 11:0 1 2K 0 rom
nvme0n1 259:0 0 447.1G 0 disk
|-nvme0n1p1 259:1 0 300M 0 part /boot/efi
|-nvme0n1p2 259:2 0 429.8G 0 part /
`-nvme0n1p3 259:3 0 17G 0 part [SWAP]
So I did give the drive a LABEL, but because I had default paths defined in Plasma's settings, I could no longer access all the files, since too many paths still pointed to the old location.
So I put it back and changed the default paths, but somewhere in the process I got an error.
Now I have a device listed as /dev/sda1 but with no mount point, and I also cannot remove it from the mount points. So I tried to check the filesystem of the disk, but it asks me to either mount the drive or run xfs_repair with the -L option, which seems like a dangerous operation, though I am fairly sure no important data changes were made during the last session before the error happened. So my real question is: how bad is it really to use xfs_repair with the -L option? The clearest way I can phrase it is: will xfs_repair -L remove the metadata needed to access everything that has been stored on my external HDD, or only the metadata from the session during which the failure happened?
Even after reading man xfs_repair I don’t know what the -L option will actually do.
But I’m quite sure it doesn’t have anything to do with giving the file system a label.
Thank you for the feedback. The issue is not so much the labeling; there are several issues. For the moment I am not interested in labeling the disk. First, the mount point /dev/sda1 seems occupied and I cannot remove it; I cannot umount it. Secondly, I want to know whether -L will remove all metadata of the whole drive or only from the last session… That was not clear to me from the man page.
Like @andreas85 said: /dev/sda1 is the device partition itself
There is no mount point for it visible in your output - it is therefore not mounted.
/dev/sdb1 is mounted - it is mounted to /run/media
(which is how and where external devices are attached when you mount them through the GUI)
lsblk -f
is more useful - as it will also give the file system on the various devices
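For example (assuming the disk still shows up as /dev/sda - adjust if it got renamed):
lsblk -f /dev/sda   # shows filesystem type, LABEL and UUID for each partition
findmnt /dev/sda1   # prints the mount point if it is mounted, nothing at all if it is not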
I cannot conclusively clear that up, because I have never ever used xfs, although:
xfs_repair -L /dev/sda1
is a very different command than:
xfs_admin -L myname /dev/sda1
both have an -L option, but that -L means different things with different commands
It looks like your command will zero out … something - and something will get lost:
… and would have moved some files to lost+found
reading the man page tells me that
changes to the metadata (in the log) will get lost
not all metadata - just the changes
… whatever that might mean in reality for your data …
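If it were my disk, I would first try to let the kernel replay that log by simply mounting the partition, before even thinking about -L - roughly like this (just a sketch, assuming the partition is still /dev/sda1 and /mnt is a free, empty directory):
sudo mount /dev/sda1 /mnt      # mounting an XFS filesystem replays its journal, if it can
sudo umount /mnt
sudo xfs_repair -n /dev/sda1   # dry run again - the log warning should be gone now
sudo xfs_repair /dev/sda1      # actual repair, still without -L
and keep -L for the case where even the mount fails.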
I see my report is messy. I agree - it is the partition, not a mount point. Somehow I am very loose with the basics; I don't know why, given that I have had Linux machines at home for a very long time now… sorry.
I used that command because GParted suggested it when I chose the option to repair the partition; it suggested it because it was not able to do the repair itself.
So I actually tried the dry run (-n option) again for the sdc1 partition… but I feel stuck. Which way should I go now? It looks to me that you do not recommend using the -L option. Here is the full output:
> sudo xfs_repair -n /dev/sdc1 INT ✘ 4m 49s
> [sudo] password for consultor:
> Phase 1 - find and verify superblock...
> Phase 2 - using internal log
> - zero log...
> ALERT: The filesystem has valuable metadata changes in a log which is being
> ignored because the -n option was used. Expect spurious inconsistencies
> which may be resolved by first mounting the filesystem to replay the log.
> - scan filesystem freespace and inode maps...
> - found root inode chunk
> Phase 3 - for each AG...
> - scan (but don't clear) agi unlinked lists...
> - process known inodes and perform inode discovery...
> - agno = 0
> - agno = 1
> - agno = 2
> - agno = 3
> - process newly discovered inodes...
> Phase 4 - check for duplicate blocks...
> - setting up duplicate extent list...
> - check for inodes claiming duplicate blocks...
> - agno = 0
> - agno = 1
> - agno = 2
> - agno = 3
> No modify flag set, skipping phase 5
> Phase 6 - check inode connectivity...
> - traversing filesystem ...
> - traversal finished ...
> - moving disconnected inodes to lost+found ...
> Phase 7 - verify link counts...
> would have reset inode 140119303 nlinks from 1 to 2
> No modify flag set, skipping filesystem flush and exiting.
so: you want to change the label - or give it a label because it doesn’t have one
no need to repair anything, in this case
when you feel you need to repair something:
xfs_repair
Usage: xfs_repair [options] device
Options:
-f The device is a file
-L Force log zeroing. Do this as a last resort.
-l logdev Specifies the device where the external log resides.
-m maxmem Maximum amount of memory to be used in megabytes.
-n No modify mode, just checks the filesystem for damage.
(Cannot be used together with -e.)
-P Disables prefetching.
-r rtdev Specifies the device where the realtime section resides.
-v Verbose output.
-c subopts Change filesystem parameters - use xfs_admin.
-o subopts Override default behaviour, refer to man page.
-t interval Reporting interval in seconds.
-d Repair dangerously.
-e Exit with a non-zero code if any errors were repaired.
(Cannot be used together with -n.)
-V Reports version and exits.
the -L option for this command does what is stated above - it will zero some logs with the potential loss of some recent data
But that command is not the correct one to change the label.
So: what is it that you actually want to do?
… give it a label or repair it?
You said you wanted to give it a label …
xfs_admin -L mylabel /dev/sdc1
would do that
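(as far as I know the filesystem must be unmounted for that - something along the lines of:
sudo umount /dev/sdc1                 # xfs_admin wants the filesystem unmounted
sudo xfs_admin -L mylabel /dev/sdc1   # set the label
sudo xfs_admin -l /dev/sdc1           # print the label back to verify
but again, only if you actually want a label)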
To me it looks like, even if you did want or need to repair it,
the -L option would be a last resort if the repair didn't work without it …
Thank you. I do not wish to give it a label any more. I changed my mind and decided not to label it, because the links in some apps did not update automatically. It was actually on the way back (when I went from labeled to unlabeled) that I realized I could not mount the drive with my data. So I tried to repair the disk in GParted, but it did not repair it and offered to run the -L command, though I do not know whether I should do it, or how, when the disk is not mounting. Sorry for the messy explanation.