Could not enter console mode; after I pressed Enter it kept showing the same messages.
I can boot into Fedora, which is on another partition, but I can’t mount the Manjaro partition.
Hi @servimo,
It might be, or it might “simply” be the filesystem. It’s relatively easy to confirm or eliminate the disk, though. Then the filesystem is all that remains. Or it will confirm the disk, ruining your day.
Simply use `smartctl` from a chroot live environment to check.
Ignore this.
How to chroot

- Ensure you’ve got a relatively new ISO, or at least one with a still supported LTS kernel.
- Write/copy/`dd` the ISO to a USB thumb drive (see the sketch after this list).
- When done, boot with the above-mentioned USB thumb drive into the live environment.
- Once booted, open a terminal and enter the following command to enter the `chroot` environment:
  ```
  manjaro-chroot -a
  ```
- If you have more than one Linux installation, select the correct one to use from the list provided.

When done, you should now be in the chroot environment. But be careful: you’re now in an actual root environment on your computer, so any changes you make will persist after a restart.
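For reference, writing the ISO with `dd` might look like the following. This is a minimal sketch: the ISO filename and the target device `/dev/sdX` are placeholders, so check the real device name with `lsblk` first, because `dd` will overwrite the target without asking.

```
# Run from the directory containing the ISO.
# /dev/sdX is a placeholder for your USB stick; verify it before running!
sudo dd if=manjaro.iso of=/dev/sdX bs=4M status=progress oflag=sync
```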
Using smartctl
I have no idea whether the `smartmontools` package is installed on the live environment, so test this first. Run:
```
smartctl -h
```
…and if you get an error, that means it’s not installed, so install it first:
```
sudo pacman -S smartmontools
```
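Or, as a one-liner that only installs when the command is missing (a sketch; on a live ISO you may also need to refresh the package databases first with `sudo pacman -Sy`):

```
# command -v exits non-zero if smartctl isn't found on the PATH.
command -v smartctl >/dev/null || sudo pacman -S smartmontools
```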
When/if it is there, first run the following to identify the disk:
```
lsblk
```
Mine, for example:
```
$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda           8:0    0   3.6T  0 disk
└─sda1        8:1    0   3.6T  0 part
sdb           8:16   0   4.5T  0 disk
└─sdb1        8:17   0   4.5T  0 part /home/mirdarthos/virtualbox
                                      /home/mirdarthos/Video
                                      /home/mirdarthos/Pictures
                                      /home/mirdarthos/Music
                                      /home/mirdarthos/KeePass
                                      /home/mirdarthos/Documents
                                      /mnt/5TB
nvme0n1     259:0    0 232.9G  0 disk
├─nvme0n1p1 259:1    0   500M  0 part /boot/efi
├─nvme0n1p2 259:2    0   7.8G  0 part [SWAP]
└─nvme0n1p3 259:3    0 224.6G  0 part /
```
I know my SSD is 250 GB, and I have 2 × HDDs in my PC, so my SSD is `/dev/nvme0n1`.
Then doing the `smartctl` test is as easy as:
```
sudo smartctl --all <identifiedDisk>
```
…where `<identifiedDisk>` is the disk previously identified.
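For example, with the NVMe disk identified above (substitute your own device node):

```
sudo smartctl --all /dev/nvme0n1
```

Among other things, look for the line `SMART overall-health self-assessment test result: PASSED` in the output; anything other than PASSED points at the disk itself.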
The problem could ALSO be a faulty connector, which is possibly easy to fix if you are using a SATA SSD, and AFAIK impossible if you use NVMe. Luckily, from the photo it looks like it is SATA.
As I said:
It’s a SATA SSD.
When performing the test I suggested, Manjaro doesn’t have to be mounted. Simply use a live environment.
That is a sign that it’s not your SSD, but rather the `btrfs` filesystem that’s damaged.
You’re going to have to repair it from a live session. If your Fedora has the btrfs tools installed, then you might be able to do it from there.
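On Fedora, the btrfs userspace tools come from the `btrfs-progs` package, so a quick way to check for (and, if needed, install) them could be:

```
# Prints the tool version if present; otherwise installs the package.
btrfs --version || sudo dnf install btrfs-progs
```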
I somehow missed this:
So ignore what I said; @Aragorn is correct. Follow his advice and instructions, if any.
I think it really is a problem with the SSD now. I detached all my SATA cables and reattached them, and now I can’t start the system.
For now let’s leave this alone.
I will go for assistance and see if the SSD is working.
I gave the cables one last chance, and now it is working again; it boots \°/
Confirming that it’s either:
- a cable; or
- a connector.
What should I do using btrfs in Fedora?
I tried this:
```
sudo btrfs rescue zero-log /dev/sdb1
sudo btrfs rescue super-recover -v /dev/sdb1
```
Both report that everything is OK.
Run `btrfs check` (with appropriate options) on the block device holding your Manjaro partition.
See the man page for details:
```
man btrfs-check
```
Note that for running the actual command, `btrfs` and `check` are two separate words. It is only for invoking the man page that they are connected with a hyphen.
If I run `sudo btrfs check /dev/sdb1`, I get lots of:
```
csum exists for … but there is no extent record
…
ERROR: errors found in csum tree
```
Then you must run it with the `repair` option.
It’s considered “dangerous”, but given that for all intents and purposes you’ve already lost access to your data, I don’t think there would be anything left to lose, and all the more to gain.
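Concretely, that would be something like the following (the device node follows your earlier output; run it from a live session, with the filesystem unmounted):

```
sudo btrfs check --repair /dev/sdb1
```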
Not repaired; it aborted:
```
Ok we have overlapping extents that aren't completely covered by each other, this is going to require more careful thought. The extents are […] and […]
Aborted
```
Then I see no other option but to try…
```
sudo btrfs check --repair --init-extent-tree /dev/sdb1
```
Like I said, at this point you must consider a total loss as already being factual reality, so you can only gain from it.
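If you have enough free space on another disk, it may be worth imaging the partition first, so even a failed repair attempt can be rolled back. A sketch, with a hypothetical target path:

```
# /mnt/backup is a placeholder for a mounted drive with enough free space.
sudo dd if=/dev/sdb1 of=/mnt/backup/sdb1.img bs=4M status=progress
```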
Note: We also have no idea what you did or what happened that may have caused this situation in the first place. `btrfs` is normally a very robust and self-healing filesystem.
What happened was that my computer was rebooting; it was raining, there was a thunderclap, and my computer stopped and turned off.
A Timeshift snapshot on an external drive with rsync, and there would be no problem.
At least for an ext4 boot partition… not sure how it looks with btrfs.
@Kobold Although I use btrfs, my backups are made with rsync to another partition (ext4). The problem now is to access my root partition, which is btrfs.
Okay, yeah, a sudden power loss can cause filesystem damage, but `btrfs` is normally quite robust in that scenario.
On the other hand, if you had other electrical malfunctions and/or a circuit breaker tripped by lightning, then your computer is most certainly damaged, because an EMP inevitably fries finer electronics.
Trust me: been there, have the T-shirt. A six-week-old, brand-new professional workstation with SCSI hardware, back in September 2000. I tried to repair it by swapping out some components, but it didn’t last much longer, and from that moment on it was also incredibly unstable.
If there is filesystem damage, then putting back the snapshot won’t fix anything.
Note: File damage is not the same thing as filesystem damage.
There’s no reason why you wouldn’t be able to make `rsync` backups when using `btrfs`; I do it all the time. But, see above.
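For what it’s worth, a typical full-system rsync backup to an ext4 partition mounted at a hypothetical `/mnt/backup` might look like this (the exclude list keeps pseudo-filesystems and mount points out of the copy):

```
# -a: archive mode; -A/-X: preserve ACLs and xattrs; -H: preserve hardlinks.
sudo rsync -aAXH --delete \
  --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} \
  / /mnt/backup/
```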
You mean partition damage, correct?
But what about formatting the root partition and then restoring the Timeshift snapshot?
That should fix it, or not?