Is there a way to re-do a 'broken' upgrade or 'force-reinstall' all packages?

Essentially the following happened:

  1. I ran a full system upgrade using pacman -Syyu from a tty (Ctrl+Alt+F2).
  2. Somewhere in the middle, my computer froze. My guess is that I might have run out of disk space, but I am not sure about this.
  3. I restarted the computer (and cleaned up some disk space).
  4. I ran updates again, first with pacman -Syyu and then with pamac upgrade to also upgrade the AUR packages that I have.

Everything seemed fine until I rebooted and the system stopped showing anything on the screen when switching to the GUI.
Luckily I could still switch to the other ttys, which I used to try reinstalling sddm, optimus-manager and some other packages that I thought might be causing the problem.

In the end I was able to restart and get back to the GUI. However, this only seems to work sometimes; other times it again results in the above problem, leaving me with only shell access.

Also, even when shutting the computer down with shutdown now from the CLI, fsck finds orphaned inodes to clear on the next startup. I also see an /oldroot mount that the system "fails to unmount" at shutdown.

So I think my system is in a weird 'partial upgrade' state now. Not by choice but by accident.

My question: is there a way to 're-do' the full upgrade, or 'force-reinstall' all packages that are currently on the system?

You can re-install all your repo packages like this:

sudo pacman -Syu $(pacman -Qqen)
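Note that -Qqen only lists explicitly installed packages from the official repos, so the command above does not touch your AUR builds. A sketch of both queries, with the AUR rebuild step being an assumption based on pamac as the helper (adjust to whatever AUR helper you actually use):

```shell
# Explicitly installed native (repo) packages -- these are what
# the reinstall command above covers:
pacman -Qqen

# Foreign packages (typically AUR builds) -- these are NOT
# reinstalled by that command and must be rebuilt separately,
# for example with pamac:
pacman -Qqem | xargs --no-run-if-empty pamac build
```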

That being said, it sounds to me like there is something wrong with your disk/partition.

If it is ext4, I would boot off a live ISO and run fsck on it.

I would probably try that before re-installing all your packages.
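Spelled out, the check from a live ISO might look like this (the device path /dev/sda2 is an assumption; substitute your actual root partition, which you can find with lsblk):

```shell
# From the live session, identify your root partition first
lsblk -f

# Make sure the partition is NOT mounted before checking it --
# running fsck on a mounted filesystem can cause further damage
sudo umount /dev/sda2 2>/dev/null

# Force a full check of the ext4 filesystem, fixing errors interactively
sudo fsck.ext4 -f /dev/sda2

# Or answer "yes" to all repair prompts automatically:
# sudo fsck.ext4 -f -y /dev/sda2
```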


Thank you very much! I first ran fsck on the drive, and after that reinstalled all repo packages using the command you shared.

It seems the problem (sometimes the GUI starts, other times it hangs, and fsck frequently finds uncleared inodes) persists, so I guess I'll back up everything I have not backed up so far and do a clean reinstall :man_shrugging:.

Maybe you should also check the S.M.A.R.T. info for that drive.

smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.4.40-1-MANJARO] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke,

Model Number:                       SAMSUNG MZVLW128HEGR-000L2
Serial Number:                      S35FNX0HC46136
Firmware Version:                   1L1QCXB7
PCI Vendor/Subsystem ID:            0x144d
IEEE OUI Identifier:                0x002538
Total NVM Capacity:                 128.035.676.160 [128 GB]
Unallocated NVM Capacity:           0
Controller ID:                      2
Number of Namespaces:               1
Namespace 1 Size/Capacity:          128.035.676.160 [128 GB]
Namespace 1 Utilization:            127.856.922.624 [127 GB]
Namespace 1 Formatted LBA Size:     512
Namespace 1 IEEE EUI-64:            002538 bc61b2bd9d
Local Time is:                      Sat May 23 11:11:17 2020 CEST
Firmware Updates (0x16):            3 Slots, no Reset required
Optional Admin Commands (0x0017):   Security Format Frmw_DL Self_Test
Optional NVM Commands (0x001f):     Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat
Warning  Comp. Temp. Threshold:     68 Celsius
Critical Comp. Temp. Threshold:     71 Celsius

Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
 0 +     7.60W       -        -    0  0  0  0        0       0
 1 +     6.00W       -        -    1  1  1  1        0       0
 2 +     5.10W       -        -    2  2  2  2        0       0
 3 -   0.0400W       -        -    3  3  3  3      210    1500
 4 -   0.0050W       -        -    4  4  4  4     2200    6000

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         0

SMART overall-health self-assessment test result: FAILED!
- NVM subsystem reliability has been degraded

That does not look very good :thinking:...

Yeah... freeing up some space on the drive might be sufficient, though I'm not sure about that. In any case, you should back up your data ASAP.
If freeing up space is not enough, I would recommend getting another drive and moving your system over to it.

I have backed up all my data, and then did a full clean installation from a live USB.

Everything seems fine now. I hope my SSD will be with us a little bit longer, but I'll keep a close eye.

Thank you very much everyone for your help! :heart_eyes:


Keep in mind not to fill up the drive completely as you did before: SSDs need some free space to wear-level their cells so they don't fail early, which is likely what happened here.
You might also, if you can afford it, benefit in terms of longevity by disabling swap, so as to avoid excessive amounts of writes.
The problem is that your specific model uses TLC V-NAND and is the OEM version of the 960 EVO. TLC V-NAND is a newer and cheaper option, which comes at the cost of the maximum number of writes before a cell fails.
Generally your drive would get (if not OEM) a warranty for about 50 TB written or 3 years of use. You should be able to view the amount of data written with smartctl --all <devicepath>, as well as some more in-depth reliability-related information.
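For an NVMe drive like this one, a sketch of checking wear and disabling swap (the device path /dev/nvme0 is an assumption; yours may differ):

```shell
# Full SMART report, including the NVMe health log
sudo smartctl --all /dev/nvme0

# Just the health attributes; the fields worth watching are
# "Percentage Used" (100 % = rated write endurance reached),
# "Data Units Written" (each unit is 512,000 bytes), and
# "Media and Data Integrity Errors"
sudo smartctl -A /dev/nvme0

# Disabling swap, as suggested above, to cut down on writes:
sudo swapoff -a
# ...then comment out the swap entry in /etc/fstab (or remove the
# swap unit) to make the change permanent across reboots.
```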
