Need Advice on Optimal Partition Scheme with New SSD for Manjaro Linux

Hi everyone,

I’ve recently added a new 1TB SSD to my laptop and I’m looking for some advice on the best way to partition my drives to optimize my setup. Here’s a bit of background:

Current Setup

I have a dual-boot system with Windows 11 and Manjaro Linux installed on a 256 GB NVMe SSD. I’ve just installed an additional SATA SSD inside the laptop (connected internally via the USB port). My current partition scheme on the NVMe SSD is as follows:

Device             Start       End   Sectors  Size Type
/dev/nvme0n1p1      2048    534527    532480  260M EFI System
/dev/nvme0n1p2    534528    567295     32768   16M Microsoft reserved
/dev/nvme0n1p3    567296 271015935 270448640  129G Microsoft basic data
/dev/nvme0n1p4 271015936 272474111   1458176  712M Windows recovery environment
/dev/nvme0n1p5 272476160 377333759 104857600   50G Linux filesystem
/dev/nvme0n1p6 377333760 499009535 121675776   58G Linux filesystem
/dev/nvme0n1p7 499009536 500105215   1095680  535M Windows recovery environment

Goals

  1. Manage Disk Space More Efficiently: My /home and / partitions on the NVMe SSD are filling up quickly. I want to make better use of the new SATA SSD for storing user data and potentially other things.
  2. Optimize Performance and Longevity: I want to ensure that the partition scheme is both performant and doesn’t prematurely wear out any of the SSDs.

Questions

  1. Partitioning the New SSD: What would be the best way to partition the new SATA SSD? Should I create a large /home partition there? Would it make sense to also allocate some space for additional Linux filesystems like /var or /opt?

  2. Reorganizing Existing Partitions: Given that my root and home partitions are filling up, is it better to resize existing partitions or move some of them to the new SSD? If so, what would be the most efficient way to do this?

  3. Best Practices: Are there any best practices I should follow for partitioning to ensure smooth operation and longevity of my SSDs?

I appreciate any advice or suggestions you can provide!

Thanks in advance for your help!

If you’re adding storage, I would recommend moving the whole home folder to the new 1TB SSD: format it as Ext4, copy the home folder properly (make sure to keep ownership and permissions untouched), then configure fstab to mount it at /home.
I would not recommend creating separate partitions for every directory you could split out (/bin, /var, /usr, or whatever).

Keep the 256GB SSD for the whole System, use the new 1TB SSD for the Home.
Keep it simple.
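A minimal sketch of that move, assuming the new SSD shows up as /dev/sda (check with lsblk first; the device name here is hypothetical, and this should be done from a live session or with no user logged in):

```shell
# Format the new SSD as one big ext4 partition, copy /home across,
# then mount it at /home via fstab. /dev/sda1 is an assumed device name.
sudo mkfs.ext4 -L home /dev/sda1
sudo mount /dev/sda1 /mnt
sudo rsync -aAXH /home/ /mnt/     # -a preserves owner/permissions; -AXH adds ACLs, xattrs, hardlinks
uuid=$(lsblk -no UUID /dev/sda1)
fstab_line="UUID=$uuid /home ext4 defaults,noatime 0 2"
echo "$fstab_line" | sudo tee -a /etc/fstab
```

Once the new /home mounts cleanly after a reboot, the old home data on the 256GB SSD can be deleted to free up the root drive.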

But at the end of the day, you’re the one who knows what you need. There is no single best way of doing this; it all depends on what you want to do. For example, I suggested making one whole 1TB partition, but what if you want 500GB of it for Windows? Nobody knows that but you.


Create fewer partitions; use Btrfs.

You can find good information about Btrfs in the wiki.



I can only suggest what I might do with that mess.

1. Windows is already running on the 256G SSD – keep it there.

2. Install a fresh copy of Manjaro to the new SSD, something like:

Mount Point   Size         Type   Comment
/boot/efi     300 MB       $ESP   EFI system partition
/             100–150 GiB  EXT4   Linux system partition
/home         800 GiB      EXT4   Separate home partition
swap          16–32 GiB    ---    Linux swap

3. Copy/move the content of the old /home to the new /home.

4. Clean up the $ESP of the 256G SSD, leaving only the Microsoft bootloader /EFI/Microsoft and /EFI/BOOT.

5. When satisfied that both Manjaro and Windows are booting from the new $ESP, remove Manjaro from the 256G SSD.
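Step 4 can be rehearsed safely on a scratch directory before touching the real ESP (which is typically mounted at /boot/efi; the paths below are only a stand-in):

```shell
# Dry run of the ESP cleanup on a throwaway directory standing in for the old 256G ESP.
esp=$(mktemp -d)
mkdir -p "$esp"/EFI/BOOT "$esp"/EFI/Microsoft "$esp"/EFI/Manjaro
rm -r "$esp/EFI/Manjaro"     # on the real ESP: sudo rm -r /boot/efi/EFI/Manjaro
ls "$esp/EFI"                # only BOOT and Microsoft remain
```

Running `efibootmgr` afterwards on the real system confirms the remaining boot entries still point where expected.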


Or, a variation of that general theme.

Maintaining each OS on its own disk will ultimately help keep them isolated from each other, and less prone to the boot issues that frequently seem to plague those multibooting on a single disk.

Good luck.


/var has often been kept separate on servers, for apps and services whose files grow over time. When it filled up, you could at least still boot and fix it. With smarter, more modern filesystems (and server storage), this is becoming less common even on servers.

/opt is a little dated, and hopefully you shouldn’t have to use it at all. It’s for apps that don’t follow the standard file layout; you throw everything in there, shared libraries and all. It’s kind of like a Flatpak, without the pak, and with no containment.

On a workstation, it does make more sense to keep home and root separate. Home folders can become large on their own, plus a lot of user-level configuration is obviously stored there, whereas everything outside it is your system.

If you snapshot and/or back up these volumes separately, you can also roll back or restore them separately.
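A sketch of what that looks like with btrfs, assuming root and home are subvolumes and a /.snapshots directory exists (that path and the naming scheme are just conventions, not anything Manjaro mandates):

```shell
# Snapshot root and home independently; either can later be rolled back on its own.
# /.snapshots and the date-stamped names are assumed conventions.
sudo btrfs subvolume snapshot -r /     "/.snapshots/root-$(date +%F)"
sudo btrfs subvolume snapshot -r /home "/.snapshots/home-$(date +%F)"
```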

NVMe storage doesn’t care about partitions. If you write to the same partition, same file, same block of data, over and over again, the drive does its best never to write to the same physical cells twice (anytime soon), so writes are already distributed evenly across the drive. You just need to run fstrim every so often, or your write performance diminishes. Windows does this once a week by default.

This is what I also use, with separate root and home volumes that draw from the same free space (but you can easily implement quotas on either).

With btrfs you could take your 256 GB drive, and half your 1 TB drive, for a single 750-ish GB btrfs filesystem.

Now, you generally would avoid this. Just saying, you can. Optimally, with multiple drives, you would have them the same size, and use a type of RAID instead. (Which also comes with btrfs, and is a lot easier to configure than LVM.)
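A sketch of how that pooled layout could be created. The partition names are hypothetical and mkfs.btrfs destroys existing data, so the commands are left commented; the arithmetic shows where the “750-ish” comes from:

```shell
# Pool the Linux partition of the 256 GB drive with a ~500 GB partition
# of the 1 TB drive into one btrfs filesystem (hypothetical device names):
#   sudo mkfs.btrfs -d single /dev/nvme0n1p5 /dev/sda1
# The RAID1 variant with two equal-sized drives would instead be:
#   sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdX1 /dev/sdY1
echo "$(( 256 + 500 )) GB"   # the "750-ish GB" total
```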

Myself, I would also use the 1 TB for Manjaro, and use the free space for snapshots and whatever else I want. Windows only knows how to Windows. (And that is just me, and I don’t dual boot, but my Windows install can still see parts of my root FS, for like a Steam library for example.)

I have relied on rollbacks more than I care to admit, or just loved getting an old lost configuration file back from an old snapshot. But the snapshots obviously need more space; a single massive Manjaro update will take a few hundred MB, for example. And there’s no right answer here, it depends on what you’re doing.

btrfs truly does make things like this easier, and even more so down the road, if and when you want to rework things.


Leave 10–20% unpartitioned; that works best against wear. Also, don’t fill the SSD to the brim: the emptier the SSD, the faster it is. That also depends on the SSD model; some devices show a big performance penalty when the fill rate goes above 80–90%, and the sweet spot is around 60%.


Any info about that claim?

The keyword is: over-provisioning.


This is not a user-side thing. It is done at the factory; it is internal to the SSD, programmed into its firmware. You have misunderstood that completely.
With the manufacturer’s over-provisioning, a 1TB disk will not have 1TB of usable space; it will be more like 800GB to 960GB (typically). You buy a 128GB disk and end up with 100 to 120GB available to use. Some manufacturers’ data on over-provisioning:
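The effect of those ratios is easy to work out; a quick calculation with illustrative figures (the percentages are examples, not any specific vendor’s defaults):

```shell
# Usable space left at a few typical over-provisioning ratios,
# on a nominal 1000 GB drive (illustrative numbers only).
for op in 7 12 28; do
    usable=$(awk -v op="$op" 'BEGIN { printf "%.0f", 1000 * (1 - op/100) }')
    echo "${op}% OP on a 1000 GB drive -> ${usable} GB usable"
done
```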


Samsung’s Magician also does this for its users.

Yes, some manufacturers provide tools to reshape the firmware defaults, but that is beside the point (and you didn’t talk about rewriting the firmware, but about leaving unallocated space). To name a few from a quick Google: Kingston SSD Manager, Samsung Magician, Crucial Storage Executive, and Intel Memory and Storage Tool all offer that option, to give the controller more free space to do its magic.

Also, depending on the SSD’s use case, as you can see in the above link, manufacturers can go as far as almost 30% over-provisioning. It would make no sense for a “normal user” to do that. The default over-provisioning of a consumer-grade SSD is perfectly fine; your SSD will not die because you didn’t leave half of your disk space “unused” (which, with over-provisioning, it isn’t anyway).


From my understanding this has nothing to do with the firmware in general… though some SSDs do have this integrated, yeah.

But if you do it manually, it’s just unallocated space, TRIM simply works better in that area, and that’s all.

At least this is my understanding.

That’s why I told him to leave around 10–20% of the space free… and since a lot of SSDs are faster when they are not filled to the brim, you won’t miss that space anyway.

I don’t believe ANY SSD nowadays lacks over-provisioning. It has been standard practically forever. You could probably find examples of SSDs without over-provisioning if you looked specifically for that, but I’m confident all brand-name consumer SSDs nowadays have over-provisioning; it is part of how they work.

But there is no need; this is probably already set in the firmware. If he tells us which model it is, we can check. The only use case where I would see the need for spare unallocated space would be an SSD filled to the brim. The firmware will not struggle to manage operations on it, because there will be plenty of “unallocated space” to work with. To me it is simply a waste; just don’t fill your SSD to 100% (because indeed, when full or near full, performance decreases, that is a fact).

//EDIT: by the way, you’re talking about 100GB to 200GB of unallocated space for something that was needed in the early days of SSDs. This is crazy :sweat_smile:


If he bought a SanDisk, then there is probably nothing to improve at all :wink:

What kind of USB port is it, what spec. exactly? Is the SSD powered via the USB port or do you have a separate power connection?

Why is that? This space should be free anyway… it’s not like it would hurt anyone.

I’m actually using my new 2TB NVMe this way (Corsair MP600 Pro LPX) :stuck_out_tongue:

And I don’t see any reason to change this. The space should be free anyway; the benefit is performance!

This is crazy because you buy a 1TB disk, which in reality will probably give the user about 960GB of free space, and then on top of that you deliberately don’t use 200GB, ending up with 760GB of usable space. And you don’t find it crazy when a normal user gets no real-life benefit from it (we’re not mining Chia here)? Anyway, it’s beating a dead horse at this point; I’m done with that, and with you.

//EDIT: and if you want to cut 400GB on your side for an imaginary benefit, suit yourself :slight_smile: but what I was pointing out is just that it is unnecessary for normal operation. In the end, if you don’t write any data to the SSD, you’ll get full performance forever.

1TB ≈ 931.3 GiB… and of course the filesystem takes a little of it.
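That figure is just the decimal-to-binary unit conversion, which can be checked in one line:

```shell
# A marketing terabyte (10^12 bytes) expressed in binary gibibytes (2^30 bytes).
awk 'BEGIN { printf "1 TB = %.1f GiB\n", 1e12 / (1024^3) }'
```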


@Kobold

The manufacturers over-provision; we don’t. Yes, it’ll slow down as it gets full. If you want to leave empty space for nothing, then have at it.

Just never fill up a filesystem completely.


I never said it’s required.

And now we’re getting personal… out of nowhere?

What a ridiculous statement…

You’re right, I’ll correct that.