What exactly are you asking?
Do you just want to transfer the files EXACTLY (a file-clone) to another partition that is not using ZFS?
If so, create the new filesystem and use rsync (with correct attributes so EVERYTHING gets copied).
btrfs has mostly the same features as zfs, and it’s part of the kernel, so you don’t need any special drivers. It supports mirroring and striping, albeit not RAID 5 or 6.
Thanks, will try. I haven’t found any info in the Manjaro docs on how to configure raid1. Hopefully Btrfs - ArchWiki will be appropriate.
My question was about fs to use, and btrfs is already advised.
Strange attitude…
I see no question about what file system to use.
I see this question though:
Also, I suggest you close the thread since you got your questions answered.
But I back off then if you feel I only offend you with my question somehow.
Probably sometimes it is a good idea to start with a thread title.
Ok, at the moment /dev/sdb and /dev/sdc are used with ZFS. My steps are:

- connect /dev/sdd and /dev/sde
- partition them with fdisk
- mkfs.btrfs -d single -m raid1 /dev/sdd1 /dev/sde1

At this point the new disks will get the /dev/sd* names of the old ones. Will it affect the raid1 config in some way?
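Those steps could look roughly like this. This is only a sketch: the device names are the ones from the post, the mount point is made up, and these commands destroy any data on the target disks, so double-check before running. Note also that `-d single` leaves the *data* unmirrored; for mirrored data you would want `-d raid1` as well.

```shell
# Create one whole-disk Linux partition on each new drive.
# (sfdisk is the scriptable equivalent of doing this in fdisk.)
echo ',,L' | sudo sfdisk /dev/sdd
echo ',,L' | sudo sfdisk /dev/sde

# Make the two-device btrfs filesystem: data profile 'single',
# metadata mirrored (raid1), as in the command above.
sudo mkfs.btrfs -d single -m raid1 /dev/sdd1 /dev/sde1

# Mount by UUID so later /dev/sd* renames don't matter.
# (/mnt/pool is a hypothetical mount point.)
uuid=$(sudo blkid -s UUID -o value /dev/sdd1)
sudo mount UUID="$uuid" /mnt/pool
```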
It is always best to use UUIDs or PARTUUIDs instead of block device names.
UUIDs - do you mean /etc/fstab?
You can use them everywhere.
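For example, an /etc/fstab entry can reference the filesystem UUID instead of a /dev/sdX name and will keep working even if the kernel reshuffles the device names. (The UUID and mount point below are placeholders; get the real UUID with `blkid` or from /dev/disk/by-uuid/, as shown in the listing below.)

```
# /etc/fstab fragment - mount by UUID, immune to /dev/sd* renaming
UUID=889d4e36-e279-4b82-838a-a5c1d4964e1a  /mnt/pool  btrfs  defaults,noatime  0  0
```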
[nx-74205:/dev/pts/3][/home/aragorn]
[aragorn] > ls -l /dev/disk/by-uuid/
total 0
lrwxrwxrwx 1 root root 10 Nov 6 15:23 1aedac7f-403b-4d85-9643-0d622be75453 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Nov 6 15:23 3f1852bc-be95-4e42-8d46-25bafac4ce79 -> ../../sda3
lrwxrwxrwx 1 root root 10 Nov 6 15:23 6d475769-c92c-4935-b2af-a53d29dcd898 -> ../../sda7
lrwxrwxrwx 1 root root 10 Nov 6 15:23 6e756bf9-2d7c-401a-a0f3-bb4ea3b824a9 -> ../../sda6
lrwxrwxrwx 1 root root 11 Nov 6 15:23 6fbd221f-5d46-40f0-9fbf-4479d73874b4 -> ../../sda10
lrwxrwxrwx 1 root root 10 Nov 6 15:23 8462b14d-4a97-4383-a55d-61fcd136aeb9 -> ../../sda5
lrwxrwxrwx 1 root root 10 Nov 6 15:23 889d4e36-e279-4b82-838a-a5c1d4964e1a -> ../../sdb1
lrwxrwxrwx 1 root root 10 Nov 6 15:23 8de45432-efe5-4d76-beb9-fcb3247a063e -> ../../sda4
lrwxrwxrwx 1 root root 10 Nov 6 15:23 924f11d8-4bec-49a6-852e-033ce5e3d6a3 -> ../../sda2
lrwxrwxrwx 1 root root 11 Nov 6 15:23 bdf6a9f6-a2a7-47a0-ba6f-7bbf71a01a95 -> ../../sda11
lrwxrwxrwx 1 root root 10 Nov 6 15:23 CEF6-EF5C -> ../../sda1
lrwxrwxrwx 1 root root 10 Nov 6 15:23 db2385c9-2715-408e-96b4-52086a0292fe -> ../../sda9
lrwxrwxrwx 1 root root 10 Nov 6 15:23 e3fe76c9-8dea-4a27-8f47-839bb464f022 -> ../../sda8
I’m currently working on pre-compiled ZFS modules for all major kernel series we support. There was a problem with how to compile the modules, so let’s see how that goes.
My motherboard has only four SATA sockets. The first one is used for the boot/system SSD, the 2nd and 3rd for the ZFS mirror, and I need two to configure btrfs raid1. The question: is it safe for the data to disconnect one of the ZFS drives to free a SATA slot?
Considering that it’s a mirror, it should be. But I’m not too well-versed on ZFS.
That’s not quite accurate. My current board has 6 SATA ports, for example, and the board previous to that had 8 SATA ports. There was a time when 4 SATA ports were more or less standard, but that was many years ago.
That said, it’s not unheard of to find the occasional low budget board with only 4 SATA ports, but they seem few, and far between.
Sorry, I’m sure you know my hardware much better than I do. But as for the motherboard… well, you can see for yourself: it is an MSI H110M Pro-D. It isn’t too dated (6 Gb/s SATA) and is sufficient for a NAS.
But what about ZFS drive detaching? Is it safe?
Apologies for misunderstanding your comment:
It seemed to be a clear generalization, rather than referring to your specific hardware. Perhaps I need to take night courses in English.
It should be, given that it’s only a mirror, and that RAID is designed with hardware failure in mind.
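A hedged sketch of how that might be done on the ZFS side, rather than just pulling the cable (the pool name `tank` is a placeholder; check your actual pool name with `zpool list` first):

```shell
# Check pool health first; both sides of the mirror should be ONLINE.
zpool status tank            # 'tank' is a hypothetical pool name

# Take one side of the mirror offline before disconnecting the disk;
# the pool keeps running in a DEGRADED state on the remaining disk.
sudo zpool offline tank /dev/sdc

# Later, if the disk is reconnected, bring it back and let it resilver:
sudo zpool online tank /dev/sdc
```

Offlining first ensures ZFS stops writing to that disk cleanly, instead of treating the disconnect as a sudden failure.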