I’m not. I provided troubleshooting steps for a common NTFS issue; then you interjected into the thread with extra commands and redundant steps involving uid and gid. We don’t even know whether these are internal or external drives, so why assume they were all internal drives used with hibernation? A more likely case is that they are three external drives formatted as NTFS back when the poster used a Windows PC. (Or they’re all internal in a desktop PC. Either way, we’re still waiting on the OP to clarify.)
ntfs-3g does not need to be explicitly invoked to mount an NTFS file-system with read-write permissions.
Simply using the mount command is enough:
sudo mount /dev/nvme0n1p4 /mnt/ntfs
/dev/nvme0n1p4 on /mnt/ntfs type fuseblk (rw,nosuid,nodev,user_id=0,group_id=0,allow_other,blksize=4096)
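If you want to double-check that the mount really is writable, a quick sketch (the device and mount point are from my example above; substitute your own):

```shell
# Confirm the kernel mounted the partition read-write, then prove it
# with a throwaway file. /dev/nvme0n1p4 and /mnt/ntfs are examples.
findmnt -no OPTIONS /mnt/ntfs                  # options should begin with "rw,"
sudo touch /mnt/ntfs/.write-test \
  && sudo rm /mnt/ntfs/.write-test \
  && echo "read-write confirmed"
```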
I have full read-write access to my NTFS partition.
Using Dolphin file manager (as the OP did), simply clicking on the NTFS file-system yields full read-write access as well.
However, if the NTFS file-system is marked as “dirty”, it doesn’t matter what options you include in your mount command, and it doesn’t matter if you manually invoke ntfs-3g. It will still be mounted as read-only in order to protect the NTFS file-system.
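You can check for the dirty flag without changing anything on the volume. A sketch, assuming the partition is /dev/sdb1 (a placeholder):

```shell
# Dry run: ntfsfix with --no-action reports the volume state (including
# the dirty flag) but fixes nothing. Run against your NTFS partition.
sudo ntfsfix --no-action /dev/sdb1
# The kernel log usually also says why a volume was forced read-only.
dmesg | grep -i ntfs
```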
Here is a similar thread for reference, in which ntfsfix was all that was needed to resolve the read-only issue:
The simplest solution here is to mount the disks as read-only and copy the content to another drive (USB thumb drive, USB hard disk, or whatever). This way you don’t have to deal with a dirty NTFS filesystem.
But I agree with others that it is better to let Windows fix the filesystem first. Since you don’t have access to Windows any more, though, this is a viable option.
I missed the post where the OP did those steps and replied with the error message explaining why the file-systems were mounting as read-only, and whether or not ntfsfix was enough to resolve the issue (as it has been for other users). Can you link me to the OP’s reply where they followed through with this, so that we can quickly rule it out and move on to the next steps? Thank you.
Even with ntfs-3g, read-only mounts can occur, as has happened to my drives (SATA and USB) when I have used Windows (no fast boot, hibernate, or sleep, and not dual-boot) and then Manjaro. All drives became read-only. It doesn’t happen every time, but sometimes. The solution was to reboot into Windows and then back into Manjaro.
My Windows is on a separate SATA HDD, and is not even in GRUB, but the problem can still occur with other SATA and USB drives after using Windows.
I think that @Mayanktaker could install a Windows 10 ISO in a virtual machine, turn off fast boot, and mount the drives in it. Shut down the VM and try again. Alternatively, try the drives in a trusted friend’s or relative’s Windows machine.
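For reference, inside Windows (real or VM), Fast Startup rides on hibernation, so disabling hibernation from an elevated command prompt turns it off too. A sketch (the drive letter is an example):

```shell
:: Run these in an elevated Windows command prompt, not in Linux.
powercfg /h off     :: disables hibernation, which also disables Fast Startup
chkdsk D: /f        :: then let Windows repair each NTFS volume (D: is an example)
```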
Can’t reply to everyone, so writing this:
I have one more SSD and I installed Windows 10 on it. Whenever I install Manjaro or Windows, I unplug my other hard drives for safety; I do this every time I install an OS. So I tried booting into Windows and then Manjaro, but no luck. I fear those commands could break my hard disks, and I fear data loss. I can’t lose data. 3 TB of data. I can’t move it from those drives to another place because I don’t have more storage.
I tried with Ubuntu live and Manjaro live, no luck. In my main Manjaro, no luck.
Ran a few commands you guys suggested, but no luck.
This is the only thing stopping me from using Manjaro full time.
For now I have to use Windows again (sadly).
Still searching for a proper solution, because I have 6 computers that I want to switch over completely. But looking at current conditions, I think Windows is ready to eat if peeled properly.
Update: the sudo ntfsfix command did the trick and fixed the errors. I ran it on all three partitions and now I have full read/write access to my HDDs.
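For anyone landing on this thread later, the fix looked roughly like this (sdb1/sdc1/sdd1 are placeholders for the three data partitions; note that ntfsfix clears the dirty flag and basic errors, it is not a full chkdsk replacement):

```shell
# Run against each NTFS partition that mounts read-only.
for part in /dev/sdb1 /dev/sdc1 /dev/sdd1; do
    sudo umount "$part" 2>/dev/null   # ntfsfix needs the partition unmounted
    sudo ntfsfix "$part"              # clears the dirty flag and basic errors
done
```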
Thank you guys for your time, your help and your kindness.
I really appreciate it. Now I can enjoy Manjaro. <3
However, be diligent and cautious with NTFS. You might not be entirely out of the woods yet; as @bikehunter666 implied (I think?), long-term your best option might be to copy your data to another drive formatted with a more “Linux-friendly” file-system, such as ext4 or XFS. This way you won’t have to worry about using a Microsoft file-system in a pure Linux ecosystem, nor rely on booting into Windows 10 to fix deeper NTFS issues, should they ever occur.
Just remember the risks involved with moving large amounts of data without any backups or redundancy. The choice is yours! Good luck, @Mayanktaker!
Keep in mind that when you copy everything over to the new 4 TB hard drive and you have no backups, if (when) the drive fails, you lose everything.
If the copy is 100% successful to the new Toshiba drive, you can then convert the existing three drives from NTFS to ext4, and use them to make backups of your most important data. Always have backups, and even backups of backups if possible.
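A sketch of that convert-then-backup idea (device names and paths are examples, and mkfs destroys everything on the target partition, so only run it after the copy to the new Toshiba drive is verified):

```shell
# WARNING: mkfs.ext4 wipes the partition. Verify the copy first!
sudo mkfs.ext4 -L backup1 /dev/sdb1       # /dev/sdb1 is a placeholder
sudo mkdir -p /mnt/backup1
sudo mount /dev/sdb1 /mnt/backup1
# Copy the most important data onto the freshly formatted backup drive.
sudo rsync -aHAX --info=progress2 /mnt/data/ /mnt/backup1/
```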
I know Backblaze B2 charges a flat “price-per-GB” of storage, so you’re not limited to a ceiling of 100 GB, 200 GB, or 2 TB. Technically, you could sync the entirety of your 4 TB drive, and it doesn’t require a “higher level” tier. Another plus is that they fully support the open-source rclone (with their B2 service only, actually), which means you can provide your own encryption that even they do not have the key to decrypt.
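A sketch of that encrypted setup using rclone’s crypt backend on top of B2 (“b2remote” and “secret” are remote names I made up; you create them interactively with rclone config, first a b2 remote and then a crypt remote wrapping it):

```shell
# After `rclone config` has created a b2 remote and a crypt remote
# named "secret" wrapping it, syncing uploads only encrypted data.
rclone sync /mnt/data secret:backups --progress
rclone ls secret:backups    # names are decrypted locally; B2 never sees them
```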
Anyways, I’ll leave it at that, since it’s going off topic.
UPDATE: Correction. Only their B2 service supports rclone and Linux. The reason is that data hoarders (some with data exceeding 24 TB) usually run Linux/Unix servers, so such users would cripple their flat-rate Personal backup plan.