What you see could be the effect of the kernel cache.
If you want to draw any conclusion from a sudden drop, it could be the stick’s internal firmware doing error correction, which would point to possibly defective memory cells.
What you should look at is the final numbers: the time taken to write the data plus the time taken to unmount the device, which includes flushing what remains in the buffer.
What happens in between is skewed by the cache and is not a reliable number, as it will differ even when running the same test twice.
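Something along these lines gives a number the cache cannot inflate (the source file and the /mnt/usb mount point are just placeholders):

# time the copy together with the final flush, so the buffered remainder is counted
time ( cp /path/to/testfile /mnt/usb/ && sync )
# or include the unmount itself, which also flushes whatever is still pending
time ( cp /path/to/testfile /mnt/usb/ && sudo umount /mnt/usb )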
I tried to rsync my data from my internal NVMe (ext4) to my external 64 GB USB stick (exFAT) via USB 3.0.
I created a shell script for it.
You need to replace <SOURCE> and <TARGET> with your own file paths.
#!/bin/bash
echo "===Start==="
echo "RAM cache:"
vmstat    # memory/cache snapshot before the transfers
for i in {1..8}; do
    printf "\n===%s.repeat===\n" "$i"
    echo "Rsync your data:"
    rsync --progress "<SOURCE>" "<TARGET>/$i"    # copy the same file to a new name on each pass
done
echo ""
echo "===End==="
echo "RAM cache:"
vmstat    # memory/cache snapshot after the transfers
Result:
===Start===
RAM cache:
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
1 0 0 24244052 61712 2351780 0 0 17946 1531 10831 7 2 2 95 1 0
===1.repeat===
Rsync your data:
video_test_2GB.mkv
2.108.896.285 100% 518,21MB/s 0:00:03 (xfr#1, to-chk=0/1)
===2.repeat===
Rsync your data:
video_test_2GB.mkv
2.108.896.285 100% 675,34MB/s 0:00:02 (xfr#1, to-chk=0/1)
===3.repeat===
Rsync your data:
video_test_2GB.mkv
2.108.896.285 100% 138,89MB/s 0:00:14 (xfr#1, to-chk=0/1)
===4.repeat===
Rsync your data:
video_test_2GB.mkv
2.108.896.285 100% 55,68MB/s 0:00:36 (xfr#1, to-chk=0/1)
===5.repeat===
Rsync your data:
video_test_2GB.mkv
2.108.896.285 100% 57,13MB/s 0:00:35 (xfr#1, to-chk=0/1)
===6.repeat===
Rsync your data:
video_test_2GB.mkv
2.108.896.285 100% 24,94MB/s 0:01:20 (xfr#1, to-chk=0/1)
===7.repeat===
Rsync your data:
video_test_2GB.mkv
2.108.896.285 100% 24,53MB/s 0:01:22 (xfr#1, to-chk=0/1)
===8.repeat===
Rsync your data:
video_test_2GB.mkv
2.108.896.285 100% 26,23MB/s 0:01:16 (xfr#1, to-chk=0/1)
===End===
RAM cache:
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 1 256 430224 36460 25875708 0 0 7662 34079 4241 3 1 1 88 10 0
RAM showed little free space left after the transfer. I now know that the transfer speed depends on the temperature of my USB stick: the hotter it gets, the more dramatically the speed drops.
I would suggest a workaround: after transferring about 10 GB, the USB stick needs a pause of 1 minute because of the overheating. After the pause it gets faster again, roughly as in the sketch below.
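A rough sketch of that workaround (the 10 GB chunk size and the 1 minute pause are just the values from my suggestion, and the paths are placeholders):

copied=0
for f in /path/to/source/*; do
    rsync --progress "$f" /mnt/usb/
    copied=$((copied + $(stat -c %s "$f")))      # bytes written so far
    if [ "$copied" -ge $((10 * 1024 * 1024 * 1024)) ]; then
        sync         # flush what is still buffered before pausing
        sleep 60     # give the stick a minute to cool down
        copied=0
    fi
done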
Edit:// My test relied on the cached transfer, which is a trick, so the test is not fair.
i already pointed out that a lot of usb storage devices really suffer from thermal problems, the resulting transfer errors and the additional error correction that has to be done with more syncing, but the TO declined that explanation. there is one pro-argument from the TO: he reports no issues with MS-Windows.
there is in fact a difference in how windows and linux handle large file transfers.
this is an interesting thread. i was googling for a long time. could it be that one main problem is the synced and cached transfer to the usb device? i’ve got no usb mass storage to test this with, but could anyone try mounting the usb device in async mode with the flush option onto a folder, copy a large amount of data into it, and check whether the speed still collapses?
mount -o async,flush -t ntfs /dev/sdb1 ~/somewhere
After my tests, you are right: the cached transfer is a trick.
sudo mount -t exfat -o async,uid=1000,gid=1000 /dev/sda1 /mnt/
With cache:
Result:
===Start===
RAM cache:
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 0 0 18555080 86648 6537648 0 0 3640 6544 4136 3 1 1 97 2 0
===1.repeat===
Rsync your data:
video_test_2GB.mkv
2.108.896.285 100% 674,21MB/s 0:00:02 (xfr#1, to-chk=0/1)
===End===
RAM cache:
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
1 0 0 16010948 86648 9024236 0 0 3632 6530 4149 3 1 1 97 2 0
Without cache:
===Start===
RAM cache:
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
1 0 0 18078912 94088 6702120 0 0 2958 6543 4306 3 1 1 97 1 0
===1.repeat===
Rsync your data:
video_test_2GB.mkv
2.108.896.285 100% 37,40MB/s 0:00:53 (xfr#1, to-chk=0/1)
===End===
RAM cache:
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
1 0 0 15544208 94204 9188888 0 0 2870 7507 4267 3 1 1 97 2 0
well, i found an old usb storage device with ntfs on it and did the following:
first i mounted the device manually in async mode and copied to it for ~800 seconds.
then i unmounted the device, mounted it via the system’s ordinary mount with standard settings as a removable/external usb device, and copied a new file to it for another ~800 seconds.
the difference is indeed significant: 6,9 GB against 4,8 GB in standard mode.
note: this is an old usb storage device with no modern caching electronics inside, therefore the difference must come down simply to the syncing.
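roughly, the comparison looks like this (device name, mount point and file names are placeholders, not the exact commands used):

sudo mount -o async -t ntfs /dev/sdX1 /mnt/test
timeout 800 dd if=/dev/zero of=/mnt/test/run1.bin bs=1M status=progress   # write for ~800 seconds
sync
sudo umount /mnt/test
# then mount the stick the ordinary way (the desktop's default removable-media mount)
# and run the same 800-second dd again, comparing how many bytes each run managed to write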
i was testing a little bit more. the TO reported that changing the vm.dirty settings had no effect; that is completely different from my tests. i redid the test i already posted, same hardware, nothing changed on it.
i edited the vm.dirty settings using the shell-script from that proof-of-concept (“I havent rechecked the values, etc on this proof-of-concept in a little while - from back when garuda first started lobbing these configs with a hard value on all systems”) and accepted the given recommended values as they were.
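for reference, the settings in question are the vm.dirty_* sysctls. the values below are only an illustration of the kind of change, not the recommended values from that script:

# make the kernel start writeback early and cap the amount of dirty cache
sudo sysctl vm.dirty_background_bytes=16777216   # 16 MiB
sudo sysctl vm.dirty_bytes=50331648              # 48 MiB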
redoing the test, there was no drop in file-transfer bandwidth while copying, and the result:
that adds up to a bandwidth of 31,7 MB/s, while the bandwidth of a single file copy is limited to ~15 MB/s and does not exceed this. is there anything that limits a single file copy to half of the maximum bandwidth?
Hm. I couldn’t say for sure - that’s for you grubby testers to find out.
Of course my first guess is just good utilization of the cache - with several operations at once each one can’t achieve quite the speed of a single operation, but the aggregate is higher (each stream is not quite as fast, yet more than the single-file speed simply divided by the number of operations).
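An easy way to check would be to time one copy on its own and then two in parallel (file names and the mount point are placeholders):

time ( cp big1.bin /mnt/usb/ ; sync )
time ( cp big1.bin /mnt/usb/a.bin & cp big2.bin /mnt/usb/b.bin & wait ; sync )
# if the second run finishes in well under twice the time of the first,
# the aggregate bandwidth really is higher than a single stream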
well, from what i found out so far:
this is a USB-2 device. the bandwidth is 480 Mbit/s, therefore the theoretical bandwidth would be about 50 MByte/s, BUT every USB connector on the PC is shared with some others (2-4 connections share one controller). this and the overhead of the usb protocol lead to a bandwidth of ~30-36 MByte/s @ USB2. all in all, the drive appears to be working correctly and produces an output of 32 MByte/s. the only thing i can’t understand so far is that the bandwidth of a single-file copy is limited to ~1/2 of the max speed. but it is constant and i can reproduce it 100% on my system.
i also tried to squeeze the last few bytes out with ionice (tried c1, c2, c3…) but that gained only a little extra.
Is this a serious question? Well, it can limit them, that’s for sure.
Why is it that everyone misses the fact that it works fine under Windows and works fine using dd on Linux (though a bit slower)? Are Windows and dd magically influencing the hardware? Are we supposed to just live with this because we are used to it? 2 hours vs almost 2 days is a big difference when transferring files.
The flash drive I am now writing to is the last USB device on the list. I am booted from a live ISO, so there is no swap that I am aware of, as the report shows.
I reran the dd test on the 3.0 port, with the drive now registering correctly. All other USB devices are on a different port/hub of the motherboard.
Still using the exFAT filesystem.
bash test.sh
Executing: time dd if=/dev/urandom of=/mnt/test.img bs=1G count=100 status=progress
107374182400 bytes (107 GB, 100 GiB) copied, 2391 s, 44.9 MB/s
100+0 records in
100+0 records out
107374182400 bytes (107 GB, 100 GiB) copied, 2391.01 s, 44.9 MB/s
This test is 5 MB/s faster, but otherwise the same results. I would be okay with this, except that I get higher speeds under Windows: benchmarks above 60 MB/s and real file transfers between 25 and 50 MB/s with mixed small files, whereas on Linux I get less than 5 MB/s when actually transferring anything real, yet a maximum of 42 MB/s in a benchmark.
Rerun of the test using ext4
bash test.sh
Executing: time dd if=/dev/urandom of=/mnt/test.img bs=1G count=100 status=progress
107374182400 bytes (107 GB, 100 GiB) copied, 4331 s, 24.8 MB/s
100+0 records in
100+0 records out
107374182400 bytes (107 GB, 100 GiB) copied, 4330.76 s, 24.8 MB/s
real 72m10.820s
user 0m0.009s
sys 8m18.656s
this is a full half hour longer, at 1hr 12 min.
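One thing worth noting about these dd runs: with the default flags, part of the write lands in the page cache first. Something like the variants below takes the cache out of the measurement (the path and sizes are placeholders; /dev/zero is used here just to keep the source side fast):

dd if=/dev/zero of=/mnt/test.img bs=1M count=10240 oflag=direct status=progress
# or keep the cache but make dd include the final flush in the reported figure
dd if=/dev/zero of=/mnt/test.img bs=1M count=10240 conv=fsync status=progress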
I ran the test on the external USB HDD (NTFS), but gave up as it dropped to <5 MB/s after a few minutes.
I copied that data while it was running to show the low numbers; they consistently stayed that low and occasionally hit zero while updating on the screen. It wasn’t zero the whole time, just incredibly low… KB/s to <4 MB/s most of the time. I didn’t realize it said zero when I posted it. This is what it does: either A. it starts out that slow, or B. it goes fast for a while (20 min?) before dropping. I know it is an actual problem, or “issue” lol, because it obviously works fine sometimes for that first 20 minutes. During that initial transfer the speeds are closer to reality. Even when it says “0” beside actual write, the drive is still blinking.
That’s awesome that it worked in your case! But strangely it did not in mine. I can hit about 42 MB/s consistently with my 3.0 USB flash drive using dd, but not when transferring actual data. I have yet to see it transfer any large amount (greater than 80 GB, often much less) at proper speeds.
-----It’s been a day or two… sorry.
I have attached the 128 GB 3.0 USB drive to a 3.0 port and booted Windows 7.
Formatting the 128 GB drive back to exFAT took 46 min.
Transferred 101 GB of payload data (from several large folders containing 68465 files, or 1.47 MB a piece on average) in 66 minutes (the drive stopped blinking immediately; write cache is disabled for this drive under Device Manager), for a total of 25.4 MB/s. This is on par with my earlier tests on Windows, except I am transferring smaller files mixed in. The original post about the 262 GB was mostly larger contiguous files, which should bench higher, and does in fact transfer faster on Windows as originally stated, especially to a USB 3.0 hard drive.
The same USB flash drive, formatted a second time and benchmarked using “USB Flash Benchmark” on Windows, reports this data for the 3.0 flash drive:
The external 3.0 hard drive benches even faster, with an average sequential read/write of 95 MB/s and random read/write of about 55 MB/s.
Someone please post an eyeballed stopwatch test of transferring a giant folder over 70 GB (a Steam games folder would suffice) to a USB 3.0 drive or non-SSD hard drive for a real-world test and show how long it takes. Sorry for the long post.
No, I didn’t miss it. But unfortunately, something that a lot of people either don’t know or, sometimes conveniently, forget is that Windows is not Linux. Windows does things very differently. There’s a lot more done to make it “idiot proof”, and in doing so it causes mindsets to become “idiotized” (is that even a word? Apparently so). Linux is a much more hands-on, no-nonsense and real operating system.
But I’m not here to go on about that. Nor am I here to insult anyone or anything. (Even if it doesn’t look that way.)
Yes and yes, but the limits are software, not hardware. And I’m guessing that if that were the case, there’d be mention of it somewhere in the logs.
But, as I said, I’m not here to insult anyone, so I’ll be off now. TYVM.
Tip:
When posting terminal output, copy the output and paste it here, wrapped in three (3) backticks, before AND after the pasted text. Like this:
```
pasted text
```
Or three (3) tilde signs, like this:
~~~
pasted text
~~~
This will just cause it to be rendered like this:
Sed
sollicitudin dolor
eget nisl elit id
condimentum
arcu erat varius
cursus sem quis eros.
Instead of like this:
Sed sollicitudin dolor eget nisl elit id condimentum arcu erat varius cursus sem quis eros.
Alternatively, paste the text you wish to format as terminal output, select all of the pasted text, and click the </> button on the toolbar. This will indent the whole pasted section with one TAB, causing it to render the same way as described above, thereby increasing legibility and making it easier for those trying to provide assistance.
For more information, please see:
Additionally
If your language isn’t English, please prepend any and all terminal commands with LC_ALL=C. For example:
LC_ALL=C bluetoothctl
This will just cause the terminal output to be in English, making it easier to understand and debug.
No matter how you turn the numbers - the transfer rate is device dependent.
Real-world usage reaches neither the specification nor the industry tests.
The specification defines what is possible.
Industry tests use hardware designed to come as close to the specification as possible.
The results you get in the real world depend - roughly speaking - on how much money you paid for the device - the good old quantity vs. quality concern.
Most vendors will construct their devices so they perform best where the largest demand is - and that is usually Windows.
There is a package in the repo, f3, which can check your stick - both storage and performance.
$ sudo pacman -Syu f3
$ su -l root
# mount /dev/sdxY /mnt
# f3write /mnt
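f3write only writes the test files; the matching read pass, which verifies them and exposes fake-capacity sticks, is:
# f3read /mnt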
All I’m saying is that I consistently get 50% better speeds (60 vs 40 MB/s) under Windows vs Linux, which would mean the underpinnings either have a bug or are not properly configured/written. 50% is only the benchmark difference. I wouldn’t call that idiot proofing so much as proper engineering. I understand Linux doesn’t have that luxury when it comes to reverse engineering or writing from scratch. It’s amazing it works as well as it does considering the lack of support from the industry.
But I’m getting 2 hours to transfer 262 GB under Windows, versus this on Linux: rsync has been running since about 2 AM here, so 9 hours, and it confirms it has only transferred 43 GB out of the 262 total. That means at worst it would have to run for about 54 hours on Linux, which is unacceptable. That is not a 50% difference; it is a 2600% difference… 2 days. Worse, it apparently doesn’t affect everyone, making it hard to track down.
Me neither, and I appreciate your input and all the tips on how to use the forums.
I ran f3 on the first day, sorry I forgot to mention it. I thought it may have been a knockoff flash drive. It checked out okay. I did rerun f3write for you though.
Average writing speed: 39.81 MB/s
Thanks, but no thanks on the article about 3.0’s theoretical limits, or theoretical limits in general. I know what those are, which is why I stick to real-world examples and benchmarks. This flash stick underperforms on benchmarks by half, the hard drive performs worse, and both slow to a crawl, to the point of 2 hours vs 2 days. I haven’t posted the results from every machine here, but it affects all of them so far. I have used 2 different external hard drives (the third was SMR, yikes!) and 6 flash drives.
Interestingly, I have a SanDisk Cruzer Blade 16 GB 2.0 from a few years back that performs pretty normally, on par with Olli’s tests. But then again, it’s only 16 GB, so maybe the bug affects it too and it just doesn’t run long enough to show it because of the smaller capacity. It bounces between 5-10 MB/s, which is pretty good.
Mine works fine as is, so it’s not a bug. And I haven’t changed anything regarding USB or its transfers. But then, I might be missing something, since I don’t use external storage a lot. I used to, but that’s in the past, before I switched to Linux full-time. Now I use the interwebz.
I think you’re misunderstanding things. AFAIK Linux engineers don’t, as you call it (or as it might actually be), reverse engineer everything. The kernel is full of drivers, both free and proprietary, as well as some binary blobs, developed by the manufacturers themselves. That’s how I understand it, anyway. I know the Nouveau Nvidia drivers are free, open source and reverse engineered, but they don’t work nearly as well as the proprietary ones, and that is a very good example of reverse engineering.
This is, partially at least, true. Linux doesn’t have as big a community as Windows. However, it’s growing. And I don’t know if you know this, but IIRC one of the biggest contributors to the kernel is a developer paid by none other than Micro$oft itself.*
My reply was not aimed directly at you or your problem. And I’m not accusing anyone of being an idiot; I just mentioned that Micro$oft tries to make Windows idiot proof. However, if the shoe fits, by all means put it on.
* I’m paraphrasing here, and this might no longer be the case. However, it is according to my knowledge and understanding, so any incorrect information is completely unintentional.