Can you report back? Share your dmesg to pastebin.com while the ethernet is working.
To export the dmesg log to a file:
dmesg > ~/dmesg.log
It will be in your home directory.
Here’s a pastebin of dmesg while rsyncing a huge file over my network: https://pastebin.com/MRkCmmBJ
Here’s the output of inxi -Nxx
Resuming in non X mode: xrandr not found. For package install advice run: inxi --recommends
Network:   Card: Intel Ethernet Connection (2) I219-V driver: e1000e v: 3.2.6-k
           bus-ID: 00:1f.6 chip-ID: 8086:15b8
Are you using vbox & vboxnet?
vboxnetflt: 30 out of 119 packets were not sent (directed to host)
[ 8693.416060] vboxdrv: ffffffffc17b8020 VMMR0.r0
[ 8693.497709] IPv6: ADDRCONF(NETDEV_UP): vboxnet0: link is not ready
[ 8693.548793] VBoxNetFlt: attached to 'vboxnet0' / 0a:00:27:00:00:00
[ 8693.548804] IPv6: ADDRCONF(NETDEV_CHANGE): vboxnet0: link becomes ready
[ 8693.548965] device vboxnet0 entered promiscuous mode
[ 8693.612658] vboxdrv: ffffffffc0000020 VBoxDDR0.r0
[ 8897.260198] device vboxnet0 left promiscuous mode
[ 8897.293155] vboxnetflt: 30 out of 119 packets were not sent (directed to host)
[ 8911.535836] vboxdrv: ffffffffc18d4020 VMMR0.r0
[ 8911.653396] VBoxNetFlt: attached to 'vboxnet0' / 0a:00:27:00:00:00
[ 8911.653572] device vboxnet0 entered promiscuous mode
[ 8911.734245] vboxdrv: ffffffffc0021020 VBoxDDR0.r0
Yes, using vbox/vboxnet with vagrant for development. Can probably disregard this as I still observe the slow ethernet even if I never start a VM.
Did you test a “virtio” bridge?
I’m not really sure what you are asking. I haven’t done any testing with virtual machines/bridged networking. Even if I do not start a VM, I still observe poor ethernet performance.
If I do a dmesg right after rebooting, the vbox stuff doesn’t even show up in the log. So I imagine it is probably not really relevant to the problem. In fact, I was observing the slow ethernet even before I installed virtualbox/vagrant.
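To double-check that, a quick filter over the saved log (assuming dmesg was exported to ~/dmesg.log as suggested earlier in the thread) shows whether any VirtualBox modules have loaded since boot:

```shell
# print only the VirtualBox-related kernel messages from the saved log;
# right after a clean reboot this should print nothing
grep -iE 'vbox' ~/dmesg.log || echo "no VirtualBox messages found"
```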
My i219-V reaches 12.7 MB/s with a full-duplex 1000 Mbps connection, on a ~100 Mbps fiber line.
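The units in this thread get mixed (“Mo/s” is French for MB/s, megabytes per second, while link speeds are quoted in Mbps, megabits per second). A quick sanity check, multiplying bytes by 8:

```shell
# 12.7 MB/s is roughly the saturation point of a 100 Mbps line:
awk 'BEGIN { mbytes_per_s = 12.7; printf "%.1f Mbps\n", mbytes_per_s * 8 }'
# prints: 101.6 Mbps
```

So that figure is consistent with the fiber line being the bottleneck, not the NIC.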
You gave the results of inxi -Nxx, but I would be interested in the results with a lowercase n: with inxi -nxx, does it report a speed of 1000 Mbps and full duplex?
Yes, 1000 Mbps full duplex:
Network:   Card: Intel Ethernet Connection (2) I219-V driver: e1000e
           IF: enp0s31f6 state: up speed: 1000 Mbps duplex: full mac: 70:85:c2:5b:64:f3
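For what it's worth, inxi reads these link values from sysfs, so they can also be checked directly (the interface name enp0s31f6 comes from the output above; adjust it to yours):

```shell
# query the negotiated link parameters straight from sysfs;
# prints "n/a" if the interface or attribute is absent
IFACE=enp0s31f6
for f in speed duplex operstate; do
  printf '%s: %s\n' "$f" "$(cat /sys/class/net/$IFACE/$f 2>/dev/null || echo n/a)"
done
```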
The I219-V is the desktop-motherboard variant and the I219-LM the laptop one; they are not the same chipset.
If you took 10 seconds to follow the link and look at the PDF, you would have seen that the i218LM, i219LM, i218V AND i219V are all affected.
My usual bet would be that trying an older (or, even better, a newer) kernel might help.
I am using the latest (4.15), but still the same issue. I hope it will be solved soon. Not being able to use wired ethernet is very annoying!
I’ve tried the 4.15 kernel, same poor performance. I max out at around 300 Mbps. Oddly, on Antergos I got much better performance, around 850 Mbps.
I’m testing with the same huge file being transferred over my network and then benchmarking it with bmon.
So it seems very strange I get “expected” performance on Antergos, yet much poorer performance in Manjaro - same kernel, etc. I am stumped.
I have the problem too. When downloading the same files from the same source, it’s about 500-600 kB/s over ethernet and about 2-4 MB/s over Wi-Fi.
Here is the output of inxi -nxx:
Network:   Card-1: Intel 82579LM Gigabit Network Connection (Lewisville) driver: e1000e v: 3.2.6-k
           port: 6080 bus-ID: 00:19.0 chip-ID: 8086:1502
           IF: enp0s25 state: up speed: 100 Mbps duplex: full mac: 3c:97:0e:db:70:e1
           Card-2: Intel Centrino Advanced-N 6205 [Taylor Peak] driver: iwlwifi
           bus-ID: 03:00.0 chip-ID: 8086:0085
           IF: wlp3s0 state: up mac: e0:9d:31:13:5e:f0
I updated precisely to kernel 4.15.3-2, so the problem is not solved by an update.
See the patch from Benjamin Poirier.
Thank you for the link. I’m hoping this will find its way into the official kernel.
In any case it’s good to know that the issue is known and a solution is near.
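Once a kernel carrying the fix ships, one quick way to check which e1000e version your running kernel bundles (modinfo is part of kmod and should be on any Manjaro install):

```shell
# show the running kernel and the bundled e1000e driver version
uname -r
modinfo -F version e1000e 2>/dev/null || echo "e1000e module not available"
```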
I have the same problem on KDE; a small workaround for me is just to replug the ethernet cable.