[Stable Update] 2018-12-02 - Kernels, Plasma, Mesa, Cinnamon, Gnome, Deepin, XFCE, Vulkan

update
stable

#143

Yes absolutely - especially because the change in Manjaro happened not with a major new release, but with a point release.
Sometimes Manjaro’s gitlab is a better announcement thread than the one here on the forum :wink:

I was also reluctant to switch over to mq, but I found that it now works well enough (bfq for HDD, mq-deadline for SSD).


#144

Oh noooo. I’ll get my crying in now, at the beginning, to avoid the rush.


#145

For some reason, when I started my laptop, the first thing I noticed was an error about a kernel module not loading. I also couldn’t reconnect to the network unless I restarted the laptop. I never noticed either of these last night, since I was just trying to get the update done.


#146

On Manjaro Gnome, suspend-to-RAM works!


#147

OK, let me rephrase: Suspend to RAM itself is not broken, but something that changed between 4.19.4 and 4.19.6 (likely something USB-related) keeps it from working under certain conditions and/or on certain hardware.

The only unusual thing in dmesg is:

dpm_run_callback(): usb_dev_suspend+0x0/0x10 returns -16
PM: Device usb1 failed to suspend async: error -16
PM: Some devices failed to suspend, or early wake event detected

Unfortunately I have no idea how to analyze that problem any further.

Is there a way to keep or reinstall kernel 4.19.4?


#148

There have been quite a lot of changes upstream; the best way is to look at the changelogs over at kernel.org.
On the Manjaro side there weren’t many changes, apart from blk_mq and a patch for Raven Ridge.

You can try disabling autosuspend for USB devices with this boot parameter:
usbcore.autosuspend=-1

Also check which device is usb1, maybe with lsusb.
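To see which hardware dmesg calls usb1, you could also read bus 1’s sysfs entries directly. This is standard sysfs layout, nothing Manjaro-specific; a sketch:

```shell
# "usb1" is the root hub of USB bus 1; its child devices appear as 1-1, 1-2, ...
# Print vendor:product IDs and names so they can be matched against `lsusb` output.
for dev in /sys/bus/usb/devices/usb1 /sys/bus/usb/devices/1-*; do
    if [ -r "$dev/product" ]; then
        printf '%s -> %s:%s %s\n' "${dev##*/}" \
            "$(cat "$dev/idVendor")" "$(cat "$dev/idProduct")" \
            "$(cat "$dev/product")"
    fi
done
```

The `idVendor:idProduct` pairs correspond to the ID column in `lsusb`.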


#149

Try downgrading with:
sudo pacman -U /var/cache/pacman/pkg/linux419-4.19.4-1-x86_64.pkg.tar.xz

But it will only work if the package is still in the cache -> /var/cache/pacman/pkg
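A sketch of checking the cache first and then downgrading (the pacman line needs root, so it is commented out here):

```shell
# pacman keeps previously installed package versions in its cache until it
# is cleaned (pacman -Sc / paccache), so check whether 4.19.4 survived:
ls /var/cache/pacman/pkg/linux419-4.19.4-* 2>/dev/null || echo "not in cache"

# If the file is there, downgrade to it (requires root):
# sudo pacman -U /var/cache/pacman/pkg/linux419-4.19.4-1-x86_64.pkg.tar.xz
```

Note that the next sync upgrade will pull you back to the newest 4.19.x unless you hold the package back.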


#150

Thanks for the amazing work Manjaro team. :slight_smile:

I have one issue after the update: my wireless card shows “Ethernet Network (Intel Ethernet): disconnected”, and hence my WiFi is not working. I’m running on a USB-to-Ethernet adapter now. It would be great if someone could help me with this.

DE - XFCE
Kernel - linux419 (in-use) & linux414

My inxi -Nzzx information is as below:

Network:   Device-1: Intel Ethernet I219-LM vendor: Lenovo driver: e1000e v: 3.2.6-k port: efa0 bus ID: 00:1f.6 
           Device-2: Intel Wireless 8265 / 8275 driver: N/A port: efa0 bus ID: 04:00.0 
           Device-3: Realtek RTL8152 Fast Ethernet Adapter type: USB driver: r8152 bus ID: 1-2:3

WiFi is not working after the update; it just shows “Ethernet Network (Intel Ethernet): Disconnected”.


#151

Unfortunately that did not change anything.

Using lsusb or usb-devices does not reveal which device is identified as usb1 in dmesg (at least to me).

One strange thing when running lsusb -v is the following second line shown for each device:

Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Couldn’t open device, some information will be missing
Device Descriptor:

I really hope someone with the same or a similar issue can help resolve this problem. I don’t want to be stuck on an old kernel from now on.


#152

My laptop uses an NVMe SSD.

$ cat /sys/block/nvme0n1/queue/scheduler 
[none] mq-deadline kyber bfq

Is the system supposed to choose the scheduler automatically based on the type of storage medium? How do I set it manually (other than using a boot parameter)? And why is it showing [none] in my case (on a fresh installation upgraded with the 2018-12-02 update and kernel 4.19.6)?

I’m sorry, I’m a bit of a newbie here; it’s been a long time since I played around with an Arch-based system or with such settings in general.


#153

‘none’ is the default for NVMe.
Set it manually with the elevator= boot parameter or a udev rule, as I already wrote further up.
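The udev variant could look like the following sketch (the filename is my choice; the rules match on the standard queue/scheduler and queue/rotational sysfs attributes):

```
# /etc/udev/rules.d/60-ioscheduler.rules  (filename is illustrative)
# BFQ for rotational disks, mq-deadline for SATA SSDs, none for NVMe:
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"
ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="none"
```

For a one-off test without rebooting, you can also write to sysfs directly, e.g. `echo mq-deadline | sudo tee /sys/block/nvme0n1/queue/scheduler`.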


#154

So is this now good or bad? Does switching to mq-deadline result in any performance change, or is using something other than none bad for an NVMe?


#155

Why don’t you try it out for yourself?
Or read the benchmarks over at Phoronix?

No, but ‘none’ should give the best raw throughput performance, if that is what you’re looking for.
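“Trying it out” could look like the following fio sketch (fio must be installed; the file name and sizes are arbitrary, and a test file on /tmp measures the filesystem path too, so treat the numbers as rough). Switch the scheduler between runs and compare IOPS:

```shell
# Random-read micro-benchmark; run once per scheduler setting and compare.
# (On a real disk, add --direct=1 to bypass the page cache.)
if command -v fio >/dev/null 2>&1; then
    fio --name=randread --filename=/tmp/sched-test --size=64M \
        --rw=randread --bs=4k --iodepth=32 \
        --runtime=5 --time_based --group_reporting
    rm -f /tmp/sched-test
else
    echo "fio not installed"
fi
```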


#156

linux419-4.19.4-1-x86_64.pkg.tar.xz isn’t in the cache anymore. It seems it’s not possible to keep more than one 4.19 package version around?


#157

I’m not trying to geek out on the system; I was just curious whether there is a deeper reason that it defaults to none, given that you mentioned mq-deadline for SSDs and bfq for HDDs earlier.


#158

I personally think you’re making a mountain out of a molehill. I have run into several kernel issues while on the most recent kernel in the last year. Each time, the problem on the new kernel was resolved fairly quickly; the bugs generally get ironed out in short order. Using an older kernel for a little while is hardly the end of the world. Just sayin’.

If you are bound and determined to resolve your suspend issue, search the forum. I have helped numerous people solve their suspend issues by writing a systemd service.
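One common shape for such a service, as a hedged sketch only: unbind a misbehaving USB controller before sleep and rebind it on resume. The unit name and the PCI address 0000:00:14.0 are placeholders you must adapt (find your controller with `lspci | grep -i usb`):

```
# /etc/systemd/system/usb-suspend-fix.service  (name is illustrative)
[Unit]
Description=Unbind flaky USB controller before suspend, rebind on resume
Before=sleep.target
StopWhenUnneeded=yes

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/sh -c 'echo 0000:00:14.0 > /sys/bus/pci/drivers/xhci_hcd/unbind'
ExecStop=/bin/sh -c 'echo 0000:00:14.0 > /sys/bus/pci/drivers/xhci_hcd/bind'

[Install]
WantedBy=sleep.target
```

Enable it with `sudo systemctl enable usb-suspend-fix.service`; systemd starts it (unbind) when sleep.target is reached and stops it (rebind) afterwards.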


#159

Not entirely sure, but NVMe SSDs already have a good queueing mechanism of their own, and that’s (supposedly) why they don’t really need an additional one on top for best performance.

(correct me if I’m wrong)


#160

That’s why I was hoping that someone else with more experience is having the same issue; otherwise development just moves on and that’s it.

I am not having any of the usual suspend issues: suspend worked with 4.19.4 and stopped working with 4.19.6, so nothing in the forum about previous issues fits, since this one is tied to changes between those two kernels.


#161

Man, terrible update this time. I updated my main machine the correct way and it failed there as well, and now my PC is broken. Does anyone know what to do?


#162

There is usually a commonality among suspend issues, regardless of which kernel is involved. Often the problem can be worked around, depending on what is causing it. As I said, search the forum.