I’m trying to update stellarium-lite, which I installed with yay. I noticed the build process uses all of my cores at 100%, then the OOM killer kills my session and I have to log in again. I didn’t understand what had happened the first time, so I ran yay again; same thing. My laptop has 12 cores and 16 GB of RAM.
EDIT: The actual package I am trying to update is stellarium-lite, not stellarium. All subsequent messages relate to that specific package. Sorry for the typo.
I’ve checked /etc/makepkg.conf and I found this line:
MAKEFLAGS="-j2"
However, the stellarium build uses cmake instead, and the trouble is I’m not familiar with it at all. How do I reduce the number of cores (which I suspect is what triggers the OOM kill) while building stellarium with yay?
I don’t have a $HOME/.config/makepkg.conf or $HOME/.makepkg.conf, only /etc/makepkg.conf.
This is usually set to the number of cores plus one, and it has worked for me every time so far.
It’s MAKEFLAGS="-j5" for my VM, currently.
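For example, you can derive it from the core count automatically (nproc is part of coreutils), e.g. in makepkg.conf:
MAKEFLAGS="-j$(($(nproc) + 1))"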
OOM killer kicks in when there is not enough memory.
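If you want to confirm it really was the OOM killer and not something else taking your session down, the kernel log records the kill (assuming the systemd journal is in use):
journalctl -k --since "-1h" | grep -iE "out of memory|oom"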
Try compiling in a TTY - so the process doesn’t get affected when your graphical session fails for some reason.
Does it also have swap?
Out of curiosity I will now try to do what you’re trying to do - on an Xfce4 VM with four cores and only 4 GB of RAM.
Will report back when it has either failed or finished.
There is a stellarium-bin package in the AUR as well - no compilation …
The problem is precisely that the stellarium build does not respect that MAKEFLAGS setting. It should use only two cores, yet it uses all of them.
That does not address the main issue, which is controlling the number of cores the build process uses with stellarium (and, I guess, all the packages built with the same toolchain). Compiling in a TTY doesn’t solve it either, because all the compile jobs get killed as well (I tried that, too).
I don’t, since I’m on an NVMe drive and I have enough memory to spare the SSD from any swapping. This is the first time I’ve encountered this issue, which is about controlling the number of spawned compile jobs.
I know, but that’s not the issue I’m trying to solve. Besides, stellarium-lite is the package I want to install; the other one is too large for my taste.
FTR, here’s stellarium’s PKGBUILD file, zoomed in on the build() function:
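(Reproduced from memory as a rough sketch rather than a verbatim copy - the exact flags differ, but it follows the usual cmake pattern:)
build() {
  # configure out of tree, then hand the compilation off to whatever
  # generator cmake picked (make or ninja)
  cmake -B build -S "$pkgname-$pkgver" \
        -DCMAKE_BUILD_TYPE=Release \
        -DCMAKE_INSTALL_PREFIX=/usr
  cmake --build build
}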
That is good to know - albeit a bit late - the compilation of stellarium has already been running for about 20 minutes.
… now I have to start over with the one you were actually talking about …
You think you do - but that might not be true.
My compilation of the wrong package has not failed so far - I’ll let it continue till it either breaks or finishes, then I’ll do stellarium-lite.
Of course, but then there’s no way to know whether that’s because the package overrides the number of cores or because you set it that way. The idea is to set the number of jobs to fewer than the number of cores minus one and see if/how the build process conforms to the directive.
You don’t really have to wait until the build finishes, as it saturates the cores within a couple of seconds, according to htop or conky on my machine.
You need that package indeed. It is a dependency of stellarium.
ninja does not require a -j flag like GNU make to perform a parallel build. It defaults to building cores + 2 jobs at once (thanks to Matthew Woehlke for pointing out that it is not simply 10 as I had originally stated). It does however accept a -j flag with the same syntax as GNU make, -j N, where N is the number of jobs run in parallel. For more information run ninja --help with the ninja you have built.
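For what it’s worth, two possible ways to cap the parallelism by hand - both assume things about the PKGBUILD (that it uses the ninja generator and calls plain ninja or cmake --build without an explicit job count), so treat them as things to try rather than a guaranteed fix:
# run ninja yourself in the configured build directory with an explicit job count
ninja -C build -j 2
# or let cmake pass the limit along; `cmake --build` honors this environment
# variable (CMake 3.12+) when no -j is given on its own command line
CMAKE_BUILD_PARALLEL_LEVEL=2 yay -S stellarium-lite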
Nvm. It doesn’t explain why the number of jobs maxes out at the number of cores on my system :-/
It doesn’t need to be… My laptop absolutely never uses all its cores, so it is obvious that the stellarium-lite build process does not use two jobs as per /etc/makepkg.conf. You don’t need that level of precision to see that the directive is ignored, which is the problem I’ve reported here.
I just tried building stellarium on my AMD system & it failed with the same ninja error:
[1090/1114] Building CXX object src/CMakeFiles/stelMain.dir/gui/ConfigurationDialog.cpp.o
ninja: build stopped: subcommand failed.
==> ERROR: A failure occurred in build().
Aborting...
It did use 100% of my CPUs; however, I experienced no system slowdown or any other issues.
Importantly though, my swap usage increased by more than a gigabyte, even though I have 32GB of RAM:
Before the build process started:
free -h
               total        used        free      shared  buff/cache   available
Mem:            28Gi        13Gi       954Mi       777Mi        15Gi        15Gi
Swap:           28Gi        83Mi        28Gi
and a few seconds after the build process started:
free -h
               total        used        free      shared  buff/cache   available
Mem:            28Gi        16Gi       6.0Gi       617Mi       7.0Gi        11Gi
Swap:           28Gi       1.2Gi        27Gi
So I would recommend to @RygelXVI that they set up swap on their machine. They don’t have to set up a swap file or partition - zram should be sufficient.
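One way to get that on Arch (just an example, assuming the zram-generator package) is a minimal /etc/systemd/zram-generator.conf:
[zram0]
# compressed swap sized at half of RAM
zram-size = ram / 2
compression-algorithm = zstd
Then run systemctl daemon-reload followed by systemctl start systemd-zram-setup@zram0.service (or simply reboot); swapon --show should list /dev/zram0 afterwards.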