[distcc] - volunteer RAM usage and MAKEFLAGS

Hello,
I just configured some distcc volunteers to build packages for my Pinebook Pro (to build faster).

One of my volunteers is an LXC container on my Proxmox server. Since some builds can need a lot of memory (when built locally), I was wondering how much RAM the volunteers could use, as I don't want to allocate too much RAM to the LXC container.
I know I could allocate plenty of RAM, monitor the usage, and decrease the memory afterwards, but I was wondering if anyone already knew the answer.

The LXC container accepts 4 jobs. I tried to build qownnotes and so far I only saw a peak usage of around 650 MB.

I guess the RAM usage will depend on the number of jobs :thinking:
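For reference, a minimal sketch of how the volunteer side can be capped and watched (assuming the Arch/Manjaro distccd service reads its options from /etc/conf.d/distccd; the subnet is just an example):

# /etc/conf.d/distccd on the volunteer: allow the master and cap concurrent jobs
DISTCC_ARGS="--allow 192.168.1.0/24 --jobs 4"

# restart the daemon, then watch memory while a build runs on the master
sudo systemctl restart distccd
watch -n 5 free -m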

Another question is about MAKEFLAGS…

If I look at the Linux ARM wiki:

Change the MAKEFLAGS -j flag to reflect the total number of processors available on the master system. The common wisdom is to set this to the number of physical cores + 1. Keep in mind that only compiles are distributed; preprocessing and linking still takes place on the master system. Therefore, this number should reflect the capabilities of the master system and not the total number of distributed cores available.

MAKEFLAGS="-j3"

or the arch wiki

Adjust the MAKEFLAGS variable to correspond roughly twice the number max threads per server. In the example below, this is 2x(9+5+5+3)=44.

BUILDENV=(distcc fakeroot color !ccache check !sign)
MAKEFLAGS="-j44"
DISTCC_HOSTS="localhost/9 192.168.10.2/5 192.168.10.3/5 192.168.10.4/3"

They don't say the same thing, and it seems that if I set it to the number of cores of the "client", it does not use all the available jobs on the different volunteers… so which is right, the Arch wiki or the Linux ARM wiki? :thinking:
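One way to check which value actually keeps the volunteers busy is to watch the job distribution during a test build (a minimal sketch; distccmon-text ships with distcc and takes a refresh interval in seconds):

# on the master, in a second terminal while a build is running
distccmon-text 1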

Good question. I stopped using DISTCC a while back, but I know @Darksky still uses it. Maybe he knows for sure, but I have always used the “double core count” as my makeflags with distcc.

May I ask you the reason?

All of the RPi kernels are compiled here using distcc. I have 3 ARM boards and my x86 desktop with a cross compiler hooked in. My vim3 is the server, and I have a rock64pro, a Pi4 8G, and my desktop as slaves.

vim3 6 cores
rock64pro 6 cores
pi4 4 cores
desktop x86 4 cores

I have a ~/.distcc/hosts file with this: the actual cores on each device +1 (so each device will be flooded), which gives 7+7+5+5 = 24 job slots; doubled, the MAKEFLAGS variable would be MAKEFLAGS="-j48":

vim3/7 rock64/7 desktop/5 pi4/5

I then have an executable script in /usr/local/bin named distcc-makepkg.

#!/bin/bash
export PATH=/usr/lib/distcc/bin:$PATH 
export MAKEFLAGS="-j48"
time makepkg -s

All I do then is issue the command distcc-makepkg and it builds the package; when it is done, the package can be installed on whatever device needs it.
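A hedged usage sketch of the script above (the package directory is just an example; any directory containing a PKGBUILD works):

# one-time: make the script executable
sudo chmod +x /usr/local/bin/distcc-makepkg

# then build from a package directory (path is hypothetical)
cd ~/builds/qownnotes
distcc-makepkg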


Because it's way easier to set up a GitLab runner swarm, since it's only a few packages we maintain that would actually benefit from having distcc available. :slight_smile:


Yes, for your use case it's worth the effort to set that up… for my use case distcc will be OK for now…

@Darksky thanks for your answer…


As far as memory on the slaves goes, it is not much compared to the master, as the master does all of the administration as well as compiling.

I do not use the git runner. I build the Pi kernel configs from scratch each time. Sometimes new features get ignored and old features stay if you start with the prior config, even to the point that sometimes the kernel will not boot. Then I merge in the features people requested here. Using the RPi kernel, they have their own customized defconfig for each board where other kernels do not. So there is some intervention stopping the build to do the things I want.


I gave my LXC container 2G and I will leave it like that for now, check the memory usage history, and maybe change it to something else, less or more according to the history… the other volunteers have access to all the memory so… :wink:
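For the record, resizing the container later is a one-liner on the Proxmox host (a minimal sketch; 100 is a placeholder VMID and --memory is in MB):

# shrink or grow the LXC container's RAM after checking the usage history
pct set 100 --memory 2048
pct config 100 | grep memory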

@Darksky just another quick question…
About the files the volunteers build… they are certainly saved somewhere… do they need cleanup or maintenance somehow so they don't fill the disk, or is everything done in RAM?

I have no clue about saving files. Why would it? The master sends data to be compiled to the slaves in its memory, the slaves compile and send it back to the master, and the master writes the results in the tree it is compiling. There is no ccache involved on the slaves.

I was just wondering… there is no mention of it in the Arch wiki, so I guess it is all done in RAM… it does not save the source and headers on disk before building… :thinking:

The slaves do not have the source tree to deal with.

I don't know exactly the inner workings of distcc, but gcc needs the headers required by the file to actually be able to build… :thinking: as it will just call gcc against a C or C++ file… :thinking:

All the info needed is sent to the slave. The depends are installed on the master.

Yes, I checked the distcc site… the master sends preprocessed source code, so all of that is done on the master side… my question is answered then…

Just to drop a note, the Gentoo site covers distcc fairly extensively.


Yeah. It mentions the double core count too:

set the value of N to twice the number of total (local + remote) CPU cores + 1
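For comparison, applying that Gentoo rule to the core counts listed earlier in this thread (6 + 6 + 4 + 4 = 20 local + remote cores) would give:

MAKEFLAGS="-j41"    # 2 x 20 + 1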