Speeding up build times

This documentation is outdated. See this page for updated instructions.

gold

gold is a linker for ELF files present in binutils. gold was developed by Ian Lance Taylor and a small team at Google. The motivation for writing gold was to make a linker that is faster than the GNU linker bfd, especially for large applications coded in C++.

gold is faster and uses less RAM than bfd. Many WebKit hackers use it on a daily basis to link WebKit and its dependencies.

However, bear in mind that gold is a new linker, which means you may run into problems with it every now and then. gold-specific bugs have already been reported and fixed in WebKit before, like in [94285]. If you run into trouble with the linker when using gold, don't forget to check this list of possible compilation issues and solutions.

  • To use it, install it in your distro of choice:
    • debian / ubuntu:
      $ sudo apt-get install binutils-gold
    • Fedora:
      $ sudo yum install binutils-gold

You may want to check that the default ld binary in the system is the one provided by gold:

$ ls -l /usr/bin/ld
lrwxrwxrwx 1 root root 7 Sep 25  2012 /usr/bin/ld ->

Or you can configure update-alternatives:

$ update-alternatives --install "/usr/bin/ld" "ld" "/usr/bin/ld.gold" 20
$ update-alternatives --install "/usr/bin/ld" "ld" "/usr/bin/ld.bfd" 10

Then choose the preferred one with the following command:

$ update-alternatives --config ld
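Whichever route you take, you can verify which linker is active by inspecting the version banner. A minimal sketch; the banner string below is hard-coded for illustration, whereas on a real system you would capture `ld --version | head -n 1`:

```shell
# Decide which linker a version banner belongs to. The sample banner is
# hard-coded; in practice use: banner="$(ld --version | head -n 1)"
banner="GNU gold (GNU Binutils 2.34) 1.16"
case "$banner" in
  *gold*) echo "default linker is gold" ;;
  *)      echo "default linker is bfd (or another ld)" ;;
esac
```

With the sample banner this prints "default linker is gold"; a bfd banner ("GNU ld (GNU Binutils) ...") takes the other branch.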

Be aware that colorgcc might cause linking issues when using gold.

ccache

With ccache, you can speed up compilation by reusing object files when they do not change across builds, thanks to its cache system.

  • Install it in your distro of choice:
    • debian / ubuntu:
      $ sudo apt-get install ccache
    • Fedora:
      $ sudo yum install ccache
  • Make sure it provides wrappers for gcc/g++:
    $ which gcc
    /usr/lib/ccache/gcc
    $ which g++
    /usr/lib/ccache/g++
  • If that's not the case you can prepend /usr/lib/ccache/ to PATH:
    $ export PATH=/usr/lib/ccache/:$PATH
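The wrapper check above can be scripted; a sketch (/usr/lib/ccache is the Debian/Ubuntu wrapper directory, Fedora uses /usr/lib64/ccache, hence the glob):

```shell
# Report whether `gcc` currently resolves to a ccache wrapper directory.
gcc_path="$(command -v gcc)"
case "$gcc_path" in
  */ccache/*) echo "ccache wrapper active: $gcc_path" ;;
  *)          echo "ccache wrapper NOT active: $gcc_path" ;;
esac
```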

I would personally recommend setting ccache's cache size to something between 4 GB and 8 GB:

$ ccache --max-size=8G

Check ccache's stats:

$ ccache -s
cache directory                     /home/user/.ccache
cache hit (direct)                     5
cache hit (preprocessed)               0
cache miss                          3637
called for link                      100
called for preprocessing              14
compile failed                         2
bad compiler arguments                 5
autoconf compile/link                 14
no input file                         12
files in cache                     10880
cache size                         426.5 Mbytes
max cache size                       8.0 Gbytes
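The raw counters can be turned into a hit rate with a little awk. A sketch that parses sample stats (hard-coded below from the output above; in practice feed it the live output of `ccache -s`):

```shell
# Compute an integer hit-rate percentage from `ccache -s` style counters.
stats='cache hit (direct)                     5
cache hit (preprocessed)               0
cache miss                          3637'
hits=$(printf '%s\n' "$stats" | awk '/cache hit/ {s += $NF} END {print s+0}')
miss=$(printf '%s\n' "$stats" | awk '/cache miss/ {print $NF}')
echo "hit rate: $(( 100 * hits / (hits + miss) ))%"
```

With the sample numbers this prints "hit rate: 0%", which is expected right after a cold first build; the rate climbs on subsequent rebuilds.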

distcc

With distcc, you can offload part of the build from your machine and distribute the work among other (hopefully more powerful) machines on your local network. Ideally, you should be on a wired connection for maximum awesomeness.

To make it work, you need to install the distcc client in your local machine and the distcc server in all the machines that you want to use as remote nodes for your "build farm".

distcc server(s)

You need to install and configure it in every remote machine you want to use from your local machine (the client):

  • Install it in your distro of choice:
    • debian / ubuntu server:
      $ sudo apt-get install distcc
    • Fedora server:
      $ sudo yum install distcc-server
  • Run the server with something like this (it will run on port 3632 by default; replace the subnet and address with your own):
    $ distccd --daemon --allow 192.168.1.0/24 --listen 192.168.1.42 --nice 10 --jobs 12

(You can also configure it to start on boot. Check the distcc documentation.)

  • Check it's running and listening:
    $ netstat -nuta | grep LISTEN
    tcp        0      0 0.0.0.0:3632            0.0.0.0:*               LISTEN

distcc client

You need to install and configure it on your local work machine:

  • Install it in your distro of choice:
    • debian / ubuntu:
      $ sudo apt-get install distcc
    • Fedora:
      $ sudo yum install distcc
  • Make sure you integrate it seamlessly with ccache by defining the CCACHE_PREFIX variable:
    $ export CCACHE_PREFIX=distcc
  • Define the list of hosts (see also "Using a dynamic set of hosts") you want to compile in (order is important, first ones are always preferred):
    $ export DISTCC_HOSTS='remote-powerful-machine localhost'  # DON'T USE your machine's IP or hostname, but 'localhost' instead!!!
  • Now you just build WebKit, passing an "interesting" number to '-j' (typically -j2N, where N is the number of available cores):
    $ Tools/Scripts/build-webkit --makeargs=-j24 --gtk
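The -j2N rule of thumb can be computed rather than hard-coded. A sketch assuming GNU coreutils' nproc for the core count (with distcc the relevant N is really the total across the farm, so this is a lower bound):

```shell
# Derive the make -j value from the local core count (the 2N rule of thumb).
N=$(nproc)
JOBS=$((2 * N))
echo "will build with -j${JOBS}"
# Tools/Scripts/build-webkit --makeargs=-j${JOBS} --gtk
```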

Some considerations:

  • By default distcc sends at most *four* jobs per host. If a distcc server can handle more processes, you can increase the number like this: DISTCC_HOSTS='remote-powerful-machine/8'.
  • If the connection to the distcc server is slow, you can compress the data: DISTCC_HOSTS='remote-powerful-machine/8,lzo'. Depending on your case this can actually make things slower, so try it first to see whether it's worth it.
    • For example, it has been checked that using distcc through a VPN (OpenVPN) doesn't pay off.
  • Versions of gcc/g++ should *match* on the client and the servers. Otherwise remote compilation will probably fail, falling back to building locally instead.
  • You need to perform a full rebuild before distcc power becomes available in WebKit, so rm -rf WebKitBuild/Release (or Debug) first.
  • It's interesting to have distccmon-gnome installed to check whether the work is being properly distributed among the servers.
    • debian / ubuntu:
      $ sudo apt-get install distccmon-gnome
    • Fedora: already included in the distcc package
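Putting those considerations together, a DISTCC_HOSTS line for a hypothetical farm might look like this (the host names are made up; the slot count and lzo compression are the per-host suffixes documented in distcc(1)):

```shell
# 8 slots on the fast machine, 4 compressed slots on the slow-link one,
# local fallback last (order matters: earlier hosts are preferred).
export DISTCC_HOSTS='big-machine/8 far-machine/4,lzo localhost'
echo "$DISTCC_HOSTS"
```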

Using a dynamic set of hosts

This is very useful if you don't always work at the same location. distcc comes with avahi support (at least in Debian), which means it can automatically discover distccd servers. So instead of defining the DISTCC_HOSTS variable, just edit your ~/.distcc/hosts (or /etc/distcc/hosts if you want system-wide changes) and add something like this:

+zeroconf
That will instruct distcc to search for distccd servers using avahi.

Icecream

Icecream was created by SUSE and is based on distcc. Like distcc, Icecream takes compile jobs from a build and distributes them among remote machines, allowing a parallel build. But unlike distcc, Icecream uses a single central server, called the scheduler, that dynamically assigns the compile jobs to multiple distributed daemons, choosing the fastest free one. This advantage pays off mostly for shared computers; if you're the only user of your machines, you have full control over them anyway.

  • You should have only one scheduler in your network.
  • The scheduler and one of the daemons can be in the same host.

icecc installation

  • Execute:
    $ sudo apt-get install icecc
  • You can install icecc monitor too:
    $ sudo apt-get install icecc-monitor

icecc scheduler

  • Configure scheduler to start by default (see /usr/share/doc/icecc/README.Debian):
    $ sudo update-rc.d icecc-scheduler enable
  • After configuring the scheduler to start, it will do so on next reboot, but not sooner. You can start it manually:
    • Ubuntu
      $ sudo service icecc restart
    • Debian
      $ sudo service icecc-scheduler start

icecc daemon(s)

  • iceccd (the daemon) has to be able to find the scheduler on the network. By default, it does so by sending a broadcast message. This may not work depending on your network topology (routers, firewalls, etc.). You can specify the actual address of the icecc-scheduler in the icecc config file:
    $ cat /etc/icecc/icecc.config
    # If the daemon can't find the scheduler by broadcast (e.g. because
    # of a firewall) you can specify it.
    ICECC_SCHEDULER_HOST="<icecc-scheduler IP or host name>"
  • Also, you may not want your local icecc daemon to run jobs from other hosts. For example, I want my laptop to be helped by my tower, but I don't want my laptop to take jobs from my tower:
    $ cat /etc/icecc/icecc.config
    # Specifies whether jobs submitted by other nodes are allowed to run on
    # this one.
    ICECC_ALLOW_REMOTE="no"
  • Make sure you integrate it seamlessly with ccache by allowing enough jobs on your local icecc daemon:

(ccache runs the preprocessor by calling icecc, which connects to your local icecc daemon, which runs the preprocessing locally. Don't set ICECC_MAX_JOBS="0" to forbid accepting remote jobs, because in that case ccache can only preprocess on one thread. The ideal setting for ICECC_MAX_JOBS is the number of your processors.)

$ cat /etc/icecc/icecc.config
# You can overwrite here the number of jobs to run in parallel. Per
# default this depends on the number of (virtual) CPUs installed.
# Note: a value of "0" is actually interpreted as "1", however it
# also sets ICECC_ALLOW_REMOTE="no".
ICECC_MAX_JOBS="<number of cores>"
  • You may want to restart the service after setting all the configuration appropriately:
    • Ubuntu
      $ sudo service icecc restart
    • Debian
      $ sudo service iceccd restart
  • Then compile WebKit normally:
    $ Tools/Scripts/build-webkit --gtk

icecc + ccache

To use icecc and ccache together the steps are:

  1. Export the CCACHE_PREFIX variable:
    $ export CCACHE_PREFIX=icecc
  2. Prepend the ccache dir to your PATH:
    $ export PATH="/usr/lib/ccache:${PATH}"

Tips for using icecc

  • Increase the number of parallel jobs to double or triple your number of cores:
    $ export NUMBER_OF_PROCESSORS="$(( $(nproc) * 3 ))"
  • To ensure that the slaves use the exact same compiler version as your machine, send your toolchain to the build slaves:
    $ export ICECC_VERSION="$(pwd)/$(icecc --build-native|tee /dev/stderr|grep creating|awk '{print $2}')"

Using icecc with clang

These are the steps to use icecc with clang:

  1. If you were using GCC previously, then start with a clean build.
    $ rm -fr WebKitBuild/Release
  2. Ensure that you have installed the symlinks to clang in the icecc dir:
    $ cd /usr/lib/icecc/bin
    $ sudo ln -s $(which icecc) clang
    $ sudo ln -s $(which icecc) clang++
  3. Export the CC and CXX environment variables accordingly:
    $ export CC=clang
    $ export CXX=clang++
  4. Build a tgz with your clang toolchain:
    $ export ICECC_VERSION="$(pwd)/$(icecc --build-native clang|tee /dev/stderr|grep creating|awk '{print $2}')"

4.1. If you are using Debug Fission (now the default on Debug builds) you also need to add objcopy to the tarball:

$ export ICECC_VERSION="$(pwd)/$(/usr/lib/icecc/icecc-create-env --clang /usr/bin/clang /usr/lib/icecc/compilerwrapper --addfile /usr/bin/objcopy|tee /dev/stderr|grep creating|awk '{print $2}')"

If you experience problems with Debug Fission and icecc, you can disable it by using the following cmakeargs option:

  5. Now you have two options: use ccache in combination with icecc, or use icecc alone.

5.1 If you don't want to use ccache in combination with icecc, then put the icecc directory first in your PATH:

$ export PATH="/usr/lib/icecc/bin:${PATH}"

5.2. On the other hand, if you want to use ccache, do the following:

5.2.1 First set the appropriate symlinks to clang in the ccache directory:

$ cd /usr/lib/ccache
$ sudo ln -s $(which ccache) clang
$ sudo ln -s $(which ccache) clang++

5.2.2 Finally, put the ccache directory first in your PATH and set the CCACHE_PREFIX environment variable:

$ export CCACHE_PREFIX=icecc
$ export PATH="/usr/lib/ccache:${PATH}"
  6. Ensure that the icecc daemon is running and that you have exported the NUMBER_OF_PROCESSORS environment variable to double or triple your number of cores, then start the compilation as usual.
    $ export NUMBER_OF_PROCESSORS="$(( $(nproc) * 3 ))"
    $ Tools/Scripts/build-webkit --gtk
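The environment set-up scattered through the steps above can be collected in one place. A sketch using the Debian/Ubuntu default paths (adjust them for your distro), to be sourced before building:

```shell
# icecc + ccache build environment, in one place.
export CCACHE_PREFIX=icecc                         # ccache hands real compilations to icecc
export PATH="/usr/lib/ccache:${PATH}"              # ccache wrappers first in PATH
export NUMBER_OF_PROCESSORS="$(( $(nproc) * 3 ))"  # 3x local cores for the farm
echo "NUMBER_OF_PROCESSORS=${NUMBER_OF_PROCESSORS}"
# Tools/Scripts/build-webkit --gtk
```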

icecc troubleshooting

  • If jobs are not being distributed, check that the generated build rules under WebKitBuild/Release/ use /usr/lib/ccache/g++:
      depfile = $DEP_FILE
      command = /usr/lib/ccache/g++   $DEFINES $FLAGS -MMD -MT $out -MF "$DEP_FILE" -o $out -c $in
      description = Building CXX object $out
  • If you get strange errors when building, try clearing the ccache cache and starting with a clean build:
    $ ccache -C
    $ rm -fr WebKitBuild/Release
Last modified on Aug 5, 2020 4:44:00 AM