Faster C++ builds


The C++ language is known for its slow build process. While that is largely true (compared to many other languages), C++ is also one of the most mature languages out there, with rich tool support. There are now a lot of tools and good practices that can be applied to most C++ projects to help minimize build times.

Goals

When trying to minimize build times, it is good to set your goals. Some reasonable goals are:

Local incremental builds should be very quick, to allow efficient development iterations (e.g. during debugging).
Local clean builds should be quick (e.g. to allow fast build times when switching between branches, etc.).
CI builds should be fast enough so that integrations to mainline are not delayed.
CI build & test results should be reported in a timely manner.

In general, build times should be short enough to keep developers focused on the current task (i.e. prevent context switching to distracting business such as XKCD). IMO the following limits are reasonable:

Maximum time for an incremental local build: 10 seconds
Maximum time for a CI build (including tests etc.): 2 minutes

Of course these are not hard limits (and you may want to use other limits in your project), but if your build takes longer, there’s value in trying to optimize your build system.

The test case

To benchmark different build optimizations, I selected a decently sized C++ project with a clean CMake-based build system: LLVM (thanks to molsson for the tip!).

Unless otherwise noted, all time measurements are of clean re-builds, i.e. the output build directory was created from scratch and CMake was re-run using the following configuration:

cmake -DLLVM_TARGETS_TO_BUILD=X86 -DCMAKE_BUILD_TYPE=Release -G [generator] path/to/source

(However, the cmake configuration step itself is excluded from the timings.)
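
For reference, a clean build in this setup can be timed along the following lines (a sketch; the LLVM source path and the generator are placeholders and varied between benchmarks):

rm -rf build && mkdir build && cd build
cmake -DLLVM_TARGETS_TO_BUILD=X86 -DCMAKE_BUILD_TYPE=Release -G Ninja path/to/llvm
time ninja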

Using this configuration, the build consists of 1245 targets (most of them being C++ compilation units).

The machine I’m using for the benchmarks is a hexa-core Intel i7-3930K (12 hardware threads) Linux machine from 2011 with a couple of decent SSD drives.

About Windows and Mac OS X…

I’m really a Linux person, and the limited Windows experience that I have has only made me conclude that in the world of C++ development, Windows is slightly less mature than Linux (and Mac OS X) in terms of build tools.

In this article I will focus mostly on Linux. Many of the concepts are transferable to Windows and Mac OS X, but you may have to find other resources to figure out the exact details.

The easy wins

Do parallel builds

Are you using all of your CPU cores?



Make sure that you use a parallel build configuration/tool. A simple solution is to use Ninja (rather than Make or Visual Studio, for instance). It has a low overhead, and it automatically parallelizes over all CPU cores in an efficient way.

If you use CMake as your main build tool (you probably should), simply specify a Ninja-based generator (e.g. cmake -G Ninja path/to/source).
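
A typical out-of-source configure-and-build cycle with Ninja looks something like this (a minimal sketch; the build directory name is arbitrary):

mkdir build && cd build
cmake -G Ninja path/to/source
ninja   # uses all CPU cores by default; pass -j N to override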

If you are limited to using Make, then use the -j flag to do parallel builds (e.g. make -j8 ).

For Visual Studio, things are not quite as simple, but it is possible to use parallel compilation for VS too. However, if your project is CMake-based, it is actually possible to use Ninja with VS too (just make sure to run cmake in a command prompt with a full VS environment, and select Ninja as the CMake generator). It can be very useful for CI build slaves, for instance.
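
From a Visual Studio developer command prompt (so that cl.exe and the rest of the toolchain are available), the invocation is essentially the same as on Linux; a sketch, assuming cmake and ninja are on the PATH:

cmake -G Ninja path\to\source
ninja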



On my six-core machine, using Ninja is 7 times faster than a regular single-job make.

Use a compiler cache

Using a compiler cache can significantly improve build times. Have a look at:

ccache (great for GCC/Clang under Linux and macOS)
clcache (for Visual Studio)

A compiler cache is especially useful when you make clean builds or switch between branches. In particular, it can work wonders on a build server, where most commits and branches are very similar (lots of re-compilation of the same C++ files over and over).

If you are on Ubuntu, ccache is very simple to install and activate:

sudo apt install ccache
export PATH=/usr/lib/ccache:$PATH

(Also add the latter line to your .bashrc, for instance).

Now, when you run CMake it will automatically pick up ccache as your compiler (instead of GCC or Clang, for instance).
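
Alternatively, if you prefer not to touch the PATH, CMake (3.4 and newer) can be told to invoke ccache explicitly through its compiler launcher variables; a minimal sketch:

cmake -G Ninja \
  -DCMAKE_C_COMPILER_LAUNCHER=ccache \
  -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
  path/to/source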

Make sure to place the compiler cache on a fast SSD drive.
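
By default ccache keeps its cache under ~/.ccache; the location and maximum size can be tuned with the CCACHE_DIR environment variable and the -M option (the SSD mount point below is just an example):

export CCACHE_DIR=/mnt/ssd/ccache   # hypothetical directory on a fast SSD
ccache -M 20G                       # allow the cache to grow to 20 GB
ccache -s                           # show cache statistics (hits/misses)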


(Benchmark chart: clean build times with a cold vs. warm ccache.)

The benchmarks above show the effect of using ccache: The second build (“ccache (warm)”) is 30 times faster than the first build (“ccache (cold)”).

Use a fast linker

In large projects the linking step can be a big bottleneck, especially for incremental builds. To make things worse, the linking step is single threaded, so throwing more CPU cores at it will not help.

If you are on an ELF system (such as Linux) you should know that there is a faster alternative to the default GNU linker, namely the gold linker.

Here is a simple piece of code for your CMake project that activates the gold linker when possible (I forgot where I found it):

if (UNIX AND NOT APPLE)
  execute_process(COMMAND ${CMAKE_C_COMPILER} -fuse-ld=gold -Wl,--version
                  ERROR_QUIET OUTPUT_VARIABLE ld_version)
  if ("${ld_version}" MATCHES "GNU gold")
    set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fuse-ld=gold -Wl,--disable-new-dtags")
    set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -fuse-ld=gold -Wl,--disable-new-dtags")
  endif()
endif()
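
To quickly check, outside of CMake, whether gold is available and usable on your system, something like this works (a minimal sketch):

ld.gold --version                      # gold ships with GNU binutils on most distributions
g++ -fuse-ld=gold -o hello hello.cpp   # link a test program with gold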

In this case the gold linker saved about 15% in total compared to the default linker.

For incremental builds, the difference is usually even more pronounced (since linking constitutes the lion’s share of the work in an incremental build).

Do distributed builds

If your project is big, it can help to do distributed builds (i.e. let several computers in a fast LAN work on the same build).

A tool that is free and easy to set up on Linux is icecc/icecream. If 10-20 developers in a LAN install icecc, you have a formidable compile farm!
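
On Ubuntu the client setup is similar to ccache (a sketch, assuming the standard packaging, which installs compiler wrappers under /usr/lib/icecc/bin):

sudo apt install icecc
export PATH=/usr/lib/icecc/bin:$PATH   # let the icecc wrappers front the real compilers

If you also use ccache, one common setup is to set CCACHE_PREFIX=icecc, so that cache misses are handed over to icecc for remote compilation.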

Provided that the network and the client disks are fast enough, distributed compilation can be several times faster than local compilation. Another nice thing about icecc is that it is polit
