
Resource management in Docker

The basics

Docker uses cgroups to group processes running in the container. This allows you to manage the resources of a group of processes, which is very valuable, as you can imagine.
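You can see this grouping from inside a container by looking at /proc, which lists the cgroups a process belongs to. A minimal sketch (the exact paths depend on your Docker version and cgroup driver):

$ docker run -it --rm fedora cat /proc/self/cgroup
# each line has the form hierarchy-ID:subsystem:path; with the systemd
# driver the path typically ends in docker-<full-container-id>.scope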

If we run an operating system that uses systemd as the service manager, every process (not only the ones inside containers) will be placed in a cgroups tree. You can see it for yourself if you run the systemd-cgls command:

$ systemd-cgls
├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 22
├─machine.slice
│ └─machine-qemu\x2drhel7.scope
│   └─29898 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name rhel7 -S -machine pc-i440fx-1.6,accel=kvm,usb=off -cpu SandyBridge -m 2048
├─system.slice
│ ├─avahi-daemon.service
│ │ ├─ 905 avahi-daemon: running [mistress.local
│ │ └─1055 avahi-daemon: chroot helpe
│ ├─dbus.service
│ │ └─890 /bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
│ ├─firewalld.service
│ │ └─887 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid
│ ├─lvm2-lvmetad.service
│ │ └─512 /usr/sbin/lvmetad -f
│ ├─abrtd.service
│ │ └─909 /usr/sbin/abrtd -d -s
│ ├─wpa_supplicant.service
│ │ └─1289 /usr/sbin/wpa_supplicant -u -f /var/log/wpa_supplicant.log -c /etc/wpa_supplicant/wpa_supplicant.conf -u -f /var/log/wpa_supplica
│ ├─systemd-machined.service
│ │ └─29899 /usr/lib/systemd/systemd-machined
[SNIP]

This approach gives a lot of flexibility when we want to manage resources, since we can manage every group individually. Although this blog post focuses on containers, the same principle applies to other processes as well.
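Because containers land in the same tree as ordinary services, systemd's own tooling works on them too. As an illustration (the unit name here is just a placeholder, and the available property names vary between systemd versions):

# lower the relative CPU weight of a regular service at runtime
$ sudo systemctl set-property httpd.service CPUShares=512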

Note

If you want to read more about resource management with systemd, I highly recommend the Resource Management and Linux Containers Guide for RHEL 7.

A note on testing

In my examples I’ll use the stress tool, which helps me generate some load in the containers so I can actually see the resource limits being applied. I created a custom Docker image called (surprisingly) stress using this Dockerfile:

FROM fedora:latest
RUN yum -y install stress && yum clean all
ENTRYPOINT ["stress"]
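If you want to follow along, save the Dockerfile to an empty directory and build the image under the same name used in the examples below:

$ docker build -t stress .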

A note on resource reporting tools

The tools you normally use to report resource usage, such as top, /proc/meminfo and so on, are not cgroups-aware. This means that they’ll report information about the host even if we run them inside of a container. I found a nice blog post from Fabio Kung on this topic. Give it a read.
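You can verify this yourself: even with a memory limit applied, /proc/meminfo inside a container reports the memory of the whole host (the 128 MB limit below is just an arbitrary value for the demonstration):

$ docker run -it --rm -m 128m fedora head -3 /proc/meminfo
# MemTotal will show the host's memory, not the 128 MB limit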

So, what can we do?

If you want to quickly find which container (or any systemd service, really) uses the most resources on the host I recommend the systemd-cgtop command:

$ systemd-cgtop
Path                                   Tasks   %CPU   Memory  Input/s  Output/s
/                                        226   13.0     6.7G        -         -
/system.slice                             47    2.2    16.0M        -         -
/system.slice/gdm.service                  2    2.1        -        -         -
/system.slice/rngd.service                 1    0.0        -        -         -
/system.slice/NetworkManager.service       2      -        -        -         -
[SNIP]

This tool can give you a quick overview of what’s going on on the system right now. But if you want detailed information about usage (for example, because you need to create nice graphs), you will want to parse the /sys/fs/cgroup/… directories. I’ll show you where to find the useful files for each resource I talk about (look at the CGroups fs paragraphs below).
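As a sketch of what reading those files can look like, assuming the systemd cgroup driver (with the plain cgroupfs driver the paths are /sys/fs/cgroup/<controller>/docker/<id>/… instead; "mycontainer" is a hypothetical container name):

# get the full ID of a running container
$ CID=$(docker inspect --format '{{.Id}}' mycontainer)
# cumulative CPU time consumed by the container, in nanoseconds
$ cat /sys/fs/cgroup/cpuacct/system.slice/docker-$CID.scope/cpuacct.usage
# current memory usage of the container, in bytes
$ cat /sys/fs/cgroup/memory/system.slice/docker-$CID.scope/memory.usage_in_bytes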

CPU

Docker makes it possible (via the -c switch of the run command) to specify the number of CPU shares available to the container. This is a relative weight and has nothing to do with the actual processor speed. In fact, there is no way to say that a container should have access to only 1 GHz of the CPU. Keep that in mind.

Every new container gets 1024 CPU shares by default. On its own, this value means nothing. But if we start two containers and both use 100% of the CPU, the CPU time will be divided equally between them, because they both have the same number of CPU shares (for the sake of simplicity I assume that no other processes are running).

If we set one container’s CPU shares to 512, it will receive only half the CPU time of the other container. But this does not mean that it can use only half of the CPU: if the other container (with 1024 shares) is idle, our container will be allowed to use 100% of the CPU. That’s another thing to note.

Limits are enforced only when they need to be. Cgroups does not limit processes upfront (for example by refusing to let them run fast even when there are free resources). Instead, it gives them as much as it can and limits them only when necessary (for example, when many processes start to use the CPU heavily at the same time).

Of course it’s not easy (I would even say impossible) to predict how much CPU will be assigned to your process. It really depends on how the other processes behave and how many shares are assigned to them.
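Still, a back-of-the-envelope calculation shows what to expect when every container is busy at the same time:

# two containers, both trying to use 100% of the CPU:
#   A: 1024 shares -> 1024 / (1024 + 512) ≈ 67% of the CPU time
#   B:  512 shares ->  512 / (1024 + 512) ≈ 33% of the CPU time
# if B goes idle, A may use up to 100% regardless of the ratio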

Example: managing the CPU shares of a container

As I mentioned before, you can use the -c switch to manage the value of shares assigned to all processes running inside a Docker container.

Since I have 4 cores available on my machine, I’ll tell stress to use all 4:

$ docker run -it --rm stress --cpu 4
stress: info: [1] dispatching hogs: 4 cpu, 0 io, 0 vm, 0 hdd

If we start two containers the same way, both will use around 50% of the CPU. But what happens if we modify the CPU shares for one container?

$ docker run -it --rm -c 512 stress --cpu 4
stress: info: [1] dispatching hogs: 4 cpu, 0 io, 0 vm, 0 hdd
[Figure: CPU usage of the two containers after changing the shares]

As you can see, the CPU is divided between the two containers in such a way that the first container uses ~60% of the CPU and the other ~30%. This is the expected result: the shares are in a 2:1 ratio (1024 to 512), so the busy CPU time is split roughly two-thirds to one-third.

Note

The missing ~10% of the CPU was taken by GNOME, Chrome and my music player, in case you were wondering.

Attaching containers to cores

Besides limiting shares of the CPU, we can do one more thing: we can pin the container’s processes to a particular processor (core). To do this, we use the --cpuset switch of the docker run command.

To allow execution only on the first core:

docker run -it --rm --cpuset=0 stress --cpu 1

To allow execution only on the first two cores:

docker run -it --rm --cpuset=0,1 stress --cpu 2
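If you want to confirm the pinning through the cgroup filesystem, the cpuset controller exposes the allowed cores. A sketch assuming the systemd cgroup driver ($CID is the full container ID, as above):

$ cat /sys/fs/cgroup/cpuset/system.slice/docker-$CID.scope/cpuset.cpus
# prints the allowed cores, e.g. 0-1 for the second command above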
