There are many tools that can be used to monitor Docker, as explained in a previous post, but it can be useful to keep a history of the CPU and memory used by each individual container.
We will use Elasticsearch and Python to build our dashboard.
The Python Code
There is a nice library that your Python code can use directly to reach the Docker daemon. This library simply connects to the Docker socket and implements a few functions that interact with Docker via REST calls.
```python
import docker

client = docker.from_env()
```
Once the client is started, we simply enumerate the containers and push each container's stats into Elasticsearch.
```python
for container in client.containers.list(all=True):
    print("%s=>%s" % (container.status, container.id))
    stats = container.stats(decode=True, stream=False)
```
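The `stats` dictionary returned by `container.stats()` follows the Docker stats API format, and a CPU percentage can be derived from the deltas it contains, the same way the `docker stats` CLI does. A minimal sketch (the field names come from the Docker API; the helper function itself is ours, not part of the library):

```python
def cpu_percent(stats):
    """Compute CPU usage (%) from one Docker stats API snapshot.

    Same delta formula as the `docker stats` CLI: container CPU delta
    over system CPU delta, scaled by the number of cores.
    """
    cpu = stats["cpu_stats"]
    precpu = stats["precpu_stats"]
    cpu_delta = cpu["cpu_usage"]["total_usage"] - precpu["cpu_usage"]["total_usage"]
    system_delta = cpu["system_cpu_usage"] - precpu["system_cpu_usage"]
    if system_delta <= 0 or cpu_delta < 0:
        return 0.0
    # Newer daemons report online_cpus; older ones expose per-CPU usage lists.
    cores = cpu.get("online_cpus") or len(cpu["cpu_usage"].get("percpu_usage", [1]))
    return cpu_delta / system_delta * cores * 100.0

# Example with a truncated, hypothetical snapshot:
sample = {
    "cpu_stats": {"cpu_usage": {"total_usage": 400},
                  "system_cpu_usage": 2000, "online_cpus": 2},
    "precpu_stats": {"cpu_usage": {"total_usage": 200},
                     "system_cpu_usage": 1000},
}
print(cpu_percent(sample))  # 40.0
```

Storing a derived percentage like this alongside the raw counters makes the Kibana graphs much simpler to build.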
The full code can be found here.
Note that this code creates the index template “docker_stats*” in Elasticsearch.
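A common pattern with such a template is to write each snapshot into a daily index, with a timestamp field for Kibana. A sketch under assumptions (the index naming and document layout here are illustrative, not necessarily what the linked code does):

```python
import datetime

def daily_index(prefix="docker_stats", when=None):
    """Build a daily index name matching the docker_stats* template."""
    when = when or datetime.datetime.utcnow()
    return "%s-%s" % (prefix, when.strftime("%Y.%m.%d"))

def stats_document(name, status, stats, when=None):
    """Flatten one container stats snapshot into an Elasticsearch document."""
    when = when or datetime.datetime.utcnow()
    return {
        "@timestamp": when.isoformat(),
        "container": name,
        "status": status,
        "memory_usage": stats.get("memory_stats", {}).get("usage", 0),
    }

# Actually indexing requires a reachable cluster, e.g.:
#   from elasticsearch import Elasticsearch
#   es = Elasticsearch(["esnode1:9200"])
#   es.index(index=daily_index(),
#            body=stats_document(container.name, container.status, stats))
print(daily_index(when=datetime.datetime(2017, 1, 31)))  # docker_stats-2017.01.31
```

Daily indices keep each index small and make it easy to drop old monitoring data with a curator job.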
Using the code in the container
In order to create the container, we need to write a Dockerfile as shown below:
```dockerfile
FROM python:3.3
MAINTAINER snuids
COPY ./bin/ /opt/monitordocker
RUN pip install docker
RUN pip install elasticsearch
WORKDIR /opt/monitordocker
CMD ["python", "monitordocker.py"]
```
You can then use your container in a docker-compose file as follows:
```yaml
##############################
monitordocker:
  image: snuids/monitordocker:v0.3
  container_name: monitordocker
  links:
    - esnode1
  environment:
    - ELASTIC_ADDRESS=esnode1:9200
    - PYTHONBUFFERED=0
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
```
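Inside the container, the script can pick up the `ELASTIC_ADDRESS` variable to locate the cluster. A minimal sketch (the variable name comes from the compose file above; the parsing helper is ours):

```python
import os

def elastic_hosts(default="localhost:9200"):
    """Read the cluster address (host:port) set by docker-compose,
    falling back to a local node when the variable is absent."""
    return [os.environ.get("ELASTIC_ADDRESS", default)]

os.environ["ELASTIC_ADDRESS"] = "esnode1:9200"  # as set by docker-compose
print(elastic_hosts())  # ['esnode1:9200']
```

Mounting `/var/run/docker.sock` is what lets the monitoring container talk to the host's Docker daemon, so the container effectively has full control over Docker; keep that in mind on shared hosts.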