
12 Oct 2015

Using Grafana with InfluxDB and Docker

Now that I have InfluxDB running as part of my infrastructure, I need some way to display nice dashboards for the data I collect.

The best tool I can find for this job is Grafana. It has an active community, great graphing capabilities, excellent InfluxDB support, and the project even produces a good Docker image.

Running Grafana

docker run -d --restart=always -p 3000:3000 --link influxdb:influxdb -v /var/lib/grafana:/var/lib/grafana -v /var/log/grafana:/var/log/grafana -v /etc/grafana:/etc/grafana --name grafana grafana/grafana

Running Grafana is as simple as running the grafana/grafana container. The additional options are:

  • To link to the influxdb container
  • To mount volumes for the data, logs and config
  • To expose the container on port 3000

Once the container is running, navigate to http://host:3000 and log in with admin/admin.

The Grafana documentation is great, and Grafana itself is very configurable: either by placing a config file in /etc/grafana or by setting environment variables.
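
For example (a sketch; GF_SECURITY_ADMIN_PASSWORD is one of Grafana's documented GF_<section>_<key> variables), the default admin password can be overridden when starting the container:

docker run -d --restart=always -p 3000:3000 --link influxdb:influxdb -e "GF_SECURITY_ADMIN_PASSWORD=secret" --name grafana grafana/grafana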

Upgrading Grafana

Since the container is stateless (all state is in volumes outside of the container), it can just be stopped, a new image pulled and a new container created.
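
For example, reusing the run command from above:

docker pull grafana/grafana

docker rm -f grafana

docker run -d --restart=always -p 3000:3000 --link influxdb:influxdb -v /var/lib/grafana:/var/lib/grafana -v /var/log/grafana:/var/log/grafana -v /etc/grafana:/etc/grafana --name grafana grafana/grafana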

Backing Up Grafana

Since I'm not using a custom config file, I just need to back up the /var/lib/grafana directory, which contains the database storing the dashboards that I create.

e.g. tar -czvf grafana.$(date +%Y-%m-%d-%H.%M.%S).tgz /var/lib/grafana
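
Restoring is just the reverse (a sketch; the backup filename here is hypothetical). Since tar stores the paths relative to /, extracting with -C / puts the files back in place:

docker stop grafana

tar -xzvf grafana.2015-10-12-00.00.00.tgz -C /

docker start grafana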

 

10 Oct 2015

Deploying InfluxDB using Docker

There are a number of datastores aimed at the system monitoring space. These typically focus on storing time-series data. Currently, key players include Graphite, InfluxDB and Prometheus.

These systems vary in scope; some focus entirely on storing the data whilst others add features for graphing and alerting. The Prometheus project has a good overview.

I was looking for something with a simple API for getting data in, and simple installation and operational requirements were a must. I wasn't too keen on anything with too many bells and whistles; I'd rather use the best tool for each job. If that meant using a different system for graphing or displaying the data then that was fine. Likewise for alerting.

Despite the wide support for Graphite, I settled on InfluxDB. It ticked the usual boxes of being modern, well supported and actively developed. But crucially, I could dump data into it using curl, and there was a good Docker image for deploying it. The latest Tutum image lacks clustering and SSL support, but neither of those features is required here.
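
For example, once the container below is running, a single point can be written with one curl call against InfluxDB's line-protocol write endpoint. This is just a sketch; the cpu_load measurement and host tag are made-up names:

curl -i -XPOST 'http://localhost:8086/write?db=tomdee' --data-binary 'cpu_load,host=server01 value=0.64'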

Running InfluxDB

Creating a container running InfluxDB is as simple as

docker run -d --volume=/var/influxdb:/data --restart=always -e PRE_CREATE_DB="tomdee" -p 127.0.0.1:8083:8083 -p 127.0.0.1:8086:8086 --name influxdb tutum/influxdb

This stores the data in a volume (/var/influxdb), ensures the container is restarted if it crashes or the server reboots, exposes the management APIs on localhost only, and creates an initial database.
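
As a quick sanity check that the database was created, the HTTP API can be queried directly:

curl -G 'http://localhost:8086/query' --data-urlencode 'q=SHOW DATABASES'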

Managing InfluxDB

The influx CLI is available with docker exec -ti influxdb /opt/influxdb/influx

The management UI and REST API are available over HTTP on ports 8083 and 8086 respectively. To connect to them remotely, an SSH tunnel can be used:

ssh -L 8086:localhost:8086 user@example.com
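
With the tunnel up, queries can then be issued as if the API were local (a sketch; cpu_load is the made-up measurement from earlier):

curl -G 'http://localhost:8086/query' --data-urlencode 'db=tomdee' --data-urlencode 'q=SELECT * FROM cpu_load LIMIT 5'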

Upgrading InfluxDB

The original container can be stopped and removed (since the data is in a volume).

docker rm -f influxdb

Then a new image can be pulled and a new container started (without the need to create the database, since it already exists):

docker pull tutum/influxdb

docker run -d --volume=/var/influxdb:/data --restart=always -p 127.0.0.1:8083:8083 -p 127.0.0.1:8086:8086 --name influxdb tutum/influxdb

Backing up InfluxDB

Although InfluxDB has a hot backup feature, I found it produced zero-length files. Not good. Instead, I'm happy just taking a copy of the data on disk. If I were worried about data consistency, I could pause the Docker container whilst doing this, as sketched below.

e.g. tar -czvf influxdb.$(date +%Y-%m-%d-%H.%M.%S).tgz /var/influxdb
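
The consistent variant just wraps the same copy in a pause/unpause (a sketch using the standard docker commands):

docker pause influxdb

tar -czvf influxdb.$(date +%Y-%m-%d-%H.%M.%S).tgz /var/influxdb

docker unpause influxdb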

 

Running InfluxDB on a single server for simple monitoring applications is easy and the operational requirements are low. Bugs in the backup code are a concern but at my scale I'm happy taking the risk.