
Docker secrets management

A more secure way

So if you've had any experience with docker, you'll know the awesome power of having an image you can deploy into multiple environments, passing in different configuration options each time.  Typically this is done with environment variables.

For example, you could have a bunch of feature flags you can turn on and off and when you deploy into production you disable all the dev tools, enable encryption or tighter security features, etc. etc.

Probably one of the most important configuration options will be things like database credentials or other secrets.

Docker Inspect

I don't mind admitting the first time I ran docker inspect and saw ALL my secrets and credentials there in plain text, I was a little alarmed.
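To see what I mean, anyone with access to the docker socket can do this (the container name here is just a placeholder):

# Print the container's environment in plain text
docker inspect --format '{{ .Config.Env }}' my-container
# [DB_PASSWORD=supersecret API_KEY=abcdefg ...]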

Now, this container runs on a server that only the IT team have access to, so in theory it should still be safe, but I didn't like it.  In an ideal world humans never even see the credentials; they should all be hidden away in encrypted secrets vaults and the like.

So I started furiously reading around for solutions, and quite frankly I didn't find anything that seemed any better.  The main alternative proposed is to mount a secrets file directly into the container.  But in my mind this isn't much better, because now you are just exposing the secrets somewhere else - on the hard drive.

Surely there has to be a better way.

Secrets managers?

My first line of thinking was to put all the secrets into a vault like 1Password, HashiCorp Vault, Infisical etc.  But you wind up with the same problem, which is: how do you authenticate with the secrets manager?

So that API key or whatever still has to be passed into the docker container so it can make an API call and fetch the secrets.

Infisical has the ability to dump the secrets into a temporary .env file and pass that to docker, but then we are back to the docker inspect problem of dealing with env vars.

I would say one nice thing about secrets managers is the other functionality they provide like secrets rotation, environments and so on.  So I think they are a good option for managing the secrets, but you still have the issue of how to securely get them from the manager to the container runtime.

I think my solution could be adapted nicely as a secure way to pass the secrets manager API key to docker, and then use that to bootstrap the rest of the secret acquisition process over https, keeping the secret transfer secure.
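As a rough sketch of what I mean (the endpoint, token file and header here are all hypothetical, and the real call depends entirely on which secrets manager you use), the container's entry point could bootstrap itself like this:

## Read the bootstrap token mounted at boot, then wipe it immediately
VAULT_TOKEN=$(cat /tmp/.vault-token)
echo "" > /tmp/.vault-token

## Use the token to fetch the real secrets over https
curl -s -H "Authorization: Bearer ${VAULT_TOKEN}" \
    https://secrets.example.com/api/v1/secrets > /tmp/.env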

Temporary Secrets

So the solution I came up with is, I think, quite simple, and while it is constrained by the limitations of the docker technology, it does quite a good job of minimising how much the secrets are exposed.

The basic idea is to create a temporary file of secrets, mount it into the container at boot, have the container read these into the bash ENV, then wipe the file.  Pretty nifty!  Let me explain.

Temporary File

So the approach I used here is to just export the secrets to a file in the CI pipeline.  This way you can store all your precious secrets in an encrypted vault like GitHub Actions Secrets, HashiCorp Vault, AWS Secrets Manager etc.  Choose your poison.

So in GitHub Actions for example, you could do something along these lines:

steps:
  - name: Export Secrets
    run: |
      touch /tmp/.env
      echo "DB_PASSWORD=${{ secrets.DB_PASSWORD }}" >> /tmp/.env
      echo "API_KEY=${{ secrets.API_KEY }}" >> /tmp/.env

So the secrets will now, for a very short time, be living on the disk.  You can choose a suitable location for the file.

If you are going to launch the container on a remote server, how do you safely get the secrets there?  You could construct the file locally then transfer it using SCP to a known destination on the host machine.  So maybe you generate a unique ID (based on the GitHub Run ID) to namespace the file, something like that.
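For example (assuming an earlier step has already set up the SSH key, and DEPLOY_HOST is just a stand-in for your server):

  - name: Transfer Secrets
    run: scp /tmp/.env deploy@${{ secrets.DEPLOY_HOST }}:/tmp/.env-${{ github.run_id }}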

Volume mount

So then you mount the local file from /tmp/.env into a known location inside the container, like /opt/secrets/.env or /tmp/.env or wherever you decide.  It doesn't matter too much, though I'd say try to keep it consistent with whatever approach you are already using for volume mounts (a consistent naming scheme etc).

Make sure it is a read-write mount so that the docker process can remove the secrets once they've been read.
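With plain docker run, that could look something like this (the image name is just a placeholder; note there is deliberately no :ro flag, because the entry point needs write access to wipe the file):

docker run -d \
    -v /tmp/.env:/opt/secrets/.env \
    my-image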

Custom entry point

So if you override the default entry point to be your own bash script, then you can do whatever you want, before continuing with the normal boot process.

This works even for pre-built containers like nginx, or a particular framework you are using (like the official Odoo docker image for example).  You just need to know what it uses as the main entry point, which is often a custom bash script, or some executable.
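As a sketch with the official nginx image (whose stock entry point is /docker-entrypoint.sh), a docker compose service could wire in a wrapper like this.  The wrapper script name here is my own choice, it needs to be executable, and it would end by handing off with exec /docker-entrypoint.sh "$@":

services:
  web:
    image: nginx
    entrypoint: /secrets-entrypoint.sh
    command: nginx -g 'daemon off;'
    volumes:
      - ./secrets-entrypoint.sh:/secrets-entrypoint.sh
      - /tmp/.env:/tmp/.env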

First of all, once you know where the secrets are, you can simply do this:

#!/bin/bash

## Load Secrets
# set -a exports everything we source, so child processes inherit it
set -a
source /tmp/.env
set +a
# Rename any if container has specific requirements
export MYSQL_PASSWORD=${DB_PASSWORD}

## Wipe Secrets
echo "" > /tmp/.env

## Launch process
source /original-entrypoint.sh
# OR (depends on the original container architecture)
exec /usr/local/bin/original-bin "$@"

Now all your secrets will be part of the process environment, and the file will be blank.

Your application can access them as if you had passed them in as normal docker -e environment variables.

Except this time, no human ever gets to see them, they won't show up in docker inspect, and they are no longer sitting on the disk either.

I think this is an improvement.

Cleanup

If you are running in a CI pipeline, you will want to ensure you have a step like this:

  - name: Cleanup
    if: always()
    run: rm -f /tmp/.env

Or do a remote cleanup via SSH on the remote server.  However you do this, you want to make sure the cleanup step always runs, even if the docker container failed to do it.
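The remote variant could look something like this (user and host are placeholders, reusing the Run ID namespacing from earlier):

  - name: Remote Cleanup
    if: always()
    run: ssh deploy@${{ secrets.DEPLOY_HOST }} "rm -f /tmp/.env-${{ github.run_id }}"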

Local Development

So one issue is that your container now won't work locally.  You have two options here:

Makefile

Personally I use Makefiles in my projects.  This allows me to bundle up complex setup into steps with dependencies.  There are many tools like this, but Make is universal and works anywhere with few dependencies.  Sometimes the oldest tools are still the best.

But if you want to use grunt, or a custom bash script, or whatever else, do your thing.

But for me, I can create a step in my Makefile like this:

secrets:
	@rm -f ./.secrets; \
	touch ./.secrets; \
	echo "DB_PASSWORD=1234" >> ./.secrets; \
	echo "API_KEY=abcdefg" >> ./.secrets;

This type of thing.  Or you could read those values out of a .env file.  You don't want to mount your .env file directly into the secrets location in the container, because it will get wiped each time you start.
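If you already keep a local .env file, the recipe can simply copy it into place instead:

secrets:
	@cp ./.env ./.secrets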

Then you just add .secrets to your .gitignore.

So then your run section can just have secrets as a dependency:

run: secrets
	docker compose up -d

Or however you are doing things.

Different entry point

You could also just not use the custom entry point locally, and instead pass the secrets in normally, using docker compose or manually or however you like.

That way they are not getting wiped each time you run the container.

I think this really depends how much you are doing in your custom entry point and whether it is integral to the way your container boots.

You could also have a single environment variable flag to control if you are going to read and wipe a secrets file, or whether you will rely on docker environment variables.

So locally you might have SECURE_SECRETS_MODE=0, and your entry point checks this value to see whether it should perform this step.
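In the entry point, that check could be as simple as this (a sketch, using the flag from above and defaulting to secure mode):

## Only load and wipe the secrets file in secure mode
if [ "${SECURE_SECRETS_MODE:-1}" = "1" ]; then
    set -a
    source /tmp/.env
    set +a
    echo "" > /tmp/.env
fi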

Summary

Well, I hope this helps as a simple way to make things a little easier and a little more secure.

At the back of my mind I'm still thinking that if someone gets access to your server they can still do a lot of damage, so while I think this is an improvement, I do question just how valuable it is from a security perspective.

Still, it's quite an elegant solution I think, and it does keep things a lot tidier.  At the very least, devs who have access to the server (maybe for maintenance) will have a harder time extracting secrets, and things are kept more secure from an internal perspective.

You can have secrets generated by the secrets manager and stored there, with permission granted only to CI pipelines and so on, so you can get to a point where credentials are just never exposed to anyone and you are only granting access to the software that needs it.
