On-the-fly ad-hoc docker compose development stack

The problem

In this blog post we are going to discuss an application stack that uses MongoDB, Redis and NodeJS. We want a single docker compose command that launches the entire stack, yet we still want the developer to be able to use his/her favorite IDE to edit the source code without having to rebuild docker images. We call this an on-the-fly ad-hoc stack because it does not involve building any docker images or hosting a docker registry; the reason for this is a better developer experience (DX).

Docker compose

The stack is described in a single YAML file in the Docker Compose format, tracked by git alongside the rest of the source code. The same YAML can be used to deploy to a Docker Swarm cluster, but in this blog post we focus on the developer experience (DX). The desired experience is just “git pull” followed by “docker-compose up”, as in the two-command loop shown after the list below. When possible we are going to use Alpine-based images because they are very small.

  • just “git pull”, no image rebuild, no “docker pull” of images
  • containers are immutable: they are marked read-only
  • logs are sent to stdout/stderr, not to files
  • run-time writable directories are either mounted volumes or volatile tmpfs mounts (/tmp, /run, /var/run)
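
In practice the day-to-day development loop is just these two commands; everything else stays inside containers:

git pull
docker-compose up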

Backing Services

Backing services in our case are Redis and MongoDB. We need MongoDB’s data to persist, that’s why we attach a data volume at /data/db/ in the MongoDB container. We need to activate keyspace notifications in Redis, that’s why we pass “--notify-keyspace-events Ex” to the container. The docker-compose.yml file looks like this so far:

version: '2'
services:
  mongo:
    read_only: true
    image: mongo
    volumes:
    - ./data/mongo/db:/data/db
    - ./data/mongo/configdb:/data/configdb
    tmpfs:
    - /tmp
    - /var/run
    - /run
  redis:
    read_only: true
    image: redis:alpine
    command: ["redis-server", "--notify-keyspace-events", "Ex"]
    volumes:
    - ./data/redis:/data
    tmpfs:
    - /tmp
    - /var/run
    - /run

There is no magic in the above docker-compose YAML: we just name the image (from Docker Hub), mark the container as read-only with “read_only: true”, attach volumes to the data paths (you need to read the image’s Dockerfile to identify those paths), and declare generic volatile temporary directories in /tmp, /run and /var/run. A quick way to check that the containers really are immutable is shown right after the list below.

Our compose file specifies:

  • version: to specify docker compose file format version
  • services: under which we are going to define our containers
  • read_only: declare the container’s root filesystem to be read-only
  • image: specify which image to use (redis:alpine in one of the cases)
  • command: override the default command to pass custom options
  • volumes: mount directories inside the container to persist data
  • tmpfs: volatile writable directories
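
To verify that the read-only setup behaves as expected, the standard docker-compose commands are enough (mongo and redis are the service names defined above):

docker-compose up -d mongo redis        # start only the backing services
docker-compose exec redis touch /x      # fails: read-only file system
docker-compose logs -f redis            # logs arrive on stdout/stderr, not in files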


The code itself

For the code we are going to use the “node:8-alpine” image, mount the code (the current working directory) at “/usr/src/app/”, and then execute “node index.js”. To prevent permission errors we tell docker to run it as my usual user (the one running the IDE), which in my case has UID=1000. To make the code reload automatically I use nodemon to watch for changes and restart the application. By convention we add these commands to package.json so that we can run them using “npm run web” or “npm run jobs”, like this:

  "scripts": {
    "lint": "eslint -c .eslintrc.json *.js lib/",
    "web": "nodemon -w reload.txt -w lib -w index.js index.js",
    "jobs": "node jobs.js"
  },

To make sure the npm cache works properly we need an .npmrc with “cache=npm_cache”, which points the npm cache at a persisted directory, “./npm_cache”.
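
The whole .npmrc is that single line:

cache=npm_cache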

To pass configuration we use environment variables, which we may also keep in an env file. Here is the full docker-compose.yml file. The most important variable is “XDG_CONFIG_HOME=/usr/src/app/.config/”: some NPM packages want to store data under “~/.config/”, which won’t work here because the home directory is either read-only or temporary; this way the config lives inside the code tree.

version: '2'
services:
  mongo:
    read_only: true
    image: mongo
    volumes:
    - ./data/mongo/db:/data/db
    - ./data/mongo/configdb:/data/configdb
    tmpfs:
    - /tmp
    - /var/run
    - /run
  redis:
    read_only: true
    image: redis:alpine
    command: ["redis-server", "--notify-keyspace-events", "Ex"]
    volumes:
    - ./data/redis:/data
    tmpfs:
    - /tmp
    - /var/run
    - /run
  web:
    read_only: true
    image: node:8-alpine
    command: ["npm", "run", "web"]
    user: "1000"
    working_dir: /usr/src/app
    env_file:
    - docker-compose-env.rc
    environment:
      XDG_CONFIG_HOME: /usr/src/app/.config/
      REDIS_HOST: redis
      MONGO_URL: mongodb://mongo:27017/gulpin_db2
    tmpfs:
    - /home/node/
    - /tmp/
    - /var/run/
    - /run/
    volumes:
    - .:/usr/src/app
    ports:
    - 3000:3000
    depends_on:
    - redis
    - mongo
    links:
    - mongo
    - redis
  jobs:
    read_only: true
    image: node:8-alpine
    command: ["npm", "run", "jobs"]
    user: "1000"
    working_dir: /usr/src/app
    env_file:
    - docker-compose-env.rc
    environment:
      XDG_CONFIG_HOME: /usr/src/app/.config
      REDIS_HOST: redis
      MONGO_URL: mongodb://mongo:27017/gulpin_db2
    tmpfs:
    - /home/node/
    - /tmp
    - /var/run
    - /run
    volumes:
    - .:/usr/src/app
    links:
    - mongo
    - redis
    - web
    depends_on:
    - redis
    - mongo

Here we have used the following docker compose features:

  • user: indicates which user the command runs as; it should be the same $UID you are using to run your favorite IDE
  • working_dir: before starting the command, change the current working directory into the project directory (a plain “cd”)
  • env_file: a file that contains additional environment variables (a hypothetical example follows this list)
  • environment: another way to pass environment variables
    • make sure we pass the hostnames of the backing services, in our case redis and mongo (these are the service names in the compose YAML file)
  • “volumes: .:/usr/src/app” is how we pass the current working directory of the host (.) as the project directory used inside the container to run the service (/usr/src/app)
  • “ports: 3000:3000” exposes the container’s port 3000 (Node.js Express) on the host, so one can access it from a browser running on the host machine
  • “depends_on” makes compose start the named services before this one (it controls start order; it does not wait for the services to be fully ready)
  • “links:” names of the services to be linked to our containers (so that web and jobs can access redis and mongo)
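
The contents of docker-compose-env.rc are project specific and not shown here; it is just a list of KEY=value lines, something along these (purely hypothetical) lines:

NODE_ENV=development
SMTP_HOST=smtp.example.com
API_TOKEN=change-me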

To read environment variables we use something similar to the snippet below:

// Build a configuration object from defaults, letting environment variables
// override them. A key like redis_host is read from the REDIS_HOST variable.
function readenv(defaults, conf) {
    conf = conf || {};
    for (var key in defaults) {
        if (!defaults.hasOwnProperty(key)) continue;
        var def = defaults[key];
        var env = process.env[key.toUpperCase()];
        // integer defaults get their overrides parsed as integers
        var isInt = typeof def === 'number' && Number.isInteger(def);
        conf[key] = (isInt ? parseInt(env, 10) : env) || def;
    }
    return conf;
}
var conf = readenv({redis_host: "redis", web_port: 3000});

This sample code takes the default configuration and overrides it with environment variables whenever they are set.
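
As a small sketch of how the resulting conf object is consumed (Express is the web framework mentioned above; the defaults here are illustrative, in the compose file they are overridden through REDIS_HOST and MONGO_URL):

var conf = readenv({
    redis_host: "127.0.0.1",
    mongo_url: "mongodb://127.0.0.1:27017/dev_db",
    web_port: 3000
});

var express = require("express");
var app = express();
app.get("/", function (req, res) {
    res.send("ok");
});
// inside the container this listens on 3000, published to the host via "ports: 3000:3000"
app.listen(conf.web_port);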

First run and “npm install”

Make sure you create all the needed directories yourself (to avoid them being created and owned by the docker daemon), then run:

docker run --read-only --rm -ti -u $UID \
  -e XDG_CONFIG_HOME=/usr/src/app/.config \
  -v $PWD/npm_cache:/.node-gyp \
  -v "$PWD":/usr/src/app \
  -w /usr/src/app \
  --tmpfs /tmp/ \
  node:8 npm install --dev

For some reason we had to mount a volume at “/.node-gyp”, which is needed when some NPM packages require compilation.

Here we have used:

  • -u to pass user
  • -e to pass environment variables
  • -v to attach volumes
  • -w to change working directory
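
Putting it together, a first run on a fresh clone might look like this (the directory names come from the volumes and the npm cache settings above):

# create host-side directories first so they are owned by your user, not by root
mkdir -p data/mongo/db data/mongo/configdb data/redis npm_cache .config
# install dependencies once with the "docker run ... npm install" command above
docker-compose up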


And that’s all, folks!