Dockerizing Node Services

If you haven't jumped onto the Docker bandwagon just yet, you are missing the boat. It has quickly become the de facto way to build and deploy applications of all types and sizes, and for good reason: it's easy to learn and makes deploying and scaling applications significantly easier. Linux containers are lightweight, start up very quickly, and are "throw away" resources. Best of all, Node.js applications are a breeze to get running in containers.

Set Up An App

The most common and easiest way to create Docker images is to use a Dockerfile. Much like a Makefile, Rakefile, or Jakefile, a Dockerfile is a simple set of instructions used to create the base image for your application container. To start, let's create a simple Hapi.js application.

mkdir -p node-dockerapp  
cd node-dockerapp  
npm init  
npm install hapi --save  
touch index.js Dockerfile  

We now have a couple of empty files: an index.js, which will serve as the entry point for the container when it runs, and a Dockerfile, which we will get to in a minute. Let's create a simple HTTP application in the index.js file:

var hapi = require('hapi')  
  , server = new hapi.Server()
  , PORT   = process.env.PORT || 3000
  ; 

server.connection({  
    host:'0.0.0.0'
  , port:PORT
  , labels:['api']
});

server.route({  
    method:'get'
   ,path:'/'
   ,config:{}
   ,handler: function( request, reply ){
       reply({
           status:200,
           message:'hello world',
           date: ( new Date() ).toISOString()
       });
   }
})

server.start(function( err ){  
    if( err ){
        console.error( err.message );
        console.error( err.stack );
        process.exit( 1 )
    }

    console.log( 'server running at: %s', server.info.uri );
})

That's it for our example app. A single route that returns the current date and a friendly message. Let's get it running in a container! To do that we need to:

  • Pick an OS
  • Get our code into the Container
  • Install the app
  • Specify the main command

Make A Docker Image

A Dockerfile specifies the steps needed to create a Docker image: the operating system, system packages, binaries, and tools. This, along with your application code, is everything that is needed to run your app in isolation. Much like a Makefile yields an executable, a Dockerfile yields an application image.

Image ['im]-ij -n, --noun.

  1. a template for an application instance
  2. a physical likeness or representation of a person, animal, or thing
  3. a mental representation; idea; conception

Our Dockerfile should look something like this:

# Dockerfile
FROM alpine:3.2  
RUN apk update && apk upgrade  
RUN apk add nodejs

ADD . /opt/app  
WORKDIR /opt/app 

ENV NODE_ENV=production  
ENV PORT=3000

RUN npm install

EXPOSE 3000  
CMD ["node", "--harmony", "index.js"]  

Dockerfile ['Dok]r-fahyl -n, --noun.

  1. a text file containing all the instructions to assemble an image

This is a rather simple and common Dockerfile for Node applications. Let's break it down.

OS

FROM alpine:3.2  

This line specifies another image to use as the baseline for the image we are about to build. In most cases, this will be a Linux distribution of your liking. I've chosen Alpine, which is a tiny Linux install and will keep our image size to a minimum.

RUN apk update && apk upgrade  
RUN apk add nodejs  

RUN tells Docker to run a command in a subshell. apk is the package manager for Alpine, and we are using it to install Node. This syntax is similar to doing:

/bin/sh -c "apk add nodejs"

Be careful: Docker uses /bin/sh, which is not bash. If you want bash, you will need to install it and execute it explicitly.
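If you do want bash inside an Alpine-based image, one approach (a sketch, assuming the apk package is named bash) is to install it and then invoke it explicitly using RUN's exec form, which bypasses /bin/sh:

```dockerfile
RUN apk add bash
# exec form runs bash directly instead of /bin/sh -c
RUN ["/bin/bash", "-c", "echo building with bash"]
```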

Installing

ADD . /opt/app  
WORKDIR /opt/app  
RUN npm install  

ADD does much of what you would think: it adds things into your container. Here we are taking the current directory ( . ), adding it into the container at /opt/app, and moving the current working directory to that directory using WORKDIR. All commands from here on will run in that directory context. Then we use npm to install the project dependencies.
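Because ADD . copies everything in the build context, it is common to add a .dockerignore file next to the Dockerfile so local artifacts don't end up in the image and npm install runs cleanly inside the container — a minimal sketch:

```
# .dockerignore - paths excluded from ADD/COPY
node_modules
npm-debug.log
.git
```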

Environment Variables

ENV NODE_ENV=production  
ENV PORT=3000  

ENV sets default environment variables for containers created from this image. They can be overridden at run time, so this isn't strictly required — variables not defined in the Dockerfile can still be set when the container starts.
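For example, those defaults can be overridden per container with the -e flag at run time — a sketch, assuming the image name built later in this post and a local Docker daemon:

```shell
# run with different settings than the ENV defaults baked into the image
docker run -e NODE_ENV=development -e PORT=8080 -p 8080:8080 dockerapp:0.0.1
```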

PORTS

EXPOSE 3000  

EXPOSE informs Docker that the container listens on the specified network ports at run time. This is internal to the container and Docker's network; it doesn't make the port accessible on the host machine. You can list more than one port if your app listens on more than one.
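If the app listened on more than one port — say the HTTP port plus a hypothetical metrics port — you would list both:

```dockerfile
# both ports are reachable on Docker's internal network
EXPOSE 3000 9000
```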

Execution

CMD ["node", "--harmony", "index.js"]  

CMD sets the default command to run when the container starts and no command is specified. The first element of the array is the executable to invoke; everything after it is an argument passed to that executable. Be careful: this does not execute in a subshell, which means there is no processing or expansion of the parameters in the array. Things like $PWD or $HOME will not be replaced, so you need to be pretty specific here. A good rule of thumb is to keep this simple and let the application do any logic or processing that needs to happen. Making the app configurable can go a long way toward making your containers more user friendly and less of a black box.

Important - Your containers will stop and exit when the process you've specified in CMD exits. This means you cannot use it to start a daemon process; whatever you put here must run in the foreground. You could, however, use it to execute a shell script that starts a number of daemons and then a single foreground process. That is legal.
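That pattern can be sketched as a small start script used as the CMD — the background daemon here is a placeholder, not a real command:

```shell
#!/bin/sh
# start.sh - hypothetical container entry script
some-daemon &                  # placeholder: a background helper process
exec node --harmony index.js   # foreground process; the container lives as long as this does
```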

Build

Now that we have a Dockerfile, we can build an image with the docker build command, using the -t flag to give it a name and a tag ( version ):

docker build -t dockerapp:0.0.1 .  
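You can confirm the build landed by listing images for that repository name (requires a local Docker daemon):

```shell
docker images dockerapp
```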

Container kuh n-['tey]ner -n, --noun.

  1. a virtual environment that has its own CPU, memory, block I/O, network, etc - Not a virtual machine
  2. a large, van like, reusable box for consolidating smaller crates or cartons into a single shipment

Done! If all goes well, we should have a new Docker image built and ready to ship. We can verify that it all worked by spinning up a container instance from our image using docker run. With the run command you will typically use the -e flag to set environment variables in the container and the -p flag if you want to expose ports on your host machine:

docker run --name app -it --rm -p 3000:3000 dockerapp:0.0.1  

  • -it creates an interactive tty ( so you can type and keep the container running )
  • --rm deletes the container when it stops

Now we can use something like curl to hit our app running on port 3000:

curl http://0.0.0.0:3000/  
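Given the handler in index.js, the response should look something like this, with the current date in place of the placeholder:

```
{"status":200,"message":"hello world","date":"..."}
```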

You can also poke around inside a running container with a shell using the exec command. This is really helpful for debugging before you ship or publish your images:

docker exec -it app /bin/sh  

Containers are lightweight, start fast, are configurable, and are throw away. While this is a really simple use of Docker and containers, I use containers to run multiple versions of complex application stacks, replicate production bugs, compile and build npm packages, manage deployments, and more. Try it, have fun with it!
