Dockerizing a MEAN-stack application

I created a small movie application that searches for movies in an open database and lets me save my favourite movies to a local database. I decided to dockerize this application and share my experience of the process: motivations, a bit about Docker in general and the Docker image for the Node/Express server. This is Part 1.

The source code of the project can be downloaded from my GitHub page. I’ll often refer to the files and folders, and it might be a good idea to at least open the repo in a browser window.

Background

As a side project, I created a movie application which is very loosely based on Netflix in that I can search for movies in an open database (it’s free, but an API key is needed to use it) and then save the selected movies to retrieve them later. The details of the movies (actors, plot etc.) are displayed in a modal on a button click.

Motivation

I wanted to simulate a production-like environment on my computer for this app. Docker is very popular and it provides the tools to achieve this goal.

When I decided to dockerize the project, I ran into some concepts I wasn’t really familiar with. One of these was establishing the communication between the client and the server side of the application inside the Docker network.

So I set up an nginx server for the front-end and connected it to the API container.

Unfortunately, the documentation on either side didn’t say much about how to build a bridge between the two. I spent long hours researching until I figured out how to set up the whole stack correctly.

I don’t think my situation is unique, and I’m surely not the only one who wants to dockerize a MEAN-stack application, so I decided to write a post on the exact steps.

Structure of the project

As I mentioned above, the project is built on the MEAN stack. That is, the front-end (client) side is written in Angular 6, a Node/Express server provides the API, and the favourite movies are stored in a MongoDB database.

The code was written in TypeScript. The reason for this is that the client side is in Angular (which also uses TypeScript), and it’s nice to have the same language throughout the project.

TypeScript also provides type checking before the code runs, so a lot of hard-to-find bugs can be prevented by using it.

This was, of course, not compulsory, and the server side could have been written in plain JavaScript. But the more familiar I get with TypeScript, the more I appreciate the value it provides when I write code.

Environments

That brings me to the question of various environments for the project.

The post is about setting up a production-like environment. It’s production-like because the app still runs on localhost, so it doesn’t have everything that a production-ready app needs.

I didn’t set up the full Docker stack for development, even though one of the main advantages of Docker is being able to simulate the production environment while developing the application.

The reason for this is that I decided to dockerize the application when development was 98% done, and I didn’t want to change too many things at that point.

This doesn’t change much about the dockerization process, but if you want to use Docker for development, you will, for example, need to set up volumes in your API container. Volumes ensure that any changes made to the application files on your computer are carried over to the container. This isn’t covered in this post for the reason mentioned in the last paragraph, but here’s a quick sketch of the idea.
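
To illustrate, a bind mount for development might look something like this; the paths and the image name are placeholder assumptions, not taken from this project:

# hypothetical example: mount the local server folder into the container
# so that code changes on the host are immediately visible inside it
docker run -v "$(pwd)/server:/usr/src/app/server" movie-api-dev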

Instead, I used the ts-node-dev package as a development server.
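
For reference, a development server based on ts-node-dev can be started with a one-liner like this; the entry-file path is an assumption for this example:

# hypothetical dev command; ts-node-dev recompiles and restarts the server on .ts changes
npx ts-node-dev --respawn server/src/server.ts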

The third environment is set up for testing. I only created functional tests for the API, so I needed a database for local testing. I made a bash script that starts MongoDB in Docker, and the npm test script works for both local testing and continuous integration with Travis.
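
My script isn’t reproduced here, but a minimal version of it could be as simple as this; the container name and the port mapping are assumptions:

# hypothetical sketch: spin up a throwaway MongoDB container for the tests
docker run -d --rm --name test-mongo -p 27017:27017 mongo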

Development and test environments as well as setting up Travis for continuous integration are not part of this post as the focus is now on Docker.

Folder structure

I created two folders, one for the server (server) and one for the client (public) files.

The public folder has the Angular code where the scaffolding was done with the help of the Angular CLI. As such, this folder has its own package.json file with dependencies related to the client side of the application.

The server folder (not surprisingly) contains the backend code, and it also has its own package.json with the related modules and scripts.

The root folder contains the Docker- and nginx-related files, so every path defined in these files is relative to the root folder.

Advantages of Docker

There are a lot of them! I’ll just mention a few here that drove the dockerization of this project.

First, Docker lets us simulate the production environment locally. Although my app is just a side project and it won’t ever get deployed live, it was great to play around with a possible deployment stack.

Second, I didn’t have to install everything on my computer. OK, Node.js was already installed, but MongoDB and nginx weren’t. Docker made it possible for me to use these programs without placing hundreds or thousands of megabytes on my hard drive.

Third, Docker gave me the option to start the whole stack with just one command. Without Docker, I would have had to start and manage the Node.js server, the nginx server and the MongoDB database separately, in multiple terminal windows, which is a pain in the back. I just wanted to keep it nice and simple.

In the next few paragraphs, I’ll walk you through the steps I took to create a working full-stack MEAN app in Docker.

Dockerize the Node/Express API

The first task was to dockerize the Node.js server.

The image could be built from scratch, but why bother with that when the folks at Node.js have already done it for us?

Lots of Node.js images are available on Docker Hub, and I used the latest official version, as can be seen in the Dockerfile.

A Docker image is a layered, executable package that contains everything an application needs to run. A container is a running instance of that application, based on the image.

A Dockerfile is the foundation of a Docker image. It describes how the image should be built and what actions (installations, folder creation, commands etc.) need to be taken to create our custom image.

So the first step was to create a file called Dockerfile in the root folder. Mine looks like this:

FROM node:10-alpine

LABEL description="API for the movie app built with TypeScript, MEAN and Docker."

ARG mongodb_container
ARG app_env

ENV NODE_ENV=$app_env
ENV MONGO_URI="mongodb://${mongodb_container}:27017/db-${app_env}"

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

RUN mkdir server
COPY ./server/package.json .
RUN npm install --production

COPY ./server/lib/. ./server

CMD ["npm", "start"]

Let’s go over the content of the file step by step. A more detailed explanation of Dockerfiles can be found in the Docker documentation, which I found to be really useful and easy to read.

FROM: This instruction specifies which image (i.e. application version) is used as the base for the server; in this case it’s the latest LTS Node.js (version 10).

I used the alpine variant of the official image available on Docker Hub. Alpine is good because it only contains the bare necessities, so the image size is small.

LABEL: Just the description of the image, not compulsory.

ARG: These are build-time arguments, which means they are only available while the image is being built; containers don’t have access to them. I use them to supply the values of the environment variables (mongodb_container and app_env).

ENV: The environment variables that are available inside the container, in this case NODE_ENV and MONGO_URI. The mongodb_container and app_env arguments receive their values when the image is built (more on this in Part 2). This way both environment variables will be available to the Node.js server with the given values.

app_env will refer to the environment (it’s only production in my case), while the mongodb_container variable will be replaced - it’s shocking, but true - with the name of the MongoDB container (in Part 2).
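
As a quick preview, the build arguments are passed in with the --build-arg flag when the image is built. The image tag and the MongoDB container name below are just placeholders until Part 2:

# hypothetical example; the real MongoDB container name is defined in Part 2
docker build \
  --build-arg mongodb_container=movie-db \
  --build-arg app_env=production \
  -t movie-api .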

A really good description and comparison of ARG and ENV can be found in this great post.

Once all of these are declared, a folder is created inside the Docker container (RUN mkdir -p /usr/src/app). I didn’t come up with the name and path of the folder myself but got it from the Node.js image description on Docker Hub.

The WORKDIR instruction is really useful because it defines the working directory for the rest of the Dockerfile (or rather, until it’s overridden by another WORKDIR instruction). Once WORKDIR is declared, there’s no need to spell out the /usr/src/app absolute path later in the Dockerfile, because references to folders inside the Docker container will be relative to the declared working directory (/usr/src/app).

Then I create a server directory inside the working directory (see last paragraph) of the container. So, instead of writing mkdir /usr/src/app/server, it’s enough to write mkdir server. Really cool!

COPY: This is an important one. As the name of the instruction implies, we copy the package.json file from the server folder to the container. The first path, ./server/package.json, is relative to the Dockerfile, which is located in the project root folder. The second path, the . (note the space before the dot), refers to the destination inside the Docker container; in this case that’s the working directory defined by the WORKDIR instruction (/usr/src/app).

So far the working directory (/usr/src/app) inside the container looks like this:

| -- package.json
|
| -- /server

The next step is to install all dependencies by RUNning npm install --production. This command ignores all devDependencies; only the modules listed in the dependencies section will be installed.

The next COPY instruction copies the content of the lib folder inside the server directory on my hard drive to the server folder in the container.

Let’s stop here for a second.

If you look at the folder structure in the repo, you will see that the lib folder doesn’t exist. This is the folder where the TypeScript compiler writes the JavaScript files created from the .ts files.

Node.js doesn’t understand TypeScript; it needs JavaScript files. This is why we copy the content of the lib folder here and not the original .ts files.

Here’s the relevant part of the package.json file:

// ...
"scripts": {
  // ...
  "start": "node server/server.js",
  "prebuild": "rm -rf lib",
  "build": "tsc -p .",
  // ...
},
// ...

The outDir property of the tsconfig.json specifies the lib folder as the directory where the compiled files are written to. If this directory doesn’t exist, it will be created. You can read more about tsconfig.json in the official documentation for TypeScript.

npm run build needs to be run locally, and the result (the compiled files in lib) is what gets copied into the Docker container.

The build run-script could be included in the image-building process, but I didn’t do that because the app won’t ever be deployed anywhere; not much change would be needed in the Dockerfile to do so. Alternatively, a script could be written that runs every command and does the deployment as well. A few options exist, but I wanted to keep it really simple.
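
Under these assumptions, the local flow boils down to two steps; the movie-api image tag is just my placeholder:

# 1. compile the TypeScript sources locally (tsc writes the JavaScript into server/lib)
cd server && npm run build && cd ..
# 2. build the image from the project root, adding the --build-arg flags shown earlier
docker build -t movie-api .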

Last but not least, the CMD instruction runs the npm start command, which starts the server. By this time, the compiled files have all been copied to the server folder inside the container, hence the node server/server.js command for the npm start run-script in package.json.

When the container is started (more on this in the next post), this custom image will be the base for the container. The Node/Express API will run inside the container with the files and folders that I created.

The good thing is that the image doesn’t have to be recreated each time; once it’s built, a container can be started from it at any time.
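
Just to illustrate, starting a container could look like this; the names and the port are placeholders, and as noted below, the API won’t find a database yet:

# hypothetical example; this will fail to reach MongoDB until the database container exists (Part 2)
docker run --name movie-api-container -p 3000:3000 movie-api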

Conclusion

This concludes Part 1, in which I wrote about my motivations to dockerize the application and described the Dockerfile I created for the image of the Node/Express server.

This is one way of doing it, and definitely not the only way, but it’s how I did it, and it works really well with the rest of the containers running.

Right now, a container instantiated from this image will throw an error because the database is not set up yet. But this post is already long enough, so that’s it for today.

In the next part, I’ll move on to setting up the other two blocks (client side and database) of the application in Docker.

Thanks for reading and see you next time.