Dockerizing a MEAN-stack application - Part 3

The back end of the application now works, but the user interface is still missing, and without it an application (especially a movie application) is not much use. In this post, I will write about the nuances of wrapping the front end in a container and making the entire app work in Docker containers.

This is the final part of the series on how to dockerize a MEAN stack app.

Part 1 and Part 2 can be accessed by clicking the links.

The source code of the project can be downloaded from my GitHub page.

Dockerize the client

So far so good, but the application also has a client side built in Angular. It would be great if we could search for, save and delete movies through the user interface.

Communication between the front end and the back end inside containers has its nuances. The browser is not part of any container, so we can't simply put the name of the API container into the request URLs of the client-side app. That, by itself, won't work, because the browser can't resolve Docker service names; they only exist on Docker's internal networks.

So what is the solution?

First, I wanted the app to accept requests only on port 80, as discussed in the server-side part. Therefore all URLs in the Angular app should refer to localhost or 192.168.99.100 on Windows. More on that later.
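
To make this concrete, a request from an Angular service could look something like the sketch below. The service and endpoint names (MovieService, /api/movies) are only illustrative here, not the actual ones from the project:

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

// Illustrative sketch only; requires HttpClientModule in the app module.
@Injectable({ providedIn: 'root' })
export class MovieService {
  // The browser talks to port 80 on the host; nginx proxies /api to the Node API.
  // On Windows this would be 'http://192.168.99.100/api' instead.
  private baseUrl = 'http://localhost/api';

  constructor(private http: HttpClient) {}

  getMovies(): Observable<any[]> {
    return this.http.get<any[]>(`${this.baseUrl}/movies`);
  }
}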

Second, the Angular app needs to be served by a simple web server. I chose nginx, which will also act as a reverse proxy and forward any database-related request to the API.

Dockerfile for the client

I needed a dockerfile for the client as well. I created this simple one in a file called client.Dockerfile in the root folder:

FROM nginx:alpine

LABEL description="Angular client side"

# Replace the default nginx configuration with the one created in the project root
COPY nginx.conf /etc/nginx/nginx.conf

# Serve the compiled Angular app from nginx's default web root
WORKDIR /usr/share/nginx/html
COPY dist/ .

The first interesting line is the first COPY command. I created an nginx.conf configuration file in the root project folder (see below), which is copied to the relevant location inside the Docker container. Again, I took the /etc/nginx/nginx.conf path from the official nginx image description on Docker Hub.

Similarly, I define the working directory and then copy the production build from the dist folder (created with ng build --prod) into it.
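
If I want to build and try out the client image on its own (docker-compose will run the same build later anyway), something like this should do it; the image tag simply mirrors the one used in docker-compose.yml further down:

# Create the production build in dist/, then build the nginx-based image from it
ng build --prod
docker build -f client.Dockerfile -t client-side-movie-app .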

nginx configuration

The dockerfile above refers to a simple nginx.conf file, which looks like this:

events {
  worker_connections 1024;
}

http {
  types {
    text/html html;
    text/css  css;
    text/plain  txt;
    image/jpeg  jpeg jpg;
    image/png png;
    image/x-icon ico;
    image/svg+xml svg svgz;
    application/x-javascript  js;
  }

  server {
    # Serve the static files of the Angular build
    location / {
      root /usr/share/nginx/html;
    }

    # Forward API requests to the Node/Express container
    location /api {
      proxy_pass http://node-api/api;
      proxy_redirect off;
      proxy_set_header Host $http_host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
  }
}

The important configurations are in the http directive.

In the types directive, I defined the MIME types used in the application. I needed to do this so that the browser interprets the relevant files correctly; without it, nginx would fall back to its default type (text/plain) for everything.

The server directive contains the web and proxy server configurations. If the / path (i.e. localhost) is requested, the content of /usr/share/nginx/html is served. This path refers to a folder inside the Docker container and was defined as the working directory in the client-side dockerfile (see above).

All route endpoints defined in server.ts start with /api. When these endpoints are hit, the server makes the relevant query (GET, POST or DELETE) to the database. In these cases, I want to forward the client-side request to the server.
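
The actual routes live in server.ts from Part 2; purely to illustrate the /api prefix that nginx matches on, they follow this kind of pattern (the handler bodies below are placeholders, not the real Mongo queries):

import * as express from 'express';

const app = express();
app.use(express.json());

// Placeholder handlers; the real ones query MongoDB (see Part 2)
app.get('/api/movies', (req, res) => res.json([]));
app.post('/api/movies', (req, res) => res.sendStatus(201));
app.delete('/api/movies/:id', (req, res) => res.sendStatus(204));

app.listen(80);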

The second location block is a simple proxy pass, which is responsible for this forwarding. The most important part of the configuration is the proxy_pass directive.

As you can see, the requests are not forwarded to http://localhost/api. That would only work if we didn't have Docker set up for the application.

In our case, though, we need to refer to the name of the API service (the Node/Express server) as it is defined in Docker, and this is node-api (see the docker-compose.yml in Part 2). This is one of the easiest places to go wrong: if the service name and the proxy_pass host don't match, the whole application won't work.
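
If in doubt, a quick sanity check once the containers are running is to resolve the service name from inside the client container (the Alpine-based nginx image ships with BusyBox's ping):

# node-api should resolve to the API container's address on movie-network
docker-compose exec client ping -c 1 node-api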

Final docker-compose.yml

Now that the nginx server is set up on the client side, I need to create a Docker image and a container for it, so that all the pieces can work together.

I added a third service and a second network to my docker-compose.yml file, which looks like this:

version: '3'

services:
  db:
    image: mongo:latest
    container_name: mongo-movie-app-container
    volumes:
      - mongo-data:/data/db
    networks:
      - api-network
  node-api:
    build:
      context: .
      args:
        - mongodb_container
        - app_env
    image: node-express-movie-api
    container_name: node-movie-api-container
    env_file: .env
    networks:
      - movie-network
      - api-network
    depends_on:
      - db
  client:
    build:
      context: .
      dockerfile: client.Dockerfile
    image: client-side-movie-app
    container_name: client-movie-app-container
    ports:
      - "80:80"
    networks:
      - movie-network
    depends_on:
      - node-api

volumes:
  mongo-data:

networks:
  movie-network:
  api-network:

The new service is called client, which refers to the client side served by nginx (the name itself is arbitrary).

The build instruction is similar to that of node-api. The context is the root folder (.), and here I have to specify the dockerfile explicitly, because the default filename (Dockerfile) is already used for the Node server image.

I named both the image and the container to make it easier to identify them.

The ports part is important. I want the app to be accessible through one port only, so I mapped port 80 on the host (i.e. the computer, the first 80) to port 80 inside the container (the second 80, after the colon). With this, when I simply type localhost (or 192.168.99.100 on Windows) into the browser, the Angular client is displayed.

Next, I connect the client to a new network called movie-network, which includes both the Node/Express API and the front end, but not the database. The database is connected only to the API, not to the client.

This means that a movie search in the client's search box is submitted to the API (through movie-network), and the API forwards the request to the database through api-network. The server is the MongoDB database's only point of connection, so every query to the database has to go through it; the database is not directly accessible from the client.
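
This separation can also be checked with docker network inspect once the stack is running. Note that docker-compose prefixes the network names with the project name (by default the folder name), so the name below is just a placeholder:

# List the networks and see which containers are attached to each one
docker network ls
docker network inspect <project-name>_api-network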

The new network also needs to be declared as a network in the networks section.

The last instruction is depends_on: the client-side container depends on the API (node-api, the name of the API service), so Docker starts the API before the client.

Start the app

With this, the app is now fully dockerized and ready to start.

If I run the docker-compose up command from the project folder, Docker downloads and builds the images (if it hasn't already done so), and by navigating to localhost (or 192.168.99.100 on Windows) in the browser, I can now search for, save and delete movies.

As I mentioned above and in Part 2, the TypeScript code of the server needs to be compiled and the Angular production build needs to be created first, because it is this compiled output that gets copied into the containers.
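
Assuming the usual build setup, the whole sequence is roughly the following; the exact server-side build command depends on how the TypeScript compilation was configured in Part 2:

# Compile the server-side TypeScript, create the Angular production build,
# then build the images and start all three containers
tsc                  # or the project's npm build script
ng build --prod
docker-compose up --build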

Conclusion

This concludes Part 3 and the series on dockerizing a MEAN stack app.

The process itself was not particularly hard, but I encountered some minor issues that took me some time to figure out.

The key is to wire the containers together in the appropriate way, so that they can talk to each other.

I hope that this series of posts is useful and can save some time for others trying to do something similar.

Thanks for reading and see you next time.