Dev Environment With Docker Compose

When collaborating on a project, no one likes debugging an error that only occurs in one environment while everyone else shrugs, "it works on my machine". Containers solve this class of problem.

Containers bundle the code, system libraries, and everything else an application needs to run into an isolated, reliable, reusable package. That means you can run a container on any host and get the same results, whether it's running locally for development or in a large production Kubernetes cluster. In this post I will use Docker Compose to create a small application using Node.js and MongoDB. The code for this post can be found on GitHub.

Docker and Docker Compose

Docker is an open-source platform used for building, managing, and deploying containers.

Docker Compose is a tool for defining and running multi-container applications.

Prerequisites

  • Docker installed, with the Compose plugin (recent Docker versions include it)

That's it! Aren't containers awesome?

Project

As this post focuses on containerizing an application for local development, we'll treat the application itself as out of scope and throw together a small Express app in a single file. We'll run MongoDB locally so that we don't need a remote database during development.

Server

Create the project with the JavaScript package manager of your choice; I'll use pnpm here. Create a directory for your project, then:

  • set up the project: pnpm init
  • install dependencies: pnpm add express mongoose

Next we'll create our index.js. As this is not a JavaScript-focused post, I'll explain it with comments in the file rather than going over it line by line:

const express = require("express");
const mongoose = require("mongoose");

const app = express(); // create our express server
app.use(express.json()); // parse incoming JSON request bodies
// pull out some secret stuff from environment variables
const { DB_PORT, DB_URL, DB_NAME, PORT } = process.env;

// connect to MongoDB
mongoose
  .connect(`${DB_URL}:${DB_PORT}/${DB_NAME}`)
  .then(() => console.log("mongoose connected"))
  .catch((e) => console.error(`error connecting to MongoDB: ${e}`));

// create a schema
const userSchema = new mongoose.Schema({
  lastName: String,
  firstName: String,
});

// create the model
const User = mongoose.model("User", userSchema);

// add some routes
app.get("/", (req, res) => {
  res.send("Hello");
});

app.post("/user-add", async (req, res, next) => {
  const { lastName, firstName } = req.body;
  if (!lastName || !firstName) {
    res.send("please include first name and last name in request");
  } else {
    const user = new User({ firstName, lastName });
    try {
      await user.save();
      res.status(201).send(user);
    } catch (e) {
      res.status(400).send(e);
      next();
    }
  }
});

app.get("/users", async (req, res, next) => {
  try {
    const users = await User.find();
    res.send(users);
  } catch (e) {
    res.status(404).send("error finding users");
    next();
  }
});

// start the server
app.listen(PORT, () => console.log(`server listening on ${PORT}`));

Before we forget, let's create our .env file. Note that the names here don't match the variables the code reads (DB_URL, DB_PORT, DB_NAME); we'll map them onto those names in the compose file shortly:

MONGODB_DATABASE_URL=mongodb://db
MONGODB_DOCKER_PORT=27017
MONGODB_DATABASE_NAME=test
PORT=8080
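
Docker Compose will read this file for us via env_file, so the app itself never has to load it. If you also want to run the server outside a container during development, Node 20 can load an env file natively, though you'd then need the variable names to match the ones the code reads (DB_URL, DB_PORT, DB_NAME) and point DB_URL at a MongoDB you run yourself:

node --env-file=.env index.js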

Next, let's create the Dockerfile so we can containerize the application.

FROM node:20-alpine3.17

WORKDIR /app

COPY package.json .

RUN npm install

COPY . .

EXPOSE 8080

CMD ["npm", "start"]

We start with a base image, here node:20-alpine3.17. It is prudent to choose a small image such as an Alpine Linux variant in order to keep the container lightweight. The Dockerfile instructions, such as the keywords FROM, WORKDIR, COPY, and so on, are rather self-explanatory, so I won't go through what each one does. It is worth knowing, however, that each instruction adds an image layer, and layers are cached locally and reused between builds. That is why package.json is copied and npm install is run before the rest of the source is copied in: as long as package.json is unchanged, the slow install layer comes straight from the cache. The full Dockerfile reference can be found in the official docs.
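
Related to that final COPY . .: by default the build context includes everything in the project directory, node_modules and all. A small .dockerignore (a minimal example below; adjust to taste) keeps the context lean and stops your host's node_modules from overwriting the fresh install inside the image:

node_modules
.env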

Container Time!

Now we can build the Docker container. From within the application directory run:

docker build -t express-api-docker .

docker build tells Docker to build an image, and -t tags it with a name we can reference later. The trailing "." sets the build context: the directory whose contents are sent to the Docker daemon and made available to COPY instructions.
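
You could run the resulting image directly with something like the line below, but on its own the app has no database to talk to and none of the DB_* variables are set:

docker run --rm -p 8080:8080 express-api-docker

That gap is exactly what Docker Compose fills: one file that declares every container, the network between them, and their environment. Create docker-compose.yaml in the project root: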

services:
  server:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: express-api-docker
    ports:
      - "8080:8080"
    env_file: ./.env
    environment:
      - DB_PORT=$MONGODB_DOCKER_PORT
      - DB_URL=$MONGODB_DATABASE_URL
      - DB_NAME=$MONGODB_DATABASE_NAME
    depends_on:
      - db

  db:
    image: mongo:noble
    container_name: mongodb_server
    env_file: ./.env
    ports:
      - "27017:27017"
    volumes:
      - $HOME/tmp/datadir:/data/db

The upstream documentation for the Compose file format contains a full reference for every element, but let's take a quick tour of docker-compose.yaml.

We define two services, server and db. A service is a definition of a computing resource that can be scaled or replaced independently of the other components. Services can reach each other over the Compose network by service name, which is why our connection URL is mongodb://db, where db is the name of our database service.
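
You can see this name resolution in action once the stack is up; for instance, ping the database service from inside the server container (service names as defined above):

docker compose exec server ping -c 1 db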

The server service is our Express application. The build section defines how the image gets built when docker compose up runs; note the similarity with the docker build command above. The ports entry publishes a container port on the host with the syntax $HOST_PORT:$CONTAINER_PORT. Here port 8080 in the container maps to the same port on the host, but we could just as well publish it to, say, port 27189 on the host with 27189:8080. Next, the .env file is loaded and its values are mapped onto the variable names the code actually reads. Finally we say that server depends_on the db service, which ensures the db container is started before server.
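
One caveat: depends_on on its own only waits for the db container to start, not for MongoDB to actually accept connections. If you hit startup races, Compose supports healthchecks; a sketch of the relevant pieces (assuming a mongo image recent enough to ship mongosh):

  server:
    depends_on:
      db:
        condition: service_healthy

  db:
    healthcheck:
      test: ["CMD", "mongosh", "--quiet", "--eval", "db.adminCommand('ping')"]
      interval: 5s
      timeout: 5s
      retries: 5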

Next we define the MongoDB container, which rather than being built from a local Dockerfile is pulled from a container registry, in this case Docker Hub. The thing to notice here is the volumes entry, which is what lets the database's data outlive the container. It bind mounts the directory $HOME/tmp/datadir from the host into the container at /data/db, where MongoDB keeps its data files. With a bind mount like this, the host directory has to exist before the containers are brought up.
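
If you'd rather not manage a host directory at all, a named volume lets Docker own the storage location; roughly:

  db:
    volumes:
      - datadir:/data/db

volumes:
  datadir: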

Run The Application

Now we can run the application with docker compose up. Test that the endpoints are working as expected.
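
Two flags you'll reach for constantly: --build forces the image to be rebuilt after code changes, and -d runs the stack in the background. When you're done, docker compose down stops and removes the containers:

docker compose up --build -d
docker compose down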

curl localhost:8080 should return "Hello". We can add a user with

curl -X POST -d '{"firstName": "Peter", "lastName": "Parker"}' \
-H "Content-type: application/json" \
localhost:8080/user-add

{"lastName":"Parker","firstName":"Peter","_id":"67e818cebe4ddee874a6e8f0","__v":0}

Let's see if the user was added:

curl localhost:8080/users
[{"_id":"67e818cebe4ddee874a6e8f0","lastName":"Parker","firstName":"Peter","__v":0}]

Much success.

Wrap Up

Docker Compose is a great way to run containers that depend on each other and to manage the Docker resources an application needs. Running an application in containers goes a long way toward ensuring it behaves the same in every environment. I hope this post helps you on your containerization journey.

Cheers!