Mar 27, 2020

Cloud Native Approach For Software Engineering. Part 2 - Containerization

A Concept of Unified Environments: If a Program Runs Locally, It Will Run in Production Too.
Vasuki Vardhan G
Tech Lead - II

The previous article in the series discussed what the Cloud Native approach is and the infrastructure behind it. Those concepts will serve as a foundation for the ones discussed in this article.

If you missed the previous part, read it here.

So, picking up from where I left off, we know what exactly cloud-native applications are. Let's focus our attention on a specific layer of the Cloud Native stack, namely the scheduling and orchestration layer, or to be even more specific, a component of this layer.

Welcome to the World of Containers.

You have probably heard about Dockerization or Containerization. Yes, that's what we will be talking about in this article. To understand containerization, we will go through a comparative case scenario between VMs and containers.

Figure 1 - Containerisation

Let's consider building a NodeJS application. In the VM realm, on the bare metal we have our OS and our Hypervisor (a software layer that acts as a bridge between the host and the VM), which take up a certain set of resources. So, in our traditional way, let's deploy our NodeJS application on a VM.

Figure 2 - Containerisation

As you can see, in the VM, when we deploy the app, we also deploy an OS known as the Guest OS along with the required libraries and the application itself. This makes the deployment very heavy; I have not seen a small NodeJS VM be less than 400MB. As per the cloud-native approach, this deployment must be scalable. So, let's try and horizontally scale this deployment.

Figure 3 - Containerisation

As we can see, when we horizontally scale the application, the server's resources are completely utilised and there is nothing left to work with anymore. Thus, we need a better way to do things.

That is where containerization comes into the picture.

Figure 4 - Containerisation

As evident in the above image, in the container realm, we have an OS and a Container Runtime (e.g. Docker Engine) on the bare-metal server, which takes up a certain set of resources. But running a container is not a straight-up process like spinning up a VM image and deploying code in it. It's a three-step process, as shown in the image. First, we create a Manifest describing our deployment; this can be your Dockerfile in the case of Docker or manifest.yaml in the case of Cloud Foundry.

A sample Dockerfile looks like this:
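For instance, a minimal Dockerfile for a NodeJS app might look like the sketch below (the base image tag, port, and entry file are illustrative):

```dockerfile
# Build on a prebuilt NodeJS base image
FROM node:12-alpine

# Work inside the app directory
WORKDIR /app

# Copy the application code and install its dependencies
COPY . .
RUN npm install

# Document the port the app listens on and define the start command
EXPOSE 3000
CMD ["node", "app.js"]
```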

Then, using this manifest, we create an image. This is a Docker image in the Docker realm, or an ACI (Application Container Image) in the case of Rocket (rkt). From the image, we get the actual container itself, which contains all the libraries and environment configs required to run your application. Basically, no matter whether your choice of solution is Docker, Rocket, or even Cloud Foundry, this process remains the same. Now, let's go ahead and deploy our JS container onto the server.
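With Docker, these three steps map onto commands like the following (the image name is illustrative):

```bash
# Manifest -> Image: build an image from the Dockerfile
docker build -t myapp:1.0 .

# Image -> Container: run a container from the image
docker run -d myapp:1.0
```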

Figure 5 - Containerisation

As you can see, the deployment doesn't have an OS in it but only the required libraries. This makes containers very lightweight, and they consume far fewer resources. So, as per the cloud-native approach, let us proceed to horizontally scale the deployment.

Figure 6 - Containerisation

Even after scaling, we have plenty of resources to play with. Let's say you need to integrate with a cognitive API like Google Image Recognition, with a Python wrapper built around it. In this case, if we are using VMs, we will be pushed towards having that Python wrapper and the NodeJS application in the same VM to manage resources. But if the demand is to create a composite service out of the Python wrapper, we would want to create a separate VM for that Python application.

Figure 7 - Containerisation

In our case, we don't have any resources left on the VM server. But if we club the Python and JS applications together, we will lose the ability to scale the JS or Python app up or down individually, which is not truly cloud-native. In order to adhere to the cloud-native infrastructure, we have to clear out certain resources on our server and then deploy the Python app. That basically means we have to bring down a JS instance and spin up a Python instance.

Now let's consider the same deployment in the container realm.

Figure 8 - Containerisation

In the container realm, we deployed the Python application alongside all the JS applications and still have the ability to scale every application independently, while still having a bit of resources left. And the best part is that if we have a bit of resource freely available, containers can share it between them for optimisation; even if a container is not using its resources, they'll be made available to the other containers on the server.

With Containerization, we can truly take advantage of the cloud native architecture.

We spoke about the portability of containers and the ease of scaling with them; overall, the three-step way of pushing a container onto the server allows us to have a much more agile DevOps function with continuous integration and delivery.

Demo

Now that we know about containerization, let's walk through a few scenarios with Docker as our containerization tool.

Prerequisites to follow this set of demos:

  • Docker CE
  • Docker Hub Account
  • Text/Code Editor (I personally prefer VSCode, but any editor you are comfortable with is good to go)
  • NodeJS

We will be going through two scenarios:

Scenario 1: Containerizing a React Application with NGINX as our server

This is what our React project will look like:

Project Structure

To follow along with this post, you can just create a basic React app by running create-react-app.
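For example (reactdemo is an illustrative app name):

```bash
npx create-react-app reactdemo
cd reactdemo
```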

Next, we create a folder named nginx and inside it, a file named default.conf. This file holds the configuration required for nginx to host our React build.

This file specifies the port to listen on (port 80 in our case), the root directory that the server has to serve from, and the server-side compression of the payload before it is transferred.
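A default.conf along those lines might look like this sketch (the root path and gzip settings are illustrative):

```nginx
server {
    # The port to listen on
    listen 80;

    location / {
        # The root directory the server serves from
        root /usr/share/nginx/html;
        index index.html;
        # Fall back to index.html so client-side routing keeps working
        try_files $uri $uri/ /index.html;
    }

    # Server-side compression of the payload before transfer
    gzip on;
    gzip_types text/plain text/css application/javascript application/json;
}
```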

The next step is to create a Dockerfile for your project. This Dockerfile describes your container.
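A multi-stage Dockerfile matching the description that follows might look like this sketch (the image tags and paths are illustrative):

```dockerfile
# Stage 1: the builder container, based on Alpine Linux with node 11
FROM node:11-alpine as builder
WORKDIR /app

# Copy the dependency manifests first so node_modules lives in its own layer
COPY package.json package-lock.json ./
RUN npm install

# Copy the application code and create a production-optimized build
COPY . .
RUN npm run build

# Stage 2: the delivery container, based on Alpine Linux with only nginx
FROM nginx:alpine

# Copy our nginx configuration and remove the default website
COPY nginx/default.conf /etc/nginx/conf.d/default.conf
RUN rm -rf /usr/share/nginx/html/*

# Copy the build artefacts from the builder container
COPY --from=builder /app/build /usr/share/nginx/html

# Start the nginx server in the foreground
CMD ["nginx", "-g", "daemon off;"]
```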

In the above example, we describe 2 containers for our React application, where the Stage 1 container is used for building and the Stage 2 container is used for serving.

In Stage 1, the container is built from a prebuilt Alpine Linux image with node 11 already installed in it. First, we set the working directory, copy package.json and package-lock.json into the container, and run npm install to install all the required node_modules. Storing the node_modules in a separate layer prevents unnecessary installs on every build. Then we copy all our application code into the container and run npm run build to create a production-optimized build of our React app.

Stage 2 is the container that will be handling our delivery. It is built from a prebuilt Alpine Linux image with only nginx in it (mind that this container does not have anything to do with NodeJS). First, we copy our nginx configuration into the container and remove the default nginx website. Next, we copy all the build artefacts from our builder container into this container, and then we issue the command to start the nginx server.

To build this with Docker, navigate to your project root and run:
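Something like the following, where {dockerUsername} stands for your DockerHub username and reactdemo is an illustrative image name:

```bash
# Build the image and tag it with the -t option
docker build -t {dockerUsername}/reactdemo .
```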

This will build your image and, with the -t option provided, tag it with the given name. Prefixing the tag with your DockerHub username is very important if you want to push the image to DockerHub. So, to push it to DockerHub, we run:
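Along these lines:

```bash
# Log in to DockerHub, then push the tagged image
docker login
docker push {dockerUsername}/reactdemo
```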

docker login just logs you into DockerHub, and docker push pushes the image to your DockerHub account. Next, we check whether that worked with the following command:
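Listing the local images should do it:

```bash
docker images
```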

You should see {dockerUsername}/reactdemo listed.

Now, to run this image basically anywhere, just run the command:
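A run command matching the port binding described below would be:

```bash
# Run detached, binding host port 8080 to container port 80
docker run -d -p 8080:80 {dockerUsername}/reactdemo
```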

Here, -d means the process will run detached as a daemon process, and -p gives the port binding. In this case, port 8080 of the host is bound to port 80 of the container. If you are running this on your local machine, you can just hit http://localhost:8080 and it'll point to your React app running in the container.

Scenario 2: Multi-Container with a NodeJS Backend and MongoDB

In this scenario, your project would look something like this:

Multi Container Project Structure

In the root, create a folder for your node application; inside that folder, run npm init and create a file app.js.
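A minimal app.js might look like this sketch; the Express and Mongoose setup, the /items route, and the db hostname are illustrative assumptions (db matches the service we define later in docker-compose.yml):

```javascript
// app.js - a minimal Express service backed by MongoDB (illustrative sketch)
const express = require('express');
const mongoose = require('mongoose');

const app = express();
app.use(express.json());

// "db" is the hostname of the MongoDB service from docker-compose.yml;
// outside of Compose this host does not resolve, which causes the error below
mongoose.connect('mongodb://db:27017/demo', {
  useNewUrlParser: true,
  useUnifiedTopology: true,
});

// A simple model to demonstrate POST and GET
const Item = mongoose.model('Item', new mongoose.Schema({ name: String }));

// POST: create an item
app.post('/items', async (req, res) => {
  res.json(await Item.create(req.body));
});

// GET: list all items
app.get('/items', async (req, res) => {
  res.json(await Item.find());
});

app.listen(4007, () => console.log('Listening on port 4007'));
```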

Now, if you try to run this project, it will throw an error. This is because the DB is not defined, so let's add the DB in a containerized way.

We will first add a Dockerfile.
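A Dockerfile matching the description that follows might look like this (the working directory is an illustrative choice):

```dockerfile
# Build on a slim NodeJS LTS base image
FROM node:lts-slim

# Switch the working directory
WORKDIR /usr/src/app

# Copy the dependency manifests first so this layer is cached between builds
COPY package.json package-lock.json ./
RUN npm install

# Copy the source files into the container
COPY . .

# Expose the service port and start the app
EXPOSE 4007
CMD ["npm", "start"]
```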

This Dockerfile describes a container built on the node:lts-slim image. Then we switch the working directory, copy package.json and package-lock.json, and run npm install. This layering will prevent module installation on every build.

Then we copy the source files into the container and start it with npm start, while exposing port 4007.

In the root directory add a docker-compose.yml file.
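A docker-compose.yml matching that description might look like the sketch below; the folder name express and the image name expressdemo are illustrative assumptions:

```yaml
version: "3"
services:
  express:
    # Build from the directory containing our Dockerfile
    build: ./express
    # Tag the generated image
    image: expressdemo
    # Bind host port 4007 to container port 4007
    ports:
      - "4007:4007"
    # Link the express service to the db service
    links:
      - db
  db:
    # Use the prebuilt mongo image
    image: mongo
    ports:
      - "27017:27017"
```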

This file describes the composition of the 2 containers. We have defined 2 services here, namely express and db.

In express, we specify what to build. Here, we just specify the name of the directory containing the Dockerfile. Next, we provide the host-to-container port bindings, and we link the express service to the db service. The image key is used to tag the generated image.

In db, we use the image key to specify the prebuilt mongo image, and then we provide the port binding.

Now just run:
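From the root directory, where docker-compose.yml lives:

```bash
docker-compose up
```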

This will start the service along with the DB dependency.

Now you can hit your service at http://localhost:4007.

POST and GET requests against the running service
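For example, assuming the illustrative /items route from the app.js sketch above:

```bash
# POST a new item
curl -X POST -H "Content-Type: application/json" \
     -d '{"name":"demo"}' http://localhost:4007/items

# GET all items
curl http://localhost:4007/items
```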

I hope this gives a clear idea of how containerization can be implemented in two different scenarios. With containers, it is quite easy to unify our development and production environments while increasing portability.

That's it, guys. I hope you liked the article and learned something new today. 

See you soon.
